\section{Introduction} A quantitative prediction of fluid flow, sound propagation, or chemical transport in strongly correlated disordered media, such as sedimentary rock, frequently employs representative microscopic models of the microstructure as input. A large number of microscopic models have been proposed over the years to represent the microstructure of porous media \cite{fat56a,sch74,CD77,zim82,RS85,JA87,SK87,oxa91,adl92,BT93,sah95,FJ95,JC96,AAMHSS97,OBA98}. Microscopic models do not reproduce the exact microstructure of the medium at hand, but are based on the idea that the experimental sample is a representative realization drawn from a statistical ensemble of similar microstructures. Hence it is necessary to have methods for distinguishing microstructures quantitatively \cite{hil95d,hil96g,hil92f,sic97}. This is particularly important for attempts to generate porous microstructures in an automatic computerized process \cite{qui84,adl92,YT98a,YT98b,BT93}. Despite the generality of the problem sketched above, our discussion will be focused on fluid flow through sedimentary rocks. In particular, we will discuss Fontainebleau sandstone. This model system has (together with Berea sandstone) acquired the status of a reference standard for modeling and analysing sedimentary rocks \cite{BZ85,BCZ87,adl92,YT98a,TSA93}. General geometric characterization methods traditionally include porosities, specific surface areas, and sometimes correlation functions \cite{sch74,adl92,ste85,dul92,BO97}. Recently a more refined, quantitative characterization for general stochastic microstructures was introduced, based on local porosity theory (LPT) \cite{hil92a,hil95d,hil91d,hil92b,hil92f,hil93b,hil93c,hil94b,hil98a}. LPT is currently the most general geometric characterization method because it contains the characterization through correlation functions as a special case (see \cite{hil95d} for details). Local porosity theory is used in this paper to distinguish quantitatively various models for Fontainebleau sandstone. More precisely, the objective of this work is to give a quantitative comparison of four microstructures. One of them is an experimental sample of Fontainebleau sandstone, while three of the microstructures are synthetic samples from computer simulation models for Fontainebleau sandstone. One of the models is a sedimentation and diagenesis model that tries to mimic the formation of sandstone through deposition and cementation of spherical grains. Two purely stochastic models generate random realizations of microstructures with prescribed porosity and correlation function. The first of these is based on Fourier space filtering of Gaussian random fields, and the second is based on a simulated annealing algorithm. In section \ref{lpt} we introduce and define the geometrical quantities that will be used to distinguish the microstructures. In section \ref{samples} we present the four microstructures, their generation, and their characterization in terms of the generation procedure. In section \ref{results} we present the results and discuss the differences between the four microstructures. \section{Measured Quantities} \label{lpt} \subsection{Porosity and Correlation Functions} Consider a rock sample occupying a subset $\SSS\subset\RRR^d$ of the physical space ($d=3$ in the following). The sample $\SSS$ consists of two disjoint subsets $\SSS=\PPP\cup\MMM$ with $\PPP\cap\MMM=\emptyset$, where $\PPP$ is the pore space, $\MMM$ is the rock or mineral matrix, and $\emptyset$ is the empty set.
The porosity $\phi(\SSS)$ of such a two-component porous medium is defined as the ratio $\phi(\SSS) = V(\PPP)/V(\SSS)$, which gives the volume fraction of pore space. Here $V(\PPP)$ denotes the volume of the pore space, and $V(\SSS)$ is the total sample volume. For the sample data analysed here, the set $\SSS$ is a rectangular parallelepiped whose sidelengths are $M_1,M_2$ and $M_3$ in units of the lattice constant $a$ (resolution) of a simple cubic lattice. Thus the sample is represented in practice as the subset $\SSS = [1,M_1]\times[1,M_2]\times[1,M_3]\subset \ZZZZ^3$ of an infinite cubic lattice. $\ZZZZ$ denotes the set of integers, and $[1,M_i]\subset\ZZZZ$ are intervals. The position vectors ${\bf r}_i={\bf r}_{i_1...i_d}=(a i_1,...,a i_d)$ with integers $1\leq i_j \leq M_j$ are used to label the lattice points, and ${\bf r}_i$ is a shorthand notation for ${\bf r}_{i_1...i_d}$. A configuration (or microstructure) ${\cal Z}$ of a $2$-component medium is then given as \begin{equation} {\cal Z}=(Z_1,...,Z_N)=(\ch{\PPP}({\bf r}_1),...,\ch{\PPP}({\bf r}_N)) \end{equation} where $N=M_1M_2M_3$, and \begin{equation} \ch{\GGG}({\bf r}) = \left\{ \begin{array}{r@{\quad:\quad}l} 1 & \mbox{for}\quad {\bf r}\in\GGG \\ 0 & \mbox{for}\quad {\bf r}\notin\GGG \end{array} \right . \label{charfunc} \end{equation} is the characteristic (or indicator) function of a set $\GGG$ that indicates when a point is inside or outside of $\GGG$. A stochastic medium is defined through the discrete probability density \begin{equation} p(z_1,...,z_N) = \mbox{Prob}\{(Z_1=z_1)\wedge...\wedge(Z_N=z_N)\} \end{equation} where $z_i\in\{0,1\}$. Expectation values of functions $f({\cal Z})=f(z_1,...,z_N)$ are defined as \begin{equation} \langle f(z_1,...,z_N) \rangle = \sum_{z_1=0}^1...\sum_{z_N=0}^1 f(z_1,...,z_N)p(z_1,...,z_N) \end{equation} where the summations run over all configurations. If the medium is statistically homogeneous (stationary) then the average porosity is given as \begin{equation} \gex{\phi}= \mbox{Prob}\{{\bf r}_0\in\PPP\} = \gex{\ch{\PPP}({\bf r}_0)} \label{avgporosity} \end{equation} where ${\bf r}_0$ is an arbitrary lattice site. If the medium is also ergodic then the limit \begin{equation} \lim_{N\rightarrow \infty}\phi(\SSS)= \gex{\phi} \end{equation} exists. There are, however, many subtleties associated with this limit (see \cite{hil95d} for details). Finally, we define the correlation function for a homogeneous medium as the expectation \begin{equation} \label{eq:g_gen} G({\bf r}_0,{\bf r}) = G({\bf r}-{\bf r}_0) = \frac{\gex{\ch{\PPP}({\bf r}_0)\ch{\PPP}({\bf r})}-\gex{\phi}^2} {\gex{\phi}(1-\gex{\phi})} . \end{equation} If the medium is also isotropic, then $G({\bf r})=G(|{\bf r}|)=G(r)$. Obviously $G(0)=1$ and $G(\infty)=0$. \subsection{Local Porosity Distributions} The basic idea of local porosity theory is to measure geometric observables within a bounded (compact) subset of the porous medium and to collect these measurements into various histograms. Let $\KKK({\bf r},L)$ denote a cube of sidelength $L$ centered at the lattice vector ${\bf r}$. The set $\KKK({\bf r},L)$ defines a measurement cell inside of which local geometric properties such as porosity or specific internal surface are measured \cite{hil91d}. The local porosity in this measurement cell $\KKK({\bf r},L)$ is defined as \begin{equation} \phi({\bf r},L)=\frac{V(\PPP\cap\KKK({\bf r},L))}{V(\KKK({\bf r},L))} \label{lpd1} \end{equation} where $V(\GGG)$ is the volume of the set $\GGG\subset\RRR^d$.
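For illustration, the porosity, the directional correlation function and the local porosity of a single measurement cell can be estimated from a voxelized sample as in the following sketch (a minimal Python/NumPy illustration of the definitions above; the boolean array \texttt{pore} marking pore voxels and the cell parameters are hypothetical inputs, and ensemble averages are replaced by volume averages, as is justified for homogeneous and ergodic media):
\begin{verbatim}
import numpy as np

def porosity(pore):
    """Volume fraction of pore space, phi(S)."""
    return pore.mean()

def correlation(pore, rmax, axis=0):
    """Correlation function G(r) along one coordinate axis,
    estimated by shifting the pore indicator against itself."""
    phi = pore.mean()
    x = pore.astype(float)
    G = np.empty(rmax + 1)
    for r in range(rmax + 1):
        a = np.take(x, range(0, x.shape[axis] - r), axis=axis)
        b = np.take(x, range(r, x.shape[axis]), axis=axis)
        G[r] = ((a * b).mean() - phi ** 2) / (phi * (1.0 - phi))
    return G  # G[0] = 1 by construction

def local_porosity(pore, corner, L):
    """phi(r, L): pore fraction inside the cubic cell K(r, L),
    here addressed by its lower corner."""
    i, j, k = corner
    return pore[i:i + L, j:j + L, k:k + L].mean()
\end{verbatim}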
The local porosity distribution $\mu(\phi,L)$ is defined as \begin{equation} \mu(\phi,L) = \frac{1}{m}\sum_{\bf r}\delta(\phi-\phi({\bf r},L)) \label{lpd2} \end{equation} where $m$ is the number of placements of the measurement cell $\KKK({\bf r},L)$. Ideally all measurement cells should be disjoint \cite{hil91d}, but in practice this cannot be achieved because of poor statistics. The results presented below are obtained by placing $\KKK({\bf r},L)$ on all lattice sites ${\bf r}$ which are at least a distance $L/2$ from the boundary of $\SSS$, and hence in the following \begin{equation} m = \prod^3_{i=1}(M_i-L+1) \label{lpd3} \end{equation} will be used. $\mu(\phi,L)$ is the empirical probability density function (histogram) of local porosities. Its support is the unit interval. In the following we denote averages with respect to $\mu(\phi,L)$ by an overline. Thus for a homogeneous and ergodic medium \begin{equation} \overline{\phi}(L) = \int_0^1 \phi\mu(\phi,L)\;d\phi = \gex{\phi} \end{equation} is the expected local porosity. In practice deviations from the last equality may occur if the measurement cells are overlapping. Figure \ref{alp} below shows the average local porosity as a function of $L$ for all four samples analyzed in this paper, showing that the deviations can be as large as 0.5 percent. The deviations may be partly intrinsic and partly due to oversampling of the central regions by the overlapping measurement cells. Similarly the variance of local porosities is found as \cite{hil95d} \begin{eqnarray} \sigma^2 (L) = \overline{(\phi(L)-\overline{\phi}(L))^2} & = & \int_0^1[\phi-\overline{\phi}(L)]^2\mu(\phi,L)\;d\phi \nonumber\\ & = & \frac{1}{L^3}\gex{\phi}(1-\gex{\phi})\left(1+ \frac{2}{L^3}\sum_{{{\bf r}_i,{\bf r}_j\in\KKK({\bf r}_0,L)}\atop{i\neq j}} G({\bf r}_i-{\bf r}_j)\right) \label{variance} \end{eqnarray} where $\KKK({\bf r}_0,L)$ is any cubic measurement cell. It is simple to determine $\mu(\phi,L)$ in the limits $L\rightarrow 0$ and $L\rightarrow \infty$ of small and large measurement cells. For small cells one finds generally \cite{hil91d,hil95d} \begin{equation} \mu(\phi,L=0) = \phi(\SSS)\delta(\phi-1)+(1-\phi(\SSS))\delta(\phi) \label{lpd4} \end{equation} where $\phi(\SSS)$ is the sample porosity. If the sample is macroscopically homogeneous and ergodic then one expects \begin{equation} \mu(\phi,L\rightarrow\infty) = \delta(\phi-\phi(\SSS)) \label{lpd5} \end{equation} indicating that in both limits the geometrical information contained in $\mu(\phi,L)$ consists of the single number $\phi(\SSS)$. The macroscopic limit, however, involves the question of macroscopic heterogeneity versus macroscopic homogeneity (for more information see \cite{hil95d}). In any case, if eqs. (\ref{lpd4}) and (\ref{lpd5}) hold, it follows that there exists a special length scale $L^*$ defined as \begin{equation} L^*=\min\{L : \mu(0,L)=\mu(1,L)=0\} \label{lpd6} \end{equation} at which the $\delta$-distributions at $\phi=0$ and $\phi=1$ both vanish for the first time. \subsection{Local Percolation Probabilities} The local percolation probabilities characterize the connectivity of measurement cells of a given local porosity. Let \begin{equation} \Lambda_\alpha({\bf r},L)=\left\{ \begin{array}{r@{\quad:\quad}l} 1 & {\rm if~}\KKK({\bf r},L){\rm ~percolates~in~``}\alpha\mbox{\rm ''-direction}\\[6pt] 0 & {\rm otherwise} \end{array} \right. \label{lpp1} \end{equation} be an indicator for percolation. What is meant by ``$\alpha$''-direction is summarized in Table~\ref{lpp}.
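In practice the indicator $\Lambda_\alpha$ can be evaluated by a connected-component labeling of the pore space inside the cell: the cell percolates in a given direction if some pore cluster touches both opposite faces. The following sketch (assuming \texttt{scipy} and 6-connectivity between pore voxels, which is our choice for this illustration) implements the directions listed in Table~\ref{lpp}; the precise meaning of percolation in each direction is spelled out next.
\begin{verbatim}
import numpy as np
from scipy import ndimage

def percolates(cell, axis):
    """Lambda for one coordinate direction: True if a pore cluster
    of the cubic cell connects the two faces normal to `axis`."""
    labels, _ = ndimage.label(cell)          # 6-connected pore clusters
    entry = np.take(labels, 0, axis=axis)    # one face
    exit_ = np.take(labels, -1, axis=axis)   # opposite face
    common = np.intersect1d(entry, exit_)
    return bool((common > 0).any())          # label 0 is the matrix

def Lambda(cell):
    """Indicators for alpha = x, y, z, 3 (all) and c (any)."""
    x, y, z = (percolates(cell, a) for a in range(3))
    return {"x": x, "y": y, "z": z,
            "3": x and y and z, "c": x or y or z}
\end{verbatim}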
A cell $\KKK({\bf r},L)$ is called ``percolating in the $x$-direction'' if there exists a path inside the set $\PPP\cap\KKK({\bf r},L)$ connecting those two faces of $\KKK({\bf r},L)$ that are perpendicular to the $x$-axis. Similarly for the other directions. Thus $\Lambda_3=1$ indicates that the cell can be traversed along all 3 directions, while $\Lambda_c=1$ indicates that there exists at least one direction along which the cell is percolating. The local percolation probability in the ``$\alpha$''-direction is now defined through \begin{equation} \lambda_\alpha(\phi,L) = \frac{ \sum_{\bf r} \Lambda_\alpha({\bf r},L)\delta_{\phi\phi({\bf r},L)}} {\sum_{\bf r}\delta_{\phi\phi({\bf r},L)}} \label{lpp2} \end{equation} where $\delta_{\phi\phi({\bf r},L)}=1$ if $\phi=\phi({\bf r},L)$ and $0$ otherwise. The local percolation probability $\lambda_\alpha(\phi,L)$ gives the fraction of measurement cells of sidelength $L$ with local porosity $\phi$ that are percolating in the ``$\alpha$''-direction. \subsection{Total Fraction of Percolating Cells} The total fraction of all cells percolating along the ``$\alpha$''-direction is given by integration over all local porosities as \begin{equation} p_\alpha(L)=\int_0^1\mu(\phi,L)\lambda_\alpha(\phi,L)\;d\phi . \label{pL1} \end{equation} This quantity provides an important characteristic for constructing equivalent network models. It gives the fraction of network elements (bonds, sites, etc.) which have to be permeable in an equivalent network. \section{Description of Microstructures} \label{samples} \subsection{Experimental Sample of Fontainebleau Sandstone} The experimental sample is a three-dimensional microtomographic image of Fontainebleau sandstone. This sandstone is a popular reference standard because of its exceptional chemical, crystallographic and microstructural simplicity \cite{BZ85,BCZ87}. Fontainebleau sandstone consists of monocrystalline quartz grains that have been eroded for long periods before being deposited in dunes along the shore during the Oligocene, i.e. roughly 30 million years ago. It is well sorted, containing grains of around $200\mu$m in diameter. During its geological evolution, which is still not fully understood, the sand was cemented by silica crystallizing around the grains. Fontainebleau sandstone exhibits intergranular porosity ranging from $0.03$ to roughly $0.3$ \cite{BCZ87}. The computer-assisted microtomography was carried out on a micro-plug drilled from a larger original core. The original core from which the micro-plug was taken had a porosity of $0.1484$, a permeability of $1.3$~D and a formation factor of $22.1$. The porosity $\phi(\SSS_{\sf EX})$ of our microtomographic data set is only 0.1355 (see Table \ref{ovrvw}). The difference between the porosity of the original core and that of the final data set is due to the heterogeneity of the sandstone and to the difference in sample size. The experimental sample is referred to as {\sf EX} in the following. The pore space of the experimental sample is visualized in Figure \ref{smplA}. \subsection{Sedimentation, Compaction and Diagenesis Model} The sedimentation, compaction and diagenesis model, abbreviated as {\sf DM}~ in the following, is obtained by numerically modelling the main geological sandstone-forming processes \cite{BO97,OBA98}.
Image analysis of backscattered electron/cathodo-luminescence images of thin sections provides input data such as porosity, grain size distribution, a visual estimate of the degree of compaction, the amount of quartz cement, and clay content and texture. The sandstone modelling is carried out in three main steps: grain sedimentation, compaction and diagenesis. Here we give only a rough sketch of the algorithms and refer the reader to \cite{BO97,OBA98} for a detailed description. Grain sedimentation commences with image analysis of thin sections. The grain size distribution is measured using an erosion-dilation algorithm. Spherical grains with random diameters chosen from the grain size distribution are dropped onto the grain bed and relaxed into a potential energy minimum. The sedimentation environment may be low-energy (local minimum) or high-energy (global minimum). Compaction reduces the bulk volume (and porosity) in response to vertical stress from the overburden. It is modelled here as a linear process in which every sand grain is shifted vertically downwards by an amount proportional to its original vertical position. The proportionality constant is called the compaction factor. Its value for our Fontainebleau sandstone is estimated to be $0.1$ from thin section analysis. In the diagenesis part only a subset of the known diagenetic processes is simulated, namely quartz cement overgrowth and precipitation of authigenic clay on the free surface. Quartz cement overgrowth is modeled by radially enlarging each grain. If $R_0$ denotes the radius of the originally deposited spherical grain, its new radius along the direction ${\bf r}$ from the grain center is taken to be \cite{SK87,OBA98} \begin{equation} R({\bf r}) = R_0 + \min(a\ell({\bf r})^\gamma,\ell({\bf r})) \end{equation} where $\ell({\bf r})$ is the distance between the surface of the original spherical grain and the surface of its Voronoi polyhedron along the direction ${\bf r}$. The constant $a$ controls the amount of cement, and the growth exponent $\gamma$ controls the type of cement overgrowth. For $\gamma>0$ the cement grows preferentially into the pore bodies, for $\gamma=0$ it grows concentrically, and for $\gamma<0$ quartz cement grows towards the pore throats \cite{OBA98}. Authigenic clay growth is simulated by precipitating clay voxels on the free mineral surface. The clay texture may be pore-lining or pore-filling or a combination of the two. For modeling the Fontainebleau sandstone we used a compaction factor of $0.1$, and the cementation parameters $\gamma=-0.6$ and $a=2.9157$. The resulting configuration of our sample {\sf DM}~ is displayed in Figure \ref{smplB}. \subsection{Gaussian Field Reconstruction Model} \label{GF} The Gaussian field ({\sf GF}) reconstruction model provides a random pore space configuration in such a way that its correlation function $G_{\sf GF}({\bf r})$ equals a prescribed reference correlation function $G_0({\bf r})$. In our case $G_0({\bf r})=G_{\sf EX}({\bf r})$, the correlation function of the experimental sample described above. The method of Gaussian field reconstruction is well documented in the literature \cite{qui84,AJQ90,adl92,YFKTA93}, and we shall only make a few remarks that the reader may find of interest when implementing the method.
Given the reference correlation function $G_{\sf EX}({\bf r})$ and porosity $\phi(\SSS_{\sf EX})$ of the experimental sample, the three main steps of constructing the sample $\SSS_{\sf GF}$ with correlation function $G_{\sf GF}({\bf r})=G_{\sf EX}({\bf r})$ are as follows: \begin{enumerate} \item A standard Gaussian field $X({\bf r})$ is generated which consists of statistically independent Gaussian random variables $X\in\RRR$ at each lattice point ${\bf r}$. \item \label{l2} The field $X({\bf r})$ is first passed through a linear filter which produces a correlated Gaussian field $Y({\bf r})$ with zero mean and unit variance. The reference correlation function $G_{\sf EX}({\bf r})$ and porosity $\phi(\SSS_{\sf EX})$ enter into the mathematical construction of this linear filter. \item The correlated field $Y({\bf r})$ is then passed through a nonlinear discretization filter which produces the reconstructed sample $\SSS_{\sf GF}$. \end{enumerate} Details of these three main steps are documented in Refs. \cite{qui84,AJQ90}. However, in these traditional methods the process described in step \ref{l2} is computationally difficult because it requires the solution of a very large set of non-linear equations. We have followed an alternative, computationally more efficient method proposed in Ref. \cite{adl92}, which uses Fourier transforms. For the sake of completeness we briefly describe it here; later we shall discuss some of the difficulties experienced while implementing it. In the Fourier transform method the linear filter in step \ref{l2} is defined in Fourier space through \begin{equation} Y({\bf k}) = \alpha (G_Y({\bf k}))^{\frac{1}{2}}X({\bf k}), \end{equation} where $M=M_1=M_2=M_3$ is the sidelength of a cubic sample, $\alpha = M^{\frac{d}{2}}$ is the normalisation factor, and \begin{equation} X({\bf k}) = \frac{1}{M^d}\sum_{{\bf r}} X({\bf r})e^{2\pi i{\bf k}\cdot{\bf r}} \end{equation} denotes the Fourier transform of $X({\bf r})$. Similarly $Y({\bf k})$ is the Fourier transform of $Y({\bf r})$, and $G_Y({\bf k})$ is the Fourier transform of the correlation function $G_Y({\bf r})$. $G_Y({\bf r})$ has to be computed by an inverse process from the correlation function $G_{\sf EX}({\bf r})$ and porosity of the experimental reference (details in \cite{adl92}). It is important to note that the Gaussian field reconstruction requires a large separation of length scales, $\xi_{\sf EX}\ll N^{1/d}$, where $\xi_{\sf EX}$ is the correlation length of the experimental reference and $N=M_1M_2M_3$ is the number of sites. $\xi_{\sf EX}$ is defined as the length such that $G_{\sf EX}(r)\approx 0$ for $r>\xi_{\sf EX}$. If the condition $\xi_{\sf EX}\ll N^{1/d}$ is violated, then step \ref{l2} of the reconstruction fails in the sense that the correlated Gaussian field $Y({\bf r})$ does not have zero mean and unit variance. In such a situation the filter $G_Y({\bf k})$ will differ from the Fourier transform of the correlation function of $Y({\bf r})$. It is also difficult to calculate $G_Y(r)$ accurately near $r=0$ \cite{adl92}. This leads to a discrepancy at small $r$ between $G_{\sf GF}(r)$ and $G_{\sf EX}(r)$. The problem can be overcome by choosing a large $M$, as we verified in $d=1$ and $d=2$. However, in $d=3$ a very large $M$ also demands a prohibitively large amount of memory. In earlier work \cite{AJQ90,adl92} the correlation function $G_{\sf EX}({\bf r})$ was sampled down to a lower resolution, and the reconstruction algorithm then proceeded with such a rescaled correlation function.
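The following sketch illustrates the Fourier transform method for steps \ref{l2} and 3 (a simplified Python/NumPy illustration; it assumes the filter spectrum, i.e., the Fourier transform of $G_Y({\bf r})$, has already been obtained by the inverse process mentioned above as a nonnegative, symmetric array on the Fourier grid, and it enforces zero mean and unit variance explicitly rather than through the normalisation factor $\alpha$):
\begin{verbatim}
import numpy as np

def gaussian_field_reconstruction(GY_k, phi):
    """White noise -> correlated Gaussian field -> binary sample.
    GY_k : nonnegative filter spectrum on the M x M x M Fourier grid
    phi  : target porosity used in the final level cut"""
    M = GY_k.shape[0]
    X = np.random.standard_normal((M, M, M))        # step 1
    Yk = np.sqrt(np.maximum(GY_k, 0.0)) * np.fft.fftn(X)
    Y = np.real(np.fft.ifftn(Yk))                   # step 2: linear filter
    Y = (Y - Y.mean()) / Y.std()                    # zero mean, unit variance
    return Y <= np.quantile(Y, phi)                 # step 3: True = pore voxel
\end{verbatim}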
Such subsampling leads to a reconstructed sample $\SSS_{\sf GF}$ which also has a lower resolution. Such reconstructions have a lower average connectivity compared to the original model \cite{hil98d}. Because we intend a quantitative comparison with the microstructure of $\SSS_{\sf EX}$, it is necessary to retain the same level of resolution. Hence we use throughout this article the original correlation function $G_{\sf EX}({\bf r})$ without subsampling. Because $G_{\sf EX}(r)$ is nearly $0$ for $r>30a$, we have truncated $G_{\sf EX}(r)$ at $r=30a$ to save computer time. The final configuration $\SSS_{\sf GF}$ with $M=256$ generated by the Gaussian field reconstruction that is used in the comparison to experiment is displayed in Figure \ref{smplC}. \subsection{Simulated Annealing Reconstruction Model} The simulated annealing ({\sf SA}) reconstruction model is a second method to generate a three-dimensional random microstructure with prescribed porosity and correlation function. A simplified implementation was recently discussed in Ref. \cite{YT98a}, and we follow their algorithm here. The method generates a configuration $\SSS_{\sf SA}$ by minimizing the deviations between $G_{\sf SA}({\bf r})$ and a predefined reference function $G_0({\bf r})$. Of course in our case we have again the Fontainebleau sandstone as reference, i.e. $G_0({\bf r})=G_{\sf EX}({\bf r})$. The reconstruction is performed on a cubic lattice with sidelength $M=M_1=M_2=M_3$ and lattice spacing $a$. The lattice is initialized randomly with $0$'s and $1$'s such that the volume fraction of $0$'s equals $\phi(\SSS_{\sf EX})$. This porosity is preserved throughout the simulation. For the sake of numerical efficiency the autocorrelation function is evaluated in a simplified form using \cite{YT98a} \begin{eqnarray} \widetilde{G}_{\sf SA}(r)\left(\widetilde{G}_{\sf SA}(0)- \widetilde{G}_{\sf SA}(0)^2\right)+\widetilde{G}_{\sf SA}(0)^2 & = & \nonumber \\ \label{eq:g_r} & & \frac{1}{3M^3} \sum_{\mathbf r} \ch{\MMM}({\mathbf r}) \left( \ch{\MMM}({\mathbf r}+r{\mathbf e}_1) + \ch{\MMM}({\mathbf r}+r{\mathbf e}_2) + \ch{\MMM}({\mathbf r}+r{\mathbf e}_3) \right) \end{eqnarray} where ${\mathbf e}_i$ are the unit vectors in the direction of the coordinate axes, $r=0,\dots,\frac{M}{2}-1$, and where a tilde $\widetilde{~}$ is used to indicate the directional restriction. The sum $\sum_{\mathbf r}$ runs over all $M^3$ lattice sites ${\bf r}$ with periodic boundary conditions, i.e. $r_i+r$ is evaluated modulo $M$. We now perform a simulated annealing algorithm to minimize the ``energy'' function \begin{equation} E = \sum_r\left(\widetilde{G}_{\sf SA}(r)-G_{\sf EX}(r)\right)^2 , \end{equation} defined as the sum of the squared deviations of $\widetilde{G}_{\sf SA}$ from the experimental correlation function $G_{\sf EX}$. Each update starts with the exchange of two pixels, one from pore space, one from matrix space. Let $n$ denote the number of the proposed update step. Introducing an acceptance parameter $T_n$, which may be interpreted as an $n$-dependent temperature, the proposed configuration is accepted with probability \begin{equation} p = \min\left(1,\exp\left(-\frac{E_n-E_{n-1}}{T_nE_{n-1}}\right)\right) . \end{equation} Here the energy and the correlation function of the configuration after the $n$th update are denoted by $E_n$ and $\widetilde{G}_{{\sf SA},n}$, respectively. The evaluation of $\widetilde{G}_{{\sf SA},n}$ does not require a complete recalculation.
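A compact sketch of the annealing loop just described is given below (illustrative Python only: for brevity the directional correlation is recomputed from scratch in every step, whereas an efficient implementation updates it incrementally, as explained next; the cooling schedule and the stopping rule anticipate the values given below):
\begin{verbatim}
import numpy as np

def directional_corr(m, rmax):
    """Axis-restricted autocorrelation of the matrix indicator `m`
    (periodic boundaries), rescaled so that its value at r=0 is 1
    and it can be compared directly with the normalized G_EX."""
    x = m.astype(float)
    phi = x.mean()
    raw = np.array([np.mean([(x * np.roll(x, -r, axis=a)).mean()
                             for a in range(3)]) for r in range(rmax)])
    return (raw - phi ** 2) / (phi - phi ** 2)

def anneal(m, G_ref, max_updates=10 ** 6, stop_after=20000):
    """Pixel-exchange annealing; porosity is conserved by construction
    because every update swaps one pore voxel with one matrix voxel."""
    rng = np.random.default_rng()
    pore, solid = np.argwhere(m == 0), np.argwhere(m == 1)
    E = np.sum((directional_corr(m, len(G_ref)) - G_ref) ** 2)
    rejected = 0
    for n in range(1, max_updates + 1):
        T = np.exp(-n / 1e5)                   # cooling schedule (see below)
        i, j = rng.integers(len(pore)), rng.integers(len(solid))
        a, b = tuple(pore[i]), tuple(solid[j])
        m[a], m[b] = 1, 0                      # propose a pixel exchange
        E_new = np.sum((directional_corr(m, len(G_ref)) - G_ref) ** 2)
        dE = E_new - E
        if dE <= 0 or rng.random() < np.exp(-dE / (T * E)):
            E, pore[i], solid[j] = E_new, b, a  # accept the exchange
            rejected = 0
        else:
            m[a], m[b] = 0, 1                  # reject: restore configuration
            rejected += 1
            if rejected >= stop_after:         # stopping rule (see below)
                break
    return m
\end{verbatim}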
It suffices to update the correlation function $\widetilde{G}_{{\sf SA},n-1}$ of the previous configuration by adding or subtracting those products in (\ref{eq:g_r}) which changed due to the exchange of pixels. If the proposed move is rejected, the old configuration is restored. The generation of a configuration with correlation $G_{\sf EX}$ is achieved by lowering $T$. At low $T$ the system approaches a configuration that minimizes the energy function. In our simulations we lower $T_n$ with $n$ as \begin{equation} T_n = \exp\left(-\frac{n}{100000}\right) . \end{equation} We stop the simulation when 20000 consecutive updates are rejected. In our simulation this happened after $2.5\times 10^8$ updates ($\approx 15$ steps per site). The resulting configuration $\SSS_{\sf SA}$ for the simulated annealing reconstruction is displayed in Figure \ref{smplD}. Our definition of the correlation function in (\ref{eq:g_r}) deserves some comment. A complete evaluation of the correlation function as defined in (\ref{eq:g_gen}) is numerically so expensive that the algorithm would be too slow to allow three-dimensional reconstructions within a reasonable time. Therefore, to increase the speed of the algorithm, the correlation function is only evaluated along the directions of the coordinate axes as indicated in (\ref{eq:g_r}). As a result of this simplification the reconstructed sample may cease to be isotropic. It will in general deviate from the reference correlation function in all directions other than those of the axes. In the special case of the correlation function of the Fontainebleau sandstone, however, this effect seems to be small (see below). This may serve as an a posteriori justification for using (\ref{eq:g_r}). \section{Results and Discussion} \label{results} We begin our presentation of the results with an analysis of traditional quantities such as porosities and correlation functions of the four samples. Then we proceed to a visual characterization of the three-dimensional images. Next we shall discuss local porosities and percolation probabilities, and finally we conclude with implications for transport properties. \subsection{Conventional Analysis} Table \ref{ovrvw} gives a synopsis of different properties of the four samples. The preparation of the various samples was described in detail in section \ref{samples}. The dimensions and porosities need no further comment. Samples {\sf GF}~ and {\sf SA}~ were constructed to have the same correlation function as sample {\sf EX}. This is indicated in the line labeled $G({\bf r})$. In Figure \ref{corrf} we plot the directionally averaged correlation functions $G(r)=(G(r,0,0)+G(0,r,0)+G(0,0,r))/3$ of the four samples, where $G(r_1,r_2,r_3)=G({\bf r})$. $G_{\sf DM}(r)$ differs clearly from the rest. Coincidentally, however, $G_{\sf DM}(0,0,r)\approx G_{\sf EX}(0,0,r)$. $G_{\sf GF}(r)$ differs from $G_{\sf EX}(r)$ for small $r$, as discussed in section \ref{GF} above. Remember also that by construction $G_{\sf GF}(r)$ is not expected to equal $G_{\sf EX}({\bf r})$ for $r$ larger than $30a$. The discrepancy at small $r$ reflects the quality of the linear filter, and it is also responsible for the differences in porosity and specific internal surface. Although the reconstruction method of sample $\SSS_{\sf SA}$ is intrinsically anisotropic, the correlation function of sample {\sf SA}~ agrees with that of sample {\sf EX}~ also in the diagonal directions. Sample $\SSS_{\sf DM}$ on the other hand has an anisotropic correlation function.
If two samples have the same correlation function, they are also expected to have the same specific internal surface as calculated from \begin{equation} S = \left.-4\gex{\phi}(1-\gex{\phi})\frac{dG(r)}{dr}\right|_{r=0} . \end{equation} The next line in Table \ref{ovrvw}, labeled $S$, gives the specific internal surfaces. If one defines a decay length by the first zero of the correlation function, then the decay length is roughly $18a$ for samples {\sf EX}, {\sf GF}~ and {\sf SA}. For sample {\sf DM}~ it is somewhat smaller, mainly in the $x$- and $y$-directions. The correlation length, which will be of the order of the decay length, is thus relatively large compared to the system size. Together with the fact that the percolation threshold for continuum systems is typically around $0.15$, this might explain why models {\sf GF}~ and {\sf SA}~ are connected in spite of their low porosity. In summary, the samples $\SSS_{\sf GF}$ and $\SSS_{\sf SA}$ were constructed to be indistinguishable from $\SSS_{\sf EX}$ with respect to porosity and correlations. The imperfection of the reconstruction method for sample {\sf GF}, however, accounts for the deviations of its correlation function at small $r$ from that of sample {\sf EX}. \subsection{Visual Inspection of Images} We now collect results from a visual comparison. Visual inspection of Figures \ref{smplA} through \ref{smplD} reveals that none of the models $\SSS_{\sf DM},\SSS_{\sf GF}$ or $\SSS_{\sf SA}$ closely resembles the experimental microstructure $\SSS_{\sf EX}$. This applies in particular to samples {\sf GF}~ and {\sf SA}, which were constructed to match the traditional geometrical characteristics of sample {\sf EX}, such as porosity, specific surface and correlation function. Figures \ref{smplA} through \ref{smplD} suggest that samples $\SSS_{\sf GF}$ and $\SSS_{\sf SA}$ have isolated islands of matrix space, although this cannot be seen directly because the pore space is rendered opaque. Isolated islands of matrix space cannot exist in a real porous medium such as sample {\sf EX}. They are also absent in the compaction and diagenesis model {\sf DM}. The comparison is indicated in the line labeled ``isolated $\MMM$'' in Table \ref{ovrvw}. The pore surfaces in samples {\sf GF}~ and {\sf SA}~ are much rougher than in samples {\sf EX}~ and {\sf DM}. Sample {\sf DM}~ appears visually more homogeneous than the other samples. Although there is no anisotropy visible for sample {\sf DM}~ in Figure \ref{smplB}, its connectivity properties will be found below to be strongly anisotropic. In summary, traditional characteristics such as porosity, specific surface and correlation functions are insufficient to distinguish different microstructures. Visual inspection of the pore space by the human eye indicates that samples {\sf GF}~ and {\sf SA}~ have a similar structure which, however, differs from the structure of sample {\sf EX}. Although sample {\sf DM}~ resembles sample {\sf EX}~ more closely with respect to surface roughness, it differs visibly in the shape of the grains. \subsection{Local Porosity Analysis} We turn to an analysis of the fluctuations in local porosities. The differences in visual appearance of the microstructures find a quantitative expression here. The local porosity distributions $\mu(\phi,20)$ of the four samples at $L=20a$ are displayed as the solid lines in Figures \ref{lppA} through \ref{lppD}. The ordinates for these curves are plotted on the right vertical axis.
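Such histograms can be tallied directly from the voxel data; the following sketch computes the local porosities of all $m$ placements of the cell at once (the summed-area trick and the binning parameters are our own choices for this illustration):
\begin{verbatim}
import numpy as np

def local_porosity_distribution(pore, L, bins=50):
    """mu(phi, L): histogram of local porosities over all
    m = prod(M_i - L + 1) placements of the cell K(r, L)."""
    c = pore.astype(float).cumsum(0).cumsum(1).cumsum(2)
    c = np.pad(c, ((1, 0), (1, 0), (1, 0)))  # prepend a layer of zeros
    # pore volume of every L-cell by 3-d inclusion-exclusion
    vol = (c[L:, L:, L:] - c[:-L, L:, L:] - c[L:, :-L, L:] - c[L:, L:, :-L]
           + c[:-L, :-L, L:] + c[:-L, L:, :-L] + c[L:, :-L, :-L]
           - c[:-L, :-L, :-L])
    phis = vol.ravel() / L ** 3  # one local porosity per placement
    mu, edges = np.histogram(phis, bins=bins, range=(0.0, 1.0), density=True)
    return mu, edges
\end{verbatim}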
The figures show that the original sample exhibits stronger porosity fluctuations than the three model samples, except for sample {\sf SA}, which comes close. Sample {\sf DM}~ has the narrowest distribution, which indicates that it is the most homogeneous. Figures \ref{lppA}--\ref{lppD} also show that the component at the origin, $\mu(0,20)$, is largest for sample {\sf EX}, and smallest for sample {\sf GF}. For samples {\sf DM}~ and {\sf SA}~ the values of $\mu(0,20)$ are intermediate and comparable. Plotting $\mu(0,L)$ as a function of $L$, we find that this remains true for all $L$. These results indicate that the experimental sample {\sf EX}~ is more strongly heterogeneous than the models, and that large regions of matrix space occur more frequently in sample {\sf EX}. A similar conclusion may be drawn from the variance of local porosity fluctuations, which will be studied below. The conclusion is also consistent with the results for $L^*$ shown in Table \ref{ovrvw}. $L^*$ gives the sidelength of the largest cube that can be fitted into the matrix space, and thus $L^*$ may be viewed as a measure for the size of the largest grain. Table \ref{ovrvw} shows that the experimental sample has a larger $L^*$ than all the models. It is interesting to note that plotting $\mu(1,L)$ versus $L$ also shows that the curve for the experimental sample lies above those for the other samples for all $L$. Thus the size of the largest pore and the pore space heterogeneity are also largest for sample {\sf EX}. If $\mu(\phi,L^*)$ is plotted for all four samples, one finds two groups. The first group is formed by samples {\sf EX}~ and {\sf DM}, the second by samples {\sf GF}~ and {\sf SA}. Within each group the curves $\mu(\phi,L^*)$ nearly overlap, but they differ strongly between the two groups. Figures \ref{alp}, \ref{vlp}, and \ref{slp} exhibit the dependence of the local porosity fluctuations on $L$. In Figure \ref{vlp} we plot the variance of the local porosity fluctuations, defined in eq. (\ref{variance}), as a function of $L$. The variances for all samples indicate an approach to a $\delta$-distribution according to eq. (\ref{lpd5}). Again sample {\sf DM}~ is the most homogeneous in the sense that its variance is smallest. The agreement between samples {\sf EX}, {\sf GF}~ and {\sf SA}~ reflects the agreement of their correlation functions, and is expected by virtue of eq. (\ref{variance}). Figure \ref{slp} shows the skewness as a function of $L$ calculated from \begin{equation} \kappa_3(L) = \frac{\overline{(\phi(L)-\overline{\phi}(L))^3}}{\sigma(L)^3} \end{equation} where $\sigma(L)$ is the square root of the variance defined in eq. (\ref{variance}). $\kappa_3$ characterizes the asymmetry of the distribution, and the difference between the most probable local porosity and its average. Again samples {\sf GF}~ and {\sf SA}~ behave similarly, but sample {\sf DM}~ and sample {\sf EX}~ differ from each other, and from the rest. At $L=4a$ the local porosity distributions $\mu(\phi,4)$ show small spikes at equidistantly spaced porosities for samples {\sf EX}~ and {\sf DM}, but not for samples {\sf GF}~ and {\sf SA}. The spikes indicate that samples {\sf EX}~ and {\sf DM}~ have a smoother surface than samples {\sf GF}~ and {\sf SA}. For smooth surfaces and small measurement cells, porosities corresponding to an interface intersecting the measurement cell occur with higher frequency, and this gives rise to the spikes. The presence of isolated islands of pore or matrix space reduces these spikes.
It is unclear at present whether the spikes persist when the measurement cells are chosen to be nonoverlapping. \subsection{Local Percolation Analysis} Visual inspection of Figures \ref{smplA} through \ref{smplD} did not allow us to recognize the degree of connectivity of the various samples. A quantitative characterization of the connectivity is provided by the local percolation probabilities \cite{hil91d,hil98a}, and it is here that the samples differ most dramatically. The samples {\sf EX}, {\sf DM}, {\sf GF}~ and {\sf SA}~ are globally connected in all three directions. This, however, does not imply that they have similar connectivity. The last line in Table \ref{ovrvw} gives the fraction of blocking cells at the porosity $0.1355$ and for $L^*$. It gives a first indication that the connectivity of samples {\sf DM}~ and {\sf GF}~ is, in fact, much poorer than that of the experimental sample {\sf EX}. Figures \ref{lppA} through \ref{lppD} give a more complete account of the situation by exhibiting $\lambda_\alpha(\phi,20)$ for $\alpha=3,c,x,y,z$ for all four samples. First one notes that sample {\sf DM}~ is strongly anisotropic in its connectivity. It has a higher connectivity in the $z$-direction than in the $x$- or $y$-direction. This might be due to the anisotropic compaction process. $\lambda_z(\phi,20)$ for sample {\sf DM}~ differs from that of sample {\sf EX}, although their correlation functions in the $z$-direction are very similar. The $\lambda$-functions for samples {\sf EX}~ and {\sf DM}~ rise much more rapidly than those for samples {\sf GF}~ and {\sf SA}. The inflection point of the $\lambda$-curves for samples {\sf EX}~ and {\sf DM}~ is much closer to the most probable porosity (peak) than in samples {\sf GF}~ and {\sf SA}. All of this indicates that connectivity in cells with low porosity is higher for samples {\sf EX}~ and {\sf DM}~ than for samples {\sf GF}~ and {\sf SA}. In samples {\sf GF}~ and {\sf SA}~ only cells with high porosity are percolating on average. In sample {\sf DM}~ the curves $\lambda_x,\lambda_y$ and $\lambda_3$ show strong fluctuations for $\lambda\approx 1$ at values of $\phi$ much larger than $\gex{\phi}$ or $\phi(\SSS_{\sf DM})$. This indicates a large number of high-porosity cells which are nevertheless blocked. The reason for this is perhaps that the linear compaction process in the underlying model blocks horizontal pore throats and decreases horizontal spatial continuity more effectively than vertical continuity, as shown in \cite{BO97}, Table 1, p. 142. The absence of spikes in $\mu(\phi,4)$ for samples {\sf GF}~ and {\sf SA}, combined with the fact that cells with average porosity ($\approx 0.135$) are rarely percolating, suggests that these samples have a random morphology similar to percolation. \subsection{Implications for Transport Properties} The connectivity analysis of local porosity theory allows us to make some predictions for transport properties (such as conductivity or permeability) without actually calculating them. A detailed comparison between the predictions of local porosity theory and exact calculations of transport properties will appear elsewhere \cite{WBH99}. These predictions are made by calculating the total fraction of percolating cells, eq. (\ref{pL1}). The insets in Figures \ref{lppA} through \ref{lppD} show the functions $p_\alpha(L)=\overline{\lambda_\alpha(\phi,L)}$ for $\alpha=3,x,y,z,c$ for each sample.
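In discretized form the integral in eq. (\ref{pL1}) is just a weighted sum of the two histograms; a minimal sketch, assuming both are tabulated on the same uniform bins of the unit interval, is:
\begin{verbatim}
import numpy as np

def total_percolation_fraction(mu, lam):
    """p_alpha(L) from mu(phi, L), binned as a density on [0, 1],
    and lambda_alpha(phi, L) sampled on the same bins."""
    return float(np.sum(mu * lam) / len(mu))  # sum(mu*lam) * dphi
\end{verbatim}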
The curves for samples {\sf EX}~ and {\sf DM}~ are similar, but differ from those for samples {\sf GF}~ and {\sf SA}. In Figure \ref{total} we plot the curves $p_3(L)$ of all four samples in a single figure. The samples fall into two groups, \{{\sf EX},{\sf DM}\} and \{{\sf GF},{\sf SA}\}, that behave very differently. Figure \ref{total} shows that reconstruction methods \cite{adl92,YT98a} based on correlation functions do not reproduce the connectivity properties of porous media. As a consequence, within the effective medium approximation of local porosity theory \cite{hil91d}, samples {\sf GF}~ and {\sf SA}~ would both yield much lower permeabilities or conductivities than those of samples {\sf EX}~ and {\sf DM}. Based on these results it appears questionable whether correlation function reconstruction can produce reliable models for the prediction of transport. \vspace*{3cm} ACKNOWLEDGEMENT: The authors are grateful to Dr.~David Stern (Exxon Research Production Company) for providing the experimental data set, and to the Deutsche Forschungsgemeinschaft for financial support. \newpage \section*{Table Captions} \newcounter{tab} \begin{list}{\textbf{Table \arabic{tab}:}} {\usecounter{tab} \setlength{\labelwidth}{2.2cm} \setlength{\labelsep}{0.3cm} \setlength{\itemindent}{0pt} \setlength{\leftmargin}{2.5cm} \setlength{\rightmargin}{0cm} \setlength{\parsep}{0.5ex plus0.2ex minus0.1ex} \setlength{\itemsep}{0ex plus0.2ex}} \item \label{lpp} Legend for index $\alpha$ of local percolation probabilities $\lambda_\alpha(\phi,L)$. \item \label{ovrvw} Overview of various properties of the four samples. \end{list} \newpage \begin{center} {\small TABLE \ref{lpp}}\\[12pt] \begin{tabular}{|c|c|} \hline index $\alpha$ & meaning \\\hline $x$ & $x$-direction\\ $y$ & $y$-direction\\ $z$ & $z$-direction\\ $3$ & ($x\wedge y\wedge z$)-direction\\ $c$ & ($x\vee y\vee z$)-direction\\\hline \end{tabular} \end{center} \begin{center} {\small TABLE \ref{ovrvw}}\\[12pt] \begin{tabular}{|l||c|c|c|c|} \hline Properties & $\SSS_{\sf EX}$ & $\SSS_{\sf DM}$ & $\SSS_{\sf GF}$ & $\SSS_{\sf SA}$\\\hline Origin & Experiment & Diagenesis Model & Gaussian Field & Simulated Annealing\\ $M_1\times M_2 \times M_3$ & $300\times 300\times 299$ & $255\times 255\times 255$ & $256\times 256\times 256$ & $256\times 256\times 256$\\ $\phi(\SSS)$ & 0.1355 & 0.1356 & 0.1421 & 0.1354\\ $G({\bf r})$ & $G_{\sf EX}$ & $G_{\sf DM}$ & $G_{\sf GF}\approx G_{\sf EX}$ & $G_{\sf SA}=G_{\sf EX}$\\ $S$ from $\left.\frac{dG}{dr}\right|_{r=0}$ & 0.078 & 0.082 & 0.125 & 0.083\\ Isotropy & $xyz$ & $xy$ & $xyz$ & $xyz$\\ isolated $\MMM$ & No & No & Yes & Yes\\ Pore surface & smooth & smooth & rough & rough \\ $L^*$ & 35 & 25 & 23 & 27\\ Connectivity & $xyz$ & $xyz$ & $xyz$ & $z$\\ $1-\lambda_c(0.1355,L^*)$ & 0.0045 & 0.0239 & 0.3368 & 0.3527\\ \hline \end{tabular} \end{center} \bibliographystyle{ieeetr}
\section{\texorpdfstring{Introduction.}{Introduction}}\label{sec1} \subsection{\texorpdfstring{Variable, but predictable.}{Variable, but predictable}}\label{sec1.1} The notion of \emph{queues} has been used extensively as a powerful abstraction in studying dynamic resource allocation systems, where one aims to match \emph{demands} that arrive over time with available \emph{resources}, and a queue is used to store currently unprocessed demands. Two important ingredients often make the design and analysis of a queueing system difficult: the demands and resources can be both \emph{variable} and \emph{unpredictable}. \emph{Variability} refers to the fact that the arrivals of demands or the availability of resources can be highly volatile and nonuniformly distributed across the time horizon.\vadjust{\goodbreak} \emph{Unpredictability} means that such nonuniformity ``tomorrow'' is unknown to the decision maker ``today,'' and she is obliged to make allocation decisions based only on the current state of the system and some statistical estimates of the future. \begin{figure} \includegraphics{973f01.eps} \caption{An illustration of the admissions control problem, with a constraint on the rate of redirection.}\label{figdelill} \end{figure} While the world will remain volatile as we know it, in many cases, the amount of \emph{unpredictability about the future} may be reduced thanks to \emph{forecasting} technologies and the increasing accessibility of data. For instance: \begin{longlist}[(3)] \item[(1)] advance booking in the hotel and textile industries allows for accurate forecasting of demands ahead of time \cite{FR96}; \item[(2)] the availability of monitoring data enables traffic controllers to predict the traffic pattern around potential bottlenecks \cite{SWO02}; \item[(3)] advance scheduling for elective surgeries could inform care providers several weeks before the intended appointment \cite{KH02}. \end{longlist} In all of these examples, future demands remain \emph{exogenous} and variable, yet (some of) their realizations are revealed to the decision maker ahead of time. \emph{Is there significant performance gain to be harnessed by} ``\emph{looking into the future}?'' In this paper we provide a largely affirmative answer, in the context of a class of admissions control problems. \subsection{\texorpdfstring{Admissions control viewed as resource allocation.}{Admissions control viewed as resource allocation}}\label{secintroadminresourcepool} We begin by informally describing our problem. Consider a single queue equipped with a server that runs at rate $1-p$ jobs per unit time, where $p$ is a fixed constant in $(0,1)$, as depicted in Figure~\ref{figdelill}. The queue receives a stream of incoming jobs, arriving at rate $\lambda\in(0,1)$. If $\lambda> 1-p$, the arrival rate is greater than the server's processing rate, and some form of \emph{admissions control} is necessary in order to keep the system stable. In particular, upon its arrival to the system, a job will either be \emph{admitted} to the queue, or \emph{redirected}. In the latter case, the job does not join the queue, and, from the perspective of the queue, disappears from the system entirely.
The goal of the decision maker is to minimize the average delay experienced by the admitted jobs, while obeying the constraint that the average rate at which jobs are redirected \emph{does not exceed $p$}.\footnote{Note that as $\lambda\to1$, the minimum rate of admitted jobs, $\lambda-p$, approaches the server's capacity $1-p$, and hence we will refer to the system's behavior when $\lambda\to1$ as the \emph{heavy-traffic regime}.} One can think of our problem as that of \emph{resource allocation}, where a decision maker tries to match incoming demands with two types of processing resources: a~\emph{slow local resource} that corresponds to the server and a \emph{fast external resource} that can process any job redirected to it almost instantaneously. Both types of resources are \emph{constrained}, in the sense that their capacities ($1-p$ and $p$, resp.) cannot change over time, owing to physical or contractual predispositions. The processing time of a job at the fast resource is \emph{negligible compared to that at the slow resource}, as long as the rate of redirection to the fast resource stays below $p$ in the long run. Under this interpretation, minimizing the average delay across \emph{all} jobs is equivalent to minimizing the average delay across just the \emph{admitted} jobs, since the jobs redirected to the fast resource can be thought of as being processed immediately and experiencing no delay at all. For a more concrete example, consider a web service company that enters a long-term contract with an external cloud computing provider for a fixed amount of computation resources (e.g., virtual machine instance time) over the contract period.\footnote{\emph{Example}. As of September 2012, Microsoft's Windows Azure cloud services offer a 6-month contract for \$71.99 per month, where the client is entitled to up to 750 hours of virtual machine (VM) instance time each month, and any additional usage would be charged at a 25\% higher rate. Due to the large scale of the Azure data warehouses, the speed of any single VM instance can be treated as roughly constant and independent of the total number of instances that the client is running concurrently.} During the contract period, any incoming request can either be served by the in-house server (slow resource), or be redirected to the cloud (fast resource), and in the latter case, the job does not experience congestion delay since the scalability of the cloud allows for multiple VM instances to run in parallel (and potentially on different physical machines). The decision maker's constraint is that the total amount of jobs redirected to the cloud must stay below the amount prescribed by the contract, which, in our case, translates into a maximum redirection rate over the contract period. Similar scenarios can also arise in other domains, where the slow versus fast resources could, for instance, take on the forms of: \begin{longlist}[(3)] \item[(1)] an in-house manufacturing facility versus an external contractor; \item[(2)] a slow toll booth on the freeway versus a special lane that lets a car pass without paying the toll; \item[(3)] hospital bed resources within a single department versus a cross-departmental central bed pool. \end{longlist} In a recent work \cite{TX12}, a mathematical model was proposed to study the benefits of resource pooling in large-scale queueing systems, which is also closely connected to our problem.
The authors consider a multi-server system where a fraction $1-p$ of a total of $N$ units of processing resources (e.g., CPUs) is distributed among a set of $N$ local servers,\vadjust{\goodbreak} each running at rate $1-p$, while the remaining fraction $p$ is allocated in a centralized fashion, in the form of a central server that operates at rate $pN$ (Figure~\ref{figpooling}). It is not difficult to see that, when $N$ is large, the central server operates at a significantly faster speed than the local servers, so that a job processed at the central server experiences little or no delay. In fact, the admissions control problem studied in this paper is essentially the problem faced by one of the local servers, in the regime where $N$ is large (Figure~\ref{figpoolingcqueue}). This connection is explored in greater detail in Appendix~\ref{secresourcepooling}, where we discuss the implications of our results in the context of resource pooling systems. \begin{figure}[t] \includegraphics{973f02.eps} \caption{Illustration of a model for resource pooling with distributed and centralized resources~\cite{TX12}.}\label{figpooling}\vspace*{-3pt} \end{figure} \begin{figure}[b]\vspace*{-3pt} \includegraphics{973f03.eps} \caption{Resource pooling using a central queue.}\label{figpoolingcqueue} \end{figure} \subsection{\texorpdfstring{Overview of main contributions.}{Overview of main contributions}}\label{sec1.3} We preview some of the main results in this section. The formal statements will be given in Section~\ref{secresults}. \subsubsection{\texorpdfstring{Summary of the problem.}{Summary of the problem}}\label{sec1.3.1} We consider a continuous-time admissions control problem, depicted in Figure~\ref{figdelill}. The problem is characterized by three parameters: $\lambda$, $p$ and $w$: \begin{longlist}[(2)] \item[(1)] Jobs arrive to the system at a rate of $\lambda$ jobs per unit time, with $\lambda\in(0,1)$. The server operates at a rate of $1-p$ jobs per unit time, with $p\in(0,1)$.\vadjust{\goodbreak} \item[(2)] The decision maker is allowed to decide whether an arriving job is admitted to the queue, or redirected away, with the goal of minimizing the time-average queue length,\footnote{By Little's law, the average queue length is essentially the same as average delay, up to a constant factor; see Section~\ref{secperformancemeasure}.} and subject to the constraint that the time-average rate of redirection does not exceed $p$ jobs per unit time. \item[(3)] The decision maker has access to \emph{information about the future}, which takes the form of a \emph{lookahead window} of length $w\in\mathbb{R}_{+}$. In particular, at any time $t$, the times of arrivals and service availability within the interval $[t,t+w]$ are revealed to the decision maker. We will consider the following cases of $w$: \begin{enumerate}[(a)] \item[(a)] $w=0$, the \emph{online problem}, where no future information is available. \item[(b)] $w=\infty$, the \emph{offline problem}, where the entire future has been revealed. \item[(c)] $0<w<\infty$, where the future is revealed only up to a finite lookahead window. \end{enumerate} \end{longlist} Throughout, we will fix $p\in(0,1)$, and be primarily interested in the system's behavior in the \emph{heavy-traffic regime} of $\lambda\to1$.
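To make the dynamics concrete, the following sketch simulates the system under a naive rule that redirects an arriving job whenever the queue is long (an illustrative Python sketch of the model only: the threshold rule and all parameter values are our own choices here, not the optimal policies analyzed in this paper, and the arrival and service processes are taken to be Poisson, as in the model described below):
\begin{verbatim}
import random

def simulate(lam, p, threshold, horizon=10 ** 6, seed=1):
    """Poisson arrivals (rate lam) and service tokens (rate 1 - p);
    returns the time-average queue length and the redirection rate."""
    rng = random.Random(seed)
    t, q, area, redirected = 0.0, 0, 0.0, 0
    while t < horizon:
        dt = rng.expovariate(lam + (1.0 - p))  # time to the next event
        t += dt
        area += q * dt                         # integrate the queue length
        if rng.random() < lam / (lam + 1.0 - p):   # the event is an arrival
            if q >= threshold:
                redirected += 1                # redirect (delete) the job
            else:
                q += 1                         # admit the job to the queue
        elif q > 0:
            q -= 1   # a service token serves one job; otherwise it is wasted
    return area / t, redirected / t

# Example: with lam = 0.95 and p = 0.1, the threshold must be tuned so
# that the redirection rate stays below p; the resulting average queue
# length can then be compared with the optimal scalings stated next.
\end{verbatim}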
\subsubsection{\texorpdfstring{Overview of main results.}{Overview of main results}}\label{sec1.3.2} Our main contribution is to demonstrate that the performance of a redirection policy is highly sensitive to the amount of future information available, measured by the value of $w$. Fix $p\in(0,1)$, and let the arrival and service processes be Poisson. For the online problem ($w=0$), we show that the optimal time-average queue length, $C^{\mathrm{opt}}_0$, approaches infinity in the heavy-traffic regime, at the rate \[ C^{\mathrm{opt}}_0 \sim\log_{1/(1-p)}\frac{1}{1-\lambda}\qquad\mbox{as }\lambda\to1. \] In sharp contrast, the optimal average queue length among offline policies \mbox{($w=\infty$)}, $C^{\mathrm{opt}}_{\infty}$, converges to a \emph{constant}, \[ C^{\mathrm{opt}}_\infty\to\frac{1-p}{p}\qquad\mbox{as }\lambda\to1 \] and this limit is achieved by a so-called no-job-left-behind policy. Figure~\ref{figdelayscale} illustrates this difference in delay performance for a particular value of $p$. \begin{figure} \includegraphics{973f04.eps} \caption{Comparison of optimal heavy-traffic delay scaling between online and offline policies, with $p=0.1$ and $\lambda\to1$. The value $C(p,\lambda,\pi)$ is the resulting average queue length as a function of $p$, $\lambda$ and a policy $\pi$.}\label{figdelayscale} \end{figure} Finally, we show that the no-job-left-behind policy for the offline problem can be modified, so that the \emph{same} optimal heavy-traffic limit of $\frac{1-p}{p}$ is achieved even with a \emph{finite} lookahead window, $w(\lambda)$, where \[ w(\lambda) = \mathcal{O} \biggl(\log\frac{1}{1-\lambda} \biggr)\qquad\mbox{as } \lambda\to1. \] This is of practical importance because in any realistic application, only a finite amount of future information can be obtained. On the methodological end, we use a sample path-based framework to analyze the performance of the offline and finite lookahead policies, borrowing tools from renewal theory and the theory of random walks. We believe that our techniques could be substantially generalized to incorporate general arrival and service processes, diffusion approximations, as well as observational noise. See Section~\ref{secconclusions} for a more elaborate discussion. \subsection{\texorpdfstring{Related work.}{Related work}}\label{sec1.4} There is an extensive body of work devoted to various Markov (or \emph{online}) admissions control problems; the reader is referred to the survey of \cite{Sti85} and references therein. Typically, the problem is formulated as an \mbox{instance} of a Markov decision problem (MDP), where the decision maker, by \mbox{admitting} or rejecting incoming jobs, seeks to maximize a long-term average objective \mbox{consisting} of rewards (e.g., throughput) minus costs (e.g., waiting time experienced by a customer). The case where the maximization is performed subject to a constraint on some average cost has also been studied, and it has been shown, for a family of reward and cost functions, that an optimal policy assumes a ``threshold-like'' form, where the decision maker redirects the next job only if the current queue length is greater than or equal to $L$, with possible randomization if at level $L-1$, and always admits the job if below $L-1$; cf.~\cite{BR86}.
Indeed, our problem, where one tries to minimize average queue length (delay) subject to a lower bound on the throughput (i.e., a maximum redirection rate), can be shown to belong to this category, and the online heavy-traffic scaling result is a straightforward extension following the MDP framework, albeit dealing with technicalities in extending the threshold characterization to an infinite state space, since we are interested in the regime of $\lambda \to1$. However, the resource allocation interpretation of our admissions control problem as that of matching jobs with fast and slow resources, and, in particular, its connections to resource pooling in the many-server limit, seems to be largely unexplored. The difference in motivation perhaps explains why the optimal online heavy-traffic delay scaling of $\log_{1/(1-p)}\frac{1}{1-\lambda}$ that emerges by fixing $p$ and taking $\lambda\to1$ has not appeared in the literature, to the best of our knowledge. There is also an extensive literature on \emph{competitive analysis}, which focuses on the \emph{worst-case} performance of an online algorithm compared to that of an optimal offline version (i.e., one knowing the entire input sequence). The reader is referred to~\cite{AE98} for a comprehensive survey, and to the references therein on packing-type problems, such as load balancing and machine scheduling \cite{Aza98}, and call admission and routing \cite{BAP93}, which are more closely related to our problem. While our optimality result for the policy with a finite lookahead window is stated in terms of the \emph{average} performance given stochastic inputs, we believe that the analysis can be extended to yield worst-case competitive ratios under certain input regularity conditions. In sharp contrast to our knowledge of the online problems, significantly less is known for settings in which information about the future is taken into consideration. In \cite{Naw90}, the author considers a variant of the flow control problem where the decision maker knows the job size of the arriving customer, as well as the arrival time and job size of the next customer, with the goal of maximizing a certain discounted or average reward. A characterization of an optimal stationary policy is derived under a standard semi-Markov decision problem framework, since the lookahead is limited to the next arriving job. In \cite{CH93}, the authors consider a scheduling problem with one server and $M$ parallel queues, motivated by applications in satellite systems where the link qualities between the server and the queues vary over time. The authors compare the throughput performance of several online policies with that of an offline policy, which has access to all future instances of link qualities. However, the offline policy takes the form of a Viterbi-like dynamic program, which, while being throughput-optimal by definition, provides limited qualitative insight. One challenge that arises as one tries to move beyond the online setting is that policies with lookahead typically do not admit a clean Markov description, and hence common techniques for analyzing Markov decision problems do not easily apply. To circumvent the obstacle, we will first relax our problem to be fully offline, which turns out to be surprisingly amenable to analysis. We then use the insights from the optimal offline policy to construct an optimal policy with a finite lookahead window, in a rather straightforward manner.
In other application domains, the idea of exploiting future information or predictions to improve decision making has been explored. Advance reservations (a~form of future information) have been studied in lossy networks \cite{CJP99,LR07} and, more recently, in revenue management \cite{SL11}. Using simulations, \cite{KH02} demonstrates that the use of a one-week and two-week advance scheduling window for elective surgeries can improve the efficiency at the associated intensive care unit (ICU). The benefits of an advance booking program for supply chains have been shown in \cite{FR96} in the form of reduced demand uncertainties. While similar in spirit, the motivations and dynamics in these models are very different from ours. Finally, our formulation of the slow and fast resources has been in part inspired by the literature on resource pooling systems, where one improves overall system performance by (partially) sharing individual resources in a collective manner. The connection of our problem to a specific multi-server model proposed by \cite{TX12} is discussed in Appendix~\ref{secresourcepooling}. For the general topic of resource pooling, interested readers are referred to \cite{MR98,HL99,BW01,MS04} and the references therein.
\subsection{\texorpdfstring{Organization of the paper.}{Organization of the paper}}\label{sec1.5} The rest of the paper is organized as follows. The mathematical model for our problem is described in Section~\ref{secmodel}. Section~\ref{secresults} contains the statements of our main results, and introduces the no-job-left-behind policy ($\pi_{\mathrm{NOB}}$), which will be a central object of study for this paper. Section~\ref{secinterpret} presents two alternative descriptions of the no-job-left-behind policy that have important structural, as well as algorithmic, implications. Sections~\ref{secoptonline}--\ref{secfinitelookahead} are devoted to the proofs of the results concerning the online, offline and finite-lookahead policies, respectively. Finally, Section~\ref{secconclusions} contains some concluding remarks and future directions.
\section{\texorpdfstring{Model and setup.}{Model and setup}}\label{secmodel}
\subsection{\texorpdfstring{Notation.}{Notation}}\label{sec2.1} We will denote by $\mathbb{N}$, $\mathbb{Z}_{+}$ and $\mathbb{R}_{+}$, the set of natural numbers, nonnegative integers and nonnegative reals, respectively. Let $f,g\dvtx \mathbb{R}_{+}\to\mathbb{R}_{+}$ be two functions. We will use the following asymptotic notation throughout: $f(x)\lesssim g(x)$ if $\lim_{x\to1}\frac{f(x)}{g(x)}\leq1$, $f(x)\gtrsim g(x)$ if $\lim_{x\to1}\frac{f(x)}{g(x)}\geq1$; $f(x)\ll g(x)$ if $\lim_{x\to1}\frac{f(x)}{g(x)}= 0$ and $f(x)\gg g(x)$ if $\lim_{x\to1}\frac{f(x)}{g(x)} = \infty$.
\subsection{\texorpdfstring{System dynamics.}{System dynamics}}\label{sec2.2} An illustration of the system setup is given in Figure~\ref{figdelill}. The system consists of a single-server queue running in continuous time ($t\in\mathbb{R}_+)$, with an unbounded buffer that stores all unprocessed jobs. The queue is assumed to be empty at $t=0$. Jobs arrive to the system according to a Poisson process with rate $\lambda$, $\lambda\in(0,1 )$, so that the intervals between two adjacent arrivals are independent and exponentially distributed with mean $\frac{1}{\lambda}$. We will denote by $ \{A(t)\dvtx t\in\mathbb{R}_{+} \}$ the cumulative arrival process, where $A(t)\in\mathbb{Z}_{+}$ is the total number of arrivals to the system by time $t$.
The processing of jobs by the server is modeled by a Poisson process of rate $1-p$. When the service process receives a jump at time $t$, we say that a service token is generated. If the queue is not empty at time $t$, exactly one job ``consumes'' the service token and leaves the system immediately. Otherwise, the service token is ``wasted'' and has no impact on the future evolution of the system.\footnote{When the queue is nonempty, the generation of a token can be interpreted as the completion of a previous job, upon which the server is ready to fetch the next job. The time between two consecutive tokens corresponds to the service time. The waste of a token can be interpreted as the server starting to serve a ``dummy job.'' Roughly speaking, the service token formulation, compared to that of a constant-speed server processing jobs with exponentially distributed sizes, provides a performance upper bound due to the inefficiency caused by dummy jobs, but has very similar performance in the heavy-traffic regime, in which the tokens are almost never wasted. Using such a point process to model services is not new, and the reader is referred to \cite{TX12} and the references therein. It is, however, important to note a key assumption implicit in the service token formulation: the processing times are intrinsic to the server, and \emph{independent} of the job being processed. For instance, the sequence of service times will not depend on the order in which the jobs in the queue are served, so long as the server remains busy throughout the period. This distinction is of little relevance for an $M/M/1$ queue, but can be important in our case, where the redirection decisions may depend on the future. See the discussion in Section~\ref{secconclusions}.} We will denote by $ \{S(t)\dvtx t\in\mathbb{R}_{+} \}$ the cumulative token generation process, where $S(t)\in\mathbb{Z}_{+}$ is the total number of service tokens generated by time $t$. When $\lambda>1-p$, in order to maintain the stability of the queue, a decision maker has the option of ``redirecting'' a job \emph{at the moment of its arrival}. Once redirected, a job effectively ``disappears,'' and for this reason, we will use the word \textit{deletion} as a synonym for redirection throughout the rest of the paper, since it is more intuitive to think of deleting a job in our subsequent sample-path analysis. Finally, the decision maker is allowed to delete jobs at a time-average rate of at most $p$.
\subsection{\texorpdfstring{Initial sample path.}{Initial sample path}}\label{sec2.3} Let $ \{ Q^{0} (t )\dvtx {t\in\mathbb{R}_{+}} \}$ be the continuous-time queue length process, where $Q^0(t)\in\mathbb{Z}_{+}$ is the queue length at time $t$ if \emph{no deletion} is applied at any time. We say that an \textit{event} occurs at time $t$ if there is either an arrival or the generation of a service token at time $t$. Let $T_{n}$, $n\in\mathbb{N}$, be the time of the $n$th event in the system. Denote by $ \{ Q^{0} [n ]\dvtx n\in\mathbb{Z}_{+} \} $ the embedded discrete-time process of $ \{Q^{0} ( t ) \}$, where $Q^0 [n ]$ is the length of the queue sampled immediately after the $n$th event,\footnote{The notation $f(x-)$ denotes the right-limit of $f$ at $x$: $f(x-)=\lim_{y\downarrow x}f(y)$.
In this particular context, the values of $Q^{0} [n ]$ are well defined, since the sample paths of Poisson processes are right-continuous-with-left-limits (RCLL) almost surely.}
\[
Q^{0} [n ]= Q^{0} (T_{n}- ),\qquad n\in\mathbb{N},
\]
with the initial condition $Q^0[0]=0$. It is well known that $Q^0$ is a random walk on~$\mathbb{Z}_{+}$, such that for all $x_1,x_2\in\mathbb{Z}_{+}$ and $n\in\mathbb{Z}_{+}$,
\begin{equation}
\qquad \mathbb{P} \bigl(Q^0[n+1]=x_2 \mid Q^0[n]=x_1 \bigr) = \cases{ \displaystyle\frac{\lambda}{\lambda+1-p}, &\quad$x_2-x_1=1$, \vspace*{5pt}\cr \displaystyle\frac{1-p}{\lambda+1-p}, &\quad$x_2-x_1=-1$, \vspace*{5pt}\cr 0, &\quad otherwise,} \label{eqQ0trans1}
\end{equation}
if $x_1>0$ and
\begin{equation}
\mathbb{P} \bigl(Q^0[n+1]=x_2 \mid Q^0[n]=x_1 \bigr) = \cases{ \displaystyle\frac{\lambda}{\lambda+1-p}, &\quad$x_2-x_1=1$, \vspace*{5pt}\cr \displaystyle\frac{1-p}{\lambda+1-p}, &\quad$ x_2-x_1=0$, \vspace*{5pt}\cr 0, &\quad otherwise,}\label{eqQ0trans2}
\end{equation}
if $x_1=0$. Note that, when $\lambda>1-p$, the random walk $Q^0$ is transient. The process $Q^0$ contains \emph{all relevant information} in the arrival and service processes, and will be the main object of study of this paper. We will refer to $Q^0$ as the \emph{initial sample path} throughout the paper, to distinguish it from sample paths obtained after deletions have been made.
\subsection{\texorpdfstring{Deletion policies.}{Deletion policies}}\label{sec2.4} Since a deletion can only take place when there is an arrival, it suffices to define the locations of deletions with respect to the discrete-time process $ \{Q^0[n]\dvtx n\in\mathbb{Z}_{+} \}$, and throughout, our analysis will focus on discrete-time queue length processes unless otherwise specified. Let $\Phi(Q )$ be the set of locations of all arrivals in a discrete-time queue length process $Q$, that is,
\[
\Phi(Q )= \bigl\{ n\in\mathbb{N}\dvtx Q [n ]>Q [n-1 ] \bigr\},
\]
and for any $M\subset\mathbb{Z}_{+}$, define the counting process $ \{I(M,n)\dvtx n\in\mathbb{N} \}$ associated with $M$ as\footnote{$|X|$ denotes the cardinality of $X$.}
\begin{equation}
\label{eqIdef} I(M,n) = \bigl\llvert\{1, \ldots, n \}\cap M \bigr\rrvert.
\end{equation}
\begin{defn}[(Feasible deletion sequence)]\label{deffeasibleDel} The sequence $M= \{ m_{i} \} $ is said to be a \textit{feasible deletion sequence} with respect to a discrete-time queue length process, $Q^{0}$, if all of the following hold:
\begin{longlist}[(2)]
\item[(1)] All elements in $M$ are unique, so that at most one deletion occurs at any slot.
\item[(2)] $M\subset\Phi(Q^{0} )$, so that a deletion occurs only when there is an arrival.
\item[(3)]
\begin{equation}
\limsup_{n\rightarrow\infty}\frac{1}{n} I (M,n )\leq\frac{p}{\lambda+(1-p)}\qquad\mbox{a.s.,} \label{eqrateconstr}
\end{equation}
so that the time-average deletion rate is at most $p$.
\end{longlist}
In general, $M$ is also allowed to be a finite set.
\end{defn}
The denominator $\lambda+ (1-p )$ in equation~(\ref{eqrateconstr}) is due to the fact that the total rate of events in the system is $\lambda+ (1-p )$.\footnote{This is equal to the total rate of jumps in $A(\cdot)$ and $S(\cdot)$.} Analogously, the deletion rate in continuous time is defined by
\begin{equation}
r_d = (\lambda+1-p)\cdot\limsup_{n\rightarrow\infty} \frac{1}{n} I (M,n ).
\end{equation}
The impact of a deletion sequence on the evolution of the queue length process is formalized in Definition~\ref{defdelMap} below.
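To make these definitions concrete, the following minimal simulation sketch (our own illustration; the function names are ours and belong to no standard library) generates the embedded initial sample path $Q^0$ according to equations~(\ref{eqQ0trans1}) and (\ref{eqQ0trans2}), and evaluates the empirical deletion rate $I(M,n)/n$ of a candidate deletion sequence $M$, which can be compared against the feasibility bound in equation~(\ref{eqrateconstr}).
\begin{verbatim}
import random

def sample_initial_path(lam, p, n_events, seed=0):
    # Embedded walk Q^0: each event is an arrival with probability
    # lam/(lam + 1 - p), and a service token otherwise; a token
    # generated at an empty queue is wasted (the walk stays at 0).
    rng = random.Random(seed)
    q0, arrivals = [0], []
    for n in range(1, n_events + 1):
        if rng.random() < lam / (lam + 1 - p):
            q0.append(q0[-1] + 1)
            arrivals.append(n)          # slot n belongs to Phi(Q^0)
        else:
            q0.append(max(q0[-1] - 1, 0))
    return q0, arrivals

def empirical_deletion_rate(M, n):
    # I(M, n) / n; feasibility requires that this ratio eventually
    # stay below p / (lam + 1 - p).
    return sum(1 for m in M if m <= n) / n
\end{verbatim}
Because the arrival and token processes are always-on Poisson processes, events occur at the constant rate $\lambda+1-p$ regardless of the queue length; time averages therefore coincide with averages over event slots, which is why the sketch can work entirely with the embedded process.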
\begin{defn}[(Deletion maps)]\label{defdelMap} Fix an initial queue length process $ \{ Q^{0}[n]\dvtx n\in\mathbb{N} \}$ and a corresponding feasible deletion sequence $M= \{ m_{i} \}$.
\begin{longlist}[(2)]
\item[(1)] The \textit{point-wise deletion map} $D_{P} (Q^{0},m )$ outputs the resulting process after a deletion is made to $Q^{0}$ in slot $m$. Let $Q'=D_{P} (Q^{0},m )$. Then
\begin{equation}
Q' [n ]=\cases{ Q^{0} [n ]-1, &\quad$n\geq m$ and $Q^0[t]>0\ \forall t\in\{m,\ldots,n \}$; \vspace*{5pt}\cr Q^{0} [n ], &\quad otherwise.}\label{eqxitr}
\end{equation}
\item[(2)] The \textit{multi-point deletion map} $D (Q^{0},M )$ outputs the resulting process after all deletions in the set $M$ are made to $Q^{0}$. Define $Q^{i}$ recursively as $Q^{i}=D_{P} (Q^{i-1},m_{i} )$, $\forall i\in\mathbb{N}$. Then $Q^{\infty}=D (Q^{0},M )$ is defined as the point-wise limit
\begin{equation}
Q^{\infty}[n]=\lim_{i\rightarrow\min\{|M|,\infty\} }Q^{i}[n]\qquad\forall n \in\mathbb{Z}_{+}. \label{eqqinft}
\end{equation}
\end{longlist}
\end{defn}
The definition of the point-wise deletion map reflects the earlier assumption that the service time of a job only depends on the speed of the server at the moment and is independent of the job's identity; see Section~\ref{secmodel}. Note also that the value of $Q^{\infty} [n ]$ depends only on the total number of deletions before $n$ [equation~(\ref{eqxitr})], which is at most $n$, so the limit in equation~(\ref{eqqinft}) is well defined. Moreover, it is not difficult to see that the order in which the deletions are made has no impact on the resulting sample path, as stated in the lemma below. The proof is omitted.
\begin{lem} \label{lemlocindp} Fix an initial sample path $Q^0$, and let $M$ and $\widetilde{M}$ be two feasible deletion sequences that contain the same elements. Then $D (Q^{0},M )=D (Q^{0},\widetilde{M} )$.
\end{lem}
We next define the notion of a deletion policy that outputs a deletion sequence based on the (limited) knowledge of an initial sample path $Q^0$. Informally, a deletion policy is said to be $w$-lookahead if it makes its deletion decisions based on the knowledge of $Q^0$ up to $w$ units of time into the future (in continuous time).
\begin{defn}[($w$-lookahead deletion policies)] \label{defw-pred} Fix $w\in\mathbb{R}_{+}\cup\{\infty\}$. Let $\mathcal{F}_t=\sigma(Q^0(s);s\leq t )$ be the natural filtration induced by $ \{Q^0(t)\dvtx t\in\mathbb{R}_{+} \}$ and $\mathcal{F}_\infty= \bigcup_{t\in\mathbb{Z}_{+}}\mathcal{F}_t$. A \emph{$w$-lookahead deletion policy} is a mapping, $\pi\dvtx \mathbb{Z}_{+}^{\mathbb{R}_+}\rightarrow\mathbb{N}^{\infty}$, such that:
\begin{longlist}[(2)]
\item[(1)] $M=\pi(Q^{0} )$ is a feasible deletion sequence a.s.;
\item[(2)] $ \{n\in M \}$ is $\mathcal{F}_{T_n+w}$-measurable, for all $n\in\mathbb{N}$.
\end{longlist}
We will denote by $\Pi_{w}$ the family of all $w$-lookahead deletion policies.
\end{defn}
The parameter $w$ in Definition~\ref{defw-pred} captures the amount of information that the deletion policy has about the future:
\begin{longlist}[(2)]
\item[(1)] When $w=0$, all deletion decisions are made solely based on the knowledge of the system up to the current time. We will refer to $\Pi_{0}$ as \textit{online policies}.
\item[(2)] When $w=\infty$, the entire sample path of $Q^0$ is revealed to the decision maker at $t=0$. We will refer to $\Pi_{\infty}$ as \textit{offline policies}.
\item[(3)] We will refer to $\Pi_w, 0<w<\infty$, as policies with a \emph{lookahead window of size $w$}.
\end{longlist}
\subsection{\texorpdfstring{Performance measure.}{Performance measure}}\label{secperformancemeasure} Given a discrete-time queue length process $Q$ and $n\in\mathbb{N}$, denote by $S (Q,n )\in\mathbb{Z}_{+}$ the partial sum
\begin{equation}
S (Q,n )=\sum_{k=1}^{n}Q [k ].\label{eqpartials}
\end{equation}
\begin{defn}[(Average post-deletion queue length)] Let $Q^{0}$ be an initial queue length process. Define $C(p,\lambda,\pi) \in\mathbb{R}_{+}$ as the expected average queue length after applying a deletion policy $\pi$,
\begin{equation}
C(p,\lambda,\pi)=\mathbb{E} \biggl(\limsup_{n\rightarrow\infty} \frac{1}{n}S \bigl(Q_{\pi}^{\infty},n \bigr) \biggr), \label{eqC}
\end{equation}
where $Q_{\pi}^{\infty}=D (Q^{0},\pi(Q^{0} ) )$, and the expectation is taken over all realizations of~$Q^{0}$ and the randomness used by $\pi$ internally, if any.
\end{defn}
\begin{remark*}[(Delay versus queue length)] By Little's law, the long-term average waiting time of a typical customer in the queue is equal to the long-term average queue length divided by the arrival rate (independent of the service discipline of the server). Therefore, if our goal is to minimize the average waiting time of the jobs that remain after deletions, it suffices to use $C(p,\lambda,\pi)$ as a performance metric in order to judge the effectiveness of a deletion policy $\pi$. In particular, if we denote by $T_{\mathrm{all}}\in\mathbb{R}_{+}$ the time-average queueing delay experienced by all jobs, where deleted jobs are assumed to have a delay of zero, then $\mathbb{E}(T_{\mathrm{all}}) = \frac{1}{\lambda} C(p,\lambda, \pi)$, and hence the average queue length and delay coincide in the heavy-traffic regime, as $\lambda\to 1$. By an identical argument, the average delay among \emph{admitted} jobs, $T_{\mathrm{adt}}$, satisfies $\mathbb{E} (T_{\mathrm{adt}} )=\frac{1}{\lambda-r_d} C(p,\lambda, \pi)$, where $r_d$ is the continuous-time deletion rate under $\pi$. Therefore, we may use the terms ``delay'' and ``average queue length'' interchangeably in the rest of the paper, with the understanding that they represent essentially the same quantity up to a constant.
\end{remark*}
Finally, we define the notion of an optimal delay within a family of policies.
\begin{defn}[(Optimal delay)] Fix $w\in\mathbb{R}_{+}$. We call $C_{\Pi_{w}}^{*}(p,\lambda)$ the optimal delay in $\Pi_{w}$, where
\begin{equation}
C_{\Pi_{w}}^{*}(p,\lambda)=\inf_{\pi\in\Pi_{w}}C(p,\lambda,\pi).\label{eqLstr}
\end{equation}
\end{defn}
\section{\texorpdfstring{Summary of main results.}{Summary of main results}}\label{secresults} We state the main results of this paper in this section; their proofs are presented in Sections~\ref{secoptonline}--\ref{secfinitelookahead}.
\subsection{\texorpdfstring{Optimal delay for online policies.}{Optimal delay for online policies}}\label{sec3.1}
\begin{defn}[(Threshold policies)] We say that $\pi_{\mathrm{th}}^{L}$ is an $L$-threshold policy if a job arriving at time $t$ is deleted if and only if the queue length at time $t$ is greater than or equal to $L$.
\end{defn}
The following theorem shows that the class of threshold policies achieves the optimal heavy-traffic delay scaling in $\Pi_0$.
\begin{teo}[(Optimal online policies)]\label{teoonline} Fix $p\in(0,1)$, and let
\[
L (p,\lambda) = \biggl\lceil\log_{\lambda/(1-p)}\frac{p}{1-\lambda} \biggr\rceil.
\]
Then:
\begin{longlist}[(2)]
\item[(1)] $\pi_{\mathrm{th}}^{L (p,\lambda)}$ is feasible for all $\lambda\in(1-p,1 )$.\vspace*{1pt}
\item[(2)] $\pi_{\mathrm{th}}^{L (p,\lambda)}$ is asymptotically optimal in $\Pi_{0}$ as $\lambda\to1$,
\[
C \bigl(p,\lambda,\pi_{\mathrm{th}}^{L (p,\lambda)} \bigr)\sim C_{\Pi_{0}}^{*} (p,\lambda)\sim\log_{1/(1-p)} \frac{1}{1-\lambda}\qquad\mbox{as }\lambda\rightarrow1.
\]
\end{longlist}
\end{teo}
\begin{pf} See Section~\ref{secoptonline}. \end{pf}
\subsection{\texorpdfstring{Optimal delay for offline policies.}{Optimal delay for offline policies}}\label{sec3.2} Given the sample path of a random walk $Q$, let $U (Q,n )$ be the number of slots until $Q$ reaches the level $Q[n]-1$ after slot $n$ (with the convention that the infimum of an empty set is $\infty$):
\begin{equation}
U (Q,n )=\inf\bigl\{j\geq1\dvtx Q [n+j ]=Q[n]-1 \bigr\}.
\end{equation}
\begin{defn}[(No-job-left-behind policy\footnote{The reason for choosing this name will be made clear in Section~\ref{secinterpstack}, using the ``stack'' interpretation of this policy.})]\label{defnob} Given an initial sample path $Q^{0}$, the no-job-left-behind policy, denoted by $\pi_{\mathrm{NOB}}$, deletes all arrivals in the set $\Psi$, where
\begin{equation}
\Psi= \bigl\{n\in\Phi\bigl(Q^0 \bigr)\dvtx U \bigl(Q^0,n \bigr)=\infty\bigr\}. \label{eqpsi}
\end{equation}
We will refer to the deletion sequence generated by $\pi_{\mathrm{NOB}}$ as $M^\Psi= \{m^\Psi_i\dvtx i\in\mathbb{N} \}$, where $M^\Psi= \Psi$.
\end{defn}
In other words, $\pi_{\mathrm{NOB}}$ deletes a job arriving at time $t$ if and only if the initial queue length process never returns to below the current level in the future, which also implies that
\begin{equation}
Q^0[n] \geq Q^0 \bigl[m^\Psi_i \bigr]\qquad\forall i\in\mathbb{N}, n\geq m^\Psi_i. \label{eqmiinc}
\end{equation}
Examples of the $\pi_{\mathrm{NOB}}$ policy being applied to a particular sample path are given in Figures~\ref{figwater1} and~\ref{figwater2} (illustration), as well as in Figure~\ref{figsamplepaths} (simulation).
\begin{figure}
\includegraphics{973f05.eps}
\caption{Illustration of applying $\pi_{\mathrm{NOB}}$ to an initial sample path, $Q^0$, where the deletions are marked by bold red arrows.}\label{figwater1}
\end{figure}
\begin{figure}
\includegraphics{973f06.eps}
\caption{The solid lines depict the resulting sample path, $\widetilde{Q} =D (Q^0,M^\Psi)$, after applying $\pi_{\mathrm{NOB}}$ to $Q^0$.}\label{figwater2}
\end{figure}
\begin{figure}
\includegraphics{973f07.eps}
\caption{Example sample paths of $Q^0$ and those obtained after applying $\pi_{\mathrm{th}}^{L (p,\lambda)}$ and $\pi_{\mathrm{NOB}}$ to $Q^0$, with $p=0.05$ and $\lambda=0.999$.}\label{figsamplepaths}
\end{figure}
It turns out that the delay performance of $\pi_{\mathrm{NOB}}$ is about as good as we can hope for in heavy traffic, as is formalized in the next theorem.
\begin{teo}[(Optimal offline policies)] \label{teooffline} Fix $p\in(0,1 )$.
\begin{longlist}[(2)]
\item[(1)] $\pi_{\mathrm{NOB}}$ is feasible for all $\lambda\in(1-p,1 )$ and\footnote{It is easy to see that $\pi_{\mathrm{NOB}}$ is not a very efficient deletion policy for relatively small values of $\lambda$. In fact, $C (p,\lambda,\pi_{\mathrm{NOB}} )$ is a \emph{decreasing} function of $\lambda$. This problem can be fixed by injecting into the arrival process a Poisson process of ``dummy jobs'' of rate $1-\lambda-\varepsilon$, so that the total arrival rate is $1-\varepsilon$, where $\varepsilon\approx0$.
This reasoning implies that $(1-p)/p$ is a uniform upper bound of $C^*_{\Pi_\infty}(p,\lambda)$ for all $\lambda\in(0,1)$.}
\begin{equation}
C (p,\lambda,\pi_{\mathrm{NOB}} )=\frac{1-p}{\lambda-(1-p)}.
\end{equation}
\item[(2)] $\pi_{\mathrm{NOB}}$ is asymptotically optimal in $\Pi_{\infty}$ as $\lambda\to1$,
\[
\lim_{\lambda\rightarrow1}C (p,\lambda,\pi_{\mathrm{NOB}} )=\lim_{\lambda\rightarrow1}C_{\Pi_{\infty}}^{*} (p,\lambda)= \frac{1-p}{p}.
\]
\end{longlist}
\end{teo}
\begin{pf} See Section~\ref{secoffline}. \end{pf}
\begin{remark}[(Heavy-traffic ``delay collapse'')] It is perhaps surprising to observe that the heavy-traffic scaling essentially ``collapses'' under $\pi_{\mathrm{NOB}}$: the average queue length converges to a finite value, $\frac{1-p}{p}$, as $\lambda\to1$, which is in sharp contrast~with the optimal scaling of $\sim\log_{1/(1-p)}\frac{1}{1-\lambda}$ for the\vspace*{1pt} online policies, given by Theorem~\ref{teoonline}; see Figure~\ref{figdelayscale} for an illustration of this difference. The ``stack'' interpretation of the no-job-left-behind policy (Section~\ref{secinterpantireac}) will help us understand intuitively \emph{why} such a drastic discrepancy exists between the online and offline heavy-traffic scaling behaviors. Also, as a by-product of Theorem~\ref{teooffline}, observe that the heavy-traffic limit scales, in $p$, as
\begin{equation}
\lim_{\lambda\rightarrow1}C_{\Pi_{\infty}}^{*} (p,\lambda) \sim\frac{1}{p}\qquad\mbox{as $p \to0$.}
\end{equation}
This is consistent with an intuitive notion of ``flexibility'': delay should degenerate as the system's ability to redirect jobs away diminishes.
\end{remark}
\begin{remark}[(Connections to branching processes and Erd\H{o}s--R\'enyi random graphs)] Let $d<1<c$ satisfy $de^{-d}=ce^{-c}$. Consider a Galton--Watson birth process in which each node has $Z$ children, where $Z$ is Poisson with mean $c$. Conditioning on the finiteness of the process gives a Galton--Watson process where $Z$ is Poisson with mean $d$. This occurs in the classical analysis of the Erd\H{o}s--R\'enyi random graph $G(n,p)$ with $p=c/n$. There will be a giant component, and the deletion of that component gives a random graph $G(m,q)$ with $q=d/m$. As a rough analogy, $\pi_{\mathrm{NOB}}$ deletes those nodes that would be in the giant component.
\end{remark}
\subsection{\texorpdfstring{Policies with a finite lookahead window.}{Policies with a finite lookahead window}}\label{sec3.3} In practice, infinite prediction into the future is certainly too much to ask for. In this section, we show that a natural modification of $\pi_{\mathrm{NOB}}$ allows for the \emph{same delay} to be achieved, using only a \emph{finite} lookahead window, whose length, $w(\lambda)$, increases to infinity as $\lambda\to1$.\footnote{In a way, this is not entirely surprising, since $\pi_{\mathrm{NOB}}$ leads to a deletion rate of $\lambda-(1-p)$, and there is an additional $p-[\lambda-(1-p)]=1-\lambda$ of unused deletion rate that can be exploited.} Denote by $w\in\mathbb{R}_{+}$ the size of the lookahead window in continuous time, and by $W(n)\in\mathbb{Z}_{+}$ the window size in the discrete-time embedded process $Q^0$, starting from slot $n$. Letting $T_n$ be the time of the $n$th event in the system, we have
\begin{equation}
W(n) = \sup\{ k \in\mathbb{Z}_{+}\dvtx T_{n+k} \leq T_{n}+w \}. \label{eqwdiscrete}
\end{equation}
For $x \in\mathbb{N}$, define the truncated version of $U (Q,n )$, restricted to the first $x$ slots after $n$:
\begin{equation}
U (Q,n,x )=\inf\bigl\{j\in\{1,\ldots,x \}\dvtx Q [n+j ]=Q[n]-1 \bigr\}.
\end{equation}
\begin{defn}[($w$-no-job-left-behind policy)] Given an initial sample path $Q^{0}$ and $w>0$, the $w$-no-job-left-behind policy, denoted by $\pi_{\mathrm{NOB}}^w$, deletes all arrivals in the set $\Psi^{w}$, where
\[
\Psi^{w} = \bigl\{n\in\Phi\bigl(Q^0 \bigr)\dvtx U \bigl(Q^0,n,W(n) \bigr)=\infty\bigr\}.
\]
\end{defn}
It is easy to see that $\pi_{\mathrm{NOB}}^w$ is simply $\pi_{\mathrm{NOB}}$ applied within the confines of a finite window: a job arriving at time $t$ is deleted if and only if the initial queue length process does not return to below the current level \emph{within the next $w$ units of time}, assuming no further deletions are made. Since the window is finite, it is clear that $\Psi^w \supset\Psi$ for any $w<\infty$, and hence $C (p,\lambda,\pi_{\mathrm{NOB}}^w )\leq C (p,\lambda,\pi_{\mathrm{NOB}} )$ for all $\lambda\in(1-p,1 )$. The only issue now becomes that of feasibility: by making decisions based only on a finite lookahead window, we may end up deleting at a rate greater than $p$. The following theorem summarizes the above observations and gives an upper bound on the appropriate window size, $w$, as a function of $\lambda$.\footnote{Note that Theorem~\ref{teolookahead} implies Theorem~\ref{teooffline} and is hence stronger.}
\begin{teo}[(Optimal delay scaling with finite lookahead)]\label{teolookahead} Fix $p\in(0,1 )$. There exists $C>0$, such that if
\[
w (\lambda)= C\cdot\log\frac{1}{1-\lambda},
\]
then $\pi_{\mathrm{NOB}}^{w(\lambda)}$ is feasible and
\begin{equation}
C \bigl(p,\lambda,\pi_{\mathrm{NOB}}^{w(\lambda)} \bigr)\leq C (p,\lambda, \pi_{\mathrm{NOB}} ) = \frac{1-p}{\lambda-(1-p)}.
\end{equation}
Since $C_{\Pi_{w (\lambda)}}^* (p,\lambda) \geq C_{\Pi_{\infty}}^* (p,\lambda)$ and $C^*_{\Pi_{w (\lambda)}}(p,\lambda)\leq C (p,\lambda,\pi_{\mathrm{NOB}}^{w(\lambda)} )$, we also have that
\begin{equation}
\lim_{\lambda\rightarrow1}C_{\Pi_{w (\lambda)}}^{*} (p,\lambda) = \lim_{\lambda\rightarrow1}C_{\Pi_{\infty}}^{*} (p,\lambda)= \frac{1-p}{p}.
\end{equation}
\end{teo}
\begin{pf} See Section~\ref{secpfthmlookahead}. \end{pf}
\subsubsection{\texorpdfstring{Delay-information duality.}{Delay-information duality}}\label{sec3.3.1} Theorem~\ref{teolookahead} says that one can attain the same heavy-traffic delay performance as the optimal offline algorithm if the size of the lookahead window scales as $\mathcal{O}(\log\frac{1}{1-\lambda})$. Is this the minimum amount of future information necessary to achieve the same (or a comparable) heavy-traffic delay limit as the optimal offline policy? We conjecture that this is the case, in the sense that there exists a matching lower bound, as follows.\looseness=-1
\begin{figure}[b]
\includegraphics{973f08.eps}
\caption{``Delay vs. Information.'' Best achievable heavy-traffic delay scaling as a function of the size of the lookahead window, $w$. Results presented in this paper are illustrated by the solid lines and circles, and the gray dotted line depicts our conjecture for the unknown regime of $0<w(\lambda)\lesssim\log(\frac{1}{1-\lambda} )$.}\label{figcollapse}
\end{figure}
\begin{conj}\label{conjinfolowerbound} Fix $p\in(0,1)$. If $w(\lambda)\ll\log\frac{1}{1-\lambda}$ as $\lambda\to1$, then
\[
\limsup_{\lambda\to1}C^*_{\Pi_{w(\lambda)}}(p,\lambda) = \infty.
\]
In other words, ``delay collapse'' can occur only if $w(\lambda) = \Theta(\log\frac{1}{1-\lambda} )$.
\end{conj}
If the conjecture is proven, it would imply a \emph{sharp transition} in the system's heavy-traffic delay scaling behavior around the critical ``threshold'' of $w(\lambda) = \Theta(\log\frac{1}{1-\lambda} )$. It would also imply the existence of a symmetric \emph{dual relationship} between \emph{future information} and \emph{queueing delay}: $\Theta(\log\frac{1}{1-\lambda} )$ of information is required to achieve a finite delay limit, and one has to suffer $\Theta(\log\frac{1}{1-\lambda} )$ in delay if only a finite amount of future information is available. Figure~\ref{figcollapse} summarizes the main results of this paper from the angle of the delay-information duality. The dotted line\vadjust{\goodbreak} segment marks the unknown regime, and the sharp transition at its right endpoint reflects the view of Conjecture~\ref{conjinfolowerbound}.
\section{\texorpdfstring{Interpretations of $\pi_{\mathrm{NOB}}$.}{Interpretations of pi NOB}}\label{secinterpret} We present two equivalent ways of describing the no-job-left-behind policy $\pi_{\mathrm{NOB}}$. The \emph{stack interpretation} helps us derive the asymptotic deletion rate of $\pi_{\mathrm{NOB}}$ in a simple manner, and illustrates the superiority of $\pi_{\mathrm{NOB}}$ compared to any online policy. A second description of $\pi_{\mathrm{NOB}}$, based on time-reversal, shows that the set of deletions made by $\pi_{\mathrm{NOB}}$ can be calculated efficiently in linear time (with respect to the length of the time horizon).
\subsection{\texorpdfstring{Stack interpretation.}{Stack interpretation}}\label{secinterpstack} Suppose that the service discipline adopted by the server is last-in-first-out (LIFO), whereby it always fetches the task that arrived most recently. In other words, the queue works as a \emph{stack}. Suppose that we first simulate the stack without any deletion. It is easy to see that,\vadjust{\goodbreak} when the arrival rate $\lambda$ is greater than the service rate $1-p$, there will be a growing set of jobs at the bottom of the stack that will \emph{never} be processed. Label all such jobs as ``left-behind.'' For example, Figure~\ref{figwater1} shows the evolution of the queue over time, where all ``left-behind'' jobs are colored with a blue shade. One can then verify that the policy $\pi_{\mathrm{NOB}}$ given in Definition~\ref{defnob} is equivalent to deleting all jobs that are labeled ``left-behind,'' hence the name ``No Job Left Behind.'' Figure~\ref{figwater2} illustrates applying $\pi_{\mathrm{NOB}}$ to a sample path of $Q^0$, where the $i$th job to be deleted is precisely the $i$th job among all jobs that would never have been processed by the server under a LIFO policy. One advantage of the stack interpretation is that it makes obvious the fact that the deletion rate induced by $\pi_{\mathrm{NOB}}$ is equal to $\lambda-(1-p)<p$, as illustrated in the following lemma.
\begin{lem} For all $\lambda>1-p$, the following statements hold: \label{lemnobbasic}
\begin{longlist}[(2)]
\item[(1)] With probability one, there exists $T<\infty$, such that every service token generated after time $T$ is matched with some job. In other words, the server never idles after some finite time.
\item[(2)] Let $Q=D (Q^0,M^\Psi)$. We have
\begin{equation}
\limsup_{n\to\infty}\frac{1}{n}I \bigl(M^\Psi,n \bigr) \leq\frac{\lambda- (1-p )}{\lambda+1-p}\qquad\mbox{a.s.}, \label{eqIMnlim}
\end{equation}
which implies that $\pi_{\mathrm{NOB}}$ is feasible for all $p\in(0,1)$ and $\lambda\in(1-p,1)$.
\end{longlist}
\end{lem}
\begin{pf} See Appendix~\ref{applemnobbasic}. \end{pf}
\subsubsection{\texorpdfstring{``Anticipation'' vs. ``reaction.''}{``Anticipation'' vs. ``reaction''}}\label{secinterpantireac} Some geometric intuition from the stack interpretation shows that the power of $\pi_{\mathrm{NOB}}$ essentially stems from being highly \emph{anticipatory}. Looking at\vadjust{\goodbreak} Figure~\ref{figwater1}, one sees that the jobs that are ``left behind'' at the bottom of the stack correspond to those that arrive during the intervals where the initial sample path $Q^0$ is taking a consecutive ``upward hike.'' In other words, $\pi_{\mathrm{NOB}}$ begins to delete jobs when it anticipates that the arrivals are \emph{just about to} get intense. Similarly, a job in the stack will be ``served'' if $Q^0$ eventually curves down in the future, which corresponds to $\pi_{\mathrm{NOB}}$ ceasing to delete jobs as soon as it anticipates that the next few arrivals can be handled by the server alone. In sharp contrast is the nature of the optimal online policy, $\pi_{\mathrm{th}}^{L (p,\lambda)}$, which is by definition ``reactionary'' and begins to delete only when the current queue length has already reached a high level. The differences in the resulting sample paths are illustrated via simulations in Figure~\ref{figsamplepaths}. For example, as $Q^0$ continues to increase during the first 1000 time slots, $\pi_{\mathrm{NOB}}$ begins deleting immediately after $t=0$, while no deletion is made by $\pi_{\mathrm{th}}^{L (p,\lambda)}$ during this period.\looseness=1 As a rough analogy, the offline policy starts to delete \emph{before} the arrivals get busy, but the online policy can only delete \emph{after} the burst in arrival traffic has been realized, by which point it is already ``too late'' to fully contain the delay. This explains, to a certain extent, why $\pi_{\mathrm{NOB}}$ is capable of achieving ``delay collapse'' in the heavy-traffic regime (i.e., a finite limit of delay as $\lambda\to1$, Theorem~\ref{teooffline}), while the delay under even the best online policy diverges to infinity as $\lambda\to1$ (Theorem~\ref{teoonline}).
\subsection{\texorpdfstring{A linear-time algorithm for $\pi_{\mathrm{NOB}}$.}{A linear-time algorithm for pi NOB}}\label{seclinearalgo} While the offline deletion problem serves as a nice abstraction, it is impossible in practice to store information about the \emph{infinite} future, even if such information is available. A natural finite-horizon version of the offline deletion problem can be posed as follows: given the values of $Q^0$ over the first $N$ slots, where $N$ is finite, one would like to compute the set of deletions made by $\pi_{\mathrm{NOB}}$,
\[
M^\Psi_N = M^\Psi\cap\{1,\ldots, N\},
\]
assuming that $Q^0[n]> Q^0[N]$ for all $n> N$. Note that this problem also arises in computing the sites of deletions for the $\pi_{\mathrm{NOB}}^w$ policy, where one would replace $N$ with the length of the lookahead window, $w$. We have the following algorithm, which identifies all slots on which a new ``minimum'' (denoted by the variable $S$) is achieved in $Q^0$, when viewed in the \emph{reverse} order of time. It is easy to see that the running time of this algorithm scales linearly with respect to the length of the time horizon, $N$. Note that this is not the unique linear-time algorithm.
In fact, one can verify that the simulation procedure used in describing the stack interpretation of $\pi_{\mathrm{NOB}}$ (Section~\ref{secinterpstack}), which keeps track of which jobs would eventually be served, is itself a linear-time algorithm. However, the time-reversed version given here is arguably more intuitive and simpler to describe. \eject
\hrule\vspace{0.05in}
{\bf\emph{A linear-time algorithm for $\pi_{\mathrm{NOB}}$}}
\hrule\vspace{0.05in}
\begin{algorithmic}
\STATE$S\leftarrow Q^0[N]$ and $M^\Psi_N\leftarrow\varnothing$
\FOR{$n = N$ down to $1$}
\IF{$Q^0[n]<S$}
\STATE$M^\Psi_N\leftarrow M^\Psi_N\cup\{n+1\}$
\STATE$S \leftarrow Q^0[n]$
\ELSE
\STATE$M^\Psi_N\leftarrow M^\Psi_N$
\ENDIF
\ENDFOR
\RETURN$M^\Psi_N$
\hrule\vspace{0.05in}
\end{algorithmic}
\section{\texorpdfstring{Optimal online policies.}{Optimal online policies}}\label{secoptonline} Starting from this section and through Section~\ref{secfinitelookahead}, we present the proofs of the results stated in Section~\ref{secresults}. We begin by showing Theorem~\ref{teoonline}: we formulate the online problem as a Markov decision problem (MDP) with an average cost constraint, which enables us to use existing results to characterize the form of optimal policies. Once the family of threshold policies has been shown to achieve the optimal delay scaling in $\Pi_0$ under heavy traffic, the exact form of the scaling can be obtained in a fairly straightforward manner from the steady-state distribution of a truncated birth--death process.
\subsection{\texorpdfstring{A Markov decision problem formulation.}{A Markov decision problem formulation}}\label{sec5.1} Since both the arrival and service processes are Poisson, we can formulate the problem of finding an optimal policy in $\Pi_0$ as a continuous-time Markov decision problem with an average-cost constraint, as follows. Let $ \{Q(t)\dvtx t\in\mathbb{R}_{+} \}$ be the resulting continuous-time queue length process after applying some policy in $\Pi_0$ to $Q^0$. Let $T_k$ be the time of the $k$th upward jump in $Q$ and $\tau_k$ the length of the $k$th inter-jump interval, $\tau_k = T_{k}-T_{k-1}$. The task of a deletion policy, $\pi\in\Pi_0$, amounts to choosing, for each of the inter-jump intervals, a \emph{deletion action}, $a_k \in[0,1]$, where the value of $a_k$ corresponds to the probability that the next arrival during the current inter-jump interval will be admitted. Define $R$ and $K$ to be the \emph{reward} and \emph{cost} functions of an inter-jump interval, respectively,
\begin{eqnarray}
\label{eqreward}
R(Q_k,a_k,\tau_k) &=& -Q_k\cdot\tau_k,
\\
K(Q_k,a_k,\tau_k) &=& \lambda(1-a_k) \tau_k, \label{eqcost}
\end{eqnarray}
where $Q_k = Q(T_k)$. The corresponding MDP seeks to maximize the time-average reward
\begin{equation}
\widebar{R}_\pi= \liminf_{n \to\infty} \frac{\mathbb{E}_\pi (\sum_{k=1}^n R(Q_k,a_k,\tau_k) )}{\mathbb{E}_\pi(\sum_{k=1}^n \tau_k )}
\end{equation}
while obeying the average-cost constraint
\begin{equation}
\widebar{C}_\pi= \limsup_{n \to\infty} \frac{\mathbb{E}_\pi (\sum_{k=1}^n K(Q_k,a_k,\tau_k) )}{\mathbb{E}_\pi(\sum_{k=1}^n \tau_k )} \leq p. \label{eqavgcost}
\end{equation}
To see why this MDP solves our deletion problem, observe that $\widebar{R}_\pi$ is the negative of the time-average queue length, and $\widebar{C}_\pi$ is the time-average deletion rate.
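As a quick numerical illustration of this correspondence (a simulation sketch of our own, not part of the formal development; all function names are ours), both the negated reward and the cost of an $L$-threshold policy can be estimated from the embedded chain:
\begin{verbatim}
import random

def simulate_threshold(lam, p, L, n_events=10**6, seed=1):
    # Estimates the time-average queue length (the negated reward)
    # and the continuous-time deletion rate (the cost) under the
    # L-threshold policy pi_th^L.
    rng = random.Random(seed)
    q, area, deletions = 0, 0, 0
    for _ in range(n_events):
        if rng.random() < lam / (lam + 1 - p):   # an arrival
            if q >= L:
                deletions += 1                   # redirect (delete) it
            else:
                q += 1
        else:                                    # a service token
            q = max(q - 1, 0)
        area += q
    # Events occur at rate lam + 1 - p, so per-event counts are
    # converted to continuous-time rates by that factor.
    return area / n_events, (lam + 1 - p) * deletions / n_events
\end{verbatim}
Running the sketch with $\lambda$ close to $1$ and $L$ set to the smallest feasible threshold should reproduce the logarithmic growth of the average queue length asserted in Theorem~\ref{teoonline}, with the second output staying below $p$.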
It is well known that the type of constrained MDP described above admits an optimal policy that is stationary \cite{AS91}, which means that the action $a_k$ depends solely on the current state, $Q_k$, and is independent of the time index $k$. Therefore, it suffices to describe $\pi$ using a sequence, $ \{b_q\dvtx q\in\mathbb{Z}_{+} \}$, such that $ a_k = b_q$ whenever $Q_k=q$. Moreover, when the state space is finite,\footnote{This corresponds to a finite buffer size in our problem, where one can assume that the next arrival is automatically deleted when the buffer is full, independent of the value of $a_k$.} stronger characterizations of the $b_q$'s have been obtained for a family of reward and cost functions under certain regularity assumptions (Hypotheses 2.7, 3.1 and 4.1 in \cite{BR86}), which ours do satisfy [equations~(\ref{eqreward}) and (\ref{eqcost})]. Theorem~\ref{teoonline} will be proved using the following known result (adapted from Theorem 4.4 in \cite{BR86}):
\begin{lem} \label{lemquasithresh} Fix $p$ and $\lambda$, and let the buffer size $B$ be finite. There exists an optimal stationary policy, $ \{b^*_q \}$, of the form
\[
b^*_q=\cases{ 1, &\quad$q< L^*-1$, \vspace*{3pt}\cr \xi, &\quad$q=L^*-1$, \vspace*{3pt}\cr 0, & \quad$q\geq L^*$}
\]
for some $L^*\in\mathbb{Z}_{+}$ and $\xi\in[0,1]$.
\end{lem}
\subsection{\texorpdfstring{Proof of Theorem \protect\ref{teoonline}.}{Proof of Theorem 1}}\label{secpfthmonline} In words, Lemma~\ref{lemquasithresh} states that the optimal policy admits a ``quasi-threshold'' form: it deletes the next arrival when $Q(t)\geq L^*$, admits when $Q(t)<L^*-1$, and admits with probability $\xi$ when $Q(t)=L^*-1$. Suppose, for the moment, that the statements of Lemma~\ref{lemquasithresh} also hold when the buffer size is infinite, an assumption to be justified by the end of the proof. Denote by $\pi^*_p$ the stationary optimal policy associated with $ \{b^*_q \}$ when the constraint on the average deletion rate is $p$ [equation~(\ref{eqavgcost})]. The evolution of $Q(t)$ under $\pi^*_p$ is that of a birth--death process truncated at state $L^*$, with the transition rates given in Figure~\ref{figbdchain}, and the time-average queue length is equal to the expected queue length in steady state. Using standard calculations involving the steady-state distribution of the induced Markov process, it is not difficult to verify that
\begin{equation}
C\bigl(p,\lambda,\pi_{\mathrm{th}}^{L^* -1}\bigr)\leq C\bigl(p,\lambda, \pi^*_p\bigr) \leq C\bigl(p,\lambda,\pi_{\mathrm{th}}^{L^*} \bigr), \label{eqCCC1}
\end{equation}
where $L^*$ is defined as in Lemma~\ref{lemquasithresh}, and $C(p,\lambda,\pi)$ is the time-average queue length under policy $\pi$, defined in equation~(\ref{eqC}).
\begin{figure}
\includegraphics{973f09.eps}
\caption{The truncated birth--death process induced by $\pi^*_p$.}\label{figbdchain}
\end{figure}
Denote by $ \{\mu^L_i\dvtx i\in\mathbb{N} \}$ the steady-state probabilities of the queue length being equal to $i$, under a threshold policy $\pi_{\mathrm{th}}^{L}$. Assuming $\lambda\neq1-p$, standard calculations using the balancing equations yield
\begin{equation}
\mu^L_i = \biggl(\frac{\lambda}{1-p}\biggr)^i\cdot\biggl(\frac{1-(\lambda/(1-p))}{1- (\lambda/(1-p))^{L+1}} \biggr)\qquad\forall1\leq i \leq L \label{eqmui}
\end{equation}
and $\mu_i^L=0$ for all $i \geq L+1$.
The time-average queue length is given by
\begin{eqnarray}
C\bigl(p,\lambda, \pi_{\mathrm{th}}^{L}\bigr) &=& \sum_{i=1}^L i\cdot\mu^L_i
\nonumber
\\[-8pt]
\\[-8pt]
&=&\frac{\theta}{ (\theta-1 ) (\theta^{L+1}-1 )}\cdot\bigl[1-\theta^L+L\theta^L( \theta-1) \bigr],
\nonumber
\end{eqnarray}
where $\theta= \frac{\lambda}{1-p}$. Note that when $\lambda>1-p$, $\mu_i^L$ is decreasing with respect to $L$ for all $i\in\{0,1,\ldots,L \}$ [equation~(\ref{eqmui})], which implies that the time-average queue length is monotonically increasing in $L$, that is,
\begin{eqnarray}
\label{eqCmonL}
&&C\bigl(p,\lambda, \pi_{\mathrm{th}}^{L+1}\bigr) - C \bigl(p,\lambda, \pi_{\mathrm{th}}^{L}\bigr)
\nonumber
\\
&&\qquad= (L+1)\cdot\mu^{L+1}_{L+1} + \sum_{i=0}^{L} i\cdot\bigl(\mu^{L+1}_i - \mu^L_i\bigr)
\nonumber
\\
&&\qquad\geq(L+1)\cdot\mu^{L+1}_{L+1} + L \cdot\Biggl(\sum_{i=0}^{L} \mu^{L+1}_i - \mu^L_i \Biggr)
\\
&&\qquad= (L+1)\cdot\mu^{L+1}_{L+1} + L\cdot\bigl(1-\mu_{L+1}^{L+1}-1 \bigr)
\nonumber
\\
&&\qquad= \mu^{L+1}_{L+1}>0.
\nonumber
\end{eqnarray}
It is also easy to see that, fixing $p$, we have $\theta>1+\delta$ for all $\lambda$ sufficiently close to $1$, where $\delta>0$ is a fixed constant, and hence
\begin{equation}
\qquad C\bigl(p,\lambda, \pi_{\mathrm{th}}^{L}\bigr) = \biggl( \frac{\theta^{L+1}}{\theta^{L+1}-1} \biggr)L-\frac{\theta}{\theta-1}\cdot\frac{\theta^{L}-1}{\theta^{L+1}-1} \sim L\qquad\mbox{as $L\to\infty$.} \label{eqCthresh}
\end{equation}
Since deletions only occur when $Q(t)$ is in state $L$, from equation~(\ref{eqmui}), the average rate of deletions in continuous time under $\pi_{\mathrm{th}}^{L}$ is given by
\begin{equation}
\quad r_d \bigl(p,\lambda,\pi_{\mathrm{th}}^{L} \bigr) = \lambda\cdot\mu^L_L = \lambda\cdot\biggl(\frac{\lambda}{1-p} \biggr)^L\cdot\biggl(\frac{1-(\lambda/(1-p))}{1- (\lambda/(1-p))^{L+1}} \biggr). \label{eqrdthr}
\end{equation}
Define
\begin{equation}
L(x,\lambda)=\min\bigl\{L \in\mathbb{Z}_{+}\dvtx r_d \bigl(p,\lambda,\pi_{\mathrm{th}}^{L} \bigr) \leq x \bigr\}, \label{eqLlamp}
\end{equation}
that is, $L(x,\lambda)$ is the smallest $L$ for which $\pi_{\mathrm{th}}^{L}$ remains feasible, given a deletion rate constraint of $x$. Using equations~(\ref{eqrdthr}) and (\ref{eqLlamp}) to solve for $L(p,\lambda)$, we obtain, after some algebra,
\begin{equation}\label{eqLlamp2}
L (p,\lambda) = \biggl\lceil\log_{\lambda/(1-p)}\frac{p}{1-\lambda} \biggr\rceil \sim\log_{1/(1-p)}\frac{1}{1-\lambda}\qquad\mbox{as } \lambda\to1
\end{equation}
and, by combining equation~(\ref{eqLlamp2}) and equation~(\ref{eqCthresh}) with $L=L (p,\lambda)$, we have
\begin{equation}
C\bigl(p,\lambda, \pi_{\mathrm{th}}^{L(p,\lambda)}\bigr) \sim L(p,\lambda) \sim \log_{1/(1-p)}\frac{1}{1-\lambda}\qquad\mbox{as } \lambda\to1. \label{eqCthropt}
\end{equation}
By equations~(\ref{eqCmonL}) and (\ref{eqLlamp}), we know that $\pi_{\mathrm{th}}^{L(p,\lambda)}$ achieves the minimum average queue length among all feasible threshold policies. By equation~(\ref{eqCCC1}), we must have that
\begin{equation}
C \bigl(p,\lambda,\pi_{\mathrm{th}}^{L (p,\lambda) -1} \bigr)\leq C \bigl(p,\lambda, \pi^*_p \bigr) \leq C \bigl(p,\lambda,\pi_{\mathrm{th}}^{L(p,\lambda)} \bigr). \label{eqCCC2}
\end{equation}
Since Lemma~\ref{lemquasithresh} only applies when $B<\infty$, equation~(\ref{eqCCC2}) holds whenever the buffer size, $B$, is greater than $L(p,\lambda)$, but finite. We next extend equation~(\ref{eqCCC2}) to the case of $B=\infty$.
Denote by $\nu_p^*$ a stationary optimal policy, when $B=\infty$ and the constraint on the average deletion rate is equal to $p$ [equation~(\ref{eqavgcost})]. The upper bound on $C(p,\lambda,\pi^*_p)$ in equation~(\ref{eqCCC2}) automatically holds for $C(p,\lambda,\nu^*_p)$, since $C(p,\lambda,\pi_{\mathrm{th}}^{L(p,\lambda)})$ is still feasible when $B=\infty$. It remains to show a lower bound of the form
\begin{equation}
C\bigl(p,\lambda,\nu^*_p\bigr)\geq C \bigl(p,\lambda, \pi_{\mathrm{th}}^{L (p,\lambda) -2} \bigr), \label{eqClower}
\end{equation}
when\vspace*{1pt} $B=\infty$, which, together with the upper bound, implies that the scaling of $ C(p,\lambda, \pi_{\mathrm{th}}^{L(p,\lambda)})$ [equation (\ref{eqCthropt})] carries over to $\nu^*_p$,
\begin{equation}
\qquad C \bigl(p,\lambda, \nu^*_p \bigr) \sim C\bigl(p,\lambda, \pi_{\mathrm{th}}^{L(p,\lambda)}\bigr) \sim\log_{1/(1-p)}\frac{1}{1-\lambda}\qquad\mbox{as } \lambda\to1,
\end{equation}
thus proving Theorem~\ref{teoonline}. To show equation~(\ref{eqClower}), we will use a straightforward truncation argument that relates the performance of an optimal policy under $B=\infty$ to the case of $B<\infty$. Denote by $ \{b^*_q \}$ the admission probabilities of a stationary optimal policy, $\nu_p^*$, and by $ \{b^*_q(B') \}$ the admission probabilities for a truncated version, $\nu_p^*(B')$, with
\[
b^*_q\bigl(B'\bigr)=\mathbb{I} \bigl(q\leq B' \bigr)\cdot b_q^*
\]
for all $q\geq0$. Since $\nu_p^*$ is optimal and yields the minimum average queue length, it is without loss of generality to assume that the Markov process for $Q(t)$ induced by $\nu_p^*$ is positive recurrent. Denoting by $ \{\mu_i^* \}$ and $ \{\mu^*_i(B') \}$ the steady-state probabilities of the queue length being equal to $i$ under $\nu_p^*$ and $\nu_p^*(B')$, respectively, it follows from the positive recurrence of $Q(t)$ under $\nu_p^*$, and some algebra, that
\begin{equation}
\lim_{B'\to\infty} \mu^{*}_i \bigl(B'\bigr) = \mu^*_i \label{eqptwmucon}
\end{equation}
for all $i\in\mathbb{Z}_{+}$ and
\begin{equation}
\lim_{B'\to\infty} C \bigl(p,\lambda,\nu_p^* \bigl(B'\bigr) \bigr) = C \bigl(p,\lambda,\nu_p^* \bigr).
\end{equation}
By equation (\ref{eqptwmucon}) and the fact that $b_i^*(B')=b^*_i$ for all $0\leq i \leq B'$, we have that\footnote{Note that, in general, $r_d (p,\lambda,\nu_p^*(B') )$ could be greater than $p$, for any finite $B'$.}
\begin{eqnarray}
\lim_{B'\to\infty}r_d \bigl(p,\lambda, \nu_p^*\bigl(B'\bigr) \bigr) &=& \lim_{B'\to\infty}\lambda\sum_{i=0}^\infty \mu^*_i\bigl(B'\bigr)\cdot\bigl(1-b_i^* \bigl(B'\bigr) \bigr)
\nonumber
\\
&=& r_d \bigl(p,\lambda, \nu_p^* \bigr)
\\
&\leq& p.\nonumber
\end{eqnarray}
It is not difficult to verify, from the definition of $L(p,\lambda)$ [equation~(\ref{eqLlamp})], that
\[
\lim_{\delta\to0} L(p+\delta,\lambda) \geq L(p,\lambda)-1,
\]
for all $p,\lambda$. For all $\delta>0$, choose $B'$ to be sufficiently large, so that
\begin{eqnarray}
\label{eqCtoCB}
C \bigl(p,\lambda,\nu_p^*\bigl(B' \bigr) \bigr) &\leq& C \bigl(p,\lambda,\nu_p^* \bigr)+\delta,
\\
L \bigl(r_d \bigl(p,\lambda,\nu_p^* \bigl(B'\bigr) \bigr),\lambda\bigr) &\geq& L(p,\lambda)-1. \label{eqrdeps}
\end{eqnarray}
Let $p'=r_d (p,\lambda,\nu_p^*(B') )$.
Since $b_i^*(B')=0$ for all $i\geq B'+1$, by equation~(\ref{eqrdeps}) we have
\begin{equation}
C \bigl(p,\lambda,\nu_p^*\bigl(B'\bigr) \bigr) \geq C \bigl(p,\lambda,\pi^*_{p'} \bigr), \label{eqBinftoBfin}
\end{equation}
where $\pi^*_{p'}$ is the optimal stationary policy given in Lemma~\ref{lemquasithresh} under any finite buffer size $B>B'$. We have
\begin{eqnarray}\label{eqClower2}
&& C \bigl(p,\lambda,\nu_p^* \bigr)+\delta
\nonumber
\\
&&\qquad \stackrel{\mathrm{(a)}} {\geq} C \bigl(p,\lambda,\nu_p^* \bigl(B'\bigr) \bigr)
\nonumber
\\
&&\qquad\stackrel{\mathrm{(b)}} {\geq} C \bigl(p,\lambda,\pi^*_{p'} \bigr)
\\
&&\qquad \stackrel{\mathrm{(c)}} {\geq} C \bigl(p,\lambda,\pi_{\mathrm{th}}^{L(p',\lambda)-1}\bigr)\nonumber
\\
&&\qquad \stackrel{\mathrm{(d)}} {\geq} C \bigl(p,\lambda,\pi_{\mathrm{th}}^{L(p,\lambda)-2}\bigr), \nonumber
\end{eqnarray}
where the inequalities (a) through (d) follow from equations~(\ref{eqCtoCB}), (\ref{eqBinftoBfin}), (\ref{eqCCC2}) and (\ref{eqrdeps}), respectively. Since equation~(\ref{eqClower2}) holds for all $\delta>0$, we have proven equation~(\ref{eqClower}). This completes the proof of Theorem~\ref{teoonline}.
\section{\texorpdfstring{Optimal offline policies.}{Optimal offline policies}}\label{secoffline} We prove Theorem~\ref{teooffline} in this section, which is completed in two parts. In the first part (Section~\ref{secperformoffline}), we give a full characterization of the sample path resulting from applying $\pi_{\mathrm{NOB}}$ (Proposition~\ref{propQRW}), which turns out to be a \emph{recurrent} random walk. This allows us to obtain the steady-state distribution of the queue length under $\pi_{\mathrm{NOB}}$ in closed form. From this, the expected queue length, which is equal to the time-average queue length, $C (p,\lambda, \pi_{\mathrm{NOB}} )$, can be easily derived and is shown to be $\frac{1-p }{\lambda-(1-p)}$. Several side results obtained along the way will also be used in subsequent sections. The second part of the proof (Section~\ref{secoptmalityofflineproof}) focuses on showing the heavy-traffic optimality of $\pi_{\mathrm{NOB}}$ among the class of all feasible offline policies, namely, that $\lim_{\lambda\to1} C (p,\lambda, \pi_{\mathrm{NOB}} )=\lim_{\lambda\to1} C^*_{\Pi_\infty} (p,\lambda)$, which, together with the first part, proves Theorem~\ref{teooffline} (Section~\ref{secpfthmoffline}). The optimality result is proved using a sample-path-based analysis, relating the queue length sample path that results from $\pi_{\mathrm{NOB}}$ to that of a greedy deletion rule, which has an optimal deletion performance over a \emph{finite} time horizon, $ \{1,\ldots, N \}$, given any initial sample path. We then show that the discrepancy between $\pi_{\mathrm{NOB}}$ and the greedy policy, in terms of the resulting time-average queue length after deletion, diminishes almost surely as $N\to\infty$ and $\lambda\to1$ (with the two limits taken in this order). This establishes the heavy-traffic optimality of $\pi_{\mathrm{NOB}}$.
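Before entering the proofs, we note that these claims are easy to test numerically. The sketch below (ours; it reuses the \texttt{sample\_initial\_path} helper from the sketch in Section~\ref{sec2.4}, and combines the reverse-scan algorithm of Section~\ref{seclinearalgo} with a replay of the multi-point deletion map) applies $\pi_{\mathrm{NOB}}$ to a long simulated initial sample path and compares the post-deletion average queue length with the closed-form value $\frac{1-p}{\lambda-(1-p)}$, up to a boundary effect at the end of the finite horizon.
\begin{verbatim}
def nob_deletions(q0):
    # Reverse scan of Section 4.2: slot n + 1 is a deletion whenever
    # Q^0[n] falls strictly below the running minimum S of all later
    # values, i.e., the walk never returns below the deleted arrival.
    N = len(q0) - 1
    s, M = q0[N], set()
    for n in range(N, 0, -1):
        if q0[n] < s:
            M.add(n + 1)
            s = q0[n]
    return M

def apply_deletions(q0, M):
    # Multi-point deletion map D(Q^0, M): replay the event sequence,
    # skipping deleted arrivals; tokens at an empty queue are wasted.
    q = [0]
    for n in range(1, len(q0)):
        if q0[n] > q0[n - 1]:          # an arrival occurred at slot n
            q.append(q[-1] if n in M else q[-1] + 1)
        else:                          # a service token
            q.append(max(q[-1] - 1, 0))
    return q

lam, p = 0.995, 0.1
q0, _ = sample_initial_path(lam, p, n_events=10**6)
q = apply_deletions(q0, nob_deletions(q0))
print(sum(q) / len(q), (1 - p) / (lam - (1 - p)))  # should be close
\end{verbatim}
The replay in \texttt{apply\_deletions} is equivalent to the recursive point-wise maps of Definition~\ref{defdelMap}: removing an arrival lowers the path by one until the original path hits zero, after which the two paths coincide, and the wasted-token rule produces exactly this behavior.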
\subsection{\texorpdfstring{Additional notation.}{Additional notation}}\label{sec6.1} Define $\widetilde{Q}$ as the resulting queue length process after applying $\pi_{\mathrm{NOB}}$,
\begin{equation}
\widetilde{Q}=D \bigl(Q^0,M^\Psi\bigr), \label{eqqtil}
\end{equation}
and $Q$ as the shifted version of $\widetilde{Q}$, so that $Q$ starts from the first deletion in~$\widetilde{Q}$,\looseness=-1
\begin{equation}
Q[n] = \widetilde{Q}\bigl[n +m^\Psi_1\bigr],\qquad n \in\mathbb{Z}_{+}.\vadjust{\goodbreak}
\end{equation}\looseness=0
We say that $B = \{l,\ldots,u \} \subset\mathbb{N}$ is a \textit{busy period} of $Q$ if
\begin{equation}
\qquad Q[l-1]=Q[u]=0\quad\mbox{and}\quad Q[n]>0\qquad\mbox{for all }n\in\{l,\ldots,u-1 \}. \label{eqbusydef}
\end{equation}
We may write $B_j = \{l_j,\ldots, u_j\}$ to mean the $j$th busy period of $Q$. An example of a busy period is illustrated in Figure~\ref{figwater2}. Finally, we will refer to the set of slots between two adjacent deletions in $Q$ (note the offset of $m^\Psi_1$),
\begin{equation}
E_i = \bigl\lbrace m^\Psi_i-m^\Psi_1,m^\Psi_i+1-m^\Psi_1,\ldots,m^\Psi_{i+1}-1-m^\Psi_1 \bigr\rbrace, \label{eqEjdef}
\end{equation}
as the $i$th \textit{deletion epoch}.
\subsection{\texorpdfstring{Performance of the no-job-left-behind policy.}{Performance of the no-job-left-behind policy}}\label{secperformoffline} For simplicity of notation, throughout this section, we will denote by $M= \{m_i\dvtx i \in\mathbb{N} \}$ the deletion sequence generated by applying $\pi_{\mathrm{NOB}}$ to $Q^0$, when there is no ambiguity (as opposed to using $M^\Psi$ and $m^\Psi_i$). The following lemma summarizes some important properties of $Q$ which will be used repeatedly.
\begin{lem} \label{lemqmi0} Suppose $1>\lambda>1-p>0$. The following hold with probability one:
\begin{longlist}[(2)]
\item[(1)] For all $n\in\mathbb{N}$, we have $Q[n] = Q^0[n+m_1] - I(M,n+m_1)$.
\item[(2)] For all $i\in\mathbb{N}$, we have $n=m_i-m_1$ if and only if
\begin{equation}
Q[n]=Q[n-1]=0,
\end{equation}
with the convention that $Q[-1]= 0$. In other words, the appearance of two consecutive zeros in $Q$ is equivalent to having a deletion on the second zero.
\item[(3)] $Q[n]\in\mathbb{Z}_{+}$ for all $n\in\mathbb{Z}_{+}$.
\end{longlist}
\end{lem}
\begin{pf} See Appendix~\ref{applemqmi0}. \end{pf}
The next proposition is the main result of this subsection. It specifies the probability law that governs the evolution of $Q$.
\begin{prop} \label{propQRW} $ \{Q[n]\dvtx n\in\mathbb{Z}_{+} \}$ is a random walk on $\mathbb{Z}_{+}$, with $Q[0]=0$, and, for all $n\in\mathbb{N}$ and $x_1,x_2\in\mathbb{Z}_{+}$,
\[
\mathbb{P} \bigl(Q[n+1]=x_2 \mid Q[n]=x_1 \bigr) = \cases{ \displaystyle\frac{1-p}{\lambda+1-p}, &\quad$x_2-x_1=1$, \vspace*{5pt}\cr \displaystyle\frac{\lambda}{\lambda+1-p}, &\quad$x_2-x_1=-1$, \vspace*{5pt}\cr 0, &\quad otherwise,}
\]
if $x_1>0$ and
\[
\mathbb{P} \bigl(Q[n+1]=x_2 \mid Q[n]=x_1 \bigr) = \cases{ \displaystyle\frac{1-p}{\lambda+1-p}, &\quad$x_2-x_1=1$, \vspace*{5pt}\cr \displaystyle\frac{\lambda}{\lambda+1-p}, &\quad$x_2-x_1=0$, \vspace*{5pt}\cr 0, &\quad otherwise,}
\]
if $x_1=0$.
\end{prop}
\begin{pf} For a sequence $ \{X[n]\dvtx n\in\mathbb{N} \}$ and $s,t\in\mathbb{N}$, $s\leq t$, we will use the shorthand
\[
X_{s}^t= \bigl\{X[s],\ldots, X[t] \bigr\}.
\]
Fix $n\in\mathbb{N}$, and a sequence $ (q_1,\ldots,q_n )\subset\mathbb{Z}_{+}^n$.
We have
\begin{eqnarray}\label{eqQcond0}
&&\mathbb{P} \bigl(Q[n]=q[n] | Q_1^{n-1}=q_1^{n-1} \bigr)\nonumber
\\
&&\qquad = \sum_{k=1}^{n} \mathop{\sum_{t_1,\ldots,t_k,}}_{t_k\leq n-1+t_1} \mathbb{P} \bigl(Q[n]=q[n] | Q_1^{n-1}=q_1^{n-1}, m_{1}^k = t_1^k, m_{k+1}\geq n +t_1 \bigr)\hspace*{-20pt}
\\
&&\hspace*{88pt}{}\times \mathbb{P} \bigl(m_{1}^k = t_1^k, m_{k+1}\geq n +t_1 | Q_1^{n-1}=q_1^{n-1} \bigr). \nonumber
\end{eqnarray}
Restricting to the values of the $t_i$'s and $q[i]$'s under which the summand is nonzero, the first factor in the summand can be written as
\begin{eqnarray}\label{eqQcond1}
\qquad&& \mathbb{P} \bigl(Q[n]=q[n] | Q_1^{n-1}=q_1^{n-1}, m_{1}^k = t_1^k, m_{k+1}\geq n+t_1 \bigr)
\nonumber
\\
&&\qquad = \mathbb{P} \bigl(\widetilde{Q}[n+m_1]=q[n] | {\widetilde{Q}}_{m_1+1}^{m_1+n-1} =q_1^{n-1}, m_{1}^k = t_1^k, m_{k+1}\geq n+t_1 \bigr)\nonumber
\\
&&\qquad \stackrel{\mathrm{(a)}} {=} \mathbb{P} \Bigl(Q^0[n+t_1]=q[n]+k | Q^0[s+t_1]=q[s]+I \bigl( \{t_i \}_{i=1}^k,s+t_1 \bigr),
\nonumber\\[-8pt]\\[-8pt]
&&\hspace*{153pt}\forall1\leq s \leq n-1 \mbox{ and } \min_{r\geq n+t_1} Q^0[r] \geq k \Bigr)
\nonumber
\\
&&\qquad \stackrel{\mathrm{(b)}} {=} \mathbb{P} \Bigl(Q^0[n+t_1]=q[n]+k | Q^0[n-1+t_1]=q[n-1]+k\nonumber
\\
&&\hspace*{190.5pt} \mbox{and } \min_{r\geq n+t_1} Q^0[r] \geq k \Bigr), \nonumber
\end{eqnarray}
where $\widetilde{Q}$ was defined in equation~(\ref{eqqtil}). Step~(a) follows from Lemma~\ref{lemqmi0} and the fact that $t_k\leq n-1+t_1$, and (b) from the Markov property of $Q^0$ and the fact that the events $ \{\min_{r\geq n+t_1} Q^0[r] \geq k \}$, $ \{Q^0[n+t_1]=q[n]+k \}$ and their intersection depend only on the values of $ \{Q^0[s]\dvtx s\geq n+t_1 \}$, and are hence independent of $ \{Q^0[s]\dvtx 1\leq s \leq n-2+t_1 \}$ conditional on the value of $Q^0[t_1+n-1]$. Since the process $Q$ lives in $\mathbb{Z}_{+}$ (Lemma~\ref{lemqmi0}), it suffices to consider the case of $q[n]=q[n-1]+1$, and show that
\begin{eqnarray}
&& \mathbb{P} \Bigl(Q^0[n+t_1]=q[n-1]+1+k | Q^0[n-1+t_1]=q[n-1]+k \nonumber
\\
&&\hspace*{192pt} \mbox{and }\min_{r\geq n+t_1} Q^0[r] \geq k \Bigr)
\\
&&\qquad = \frac{1-p}{\lambda+1-p}\nonumber
\end{eqnarray}
for all $q[n-1]\in\mathbb{Z}_{+}$. Since $Q[m_i-m_1]=Q[m_i-1-m_1]=0$ for all $i$ (Lemma~\ref{lemqmi0}), the fact that $q[n]=q[n-1]+1>0$ implies that
\begin{equation}
n < m_{k+1}-1-m_1. \label{eqn1}
\end{equation}
Moreover, since $Q^0[m_{k+1}-1]=k$ and $n< m_{k+1}-1-m_1$, we have that
\begin{equation}
q[n]>0\qquad\mbox{implies } Q^0[t]=k\qquad\mbox{for some } t\geq n+1+m_1. \label{eqAcondition}
\end{equation}
We consider two cases, depending on the value of $q[n-1]$.
\begin{longlist}
\item[{\textit{Case} 1: $q[n-1]>0$.}] Using the same argument that led to equation~(\ref{eqAcondition}), we have that
\begin{equation}
q[n-1]>0\qquad\mbox{implies } Q^0[t]=k\qquad\mbox{for some } t\geq n +m_1. \label{eqn2}
\end{equation}
It is important to note that, despite the similarity of their conclusions, equations~(\ref{eqAcondition}) and (\ref{eqn2}) are different in their assumptions (i.e., $q[n]$ versus $q[n-1]$).
We have \begin{eqnarray}\label{eqQcond2} \qquad &&\mathbb{P} \Bigl(Q^0[n+t_1]=q[n-1]+1+k | Q^0[n-1+t_1]=q[n-1]+k\nonumber \\ &&\hspace*{193pt}\mbox{and } \min _{r\geq n+t_1} Q^0[r] \geq k \Bigr)\nonumber \\ &&\qquad \stackrel{\mathrm{(a)}} {=}\mathbb{P} \Bigl(Q^0[n+t_1]=q[n-1]+1+k | Q^0[n-1+t_1]=q[n-1]+k \nonumber\\[-8pt]\\[-8pt] &&\hspace*{227pt}\mbox{and } \min _{r\geq n+t_1} Q^0[r] = k \Bigr)\nonumber \\ &&\qquad \stackrel{\mathrm{(b)}} {=}\mathbb{P} \Bigl(Q^0[2]=q[n-1]+1 | Q^0[1]=q[n-1]\mbox{ and } \min_{r\geq2}Q^0[r] = 0 \Bigr)\nonumber \\ &&\qquad \stackrel{\mathrm{(c)}} {=} \frac{1-p}{\lambda+1-p}, \nonumber \end{eqnarray} where (a) follows from equation~(\ref{eqn2}), (b) from the stationarity and space-homogeneity of the Markov chain $Q^0$ and (c) from the following well-known property of a transient random walk conditioned on returning to zero: \end{longlist} \begin{lem} \label{lemdualRW} Let $ \{X[n]\dvtx n\in\mathbb{N} \}$ be a random walk on $\mathbb{Z}_{+}$, such that for all $x_1,x_2\in\mathbb{Z}_{+}$ and $n\in\mathbb{N}$, \[ \mathbb{P} \bigl(X[n+1]=x_2 \mid X[n]=x_1 \bigr) = \cases{ q, &\quad$x_2-x_1=1$, \vspace*{5pt}\cr 1-q, & \quad$x_2-x_1=-1$, \vspace*{5pt}\cr 0, &\quad otherwise,} \] if $x_1>0$ and \[ \mathbb{P} \bigl(X[n+1]=x_2 \mid X[n]=x_1 \bigr) = \cases{ q, &\quad$x_2-x_1=1$, \vspace*{5pt}\cr 1-q, & \quad$x_2-x_1=0$, \vspace*{5pt}\cr 0, &\quad otherwise,} \] if $x_1=0$, where $q\in(\frac{1}{2},1 )$. Then, for all $x_1\in\mathbb{N}$, $x_2\in\mathbb{Z}_{+}$ and $n\in\mathbb{N}$, \[ \mathbb{P} \Bigl(X[n+1]=x_2 | X[n]=x_1, \min _{r \geq n+1} X[r] = 0 \Bigr) = \cases{ 1-q, &\quad$x_2-x_1=1$, \vspace*{5pt}\cr q, &\quad$x_2-x_1=-1$, \vspace*{5pt}\cr 0, &\quad otherwise.} \] In other words, conditional on the eventual return to $0$ and before it happens, a transient random walk started from a positive state obeys the same probability law as a random walk with the reversed one-step transition probabilities. \end{lem} \begin{pf} See Appendix~\ref{applemdualRW}. \end{pf} \begin{longlist} \item[{\textit{Case} 2: $q[n-1]=0$.}] We have \begin{eqnarray}\label{eqQcond3} \qquad &&\mathbb{P} \Bigl(Q^0[n+t_1]=q[n-1]+1+k | Q^0[n-1+t_1]=q[n-1]+k\nonumber \\ &&\hspace*{190pt} \mbox{ and } \min _{r\geq n+t_1} Q^0[r] \geq k \Bigr)\nonumber \\ &&\qquad \stackrel{\mathrm{(a)}} {=}\mathbb{P} \Bigl(Q^0[n+t_1]=1+k\nonumber \mbox{ and } \min_{r> n+t_1} Q^0[r] = k | Q^0[n-1+t_1]=k \nonumber\\[-8pt]\\[-8pt] &&\hspace*{220pt} \mbox{ and } \min_{r\geq n+t_1} Q^0[r] \geq k \Bigr)\nonumber \\ &&\qquad \stackrel{\mathrm{(b)}} {=} \mathbb{P} \Bigl(Q^0[2]=2 \mbox{ and } \min _{r>2} Q^0[r] = 1 | Q^0[1]=1 \mbox{ and } \min_{r\geq2} Q^0[r] \geq1 \Bigr) \nonumber \\ &&\qquad \stackrel{\triangle} {=} x, \nonumber \end{eqnarray} where (a) follows from equation~(\ref{eqAcondition}) [note its difference with equation~(\ref{eqn2})], and (b) from the stationarity and space-homogeneity of $Q^0$, and the assumption that $k\geq1$ [equation~(\ref{eqQcond0})]. 
Since equations~(\ref{eqQcond2}) and (\ref{eqQcond3}) hold for all $x_1,k\in\mathbb{Z}_{+}$ and $n\geq m_1+1$, by equation~(\ref {eqQcond0}), we have that \begin{eqnarray}\label{eqtransit1} && \mathbb{P} \bigl(Q[n]=q[n] | Q_1^{n-1}=q_1^{n-1} \bigr) \nonumber \\ &&\qquad = \cases{\displaystyle\frac{1-p}{\lambda+1-p}, &\quad$q[n]-q[n-1]=1$, \vspace*{5pt}\cr \displaystyle\frac{\lambda}{\lambda+1-p}, &\quad$q[n]-q[n-1]=-1$, \vspace*{5pt}\cr 0, &\quad otherwise,} \end{eqnarray} if $q[n-1]>0$ and \begin{eqnarray} \qquad\mathbb{P} \bigl(Q[n]=q[n] | Q_1^{n-1}=q_1^{n-1} \bigr) = \cases{x, &\quad$q[n]-q[n-1]=1$, \vspace*{5pt}\cr 1-x, &\quad$q[n]-q[n-1]=0$, \vspace*{5pt}\cr 0, & \quad otherwise,}\label{eqtransit2} \end{eqnarray} if $q[n-1]=0$, where $x$ represents the value of the probability in equation~(\ref{eqQcond3}). Clearly, $Q[0]=Q^0[m_1]-1=0$. We next show that $x$ is indeed equal to $\frac{1-p}{\lambda+1-p}$, which will have proven Proposition~\ref{propQRW}. One can in principle obtain the value of $x$ by directly computing the probability in line (b) of equation~(\ref{eqQcond3}), which can be quite difficult to do. Instead, we will use an indirect approach that turns out to be computationally much simpler: we will relate $x$ to the rate of deletion of $\pi_{\mathrm{NOB}}$ using renewal theory, and then solve for $x$. As a by-product of this approach, we will also get a better understanding of an important regenerative structure of $\pi_{\mathrm{NOB}}$ [equation~(\ref{eqlimfrac1})], which will be useful for the analysis in subsequent sections. By equations~(\ref{eqtransit1}) and (\ref{eqtransit2}), $Q$ is a positive recurrent Markov chain, and $Q[n]$ converges to a well-defined steady-state distribution, $Q[\infty]$, as $n\to\infty$. Letting $\pi_i = \mathbb{P} (Q[\infty]=i )$, it is easy to verify via the balance equations that \begin{equation} \pi_i = \pi_0\frac{x(\lambda+1-p)}{\lambda}\cdot\biggl( \frac {1-p}{\lambda} \biggr)^{i-1}\qquad\forall i\geq1 \end{equation} and, since $\sum_{i\geq0}\pi_i=1$, we obtain \begin{equation} \pi_0 = \frac{1}{1+x\cdot (\lambda+1-p)/(\lambda-(1-p))}. \end{equation} Since the chain $Q$ is also irreducible, the limiting fraction of time that $Q$ spends in state 0 is therefore equal to $\pi_0$, \begin{equation} \qquad \lim_{n\to\infty} \frac{1}{n}\sum _{t=1}^n \mathbb{I} \bigl(Q[t]=0 \bigr) = \pi_0= \frac{1}{1+x\cdot(\lambda+1-p)/(\lambda-(1-p))}. \label{eqlimzero} \end{equation} Next, we would like to know how many of these visits to state 0 correspond to a deletion. Recall the notions of a busy period and a deletion epoch, defined in equations~(\ref{eqbusydef}) and (\ref{eqEjdef}), respectively. By Lemma~\ref{lemqmi0}, $n$ corresponds to a deletion if and only if $Q[n]=Q[n-1]=0$. Consider a deletion in slot $m_i$. If $Q[m_i+1]=0$, then $m_i+1$ also corresponds to a deletion, that is, $m_i+1 = m_{i+1}$. If instead $Q[m_i+1]=1$, which happens with probability $x$, the fact that $Q[m_{i+1}-1]=0$ implies that there exists at least one busy period, $\{l,\ldots,u\}$, between $m_i$ and $m_{i+1}$, with $l=m_i+1$ and $u \leq m_{i+1}-1$. At the end of this period, a new busy period starts with probability $x$, and so on. 
In summary, a deletion epoch $E_i$ consists of the slot $m_i-m_1$, plus $N_i$ busy periods, where the $N_i$ are i.i.d., with\footnote{$\operatorname{Geo}(p)$ denotes a geometric random variable with mean $\frac{1}{p}$.} \begin{equation} N_1 \stackrel{d} {=} \operatorname{Geo}(1-x)-1 \end{equation} and hence \begin{equation}\label{eqEidecomp} |E_i| = 1+\sum_{j=1}^{N_i} B_{i,j}, \end{equation} where $ \{B_{i,j}\dvtx i,j\in\mathbb{N} \}$ are i.i.d. random variables, and $B_{i,j}$ corresponds to the length of the $j$th busy period in the $i$th epoch. Define $W[t] = (Q[t],Q[t+1] )$, $t\in\mathbb{Z}_{+}$. Since $Q$ is Markov, $W[t]$ is also a Markov chain, taking values in $\mathbb{Z}_{+}^2$. Since a deletion occurs in slot $t$ if and only if $Q[t]=Q[t-1]=0$ (Lemma~\ref{lemqmi0}), the $|E_i|$ correspond to the excursion times between two adjacent visits of $W$ to the state $(0,0)$, and are hence i.i.d. Viewing each visit of $W$ to $(0,0)$ as a renewal event and using the fact that exactly one deletion occurs within each deletion epoch, we have, by the elementary renewal theorem, \begin{equation} \lim_{n\to\infty} \frac{1}{n} I (M,n ) = \frac {1}{\mathbb{E}(|E_1|)}\qquad\mbox{a.s.} \label{eqlimfrac1} \end{equation} Denoting by $R_i$ the number of visits to the state 0 within $E_i$, we have that $R_i=1+N_i$. Treating $R_i$ as the reward associated with the renewal interval $E_i$, we have, by the time-average of a renewal reward process (cf. Theorem 6, Chapter~3, \cite{Gal96}), that \begin{equation} \lim_{n\to\infty} \frac{1}{n} \sum _{t=1}^n \mathbb{I} \bigl(Q[t]=0 \bigr) = \frac{\mathbb{E} (R_1 )}{\mathbb {E} (|E_1| )} = \frac{\mathbb{E} (N_1 )+1}{\mathbb{E} (|E_1| )}\qquad\mbox{a.s.} \label{eqlimfrac2} \end{equation} From equations (\ref{eqlimfrac1})~and~(\ref{eqlimfrac2}), we have \begin{equation} \frac{\lim_{n\to\infty} (1/n)I (M,n )}{\lim_{n\to\infty} (1/n) \sum_{t=1}^n \mathbb{I} (Q[t]=0 ) } = \frac{1}{\mathbb{E}(N_1)+1} = 1-x. \label{eqlimfrac} \end{equation} Combining equations~(\ref{eqIMnlim}), (\ref{eqlimzero}) and (\ref {eqlimfrac}), and the fact that $\mathbb{E}(N_1)=\mathbb{E}(\operatorname{Geo}(1-x))-1=\frac{1}{1-x}-1$, we have \begin{equation} \frac{\lambda-(1-p)}{\lambda+1-p} \cdot\biggl[1+x\cdot\frac {\lambda+1-p}{\lambda-(1-p)} \biggr]= 1-x, \end{equation} which yields \begin{equation} x=\frac{1-p}{\lambda+1-p}. \end{equation} This completes the proof of Proposition~\ref{propQRW}.\quad\qed \end{longlist}\noqed \end{pf} We summarize some of the key consequences of Proposition~\ref{propQRW} below, most of which are easy to derive using renewal theory and well-known properties of positive-recurrent random walks. \begin{prop} \label{proppnobperf} Suppose that $1>\lambda>1-p>0$, and denote by $Q[\infty]$ the steady-state distribution of $Q$. \begin{longlist}[(2)] \item[(1)] For all $i\in\mathbb{Z}_{+}$, \begin{equation} \mathbb{P} \bigl(Q[\infty]=i \bigr) = \biggl(1-\frac{1-p}{\lambda } \biggr)\cdot \biggl(\frac{1-p}{\lambda} \biggr)^i. \end{equation} \item[(2)] Almost surely, we have that \begin{equation} \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^n Q[i] = \mathbb{E} \bigl(Q [\infty] \bigr) = \frac{1-p}{\lambda- (1-p )}. \label{eqQavg} \end{equation} \item[(3)] Let $E_i = \{m^\Psi_{i},m^\Psi_{i}+1,\ldots,m^\Psi_{i+1}-1 \}$. 
Then the $|E_i|$ are i.i.d., with \begin{equation} \mathbb{E} \bigl(|E_1| \bigr) = \frac{1}{\lim_{n\to\infty}(1/n) I (M^\Psi,n )}= \frac{\lambda+1-p}{\lambda-(1-p)}, \end{equation} and there exist $a,b>0$ such that, for all $x\in\mathbb{R}_+$, \begin{equation} \mathbb{P} \bigl(|E_1| \geq x \bigr) \leq a\cdot\exp(- b\cdot x ). \label{eqEiExp} \end{equation} \item[(4)] Almost surely, we have that \begin{equation} m^\Psi_{i} \sim\mathbb{E} \bigl(|E_1| \bigr)\cdot i = \frac{\lambda+1-p}{\lambda-(1-p)}\cdot i \label{eqmilim} \end{equation} as $i \to\infty$. \end{longlist} \end{prop} \begin{pf} Claim $1$ follows from the well-known steady-state distribution of a random walk, or equivalently, the fact that $Q[\infty ]$ has the same distribution as the steady-state number of jobs in an $M/M/1$ queue with traffic intensity $\rho= \frac{1-p}{\lambda}$. For Claim $2$, since $Q$ is an irreducible Markov chain that is positive recurrent, it follows that its time-average coincides with $\mathbb{E} (Q[\infty] )$ almost surely. The fact that the $|E_i|$ are i.i.d. was shown in the discussion preceding equation~(\ref{eqlimfrac1}) in the proof of Proposition~\ref{propQRW}. The value of $\mathbb{E} (|E_1| )$ follows by combining equations~(\ref{eqIMnlim}) and (\ref{eqlimfrac1}). Let $B_{i,j}$ be the length of the $j$th busy period [defined in equation~(\ref{eqbusydef})] in $E_i$. By definition, $B_{1,1}$ is distributed as the time till the random walk $Q$ reaches state $0$, starting from state $1$. We have \[ \mathbb{P} (B_{1,1}\geq x ) \leq\mathbb{P} \Biggl(\sum _{j=1}^{ \lfloor x \rfloor} X_j \geq-1 \Biggr), \] where the $X_j$'s are i.i.d., with $\mathbb{P} (X_1=1 )=\frac{1-p}{\lambda+1-p}$ and $\mathbb{P} (X_1=-1 )=\frac{\lambda}{\lambda+1-p}$, which, by the Chernoff bound, implies an exponential tail bound for $\mathbb{P} (B_{1,1}\geq x )$, and in particular, \begin{equation}\label{eqGBlim} \lim_{\theta\downarrow0} G_{B_{1,1}}(\theta) = 1. \end{equation} By equation~(\ref{eqEidecomp}), the moment generating function for $|E_1|$ is given by \begin{eqnarray} G_{|E_1|}(\varepsilon) &= & \mathbb{E} \bigl(\exp\bigl(\varepsilon\cdot |E_1| \bigr) \bigr) \nonumber \\ &=& \mathbb{E} \Biggl(\exp\Biggl(\varepsilon\cdot\Biggl(1+\sum _{j=1}^{N_1}B_{1,j} \Biggr) \Biggr) \Biggr) \nonumber\\[-8pt]\\[-8pt] &\stackrel{\mathrm{(a)}} {=} & e^\varepsilon\cdot\mathbb{E} \bigl(\exp\bigl(N_1\cdot \ln G_{B_{1,1}}(\varepsilon) \bigr) \bigr) \nonumber \\ &= & e^\varepsilon\cdot G_{N_1} \bigl(\ln G_{B_{1,1}}(\varepsilon) \bigr),\nonumber \end{eqnarray} where (a) follows from the fact that $ \{N_1 \}\cup \{B_{1,j}\dvtx j\in\mathbb{N} \}$ are mutually independent, and $G_{N_1}(\theta)= \mathbb{E} (\exp(\theta\cdot N_1 ) )$. Since $N_1 \stackrel{d}{=} \operatorname{Geo}(1-x)-1$, $\lim_{\theta\downarrow 0}G_{N_1}(\theta)=1$, and by equation~(\ref{eqGBlim}), we have that $\lim _{\varepsilon\downarrow0}G_{|E_1|}(\varepsilon)=1$, which implies equation~(\ref{eqEiExp}). Finally, equation~(\ref{eqmilim}) follows from the third claim and the elementary renewal theorem. \end{pf} \subsection{\texorpdfstring{Optimality of the no-job-left-behind policy in heavy traffic.}{Optimality of the no-job-left-behind policy in heavy traffic}}\label{secoptmalityofflineproof} This section is devoted to proving the optimality of $\pi_{\mathrm{NOB}}$ as $\lambda\to1$, stated in the second claim of Theorem~\ref{teooffline}, which we isolate here in the form of the following proposition. 
\begin{prop} \label{propnobopt} Fix $p\in(0,1)$. We have that \[ \lim_{\lambda\rightarrow1}C (p,\lambda,\pi_{\mathrm{NOB}} )=\lim _{\lambda\rightarrow1}C^*_{\Pi_\infty} (p,\lambda). \] \end{prop} The proof is given at the end of this section; it proceeds by showing the following: \begin{longlist}[(3)] \item[(1)] Over a finite horizon $N$ and given a fixed number of deletions to be made, a~greedy deletion rule is optimal in minimizing the post-deletion area under $Q$ over $ \{1,\ldots, N \}$. \item[(2)] Any point of deletion chosen by $\pi_{\mathrm{NOB}}$ will also be chosen by the greedy policy, as $N\to\infty$. \item[(3)] The fraction of points chosen by the greedy policy but not by $\pi_{\mathrm{NOB}}$ diminishes as $\lambda\to1$, and hence the delay produced by $\pi_{\mathrm{NOB}}$ is the best possible, as $\lambda\to1$. \end{longlist} Fix $N\in\mathbb{N}$. Let $S (Q,N )$ be the partial sum $S (Q,N )=\sum_{n=1}^{N}Q [n ]$. For any sample path $Q$, denote by $\Delta_P (Q,N,n )$ the marginal decrease in the area under $Q$ over the horizon $ \{1,\ldots,N \}$ from applying a deletion at slot $n$, that is, \[ \Delta_P (Q,N,n )=S (Q,N )-S \bigl(D_P (Q,n ),N \bigr) \] and, analogously, \[ \Delta\bigl(Q,N,M' \bigr)=S (Q,N )-S \bigl(D \bigl(Q,M' \bigr),N \bigr), \] where $M'$ is a deletion sequence. We next define the notion of a greedy deletion rule, which constructs a deletion sequence by recursively adding the slot that leads to the maximum marginal decrease in $S(Q,N)$. \begin{defn}[(Greedy deletion rule)]\label{defgdRule} Fix an initial sample path $Q^{0}$ and $K,N\in\mathbb{N}$. The \textit{greedy deletion rule} is a mapping, $G (Q^{0},N,K )$, which outputs a finite deletion sequence $M^G= \{ m_{i}^{G}\dvtx 1\leq i\leq K \} $, given by \begin{eqnarray*} m_{1}^{G} & \in& \arg\max_{m\in\Phi(Q^0,N )}\Delta _P \bigl(Q^{0},N,m \bigr), \\ m_{k}^{G} & \in& \arg\max_{m\in\Phi(Q_{M^G}^{k-1},N )}\Delta _P \bigl(Q_{M^G}^{k-1},N,m \bigr),\qquad2\leq k \leq K, \end{eqnarray*} where $\Phi(Q,N ) = \Phi(Q ) \cap\{ 1,\ldots,N \}$ is the set of all locations in $Q$ in the first $N$~slots that can be deleted, and $Q_{M^G}^{k}=D (Q^{0}, \{ m_{i}^{G}\dvtx 1\leq i\leq k \} )$. Note that we will allow $m_{k}^{G}= \infty$, if there is no more entry to delete [i.e., $\Phi(Q_{M^G}^{k-1} )\cap\{1,\ldots,N \}=\varnothing$]. \end{defn} We now state a key lemma that will be used in proving Theorem~\ref{teooffline}. It shows that over a finite horizon and for a finite number of deletions, the greedy deletion rule yields the maximum reduction in the area under the sample path. \begin{lem}[(Dominance of greedy policy)]\label{lemgddom} Fix an initial sample path $Q^{0}$, horizon $N \in\mathbb{N}$ and number of deletions $K\in\mathbb{N}$. Let $M'$ be any deletion sequence with $I(M',N)=K$. Then \[ S \bigl(D \bigl(Q^{0},M' \bigr),N \bigr)\geq S \bigl(D \bigl(Q^{0},M^G \bigr),N \bigr), \] where $M^G=G (Q^{0},N,K )$ is the deletion sequence generated by the greedy policy. 
\end{lem} \begin{pf} By Lemma~\ref{lemlocindp}, it suffices to show that, for any sample path $ \{Q[n]\in\mathbb{Z}_{+}\dvtx n\in\mathbb {N} \}$ with $|Q[n+1]-Q[n]|=1$ if $Q[n]>0$ and $|Q[n+1]-Q[n]|\in \{0,1 \}$ if $Q[n]=0$, we have \begin{eqnarray}\label{eqgrrec} && S \bigl(D \bigl(Q,M' \bigr),N \bigr) \nonumber\\[-8pt]\\[-8pt] &&\qquad \geq\Delta_P \bigl(Q,N,m_{1}^{G} \bigr)+\mathop{\min_{\llvert\widetilde{M}\rrvert =k-1,}}_{\widetilde{M}\subset\Phi(D (Q,m_{1}^{G} ),N )}S\bigl(D \bigl(Q_{M^G}^{1},\widetilde{M} \bigr),N \bigr).\nonumber \end{eqnarray} By induction, this would imply that we should use the greedy rule at every step of deletion up to $K$. The following lemma states a simple monotonicity property. The proof is elementary, and is omitted. \begin{lem}[(Monotonicity in deletions)]\label{lemMonotonicity-in-Deletions} Let $Q$ and $Q'$ be two sample paths such that \[ Q [n ]\leq Q' [n ]\qquad\forall n\in\{ 1,\ldots,N \}. \] Then, for any $K\geq1$, \begin{equation}\label{eqmonDel1} \mathop{\min_{\llvert M\rrvert=K,}}_{M\subset\Phi(Q,N )}S \bigl(D (Q,M ),N \bigr)\leq\mathop{\min_{\llvert M\rrvert=K,}}_{M\subset\Phi(Q',N )}S \bigl(D \bigl (Q',M \bigr),N \bigr) \end{equation} and, for any finite deletion sequence $M'\subset\Phi(Q,N )$, \begin{equation} \Delta\bigl(Q,N,M' \bigr)\geq\Delta\bigl(Q',N,M' \bigr).\label {eqmonDel2} \end{equation} \end{lem} Recall the definition of a busy period in equation~(\ref{eqbusydef}). Let $J(Q,N)$ be the total number of busy periods in $ \{Q[n]\dvtx 1\leq n\leq N \}$, with the additional convention $Q[N+1]\stackrel {\triangle}{=}0$ so that the last busy period always ends on $N$. Let $B_j = \{l_j,\ldots,u_j \}$ be the $j$th busy period. It can be verified that a deletion in location $n$ leads to a decrease in the value of $S(Q,N)$ that is no more than the width of the busy period to which $n$ belongs; cf. Figure~\ref{figwater2}. Therefore, by definition, a greedy policy always seeks to delete in each step the first arriving job during a longest busy period in the current sample path, and hence \begin{equation} \Delta\bigl(Q,N, G(Q,N,1) \bigr) = \max_{1\leq j \leq J(Q,N)}\llvert B_{j}\rrvert. \label{eqbubble1} \end{equation} Let \[ \mathcal{J}^{*}(Q,N)= \arg\max_{1\leq j \leq J(Q,N)} \llvert B_j\rrvert. \] We consider the following cases, depending on whether $M'$ chooses to delete any job in the busy periods in $\mathcal{J}^{*}(Q,N)$. \begin{longlist} \item[{\textit{Case} 1: $M' \cap(\bigcup_{j\in\mathcal {J}^{*}(Q,N)}B_{j} )\neq\varnothing$.}] If $l_{j^*}\in M'$ for some $j^{*}\in\mathcal{J}^{*}$, by equation~(\ref{eqbubble1}), we can set $m_{1}^{G}$ to $l_{j^*}$. Since $m_{1}^{G}\in M'$ and the order of deletions does not impact the final resulting delay (Lemma~\ref{lemlocindp}), we have that equation~(\ref{eqgrrec}) holds, and we are done. Otherwise, choose $m^{*}\in M'\cap B_{j^{*}}$ for some $j^{*}\in\mathcal{J}^{*}$, and we have $m^*> l_{j^*}$. Let \[ Q'= D_P \bigl(Q,m^{*} \bigr)\quad\mbox{and}\quad \widehat{Q} = D_P (Q,l_{j^*} ). 
\] Since $Q [n ]>0$, $\forall n\in\{l_{j^*}, \ldots, u_{j^*}-1 \}$, we have $\widehat{Q} [n ]=Q [n ]-1\leq Q'[n]$, $\forall n\in\{l_{j^*}, \ldots, u_{j^*}-1 \}$ and $Q'[n]=Q[n]=\widehat{Q}[n]$, $\forall n \notin \{l_{j^*}, \ldots, u_{j^*}-1 \}$, which implies that \begin{equation} \widehat{Q} [n ]\leq Q' [n ]\qquad\forall n\in\{1,\ldots,N \}.\label{eqcase1dom} \end{equation} Equation~(\ref{eqgrrec}) holds by combining equation~(\ref{eqcase1dom}) and equation~(\ref{eqmonDel1}) in Lemma~\ref{lemMonotonicity-in-Deletions}, with $K=k-1$. \end{longlist} \begin{longlist} \item[{\textit{Case} 2: $M'\cap(\bigcup_{j\in\mathcal{J}^{*}(Q,N)}B_{j} )= \varnothing$.}] Let $m^{*}$ be any element in $M'$ and $Q'=D_P (Q,m^{*} )$. Clearly, $Q [n ]\geq Q' [n ]$ for all $n\in \{1,\ldots,N \}$, and by equation~(\ref{eqmonDel2}) in Lemma \ref{lemMonotonicity-in-Deletions}, we have that\footnote{For finite sets $A$ and $B$, $A\setminus B = \{a\in A\dvtx a\notin B \}$.} \begin{equation} \Delta\bigl(Q,N,M'\setminus\bigl\{ m^{*} \bigr\} \bigr)\geq\Delta\bigl(D_P \bigl(Q,m^* \bigr),N,M' \setminus\bigl\{ m^* \bigr\} \bigr). \label{eqcase2-diff-dom} \end{equation} Since $M'\cap(\bigcup_{j\in\mathcal {J}^{*}(Q,N)}B_{j} )=\varnothing$, we have that \begin{equation} \qquad\Delta_P \bigl(D \bigl(Q,M'\setminus\bigl\{ m^{*} \bigr\} \bigr),N,m_{1}^{G} \bigr)=\max _{1\leq j \leq J(Q,N)}\llvert B_{j}\rrvert>\Delta_P \bigl(Q,N,m^{*} \bigr).\label{eqcase2-step-dom} \end{equation} Let $\widehat{M}= \{m_{1}^{G} \}\cup(M'\setminus\{ m^{*} \} )$, and we have that \begin{eqnarray*} && S \bigl(D (Q,\widehat{M} ),N \bigr) \\ &&\qquad = S (Q,N )-\Delta\bigl(Q,N,M'\setminus\bigl\{ m^{*} \bigr\} \bigr)-\Delta_P \bigl(D \bigl(Q,M'\setminus\bigl\{ m^{*} \bigr\} \bigr),N,m_{1}^{G} \bigr) \\ &&\qquad \stackrel{\mathrm{(a)}} {\leq} S (Q,N )-\Delta\bigl(D_P \bigl(Q,m^* \bigr),N,M'\setminus\bigl\{ m^{*} \bigr\} \bigr) \\ &&\quad\qquad{} -\Delta_P \bigl(D \bigl(Q,M'\setminus\bigl\{ m^{*} \bigr\} \bigr),N,m_{1}^{G} \bigr) \\ &&\qquad \stackrel{\mathrm{(b)}} {<} S (Q,N )-\Delta\bigl(D_P \bigl(Q,m^* \bigr),N,M'\setminus\bigl\{ m^{*} \bigr\} \bigr)-\Delta _P \bigl(Q,N,m^{*} \bigr) \\ &&\qquad = S \bigl(D \bigl(Q,M' \bigr),N \bigr), \end{eqnarray*} where (a) and (b) follow from equations~(\ref{eqcase2-diff-dom}) and (\ref{eqcase2-step-dom}), respectively, which shows that equation~(\ref{eqgrrec}) holds (and in this case the inequality there is strict). Cases 1~and~2 together complete the proof of Lemma~\ref{lemgddom}.\quad\qed \end{longlist}\noqed \end{pf} We are now ready to prove Proposition~\ref{propnobopt}. \begin{pf*}{Proof of Proposition~\ref{propnobopt}} Lemma~\ref{lemgddom} shows that, for any fixed number of deletions over a finite horizon $N$, the greedy deletion policy (Definition~\ref{defgdRule}) yields the smallest area under the resulting sample path, $Q$, over $ \{ 1,\ldots, N \}$. The main idea of the proof is to show that the area under $Q$ after applying $\pi_{\mathrm{NOB}}$ is asymptotically the same as that of the greedy policy, as $N\to\infty$ and $\lambda\to1$ (in this particular order of limits). In some sense, this means that the jobs in $M^\Psi$ account for almost all of the delays in the system, as $\lambda\to1$. The following technical lemma is useful. \begin{lem} \label{lemtopk} For a finite set $S\subset\mathbb{R}$ and $k\in\mathbb{N}$, define \[ f(S,k) = \frac{\mathrm{sum\ of\ the}\ k\ \mathrm{largest\ elements\ in}\ S}{|S|}. \] Let $ \{X_i\dvtx 1\leq i \leq n \}$ be i.i.d. 
random variables taking values in $\mathbb{Z}_{+}$, where \mbox{$\mathbb{E} (X_1 )<\infty$}. Then for any sequence of random variables $ \{H_n\dvtx n\in \mathbb{N} \}$, with\vadjust{\goodbreak} \mbox{$H_n \lesssim\alpha n$} a.s. as $n \to \infty$ for some $\alpha\in(0,1)$, we have \begin{equation} \qquad \limsup_{n \to\infty} f \bigl( \{X_i\dvtx 1\leq i \leq n \},H_n \bigr) \leq\mathbb{E} \bigl(X_1\cdot\mathbb{I} \bigl(X_1\geq\widebar{F}{}^{ -1}_{X_1}(\alpha) \bigr) \bigr)\qquad\mbox{a.s.}, \end{equation} where $\widebar{F}{}^{ -1}_{X_1}(y) = \min\{x\in\mathbb{N}\dvtx \mathbb{P} (X_1\geq x )< y \}$. \end{lem} \begin{pf} See Appendix~\ref{applemtopk}. \end{pf} Fix an initial sample path $Q^0$. We will denote by $M^\Psi= \{ m^\Psi_i\dvtx i\in\mathbb{N} \}$ the deletion sequence generated by $\pi_{\mathrm{NOB}}$ on $Q^0$. Define \begin{equation} l (n ) = n-\max_{1\leq i \leq I (M^\Psi,n )} |E_i|, \label{eqldef} \end{equation} where $E_i$ is the $i$th deletion epoch of $M^\Psi$, defined in equation~(\ref{eqEjdef}). Since $Q^0[n]\geq Q^0[m_i]$ for all $i\in \mathbb{N}$, it is easy to check that \[ \Delta_P \bigl(D \bigl(Q^0, \bigl\{m^\Psi_j\dvtx 1\leq j \leq i-1 \bigr\} \bigr),n,m^\Psi_i \bigr) = n-m^\Psi_i +1 \] for all $i\in\mathbb{N}$. The function $l$ was defined so that the first $I(M^\Psi,l(n))$ deletions made by a greedy rule over the horizon $ \{1,\ldots,n \}$ are exactly $ \{1,\ldots, l(n) \}\cap M^\Psi$. More formally, we have the following lemma. \begin{lem} \label{lemGdoverlap} Fix $n\in\mathbb{N}$, and let $M^G=G (Q^{0},n,I (M^\Psi,l (n ) ) )$. Then $m_{i}^{G}=m^\Psi_i$, for all $i\in\{1,\ldots, I (M^\Psi,l(n) ) \}$. \end{lem} Fix $K\in\mathbb{N}$, and an arbitrary feasible deletion sequence, $\widetilde{M}$, generated by a policy in $\Pi_\infty$. We can write \begin{eqnarray}\label{eqIineq} I \bigl(\widetilde{M}, m^\Psi_K \bigr) &=& I \bigl(M^\Psi, l \bigl(m^\Psi_K \bigr) \bigr) + \bigl(I \bigl(M^\Psi, m^\Psi_K \bigr) - I \bigl(M^\Psi,l \bigl(m^\Psi_K \bigr) \bigr) \bigr)\nonumber \\ &&{} + \bigl(I \bigl(\widetilde{M}, m^\Psi_K \bigr)-I \bigl(M^\Psi, m^\Psi_K \bigr) \bigr)\nonumber \\ &=& I \bigl(M^\Psi, l \bigl(m^\Psi_K \bigr) \bigr) + \bigl(K - I \bigl(M^\Psi,l \bigl(m^\Psi_K \bigr) \bigr) \bigr) \\ &&{} + \bigl(I \bigl(\widetilde{M}, m^\Psi_K \bigr)-I \bigl(M^\Psi, m^\Psi_K \bigr) \bigr)\nonumber \\ &=& I \bigl(M^\Psi, l \bigl(m^\Psi_K \bigr) \bigr) + h(K), \nonumber \end{eqnarray} where \begin{equation} h(K) = \bigl(K - I \bigl(M^\Psi,l \bigl(m^\Psi_K \bigr) \bigr) \bigr) + \bigl(I \bigl(\widetilde{M}, m^\Psi_K \bigr)-I \bigl(M^\Psi, m^\Psi_K \bigr) \bigr). \end{equation} We have the following characterization of $h$. \begin{lem} \label{lemhK} $ h(K) \lesssim\frac{1-\lambda}{\lambda-(1-p)} \cdot K$, as $K\to \infty$, a.s. \end{lem} \begin{pf} See Appendix~\ref{applemhK}.\vadjust{\goodbreak} \end{pf} Let \begin{equation} M^{G,n} = G \bigl(Q^0,n, I (\widetilde{M},n ) \bigr), \end{equation} where the greedy deletion map $G$ was defined in Definition~\ref{defgdRule}. By Lemma~\ref{lemGdoverlap} and the definition of $M^{G,n}$, we have that \begin{equation} M^\Psi\cap\bigl\{1,\ldots, l \bigl(m^\Psi_K \bigr) \bigr\} \subset M^{G,m^\Psi_K}. \end{equation} Therefore, we can write \begin{equation} M^{G,m^\Psi_K} = \bigl(M^\Psi\cap\bigl\{1,\ldots, l \bigl(m^\Psi_K \bigr) \bigr\} \bigr) \cup \widebar{M}{}^G_K, \label{eqMGoverline} \end{equation} where $\widebar{M}{}^G_K\stackrel{\triangle}{=}M^{G,m^\Psi_K} \setminus(M^\Psi\cap\{1,\ldots, l (m^\Psi _K ) \} )$. 
Since $\llvert M^{G,m^\Psi_K}\rrvert = I (\widetilde{M},m^\Psi_K )$ by definition, by equation~(\ref {eqIineq}), \begin{equation} \bigl\llvert\widebar{M}{}^G_K\bigr\rrvert= h(K). \end{equation} We have \begin{eqnarray} \label{eqSdiff} && S \bigl(D \bigl(Q^0,M^\Psi\bigr),m^\Psi_K \bigr) - S \bigl(D \bigl(Q^0,\widetilde{M} \bigr),m^\Psi_K \bigr)\nonumber \\ &&\qquad \stackrel{\mathrm{(a)}} {\leq} S \bigl(D \bigl(Q^0,M^\Psi \bigr),m^\Psi_K \bigr) - S \bigl(D \bigl(Q^0,M^{G,m^\Psi_K} \bigr),m^\Psi_K \bigr) \\ &&\qquad \stackrel{\mathrm{(b)}} {=} \Delta\bigl(D \bigl(Q^0,M^\Psi \bigr),m^\Psi_K,\widebar{M}{}^G_K \bigr),\nonumber \end{eqnarray} where (a) is based on the dominance of the greedy policy over any finite horizon (Lemma~\ref{lemgddom}), and (b) follows from equation~(\ref{eqMGoverline}). Finally, we claim that there exists $g(x)\dvtx \mathbb{R}\to\mathbb {R}_+$, with $g(x)\to0$ as $x\to1$, such that \begin{equation} \limsup_{K \to\infty} \frac{\Delta(D (Q^0,M^\Psi ),m^\Psi_K,\widebar{M}{}^G_K )}{m^\Psi_K} \leq g(\lambda)\qquad\mbox{a.s.}\label{eqdeltadim} \end{equation} Equations~(\ref{eqSdiff}) and (\ref{eqdeltadim}) combined imply that \begin{eqnarray} C (p,\lambda, \pi_{\mathrm{NOB}} ) &=& \limsup_{K\to\infty} \frac {S (D (Q^0,M^\Psi),m^\Psi_K )}{m^\Psi_K} \nonumber \\ &\leq& g(\lambda)+ \limsup_{K\to\infty} \frac{S (D (Q^0,\widetilde{M} ),m^\Psi_K )}{m^\Psi_K} \\ &= & g(\lambda)+ \limsup_{n \to\infty} \frac{S (D (Q^0,\widetilde{M} ),n )}{n}\qquad\mbox{a.s.},\nonumber \end{eqnarray} which shows that \[ C (p,\lambda, \pi_{\mathrm{NOB}} ) \leq g(\lambda) + \inf_{\pi \in\Pi_\infty}C (p,\lambda, \pi). \] Since $g(\lambda)\to0$ as $\lambda\to1$, this proves Proposition \ref{propnobopt}. To show equation~(\ref{eqdeltadim}), denote by $Q$ the (unshifted) sample path after applying $\pi_{\mathrm{NOB}}$, \[ Q = D \bigl(Q^0,M^\Psi\bigr), \] and by $V_i$ the area under $Q$ within $E_i$, \[ V_i = \sum_{n=m^\Psi_i}^{m^\Psi_{i+1}-1} Q [n ]. \] An example of $V_i$ is illustrated as the area of the shaded region in Figure~\ref{figwater2}. By Proposition~\ref{propQRW}, $Q$ is a Markov chain, and so is the process $W[n]= (Q[n], Q[n+1] )$. By Lemma~\ref{lemqmi0}, $E_i$ corresponds to the indices between two adjacent returns of the chain $W$ to the state $(0,0)$. Since the $i$th return of a Markov chain to a particular state is a stopping time, it can be shown, using the strong Markov property of $W$, that the segments of $Q$, $ \{Q[n]\dvtx n\in E_i \}$, are mutually independent and identically distributed across different values of $i$. Therefore, the $V_i$'s are i.i.d. Furthermore, \begin{equation} \mathbb{E} (V_1 )\stackrel{\mathrm{(a)}} {\leq} \mathbb{E} \bigl(|E_1|^2 \bigr) \stackrel{\mathrm{(b)}} {<} \infty, \end{equation} where (a) follows from the fact that $|Q[n+1]-Q[n]|\leq1$ for all $n$, and hence $V_i\leq|E_i|^2$ for any sample path of $Q^0$, and (b) from the exponential tail bound on $\mathbb{P}(|E_1|\geq x)$, given in equation~(\ref{eqEiExp}). Since the values of $Q$ at the two ends of $E_i$, $m^\Psi_i$ and $m^\Psi_{i+1}-1$, are both zero, each additional deletion within $E_i$ cannot produce a marginal decrease in the area under $Q$ of more than $V_i$; cf. Figure~\ref{figwater2}. Therefore, the value of~$\Delta (D (Q^0,M^\Psi),m^\Psi_K,\widebar{M}{}^G_K )$ can be no greater than the sum of the $h(K)$ largest $V_i$'s over the horizon $n\in\{1,\ldots, m^\Psi_K \}$. 
We have \begin{eqnarray} && \limsup_{K \to\infty} \frac{\Delta(D (Q^0,M^\Psi ),m^\Psi_K,\widebar{M}{}^G_K )}{m^\Psi_K} \nonumber \\ &&\qquad = \limsup_{K \to\infty} f \bigl( \{V_i\dvtx 1\leq i \leq K \},h(K) \bigr) \cdot\frac{K}{m^\Psi_K} \nonumber\\[-8pt]\\[-8pt] &&\qquad \stackrel{\mathrm{(a)}} {=} \limsup_{K \to\infty} f \bigl( \{V_i\dvtx 1\leq i \leq K \},h(K) \bigr) \cdot\frac{\lambda-(1-p)}{\lambda+1-p} \nonumber \\ &&\qquad \stackrel{\mathrm{(b)}} {\leq} \mathbb{E} \biggl(V_1\cdot\mathbb{I} \biggl(V_1\geq\widebar{F}{}^{ -1}_{V_1} \biggl( \frac{1-\lambda}{\lambda -(1-p)} \biggr) \biggr) \biggr) \cdot\frac{\lambda-(1-p)}{\lambda+1-p},\nonumber \end{eqnarray} where (a) follows from equation~(\ref{eqmilim}), and (b) from Lemmas~\ref{lemtopk} and~\ref{lemhK}. Since $\mathbb{E} (V_1 )< \infty$, and $\widebar{F}{}^{ -1}_{V_1}(x)\to\infty$ as $x \to0$, it follows that \[ \mathbb{E} \biggl(V_1\cdot\mathbb{I} \biggl(V_1\geq \widebar{F}{}^{ -1}_{V_1} \biggl(\frac{1-\lambda}{\lambda-(1-p)} \biggr) \biggr) \biggr) \to0 \] as $\lambda\to1$. Equation~(\ref{eqdeltadim}) is proved by setting \[ g(\lambda) = \mathbb{E} \biggl(V_1\cdot\mathbb{I} \biggl(V_1\geq \widebar{F}{}^{ -1}_{V_1} \biggl(\frac{1-\lambda}{\lambda-(1-p)} \biggr) \biggr) \biggr) \cdot\frac{\lambda-(1-p)}{\lambda+1-p}. \] This completes the proof of Proposition~\ref{propnobopt}. \end{pf*} \subsubsection{\texorpdfstring{Why not use greedy?}{Why not use greedy}}\label{sec6.3.1} The proof of Proposition~\ref{propnobopt} relies on a sample-path-wise coupling to the performance of a greedy deletion rule. It is then only natural to ask: since the time horizon is indeed finite in all practical applications, why not simply use the greedy rule as the preferred offline policy, as opposed to $\pi_{\mathrm{NOB}}$? There are at least two reasons for focusing on $\pi_{\mathrm{NOB}}$ instead of the greedy rule. First, the structure of the greedy rule is highly global, in the sense that each deletion decision uses information from the entire sample path over the horizon. As a result, the greedy rule tells us little about how to design a good policy with a \emph{fixed} lookahead window (e.g., Theorem~\ref{teolookahead}). In contrast, the performance analysis of $\pi_{\mathrm{NOB}}$ in Section~\ref {secperformoffline} reveals a highly \emph{regenerative} structure: the deletions made by $\pi_{\mathrm{NOB}}$ essentially depend only on the dynamics of $Q^0$ in the same deletion epoch (the $E_i$'s), and what happens beyond the current epoch becomes irrelevant. This is the key intuition that led to our construction of the finite-lookahead policy in Theorem~\ref{teolookahead}. A second (and perhaps minor) reason is that of computational complexity. By a small sacrifice in performance, $\pi_{\mathrm{NOB}}$ can be efficiently implemented using a linear-time algorithm (Section~\ref{seclinearalgo}), while it is easy to see that a naive implementation of the greedy rule would require super-linear complexity with respect to the length of the horizon. 
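To make the preceding remark concrete, the following sketch illustrates how such a linear-time implementation could look. It is written in Python; the function name, the finite-horizon truncation and the toy example are our own choices, and the sketch is an illustration of the idea rather than a reproduction of the algorithm of Section~\ref{seclinearalgo}. It computes the deletion slots of $\pi_{\mathrm{NOB}}$ from a finite prefix of $Q^0$, using the characterization $m_k = \inf \{n\dvtx Q^0[n]=k \mbox{ and } Q^0[t]\geq k\ \forall t\geq n \}$ (cf.\ Appendix~\ref{applemqmi0}): $m_k$ is the first slot $n$ at which $Q^0[n]$ and the suffix minimum $\min_{t\geq n}Q^0[t]$ both equal $k$. One backward pass computes all suffix minima and one forward pass collects the deletion slots, so the total work is linear in the horizon.
\begin{verbatim}
def nob_deletions(q0):
    """Deletion slots of pi_NOB on a finite prefix q0 of Q^0.

    m_k is the first slot n with q0[n] == k and min(q0[n:]) == k.
    Backward pass: suffix minima.  Forward pass: collect the slots
    in increasing order of k.  Runs in O(len(q0)) time.
    """
    smin = q0[:]                      # smin[n] = min(q0[n:])
    for n in range(len(q0) - 2, -1, -1):
        smin[n] = min(smin[n], smin[n + 1])
    deletions, k = [], 1
    for n, q in enumerate(q0):
        if q == k and smin[n] == k:   # this slot is m_k
            deletions.append(n)
            k += 1
    return deletions

# Toy example (not from the paper): the walk dips back to 0
# after slots 1-3, so the first deletion occurs at slot 5.
print(nob_deletions([0, 1, 2, 1, 0, 1, 2, 3, 2, 3, 4, 5]))
# -> [5, 6, 9, 10, 11]; entries near the end of the horizon are
# unreliable, since the true m_k depend on the entire future path.
\end{verbatim}
The caveat in the final comment is precisely the boundary effect that motivates the lookahead analysis of Section~\ref{secfinitelookahead}: whether a slot is a deletion point of $\pi_{\mathrm{NOB}}$ can only be certified once the sample path is known sufficiently far into the future.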
\subsection{\texorpdfstring{Proof of Theorem \protect\ref{teooffline}.}{Proof of Theorem 2}}\label{secpfthmoffline} The fact that $\pi_{\mathrm{NOB}}$ is feasible follows from equation~(\ref {eqIMnlim}) in Lemma~\ref{lemnobbasic}, that is, \[ \limsup_{n\to\infty}\frac{1}{n}I \bigl(M^\Psi,n \bigr) \leq\frac{\lambda- (1-p )}{\lambda+1-p}<\frac{p}{\lambda +1-p}\qquad\mbox{a.s.} \] Let $ \{\widetilde{Q}[n]\dvtx n\in\mathbb{Z}_{+} \}$ be the resulting sample path after applying $\pi_{\mathrm{NOB}}$ to the initial sample path $ \{Q^0[n]\dvtx n\in\mathbb{Z}_{+} \}$, and let \[ Q[n]=\widetilde{Q} \bigl[n+m^\Psi_1 \bigr]\qquad\forall n \in\mathbb{N}, \] where $m^\Psi_1$ is the index of the first deletion made by $\pi_{\mathrm{NOB}}$. Since $\lambda>1-p$, the random walk $Q^0$ is transient, and hence $m^\Psi_1<\infty$ almost surely. We have that, almost surely, \begin{eqnarray}\label{eqcplam} C (p,\lambda, \pi_{\mathrm{NOB}} ) &=& \lim_{n\to\infty} \frac {1}{n}\sum_{i=1}^n \widetilde{Q}[i] \nonumber \\ &=& \lim_{n\to\infty}\frac{1}{n}\sum _{i=1}^{m^\Psi_1} \widetilde{Q}[i] + \lim _{n\to\infty}\frac{1}{n}\sum_{i=1}^n Q[i] \\ &=& \frac{1-p}{\lambda-(1-p)}, \nonumber \end{eqnarray} where the last equality follows from equation~(\ref{eqQavg}) in Proposition~\ref{proppnobperf}, and the fact that $m^\Psi_1<\infty$ almost surely. Letting $\lambda\to1$ in equation~(\ref{eqcplam}) yields the finite limit of the delay under heavy traffic, \[ \lim_{\lambda\to1}C (p,\lambda, \pi_{\mathrm{NOB}} ) = \lim _{\lambda\to1} \frac{1-p}{\lambda-(1-p)} = \frac{1-p}{p}. \] Finally, the delay optimality of $\pi_{\mathrm{NOB}}$ in heavy traffic was proved in Proposition~\ref{propnobopt}, that is, that \[ \lim_{\lambda\to1}C (p,\lambda,\pi_{\mathrm{NOB}} )= \lim _{\lambda\to1}C^*_{\Pi_\infty} (p,\lambda). \] This completes the proof of Theorem~\ref{teooffline}. \section{\texorpdfstring{Policies with a finite lookahead.}{Policies with a finite lookahead}}\label{sec7} \label{secfinitelookahead} \subsection{\texorpdfstring{Proof of Theorem \protect\ref{teolookahead}.}{Proof of Theorem 3}}\label{secpfthmlookahead} As pointed out in the discussion preceding Theorem~\ref{teolookahead}, for any initial sample path and $w<\infty$, an arrival that is deleted under the $\pi _{\mathrm{NOB}}$ policy will also be deleted under $\pi_{\mathrm{NOB}}^{w}$. Therefore, the delay guarantee for $\pi_{\mathrm{NOB}}$ (Theorem~\ref{teooffline}) carries over to $\pi_{\mathrm{NOB}}^{w(\lambda)}$, and for the rest of the proof, we will be focusing on showing that $\pi_{\mathrm{NOB}}^{w(\lambda)}$ is feasible under an appropriate scaling of $w(\lambda)$. We begin by stating an exponential tail bound on the distribution of the discrete-time predictive window, $W(\lambda,n)$, defined in equation~(\ref{eqwdiscrete}), \[ W(\lambda,n) = \max\bigl\{ k \in\mathbb{Z}_{+}\dvtx T_{n+k} \leq T_{n}+w(\lambda) \bigr\}. \] It is easy to see that $ \{W (\lambda,m^\Psi_i )\dvtx i \in\mathbb{N} \}$ are i.i.d., with $W (\lambda,m^\Psi _1 )$ distributed as a Poisson random variable with mean $(\lambda+1-p)w(\lambda)$. Since \[ \mathbb{P} \bigl(W \bigl(\lambda,m^\Psi_1 \bigr) \leq x \bigr) \leq\mathbb{P} \Biggl( \sum_{k=1}^{ \lfloor w(\lambda) \rfloor} X_k \leq x \Biggr), \] where the $X_k$ are i.i.d. 
Poisson random variables with mean $\lambda +(1-p)$, applying the Chernoff bound we have that there exist $c,d>0$ such that \begin{equation} \mathbb{P} \biggl(W \bigl(\lambda,m^\Psi_1 \bigr) \leq \frac {\lambda+1-p}{2} \cdot w(\lambda) \biggr) \leq c\cdot\exp\bigl(-d\cdot w( \lambda)\bigr) \end{equation} for all $w(\lambda)>0$. We now analyze the deletion rate induced by the $\pi_{\mathrm{NOB}}^{w(\lambda )}$ policy. For the pure purpose of analysis (as opposed to practical efficiency), we will consider a new deletion policy, denoted by $\sigma ^{w(\lambda)}$, which can be viewed as a relaxation of $\pi _{\mathrm{NOB}}^{w(\lambda)}$. \begin{defn} Fix $w\in\mathbb{R}_{+}$. The deletion policy $\sigma^w$ is defined such that for each deletion epoch $E_i$, $i\in\mathbb{N}$: \begin{longlist}[(2)] \item[(1)] if $|E_i| \leq W (\lambda,m^\Psi_i )$, then only the first arrival of this epoch, namely, the arrival in slot $m^\Psi_i$, is deleted; \item[(2)] otherwise, all arrivals within this epoch are deleted. \end{longlist} \end{defn} It is easy to verify that $\sigma^{w}$ can be implemented with $w$ units of lookahead, and that the set of deletions made by $\sigma ^{w(\lambda)}$ is almost surely a superset of the set of deletions made by $\pi_{\mathrm{NOB}}^{w(\lambda)}$. Hence, the feasibility of $\sigma^{w(\lambda)}$ will imply that of $\pi_{\mathrm{NOB}}^{w(\lambda)}$. Denote by $D_i$ the number of deletions made by $\sigma^{w(\lambda)}$ in the epoch $E_i$. By the construction of the policy, the $D_i$ are i.i.d., and depend only on the length of $E_i$ and the number of arrivals within. We have\footnote{For simplicity of notation, we assume that $\frac{\lambda+1-p}{2}\cdot w(\lambda)$ is always an integer. This does not change the scaling behavior of $w(\lambda)$. } \begin{eqnarray}\label{eqED1} \mathbb{E} (D_1 )&\leq& 1+ \mathbb{E} \bigl[|E_i| \cdot\mathbb{I} \bigl(|E_i|\geq W \bigl(\lambda, m^\Psi_i \bigr) \bigr) \bigr] \nonumber \\ &\leq& 1+ \mathbb{E} \biggl[|E_i|\cdot\mathbb{I} \biggl(|E_i| \geq\frac{\lambda+1-p}{2}\cdot w(\lambda) \biggr) \biggr] \nonumber \\ &&{} + \mathbb{E} \bigl(|E_i| \bigr)\cdot\mathbb{P} \biggl( W \bigl( \lambda,m^\Psi_i \bigr) \leq\frac{\lambda+1-p}{2}\cdot w( \lambda) \biggr) \nonumber\\[-8pt]\\[-8pt] &\leq& 1 + \Biggl(\sum_{k = ((\lambda+1-p)/2)\cdot w(\lambda )}^{\infty} k \cdot a \cdot\exp(-b\cdot k) \Biggr)\nonumber \\ &&{} + \frac{\lambda+1-p}{\lambda-(1-p)}\cdot c\cdot\exp\bigl(-d\cdot w( \lambda)\bigr)\nonumber \\ &\stackrel{\mathrm{(a)}} {\leq}& 1+ h \cdot w(\lambda) \cdot\exp\bigl(-l\cdot w(\lambda)\bigr)\nonumber \end{eqnarray} for some $h,l>0$, where (a) follows from the fact that $\sum _{k=n}^\infty k\cdot\exp(-b\cdot k) = \mathcal{O} (n \cdot\exp (-b \cdot n) )$ as $n\to\infty$. Since the $D_i$ are i.i.d., using basic renewal theory it is not difficult to show that the average rate of deletion in discrete time under the policy $\sigma^{w(\lambda)}$ is equal to $\frac{\mathbb {E}(D_1)}{\mathbb{E}(|E_1|)}$. In order for the policy to be feasible, one must have that \begin{equation} \frac{\mathbb{E}(D_1)}{\mathbb{E}(|E_1|)} = \frac{\lambda-(1-p)}{\lambda+ 1-p}\cdot\mathbb{E}(D_1) \leq\frac{p}{\lambda+ 1-p}. 
\label{eqdelrate1} \end{equation} By equations~(\ref{eqED1}) and (\ref{eqdelrate1}), we want to ensure that \[ \frac{p}{\lambda- (1-p)} \geq1+h\cdot w(\lambda)\cdot\exp\bigl (-l\cdot w(\lambda) \bigr), \] which yields, after taking the logarithm on both sides, \begin{equation} w(\lambda)\geq\frac{1}{l} \log\biggl(\frac{1}{1-\lambda} \biggr) + \frac{1}{l} \log\bigl(\bigl[\lambda-(1-p)\bigr]\cdot h\cdot w(\lambda) \bigr). \end{equation} It is not difficult to verify that for all $p\in(0,1)$ there exists a constant $C$ such that the above inequality holds for all $\lambda\in (1-p,1)$, by letting $w(\lambda)=C\log(\frac{1}{1-\lambda})$. This proves the feasibility of $\sigma^{w(\lambda)}$, which implies that $\pi_{\mathrm{NOB}}^{w(\lambda)}$ is also feasible. This completes the proof of Theorem~\ref{teolookahead}. \section{\texorpdfstring{Concluding remarks and future work.}{Concluding remarks and future work}}\label{sec8} \label{secconclusions} The main objective of this paper is to study the impact of future information on the performance of a class of admissions control problems, with a constraint on the time-average rate of redirection. Our model is motivated as a study of a dynamic resource allocation problem between slow (congestion-prone) and fast (congestion-free) processing resources. It could also serve as a simple canonical model for analyzing delays in large server farms or cloud clusters with resource pooling \cite{TX12}; cf. Appendix~\ref{secresourcepooling}. Our main results show that the availability of future information can dramatically reduce the delay experienced by admitted customers: the delay converges to a finite constant even as the traffic load approaches the system capacity (``heavy-traffic delay collapse''), if the decision maker is allowed a sufficiently large lookahead window (Theorem~\ref{teolookahead}). There are several interesting directions for future exploration. On the theoretical end, a main open question is whether a matching lower bound on the amount of future information required to achieve the heavy-traffic delay collapse can be proved (Conjecture~\ref{conjinfolowerbound}), which, together with the upper bound given in Theorem~\ref{teolookahead}, would imply a duality between delay and the length of lookahead into the future. Second, we believe that our results can be generalized to the case where the arrival and service processes are non-Poisson. We note that the $\pi_{\mathrm{NOB}}$ policy is indeed feasible for a wide range of non-Poisson arrival and service processes (e.g., renewal processes), as long as they satisfy a form of the strong law of large numbers, with appropriate time-average rates (Lemma~\ref{lemnobbasic}). It seems more challenging to generalize the results on the optimality of $\pi _{\mathrm{NOB}}$ and the performance guarantees. However, it may be possible to establish a generalization of the delay optimality result using limiting theorems (e.g., diffusion approximations). For instance, with sufficiently well-behaved arrival and service processes, we expect that one can establish a result similar to Proposition~\ref{propQRW} by characterizing the resulting queue length process from $\pi_{\mathrm{NOB}}$ as a reflected Brownian motion in $\mathbb{R}_+$, in the limit of $\lambda\to1$ and $p\to0$, with appropriate scaling. 
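Such extensions would also be easy to explore numerically. As a minimal illustration of the kind of simulation involved (in Python; the parameter values, function name and horizon are arbitrary choices of ours, not taken from the paper), the following sketch samples the pre-deletion walk $Q^0$, applies $\pi_{\mathrm{NOB}}$ via the identity $\widetilde{Q}[n] = Q^0[n] - I (M^\Psi,n )$ (cf.\ Lemma~\ref{lemqmi0}), and compares the resulting time-average queue length with the closed form $\frac{1-p}{\lambda-(1-p)}$ of equation~(\ref{eqcplam}); agreement holds up to Monte Carlo and finite-horizon boundary errors.
\begin{verbatim}
import random

def nob_average_queue(lam, p, horizon=2_000_000, seed=1):
    """Estimate the time-average queue length under pi_NOB."""
    rng = random.Random(seed)
    up = lam / (lam + 1 - p)        # P(a slot is an arrival)
    q0 = [0]
    for _ in range(horizon):        # pre-deletion walk Q^0
        step = 1 if rng.random() < up else -1
        q0.append(max(q0[-1] + step, 0))
    smin = q0[:]                    # smin[n] = min(q0[n:])
    for n in range(len(q0) - 2, -1, -1):
        smin[n] = min(smin[n], smin[n + 1])
    total, k = 0, 1                 # k - 1 = deletions so far
    for n, q in enumerate(q0):
        if q == k and smin[n] == k:     # slot n is m_k
            k += 1
        total += q - (k - 1)        # post-deletion queue length
    return total / len(q0)

lam, p = 0.995, 0.1
print(nob_average_queue(lam, p))    # close to the value below
print((1 - p) / (lam - (1 - p)))    # = 0.9/0.095, about 9.47
\end{verbatim}
Note that the simulated average remains moderate even though $\lambda$ is very close to $1$, which is exactly the heavy-traffic delay collapse discussed above.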
Another interesting variation of our problem is the setting where each job comes with a prescribed \emph{size}, or \emph{workload}, and the decision maker is able to observe both the arrival times and workloads of jobs up to a finite lookahead window. It is conceivable that many analogous results can be established for this setting, by studying the associated workload (as opposed to queue length) process, while the analysis may be less clean due to the lack of a simple random-walk-based description of the system dynamics. Moreover, the \emph{server} could potentially exploit additional information about the jobs' workloads in making scheduling decisions, and it is unclear what the performance and fairness implications are for the design of admissions control policies. There are other issues that need to be addressed if our offline policies (or policies with a finite lookahead) are to be applied in practice. Perhaps the most important question is the impact of \emph {observational noise} on performance, since in reality the future seen in the lookahead window cannot be expected to match the actual realization exactly. We conjecture, based on the analysis of $\pi _{\mathrm{NOB}}$, that the performance of both $\pi_{\mathrm{NOB}}$ and its finite-lookahead version is robust to small noise or perturbations (e.g., if the actual sample path is at most $\varepsilon$ away from the predicted one), while it remains to thoroughly verify and quantify the extent of the impact, either empirically or through theory. Also, it is unclear what the best practices should be when the lookahead window is very small relative to the traffic intensity $\lambda$ ($w \ll\log {\frac{1}{1-\lambda}}$), and this regime is not covered by the results in this paper (as illustrated in Figure~\ref{figcollapse}). \begin{appendix} \section{\texorpdfstring{Additional Proofs}{Additional Proofs}}\label{secA} \subsection{\texorpdfstring{Proof of Lemma \protect\ref{lemnobbasic}.}{Proof of Lemma 2}}\label{applemnobbasic} Since $\lambda> 1-p$, with probability one, there exists $T<\infty$ such that the continuous-time queue length process without deletion satisfies $Q^0(t)>0$ for all $t\geq T$. Therefore, without any deletion, all service tokens are matched with some job after time $T$. By the stack interpretation, $\pi_{\mathrm{NOB}}$ only deletes jobs that would not have been served, and hence does not change the original matching of service tokens to jobs. This proves the first claim. By the first claim, since all subsequent service tokens are matched with a job after some time $T$, there exists some $N<\infty$ such that \begin{equation} \widetilde{Q}[n] = \widetilde{Q}[N] + \bigl(A[n]-A[N] \bigr)- \bigl(S[n]-S[N] \bigr) - I \bigl(M^\Psi,n \bigr) \label{eqqevolv} \end{equation} for all $n \geq N$, where $A[n]$ and $S[n]$ are the cumulative numbers of arrival and service tokens by slot $n$, respectively. The second claim follows by multiplying both sides of equation~(\ref{eqqevolv}) by $\frac{1}{n}$, and using the facts that $\lim_{n\to\infty}\frac {1}{n}A[n]=\frac{\lambda}{\lambda+1-p}$ and $\lim_{n\to\infty }\frac{1}{n}S[n]=\frac{1-p}{\lambda+1-p}$ a.s., that $\widetilde {Q}[n]\geq0$ for all $n$, and that $\widetilde{Q}[N]<\infty$ a.s. \subsection{\texorpdfstring{Proof of Lemma \protect\ref{lemqmi0}.}{Proof of Lemma 4}}\label{secA.2} \label{applemqmi0} (1)~Recall the point-wise deletion map, $D_P (Q,n )$, defined in Definition~\ref{defdelMap}. 
For any initial sample path $Q^0$, let $Q^1 = D_P(Q^0,m)$ for some $m \in\mathbb{N}$. It is easy to see that, for all $n>m$, $Q^1[n] = Q^0[n]-1$ if and only if $Q^0[s]\geq1$ for all $s \in\{m+1,\ldots, n \}$. Repeating this argument $I(M,n)$ times, we have that \begin{equation} Q[n] = \widetilde{Q}[n+m_1] = Q^0[n+m_1] - I (M,n+m_1 ), \label{eqQnevol1} \end{equation} if and only if for all $k \in\{1,\ldots,I(M,n+m_1) \}$, \begin{equation} Q^0[s] \geq k\qquad\mbox{for all $s\in\{m_k+1,\ldots,n+m_1 \}$}. \label{eqqmi0} \end{equation} Note that equation~(\ref{eqqmi0}) is implied by (and in fact, equivalent to) the definition of the $m_k$'s (Definition~\ref{defnob}), namely, that for all $k\in\mathbb{N}$, $Q^0[s]\geq k$ for all $s\geq m_k+1$. This proves the first claim. (2)~Suppose $Q[n]=Q[n-1]=0$. Since $\mathbb{P} (Q^0[t] \neq Q^0[t-1] \mid Q^0[t-1]>0 )=1$ for all $t\in\mathbb{N}$ [cf. equation~(\ref{eqQ0trans1})], at least one deletion must occur in the slots $ \{n-1+m_1,n+m_1 \}$. If the deletion occurs on $n+m_1$, we are done. Suppose a deletion occurs on $n-1+m_1$. Then $Q^0[n+m_1]\geq Q^0[n-1+m_1]$, and hence \[ Q^0[n+m_1]= Q^0[n-1+m_1]+1, \] which implies that a deletion must also occur on $n+m_1$, for otherwise $Q[n]=Q[n-1]+1 = 1 \neq0$. This shows that $n=m_i-m_1$ for some $i\in \mathbb{N}$. Now, suppose that $n=m_i-m_1$ for some $i\in\mathbb{N}$. Let \begin{equation} n_k = \inf\bigl\{n\in\mathbb{N}\dvtx Q^0[n]=k\mbox{ and } Q^0[t]\geq k,\ \forall t\geq n \bigr\}. \end{equation} Since the random walk $Q^0$ is transient, and the magnitude of its step size is at most~$1$, it follows that $n_k<\infty$ for all $k\in \mathbb{N}$ a.s., and that $m_k= n_k$, $\forall k\in\mathbb{N}$. We have \begin{eqnarray} \label{eqQnevol2} Q[n] &\stackrel{\mathrm{(a)}} {=} & Q^0 [n+m_1]-I (M,n+m_1) \nonumber \\ &= & Q^0[m_i] - I (M,m_i ) \nonumber\\[-8pt]\\[-8pt] &\stackrel{\mathrm{(b)}} {=} & Q^0[n_i] - i\nonumber \\ &= & 0,\nonumber \end{eqnarray} where (a) follows from equation~(\ref{eqQnevol1}) and (b) from the fact that $n_i=m_i$. To show that $Q[n-1]=0$, note that since $n=m_i-m_1$, an arrival must have occurred in $Q^0$ on slot $m_i$, and hence $Q^0[n-1+m_1]=Q^0[n+m_1]-1$. Therefore, by the definition of $m_i$, \[ Q^0[t] -Q^0[n-1+m_1]= \bigl(Q^0[t]-Q^0[n+m_1] \bigr)+1 \geq0\qquad\forall t\geq n+m_1, \] which implies that $n-1 = m_{i-1}-m_1$, and hence $Q[n-1]=0$, in light of equation~(\ref{eqQnevol2}). This proves the claim. (3)~For all $n\in\mathbb{Z}_{+}$, we have \begin{eqnarray} Q[n] &=& Q [m_{I (M,n+m_1 )}-m_1 ]+ \bigl(Q[n]-Q [m_{I (M,n+m_1 )}-m_1 ] \bigr) \nonumber \\ &\stackrel{\mathrm{(a)}} {=}& Q[n]-Q [m_{I (M,n+m_1 )}-m_1 ] \nonumber\\[-8pt]\\[-8pt] &\stackrel{\mathrm{(b)}} {=}& Q^0[n+m_1]-Q^0 [m_{I (M,n+m_1 )} ]\nonumber \\ &\stackrel{\mathrm{(c)}} {\geq} & 0,\nonumber \end{eqnarray} where (a) follows from the second claim [cf. equation~(\ref {eqQnevol2})], (b) from the fact that there is no deletion on any slot in $ \{m_{I (M,n+m_1 )}+1,\ldots, n+m_1 \}$ and (c) from the fact that $n+m_1\geq m_{I (M,n+m_1 )}$ and equation~(\ref{eqmiinc}). \subsection{\texorpdfstring{Proof of Lemma \protect\ref{lemdualRW}.}{Proof of Lemma 5}}\label{applemdualRW} Since the random walk $X$ lives in $\mathbb{Z}_{+}$ and can take jumps of size at most $1$, it suffices to verify that \[ \mathbb{P} \Bigl(X[n+1]=x_1+1 | X[n]=x_1, \min _{r \geq n+1} X[r] = 0 \Bigr) = 1-q \] for all $x_1\in\mathbb{N}$. 
We have \begin{eqnarray} \label{eqXcond1} && \mathbb{P} \Bigl(X[n+1]=x_1+1 | X[n]=x_1, \min _{r \geq n+1} X[r] = 0 \Bigr) \nonumber \\ &&\qquad = \mathbb{P} \Bigl(X[n+1]=x_1+1, \min_{r \geq n+1} X[r] = 0 | X[n]=x_1 \Bigr)\nonumber \\ &&\quad\qquad{} \bigm/ \mathbb{P} \Bigl(\min_{r \geq n+1} X[r] = 0 | X[n]=x_1 \Bigr) \nonumber \\ &&\qquad \stackrel{\mathrm{(a)}} {=} \Bigl(\mathbb{P} \bigl(X[n+1]=x_1+1 | X[n]=x_1 \bigr) \\ &&\hspace*{36pt}{}\times \mathbb{P} \Bigl(\min_{r \geq n+1} X[r] = 0 | X[n+1]=x_1+1 \Bigr)\Bigr)\nonumber \\ &&\quad\qquad{}\bigm / \mathbb{P} \Bigl(\min_{r \geq n+1}X[r] = 0 | X[n]=x_1 \Bigr)\nonumber \\ &&\qquad \stackrel{\mathrm{(b)}} {=} q\cdot\frac{h(x_1+1)}{h(x_1)},\nonumber \end{eqnarray} where \[ h(x) = \mathbb{P} \Bigl(\min_{r \geq2} X[r] = 0 | X[1]=x \Bigr) \] and steps (a) and (b) follow from the Markov property and stationarity of $X$, respectively. For $x\geq1$, $h(x)$ is simply the probability that the walk, started from $x$, ever reaches $0$; the values $ \{h(x)\dvtx x\geq1 \}$ satisfy the harmonic equations \begin{equation} \label{eqharm1} h(x)=q\cdot h(x+1)+(1-q)\cdot h (x-1),\qquad x\geq2, \end{equation} with the boundary condition \begin{equation} \lim_{x\to\infty} h(x) =0. \label{eqharm2} \end{equation} Solving equations~(\ref{eqharm1}) and (\ref{eqharm2}) (a standard gambler's-ruin computation) yields \[ h(x) = \biggl(\frac{1-q}{q} \biggr)^x\qquad\mbox{for all } x\geq1. \] By equation~(\ref{eqXcond1}), this implies that, for all $x_1\in\mathbb{N}$, \[ \mathbb{P} \Bigl(X[n+1]=x_1+1 | X[n]=x_1, \min _{r \geq n+1} X[r] = 0 \Bigr) = q\cdot\frac{1-q}{q}=1-q, \] which proves the claim. \subsection{\texorpdfstring{Proof of Lemma \protect\ref{lemtopk}.}{Proof of Lemma 8}}\label{secA.4} \label{applemtopk} By the definition of $\widebar{F}{}^{ -1}_{X_1}$ and the strong law of large numbers (SLLN), we have \begin{equation}\quad \lim_{n \to\infty} \frac{1}{n}\sum _{i=1}^n \mathbb{I} \bigl(X_i \geq \widebar{F}{}^{ -1}_{X_1}(\alpha) \bigr) = \mathbb{E} \bigl( \mathbb{I} \bigl(X_1 \geq\widebar{F}{}^{ -1}_{X_1}( \alpha) \bigr) \bigr) < \alpha\qquad\mbox{a.s.} \label{eqslln1} \end{equation} Denote by $S_{n,k}$ the set of the $k$ largest elements in $ \{X_i\dvtx 1\leq i \leq n \}$. By equation~(\ref{eqslln1}) and the fact that $H_n \lesssim\alpha n$ a.s., we have \[ \mathbb{P} \bigl\{ \exists N \mbox{ s.t. } \min S_{n,H_n} \geq \widebar{F}{}^{ -1}_{X_1}(\alpha),\ \forall n\geq N \bigr\} = 1, \] which implies that \begin{eqnarray} && \limsup_{n \to\infty} f \bigl( \{X_i\dvtx 1\leq i \leq n \},H_n \bigr) \nonumber \\ &&\qquad \leq \limsup_{n\to\infty} \frac{1}{n}\sum _{i=1}^n X_i\cdot\mathbb{I} \bigl(X_i \geq\widebar{F}{}^{ -1}_{X_1}(\alpha) \bigr) \\ &&\qquad = \mathbb{E} \bigl(X_1\cdot\mathbb{I} \bigl(X_1\geq \widebar{F}{}^{ -1}_{X_1}(\alpha) \bigr) \bigr)\qquad\mbox{a.s.},\nonumber \end{eqnarray} where the last equality follows from the SLLN. This proves our claim. \subsection{\texorpdfstring{Proof of Lemma \protect\ref{lemhK}.}{Proof of Lemma 10}}\label{applemhK} We begin by stating the following fact: \begin{lem} \label{lemmaxexpo} Let $ \{X_i\dvtx i\in\mathbb{N} \}$ be i.i.d. random variables taking values in $\mathbb{R}_+$, such that for some $a,b>0$, $\mathbb {P} (X_1\geq x )\leq a\cdot\exp(-b\cdot x)$ for all $x\geq 0$. Then \[ \max_{1\leq i \leq n} X_i = o(n)\qquad\mbox{a.s.} \] as $n\to\infty$. 
\end{lem} \begin{pf} For each $i\in\mathbb{N}$, we have \[ \mathbb{P} \biggl(X_i \geq\frac{2}{b}\ln i \biggr) \leq a\cdot\exp(-2\ln i ) = \frac{a}{i^2}, \] which is summable over $i$. By the Borel--Cantelli lemma, almost surely, $X_i < \frac{2}{b}\ln i$ for all but finitely many $i$, and hence \[ \max_{1\leq i \leq n} X_i = \mathcal{O} (\ln n ) = o(n)\qquad\mbox{a.s.} \] as $n\to\infty$, which proves the claim. \end{pf} Since the $|E_i|$'s are i.i.d. with $\mathbb{E} (|E_1| )=\frac{\lambda+1-p}{\lambda-(1-p)}$ (Proposition~\ref{proppnobperf}), we have that, almost surely, \begin{equation} \qquad m^\Psi_K = m^\Psi_1+\sum_{i=1}^{K-1} |E_i|\sim\mathbb{E} \bigl(|E_1| \bigr)\cdot K = \frac{\lambda+1-p}{\lambda-(1-p)} \cdot K\qquad\mbox{as $K \to\infty$} \label{eqmnbKasymp1} \end{equation} by the strong law of large numbers. By Lemma~\ref{lemmaxexpo} and equation~(\ref{eqEiExp}), we have \begin{equation} \max_{1\leq i \leq K}|E_i| = o(K)\qquad\mbox{a.s.} \label{eqEsmallK} \end{equation} as $K\to\infty$. By equation~(\ref{eqEsmallK}) and the fact that $I (M^\Psi, m^\Psi_K )=K$, we have \begin{eqnarray} \label{eqsmalloK1} K - I \bigl(M^\Psi,l \bigl(m^\Psi_K \bigr) \bigr) &=& K - I \Bigl(M^\Psi,m^\Psi_K - \max _{1\leq i \leq K}|E_i| \Bigr) \nonumber \\ &\stackrel{\mathrm{(a)}} {\leq}& K - I \bigl(M^\Psi,m^\Psi_K \bigr) + \max_{1\leq i \leq K}|E_i| \nonumber\\[-8pt]\\[-8pt] &=& \max_{1\leq i \leq K}|E_i| \nonumber \\ &=& o (K )\qquad\mbox{a.s.}\nonumber \end{eqnarray} as $K\to\infty$, where (a) follows from the fact that at most one deletion can occur in a single slot, and hence $I(M,n+m)\leq I(M,n)+m$ for all $m,n\in\mathbb{N}$. Since $\widetilde{M}$ is feasible, \begin{equation} I (\widetilde{M},n ) \lesssim\frac{p}{\lambda+1-p}\cdot n \label{eqtilMratio} \end{equation} as $n\to\infty$. We have \begin{eqnarray} h(K)&=& \bigl(K - I \bigl(M^\Psi,l \bigl(m^\Psi_K \bigr) \bigr) \bigr) + \bigl(I \bigl(\widetilde{M}, m^\Psi_K \bigr)-I \bigl(M^\Psi, m^\Psi_K \bigr) \bigr) \nonumber \\ &\stackrel{\mathrm{(a)}} {\lesssim}& \bigl(K - I \bigl(M^\Psi,l \bigl(m^\Psi_K \bigr) \bigr) \bigr) + \frac{p}{\lambda+1-p} \cdot m^\Psi_K - K \nonumber \\ &\stackrel{\mathrm{(b)}} {\sim} & \biggl(\frac{p}{\lambda+1-p}\cdot\frac {\lambda+1-p}{\lambda-(1-p)} -1 \biggr)\cdot K \nonumber \\ &= & \frac{1-\lambda}{\lambda-(1-p)} \cdot K\qquad\mbox{a.s.} \nonumber \end{eqnarray} as $K \to\infty$, where (a) follows from equations~(\ref {eqmnbKasymp1}) and (\ref{eqtilMratio}), (b) from equations~(\ref {eqmnbKasymp1}) and (\ref{eqsmalloK1}), which completes the proof. \section{\texorpdfstring{Applications to resource pooling}{Applications to resource pooling}}\label{secresourcepooling} We discuss in this section some of the implications of our results in the context of a multi-server model for resource pooling \cite{TX12}, illustrated in Figure~\ref{figpooling}, which has partially motivated our initial inquiry. We briefly review the model in \cite{TX12} below, and the reader is referred to the original paper for a more rigorous description. Fix a coefficient $p\in[0,1]$. The system consists of $N$ stations, each of which receives an arrival stream of jobs at rate $\lambda\in(0,1)$ and has one queue to store the unprocessed jobs. The system has a total processing capacity of $N$ jobs per unit time, which is divided between two types of servers. 
Each queue is equipped with a \emph {local server} of rate $1-p$, which is capable of serving only the jobs directed to the respective station. All stations share a \emph{central server} of rate $pN$, which always fetches a job from the most loaded station, following a longest-queue-first (LQF) scheduling policy. In other words, a fraction $p$ of the total processing resources is being \emph{pooled} in a centralized fashion, while the remainder is distributed across individual stations. All arrival and service token generation processes are assumed to be Poisson and independent from one another (similarly to Section~\ref{secmodel}). A main result of \cite{TX12} is that even a small amount of resource pooling (small but positive $p$) can have significant benefits over a fully distributed system ($p=0$). In particular, for any $p>0$, and in the limit as the system size $N \to\infty$, the average delay across the whole system scales as $\sim\log_{1/(1-p)}{\frac{1}{1-\lambda }}$, as $\lambda\to1$; note that this is the same scaling as in Theorem~\ref{teoonline}. This is an exponential improvement over the scaling of ${\sim}\frac{1}{1-\lambda}$ when no resource pooling is implemented, that is, when $p=0$. We next explain how our problem is connected to the resource pooling model described above, and how the current paper suggests that the results in \cite{TX12} can be extended in several directions. Consider a similar $N$-station system as in \cite{TX12}, with the only difference being that instead of the central server fetching jobs from the local stations, the central server simply fetches jobs from a ``central queue,'' which stores jobs redirected from the local stations (see Figure~\ref{figpoolingcqueue}). Denote by $ \{R_i(t)\dvtx t\in \mathbb{R}_{+} \}$, $i\in\{1,\ldots, N\}$, the counting process where $R_i(t)$ is the cumulative number of jobs redirected to the central queue from station $i$ by time $t$. Assume that $\limsup_{t \to\infty} \frac{1}{t}R_i(t) = p-\varepsilon$ almost surely for all $i\in\{1,\ldots,N\}$, for some $\varepsilon>0$.\footnote{Since the central server runs at rate $pN$, the rate of $R_i(t)$ cannot exceed $p$, assuming it is the same across all $i$. } From the perspective of the central queue, it receives an arrival stream $R^N$, created by merging $N$ redirection streams, $R^N(t) = \sum_{i=1}^N R_i(t)$. The process $R^N$ is of rate $(p-\varepsilon)N$, and it is served by a service token generation process of rate $pN$. The traffic intensity of the central queue (arrival rate divided by service rate) is therefore $\rho_c = (p-\varepsilon)N/pN=1-\varepsilon /p<1$. Denote by $Q^N\in\mathbb{Z}_{+}$ the length of the central queue in steady-state. Suppose that it can be shown that\footnote{For an example where this is true, assume that every local station adopts a randomized rule and redirects an incoming job to the central queue with probability $\frac{p-\varepsilon}{\lambda}$ [and that $\lambda$ is sufficiently close to $1$ so that $\frac{p-\varepsilon}{\lambda}\in (0,1)$]. Then $R_i(t)$ is a Poisson process, and by the merging property of Poisson processes, so is $R^N(t)$. This implies that the central queue is essentially an $M/M/1$ queue with traffic intensity $\rho_c = (p-\varepsilon)/p$, and we have that $\mathbb{E} (Q^N )= \frac{\rho_c}{1-\rho_c}$ for all $N$. } \begin{equation} \limsup_{N\to\infty} \mathbb{E} \bigl(Q^N \bigr) < \infty. 
Denote by $Q^N\in\mathbb{Z}_{+}$ the length of the central queue in steady-state. Suppose that it can be shown that\footnote{For an example where this is true, assume that every local station adopts a randomized rule and redirects an incoming job to the central queue with probability $\frac{p-\varepsilon}{\lambda}$ [and that $\lambda$ is sufficiently close to $1$ so that $\frac{p-\varepsilon}{\lambda}\in (0,1)$]. Then $R_i(t)$ is a Poisson process, and by the merging property of Poisson processes, so is $R^N(t)$. This implies that the central queue is essentially an $M/M/1$ queue with traffic intensity $\rho_c = (p-\varepsilon)/p$, and we have that $\mathbb{E} (Q^N )= \frac{\rho_c}{1-\rho_c}$ for all $N$.} \begin{equation} \limsup_{N\to\infty} \mathbb{E} \bigl(Q^N \bigr) < \infty. \label{eqcentralfinite} \end{equation} A key consequence of equation~(\ref{eqcentralfinite}) is that, for large values of $N$, $Q^N$ becomes negligible in the calculation of the system's average queue length: the average queue length across the whole system coincides with the average queue length among the \emph{local} stations, as $N\to\infty$. In particular, this implies that, in the limit of $N\to\infty$, the task of scheduling for the resource pooling system could alternatively be implemented by running a separate admissions control mechanism, with the rate of redirection equal to $p-\varepsilon$, where all redirected jobs are sent to the central queue, provided that the streams of redirected jobs ($R_i(t)$) are sufficiently well behaved so that equation~(\ref{eqcentralfinite}) holds. This is essentially the justification for the equivalence between the resource pooling and admissions control problems, discussed at the beginning of this paper (Section~\ref{secintroadminresourcepool}). With this connection in mind, several implications follow readily from the results in the current paper, two of which are given below: \begin{longlist}[(2)] \item[(1)] The original longest-queue-first scheduling policy employed by the central server in \cite{TX12} is \emph{centralized}: each fetching decision of the central server requires the full knowledge of the queue lengths at all local stations. However, Theorem~\ref{teoonline} suggests that the same system-wide delay scaling in the resource pooling scenario could also be achieved by a \emph{distributed} implementation: each server simply runs the same threshold policy, $\pi_{\mathrm{th}}^{L (p-\varepsilon,\lambda)}$, and routes all deleted jobs to the central queue. To prove this rigorously, one needs to establish the validity of equation~(\ref{eqcentralfinite}), which we will leave as future work. \item[(2)] A fairly tedious stochastic coupling argument was employed in \cite{TX12} to establish a matching lower bound for the $\sim\log _{1/(1-p)}{\frac{1}{1-\lambda}}$ delay scaling, by showing that the performance of the LQF policy is no worse than that of any other online policy. Instead of using stochastic coupling, the lower bound in Theorem~\ref{teoonline} immediately implies a lower bound for the resource pooling problem in the limit of $N\to\infty$, if one assumes that the central server adopts a \emph{symmetric} scheduling policy, where it does not distinguish between two local stations beyond their queue lengths.\footnote{This is a natural family of policies to study, since all local servers, with the same arrival and service rates, are indeed identical.} To see this, note that the rates of the $R_i(t)$ are identical under any symmetric scheduling policy, which implies that each must be less than $p$. Therefore, the lower bound derived for the admissions control problem on a \emph{single queue} with a redirection rate of $p$ automatically carries over to the resource pooling problem. Note that, unlike the previous item, this lower bound does not rely on the validity of equation~(\ref{eqcentralfinite}). \end{longlist} Both observations above exploit the equivalence of the two problems in the regime of $N\to\infty$. With the same insight, one could also potentially generalize the delay scaling results in \cite{TX12} to scenarios where the arrival rates to the local stations are nonuniform, or where future information is available.
Both extensions seem difficult to accomplish using the original framework of \cite{TX12}, which is based on a fluid model that heavily exploits the symmetry in the system. On the downside, however, the results in this paper tell us very little when the system size $N$ is \emph{small}, in which case it is highly conceivable that a centralized scheduling rule, such as the longest-queue-first policy, can outperform a collection of decentralized admissions control rules. \end{appendix} \section*{\texorpdfstring{Acknowledgment.}{Acknowledgment}}\label{sec9} The authors are grateful for the anonymous reviewer's feedback.
\section{Introduction} Liquid ammonia is particularly well-known as a solvent which sustains long-lived solvated electrons formed by the dissolution of alkali metals~\cite{Zurek2009/10.1002/anie.200900373}. Recently, we used the flexible combination of refrigerated liquid microjet X-ray photoelectron spectroscopy~\cite{Faubel1997/10.1063/1.474034,Buttersack2019/10.1063/1.5141359} (XPS) and advanced \textit{ab initio} calculations to characterize the electronic structure of neat liquid ammonia~\cite{Buttersack2019/10.1021/jacs.8b10942} as well as the alkali metal solutions~\cite{Buttersack2020/10.1126/science.aaz7607}. In the latter, the hallmark XPS feature of the solvated electron is located at the electron binding energy of $-$2.0~eV relative to the vacuum level, and its concentration dependence was used to experimentally map the electrolyte-to-metal transition~\cite{Buttersack2020/10.1126/science.aaz7607}. Solvated electrons, essentially localized electrons bound in cavities formed within the solvent structure, act as powerful chemical reducing agents and, as such, find applications in numerous organic reductions. Arguably, the best known example is the Birch reduction of benzene in the environment of solvated electrons with the addition of an aliphatic alcohol~\cite{Birch1946/10.1038/158585c0}. During the course of the reaction, the solvated electron binds to the benzene molecule, forming the benzene radical anion as the first reactive intermediate. This chemical role of the benzene radical anion as well as its prominent position as the simplest example of an aromatic anion has prompted several experimental~\cite{Shida1973/10.1021/ja00792a005,Tuttle1958/10.1021/ja01553a005,Moore1981/10.1021/j150604a010,Sanche1973/10.1063/1.1679228} and theoretical~\cite{Hinde1978/10.1021/ja00483a010,Bazante2015/10.1063/1.4921261} studies of the species in the past. A particularly intriguing conclusion that arises from these studies is that the stability of the species is environment-dependent. In particular, the isolated benzene radical anion represents an unbound metastable shape resonance with a lifetime on the femtosecond time scale, which was consistently demonstrated both by \textit{ab initio} calculations~\cite{Hinde1978/10.1021/ja00483a010, Bazante2015/10.1063/1.4921261} and by electron scattering experiments~\cite{Sanche1973/10.1063/1.1679228} in the gas phase. In contrast, the feasibility of the Birch reduction and various spectroscopic experiments performed in different polar solvents~\cite{Tuttle1958/10.1021/ja01553a005,Shida1973/10.1021/ja00792a005,Moore1981/10.1021/j150604a010}, which measure the species over extended time scales, imply the stability of the electronic structure of the benzene radical anion as well as its thermodynamic stability in the context of a chemical equilibrium with solvated electrons~\cite{Marasas2003/10.1021/jp026893u}. In addition to the non-trivial behavior of the electronic structure with respect to solvation, the presence of an excess electron in an initially energetically degenerate quantum state gives rise to a dynamic multimode $E \otimes e$ Jahn--Teller (JT) effect~\cite{OBrien1993/10.1119/1.17197,Bersuker2006}, which results in complex behavior of the electronic structure as well as the molecular geometry.
In particular, the optimal molecular structure of the benzene radical anion is not a highly symmetric hexagonal one like that of the neutral benzene parent molecule, but is rather represented by a continuum of lower-symmetry structures that form the so-called pseudorotation path~\cite{Bazante2015/10.1063/1.4921261}. Anticipating a future XPS measurement of the benzene radical anion as a natural continuation of the metal-ammonia solutions research, we have previously investigated the benzene radical anion in a liquid ammonia solution using computational methods, with the aim of shedding light on its structure, dynamics, and spectroscopy and of providing a theoretical basis to aid the interpretation of various experimental data. In our original work~\cite{Brezina2020/10.1021/acs.jpclett.0c01505}, we performed \textit{ab initio} molecular dynamics (AIMD) of the explicitly solvated anion under periodic boundary conditions. These simulations were realized at the hybrid density functional theory (DFT) level of electronic structure, which we have shown to be, despite its high computational cost, a necessary methodological component to obtain a physically meaningful description of the benzene radical anion. At this level of theory, the excess electron spontaneously localizes on the benzene ring and remains stable for the length of the simulation, indicating the presence of a bound electronic state. Based on these simulations, we then addressed the structure of the solute and tracked the systematic geometry distortions and pseudorotation due to the dynamic JT effect that persist in the thermalized bulk system. More recently, we approached the problem of the solvent-induced stability of the benzene radical anion from the point of view of molecular clusters derived from the original condensed-phase AIMD simulations~\cite{Kostal2021/10.1021/acs.jpca.1c04594}. In that study, we calculated the excess electron vertical binding energy using explicit ionization in clusters of increasing size and found results ranging from $-$2.0 to $-$3.0~eV at the infinite cluster size limit, depending on the specific methodology. The present work aims to shed light on the electronic structure of the benzene radical anion by employing advanced electronic structure calculations and analysis performed on our original AIMD thermal geometries. In the spatial domain, we describe the probability distribution of the excess electron and its correlation with the underlying JT distortions of molecular geometry using unsupervised machine learning methods~\cite{Glielmo2021/10.1021/acs.chemrev.0c01195}. These methods have been used to analyze molecular dynamics trajectories and characterize representative molecular configurations of the studied systems in a bias-free way~\cite{Gasparotto2014/10.1063/1.4900655,Cerriotti2011/10.1073/pnas.1108486108}. Here, we apply clustering analysis not only to the distribution of nuclear configurations of the benzene radical anion, but also use it in conjunction with dimensionality reduction to characterize the electronic structure. Then, in the energy domain, we aim to predict the binding energies of all the valence electrons in the studied system, which can be directly compared to XPS data.
To avoid the unphysical orbital energies directly available from the AIMD on-the-fly Kohn--Sham (KS) DFT electronic structure, we perform computationally demanding condensed-phase G$_0$W$_0$ calculations~\cite{Huser2013/10.1103/PhysRevB.87.235132,Wilhelm2016/10.1021/acs.jctc.6b00380} on the AIMD geometries to predict the electronic densities of states (EDOS). To better understand the contributions of the individual species in the system in question, we additionally employ an approach which projects the EDOS on local atomic orbitals to resolve the calculated data by species and in space. The rest of this paper is organized as follows. In Section~\ref{sec:methodology}, we discuss the details of the performed simulations and calculations and describe the technical foundations of the employed analysis. The main findings are then presented and discussed in Section~\ref{sec:results}. There, we first focus on the results pertaining to the JT effect on the electronic structure and its correlation with the underlying molecular geometry. Then, we move on to the energetics of the electronic structure and the question of the stability and binding energy of the solvated benzene radical anion. These results are referenced against those for neutral benzene solvated in liquid ammonia and neat liquid ammonia itself. Finally, we summarize our results and draw conclusions in Section~\ref{sec:conclusions}. \section{Methodology} \label{sec:methodology} \subsection{AIMD simulations} The original AIMD simulations of the benzene radical anion and neutral benzene in liquid ammonia under periodic boundary conditions were realized using the CP2K 5.1 package~\cite{Hutter2014/10.1002/wcms.1159, Guidon2008/10.1063/1.2931945,Guidon2009/10.1021/ct900494g} and its Gaussian and plane wave~\cite{Lippert1997/10.1080/002689797170220} electronic structure module Quickstep~\cite{Vandevondele2005/10.1016/j.cpc.2004.12.014}. Both simulated systems consisted of one solute molecule and 64 solvent molecules in a cubic box of a fixed side length of 13.745~\AA\ and 13.855~\AA\ for the benzene radical anion and neutral benzene, respectively. The nuclei were propagated with a 0.5~fs time step in the canonical ensemble at 223~K using the stochastic velocity-rescaling thermostat~\cite{Bussi2007/10.1063/1.2408420}. The electronic structure was calculated using the revPBE0-D3 hybrid density functional~\cite{Perdew1996/10.1103/PhysRevLett.77.3865, Zhang1998/10.1103/PhysRevLett.80.890, Adamo1999/10.1063/1.478522, Goerigk2011/10.1039/c0cp02984j} to limit the self-interaction error, as required for the localization of the excess electron and the stability of the benzene radical anion~\cite{Brezina2020/10.1021/acs.jpclett.0c01505}. The KS wavefunctions were expanded into the TZV2P primary basis set~\cite{VandeVondele2007/10.1063/1.2770708}, while the density was expanded in an auxiliary plane-wave basis with a 400~Ry cutoff. GTH pseudopotentials~\cite{Goedecker1996/10.1103/PhysRevB.54.1703} were used to represent the core $1s$ electrons of the heavy atoms. Additionally, the auxiliary density matrix method~\cite{Guidon2010/10.1021/ct1002225} with the cpFIT3 auxiliary basis set~\cite{Guidon2010/10.1021/ct1002225} was used to accelerate the computationally demanding hybrid DFT electronic structure calculations. The total simulated time was 100~ps for both systems, each collected from five 20~ps trajectories initialized from decorrelated and equilibrated initial conditions.
\subsection{G$_0$W$_0$ calculations} In this work, we use the G$_0$W$_0$ method~\cite{Huser2013/10.1103/PhysRevB.87.235132,Wilhelm2016/10.1021/acs.jctc.6b00380}, which gives access to physically meaningful one-electron energy levels of the investigated condensed-phase, periodic systems. This is in contrast with the orbital energies of the underlying KS DFT, which should not formally be considered as one-electron energies. For each solute, these calculations are performed on top of 205 DFT-AIMD thermal structures extracted from the AIMD trajectories with a 0.5~ps stride, with the revPBE0-D3/TZV2P KS wavefunctions used as the starting point to obtain the corrected G$_0$W$_0$ energies. These calculations are realized using the CP2K package, version 7.1. The self-energy is described analytically over the real frequency axis using the Padé approximation, and the Newton--Raphson fixed-point iteration is employed for the numerical solution of the corresponding algebraic equations. The influence of periodic boundary conditions on the G$_0$W$_0$ energies is minimized by employing a periodicity correction scheme~\cite{Wilhelm2017/10.1103/PhysRevB.95.235123}. The resulting EDOS, obtained as the distribution of the G$_0$W$_0$ energies, is described as a continuous probability density function through the kernel density estimation method using a Gaussian kernel with a 0.02~eV bandwidth. The G$_0$W$_0$ calculations performed in periodic boundary conditions do not directly provide the absolute values of electron binding energies due to the absence of the explicit liquid-vacuum boundary. Thus, to access the absolutely positioned EDOS, the whole spectrum must be shifted on the energy axis by a suitable constant. In other works, this was achieved by auxiliary slab calculations that provide an estimate of the shift~\cite{Ambrosio2018/10.1021/acs.jpclett.8b00891,Gaiduk2018/10.1038/s41467-017-02673-z}. In our previous work on neat liquid ammonia combining G$_0$W$_0$ calculations with liquid XPS~\cite{Buttersack2019/10.1021/jacs.8b10942,Faubel1997/10.1063/1.474034}, we aligned the average energy of the calculated liquid $\mathrm{3a_1}$ peak to $-$9.09~eV, the average of the same peak obtained experimentally. This bypassed the need for additional \textit{ab initio} calculations and facilitated the comparison of the whole spectrum between theory and experiment. Here, we exploit the fact that, as detailed in Section~\ref{sec:results}, the electronic perturbation of the liquid ammonia solvent by the presence of the benzene radical anion is minor. As such, the total EDOS was shifted to match the same experimental valence liquid ammonia peak as in our previous work. The value of the shift was determined from the mean energy of the $\mathrm{3a_1}$ ammonia peak of the total EDOS with the solute included (other options are discussed in Section~\ref{sec:additional-results} of the supplementary material).
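To make this post-processing concrete, the following sketch (our reconstruction, not the published workflow; the file names, the $\mathrm{3a_1}$ selection window, and the energy grid are placeholder assumptions) pools the quasiparticle energies over the analysed frames, applies the constant alignment shift, and evaluates the kernel density estimate:
\begin{verbatim}
import numpy as np
from sklearn.neighbors import KernelDensity

# Pool the G0W0 quasiparticle energies (eV) of all 205 analysed frames;
# the file names are placeholders.
energies = np.concatenate([np.loadtxt("g0w0_frame_%03d.dat" % i)
                           for i in range(205)])

# Align the spectrum: shift so that the mean of the ammonia 3a1 peak
# matches the experimental -9.09 eV.  The peak members are assumed to be
# pre-selected here by a simple energy window (an assumption; any robust
# peak assignment would do).
e_3a1 = energies[(energies > -11.0) & (energies < -8.0)]
energies_aligned = energies + (-9.09 - e_3a1.mean())

# Continuous EDOS via Gaussian kernel density estimation, 0.02 eV bandwidth.
kde = KernelDensity(kernel="gaussian", bandwidth=0.02)
kde.fit(energies_aligned.reshape(-1, 1))
grid = np.linspace(-30.0, 0.0, 3000).reshape(-1, 1)
edos = np.exp(kde.score_samples(grid))  # probability density on the grid
\end{verbatim}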
To gain insight into the contributions of individual chemical species to the total G$_0$W$_0$ EDOS, we decompose this quantity into separate densities for each species and address the differences between the neat ammonia data and the data from systems with solutes. Specifically, we rely on the original formulation of the projected density of states (PDOS) for KS orbitals~\cite{Hunt2003/10.1016/S0009-2614(03)00954-0}, which projects the total EDOS on the respective part of the atomic orbital basis set of every atom in the system individually. Extending the original approach, we use these projections for the G$_0$W$_0$-corrected binding energies, since the spatial orbitals are identical between KS DFT and G$_0$W$_0$. For each atom and each configuration, each G$_0$W$_0$ energy is assigned a weight based on the magnitude of the projection of the corresponding orbital on that atom. Naturally, these atomic contributions can be collected into molecular contributions as needed for each particular system. The total EDOS can be expressed as the following ensemble average over the contributing structures \begin{equation} \rho(E) = \left\langle \sum_n \delta(E - E_n) \right\rangle, \end{equation} where $E_n$ are the G$_0$W$_0$ one-electron energy eigenvalues and angle brackets denote an average over the ensemble of thermal structures. To decompose it, we use a projection on an atom-centered linear combination of atomic orbitals (LCAO) basis set $\{\ket{I\gamma}\}$. This basis satisfies the completeness relation over the spanned space \begin{equation} \sum_I \sum_{\gamma} \ket{I\gamma}\bra{I\gamma} = \hat{\mathbf{1}}, \end{equation} where the summation runs over all atoms $I$ and all additional quantum numbers $\gamma$ and $\hat{\mathbf{1}}$ denotes the identity operator. Using the orthonormality of the original KS orbitals $\ket{\psi_n}$ that remain unchanged during the G$_0$W$_0$ calculation, we can expand the total EDOS definition as a sum over atomic projections as \begin{equation} \begin{split} \rho(E) & = \left\langle \sum_n \braket{\psi_n}{\psi_n} \delta(E - E_n) \right\rangle \\ & = \left\langle \sum_n \sum_I \sum_\gamma \braket{\psi_n}{I\gamma}\braket{I\gamma}{\psi_n} \delta(E - E_n) \right\rangle \\ & = \sum_I \left\langle \sum_n \sum_\gamma |\braket{\psi_n}{I\gamma}|^2 \delta(E - E_n) \right\rangle \\ & \equiv \sum_I \langle S_I(E) \rangle \equiv \sum_I \rho_I (E), \end{split} \end{equation} where we have labeled the overlap-weighted kernel of the thermal average $S_I(E)$ and the whole thermally averaged atomic projection $\rho_I (E)$. These atomic projections can then be summed over arbitrary subsets of atoms to obtain a PDOS on any species in question. Moreover, we can further resolve the atomic contributions as a function of distance $r$ from a chosen point of reference as the following two-dimensional distribution \begin{equation} \rho(E, r) = \frac{1}{4\pi r^2 g(r)} \sum_I \langle S_I(E) \delta(r - r_I) \rangle, \end{equation} where $r_I$ is the distance of the $I$-th atom from the point of reference and the normalization factor in the denominator based on the radial distribution function $g(r)$ of the chosen species around the same point of reference ensures a uniform marginal distribution in $r$.
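The bookkeeping behind these projections can be sketched compactly for a single frame. In the snippet below, the overlap weights $|\braket{\psi_n}{I\gamma}|^2$, summed over $\gamma$ for each atom, are assumed to be precomputed (the array names are ours), and the delta functions are broadened with the same 0.02~eV Gaussian kernel used for the total EDOS:
\begin{verbatim}
import numpy as np

def pdos(energies, weights, atom_indices, grid, sigma=0.02):
    """Atom-projected DOS of a single frame.

    energies     : (n_states,) G0W0 energies E_n in eV
    weights      : (n_states, n_atoms) overlaps |<psi_n|I gamma>|^2
                   summed over gamma (assumed precomputed)
    atom_indices : atoms belonging to the species of interest
    grid         : (n_grid,) energy grid
    """
    w = weights[:, atom_indices].sum(axis=1)        # weight per state
    # Gaussian broadening replaces the delta functions delta(E - E_n).
    diff = grid[None, :] - energies[:, None]        # (n_states, n_grid)
    g = np.exp(-0.5 * (diff / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return (w[:, None] * g).sum(axis=0)

# Including all atoms recovers the total EDOS of the frame; the thermal
# average is the mean of the per-frame densities.
\end{verbatim}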
\subsection{Clustering Analysis} The clustering of the relevant feature space vectors is based on the Gaussian Mixture Model (GMM) as implemented in the scikit-learn Python library~\cite{Pedregosa2012/}. For the molecular geometries, we take as features all vibrational normal modes with JT-active symmetry, which naturally describe the distortions in an 8D configuration space. For the electronic structure, we use a high-dimensional abstract feature space that relies on a Fourier decomposition of the respective spin densities. Both feature spaces are described in detail in the following paragraphs. The GMM algorithm was chosen over the commonly used $k$-means clustering since it reaches a similar goal in a more flexible and general way and, moreover, yields a continuous parametrization of the obtained clusters in terms of high-dimensional Gaussian functions that can be used to evaluate the cluster membership probability. The full covariance in all dimensions was employed to account for possible spatial anisotropy of the clusters, and a tight convergence limit of $10^{-5}$ was used.
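In terms of the scikit-learn interface cited above, the clustering step amounts to the following sketch; the restart count and the random seed are our additions for reproducibility, the feature file is a placeholder, and the six components correspond to the electronic-structure case discussed below:
\begin{verbatim}
import numpy as np
from sklearn.mixture import GaussianMixture

# One row per AIMD snapshot; columns are either the 8 JT-active normal
# mode coordinates or the 82 Fourier coefficients of the spin density
# (see below).  The file name is a placeholder.
X = np.load("features.npy")

gmm = GaussianMixture(
    n_components=6,          # six D2h distortions in the electronic case
    covariance_type="full",  # full covariance for anisotropic clusters
    tol=1e-5,                # tight convergence limit
    n_init=10,               # several restarts (our addition)
    random_state=0,
)
labels = gmm.fit_predict(X)        # hard cluster assignments
posteriors = gmm.predict_proba(X)  # soft cluster-membership probabilities
\end{verbatim}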
\section{Results and Discussion} \label{sec:results} In the following paragraphs, we focus on the spatial character of the excess electron of the benzene radical anion using the spin density, an observable quantity obtained directly from an unrestricted Kohn--Sham (KS) DFT calculation. We aim at a description of the evolution of the spin density in the context of the condensed-phase JT effect, which governs the distortions of the underlying molecular geometry of the benzene radical anion solvated in liquid ammonia~\cite{Brezina2020/10.1021/acs.jpclett.0c01505}. Specifically, we ask if the molecular distortions correlate with the immediate shape of the spin density and, therefore, if information about the JT state of the solute can be extracted directly from the electronic structure of the solvated species, similarly to how it can be extracted from its molecular geometry. Later on, we turn our attention to the energetics of the electronic structure, predict electronic densities of states for the studied system, and discuss them in detail in the context of the question of the stability of the solvated benzene radical anion as well as from the perspective of interpretation of XPS data. \subsection{The Jahn--Teller Effect on the Molecular and Electronic Structure} The essence of the JT effect in the benzene radical anion is as follows. As the $D_{6h}$-symmetric benzene molecule accepts an excess electron, the formed degenerate $\mathrm{E_{2u}}$ electronic state of the radical anion becomes unstable, since it corresponds to a conical intersection between two adiabatic potential energy hypersurfaces (APESs). This instability is resolved by a symmetry-lowering distortion along the JT-active normal modes of $\mathrm{e_{2g}}$ symmetry, which brings the system into a minimum on the pseudorotational path on the lower branch of the JT-split APES. At the same time, the symmetry of the initial electronic state is reduced as well, with two new possible lower symmetries, $\mathrm{A_u}$ and $\mathrm{B_{1u}}$, corresponding to the ground state of the benzene radical anion in the two opposite distortions of the molecular geometry. \subsubsection{Clustering of Molecular Geometries} The natural coordinates to describe the molecular distortions are the four degenerate pairs of JT-active normal modes. These are adopted here, consistently with our previous work, from the vibrational normal modes of an optimized neutral benzene molecule, since it shares the same molecular structure and point group with the radical anion in its reference undistorted geometry. A physically meaningful observation of the JT pseudorotation can be made by averaging the full 8D data over all modes that do not exhibit a strong enough JT split to be observable in the thermal system. Thus, the pseudorotation can be represented as a 2D distribution in the pair of remaining $\mathrm{e_{2g}}$ modes at 1654~cm$^{-1}$, which show an appreciably strong JT effect. In this case, the free energy landscape of the pseudorotation valley is essentially flat, and the path around it is described by the pseudorotation angle $\theta = \operatorname{arctan2}(Q_y, Q_x)$, a scalar parameter which represents the polar angle in the 2D subspace of the relevant normal mode coordinates labeled as $Q_x$ and $Q_y$~\cite{Brezina2020/10.1021/acs.jpclett.0c01505}. In order to analyze the full 8D distribution, we applied the GMM clustering algorithm to the normal mode data set with the aim of finding representative distortions. However, unlike in the case of the electronic structure discussed in the following paragraphs, the resulting clustering of the data is not satisfactory for several reasons. Motivated by the threefold symmetry of the reference gas-phase APES, we attempted to separate the data into both three and six clusters. In both cases, clustering of comparable quality was obtained, which implies that there is no clear number of natural clusters in the data set. This is further supported by additional attempts to cluster the data into a number of clusters that does not respect the inherent symmetry of the problem: again, similar outputs were produced. Moreover, the clustering is generally not reproducible, and inconsistent positions of clusters are obtained each time. As a measure of clustering performance, we use silhouette coefficients, which range from $-$1 (wrong clustering) through 0 (poor clustering) to $+$1 (excellent clustering)~\cite{Rousseeuw1987/10.1016/0377-0427(87)90125-7}. If we cluster our data into three groups, the average silhouette coefficient does not exceed the value of $\sim$0.08, which quantifies the insufficient separation of the data (a silhouette plot is presented in Section~\ref{sec:additional-results} of the supplementary material). This demonstrated lack of clear separation in the molecular geometries suggests that the remaining modes do not bring much additional structure to the data set in comparison to the reduced 2D distribution in $Q_x$ and $Q_y$, and that the essentially flat character of the probability distribution around the pseudorotation valley generalizes to the full dimensionality. Therefore, we adhere to the simpler continuous parametrization by the pseudorotation angle $\theta$ to describe molecular distortions in the following analysis. \subsubsection{Spin density dimensionality reduction} \begin{figure}[tb!] \centering \includegraphics[width=\linewidth]{Spin-densities.pdf} \caption{The spin densities of the $\mathrm{A_{u}}$ and $\mathrm{B_{1u}}$ electronic states of the benzene radical anion. The presented idealized geometries and spin densities were obtained from a finite basis set gas-phase calculation at the hybrid DFT level, as used for the AIMD simulations; similar spin densities are, however, observed in the condensed-phase simulations. The positive deviations of the spin density are shown in green at two contours, 0.025~$a_0^{-3}$ (opaque) and 0.006~$a_0^{-3}$ (transparent), while the negative deviations are shown in purple at the same isovalues with a negative sign. The molecular structure of the benzene radical anion is shown in gray as a whole.} \label{fig:spin_density} \end{figure} To motivate the analysis of the electronic structure of the solvated species, we consider optimized gas-phase benzene radical anion structures where the excess electron is artificially localized due to a finite orbital basis set. The spin density distributions for the two distinct JT distortions are shown in Figure~\ref{fig:spin_density}.
The spin density of the $\mathrm{A_u}$ state (left) is characterized by four atom-centered maxima and two less pronounced minima localized along one of the $C_2$ symmetry axes; the $\mathrm{B_{1u}}$ spin density (right) exhibits two maxima localized on distal carbon atoms along the corresponding $C_2$ axis and two elongated bridge-like positive deviations over a pair of carbon--carbon bonds parallel with this $C_2$ axis. We also have to take into account that the high symmetry of the benzene molecular geometry allows for distortion in three equivalent directions corresponding to the three horizontal, apex-to-apex $C_2$ axes in the $D_{6h}$ point group. These distortions are represented by two sets of three equivalent stationary points around the point of high symmetry on the pseudorotation APES, each separated by a pseudorotation angle of 60$^\circ$ from its opposite-kind neighbors and by 120$^\circ$ from its pseudorotated images. As a consequence, three equivalent $\mathrm{A_u}$-type and three equivalent $\mathrm{B_{1u}}$-type spin densities exist which correspond to the six APES stationary points. The pseudorotation of the nuclear geometry between these minima is discussed in detail in Reference~\citenum{Brezina2020/10.1021/acs.jpclett.0c01505}; a video file illustrating the evolution of the spin density on top of the pseudorotating geometry in the idealized gas-phase case is included in the supplementary material and described in Section~\ref{sec:SI_videos}. Eventually, we want to analyze the thermal data in the condensed phase---the natural environment where the solvated benzene radical anion is electronically stable and where a physically relevant observation of the JT effect and the associated spin density can be made. For this purpose, we design a two-step dimensionality reduction procedure that represents the spin densities in a feature space of reasonable dimension in such a way that the two idealized JT-distorted cases can be distinguished. The periodicity of the spin density along the aromatic ring makes it advantageous to express its spatial dependence in terms of a local spherical coordinate system $r, \vartheta$ and $\varphi$ (where $\varphi$ is the polar angle ranging from 0 to $2\pi$). These coordinates are obtained by the usual transformation from a local Cartesian system in which the $x, y$-plane is represented by the molecular plane of the benzene radical anion and the $z$-axis by its normal, with the origin at the solute center of mass (see Section~\ref{sec:analysis-details} of the supplementary material for details). The spherical coordinates represent a natural description for the systems in question and allow us to reduce the dimensionality of the full spin density into a one-dimensional (1D) function by partial integration. As documented in Section~\ref{sec:additional-results} of the supplementary material, the 1D spin densities in $r$ and $\vartheta$ show practically perfect overlap for the two spin density types and thus bring no distinction between them. The information that distinguishes the two types is contained in the remaining 1D reduction, the spin density in $\varphi$, \begin{equation} \rho_\mathrm{s}(\varphi) = \int_{0}^{\pi} \int_{0}^{r_{\mathrm{max}}} \dd \vartheta \dd r \ r^2 \sin \vartheta \rho_\mathrm{s}(r, \vartheta, \varphi), \end{equation} which describes the character of the spin density around the benzene ring.
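Numerically, this partial integration is straightforward once the spin density has been resampled onto a regular spherical grid centered on the solute. A minimal sketch of the reduction to $\rho_\mathrm{s}(\varphi)$ under that assumption (the resampling itself is not shown):
\begin{verbatim}
import numpy as np

def spin_density_phi(rho, r, theta, phi):
    """Reduce a spin density sampled on a regular spherical grid
    rho[i_r, i_theta, i_phi] to the 1D profile rho_s(phi).

    The 1D grid arrays r and theta are assumed monotonic; the Jacobian
    r^2 sin(theta) implements the partial integration over r and theta."""
    jac = (r[:, None] ** 2) * np.sin(theta)[None, :]  # (n_r, n_theta)
    integrand = jac[:, :, None] * rho                 # (n_r, n_theta, n_phi)
    inner = np.trapz(integrand, r, axis=0)            # integrate over r
    return np.trapz(inner, theta, axis=0)             # ... then over theta
\end{verbatim}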
Its shape can be traced back to the spatial characteristics of the full spin densities through the respective sequence of the 1D maxima and minima along the aromatic ring, as shown for the idealized spin densities in Figure~\ref{fig:1D-spin_density}, top panel, solid lines. In terms of $\rho_\mathrm{s}(\varphi)$, the pseudorotation of each type of the full spin density by 120$^\circ$ translates simply into a 120$^\circ$ shift on the $\varphi$-axis. \begin{figure}[tb!] \centering \includegraphics[width=\linewidth]{1D-spin-densities.pdf} \caption{The relevant 1D spin densities $\rho_\mathrm{s}(\varphi)$ for the $\mathrm{A_u}$ and $\mathrm{B_{1u}}$ states (Figure~\ref{fig:spin_density}). The original functions are shown as black solid lines. The $N = 20$ Fourier-reconstructed curves are shown as dashed blue and orange lines, respectively. The bottom panel illustrates five samples equidistant in their degree $n$ from the employed $N = 20$ Fourier basis, with the sine components shown in dark gray and cosine components in light gray.} \label{fig:1D-spin_density} \end{figure} At this point, the 3D spin density is reduced to a 1D function that is still fully capable of distinguishing between the two spin density types. An additional level of simplification that opens the door to numerical analysis is achieved by mapping the continuous $2\pi$-periodic 1D spin densities onto discrete vectors by means of a Fourier series, noting that only the first few harmonics are necessary to achieve a highly accurate decomposition, as demonstrated by the dashed curves in the top panel of Figure~\ref{fig:1D-spin_density}. This set of Fourier coefficients clearly distinguishes the two idealized spin densities in relatively few dimensions. While the technical aspects of this step are discussed in detail in Section~\ref{sec:analysis-details} of the supplementary material, we note here that the Fourier decomposition was performed using the first 20 harmonics, yielding an 82-dimensional Euclidean feature vector for each spin density sample (a total of $2(2N+1)$ real coefficients are needed for a Fourier series counting $N$ harmonic functions).
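A sketch of this mapping is given below. Stacking the real and imaginary parts of the kept complex coefficients is one possible reading of the quoted dimensionality; the exact convention used is documented in the supplementary material:
\begin{verbatim}
import numpy as np

def fourier_features(rho_phi, n_harmonics=20):
    """Map equidistant samples of the 2*pi-periodic profile rho_s(phi)
    to a fixed-length Euclidean feature vector by keeping the complex
    Fourier coefficients c_k with |k| <= n_harmonics and stacking their
    real and imaginary parts: 2*(2N+1) = 82 numbers for N = 20."""
    n = len(rho_phi)
    c = np.fft.fft(rho_phi) / n          # complex coefficients c_k
    k = np.arange(n)
    k = np.where(k <= n // 2, k, k - n)  # signed harmonic indices
    kept = c[np.abs(k) <= n_harmonics]
    return np.concatenate([kept.real, kept.imag])
\end{verbatim}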
\subsubsection{Clustering of the Electronic Structure} We are now able to represent each spin density distribution in a compact way and can move to the analysis of the electronic structure of the condensed-phase system. A visual inspection of the trajectory~\cite{Brezina2020/10.1021/acs.jpclett.0c01505} of the solvated benzene radical anion clearly reveals the presence of two limiting spin density structures similar to the optimized ones. Therefore, we aim to perform an analysis that will allow us to divide the observed ensemble of condensed-phase spin densities into six categories, each centered around one of the limiting spin density structures and including the surrounding thermal population. Once this is established, one can examine the correlation between the immediate electronic structure and the underlying molecular geometry of the solute. To categorize the spin densities of the thermal solvated system, we turn again to GMM clustering to separate the data, now concisely represented as feature vectors constructed out of Fourier coefficients. GMM is able not only to split the data into natural clusters, but also to provide a continuous parametrization of each cluster through evaluation of posterior probabilities of cluster membership. Indeed, in this case, the data set splits cleanly into six clusters, as shown by the cluster silhouettes presented in Figure~\ref{fig:Silhouette-plot}, which average to a mean silhouette coefficient of $\sim$0.4 and contain no outliers for the $\mathrm{A_u}$ state and only a small number of outliers (negative silhouette coefficients) for the $\mathrm{B_{1u}}$ state. Additional clustering validation is documented in Section~\ref{sec:additional-results} of the supplementary material. The centers of the six clusters then correspond to the electronic structure at the six possible $D_{2h}$ distortions, and the population of each cluster corresponds to the thermal fluctuations around these minima. This is directly shown by summing up the Fourier series defined by the coordinates of the cluster centers to obtain new 1D spin densities. These exhibit physically meaningful properties such as close-to-reference shapes (such as those shown in Figure~\ref{fig:1D-spin_density}) and the expected 120$^\circ$ shifts within each type group (see the supplementary material, Section~\ref{sec:additional-results}). While these findings show that the excess electron structure is analogous to that found for the benzene radical anion in the gas phase using a comparable finite orbital basis set, it is important to keep in mind that such a system converges to an unbound state when the size of the basis set is increased. Only in the condensed phase is the species actually bound, with its JT effect observable and the electronic states physically meaningful and potentially experimentally measurable. \begin{figure}[tb!] \centering \includegraphics[width=\linewidth]{Silhouettes.pdf} \caption{Characterization of the clustering of the electronic structure by the means of silhouette plots. The top three clusters represent the $\mathrm{A_u}$ clusters, the bottom three the $\mathrm{B_{1u}}$ clusters.} \label{fig:Silhouette-plot} \end{figure} \subsubsection{Correlation between the electronic structure and molecular geometry} \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{Epsilon-theta-justpolar.pdf} \caption{Correlation of the electronic structure with the distortion of nuclear geometry. Each molecular distortion is characterized here by the value of the pseudorotation angle $\theta$. The distributions of $\theta$ weighted by the corresponding electronic parameters $p(\mathrm{A_u})$ (blue) and $p(\mathrm{B_{1u}})$ (orange) are shown in polar coordinates with an offset zero-distance.} \label{fig:JT} \end{figure} To quantify the correlation between the molecular structure and the spin density, we exploit the features of the trained Gaussian mixture model to assign to each spin density data point a posterior probability of belonging to a specific cluster. Thus, a generalized single-valued parameter $p(\mathrm{A_u})$, which can be defined as a sum over all $\mathrm{A_u}$-type cluster probabilities, gives the overall probability that a data point is of the $\mathrm{A_u}$-type, including all three possible pseudorotations. Clearly, the same can be done for the $\mathrm{B_{1u}}$-type clusters, and the identity for complementary probabilities, $p(\mathrm{A_u}) + p(\mathrm{B_{1u}}) = 1$, has to hold. Now, since each spin density data point has a unique molecular geometry associated with it, the proposed probability parameters can be directly correlated with the underlying molecular distortions characterized by the pseudorotation angle $\theta$ as defined above.
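In terms of the posterior probabilities returned by the fitted mixture model, these parameters amount to a few lines; which three components are of the $\mathrm{A_u}$ type must be identified from the cluster centers, so the index lists below are placeholders:
\begin{verbatim}
import numpy as np

# posteriors: (n_samples, 6) array from gmm.predict_proba above.  Which
# three columns are A_u-type is read off the cluster centers; the
# assignment below is a placeholder.
au_cols, b1u_cols = [0, 2, 4], [1, 3, 5]

p_au = posteriors[:, au_cols].sum(axis=1)    # p(A_u), any pseudorotation
p_b1u = posteriors[:, b1u_cols].sum(axis=1)  # p(B_1u), complementary
assert np.allclose(p_au + p_b1u, 1.0)
\end{verbatim}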
We use these electronic probability parameters to weight each point contributing to the probability distribution in $\theta$, which is originally almost uniform. This splits it into two distinct distributions, each with three well-defined peaks separated by a 120$^\circ$ increment. These are shown in Figure~\ref{fig:JT}, exploiting a representation in polar coordinates with an offset origin. The presented complementary distributions clearly show that the individual symmetries of the molecular distortions are accompanied by spin densities of the same type, as can be deduced from the fact that the distortion at $\theta = 0^\circ$ is uniquely identified with the distortion of molecular geometry corresponding to the $\mathrm{A_u}$ electronic state. It thus appears that the electronic character of the JT effect of the benzene radical anion in liquid ammonia closely follows the predicted gas-phase theory, while the solvent acts as a stabilizing, but non-perturbing environment. Due to the correlation shown in Figure~\ref{fig:JT}, we conclude that similar information about the JT effect can be extracted from the immediate spin density as well as from the immediate molecular geometry of the solute. Even though the molecular geometries undergo almost free pseudorotation with effectively no free energy barriers and cannot therefore be clustered into distinct populations of different pseudorotamers, the situation is different for the electronic state of the system. As it moves along the pseudorotation path, it transitions rather sharply between ground-state spin densities of the two possible symmetries, as revealed by our analysis. \subsection{Energetics of the Electronic Structure} At this point, we turn our attention to the energetics of the electronic structure of the whole studied system in terms of one-electron levels. The single-electron energies are calculated using the G$_0$W$_0$ method~\cite{Huser2013/10.1103/PhysRevB.87.235132,Wilhelm2016/10.1021/acs.jctc.6b00380} on an ensemble of 205 structures drawn with a 0.5~ps stride from our previously published hybrid DFT trajectories of the benzene radical anion as well as neutral benzene for comparison. The absolute energies of the whole spectrum were shifted as detailed in Section~\ref{sec:methodology}. The distribution of the obtained G$_0$W$_0$ quasiparticle energies, which accurately approximate electron binding energies, represents the EDOS and is shown in panel A of Figure~\ref{fig:edos}. The dominant three-peak pattern in both systems can be readily related to the neat liquid ammonia EDOS~\cite{Buttersack2019/10.1021/jacs.8b10942}, shown here in gray shading for reference. In our systems with solutes, it is accompanied by a multitude of low-intensity features along the whole range of energies. We can now use the projection approach detailed in Section~\ref{sec:methodology} to isolate these features and examine the solute and solvent spectra separately. \begin{figure}[tb!] \centering \includegraphics[width=\linewidth]{EDOS-PDOS.pdf} \caption{The total G$_0$W$_0$ EDOS of the solvated benzene radical anion and neutral benzene, together with the PDOS projections on the solutes. Panel A: the total EDOS of the benzene radical anion (red) and neutral benzene (black) in liquid ammonia. The calculated pure liquid ammonia EDOS~\cite{Buttersack2019/10.1021/jacs.8b10942} is shown in gray. Consistently with the published pure ammonia data, the corresponding peaks are labeled by the symmetry labels of the gas-phase ammonia molecular orbitals.
Panel B: PDOS of the benzene radical anion. The projection shows a detailed account of the electronic structure of the anion, including the highest occupied state, marked by its binding energy and a black triangle. Panel C: The benzene radical anion PDOS resolved for the two types of JT-relevant electronic structure symmetries. Note that the small differences between the blue ($\mathrm{A_{u}}$) and orange ($\mathrm{B_{1u}}$) curves, caused by sampling from the corresponding smaller subsets of the calculated G$_0$W$_0$ energies, are insignificant within the available statistics. The PDOS of both JT structures is therefore identical. Panel D: PDOS of neutral benzene in liquid ammonia.} \label{fig:edos} \end{figure} Focusing first on the benzene radical anion, we obtain the solute PDOS shown in panel B of Figure~\ref{fig:edos}. Clearly, this component isolates the low-intensity features that do not overlap with the neat ammonia EDOS and, moreover, uncovers additional ones that were previously contained in the high-intensity solvent peaks. Most notably, this solute PDOS suggests that the highest energy state, occupied by the excess electron, is fully accounted for by the solute, consistent with the previously observed spatial localization of the spin density~\cite{Brezina2020/10.1021/acs.jpclett.0c01505}. Its mean binding energy of $-$2.34~eV and the absence of tails extending into the positive values prove that the excess electron on benzene is bound relative to the vacuum level, thus conclusively answering the question of the stability of the molecular structure of the anion as long as it is solvated in liquid ammonia. This is in excellent agreement with the vertical electron binding energy of $-$2.30~eV obtained by explicit ionization calculations of benzene radical anion and ammonia gas-phase clusters in the infinite cluster size limit~\cite{Kostal2021/10.1021/acs.jpca.1c04594}. Compared to neutral benzene (Figure~\ref{fig:edos}, panel D), the whole G$_0$W$_0$ anion solute PDOS is systematically shifted towards weaker binding energies by several electronvolts. Its shape is modified as well, including several peak splittings not observed in the neutral system. These are likely due to the overall lower symmetry of the anion, rather than due to the presence of two distinct JT pseudorotamers, which give rise to identical PDOS within the available statistical sampling, as shown in panel C of Figure~\ref{fig:edos}. Since the excess electron binding energy in the benzene radical anion is close to the binding energy of the solvated electron of $-2.0$~eV~\cite{Buttersack2020/10.1126/science.aaz7607}, an overlap might arise in an experimental photoelectron spectrum if the two species coexist in equilibrium, leading to a single broader peak or perhaps a double-peak feature. This suggests that the excess electron binding energy itself might not be sufficient to prove the presence of the benzene radical anion. However, a viable workaround exists in the predicted changes of the lower electronic levels of benzene after the addition of the excess electron. These are large enough to be measured, and several bands are localized in regions where no overlap with the solvent signal is expected, as clearly shown by the projected densities. Next, we concentrate on the solvent subspace. In Figure~\ref{fig:dr-pdos}, the solvent PDOS, shown in the left-hand side panels in gray shading, features subtle differences compared to the EDOS of neat ammonia.
These appear because of the changes of the electronic structure of the solvent molecules induced by the interaction with the radical anion solute. To better quantify this perturbation, we exploit the molecular resolution of the PDOS projection to resolve the solvent PDOS as a function of distance between the solute center of mass and the ammonia nitrogen atoms (Figure~\ref{fig:dr-pdos}, main panels). The uniformity of the resolved distribution along the distance axis is achieved by factoring out the probability density in this distance. In an infinite system, this is proportional to $4\pi r^2 g(r)$, where $g(r)$ is the radial distribution function. For our finite simulation cell, this quantity is shown in the bottom panel of Figure~\ref{fig:dr-pdos}; note the decay starting after $\sim$7~\AA, which corresponds to half the length of the simulation box. The distance resolution reveals a small systematic shift towards weaker electron binding energies in the proximity of the charged solute, up to 0.4~eV in the case of the $\mathrm{1e}$ peak. The origin of this effect can be attributed to the presence of the excess electron, since neutral benzene does not have a similar effect on liquid ammonia; its resolved peaks are essentially flat over the studied distance range (see Section~\ref{sec:additional-results} of the supplementary material). The small magnitude of the perturbation of the solvent one-electron levels by the solute can be used to justify the alternative method of spectrum resolution by subtraction of the neat solvent spectrum, which is typically used in an experimental setting where a projection is not an option~\cite{Seidel2011/10.1021/jp203997p}. The possible causes of the observed effect include the screening of the electrostatic interaction with the excess charge by the bulk solvent and are discussed in Section~\ref{sec:additional-results} of the supplementary material in terms of molecular clusters in open boundary conditions. Additionally, we present a detailed validation of the required PDOS properties in Section~\ref{sec:additional-results} of the supplementary material. \begin{figure}[tb!] \centering \includegraphics[width=\linewidth]{Resolved-PDOS-anion.pdf} \caption{Electronic density of states projected on the solvent subspace and resolved as a function of distance from the center of mass of the radical anion. Black dashed lines denote the mean of each peak, again as a function of distance. The left side panel shows the total solvent PDOS in gray.} \label{fig:dr-pdos} \end{figure} \section{Conclusions} \label{sec:conclusions} The reported analysis of the electronic structure of the solvated benzene radical anion in liquid ammonia complements the analysis of molecular geometry from our previous work and provides results that can be directly related to future experimental measurements of the system studied here. The JT behavior of the solvated radical anion is analogous to that predicted for the idealized gas-phase species based on fundamental theory and symmetries. The electronic state and its associated spin density correlate strongly with the dynamic distortion of the molecular geometry as it undergoes motion through the almost flat pseudorotation valley. It thus turns out that the presence of the solvent is key to stabilizing the studied system electronically but does not perturb it substantially from the perspective of the JT effect.
This sets the stage for possible experimental studies of the consequences of the JT effect on the molecular and electronic structure of the benzene radical anion, which is not an option in the gas phase, where the radical anion does not exhibit long-term stability. However, such experiments would have to rely on ultrafast techniques so that the individual JT structures are observed rather than their high-symmetry average. We quantified the solvent-induced stability of the benzene radical anion using accurate and computationally demanding condensed-phase G$_0$W$_0$ calculations performed on thermal geometries sampled from a hybrid DFT AIMD simulation. We estimated the binding energy of the excess electron to be $-$2.34~eV relative to the vacuum level, clearly showing that the excess electron represents a bound quantum state in solution. Moreover, the density of states obtained from such calculations predicts the complete valence electronic structure and thus provides a way to interpret future photoelectron spectroscopy measurements. The present work showcases the descriptive power of accurate molecular simulations and detailed analysis of their outputs. We captured subtle quantum effects in both the spatial and energy domains and obtained a detailed description of the solvated benzene radical anion in liquid ammonia, as well as a prediction of its electronic density of states that complements our previous prediction of the vibrational density of states. The immediate next step lies in exploiting the synergy between the calculations reported here and future liquid photoelectron spectroscopy measurements. Referencing the results against the baseline of the solvated neutral benzene molecule further aids the interpretation of the anticipated experimental results. This combination has the potential to experimentally corroborate the solvent-induced stability of the benzene radical anion. One remaining issue is the computational description of the thermodynamic equilibrium between the benzene radical anion and solvated electrons, which will provide additional insight into the experimentally observable chemical properties of the solvated benzene radical anion as well as an entryway to the theoretical exploration of the chemistry of the Birch reduction. \section*{Supplementary Material} Additional data analysis details, additional results concerning the spin density dimensionality reduction, the evaluation of the GMM clustering and the projected densities of states, as well as a video file visualizing the evolution of spin density over the pseudorotating molecular structure of the benzene radical anion are presented as supplementary material. \begin{acknowledgments} K.B. acknowledges funding from the IMPRS for Many Particle Systems in Structured Environments. This work was supported by the Project SVV 260586 of Charles University. This work was partially supported by the OP RDE project (No. CZ.02.2.69/0.0/0.0/18\_070/0010462), International mobility of researchers at Charles University (MSCA-IF II). P.J. is thankful for support from the European Regional Development Fund (Project ChemBioDrug no. CZ.02.1.01/0.0/0.0/16\_019/0000729) and acknowledges the Humboldt Research Award. This work was supported by The Ministry of Education, Youth and Sports from the Large Infrastructures for Research, Experimental Development and Innovations Project ``IT4Innovations National Supercomputing Center -- LM2015070''. The authors thank Hubert Beck and Tomáš Martinek for helpful comments on the manuscript.
\end{acknowledgments} \section*{Data availability} The data that support the findings of this study are available from the corresponding author upon reasonable request.
\section{Acknowledgments} We received funding from the TNO Appl.AI program, the province of North Brabant in the Netherlands and the SmartwayZ.NL program. We also want to thank Taoufik Bakri and Bachtijar Ashari for preparing our mobility datasets. \section{Introduction} Decision support systems based on machine learning models are being developed for a growing number of domains. To deploy these systems for operational use, it is crucial that the system provides tangible explanations. It needs to be transparent about the underlying inference process of its predictions and about its own limitations. For example, in a medical setting it is important that a doctor knows under which circumstances the system is unable to provide reliable advice regarding a diagnosis~\cite{Papanastasopoulos2020}. Similarly, when a decision support system is used for investment decisions in policy making, then it is important that the users of the system get informed about the uncertainty associated with the advice it provides~\cite{Arroyo2019}. The importance of explanation capabilities is also emphasized by the guidelines on trustworthy AI from the European Commission~\cite{EU2019}, which include explainability about capabilities and limitations of AI models as a key requirement. Developing methods for explainability in machine learning has gained significant interest in recent years~\cite{Burkart2021}. Global explanations give an overall view of the knowledge encoded in the model. This is very relevant for knowledge discovery, as in biology or medical applications. Examples are model coefficients as in logistic regression \cite{Cox1958} and indications of feature importance such as with random forests~\cite{Breiman2002} and gradient boosting~\cite{Chen2016}. Local explanations, on the other hand, explain predictions for individual datapoints. For example, SHAP and LIME explain the class prediction of a datapoint by highlighting the features which locally most influence the prediction~\cite{Lundberg2017,Ribeiro2016}. In addition to explaining class predictions, methods are needed that explain the performance of a classifier. Such explanations can be used by a data scientist to understand under which circumstances a base classifier does or does not perform well. If the explanation indicates that the model does not perform well for a specific subset of the data, then the data scientist may decide to look for additional data, additional features, or otherwise attempt to improve the model in a focused way. The explanations can also be used to inform, e.g., a consultant or medical doctor about circumstances in which a model cannot be trusted, which is also relevant for engineers who bring models to production. In the existing literature, only methods for explaining the uncertainty of individual predictions have been proposed (e.g., \citeauthor{Antoran2021}, \citeyear{Antoran2021}). For explaining the performance characteristics and limitations of classifiers globally, no methods have been published to the best of our knowledge. This paper presents a model-agnostic PERFormance EXplainer (PERFEX) to derive explanations about characteristics of classification models. Given a \emph{base} classifier, a dataset and a classification performance metric such as the prediction accuracy, we propose a \emph{meta} learning algorithm that separates the feature space into regions with high and low prediction accuracy and enables the generation of compact explanations for these regions.
In the following sections, we define the problem formally and give an overview of related work. Then, we describe PERFEX in detail. We evaluate the method in experiments based on several classification methods and datasets, including our own case study. The experiments show that PERFEX provides clear explanations in scenarios where explanations from SHAP and LIME are not sufficient to gain trust. We finalize the paper with our conclusions. \section{Related Work} In this section, we give an overview of the work related to the stated problem, ranging from model-agnostic explanation of individual predictions to cluster-based explanations and explaining uncertainties. First, SHAP~\cite{Lundberg2017} and LIME~\cite{Ribeiro2016} can be used to create explanations about individual predictions. SP-LIME~\cite{Ribeiro2016} is a variant of LIME which aims to enable a user to assess whether a model can be trusted by providing an explanation for a group of samples as a set of individual explanations. This is problematic in domains with many features: it requires that the user inspects many instances, and it is unclear whether a set of local explanations gives a global understanding of a model. Anchors~\cite{Ribeiro2018} is related to LIME and aims to explain how a model behaves on unseen instances, but only locally. K-LIME is another variant of LIME, which is part of the H2O Driverless AI platform~\cite{H2O}. It performs a $k$-means clustering, and for each cluster it fits a linear model to explain the features influencing the predictions in that cluster. In contrast to our problem, it uses a standard classification model fitting criterion instead of explaining a (base) learner using its performance metric. Interpretable clustering~\cite{Bertsimas2021} clusters data based on a tree structure. It derives an optimal clustering tree using mixed-integer optimization, and the branches in the tree structure make the clustering interpretable. Although this approach may deliver compact cluster explanations, like the LIME variants it models the distribution of the data itself instead of the prediction structure of a base learner. A clustering based on the predictions of a model cannot be easily integrated in this exact optimization framework, especially if the computation of the performance metric is non-linear. Explanations of prediction characteristics of a classifier are related to explanations of uncertainty. The CLUE method~\cite{Antoran2021} can be used to explain which parts of the input of a deep neural network cause uncertainty by providing a counterfactual explanation in the input space. CLUE only provides an uncertainty explanation for an individual input, and it cannot be used to inform the user about the circumstances under which a model is uncertain. Our work also relates to Interpretable Confidence Measures~(ICM), which uses the accuracy as a proxy for uncertainty~\cite{VanderWaa2020}. A prediction for a datapoint is considered to be uncertain if the classifier makes mistakes for similar datapoints. Our problem is to provide, e.g., uncertainty explanations for groups of datapoints, whereas ICM only focuses on individual datapoints. Finally, there is a link with Emerging Pattern Mining~(EPM), which can be used to capture contrasts between classes~\cite{Dong1999}. An important difference is that EPM aims at finding patterns in data, while we aim at finding patterns in the modeled data (by potentially any classifier).
\section{Problem Statement} We consider a classification task in which a base classifier~$\mathcal{C}$ is trained to assign a datapoint~$x$ to a class. The set~$\mathcal{K}=\{c_1, c_2, \ldots, c_k \}$ contains all~$k$ classes considered, and~$\mathcal{C}(x) \in \mathcal{K}$ denotes the class to which datapoint~$x$ belongs according to~$\mathcal{C}$. The classifier is trained using a tabular dataset~$\mathcal{X}_t$ containing~$n$ datapoints, and for each datapoint~$x_i \in \mathcal{X}_t$ the true class label is denoted by~$y_i \in \mathcal{K}$. Each datapoint in the dataset is defined by~$m$ feature values, and we use~$x^j$ to refer to feature value~$j$ of datapoint~$x$. The prediction performance of classifier~$\mathcal{C}$ can be measured using standard metrics, such as accuracy, precision, recall, F1-score and expected calibration error~\cite{Guo2017}. We define the prediction performance metric~$\mathcal{M}$ as a function that takes a classifier~$\mathcal{C}$, test datapoints~$x_1,\ldots, x_p$ and true labels~$y_1, \ldots, y_p$ as input, and computes a real-valued score as output. The problem we consider is: given a classifier~$\mathcal{C}$, a metric~$\mathcal{M}$, an independent dataset~$\mathcal{X}$ and corresponding ground truth labels~$\mathcal{Y}$, find a compact explanation for each subgroup of the data on which the performance is either low or high. The compactness refers to the amount of information that the explanation presents to the user. As an example we consider prediction accuracy as performance metric~$\mathcal{M}$, and we visually illustrate the problem based on a one-dimensional dataset with feature~$z$, as shown in Figure~\ref{fig:example}. The symbols indicate whether predictions from a given base classifier are correct (dot) or not (cross) when predicting for the ten datapoints that are shown. The overall prediction accuracy is~$0.6$. However, this number does not tell us under which circumstances the classifier performs well, and when it does not perform well. We would like to create explanations which tell us that the classifier does not perform well for~$z<0$~(accuracy~$2/5=0.4$), while it does perform well otherwise~(accuracy~$4/5=0.8$). Instead of accuracy, other performance metrics~$\mathcal{M}$ may be used, such as precision, recall, F1-score and expected calibration error. \begin{figure}[t] \includegraphics[width=\linewidth]{images/fig_example.pdf} \caption{\label{fig:example}One-dimensional dataset with correct predictions (dot) and incorrect predictions (cross)} \end{figure} \section{Classifier PERFormance EXplainer} \label{sec:trees} This section describes our method to find compact explanations for subsets of datapoints based on locally high or low performance of the base learner. As discussed in the Related Work section, applying clustering algorithms such as $k$-means is not suitable because $k$-means does not cluster based on~$\mathcal{M}$. A clustering based on a decision tree can address this problem, because the datapoints in the leaves can be seen as clusters and the branch conditions in the tree can be used to extract explanations. If the classifier accuracy is used as metric~$\mathcal{M}$, then a standard decision tree can be fitted on training targets which equal 1 if the base classifier predicts correctly for a datapoint, and 0 otherwise. This would yield a tree which distinguishes subsets of data with low accuracy from subsets of data with high accuracy, and allows for explanations.
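For the special case of accuracy, this construction can be sketched as follows (a minimal illustration assuming scikit-learn; the function name is ours):
\begin{Verbatim}[fontsize=\small]
# Sketch: meta tree for accuracy, using correctness as target.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_accuracy_tree(base_clf, X, y, max_depth=6):
    # 1 where the base classifier is correct, 0 otherwise
    correct = (base_clf.predict(X) == y).astype(int)
    return DecisionTreeClassifier(
        max_depth=max_depth).fit(X, correct)
\end{Verbatim}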
However, for other performance metrics~$\mathcal{M}$ such targets cannot be defined. We introduce PERFEX, a model-agnostic method to explain the prediction performance of a base classifier for any performance metric~$\mathcal{M}$. \subsection{Creating Subsets of Data using Tree Structure} \label{sec:treegeneration} \begin{figure}[t] \includegraphics[width=\linewidth]{images/fig_decomposition.pdf} \caption{\label{fig:decomposition}Splitting dataset~$\mathcal{X}$ into subsets~$\mathcal{X}'$ and~$\mathcal{X}''$} \end{figure} The basic idea of PERFEX is to divide~$\mathcal{X}$ in a hierarchical manner, leading to a tree-structured meta learner. This enables us to naturally split the data based on a split condition that depends on~$\mathcal{M}$, similar to the construction of classification trees. More importantly, the hierarchical process typically yields a tree of limited depth, such that using the branches as conditions in a decision rule leads to a compact explanation. The process is schematically illustrated in Figure~\ref{fig:decomposition}. For convenience we illustrate the tree construction based on the same accuracy values as in Figure~\ref{fig:example}. In the root node we consider a classifier~$\mathcal{C}$, metric~$\mathcal{M}$, dataset~$\mathcal{X}$ and the corresponding labels. The prediction metric score for $\mathcal{X}$ can be obtained by evaluating~$\mathcal{M}$, which gives accuracy 0.6 in the figure. This value has been computed using the full dataset~$\mathcal{X}$, but it does not enable the user to understand when this metric value is low or high. We provide this additional understanding to the user by decomposing~$\mathcal{X}$ into two subsets~$\mathcal{X}' \subset \mathcal{X}$ and $\mathcal{X}'' \subset \mathcal{X}$, such that $\mathcal{M}$ evaluates to a low value for $\mathcal{X}'$ and to a high value for $\mathcal{X}''$. This process is illustrated by the child nodes, which evaluate to an accuracy of 0.4 and 0.8, respectively. The branch conditions in the tree can be used to explain to a user when the performance metric evaluates to a low or high value.
\begin{algorithm}[t] \SetKwInOut{Input}{input} \SetKwInOut{Output}{output} \Input{classifier~$\mathcal{C}$, dataset~$\mathcal{X}$, labels~$y_i$ ($\forall x_i \in \mathcal{X}$), prediction metric~$\mathcal{M}$, minimum subset size~$\alpha$} \Output{subsets~$\mathcal{X}' \subset \mathcal{X}$ and $\mathcal{X''} \subset \mathcal{X}$ with corresponding labels, split condition~$s$} $\mathcal{X}' \leftarrow \emptyset$,~~$\mathcal{X''} \leftarrow \emptyset$,~~$s\leftarrow (0,0)$, ~~$\beta \leftarrow 0$\\ \For{$j=1,\ldots,m$}{ \ForEach{unique value $v$ of feature~$j$ in $\mathcal{X}$}{ $\hat{\mathcal{X}}' \leftarrow \emptyset$,~~$\hat{\mathcal{X}}'' \leftarrow \emptyset$\label{line:subsetstart}\\ \ForEach{$x \in \mathcal{X}$}{ \uIf{$x^j \leq v$}{\label{line:comparison} $\hat{\mathcal{X}}' \leftarrow \hat{\mathcal{X}}' \cup \{ x \}$ } \Else{ $\hat{\mathcal{X}}'' \leftarrow \hat{\mathcal{X}}'' \cup \{ x \}$ } }\label{line:subsetend} $e' \leftarrow$ evaluate $\mathcal{M}$ using $\mathcal{C}$, $\hat{\mathcal{X}}'$ and labels\label{line:beststart}\\ $e'' \leftarrow$ evaluate $\mathcal{M}$ using $\mathcal{C}$, $\hat{\mathcal{X}}''$ and labels\\ $\beta' \leftarrow |e' - e''|$\\ \If{$\beta' > \beta$ and $|\hat{\mathcal{X}}'| \geq \alpha$ and $|\hat{\mathcal{X}}''| \geq \alpha$}{ $\beta \leftarrow \beta'$,~$\mathcal{X}' \leftarrow \hat{\mathcal{X}}'$,~$\mathcal{X}'' \leftarrow \hat{\mathcal{X}}''$,~$s \leftarrow (j, v)$ }\label{line:bestend} } } \caption{\label{alg:treegeneration}PERFEX} \end{algorithm} The tree structure in Figure~\ref{fig:decomposition} can be automatically created using an algorithm that closely resembles the procedure for generating decision trees for classification and regression~\cite{Breiman1984}. A key difference is that we use a split condition based on~$\mathcal{M}$ during the tree generation procedure, rather than, e.g., the Gini impurity. Algorithm~\ref{alg:treegeneration} shows how to split a dataset~$\mathcal{X}$ for all possible features into subsets~$\mathcal{X}' \subset \mathcal{X}$ and $\mathcal{X}'' \subset \mathcal{X}$ using prediction metric~$\mathcal{M}$ as split criterion. It enumerates all possible splits into subsets $\mathcal{X}'$ and $\mathcal{X}''$. For numerical and binary features a less-than-or-equal condition can be used on line~\ref{line:comparison}, and for categorical features an equality condition should be used. For features with continuous values, it may be practical to consider only a fixed number of quantiles, rather than enumerating all unique values. After creating subsets on lines~\ref{line:subsetstart}-\ref{line:subsetend}, the algorithm uses~$\mathcal{M}$ to evaluate the metric value for both subsets, and it keeps track of the best split found so far. The quality of a split is determined by computing the difference between the performance metric values of both subsets. Since we want to distinguish subsets with low and high metric values, the algorithm returns the subsets with maximum difference. The split condition corresponding to the best split is stored in the tuple~$s$, which contains both the index of the feature and the feature value used for splitting. Algorithm~\ref{alg:treegeneration} shows how one node of the tree ($\mathcal{X}$) is divided into two child nodes ($\mathcal{X}'$, $\mathcal{X}''$). In order to create a full tree, the algorithm should be applied again to~$\mathcal{X}'$ and~$\mathcal{X}''$. This process repeats until a fixed depth is reached. Another stop criterion based on confidence intervals is discussed below.
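A compact sketch of this split search (a minimal illustration assuming numpy arrays; the recursion that builds the full tree is omitted):
\begin{Verbatim}[fontsize=\small]
import numpy as np

# Sketch of Algorithm 1: find the split that maximizes the
# difference in metric value between the two subsets.
def best_split(clf, X, y, metric, alpha):
    best, best_gap = None, 0.0
    for j in range(X.shape[1]):
        for v in np.unique(X[:, j]):
            left = X[:, j] <= v
            if left.sum() < alpha or (~left).sum() < alpha:
                continue
            gap = abs(metric(clf, X[left], y[left])
                      - metric(clf, X[~left], y[~left]))
            if gap > best_gap:
                best_gap, best = gap, (j, v)
    return best, best_gap

# Example metric: accuracy of the base classifier.
def accuracy(clf, X, y):
    return np.mean(clf.predict(X) == y)
\end{Verbatim}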
\subsection{Confidence Intervals on Values of~$\mathcal{M}$} Splitting data using Algorithm~\ref{alg:treegeneration} should terminate if the size of either~$\mathcal{X}'$ or $\mathcal{X}''$ becomes too small to provide a good estimate of metric~$\mathcal{M}$. This can be assessed based on a confidence interval on~$e'$ and $e''$. We only discuss the derivation for~$e'$ because for~$e''$ the procedure is identical. The actual derivation depends on the chosen metric~$\mathcal{M}$. Below we illustrate it for the metrics accuracy and precision. For accuracy the estimator~$e' = u~/~|\mathcal{X}'|$ can be used, in which~$u$ represents the total number of correct predictions. The estimator~$e'$ follows a binomial distribution, and therefore we can use a binomial proportion confidence interval: \begin{align} \left( e' - z \sqrt{\frac{e'(1-e')}{|\mathcal{X}'|}}, e' + z \sqrt{\frac{e'(1-e')}{|\mathcal{X}'|}} \right), \end{align} in which~$z$ denotes the Z-score of the desired confidence level~\cite{Pan2002}. Given a maximum interval width~$D$, combined with the insight that the term $e'(1-e')$ takes a value that is at most~$0.25$, we obtain the minimum number of datapoints, which can be used as a termination condition: \begin{align} z \sqrt{\frac{0.25}{|\mathcal{X}'|}} = \frac{D}{2}~~~~\Rightarrow~~~~|\mathcal{X}'| = \frac{z^2}{D^2}. \end{align} For example, when using a 95 percent confidence level and maximum interval width~$0.1$, the minimum number of datapoints in $\mathcal{X}'$ equals $1.96^2~/~0.1^2 \approx 384$. For other proportion metrics such as precision the derivation is slightly different, because precision does not depend on all datapoints in~$\mathcal{X}'$. For example, the precision for class~$c_i$ equals~$u~/~|\{ x \in \mathcal{X}'~|~ \mathcal{C}(x)=c_i \} |$, in which $u$ denotes the number of datapoints in $\{ x \in \mathcal{X}'~|~ \mathcal{C}(x)=c_i \}$ which were predicted correctly. By applying the same derivation as above, it can be seen that the termination condition for tree generation should be based on the number of datapoints in $\{ x \in \mathcal{X}'~|~ \mathcal{C}(x)=c_i \}$ rather than~$\mathcal{X}'$. \subsection{Tree Evaluation using Test Set} \begin{algorithm}[t] \SetKwInOut{Input}{input} \SetKwInOut{Output}{output} \Input{tree~$\mathcal{T}$ created by recursively applying Algorithm~\ref{alg:treegeneration}, metric~$\mathcal{M}$, classifier~$\mathcal{C}$, dataset~$\mathcal{X}$, test set $\bar{\mathcal{X}}$, and labels} \Output{mean absolute error~$\hat{e}$, metric difference~$d$} $L \leftarrow$~set of leaves in~$\mathcal{T}$,~~$\hat{e} \leftarrow 0$\\ \ForEach{leaf~$l \in L$}{ $\mathcal{X}_l \leftarrow$~datapoints in leaf~$l$ after applying~$\mathcal{T}$ to $\mathcal{X}$\\ $\bar{\mathcal{X}}_l \leftarrow$~datapoints in leaf~$l$ after applying~$\mathcal{T}$ to $\bar{\mathcal{X}}$\\ $e_l \leftarrow$~evaluate $\mathcal{M}$ using $\mathcal{C}$, $\mathcal{X}_l$ and labels\\ $\bar{e}_l \leftarrow$~evaluate $\mathcal{M}$ using $\mathcal{C}$, $\bar{\mathcal{X}}_l$ and labels\\ $\hat{e} \leftarrow \hat{e} + |e_l-\bar{e}_l|$ } $\hat{e} \leftarrow \hat{e}~/~|L|$,~~~~$d \leftarrow (\max_{l \in L} e_l - \min_{l \in L} e_l)$ \caption{\label{alg:error}Estimate quality of tree using test set} \end{algorithm} The tree quality can also be evaluated using a separate test set~$\bar{\mathcal{X}}$. First, the datapoints in $\bar{\mathcal{X}}$ are assigned to leaves. After that, the metric value in each leaf can be computed based on the assigned datapoints.
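The evaluation in Algorithm~\ref{alg:error}, together with the minimum subset size derived above, can be sketched as follows (a minimal illustration assuming a tree object with an \texttt{apply} method that maps datapoints to leaf ids, as in scikit-learn):
\begin{Verbatim}[fontsize=\small]
import numpy as np

def min_subset_size(z=1.96, D=0.1):
    # Termination condition derived above: |X'| = z^2 / D^2
    return z ** 2 / D ** 2          # ~384 for z=1.96, D=0.1

# Sketch of Algorithm 2: per-leaf metric values on X and on the
# test set Xb, their mean absolute error and metric difference d.
def tree_mae(tree, clf, X, y, Xb, yb, metric):
    ids, ids_b = tree.apply(X), tree.apply(Xb)
    errors, values = [], []
    for l in np.unique(ids):
        e = metric(clf, X[ids == l], y[ids == l])
        eb = metric(clf, Xb[ids_b == l], yb[ids_b == l])
        errors.append(abs(e - eb))
        values.append(e)
    return np.mean(errors), max(values) - min(values)
\end{Verbatim}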
Intuitively, it can be expected that the metric value for datapoints in a leaf of the tree is similar for $\mathcal{X}$ and $\bar{\mathcal{X}}$, regardless of the performance of the base classifier and regardless of the performance metric~$\mathcal{M}$. For example, if the accuracy in all the leaves of the tree is low, then the estimated accuracy in the leaves will also be low when using another dataset from the same distribution. Algorithm~\ref{alg:error} shows how the tree quality is determined by computing the mean absolute error based on the errors of the individual leaves. The output variable~$d$ can be used to assess to what extent PERFEX distinguishes subsets with low and high metric values. \subsection{Generating Explanations} \label{sec:generate_explanations} The tree structure created by Algorithm~\ref{alg:treegeneration} can be used to extract explanations that can be presented to a user in a text-based format. Each leaf in the tree represents a subset of the data with a corresponding metric value. Therefore, we can print information about the leaf which explains to the user how the subset of data has been constructed, and what the metric value is, as illustrated below for one leaf: \begin{Verbatim}[fontsize=\small] There are 134 datapoints for which the following conditions hold: length > 10.77, length <= 12.39 and for these datapoints accuracy is 0.68 \end{Verbatim} For each leaf we print the number of datapoints, the prediction metric value computed on the same subset of data, and the conditions that were used to split the data. The conditions can be extracted from the tree by taking the conditions used in the nodes along the path from root to leaf. \subsection{Example using a 2D Dataset} We provide an example using a dataset with two features, which shows visually how PERFEX creates an explanation. It will also show that explanations for the class prediction are not the same as the explanations based on~$\mathcal{M}$. The dataset is shown in Figure~\ref{fig:exampledataset} and consists of the classes red and blue, generated using Gaussian blobs with centers~$(10,10)$ and~$(30,10)$. For the purpose of the example we flip the labels of some datapoints for which~$y>12$. The majority of the datapoints with~$x < 20$ belong to the red class, and the majority with~$x \geq 20$ belong to blue. Our base classifier is a decision tree with depth~1, predicting red if~$x < 20$ and blue otherwise. We investigate when the base classifier has a low accuracy by applying Algorithm~\ref{alg:treegeneration} with accuracy as metric~$\mathcal{M}$: \begin{Verbatim}[fontsize=\footnotesize] There are 100 datapoints for which the following conditions hold: y > 10.96 and for these datapoints accuracy is 0.72 \end{Verbatim} \begin{Verbatim}[fontsize=\footnotesize] There are 200 datapoints for which the following conditions hold: y <= 10.96 and for these datapoints accuracy is 1.0 \end{Verbatim} This explanation shows that the accuracy is lower if~$y > 10.96$, which is also the area in which datapoints belong to two classes. More importantly, it shows that the explanation for accuracy depends on~$y$, whereas the prediction made by the base classifier (and its explanation) only depends on~$x$. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{images/2d_example.pdf} \caption{\label{fig:exampledataset}Example dataset with two features and two classes} \end{figure} \section{Experiments} We present the results of our experiments based on Gaussian data as well as several standard classifiers and datasets.
\subsection{Evaluation of Tree Error with Gaussian Data} We start with two experiments to empirically study two hypotheses from the previous section. We use data from Gaussian distributions, allowing us to carefully control the difficulty of the prediction task. In our first experiment we show that PERFEX can be used to model a chosen prediction metric even if the original class prediction task is hard. We assume that the data comes from a one-dimensional dataset defined by two Gaussians, as shown in Figure~\ref{fig:bayes_error}. The Gaussian with~$\mu=10$ and~$\sigma=2$ corresponds to the first class, and remains fixed. The datapoints of the second class follow a Gaussian distribution with~$\mu=10+\delta$ and $\sigma=2$. The parameter~$\delta>0$ is used to control the overlap of both Gaussians, which affects the difficulty of the prediction task. In the figure this is visualized for~$\delta=3$. Since the standard deviation of both distributions is the same, the difficulty of the prediction task can be expressed using the region of error, which is visualized using the shaded red area. We define a classifier which predicts the class for a datapoint~$x$ by taking the class for which the probability density is maximum:~$\mathcal{C}(x) = \argmax_{i \in \{0,1\}} f(\mu_i, \sigma_i, x)$, in which~$f$ denotes the probability density function. It can be expected that the prediction performance of the classifier drops if the region of error grows. This is confirmed in Figure~\ref{fig:results_gaussians}, which shows the weighted F1 score of the classifier for an increasing region of error. We also created a PERFEX tree for accuracy, for which the mean absolute error (MAE, computed by Algorithm~\ref{alg:error}) is also shown. The error is close to zero, which confirms that PERFEX can model the accuracy of the classifier~$\mathcal{C}$ even if the performance of this classifier is low. \begin{figure}[t] \centering \includegraphics[width=0.87\linewidth]{images/example_bayes_error.pdf} \caption{\label{fig:bayes_error}Distribution with two classes defined by two Gaussians and one feature. The shaded area is the region of error.} \end{figure} Now we show that a generated tree can be used to model a prediction metric for a given classifier if the data used for creating the meta decision tree comes from the same distribution as the data used for creating the base classifier. We conduct an experiment in which we measure the error of the PERFEX tree, and we gradually shift the data distribution for creating the tree, which causes the error to increase. We use two Gaussians for creating the prediction model~$\mathcal{C}$, with~$\mu_0 = 10$, $\mu_1 = 13$ and~$\sigma_0 = \sigma_1 = 2$. The data used for creating the PERFEX tree uses the same distributions, except that $\mu_1 = 13 + \delta$ with $\delta \geq 0$. If~$\delta = 0$, then all datasets come from the same distribution, and in that case the error of the meta decision tree is low, as can be seen in Figure~\ref{fig:results_distribution_shift}. If we shift the distribution of the data for creating our tree by setting~$\delta > 0$, then we expect that the error increases due to this mismatch in the data. The figure confirms this, and it shows that a tree can be fitted only if its training data comes from the same distribution as the classifier training data. 
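The classifier used in this experiment can be sketched as follows (a minimal illustration assuming scipy; the default parameters correspond to~$\delta=3$):
\begin{Verbatim}[fontsize=\small]
from scipy.stats import norm

# Sketch: predict the class whose Gaussian density is maximal.
def bayes_classifier(x, mus=(10.0, 13.0), sigma=2.0):
    densities = [norm.pdf(x, mu, sigma) for mu in mus]
    return int(densities[1] > densities[0])
\end{Verbatim}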
\begin{figure}[t] \centering \begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=0.93\linewidth]{images/results_gaussians.pdf} \caption{Error for increasing region of error} \label{fig:results_gaussians} \end{subfigure}% \hfill \begin{subfigure}{0.45\linewidth} \centering \includegraphics[width=\linewidth]{images/results_distribution_shift.pdf} \caption{Error of meta model for shifted data distribution} \label{fig:results_distribution_shift} \end{subfigure} \caption{Results of experiments with Gaussian data} \end{figure} \subsection{Evaluation on Several Datasets and Models} We apply PERFEX to different datasets, classifiers and split conditions based on several metrics. Given that the meta tree needs sufficient data to create generalizable clusters, 4 classification datasets with at least 1000 datapoints from the UCI repository \cite{uci:2019} were chosen: abalone, car evaluation, contraceptive method choice, and occupancy detection. While experimenting, we noticed that the classification of occupancy had almost perfect scores on the test set. In that case, the meta model would not be able to create clusters. For that reason, we made the classification task more difficult by only including two features in the dataset: CO2 and temperature. Finally, we also included a fifth 2D dataset called \emph{gaussian blobs}, which contains three clusters of datapoints that are partially overlapping. These clusters were sampled from an isotropic Gaussian distribution with cluster centers (10, 10), (20, 12) and (15, 15), and a standard deviation of 3. We use this dataset to validate whether the tree is able to distinguish the non-overlapping regions with perfect scores and the overlapping regions with lower scores. \begin{table*}[t] \small \centering \def\arraystretch{0.1} \begin{tabular}{llrrrrrrr} \toprule \textbf{Dataset} & $\mathcal{C}$ & \textbf{Accuracy of} $\mathcal{C}$ & \textbf{Num.
leaves} & \textbf{Tree depth} & \textbf{Min accuracy} & \textbf{Max accuracy} & \textbf{MAE} & \textbf{STD AE} \\ \midrule
\multirow{5}{*}{Abalone} & SVC & 53,6\% & 6 & 6 & 32,1\% & 84,3\% & 5,7\% & 4,6\% \\ & LR & 56,5\% & 6 & 6 & 37,2\% & 83,5\% & 4,9\% & 3,8\% \\ & RF & 54,5\% & 6 & 6 & 43,9\% & 82,7\% & 3,6\% & 2,7\% \\ & DT & 52,5\% & 6 & 6 & 37,0\% & 78,7\% & 5,1\% & 3,0\% \\ & KNN & 49,0\% & 6 & 6 & 32,2\% & 82,5\% & 4,2\% & 2,6\% \\ \midrule
& SVC & 95,0\% & 2 & 2 & 87,6\% & 99,1\% & 1,1\% & 0,7\% \\ Car & LR & 90,5\% & 4 & 4 & 74,8\% & 100,0\% & 1,9\% & 2,9\% \\ evaluation & RF & 95,2\% & 2 & 2 & 89,2\% & 98,5\% & 1,4\% & 0,2\% \\ & DT & 94,8\% & 3 & 3 & 85,8\% & 100,0\% & 1,6\% & 0,9\% \\ & KNN & 87,5\% & 4 & 4 & 65,5\% & 99,4\% & 3,8\% & 3,9\% \\ \midrule
& SVC & 44,3\% & 3 & 3 & 33,0\% & 63,3\% & 7,2\% & 5,3\% \\ Contraceptive & LR & 53,4\% & 3 & 3 & 41,1\% & 63,8\% & 8,8\% & 2,0\% \\ method & RF & 51,4\% & 4 & 4 & 38,3\% & 62,4\% & 4,3\% & 2,4\% \\ choice & DT & 52,3\% & 3 & 3 & 38,7\% & 62,6\% & 4,7\% & 1,3\% \\ & KNN & 48,2\% & 4 & 3 & 36,0\% & 62,4\% & 9,5\% & 6,2\% \\ \midrule
& SVC & 86,8\% & 12 & 6 & 20,6\% & 98,8\% & 3,6\% & 2,4\% \\ Occupancy & LR & 81,9\% & 10 & 6 & 21,1\% & 98,5\% & 3,9\% & 3,6\% \\ detection & RF & 94,5\% & 7 & 6 & 80,3\% & 100,0\% & 2,0\% & 1,7\% \\ & DT & 93,5\% & 9 & 6 & 72,5\% & 98,3\% & 3,5\% & 2,0\% \\ & KNN & 88,6\% & 9 & 6 & 66,0\% & 98,6\% & 3,7\% & 2,0\% \\ \midrule
& SVC & 80,3\% & 8 & 6 & 73,4\% & 100,0\% & 2,0\% & 2,8\% \\ Gaussian & LR & 80,2\% & 8 & 6 & 73,1\% & 100,0\% & 1,2\% & 1,2\% \\ blobs & RF & 78,6\% & 9 & 6 & 69,9\% & 100,0\% & 2,3\% & 1,5\% \\ & DT & 76,9\% & 7 & 6 & 70,1\% & 98,5\% & 2,8\% & 2,8\% \\ & KNN & 75,6\% & 8 & 6 & 67,7\% & 100,0\% & 3,5\% & 2,6\% \\ \bottomrule \end{tabular} \caption{\label{table:experiment_different_datasets_models}Evaluation of PERFEX using several datasets and base classifiers with accuracy as metric~$\mathcal{M}$} \end{table*} Each dataset was split into a train set (50\%), test set 1 (25\%) and test set 2 (25\%), in a stratified manner according to the target. The train set was used to train 5 base classifiers: Logistic Regression (LR), a Support Vector Machine with RBF kernel (SVC), Random Forest (RF), Decision Tree (DT), and KNN with K=3. Test set 1 was used to evaluate the base classifier and to build the PERFEX tree with maximum depth 6. The tree is used to cluster the datapoints of test set 1 and test set 2, separately. This tree is evaluated by comparing the accuracy scores of the corresponding clusters of the test sets using Mean Absolute Error (MAE), as described in Algorithm~\ref{alg:error}. This shows whether PERFEX is able to generalize the accuracy estimates to an unseen dataset. Table \ref{table:experiment_different_datasets_models} shows both the performance of the base classifiers and the corresponding PERFEX tree based on accuracy. The classification models have a diverse accuracy, ranging from 44\% to 95\%. For PERFEX the table shows the number of leaves, the depth of the tree, the minimum and maximum accuracies among the leaves, the MAE, and the STD of the Absolute Error (AE). PERFEX is able to separate datapoints with high and low accuracy, with the lowest difference of 9.3\% (Car RF) and the highest of 78.2\% (Occupancy SVC). For the Gaussian blobs, we see clusters with perfect or almost perfect scores, as expected. The MAE is generally low, ranging from 1.1\% to 9.5\%. We also see a pattern in which classification models with high accuracy result in a lower MAE.
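The evaluation protocol can be sketched as follows (a minimal illustration assuming scikit-learn; the commented calls refer to the split and evaluation sketches given earlier, and \texttt{build\_perfex\_tree} is a hypothetical name for the recursive tree construction):
\begin{Verbatim}[fontsize=\small]
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data mirroring the 'gaussian blobs' dataset.
X, y = make_blobs(n_samples=10000, cluster_std=3.0, random_state=0,
                  centers=[(10, 10), (20, 12), (15, 15)])

# Stratified 50/25/25 split into train, test 1 and test 2.
X_tr, X_rest, y_tr, y_rest = train_test_split(
    X, y, train_size=0.5, stratify=y, random_state=0)
X_t1, X_t2, y_t1, y_t2 = train_test_split(
    X_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=0)

base_clf = RandomForestClassifier().fit(X_tr, y_tr)
# tree = build_perfex_tree(base_clf, X_t1, y_t1, metric,
#                          alpha=100, max_depth=6)    # Algorithm 1
# mae, d = tree_mae(tree, base_clf, X_t1, y_t1,
#                   X_t2, y_t2, metric)               # Algorithm 2
\end{Verbatim}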
The supplement contains results for other metrics~$\mathcal{M}$, and more details on the datasets and our code\footnote{\url{https://github.com/erwinwalraven/perfex}}. \subsection{Limitations of SHAP and LIME} SHAP and LIME were introduced as methods to explain why a classifier makes a prediction, and to gain trust about the prediction. However, this can be dangerous in practice because SHAP and LIME provide explanations regardless of the classifier performance. We show that circumstances exist in which SHAP and LIME mark specific features as very important for a high-confidence prediction, while PERFEX clearly indicates that people should not rely on the classifier. \begin{figure*}[t] \centering \begin{subfigure}{0.24\linewidth} \centering \includegraphics[height=2.8cm]{images/lime_shap_dataset.pdf} \caption{Dataset} \label{fig:lime_shap_dataset} \end{subfigure}% \hfill \begin{subfigure}{0.24\linewidth} \centering \includegraphics[height=2.8cm]{images/lime_shap_highlight.pdf} \caption{Highlighted datapoints} \label{fig:lime_shap_highlight} \end{subfigure}% \hfill \begin{subfigure}{0.24\linewidth} \centering \includegraphics[height=2.8cm]{images/lime_shap_decision_boundary.pdf} \caption{Decision boundary} \label{fig:lime_shap_decision_boundary} \end{subfigure}% \hfill \begin{subfigure}{0.24\linewidth} \centering \includegraphics[height=2.8cm]{images/lime_shap_boxplot.pdf} \caption{Feature importance} \label{fig:lime_shap_boxplot} \end{subfigure}% \caption{Results of experiment with SHAP and LIME, with feature $x^0$ horizontal and feature $x^1$ vertical} \end{figure*} We consider a scenario in which a doctor uses a classifier to create predictions for patients that arrive, and SHAP and LIME are used to inform the doctor about the importance of features. The classifier is a random forest that was trained by a data scientist using the dataset shown in Figure~\ref{fig:lime_shap_dataset}. It can be seen that it may be difficult to predict in the area where both classes are overlapping. However, the doctor is not aware of this, and during model development the data scientist concluded based on accuracy (0.76) that the performance of the classifier is sufficient. Suppose that a patient arrives with features~$x^0=10$ and~$x^1=12$. It may happen that the classifier assigns a high score to one class, and SHAP and LIME highlight one feature as much more important than the other. This is not desirable, because the doctor gets the impression that the system can be trusted, while the classifier should not be used for such patients. We now show that datapoints exist for which the described problem arises. According to PERFEX the cluster with the lowest accuracy (0.51) is defined by~$11.2 \leq x^0 \leq 13.6$. In Figure~\ref{fig:lime_shap_highlight} we highlight datapoints that belong to this cluster, and for which two additional properties hold. First, the random forest assigns at least score 0.8 to the predicted class. Second, the prediction made by the random forest is not correct. For each highlighted datapoint we apply SHAP and LIME, which gives importance~$i_0$ for feature~$x^0$ and importance~$i_1$ for feature~$x^1$. Next, we compute~$\max(|i_0|, |i_1|) - \min(|i_0|, |i_1|)$, which is high if the absolute importance of one feature is higher than the other. The results are summarized in Figure~\ref{fig:lime_shap_boxplot}, in which we can see that both explanation methods give the impression that one feature is much more important than the other. 
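The importance gap described above can be computed as follows (a minimal sketch assuming the \texttt{shap} package, a fitted random forest \texttt{rf} and the highlighted datapoints \texttt{X\_hl}; the analogous computation applies to the LIME attributions):
\begin{Verbatim}[fontsize=\small]
import numpy as np
import shap

# Per-datapoint gap between the absolute importances of the two
# features: max(|i0|, |i1|) - min(|i0|, |i1|).
explainer = shap.TreeExplainer(rf)
sv = explainer.shap_values(X_hl)
sv = sv[1] if isinstance(sv, list) else sv  # class-1 attributions
gap = np.abs(sv).max(axis=1) - np.abs(sv).min(axis=1)
\end{Verbatim}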
Suppose that the doctor investigates one of the highlighted datapoints. The doctor would get the impression that the model is very confident, because the output score is at least~0.8, while the prediction is actually incorrect. Additionally, SHAP and LIME indicate that one feature is more important than the other. The prediction and explanation combined suggest that the model can be trusted. PERFEX is a crucial tool in this scenario because it would inform the doctor that classifier accuracy tends to be low for similar datapoints. Finally, we investigate why SHAP and LIME indicate that one feature is more important than the other. The classifier decision boundary is shown in Figure~\ref{fig:lime_shap_decision_boundary}. The highlighted datapoints are located close to the boundary. We can see that SHAP and LIME attempt to explain the behavior of the classifier locally, and due to the shape of the boundary both features have varying influence on the predictions. This also confirms our intuition that SHAP and LIME only explain local behavior of the classifier. \section{Case Study: Modality Choices in Mobility} We present a case study in which we apply PERFEX in the context of mobility. Cities are facing a transition from conventional mobility concepts such as cars and bikes to so-called new mobility concepts such as ride sharing and e-scooters~\cite{Schade2014}. To support this transition, policy makers would like to predict and understand existing modality choices for trips in their city. They use a decision support system which uses a classifier to predict the modality that an individual chooses for a trip, based on trip properties as well as personal characteristics. The classes correspond to the modalities: car, car as a passenger, public transport, bike, walk. Each datapoint is a trip consisting of trip properties and characteristics of the traveler. The trip properties define the travel time for each modality, the cost for car and the cost for public transport. For the traveler a datapoint defines whether the traveler has a driving license, whether they own a car, and whether they are the main user of the car. Our dataset consists of 40266 trips from a travel survey conducted by Statistics Netherlands~\cite{CBS2019}. PERFEX is model-agnostic and applies to any base classifier, but for this specific case study we choose a random forest to illustrate the explanations. For prediction we train a random forest with 100 trees and at least 5 datapoints in each leaf. The accuracy of the final model is 0.91. We illustrate PERFEX based on two user questions. \begin{userquestion} When is the model not able to predict public transport trips as such? \end{userquestion} For a mobility researcher analyzing the use of public transport it is important to know whether the model is actually able to label public transport trips as such. This information can be provided to the researcher by applying our method with the recall of public transport as a performance metric. \begin{Verbatim}[fontsize=\footnotesize] There are 4163 trips for which the following conditions hold: travel time public transport > 1800 seconds cost public transport > 0.74 euro travel time bike <= 1809 seconds and for these trips the class recall is 0.07 \end{Verbatim} \begin{userquestion} When does the model assign high scores to both public transport and bike? \end{userquestion} Finally, we consider a mobility researcher who wants to investigate for which trips the model expects that both public transport and bike can be chosen.
In order to answer this question we use a custom performance metric during tree construction. For each datapoint we take the minimum of the predicted scores for public transport and bike, and the metric~$\mathcal{M}$ takes the mean of these values. The mean becomes high if the model assigns a high score to both classes. The explanation below intuitively makes sense: if walking takes a long time and if the traveler does not have a car, then both public transport and biking may be suitable choices. \begin{Verbatim}[fontsize=\footnotesize] There are 100 trips for which the following conditions hold: cost public transport <= 21.78 euro traveler does not own a car travel time walk > 6069 seconds and for these trips the model assigns on average at least score 0.19 to both classes \end{Verbatim} \section{Conclusions} We presented PERFEX, a model-agnostic method to create explanations about the performance of a given base classifier. Our method creates a clustering of a dataset based on a tree structure, such that subsets of data can be distinguished in which a given prediction metric is low or high. PERFEX can be used to, e.g., explain under which circumstances predictions of a model are not accurate, which is highly relevant in the context of building trustworthy decision support systems. Our experiments have shown that PERFEX can be used to create explanations for various datasets and classification models, even if the base classifier hardly differentiates classes. The experiments also show that PERFEX is an important tool in scenarios in which SHAP and LIME are not sufficient to gain trust. PERFEX currently only uses subsets defined by AND-clauses, and therefore we aim to also investigate other types of subsets in future work~\cite{Speakman2016}. \section{Evaluation Results for Other Metrics} The PERFEX method can be applied to any performance metric~$\mathcal{M}$. In the paper we included only the results of the experiments conducted with accuracy. Tables~\ref{table:precision}, \ref{table:recall} and \ref{table:f1} show the results for precision, recall and f1-score (in all cases weighted by support), respectively. This shows that PERFEX can also provide explanations about model performance for other performance metrics. \section{Details on Experimental Setup} In our experiments involving multiple datasets and classifiers we use the Python library scikit-learn 0.24.2 to train the models with default parameters. For PERFEX we set the minimum number of datapoints per leaf (i.e., $\alpha$) to 100 and the maximum tree depth to 6. Furthermore, we do not split a dataset~$\mathcal{X}$ if the metric value difference after the split is smaller than $0.05$. In the case study we use the same settings, except that the tree depth is set to 3. Table~\ref{table:datasets} shows for each dataset the number of datapoints, the number of features and the number of classes. \begin{table}[h] \begin{tabular}{llll} \toprule Dataset & \#datapoints & \#features & \#classes\\ \midrule Abalone & 4177 & 8 & 3\\ Car evaluation & 1728 & 21 & 4 \\ Contraceptive m.c. & 1473 & 24 & 3 \\ Occ. detection & 10000 & 2 & 2 \\ Gauss. blobs & 10000 & 2 & 3\\ Case study data & 40266 & 11 & 5\\ \bottomrule \end{tabular} \caption{\label{table:datasets}Details about datasets used in experiments and case study} \end{table} \section{Source code of PERFEX} The source code of PERFEX can be found as a Python package at \url{https://github.com/erwinwalraven/perfex}.
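As an illustration of the custom metric used in the case study, such a metric can be plugged into the tree construction as follows (a minimal sketch; the class indices are assumptions for illustration):
\begin{Verbatim}[fontsize=\small]
import numpy as np

# Sketch of the case study metric: mean over datapoints of the
# minimum predicted score for public transport and bike.
PT, BIKE = 2, 3   # assumed class indices

def min_score_metric(clf, X, y=None):   # y unused by this metric
    proba = clf.predict_proba(X)
    return np.mean(np.minimum(proba[:, PT], proba[:, BIKE]))
\end{Verbatim}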
\begin{table*}[t] \small \centering \def\arraystretch{0.8} \begin{tabular}{llrrrrrrr} \toprule \textbf{Dataset} & $\mathcal{C}$ & \multicolumn{1}{l}{\textbf{Precision of} $\mathcal{C}$} & \multicolumn{1}{l}{\textbf{Num. leaves}} & \multicolumn{1}{l}{\textbf{Tree depth}} & \multicolumn{1}{l}{\textbf{Min precision}} & \multicolumn{1}{l}{\textbf{Max precision}} & \multicolumn{1}{l}{\textbf{MAE}} & \multicolumn{1}{l}{\textbf{STD AE}} \\ \midrule
\multirow{5}{*}{Abalone} & SVM & 38,6\% & 9 & 6 & 11,1\% & 80,9\% & 8,9\% & 5,8\% \\ & LR & 56,5\% & 8 & 6 & 25,0\% & 83,7\% & 14,1\% & 14,7\% \\ & RF & 52,3\% & 6 & 6 & 33,9\% & 75,3\% & 4,4\% & 1,7\% \\ & DT & 51,4\% & 6 & 6 & 41,1\% & 78,4\% & 2,5\% & 2,0\% \\ & KNN & 49,4\% & 7 & 6 & 27,1\% & 81,2\% & 4,4\% & 3,6\% \\ \midrule
& SVM & 95,3\% & 3 & 3 & 84,7\% & 99,6\% & 3,3\% & 2,9\% \\ Car & LR & 90,6\% & 4 & 4 & 73,6\% & 100,0\% & 2,2\% & 2,9\% \\ evaluation & RF & 95,7\% & 2 & 2 & 88,8\% & 99,7\% & 1,8\% & 0,1\% \\ & DT & 96,2\% & 2 & 2 & 91,6\% & 99,6\% & 2,2\% & 0,8\% \\ & KNN & 86,9\% & 4 & 4 & 67,2\% & 100,0\% & 3,4\% & 3,6\% \\ \midrule
& SVM & 34,0\% & 3 & 3 & 20,3\% & 63,6\% & 15,2\% & 15,2\% \\ Contraceptive & LR & 52,7\% & 4 & 4 & 41,5\% & 64,7\% & 7,5\% & 5,1\% \\ method & RF & 52,3\% & 3 & 3 & 42,0\% & 61,1\% & 6,8\% & 5,7\% \\ choice & DT & 52,4\% & 3 & 3 & 44,1\% & 64,0\% & 3,6\% & 3,4\% \\ & KNN & 47,6\% & 4 & 3 & 35,0\% & 65,4\% & 7,1\% & 4,8\% \\ \midrule
& SVM & 86,5\% & 13 & 6 & 26,3\% & 99,5\% & 3,8\% & 3,4\% \\ Occupancy & LR & 80,8\% & 7 & 6 & 9,1\% & 97,3\% & 4,0\% & 3,6\% \\ detection & RF & 94,4\% & 8 & 6 & 84,2\% & 99,1\% & 3,3\% & 2,7\% \\ & DT & 93,5\% & 9 & 6 & 72,6\% & 98,4\% & 3,9\% & 2,5\% \\ & KNN & 88,3\% & 12 & 6 & 66,1\% & 99,5\% & 3,9\% & 2,5\% \\ \midrule
& SVM & 80,5\% & 10 & 6 & 70,4\% & 100,0\% & 5,8\% & 5,9\% \\ Gaussian & LR & 80,3\% & 7 & 6 & 72,7\% & 100,0\% & 4,9\% & 4,7\% \\ blobs & RF & 78,0\% & 8 & 6 & 71,6\% & 100,0\% & 3,5\% & 2,9\% \\ & DT & 77,6\% & 6 & 6 & 71,7\% & 95,2\% & 3,5\% & 2,9\% \\ & KNN & 75,4\% & 7 & 6 & 65,4\% & 99,1\% & 2,3\% & 2,0\% \\ \bottomrule \end{tabular} \caption{\label{table:precision}Evaluation of PERFEX using several datasets and base classifiers with precision as metric~$\mathcal{M}$} \end{table*} $ $ \\ \newpage $ $ \\ \begin{table*}[t] \small \centering \def\arraystretch{0.8} \begin{tabular}{llrrrrrrr} \toprule \textbf{Dataset} & $\mathcal{C}$ & \multicolumn{1}{l}{\textbf{Recall of }$\mathcal{C}$} & \multicolumn{1}{l}{\textbf{Num.
leaves}} & \multicolumn{1}{l}{\textbf{Tree depth}} & \multicolumn{1}{l}{\textbf{Min recall}} & \multicolumn{1}{l}{\textbf{Max recall}} & \multicolumn{1}{l}{\textbf{MAE}} & \multicolumn{1}{l}{\textbf{STD AE}} \\ \midrule
\multirow{5}{*}{Abalone} & SVM & 53,6\% & 6 & 6 & 33,1\% & 84,3\% & 5,8\% & 4,6\% \\ & LR & 56,5\% & 6 & 6 & 36,5\% & 83,5\% & 5,2\% & 4,2\% \\ & RF & 54,1\% & 7 & 6 & 29,4\% & 81,9\% & 5,1\% & 2,9\% \\ & DT & 53,4\% & 6 & 6 & 42,3\% & 79,5\% & 4,8\% & 2,6\% \\ & KNN & 49,0\% & 7 & 6 & 25,2\% & 80,3\% & 4,5\% & 5,7\% \\ \midrule
& SVM & 95,0\% & 2 & 2 & 87,6\% & 99,1\% & 1,1\% & 0,7\% \\ Car & LR & 90,5\% & 4 & 4 & 74,8\% & 100,0\% & 1,9\% & 2,9\% \\ evaluation & RF & 94,8\% & 2 & 2 & 88,6\% & 98,2\% & 2,2\% & 1,0\% \\ & DT & 96,3\% & 2 & 2 & 90,8\% & 99,4\% & 1,3\% & 0,3\% \\ & KNN & 87,5\% & 4 & 4 & 65,5\% & 99,4\% & 3,8\% & 3,9\% \\ \midrule
& SVM & 44,3\% & 3 & 3 & 33,0\% & 63,3\% & 7,2\% & 5,3\% \\ Contraceptive & LR & 53,4\% & 3 & 3 & 41,1\% & 63,8\% & 8,8\% & 2,0\% \\ method & RF & 52,7\% & 3 & 3 & 40,8\% & 63,8\% & 8,2\% & 6,8\% \\ choice & DT & 51,6\% & 3 & 3 & 38,7\% & 60,4\% & 3,0\% & 0,5\% \\ & KNN & 48,2\% & 4 & 3 & 36,0\% & 62,4\% & 9,5\% & 6,2\% \\ \midrule
& SVM & 86,8\% & 10 & 6 & 31,1\% & 98,3\% & 3,4\% & 2,3\% \\ Occupancy & LR & 81,9\% & 10 & 6 & 24,9\% & 98,2\% & 4,1\% & 2,9\% \\ detection & RF & 94,6\% & 9 & 6 & 81,1\% & 99,1\% & 2,6\% & 2,0\% \\ & DT & 93,5\% & 9 & 6 & 72,8\% & 97,8\% & 3,4\% & 1,9\% \\ & KNN & 88,6\% & 8 & 6 & 64,5\% & 98,6\% & 3,5\% & 2,1\% \\ \midrule
& SVM & 80,3\% & 7 & 6 & 74,0\% & 99,5\% & 1,9\% & 1,9\% \\ Gaussian & LR & 80,2\% & 9 & 6 & 71,1\% & 100,0\% & 2,3\% & 2,6\% \\ blobs & RF & 78,5\% & 7 & 6 & 72,3\% & 99,2\% & 2,0\% & 3,3\% \\ & DT & 77,2\% & 8 & 6 & 69,0\% & 100,0\% & 2,8\% & 2,1\% \\ & KNN & 75,6\% & 9 & 6 & 66,1\% & 99,5\% & 2,3\% & 2,1\% \\ \bottomrule \end{tabular} \caption{\label{table:recall}Evaluation of PERFEX using several datasets and base classifiers with recall as metric~$\mathcal{M}$} \end{table*} $ $ \\ \newpage $ $ \\ \begin{table*}[t] \small \centering \def\arraystretch{0.8} \begin{tabular}{lllrrrrrrr} \toprule \textbf{Dataset} & $\mathcal{C}$ & \textbf{Split metric} & \multicolumn{1}{l}{\textbf{F1-score of} $\mathcal{C}$} & \multicolumn{1}{l}{\textbf{Num.
leaves}} & \multicolumn{1}{l}{\textbf{Tree depth}} & \multicolumn{1}{l}{\textbf{Min f1-score}} & \multicolumn{1}{l}{\textbf{Max f1-score}} & \multicolumn{1}{l}{\textbf{MAE}} & \multicolumn{1}{l}{\textbf{STD AE}} \\ \midrule
\multirow{5}{*}{Abalone} & SVM & f1-score & 44,3\% & 9 & 6 & 16,1\% & 77,1\% & 6,5\% & 5,9\% \\ & LR & f1-score & 53,5\% & 6 & 6 & 35,3\% & 77,0\% & 7,9\% & 5,7\% \\ & RF & f1-score & 52,8\% & 6 & 6 & 33,7\% & 75,5\% & 5,2\% & 2,2\% \\ & DT & f1-score & 51,3\% & 7 & 6 & 18,7\% & 77,7\% & 5,4\% & 2,7\% \\ & KNN & f1-score & 49,1\% & 6 & 6 & 31,9\% & 80,9\% & 4,3\% & 2,9\% \\ \midrule
& SVM & f1-score & 94,9\% & 2 & 2 & 87,4\% & 99,1\% & 1,2\% & 0,6\% \\ Car & LR & f1-score & 90,2\% & 4 & 4 & 73,2\% & 100,0\% & 2,2\% & 2,9\% \\ evaluation & RF & f1-score & 95,2\% & 2 & 2 & 89,3\% & 98,4\% & 1,9\% & 1,2\% \\ & DT & f1-score & 96,6\% & 2 & 2 & 91,3\% & 99,4\% & 2,3\% & 1,2\% \\ & KNN & f1-score & 86,9\% & 4 & 4 & 65,0\% & 99,7\% & 3,4\% & 3,6\% \\ \midrule
& SVM & f1-score & 35,2\% & 3 & 3 & 20,1\% & 59,6\% & 6,3\% & 6,5\% \\ Contraceptive & LR & f1-score & 52,7\% & 3 & 3 & 40,0\% & 62,4\% & 8,3\% & 3,1\% \\ method & RF & f1-score & 51,1\% & 3 & 3 & 39,4\% & 60,4\% & 4,0\% & 3,1\% \\ choice & DT & f1-score & 52,0\% & 3 & 3 & 40,5\% & 65,8\% & 3,7\% & 1,9\% \\ & KNN & f1-score & 47,7\% & 4 & 3 & 33,3\% & 63,7\% & 8,2\% & 6,2\% \\ \midrule
& SVM & f1-score & 85,6\% & 7 & 6 & 44,9\% & 97,9\% & 4,2\% & 3,6\% \\ Occupancy & LR & f1-score & 81,1\% & 8 & 6 & 15,5\% & 98,1\% & 4,6\% & 3,7\% \\ detection & RF & f1-score & 94,5\% & 9 & 6 & 74,1\% & 100,0\% & 2,4\% & 2,2\% \\ & DT & f1-score & 93,5\% & 9 & 6 & 76,6\% & 98,3\% & 4,8\% & 2,7\% \\ & KNN & f1-score & 88,4\% & 10 & 6 & 64,5\% & 98,9\% & 4,1\% & 3,5\% \\ \midrule
& SVM & f1-score & 80,4\% & 8 & 6 & 74,2\% & 100,0\% & 3,0\% & 4,4\% \\ Gaussian & LR & f1-score & 80,2\% & 9 & 6 & 73,1\% & 100,0\% & 3,7\% & 5,2\% \\ blobs & RF & f1-score & 78,3\% & 8 & 6 & 70,0\% & 98,9\% & 2,8\% & 2,3\% \\ & DT & f1-score & 77,3\% & 8 & 6 & 69,7\% & 98,7\% & 4,2\% & 3,2\% \\ & KNN & f1-score & 75,5\% & 8 & 6 & 68,5\% & 100,0\% & 4,9\% & 2,9\% \\ \bottomrule \end{tabular} \caption{\label{table:f1}Evaluation of PERFEX using several datasets and base classifiers with f1-score as metric~$\mathcal{M}$} \end{table*}
\section{Introduction} One of the important machine learning tasks is to compare pairs of objects, for example, pairs of images, pairs of data vectors, etc. There are a lot of approaches for solving this task. One of the approaches is based on computing a corresponding pairwise metric function which measures a distance between data vectors or a similarity between the vectors. This approach is called metric learning \cite{Bellet-etal-2013,Kulis-2012,Zheng-etal-2016}. It is pointed out by Bellet et al. \cite{Bellet-etal-2013} in their review paper that metric learning aims to adapt the pairwise real-valued metric function, for example, the Mahalanobis distance or the Euclidean distance, to a problem of interest using the information provided by training data. A detailed description of metric learning approaches is also represented by Le Capitaine \cite{LeCapitaine-2016} and by Kulis \cite{Kulis-2012}. The basic idea underlying the metric learning solution is that the distance between similar objects should be smaller than the distance between different objects. Suppose there is a training set $S=\{((\mathbf{x}_{i},\mathbf{x}_{j}),y_{ij}),\ (i,j)\in K\}$ consisting of $N$ pairs of examples $\mathbf{x}_{i}\in \mathbb{R}^{m}$ and $\mathbf{x}_{j}\in \mathbb{R}^{m}$ such that a binary label $y_{ij}\in \{0,1\}$ is assigned to every pair $(\mathbf{x}_{i},\mathbf{x}_{j})$. If two data vectors $\mathbf{x}_{i}$ and $\mathbf{x}_{j}$ are semantically similar or belong to the same class of objects, then $y_{ij}$ takes the value $0$. If the vectors correspond to different or semantically dissimilar objects, then $y_{ij}$ takes the value $1$. This implies that the training set $S$ can be divided into two subsets. The first subset is called the similar or positive set and is defined as \[ \mathcal{S}=\{(\mathbf{x}_{i},\mathbf{x}_{j}):\mathbf{x}_{i}\text{ and }\mathbf{x}_{j}\ \text{are semantically similar and }y_{ij}=0\}. \] The second subset is the dissimilar or negative set. It is defined as \[ \mathcal{D}=\{(\mathbf{x}_{i},\mathbf{x}_{j}):\mathbf{x}_{i}\text{ and }\mathbf{x}_{j}\ \text{are semantically dissimilar and }y_{ij}=1\}. \] If we have two observation vectors $\mathbf{x}_{i}\in \mathbb{R}^{m}$ and $\mathbf{x}_{j}\in \mathbb{R}^{m}$ from the training set, then the distance $d(\mathbf{x}_{i},\mathbf{x}_{j})$ should be minimized if $\mathbf{x}_{i}$ and $\mathbf{x}_{j}$ are semantically similar, and it should be maximized between dissimilar $\mathbf{x}_{i}$ and $\mathbf{x}_{j}$. The most general and popular real-valued metric function is the squared Mahalanobis distance $d_{M}^{2}(\mathbf{x}_{i},\mathbf{x}_{j})$, which is defined for vectors $\mathbf{x}_{i}$ and $\mathbf{x}_{j}$ as \[ d_{M}^{2}(\mathbf{x}_{i},\mathbf{x}_{j})=(\mathbf{x}_{i}-\mathbf{x}_{j})^{\mathrm{T}}M(\mathbf{x}_{i}-\mathbf{x}_{j}). \] Here $M\in \mathbb{R}^{m\times m}$ is a symmetric positive semi-definite matrix. If $\mathbf{x}_{i}$ and $\mathbf{x}_{j}$ are random vectors from the same distribution with covariance matrix $C$, then $M=C^{-1}$. If $M$ is the identity matrix, then $d_{M}^{2}(\mathbf{x}_{i},\mathbf{x}_{j})$ is the squared Euclidean distance.
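A minimal numerical sketch of this distance (assuming numpy; the data here are synthetic):
\begin{Verbatim}[fontsize=\small]
import numpy as np

# Squared Mahalanobis distance d_M^2(x_i, x_j) for a symmetric
# positive semi-definite matrix M.
def mahalanobis_sq(x_i, x_j, M):
    d = x_i - x_j
    return float(d @ M @ d)

X = np.random.randn(100, 3)                  # synthetic data
M = np.linalg.inv(np.cov(X, rowvar=False))   # M = C^{-1}
print(mahalanobis_sq(X[0], X[1], M))
\end{Verbatim}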
Given subsets $\mathcal{S}$ and $\mathcal{D}$, the metric learning optimization problem can be formulated as follows: \[ M^{\ast}=\arg \min_{M}\left[ J(M,\mathcal{D},\mathcal{S})+\lambda \cdot R(M)\right] , \] where $J(M,\mathcal{D},\mathcal{S})$ is a loss function that penalizes violated constraints, $R(M)$ is some regularizer on $M$, and $\lambda \geq0$ is the regularization parameter. There are many useful loss functions $J$ which take into account the condition that the distance between similar objects should be smaller than the distance between different objects. These functions define a number of learning methods. It should be noted that the learning methods using the Mahalanobis distance assume some linear structure of the data. If this is not valid, then the kernelization of linear methods is one of the possible ways for solving the metric learning problem. Bellet et al. \cite{Bellet-etal-2013} review several approaches and algorithms to deal with nonlinear forms of metrics. In particular, these are the Support Vector Metric Learning algorithm provided by Xu et al. \cite{Xu-Weinberger-Chapelle-2012}, the Gradient-Boosted Large Margin Nearest Neighbors method proposed by Kedem et al. \cite{Kedem-etal-2012}, and the Hamming Distance Metric Learning algorithm provided by Norouzi et al. \cite{Norouzi-etal-2012}. A powerful implementation of metric learning dealing with non-linear data structures is the so-called Siamese neural network introduced by Bromley et al. \cite{Bromley-etal-1993} in order to solve signature verification as a problem of image matching. This network consists of two identical sub-networks joined at their outputs. The two sub-networks extract features from two input examples during training, while the joining neuron measures the distance between the two feature vectors. The Siamese architecture has been exploited in many applications, for example, in face verification \cite{Chopra-etal-2005}, in one-shot learning in which predictions are made given only a single example of each new class \cite{Koch-etal-2015}, in constructing an inertial gesture classification \cite{Berlemont-etal-2015}, in deep learning \cite{Wang-etal-2016}, in extracting speaker-specific information \cite{Chen-Salman-2011}, and for face verification in the wild \cite{Hu-Lu-Tan-2014}. This is only a part of the successful applications of Siamese neural networks. Many modifications of Siamese networks have been developed, including fully-convolutional Siamese networks \cite{Bertinetto-etal-2016}, Siamese networks combined with a gradient boosting classifier \cite{Leal-Taixe-etal-2016}, and Siamese networks with the triangular similarity metric \cite{Zheng-etal-2016}. One of the difficulties of the Siamese neural network, as well as of other neural networks, is that limited training data lead to overfitting during training. Many different methods have been developed to prevent overfitting, for example, dropout methods \cite{Srivastava-etal-2014}, which are based on combining the results of different networks by randomly dropping out neurons in the network. A very interesting new method which can be regarded as an alternative to deep neural networks is the deep forest proposed by Zhou and Feng \cite{Zhou-Feng-2017} and called the gcForest. In fact, this is a multi-layer structure where each layer contains many random forests, i.e., it is an ensemble of decision tree ensembles. Zhou and Feng \cite{Zhou-Feng-2017} point out that their approach is highly competitive to deep neural networks.
In contrast to deep neural networks, which require great effort in hyperparameter tuning and large-scale training data, gcForest is much easier to train and can work well when only small-scale training data are available. The deep forest solves tasks of classification as well as regression. Therefore, by taking into account its advantages, it is important to modify it in order to develop a structure solving the metric learning task. We propose the so-called Siamese Deep Forest (SDF), which can be regarded as an alternative to the Siamese neural networks, is based on the gcForest proposed by Zhou and Feng \cite{Zhou-Feng-2017}, and can be viewed as its modification. Three main ideas underlying the SDF can be formulated as follows: \begin{enumerate} \item We propose to modify the training set by using concatenated pairs of vectors. \item We define the class distributions in the deep forest as the weighted sum of the tree class probabilities, where the weights are determined in order to reduce distances between semantically similar pairs of examples and to increase them between dissimilar pairs. The weights are training parameters of the SDF. \item We apply a greedy algorithm for training the SDF, i.e., the weights are successively computed for every layer or level of the forest cascade. \end{enumerate} We consider the case of weakly supervised learning \cite{Bellet-etal-2013}, when there is no information about the class labels of individual training examples, but only information in the form of the sets $\mathcal{S}$ and $\mathcal{D}$ is provided, i.e., we know only the semantic similarity of pairs of training data. However, the case of fully supervised learning, when the class labels of individual training examples are known, can be considered in the same way. It should be noted that the SDF cannot be called Siamese in the true sense of the word. It does not consist of two gcForests like the Siamese neural network. However, its aim coincides with the Siamese network aim. Therefore, we keep this name for the gcForest modification. The paper is organized as follows. Section 2 gives a very short introduction into the Siamese neural networks. A short description of the gcForest proposed by Zhou and Feng \cite{Zhou-Feng-2017} is given in Section 3. The ideas underlying the SDF are represented in Section 4 in detail. A modification of the gcForest using the weighted averages, which can be regarded as a basis of the SDF, is provided in Section 5. Algorithms for training and testing the SDF are considered in Section 6. Numerical experiments with real data illustrating cases when the proposed SDF outperforms the gcForest are given in Section 7. Concluding remarks are provided in Section 8. \section{Siamese neural networks} Before studying the SDF, we consider the Siamese neural network, which is an efficient and popular tool for dealing with data of the form $\mathcal{S}$ and $\mathcal{D}$. It will be a basis for constructing the SDF. A standard architecture of the Siamese network given in the literature (see, for example, \cite{Chopra-etal-2005}) is shown in Fig. \ref{fig:siamese_net}. Let $\mathbf{x}_{i}$ and $\mathbf{x}_{j}$ be two data vectors corresponding to a pair of elements from a training set, for example, images. Suppose that $f$ is a map of $\mathbf{x}_{i}$ and $\mathbf{x}_{j}$ to a low-dimensional space such that it is implemented as a neural network with the weight matrix $W$.
The parameters $W$ are shared by the two neural networks $f(\mathbf{x}_{i})$ and $f(\mathbf{x}_{j})$, denoted as $E_{1}$ and $E_{2}$ and corresponding to different input vectors, i.e., they are the same for the two neural networks. The property of the same parameters in the Siamese neural network is very important because it defines the corresponding training algorithm. By comparing the outputs $\mathbf{h}_{i}=f(\mathbf{x}_{i})$ and $\mathbf{h}_{j}=f(\mathbf{x}_{j})$ using the Euclidean distance $d(\mathbf{h}_{i},\mathbf{h}_{j})$, we measure the compatibility between $\mathbf{x}_{i}$ and $\mathbf{x}_{j}$. \begin{figure}[ptb] \begin{center} \includegraphics[height=1.5805in, width=2.0997in]{siamese_network_1.png} \caption{An architecture of the Siamese neural network} \label{fig:siamese_net} \end{center} \end{figure} If we assume for simplicity that the neural network has one hidden layer, then there holds \[ \mathbf{h}=\sigma(W\mathbf{x}+b). \] Here $\sigma(z)$ is an activation function; $W$ is the weight $p\times M$ matrix such that its element $w_{ij}$ is the weight of the connection between unit $j$ in the input layer and unit $i$ in the hidden layer, $i=1,...,p$, $j=1,...,M$; $b=(b_{1},...,b_{p})$ is a bias vector; $\mathbf{h}=(h_{1},...,h_{p})$ is the vector of neuron activations, which depends on the input vector $\mathbf{x}$. The Siamese neural network is trained on pairs of observations by using specific loss functions, for example, the following contrastive loss function: \begin{equation} l(\mathbf{x}_{i},\mathbf{x}_{j},y_{ij})=\left\{ \begin{array}[c]{cc} \left\Vert \mathbf{h}_{i}-\mathbf{h}_{j}\right\Vert_{2}^{2}, & y_{ij}=0,\\ \max(0,\tau-\left\Vert \mathbf{h}_{i}-\mathbf{h}_{j}\right\Vert_{2}^{2}), & y_{ij}=1, \end{array} \right. \label{SiamDF_20} \end{equation} where $\tau$ is a predefined threshold. Hence, the total error function to be minimized is defined as \[ J(W,b)=\sum \nolimits_{i,j}l(\mathbf{x}_{i},\mathbf{x}_{j},y_{ij})+\mu R(W,b). \] Here $R(W,b)$ is a regularization term added to improve generalization of the neural network, and $\mu$ is a hyper-parameter which controls the strength of the regularization. The above problem can be solved by using the stochastic gradient descent scheme. \section{Deep Forest} According to \cite{Zhou-Feng-2017}, the gcForest generates a deep forest ensemble with a cascade structure. Representation learning in deep neural networks mostly relies on the layer-by-layer processing of raw features. The gcForest representational learning ability can be further enhanced by the so-called multi-grained scanning. Each level of the cascade structure receives feature information processed by its preceding level, and outputs its processing result to the next level. Moreover, each cascade level is an ensemble of decision tree forests. We do not consider in detail the Multi-Grained Scanning, where sliding windows are used to scan the raw features, because this part of the deep forest is the same in the SDF.
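For concreteness, the contrastive loss (\ref{SiamDF_20}) of the previous section can be sketched in code as follows (a minimal illustration; the embeddings $\mathbf{h}_{i}$ and $\mathbf{h}_{j}$ are assumed to be given as numpy arrays):
\begin{Verbatim}[fontsize=\small]
import numpy as np

# Contrastive loss on a pair of embeddings h_i, h_j.
def contrastive_loss(h_i, h_j, y_ij, tau=1.0):
    d2 = float(np.sum((h_i - h_j) ** 2))  # squared Euclidean dist.
    if y_ij == 0:                         # similar pair
        return d2
    return max(0.0, tau - d2)             # dissimilar pair
\end{Verbatim}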
However, the most interesting component of the gcForest from the SDF construction point of view is the cascade forest. \begin{figure}[ptb] \begin{center} \includegraphics[height=1.8502in, width=4.1537in]{forest_cascade.png} \caption{The architecture of the cascade forest \protect\cite{Zhou-Feng-2017}} \label{fig:cascade_forest} \end{center} \end{figure} Given an instance, each forest produces an estimate of the class distribution by counting the percentage of different classes of examples at the leaf node into which the concerned instance falls, and then averaging across all trees in the same forest. The class distribution forms a class vector, which is then concatenated with the original vector to be input to the next level of the cascade. The usage of the class vector as a result of the random forest classification is very similar to the idea underlying the stacking method \cite{Wolpert-1992}. The stacking algorithm trains the first-level learners using the original training data set. Then it generates a new data set for training the second-level learner (meta-learner) such that the outputs of the first-level learners are regarded as input features for the second-level learner, while the original labels are still regarded as labels of the new training data. In fact, the class vectors in the gcForest can be viewed as the meta-learners. In contrast to the stacking algorithm, the gcForest simultaneously uses the original vector and the class vectors (meta-learners) at the next level of the cascade by means of their concatenation. This implies that the feature vector grows after every cascade level. The architecture of the cascade proposed by Zhou and Feng \cite{Zhou-Feng-2017} is shown in Fig. \ref{fig:cascade_forest}. It can be seen from the figure that each level of the cascade consists of two different pairs of random forests which generate 3-dimensional class vectors concatenated with each other and with the original input. After the last level, we have the feature representation of the input feature vector, which can be classified in order to get the final prediction. Zhou and Feng \cite{Zhou-Feng-2017} propose to use different forests at every level in order to provide the diversity which is an important requirement for the random forest construction. \section{Three ideas underlying the SDF} The SDF aims to function like the standard Siamese neural network. This implies that the SDF should provide small distances between semantically similar pairs of vectors and large distances between dissimilar pairs. We propose three main ideas underlying the SDF: \begin{enumerate} \item Denote the set of indices of all pairs $\mathbf{x}_{i}$ and $\mathbf{x}_{j}$ as $K=\{(i,j)\}$. We train every tree by using the concatenation of two vectors $\mathbf{x}_{i}$ and $\mathbf{x}_{j}$ such that the class $y_{ij}\in \{0,1\}$ is defined by the semantic similarity of the vectors. In fact, the trees are trained on the basis of two classes and reflect the semantic similarity of pairs, but not classes of separate examples. With this concatenation, we define a new set of classes such that we do not need to know separate classes for $\mathbf{x}_{i}$ or for $\mathbf{x}_{j}$. As a result, we have a new training set $R=\{((\mathbf{x}_{i},\mathbf{x}_{j}),y_{ij}),\ (i,j)\in K\}$ and exploit only the information about the semantic similarity. The concatenation is not necessary when the classes of training elements are known, i.e., we have a set of labels $\{y_{1},...,y_{n}\}$.
In this case, only the second idea can be applied.
\item We partially use some modification of the ideas provided by Xiong et al. \cite{Xiong-etal-2012} and Dong et al. \cite{Dong-Du-Zhang-2015}. In particular, Xiong et al. \cite{Xiong-etal-2012} considered an algorithm for solving the metric learning problem by means of random forests. The proposed metric is able to implicitly adapt its distance function throughout the feature space. Dong et al. \cite{Dong-Du-Zhang-2015} proposed a random forest metric learning (RFML) algorithm, which combines semi-multiple metrics with random forests to better separate the desired targets and background in detecting and identifying target pixels based on specific spectral signatures in hyperspectral image processing. A common idea underlying the metric learning algorithms in \cite{Xiong-etal-2012} and \cite{Dong-Du-Zhang-2015} is that the distance measure between a pair of training elements $\mathbf{x}_{i},\mathbf{x}_{j}$ for a combination of trees is defined as the average of some special functions of the training elements. For example, if a random forest is a combination of $T$ decision trees $\{f_{t}(\mathbf{x}),t=1,...,T\}$, then the distance measure is
\[
d(\mathbf{x}_{i},\mathbf{x}_{j})=T^{-1}\sum_{t=1}^{T}f_{t}(\psi(\mathbf{x}_{i},\mathbf{x}_{j})).
\]
Here $\psi(\mathbf{x}_{i},\mathbf{x}_{j})$ is a mapping function which is specifically defined in \cite{Xiong-etal-2012} and \cite{Dong-Du-Zhang-2015}. We combine the above ideas with the idea of class probability distributions provided in \cite{Zhou-Feng-2017} in order to produce a new feature vector after every level of the cascade forest. According to \cite{Zhou-Feng-2017}, each forest of a cascade level produces an estimate of the class probability distribution by counting the percentage of different classes of training examples at the leaf node where the concerned instance falls into, and then averaging across all trees in the same forest. Our idea is to define the forest class distribution as a weighted sum of the tree class probabilities, where the weights are computed in an optimal way in order to reduce distances between similar pairs and to increase them between dissimilar pairs. The obtained weights are very similar to the weights of the connections between neurons in a neural network, which are also computed during training. The trained values of the weights in the SDF are determined in accordance with a loss function defining the properties of the SDF, just as in a neural network. Due to this similarity, we will sometimes call the levels of the cascade layers. It should also be noted that the first idea can be sufficient for implementing the SDF because the additional features (the class vectors) produced by the previous cascade levels partly reflect the semantic similarity of pairs of examples. However, in order to enhance the discriminative capability of the SDF, we modify the corresponding class distributions.
\item We apply a greedy algorithm for training the SDF, that is, we train every level separately, starting from the first level, such that every next level uses the results of training at the previous level. In contrast to many neural networks, the weights considered above are successively computed for every layer or level of the forest cascade.
\end{enumerate}
\section{The SDF construction}
Let us introduce notations for indices corresponding to different deep forest components. The indices and their sets of values are shown in Table \ref{t:SiamDF_1}.
One can see from Table \ref{t:SiamDF_1} that there are $Q$ levels of the deep forest or the cascade, every level contains $M_{q}$ forests, and the $k$-th forest consists of $T_{k,q}$ trees. If we use the concatenation of two vectors $\mathbf{x}_{i}$ and $\mathbf{x}_{j}$ for defining new classes of semantically similar and dissimilar pairs, then the number of classes is $2$. It should be noted that the class $c$ corresponds to the label $y_{ij}\in\{0,1\}$ of a training example from the set $R$.
\begin{table}[tbp]
\centering
\caption{Notations for indices}
\begin{tabular}[c]{cc}\hline
type & index\\\hline
cascade level & $q=1,...,Q$\\\hline
forest & $k=1,...,M_{q}$\\\hline
tree & $t=1,...,T_{k,q}$\\\hline
class & $c=0,1$\\\hline
\end{tabular}
\label{t:SiamDF_1}
\end{table}
Suppose we have trained the trees in the SDF. One of the approaches underlying the deep forest is that the class distribution forms a class vector which is then concatenated with the original vector to be an input to the next level of the cascade. Suppose a pair of the original vectors is $(\mathbf{x}_{i},\mathbf{x}_{j})$, and $p_{ij,c}^{(t,k,q)}$ is the probability of class $c$ for the pair $(\mathbf{x}_{i},\mathbf{x}_{j})$ produced by the $t$-th tree from the $k$-th forest at the cascade level $q$. Below we use the triple index $(t,k,q)$ in order to indicate that the element belongs to the $t$-th tree from the $k$-th forest at the cascade level $q$; the same holds for subsets of the triple. Then, according to \cite{Zhou-Feng-2017}, the element $v_{ij,c}^{(k,q)}$ of the class vector corresponding to class $c$ and produced by the $k$-th forest in the gcForest is determined as
\[
v_{ij,c}^{(k,q)}=T_{k,q}^{-1}\sum_{t=1}^{T_{k,q}}p_{ij,c}^{(t,k,q)}.
\]
Denote the obtained class vector as $\mathbf{v}_{ij}^{(k,q)}=(v_{ij,0}^{(k,q)},v_{ij,1}^{(k,q)})$. Then the concatenated vector $\mathbf{x}_{ij}^{(1)}$ after the first level of the cascade is
\[
\mathbf{x}_{ij}^{(1)}=\left( \mathbf{x}_{i},\mathbf{x}_{j},\mathbf{v}_{ij}^{(1,1)},...,\mathbf{v}_{ij}^{(M_{1},1)}\right) =\left( \mathbf{x}_{i},\mathbf{x}_{j},\mathbf{v}_{ij}^{(k,1)},k=1,...,M_{1}\right) .
\]
It is composed of the original vectors $\mathbf{x}_{i}$, $\mathbf{x}_{j}$ and $M_{1}$ class vectors obtained from $M_{1}$ forests at the first level. In the same way, we can write the concatenated vector $\mathbf{x}_{ij}^{(q)}$ after the $q$-th level of the cascade as
\begin{align}
\mathbf{x}_{ij}^{(q)} & =\left( \mathbf{x}_{i}^{(q-1)},\mathbf{x}_{j}^{(q-1)},\mathbf{v}_{ij}^{(1,q)},...,\mathbf{v}_{ij}^{(M_{q},q)}\right) \nonumber\\
& =\left( \mathbf{x}_{i}^{(q-1)},\mathbf{x}_{j}^{(q-1)},\mathbf{v}_{ij}^{(k,q)},\ k=1,...,M_{q}\right) . \label{SiamDF_40}
\end{align}
In order to reduce the number of indices, we omit the index $q$ below because all derivations concern a single level $q$, where $q$ may be arbitrary from $1$ to $Q$. We also replace the notations $M_{q}$ and $T_{k,q}$ with $M$ and $T_{k}$, respectively, bearing in mind that the number of forests and the numbers of trees may depend on the cascade level. The vector $\mathbf{x}_{ij}$ in (\ref{SiamDF_40}) has been derived in accordance with the gcForest algorithm \cite{Zhou-Feng-2017}. However, in order to implement the SDF, we propose to change the method for computing the elements $v_{ij,c}^{(k)}$ of the class vector, namely, the averaging is replaced with the weighted sum of the form:
\begin{equation}
v_{ij,c}^{(k)}=\sum_{t=1}^{T_{k}}p_{ij,c}^{(t,k)}w^{(t,k)}. \label{SiamDF_41}
\end{equation}
Here $w^{(t,k)}$ is the weight for combining the class probabilities of the $t$-th tree from the $k$-th forest at the cascade level $q$. The weights play a key role in implementing the SDF.
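As an illustration, the following NumPy sketch (the function name and array shapes are our assumptions) computes the weighted class vector (\ref{SiamDF_41}) for one forest from the tree-level class probabilities; with identical weights $w^{(t,k)}=1/T_{k}$ it reduces to the plain averaging of the gcForest.
\begin{verbatim}
import numpy as np

def weighted_class_vector(p, w):
    # p: array of shape (T_k, 2); p[t, c] is the probability of
    #    class c produced by tree t for the pair (x_i, x_j)
    # w: array of shape (T_k,); non-negative weights summing to one
    return p.T @ w   # v_c = sum_t p[t, c] * w[t]

# Tree probabilities from the weighted-averaging illustration below
# (assumed values); identical weights reproduce the gcForest case.
p = np.array([[0.5, 0.5], [0.4, 0.6], [1.0, 0.0]])
w = np.full(3, 1.0 / 3.0)
print(weighted_class_vector(p, w))  # [0.6333..., 0.3666...]
\end{verbatim}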
An illustration of the weighted averaging is shown in Fig. \ref{fig:weighted_class}, where we partly modify a picture from \cite{Zhou-Feng-2017} (the left part is copied from \cite[Fig. 2]{Zhou-Feng-2017}) in order to show how the elements of the class vector are derived as a simple weighted sum. It can be seen from Fig. \ref{fig:weighted_class} that the two-class distribution is estimated by counting the percentage of different classes ($y_{ij}=0$ or $y_{ij}=1$) of the new concatenated training examples $\left( \mathbf{x}_{i},\mathbf{x}_{j}\right) $ at the leaf node where the concerned example $\left( \mathbf{x}_{i},\mathbf{x}_{j}\right) $ falls into. Then the class vector of $\left( \mathbf{x}_{i},\mathbf{x}_{j}\right) $ is computed as the weighted average. It is important to note that we weight the trees belonging to one of the forests, but not the classes, i.e., the weights do not depend on the class $c$. Moreover, the weights characterize trees, but not training elements. This implies that they do not depend on the vectors $\mathbf{x}_{i}$, $\mathbf{x}_{j}$ either. One can also see from Fig. \ref{fig:weighted_class} that the augmented features $v_{ij,0}^{(k)}$ and $v_{ij,1}^{(k)}$ of the class vector corresponding to the $k$-th forest are obtained as the weighted sums
\begin{align*}
v_{ij,0}^{(k)} & =0.5\cdot w^{(1,k)}+0.4\cdot w^{(2,k)}+1\cdot w^{(3,k)},\\
v_{ij,1}^{(k)} & =0.5\cdot w^{(1,k)}+0.6\cdot w^{(2,k)}+0\cdot w^{(3,k)}.
\end{align*}
The weights are restricted by the following obvious condition:
\begin{equation}
\sum_{t=1}^{T_{k}}w^{(t,k)}=1. \label{SiamDF_42}
\end{equation}
In other words, we have weighted averages for every forest, and the corresponding weights can be regarded as trained parameters whose purpose is to decrease the distance between semantically similar $\mathbf{x}_{i}$ and $\mathbf{x}_{j}$ and to increase the distance between dissimilar $\mathbf{x}_{i}$ and $\mathbf{x}_{j}$. Therefore, we have to develop a way for training the SDF, i.e., for computing the weights for every forest and for every cascade level.
\begin{figure}[ptb]
\begin{center}
\includegraphics[height=1.4355in, width=5.7in]{weighted_class_vector_gen_2.png}
\caption{An illustration of the class vector generation taking into account the weights}
\label{fig:weighted_class}
\end{center}
\end{figure}
Now we have the numbers $v_{ij,c}^{(k)}$ for every class. Let us analyze these numbers from the point of view of the SDF aim. First, we consider the case when $(\mathbf{x}_{i},\mathbf{x}_{j})\in\mathcal{S}$ and $y_{ij}=0$. However, we may have non-zero $v_{ij,c}^{(k)}$ for both classes. It is obvious that $v_{ij,0}^{(k)}$ (the average probability of class $c=0$) should be as large as possible because $c=y_{ij}=0$. Moreover, $v_{ij,1}^{(k)}$ (the average probability of class $c=1$) should be as small as possible because $c\neq y_{ij}=0$. We can similarly write the conditions for the case when $(\mathbf{x}_{i},\mathbf{x}_{j})\in\mathcal{D}$ and $y_{ij}=1$. In this case, $v_{ij,0}^{(k)}$ should be as small as possible because $c\neq y_{ij}=1$, and $v_{ij,1}^{(k)}$ should be as large as possible because $c=y_{ij}=1$. In sum, we should increase (decrease) $v_{ij,c}^{(k)}$ if $c=y_{ij}$ ($c\neq y_{ij}$).
In other words, we have to find the weights maximizing (minimizing) $v_{ij,c}^{(k)}$ when $c=y_{ij}$ ($c\neq y_{ij}$). The ideal case is $v_{ij,c}^{(k)}=1$ for $c=y_{ij}$ and $v_{ij,c}^{(k)}=0$ for $c\neq y_{ij}$. However, the vector of weights has to be the same for every class; it does not depend on a particular class. At first glance, we could find the optimal weights for every individual forest separately from the other forests. However, we should analyze all forests simultaneously because some vectors of weights may compensate for those vectors which cannot efficiently separate $v_{ij,0}^{(k)}$ and $v_{ij,1}^{(k)}$.
\section{The SDF training and testing}
We apply a greedy algorithm for training the SDF, namely, we train every level separately, starting from the first level, such that every next level uses the results of training at the previous level. The training process at every level consists of two parts. The first part aims to train all trees by applying all pairs of training examples. This part does not significantly differ from the training of the original deep forest proposed by Zhou and Feng \cite{Zhou-Feng-2017}. The difference is that we use pairs of concatenated vectors $(\mathbf{x}_{i},\mathbf{x}_{j})$ and two classes corresponding to the semantic similarity of the pairs. The second part is to compute the weights $w^{(t,k)}$, $t=1,...,T_{k}$. This can be done by minimizing the following objective function over $M$ unit (probability) simplices in $\mathbb{R}^{T_{k}}$ denoted as $\Delta_{k}$, i.e., over non-negative vectors $\mathbf{w}^{(k)}=(w^{(1,k)},...,w^{(T_{k},k)})\in\Delta_{k}$, $k=1,...,M$, that sum up to one:
\begin{equation}
\min_{\mathbf{w}}J_{q}(\mathbf{w})=\min_{\mathbf{w}}\sum_{i,j}l(\mathbf{x}_{i},\mathbf{x}_{j},y_{ij},\mathbf{w})+\lambda R(\mathbf{w}). \label{SiamDF_50}
\end{equation}
Here $\mathbf{w}$ is the vector produced as the concatenation of the vectors $\mathbf{w}^{(k)}$, $k=1,...,M$, $R(\mathbf{w})$ is a regularization term, and $\lambda$ is a hyper-parameter which controls the strength of the regularization. We define the regularization term as
\[
R(\mathbf{w})=\left\Vert \mathbf{w}\right\Vert ^{2}.
\]
The loss function has to increase the values of the augmented features $v_{ij,0}^{(k)}$ corresponding to the class $c=0$ and to decrease the features $v_{ij,1}^{(k)}$ corresponding to the class $c=1$ for semantically similar pairs $(\mathbf{x}_{i},\mathbf{x}_{j})$. Moreover, the loss function has to increase the values of the augmented features $v_{ij,1}^{(k)}$ corresponding to the class $c=1$ and to decrease the features $v_{ij,0}^{(k)}$ corresponding to the class $c=0$ for dissimilar pairs $(\mathbf{x}_{i},\mathbf{x}_{j})$.
\subsection{Convex loss function}
Let us denote the set of vectors $\mathbf{w}$ as $\Delta$. In order to efficiently solve problem (\ref{SiamDF_50}), the objective $J_{q}(\mathbf{w})$ should be convex on the domain of $\mathbf{w}$. One of the ways for determining the loss function $l$ is to consider a distance $d(\mathbf{x}_{i},\mathbf{x}_{j})$ between two vectors $\mathbf{x}_{i}$ and $\mathbf{x}_{j}$ at the $q$-th level. However, we do not have separate vectors $\mathbf{x}_{i}$ and $\mathbf{x}_{j}$; we have one vector whose parts correspond to the vectors $\mathbf{x}_{i}$ and $\mathbf{x}_{j}$. Therefore, this is a distance between the elements of the concatenated vector $\left( \mathbf{x}_{i}^{(-1)},\mathbf{x}_{j}^{(-1)}\right) $ obtained at level $q-1$ and the augmented features $\mathbf{v}_{ij}^{(k)}$, $k=1,...,M$, of a special form.
Let us consider the expression for the above distance in detail. It consists of $M+1$ terms. The first term, denoted as $X_{ij}$, is the squared Euclidean distance between the two parts of the output vector obtained at the previous level:
\[
X_{ij}=\sum_{l=1}^{m}\left( x_{i,l}^{(-1)}-x_{j,l}^{(-1)}\right) ^{2}.
\]
Here $x_{i,l}$ is the $l$-th element of $\mathbf{x}_{i}$, and $m$ is the length of the input vector for the $q$-th level or the length of the output vector of level $q-1$. Let us now consider the elements $v_{ij,0}^{(k)}$ and $v_{ij,1}^{(k)}$. We have to make the separation between these elements as large as possible, taking into account $y_{ij}$. In particular, if $y_{ij}=0$, then we should decrease the difference $v_{ij,1}^{(k)}-v_{ij,0}^{(k)}$. If $y_{ij}=1$, then we should decrease the difference $v_{ij,0}^{(k)}-v_{ij,1}^{(k)}$. Let us introduce the variable $z_{ij}=-1$ if $y_{ij}=0$, and $z_{ij}=1$ if $y_{ij}=1$. Then the following expression characterizing the augmented features $v_{ij,0}^{(k)}$ and $v_{ij,1}^{(k)}$ can be written:
\[
\left[ \max\left( 0,z_{ij}\left( v_{ij,0}^{(k)}-v_{ij,1}^{(k)}\right) \right) \right] ^{2}.
\]
Substituting (\ref{SiamDF_41}) into the above expression, we get the next $M$ terms
\[
\left[ \max\left( 0,~\sum_{t=1}^{T_{k}}P_{ij}^{(t,k)}w^{(t,k)}\right) \right] ^{2},\ k=1,...,M,
\]
where
\[
P_{ij}^{(t,k)}=z_{ij}\left( p_{ij,0}^{(t,k)}-p_{ij,1}^{(t,k)}\right) .
\]
Finally, we can write
\begin{equation}
d\left( \mathbf{x}_{i},\mathbf{x}_{j}\right) =X_{ij}+\sum_{k=1}^{M}\left[ \max\left( 0,~\sum_{t=1}^{T_{k}}P_{ij}^{(t,k)}w^{(t,k)}\right) \right] ^{2}. \label{SiamDF_58}
\end{equation}
So, we have to minimize the hinge terms in $d\left( \mathbf{x}_{i},\mathbf{x}_{j}\right) $ with respect to $w^{(t,k)}$ under constraints (\ref{SiamDF_42}). Since $X_{ij}$ does not depend on $w^{(t,k)}$, we consider the following objective function:
\begin{equation}
J_{q}(\mathbf{w})=\sum_{i,j}\sum_{k=1}^{M}\left[ \max\left( 0,~\sum_{t=1}^{T_{k}}P_{ij}^{(t,k)}w^{(t,k)}\right) \right] ^{2}+\lambda\left\Vert \mathbf{w}\right\Vert ^{2}. \label{SiamDF_60}
\end{equation}
Each hinge term is convex in the weights $w^{(t,k)}\in[0,1]$. Then the objective function $J_{q}(\mathbf{w})$, as a sum of convex functions, is convex with respect to the weights as well.
\subsection{Quadratic optimization problem}
Let us consider problem (\ref{SiamDF_60}) under constraints (\ref{SiamDF_42}) in detail. Introduce a new variable $\xi_{ij}^{(k)}$ defined as
\[
\xi_{ij}^{(k)}=\max\left( 0,~\sum_{t=1}^{T_{k}}P_{ij}^{(t,k)}w^{(t,k)}\right) .
\]
Then problem (\ref{SiamDF_60}) can be rewritten as
\begin{equation}
J_{q}(\mathbf{w})=\min_{\xi_{ij}^{(k)},\mathbf{w}}\sum_{i,j}\sum_{k=1}^{M}\left( \xi_{ij}^{(k)}\right) ^{2}+\lambda\left\Vert \mathbf{w}\right\Vert ^{2}, \label{SiamDF_64}
\end{equation}
subject to (\ref{SiamDF_42}) and
\begin{equation}
\xi_{ij}^{(k)}\geq\sum_{t=1}^{T_{k}}P_{ij}^{(t,k)}w^{(t,k)},\ \ \xi_{ij}^{(k)}\geq0,\ \ (i,j)\in K,\ k=1,...,M. \label{SiamDF_66}
\end{equation}
We have obtained a standard quadratic optimization problem with linear constraints and variables $\xi_{ij}^{(k)}$ and $w^{(t,k)}$. It can be solved by standard methods.
It is interesting to note that the optimization problem (\ref{SiamDF_64})-(\ref{SiamDF_66}) can be decomposed into $M$ problems of the form
\begin{equation}
J_{q}(\mathbf{w}^{(k)})=\min_{\xi_{ij},\mathbf{w}^{(k)}}\sum_{i,j}\xi_{ij}^{2}+\lambda\left\Vert \mathbf{w}^{(k)}\right\Vert ^{2}, \label{SiamDF_70}
\end{equation}
subject to (\ref{SiamDF_42}) and
\begin{equation}
\xi_{ij}\geq\sum_{t=1}^{T_{k}}P_{ij}^{(t,k)}w^{(t,k)},\ \ \xi_{ij}\geq0,\ \ (i,j)\in K, \label{SiamDF_72}
\end{equation}
for every $k=1,...,M$. Indeed, returning to problem (\ref{SiamDF_64})-(\ref{SiamDF_66}), we can see that the subset of variables $\xi_{ij}^{(k)}$ and $w^{(t,k)}$ for a certain $k$ and the constraints on these variables do not overlap with the subset of similar variables for another $k$ and the corresponding constraints. This implies that (\ref{SiamDF_64}) can be rewritten as
\[
J_{q}(\mathbf{w})=\sum_{k=1}^{M}\min_{\xi_{ij},\mathbf{w}^{(k)}}\sum_{i,j}\left( \xi_{ij}^{(k)}\right) ^{2}+\lambda\left\Vert \mathbf{w}^{(k)}\right\Vert ^{2},
\]
and the problem can be decomposed. So, we solve problem (\ref{SiamDF_70})-(\ref{SiamDF_72}) for every $k=1,...,M$ and get $M$ vectors $\mathbf{w}^{(k)}$ which form the vector $\mathbf{w}$. The above means that the optimal weights are determined separately for individual forests.
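To make the per-forest problem concrete, the following sketch solves (\ref{SiamDF_70})-(\ref{SiamDF_72}) with the cvxpy modeling package; the choice of solver and all names are our assumptions, and any standard quadratic programming tool could be used instead.
\begin{verbatim}
import cvxpy as cp

def solve_forest_weights(P, lam=1.0):
    # P: array of shape (n_pairs, T_k); the row for a pair (i, j)
    #    holds P_ij^(t,k) = z_ij * (p_ij,0^(t,k) - p_ij,1^(t,k))
    n_pairs, T = P.shape
    w = cp.Variable(T, nonneg=True)         # weights w^(t,k) >= 0
    xi = cp.Variable(n_pairs, nonneg=True)  # slack variables xi_ij
    constraints = [cp.sum(w) == 1,          # simplex constraint (42)
                   xi >= P @ w]             # hinge constraint (72)
    objective = cp.Minimize(cp.sum_squares(xi) + lam * cp.sum_squares(w))
    cp.Problem(objective, constraints).solve()
    return w.value                          # optimal w^(k)
\end{verbatim}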
\subsection{A general algorithm for the SDF training and testing}
In sum, we can write a general algorithm for training the SDF (see Algorithm \ref{alg:SiamDF_4}). Its complexity mainly depends on the number of levels. Having the trained SDF with the computed weights $\mathbf{w}$ for every cascade level, we can make a decision about the semantic similarity of a new pair of examples $\mathbf{x}_{a}$ and $\mathbf{x}_{b}$. First, the two vectors are concatenated. By using the trained decision trees and the weights $\mathbf{w}$ for every level $q$, the pair is augmented at each level. Finally, we get
\[
\mathbf{x}_{ab}^{(Q)}=\left( \mathbf{x}_{a}^{(Q)},\mathbf{x}_{b}^{(Q)}\right) =\mathbf{v}_{ab}.
\]
Here $\mathbf{v}_{ab}$ is the augmented part of the vector $\mathbf{x}_{ab}^{(Q)}$ consisting of elements from the subvectors $\mathbf{v}_{0}$ and $\mathbf{v}_{1}$ corresponding to the class $c=0$ and to the class $c=1$, respectively. The original examples $\mathbf{x}_{a}$ and $\mathbf{x}_{b}$ are semantically similar if the sum of all elements from $\mathbf{v}_{0}$ is larger than the sum of elements from $\mathbf{v}_{1}$, i.e., $\mathbf{v}_{0}\cdot\mathbf{1}^{\mathrm{T}}>\mathbf{v}_{1}\cdot\mathbf{1}^{\mathrm{T}}$, where $\mathbf{1}$ is the vector of ones. In contrast, the condition $\mathbf{v}_{0}\cdot\mathbf{1}^{\mathrm{T}}<\mathbf{v}_{1}\cdot\mathbf{1}^{\mathrm{T}}$ means that $\mathbf{x}_{a}$ and $\mathbf{x}_{b}$ are semantically dissimilar and $y_{ab}=1$. We can introduce a threshold $\tau$ for a more robust decision making. The examples $\mathbf{x}_{a}$ and $\mathbf{x}_{b}$ are classified as semantically similar with $y_{ab}=0$ if $\mathbf{v}_{0}\cdot\mathbf{1}^{\mathrm{T}}-\mathbf{v}_{1}\cdot\mathbf{1}^{\mathrm{T}}\geq\tau$. The case $0\leq\mathbf{v}_{0}\cdot\mathbf{1}^{\mathrm{T}}-\mathbf{v}_{1}\cdot\mathbf{1}^{\mathrm{T}}\leq\tau$ can be viewed as undeterminable.
\begin{algorithm}
\caption{A general algorithm for training the SDF}
\label{alg:SiamDF_4}
\begin{algorithmic}[1]
\REQUIRE Training set $S=\{(\mathbf{x}_{i},\mathbf{x}_{j},y_{ij}),\ (i,j)\in K\}$ consisting of $N$ pairs; number of cascade levels $Q$
\ENSURE $\mathbf{w}^{(q)}$, $q=1,...,Q$
\STATE Concatenate $\mathbf{x}_{i}$ and $\mathbf{x}_{j}$ for all pairs of indices $(i,j)\in K$
\STATE Form the training set $R=\{((\mathbf{x}_{i},\mathbf{x}_{j}),y_{ij}),\ (i,j)\in K\}$ consisting of concatenated pairs
\FOR{$q=1$, $q\leq Q$ }
\STATE Train all trees at the $q$-th level
\FOR{$k=1$, $k\leq M_q$ }
\STATE Compute the weights $\mathbf{w}^{(k)}$ at the $q$-th level from the $k$-th quadratic optimization problem with the objective function (\ref{SiamDF_70}) and constraints (\ref{SiamDF_42}) and (\ref{SiamDF_72})
\ENDFOR
\STATE Concatenate $\mathbf{w}^{(k)}$, $k=1,...,M$, to get $\mathbf{w}$ at the $q$-th level
\STATE For every $\mathbf{x}_{ij}$, compute $\mathbf{v}_{ij}^{(k)}$ at the $q$-th level by using (\ref{SiamDF_41}), $k=1,...,M$
\STATE For every $\mathbf{x}_{ij}$, form the concatenated vector $\mathbf{x}_{ij}$ for the next level by using (\ref{SiamDF_40})
\ENDFOR
\end{algorithmic}
\end{algorithm}
It is important to note that with identical weights the SDF coincides with the gcForest, i.e., the gcForest can be regarded as a special case of the SDF.
\section{Numerical experiments}
We compare the SDF with the gcForest whose inputs are concatenated examples from several data sets. In other words, we compare the SDF having computed (trained) weights with the SDF having identical weights. The SDF has the same cascade structure as the standard gcForest described in \cite{Zhou-Feng-2017}. Each level (layer) of the cascade structure consists of 2 complete-random tree forests and 2 random forests. Three-fold cross-validation is used for the class vector generation. The number of cascade levels is automatically determined. A software in Python implementing the gcForest is available at https://github.com/leopiney/deep-forest. We modify this software in order to implement the procedure for computing the optimal weights and the weighted averages $v_{ij,c}^{(k)}$. Moreover, we use pairs of concatenated examples composed of individual examples as training and testing data. Every accuracy measure $A$ used in the numerical experiments is the proportion of correctly classified cases on a sample of data. To evaluate the average accuracy, we perform a cross-validation with $100$ repetitions, where in each run we randomly select $N$ training data and $N_{\text{test}}=2N/3$ test data. First, we compare the SDF with the gcForest by using some public data sets from the UCI Machine Learning Repository \cite{Lichman:2013}: the Yeast data set (1484 instances, 8 features, 10 classes), the Ecoli data set (336 instances, 8 features, 8 classes), the Parkinsons data set (197 instances, 23 features, 2 classes), and the Ionosphere data set (351 instances, 34 features, 2 classes). More detailed information about these data sets can be found at the corresponding data resources. Different values of the regularization hyper-parameter $\lambda$ have been tested, choosing those leading to the best results. In order to investigate how the number of decision trees impacts the classification accuracy, we study the SDF with different numbers of trees, namely $T_{k}=T=100$, $400$, $700$, $1000$. It should be noted that Zhou and Feng \cite{Zhou-Feng-2017} used $1000$ trees in every forest.
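For completeness, the following sketch shows one way (our assumption; the paper does not prescribe a particular sampling scheme) to build the pair set from a labeled data set, where a pair receives the label $y_{ij}=0$ when both examples share a class label and $y_{ij}=1$ otherwise.
\begin{verbatim}
import numpy as np

def make_pairs(X, y, n_pairs, seed=0):
    # X: array (n, m) of examples; y: array (n,) of class labels
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(X), size=(n_pairs, 2))
    pairs = np.hstack([X[idx[:, 0]], X[idx[:, 1]]])    # concatenation
    y_ij = (y[idx[:, 0]] != y[idx[:, 1]]).astype(int)  # 0: similar,
    return pairs, y_ij                                 # 1: dissimilar
\end{verbatim}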
Results of numerical experiments for the Parkinsons data set are shown in Table \ref{t:SiamDF_4}. It contains the accuracy measures obtained for the gcForest (denoted as gcF) and the SDF as functions of the number of trees $T$ in every forest and the number $N=100,500,1000,2000$ of pairs in the training set. It can be seen from Table \ref{t:SiamDF_4} that the accuracy of the SDF exceeds that of the gcForest in most cases. The difference is rather large for small amounts of training data. In particular, the largest differences between the accuracy measures of the SDF and the gcForest are observed for $T=400$ and $T=1000$ with $N=100$. Similar results of numerical experiments for the Ecoli data set are given in Table \ref{t:SiamDF_5}. It is interesting to point out that the number of trees in every forest significantly impacts the difference between the accuracy measures of the SDF and the gcForest. It follows from Table \ref{t:SiamDF_5} that this difference is smallest for a large number of trees and a large amount of training data. If we look at the last row of Table \ref{t:SiamDF_5}, we see that the accuracy of $0.915$ obtained for the SDF with $T=100$ is reached by the gcForest only with $T=1000$. The largest difference between the accuracy measures of the SDF and the gcForest is observed for $T=100$ and $N=100$. The same can be seen from Table \ref{t:SiamDF_4}. This implies that the proposed modification of the gcForest allows us to reduce the training time. Table \ref{t:SiamDF_6} provides the accuracy measures for the Yeast data set. We can again see that the proposed SDF outperforms the gcForest in most cases. It is interesting to note from Table \ref{t:SiamDF_6} that increasing the number of trees in every forest may lead to reduced accuracy measures. If we look at the row of Table \ref{t:SiamDF_6} corresponding to $N=500$ pairs in the training set, we can see that the accuracy measures with $100$ trees exceed those with larger numbers of trees. Moreover, the largest difference between the accuracy measures of the SDF and the gcForest is observed for $T=1000$ and $N=100$. Numerical results for the Ionosphere data set are presented in Table \ref{t:SiamDF_7}. It follows from Table \ref{t:SiamDF_7} that the largest difference between the accuracy measures of the SDF and the gcForest is observed for $T=1000$ and $N=500$. The numerical results for all analyzed data sets show that the SDF significantly outperforms the gcForest for small numbers of training data ($N=100$ or $500$).
This is an important property of the SDF, which is especially efficient when the amount of training data is rather small.
\begin{table}[tbp]
\centering
\caption{Dependence of the accuracy measures on the number of pairs $N$ and on the number of trees $T$ in every forest for the Parkinsons data set}
\begin{tabular}[c]{ccccccccc}\hline
$T$ & \multicolumn{2}{c}{$100$} & \multicolumn{2}{c}{$400$} & \multicolumn{2}{c}{$700$} & \multicolumn{2}{c}{$1000$}\\\hline
$N$ & gcF & SDF & gcF & SDF & gcF & SDF & gcF & SDF\\\hline
$100$ & $0.530$ & $0.545$ & $0.440$ & $0.610$ & $0.552$ & $0.575$ & $0.440$ & $0.550$\\\hline
$500$ & $0.715$ & $0.733$ & $0.651$ & $0.673$ & $0.685$ & $0.700$ & $0.700$ & $0.730$\\\hline
$1000$ & $0.761$ & $0.763$ & $0.778$ & $0.786$ & $0.803$ & $0.804$ & $0.773$ & $0.790$\\\hline
$2000$ & $0.880$ & $0.881$ & $0.884$ & $0.895$ & $0.875$ & $0.891$ & $0.887$ & $0.893$\\\hline
\end{tabular}
\label{t:SiamDF_4}
\end{table}
\begin{table}[tbp]
\centering
\caption{Dependence of the accuracy measures on the number of pairs $N$ and on the number of trees $T$ in every forest for the Ecoli data set}
\begin{tabular}[c]{ccccccccc}\hline
$T$ & \multicolumn{2}{c}{$100$} & \multicolumn{2}{c}{$400$} & \multicolumn{2}{c}{$700$} & \multicolumn{2}{c}{$1000$}\\\hline
$N$ & gcF & SDF & gcF & SDF & gcF & SDF & gcF & SDF\\\hline
$100$ & $0.439$ & $0.530$ & $0.515$ & $0.545$ & $0.590$ & $0.651$ & $0.621$ & $0.696$\\\hline
$500$ & $0.838$ & $0.847$ & $0.814$ & $0.823$ & $0.836$ & $0.845$ & $0.821$ & $0.837$\\\hline
$1000$ & $0.844$ & $0.853$ & $0.890$ & $0.917$ & $0.888$ & $0.891$ & $0.863$ & $0.865$\\\hline
$2000$ & $0.908$ & $0.915$ & $0.895$ & $0.921$ & $0.913$ & $0.915$ & $0.915$ & $0.915$\\\hline
\end{tabular}
\label{t:SiamDF_5}
\end{table}
\begin{table}[tbp]
\centering
\caption{Dependence of the accuracy measures on the number of pairs $N$ and on the number of trees $T$ in every forest for the Yeast data set}
\begin{tabular}[c]{ccccccccc}\hline
$T$ & \multicolumn{2}{c}{$100$} & \multicolumn{2}{c}{$400$} & \multicolumn{2}{c}{$700$} & \multicolumn{2}{c}{$1000$}\\\hline
$N$ & gcF & SDF & gcF & SDF & gcF & SDF & gcF & SDF\\\hline
$100$ & $0.454$ & $0.500$ & $0.484$ & $0.511$ & $0.469$ & $0.515$ & $0.469$ & $0.575$\\\hline
$500$ & $0.682$ & $0.694$ & $0.661$ & $0.673$ & $0.640$ & $0.658$ & $0.622$ & $0.628$\\\hline
$1000$ & $0.708$ & $0.711$ & $0.713$ & $0.723$ & $0.684$ & $0.710$ & $0.735$ & $0.737$\\\hline
$2000$ & $0.727$ & $0.734$ & $0.714$ & $0.716$ & $0.727$ & $0.739$ & $0.713$ & $0.722$\\\hline
\end{tabular}
\label{t:SiamDF_6}
\end{table}
\begin{table}[tbp]
\centering
\caption{Dependence of the accuracy measures on the number of pairs $N$ and on the number of trees $T$ in every forest for the Ionosphere data set}
\begin{tabular}[c]{ccccccccc}\hline
$T$ & \multicolumn{2}{c}{$100$} & \multicolumn{2}{c}{$400$} & \multicolumn{2}{c}{$700$} & \multicolumn{2}{c}{$1000$}\\\hline
$N$ & gcF & SDF & gcF & SDF & gcF & SDF & gcF & SDF\\\hline
$100$ & $0.515$ & $0.515$ & $0.535$ & $0.555$ & $0.530$ & $0.555$ & $0.535$ & $0.540$\\\hline
$500$ & $0.718$ & $0.720$ & $0.723$ & $0.758$ & $0.713$ & $0.740$ & $0.715$ & $0.760$\\\hline
$1000$ & $0.820$ & $0.830$ & $0.837$ & $0.840$ & $0.840$ & $0.860$ & $0.885$ & $0.895$\\\hline
$2000$ & $0.915$ & $0.920$ & $0.905$ & $0.905$ & $0.895$ & $0.895$ & $0.910$ & $0.910$\\\hline
\end{tabular}
\label{t:SiamDF_7}
\end{table}
It should be noted that the multi-grained scanning proposed in \cite{Zhou-Feng-2017} was not applied to the above data sets
having relatively small numbers of features. The above numerical results have been obtained by using only the forest cascade structure. When we deal with large-scale data, the multi-grained scanning scheme should be used. In particular, for analyzing the well-known MNIST data set, we used the same scheme for window sizes as proposed in \cite{Zhou-Feng-2017}, where feature windows with sizes $\left\lfloor d/16\right\rfloor $, $\left\lfloor d/9\right\rfloor $, $\left\lfloor d/4\right\rfloor $ are chosen for $d$ raw features. We study the SDF by applying the MNIST database, which is a commonly used large database of $28\times28$ pixel handwritten digit images \cite{LeCun-etal-1998}. It has a training set of 60,000 examples and a test set of 10,000 examples. The digits are size-normalized and centered in a fixed-size image. The data set is available at http://yann.lecun.com/exdb/mnist/. The main problem in using the multi-grained scanning scheme is that pairs of the original examples are concatenated. As a result, direct scanning leads to scanning windows covering parts of both examples of a concatenated pair, which do not correspond to the images themselves. Therefore, we apply the following modification of the multi-grained scanning scheme. Two identical windows simultaneously scan the two concatenated images, so that pairs of feature windows are produced, which are then concatenated for processing by means of the forest cascade. Fig. \ref{fig:scan_win} illustrates the used procedure.
\begin{figure}[ptb]
\begin{center}
\includegraphics[height=2.1008in, width=1.8893in]{weighted_class_vector_gen_4.png}
\caption{The multi-grained scanning scheme for concatenated examples}
\label{fig:scan_win}
\end{center}
\end{figure}
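A minimal sketch of the paired scanning described above is given below; the window size, the stride and the function name are assumptions chosen for illustration.
\begin{verbatim}
import numpy as np

def paired_windows(img_a, img_b, win=14, stride=7):
    # img_a, img_b: the two images of a pair, e.g. 28x28 MNIST digits;
    # two identical windows scan the same positions in both images
    out = []
    h, w = img_a.shape
    for r in range(0, h - win + 1, stride):
        for c in range(0, w - win + 1, stride):
            wa = img_a[r:r + win, c:c + win].ravel()
            wb = img_b[r:r + win, c:c + win].ravel()
            out.append(np.concatenate([wa, wb]))  # concatenated pair
    return np.array(out)  # one instance per window position
\end{verbatim}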
Results of numerical experiments for the MNIST data set are shown in Table \ref{t:SiamDF_8}. It can be seen from Table \ref{t:SiamDF_8} that the largest difference between the accuracy measures of the SDF and the gcForest is observed for $T=1000$ and $N=100$. It is interesting to note that both the SDF and the gcForest provide good results even with a small amount of training data. Moreover, the SDF outperforms the gcForest in most cases.
\begin{table}[tbp]
\centering
\caption{Dependence of the accuracy measures on the number of pairs $N$ and on the number of trees $T$ in every forest for the MNIST data set}
\begin{tabular}[c]{ccccccccc}\hline
$T$ & \multicolumn{2}{c}{$100$} & \multicolumn{2}{c}{$400$} & \multicolumn{2}{c}{$700$} & \multicolumn{2}{c}{$1000$}\\\hline
$N$ & gcF & SDF & gcF & SDF & gcF & SDF & gcF & SDF\\\hline
$100$ & $0.470$ & $0.490$ & $0.520$ & $0.520$ & $0.570$ & $0.585$ & $0.530$ & $0.570$\\\hline
$500$ & $0.725$ & $0.735$ & $0.715$ & $0.715$ & $0.695$ & $0.700$ & $0.670$ & $0.670$\\\hline
$1000$ & $0.757$ & $0.770$ & $0.755$ & $0.760$ & $0.775$ & $0.780$ & $0.830$ & $0.840$\\\hline
\end{tabular}
\label{t:SiamDF_8}
\end{table}
An interesting observation has been made during the numerical experiments. We have discovered that the variable $z_{ij}$, initially taking the values $-1$ for $y_{ij}=0$ and $1$ for $y_{ij}=1$, can be viewed as a tuning parameter that controls the number of cascade levels used in the training process and improves the classification performance of the SDF. One of the great advantages of the gcForest is its automatic determination of the number of cascade levels. As shown by Zhou and Feng \cite{Zhou-Feng-2017}, the performance of the whole cascade is estimated on a validation set after training a current level; the training procedure in the gcForest terminates if there is no significant performance gain. It turns out that the value of $z_{ij}$ significantly impacts the number of cascade levels when the termination procedure implemented in the gcForest is applied. Moreover, we can adaptively change the values of $z_{ij}$ at every level. One of the best updates of $z_{ij}$ turned out to be $z_{ij}^{(q)}=2z_{ij}^{(q-1)}$, where $z_{ij}^{(1)}=-1$ for $y_{ij}=0$ and $1$ for $y_{ij}=1$. Of course, this is an empirical observation. However, it can be taken as a direction for further improving the SDF.
\section{Conclusion}
One of the implementations of the SDF has been presented in this paper. It should be noted that other modifications of the SDF can be obtained. First of all, we can improve the optimization algorithm by applying a more complex loss function and computing the optimal weights, for example, by means of the Frank-Wolfe algorithm \cite{Frank-Wolfe-1956}. We can use a more powerful optimization algorithm, for example, the algorithm proposed by Hazan and Luo \cite{Hazan-Luo-2016}. Moreover, we do not need to search for a convex loss function because there are efficient optimization algorithms, for example, a non-convex modification of the Frank-Wolfe algorithm proposed by Reddi et al. \cite{Reddi-etal-2016}, which allow us to solve the considered optimization problems. The trees and forests can also be replaced with other classification approaches, for example, with SVMs and boosting algorithms. However, the above modifications can be viewed as directions for further research. Linear combinations of weights for every forest have been used in the SDF. However, this class of combinations can be extended by considering non-linear functions of the weights. Moreover, it turns out that the weights of trees can model various machine learning peculiarities and allow us to solve many machine learning tasks by means of the gcForest. This is also a direction for further research. It should be noted that the weights have been restricted by constraints of the form (\ref{SiamDF_42}), i.e., the weights of every forest belong to the unit simplex whose dimensionality is defined by the number of trees in the forest. However, the numerical experiments have illustrated that it is useful to reduce the set of weights in some cases. Moreover, this reduction can be carried out adaptively by taking into account the classification error at every level. One of the ways for adaptive reduction of the unit simplex is to apply imprecise statistical models, for example, the linear-vacuous mixture or imprecise $\varepsilon$-contaminated models proposed by Walley \cite{Walley91}. This study is also a direction for further research. We have considered a weakly supervised learning algorithm for which there is no information about the class labels of individual training examples, and only the semantic similarity of pairs of training data is known. It is also interesting to extend the proposed ideas to the case of fully supervised algorithms when the class labels of individual training examples are known. The main goal of fully supervised distance metric learning is to use discriminative information in distance metric learning to keep all the data samples in the same class close and those from different classes separated \cite{Mu-Ding-2013}.
Therefore, another direction for further research is to adapt the proposed algorithm for the case of available class labels. \section*{Acknowledgement} The reported study was partially supported by RFBR, research project No. 17-01-00118.
\section{Introduction}
Intelligent Transportation Systems (ITSs) are envisioned to ameliorate traffic congestion and improve road safety and the traffic experience. ITSs have drawn the attention of a large number of stakeholders due to their direct effect on the manufacturing of sensor- and wireless-equipped vehicles known as connected and autonomous cars. In this regard, Vehicle-to-everything (V2X) applications are considered a key enabler for the shift to ITSs in terms of traffic management. These applications allow the vehicles to communicate and exchange information with their surrounding environment, which includes other vehicles, pedestrians and supporting road side units (RSUs). To ensure road safety, these applications operate with stringent end-to-end (E2E) latency/delay requirements. There are different paradigms that can determine the placement of V2X applications to address the E2E latency requirements. The placement of these services is disruptive to the customary cloud-based infrastructure. The projected increase in the number of connected and autonomous vehicles will result in data explosion. The data will be routed to a single centralized server, creating severe network traffic congestion \cite{b1}. Additionally, centralized servers are usually located far from the vehicles generating data, thus incurring a huge E2E delay. Furthermore, this architecture exposes a single point of failure, which is a huge risk to take for time- and mission-critical V2X applications. Given these circumstances, distributing the cloud computing technology in proximity to users is proposed as a viable solution to deal with the shortcomings of the centralized paradigm \cite{b2}. This computing architecture is referred to as Edge Computing. Edge Computing can support the latency requirements of the V2X applications, which are critical for their performance \cite{b3}. In addition, the edge servers collect data from close local nodes, which allows for a more individualized experience for V2X application users. While the Edge Computing paradigm can ensure some V2X system-level performance requirements, this comes at the expense of limited computational power at the edges, which hinders the processing of large amounts of data. The microservices architecture, which decomposes a single application into decoupled modules, combined with virtualization techniques that fully utilize resources at the edges, can be used to address this issue. Hawilo \textit{et al.} \cite{key-3} investigated the applicability of this paradigm for Virtual Network Functions, which display similar characteristics to V2X applications, making it a viable option for their placement. In the domain of V2X applications, the 3rd Generation Partnership Project (3GPP) \cite{b4} envisions complex V2X applications that combine vehicle status analysis, imminent traffic event generation, and raw sensor data exchange, which define the function of autonomous and connected vehicles. Each of these applications relies on the data processing and analysis of miniscule V2X basic services. Mobile edge clouds (MECs), edge clouds and roadside clouds have been proposed in several previous works in the context of vehicular applications. In \cite{b5}, Emara \textit{et al.} employ an MEC-assisted architecture to evaluate the end-to-end latency of detecting vulnerable road users. Moubayed
\textit{et al.} \cite{b6} formulated an integer linear programming problem for the efficient placement of V2X basic services, taking into consideration the V2X basic services' delay and computational requirements in a hybrid environment that includes edge and core nodes. Supporting V2X applications while considering the vehicle\textquoteright s mobility aspects has been extensively addressed in the literature. To support V2X applications, \cite{b7,b8,b9} consider migrating the services according to the vehicle\textquoteright s mobility. In \cite{b7}, the authors customize a three-layered architecture that consists of a vehicular cloud, a roadside cloud and a central cloud to support vehicular applications. Their approach focuses on the dynamic allocation of resources, driven by the vehicle\textquoteright s mobility, in vehicular and roadside clouds. In \cite{b8}, Yu \textit{et al.} consider the migration of V2X applications placed on edge servers according to predicted vehicle mobility, combined with setting a priority schema for V2X applications. The approach considers the latency and resource requirements of each of the applications. In the same context, Yao \textit{et al.} \cite{b9} investigate Virtual Machine (VM) placement and migration in a roadside cloud that is part of the vehicular cloud computing architecture. The approach targets minimizing the overall network cost given the available resources at the edge. Each of these previous works has its own shortcomings. One common aspect is considering either latency or resource limitations, but not both, for the placement of the services at the edge. Another shortcoming is the disregard of the nature of V2X applications, which may be composed of a single module or many modules. Finally, different traffic conditions were not considered when modeling solutions for vehicular application placement. To address these shortcomings, this work focuses on a V2X application placement that minimizes the end-to-end delay while taking into consideration the computational requirements of the V2X services forming each application. This work\textquoteright s main contributions are as follows:
\begin{itemize}
\item Decompose V2X applications into multiple V2X basic services.
\item Formulate the optimal V2X application placement by considering the applications' delay requirements and the resource requirements of their constituent components.
\item Evaluate the performance of the optimal placement in terms of the average delay and density distribution of each V2X application under different traffic conditions.
\end{itemize}
The remainder of this paper is organized as follows: Section II describes the system model and presents the problem formulation, Section III provides the simulation procedure and discusses the results, and Section IV concludes the paper and suggests future work.
\section{System Model}
In the reference model, a highway scenario is considered. Each of the vehicles moving on the highway runs a set of V2X applications that collect data from nearby roadside units (RSUs) to function autonomously. The RSUs and the vehicles communicate directly using Dedicated Short-Range Communication (DSRC) \cite{b10}, and no communication takes place between the vehicles. Each RSU is equipped with a server, and together they are considered an edge computing node. Vehicles receive data from V2X basic services placed on each RSU. The European Telecommunications Standards Institute (ETSI) defines three V2X basic services that are the foundation of any envisioned V2X application.
The V2X basic services are as follows: the Cooperative Awareness (CA) basic service \cite{b11} is responsible for creating, analyzing and sending Cooperative Awareness Messages (CAMs), which include information about the vehicle\textquoteright s status and attributes; the Decentralized Environmental Notification (DEN) basic service \cite{b12} broadcasts Decentralized Environmental Notification Messages (DENMs) whenever a road hazard or abnormal traffic condition takes place; and the Media Downloading service \cite{b13} is requested on demand by the passengers of the vehicle. Additionally, ETSI defines Local Dynamic Maps (LDMs) \cite{b14}, which are responsible for storing the sent CAMs and DENMs. Because LDMs store spatially relevant information, an LDM is deployed on each edge server. LDMs are queried by V2X basic services in order to retrieve information. Finally, in addition to the basic vehicular services related to road safety, there is a variety of innovative applications referred to as value-added services, which are of lower priority \cite{b15}. These services include augmented reality, parking location and others that are part of the infotainment services provided by vehicular applications. Compared to road safety applications, these services display high levels of diversity and individuation. Therefore, they need to be migrated when the vehicle moves from one edge server coverage zone to another. For this purpose, each edge server reserves part of its resources to accommodate these migrating services. In this section, the system design and the optimization technique for the optimal placement of V2X basic services are presented.
\subsection{System Design}
In the reference model used for the placement of V2X basic services, HWY 416 IC-712A, which passes through the city of Ottawa, is considered. The edge computing servers are deployed uniformly along the highway, as the deployment of RSUs is outside the scope of this paper. No communication interference zone exists between any two successive RSUs, which avoids the possibility of encountering ping-pong handover cases that would be difficult to handle in an optimization model. Additionally, the vehicles are assumed to be always connected to RSUs throughout their journey. The end-to-end latency of a service is the sum of the communication, processing, transmission and propagation delays. The propagation delay depends on the medium of communication, which is outside the scope of this paper, and is therefore considered negligible. In this model, DSRC, the communication technology between the moving vehicles and the RSUs, affects the communication delay. In the proposed model, the processing and transmission delay between the communicating edge servers is considered. Each edge computing server has the same computing and processing power, expressed by the number of cores and the RAM available. Finally, the vehicle density is considered in order to model realistic scenarios.
\subsection{Optimization Problem}
In the optimization function, a set of edge servers and a set of V2X services are considered. Let $N$ denote the set of edge servers, where $n\in N$. Let $U$ denote the set of unique V2X basic services, where $u\in U$. The availability of computational resources at the edge is denoted by the matrix $Cap$, where $Cap_{kn}$ denotes the $k$th computational resource available on edge server $n$.
The matrix $R$ represents the resources required by the V2X basic services, where $R_{ku}$ represents the $k$th computational resource required by V2X basic service $u$. A binary row vector $\overrightarrow{q}$ denotes the edge servers a vehicle can communicate with. Let $C$ be the matrix that represents the processing and transmission latency between edge servers, where $C_{ij}$ represents the latency between edge server $i$ and edge server $j$. The matrix $M$ represents the V2X services needed by the V2X applications, where $M_{au}=1$ denotes that application $a$ needs V2X basic service $u$. Let $X$ denote the placement matrix, where $X_{un}=1$ means that V2X basic service $u$ is placed on edge server $n$; the column $X_{n}$ denotes the placement of the V2X services on edge server $n$. $D_{a}^{v}$ and $D_{a}^{th}$ denote, respectively, the delay experienced by a moving vehicle $v$ served by application $a$ and the maximum tolerable threshold of this delay. To represent the vehicles' density, $\gamma$ is used. $d_{com}^{v}$ and $d_{DL}^{v}$ denote, respectively, the communication and download latency between a vehicle $v$ and a serving edge server. The optimization function used to minimize the delay of the V2X applications is as follows:
\begin{equation}
\min\sum_{a\in A}D_{a}^{v}
\end{equation}
where:
\begin{equation}
D_{a}^{v}=d_{com}^{v}+\max(M_{a}\odot\min(X\odot(\gamma C\times\overrightarrow{q})))+d_{DL}^{v}
\end{equation}
subject to:
\begin{equation}
D_{a}^{v}\leq D_{a}^{th},\forall a\in A
\end{equation}
\begin{equation}
RX\leq Cap
\end{equation}
\begin{equation}
\sum X_{n}=1,\forall n\in N
\end{equation}
In what follows, equations (1)-(5) are explained. Equation (1) describes the overall objective, which is minimizing the sum of the delays of all V2X applications experienced by a vehicle requesting their services. Equation (2) presents the components contributing to the delay of a V2X application. The delay of a V2X application is determined by the delays of the V2X services it relies on, which depend on the edge servers a vehicle can communicate with. Because the functioning of a V2X basic service is independent of other V2X basic services, the delay of a V2X application is defined as the maximum of the delays of its constituent V2X basic services. This value is added to the communication and download link delays. Equations (3)--(5) describe the constraints. Equation (3) states that the delay of an application should not exceed its maximum tolerable delay. Equation (4) ensures that the resources allocated to V2X services do not exceed the available resources on the hosting edge servers. Equation (5) ensures that exactly one V2X service is placed on each edge server. The following diagram illustrates an example of the communication and processing that takes place for a V2X application that requires the CA and DEN services, given that each server has resources reserved for migrating applications, denoted by VM 3.
\begin{figure}
\centering
\centerline{\includegraphics[scale=0.4]{6C__Users_ishaer_Desktop_backup_UWO_Research_Do___nference-LaTeX-template_7-9-18_System_Model}}
\caption{System Model}
\end{figure}
The logic governing the realization of the application is as follows: (1) The vehicle requests the services of an application. This step incurs a communication delay denoted by $d_{com}^{v}$. (2) The CA service found on Edge Server 1 requests the necessary information from the LDM.
The processing of the request on the LDM is denoted by $C_{11}$. (3) Edge server 1 communicates with the closest server that includes the DEN basic service. No delay is considered in this phase. (4) DEN queries and receives information from the LDM that is closest to the requesting vehicle, since the LDM on edge server 1 has accurate information about the requesting vehicle's surrounding environment. This delay is the sum of the processing delay of the LDM on edge server 1 and the transmission delay between edge servers 1 and 2, and it is denoted by $C_{12}$. (5, 6) These steps represent the CA and DEN responses to the requesting vehicle. This delay is denoted by $d_{DL}^{v}$. For the basic service CA, the delay is as follows:
\[
d_{CA}^{v}=C_{11}+d_{DL}^{v}
\]
Similarly, the delay for DEN is:
\[
d_{DEN}^{v}=C_{12}+d_{DL}^{v}
\]
Given that the requests for each basic service are executed in parallel and that these services are independent in their execution, the delay experienced by a vehicle $v$ requesting the services of application $a$ is:
\[
D_{a}^{v}=d_{com}^{v}+\max\{d_{CA}^{v},d_{DEN}^{v}\}
\]
\section{Experimental Setup and Results}
\subsection{Simulation Setup}
In order to evaluate the placement of V2X basic services, a realistic simulation environment must be created. To this end, Simulation of Urban Mobility (SUMO) \cite{b16} was used to extract the movement of vehicles along a highway. A 4 km highway that resembles HWY 416 IC-712A was considered as a reference highway. The Ontario traffic volume report for provincial highways \cite{b17} provided the average daily traffic and the accident rates during summer, winter, weekdays and weekends. In the simulation setup, the statistics offered by this report were used to emulate the moderate and heavy traffic experienced on the HWY 416 IC-712A highway, expressed through the vehicles-per-hour parameter in SUMO. Regarding the movement of the vehicles, Table I summarizes the key parameters used in the simulation.
\begin{table}
\caption{SUMO vehicle movement parameters}
\centering{}%
\begin{tabular}{|c|c|}
\hline
\textbf{\scriptsize{}Parameter} & \textbf{\scriptsize{}Value}\tabularnewline
\hline
\hline
{\scriptsize{}Maximum Speed} & {\scriptsize{}$27.7m/s$}\tabularnewline
\hline
{\scriptsize{}Maximum Acceleration} & {\scriptsize{}$2.6m/s^{2}$}\tabularnewline
\hline
{\scriptsize{}Maximum Deceleration} & {\scriptsize{}$4.5m/s^{2}$}\tabularnewline
\hline
\end{tabular}
\end{table}
The V2X applications considered are Platooning (PL), Sensor and Sensor State Mapping (SSM), Emergency Stop (ES), Pre-crash Sensing Warning (PSW) and Forward Collision Warning (FCW). Their corresponding performance requirements and service components are presented in Table II \cite{b18,b19}. The choice of these V2X applications stems from their importance and stringent performance requirements in the realm of autonomous cars. In addition, in the context of the defined problem, each of the chosen V2X applications offers a unique combination of V2X services. In the simulation procedure, the communication delay between a vehicle and an RSU is 1 ms \cite{b20}. In this model, the processing delay is the amount of time required by a Local Dynamic Map to process the data requested by other V2X services placed either on the same or on a different edge server. In \cite{b21}, the authors devise an LDM according to the specifications defined by ETSI.
The application defines two Application Programming Interfaces (APIs) that retrieve the IDs of the vehicles driving on the same road and of the vehicle driving immediately ahead of the requesting vehicle. For numbers of queried vehicles ranging from 5 to 20, the response time was between 3 and 5 ms, with no clear correlation between the size of the data and the response time. Consequently, in the simulation setup, the processing delay is generated uniformly between 3 and 5 ms. In the same context, the authors in \cite{b22} assumed the transmission latency between two edge servers to be between 1 and 5 ms. Because the simulation procedure takes place under several vehicle densities, an increase in the data processing and transmission overhead with the number of vehicles is inevitable. In this regard, the execution cost increases with the number of vehicles in proximity to the vehicle requesting the V2X application services. As the implementation of the LDM did not consider cases beyond 20 vehicles, the added delay for these cases takes the form $\log(NC/20)$, where $NC$ represents the number of cars; the expression is derived from the increase of the processing delay with the size of the queried data in SQL \cite{b23}. In terms of edge servers, 10 edge servers are deployed every 400 m along the highway. Each of the RSUs hosts an LDM, a V2X service and an optional migrating service. The computational requirements of the CA, DEN and Media services are those of small, medium and large VMs, respectively. Table III summarizes the edge server capabilities and the computational requirements of the CA, DEN and Media services. In the experimental procedure, the placement of the V2X basic services is carried out using the defined optimization function. Next, the traffic simulation is executed for defined densities that reflect moderate and heavy traffic. The traffic traces were generated for 1500 seconds. Every 10 seconds, a snapshot of the road condition is taken and the delay of each V2X application for each vehicle is calculated. Finally, at the end of the simulation, the average delay of each V2X application is obtained.
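To clarify how a single delay sample is computed in each snapshot, the following sketch (the variable names and the loop-based formulation are our assumptions) evaluates equation (2) for one vehicle, given a placement matrix $X$.
\begin{verbatim}
import numpy as np

def app_delay(M_a, X, C, q, gamma, d_com=1.0, d_dl=1.0):
    # M_a: (U,) binary mask of basic services needed by application a
    # X:   (U, N) binary placement of services on edge servers
    # C:   (N, N) processing/transmission latency between servers
    # q:   (N,) binary vector of servers the vehicle can reach
    # gamma: density scaling factor of the current traffic snapshot
    U, N = X.shape
    L = gamma * C                      # density-scaled latencies
    d_service = np.full(U, np.inf)
    for u in range(U):
        for n in range(N):
            if X[u, n]:                # service u hosted on server n
                best = min(L[i, n] for i in range(N) if q[i])
                d_service[u] = min(d_service[u], best)
    # the application is as slow as its slowest required service
    return d_com + max(d_service[u] for u in range(U) if M_a[u]) + d_dl
\end{verbatim}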
\begin{table}
\caption{Breakdown of V2X applications' V2X basic services and performance metrics}
\centering{}{\scriptsize{}%
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{\scriptsize{}Application} & \textbf{\scriptsize{}Service(s)} & \textbf{\scriptsize{}Latency(ms)} & \textbf{\scriptsize{}Reliability(\%)}\tabularnewline
\hline
\hline
{\scriptsize{}PL} & {\scriptsize{}CA} & {\scriptsize{}50} & {\scriptsize{}90}\tabularnewline
\hline
{\scriptsize{}SSM} & {\scriptsize{}CA, DEN, Media} & {\scriptsize{}20} & {\scriptsize{}90}\tabularnewline
\hline
{\scriptsize{}ES} & {\scriptsize{}DEN} & {\scriptsize{}10} & {\scriptsize{}95}\tabularnewline
\hline
{\scriptsize{}PSW} & {\scriptsize{}CA, DEN} & {\scriptsize{}20} & {\scriptsize{}95}\tabularnewline
\hline
{\scriptsize{}FCW} & {\scriptsize{}CA, DEN} & {\scriptsize{}10} & {\scriptsize{}95}\tabularnewline
\hline
\end{tabular}{\scriptsize\par}
\end{table}
\begin{table}
\begin{centering}
\caption{Computational Requirements}
\par\end{centering}
\begin{centering}
\begin{tabular}{|c|c|c|}
\hline
\textbf{\scriptsize{}Entity} & \textbf{\scriptsize{}Number of Cores} & \textbf{\scriptsize{}RAM (GB)}\tabularnewline
\hline
\hline
{\scriptsize{}Edge Server} & {\scriptsize{}8} & {\scriptsize{}8}\tabularnewline
\hline
{\scriptsize{}CA} & {\scriptsize{}2} & {\scriptsize{}2}\tabularnewline
\hline
{\scriptsize{}DEN} & {\scriptsize{}2} & {\scriptsize{}4}\tabularnewline
\hline
{\scriptsize{}Media Service} & {\scriptsize{}4} & {\scriptsize{}6}\tabularnewline
\hline
{\scriptsize{}LDM} & {\scriptsize{}4} & {\scriptsize{}2}\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\end{table}
\subsection{Implementation}
The optimization function was solved using IBM ILOG CPLEX 12.9.0 through its Python API. The solution is provided almost instantly for all simulation scenarios with different vehicle densities on a laptop with an Intel Core i7-8750 CPU, a 2.21 GHz clock frequency and 16 GB of RAM. The final solution includes the V2X services placed on each edge server.
\subsection{Results and Discussion}
To evaluate the efficacy of the optimization function, the simulation procedure was carried out using two different traffic scenarios representing moderate (Scenario 1) and heavy (Scenario 2) traffic models. The results are obtained as an average over five independent runs. To assess the placement function, the average delay of each V2X application under study is obtained and compared to the maximum tolerable delay. Additionally, the model is evaluated using the probability density function of each V2X application. The density function provides a more thorough overview of the distribution of the delays, revealing extreme values that are overshadowed by the trend of the common values. Furthermore, the density functions reveal shortcomings of the approaches that are concealed by the calculation of the mean. The suggested optimization function failed to converge; therefore, a heuristic algorithm that relaxes the delay threshold of each application by the magnitude of its reliability metric is considered and executed. This heuristic algorithm is referred to as Resource and Delay-aware V2X basic service Placement (RDP). The results of the simulation process in terms of the average delay and the probability densities of each V2X application are presented in Figures 2-5.
\begin{figure} \caption{Average Delay of V2X Applications for different Vehicle Densities} \begin{centering} \centerline{\includegraphics[scale=0.225]{1C__Users_ishaer_Desktop_backup_UWO_Research_Documents_Conference_Content_Code_delay-1500-1800}} \par\end{centering} \end{figure} \begin{figure} \begin{centering} \caption{Probability Density Function for 1500 vehicles/hour} \par\end{centering} \centering{}\centerline{\includegraphics[scale=0.13]{2C__Users_ishaer_Desktop_backup_UWO_Research_Documents_Conference_Content_Code_results-23-1500}} \end{figure} \begin{figure} \caption{Probability Density Function for 1800 vehicles/hour} \centering{}\centerline{\includegraphics[scale=0.13]{3C__Users_ishaer_Desktop_backup_UWO_Research_Documents_Conference_Content_Code_results-23-1800}} \end{figure} Figure 2 shows the mean delay for each of the V2X applications. The results clearly show that the average delay experienced by each V2X application is within the tolerable threshold, meaning that the heuristic algorithm met the stringent V2X application delay requirements. In terms of the traffic effect, the mean delay of each application increases slightly with density but still fulfills the overall objective of the placement function. The vehicle density has contributed to an increase in the average delay of the V2X applications in the range of 1.3\% to 4.8\%, with the SSM application experiencing the greatest variation. This shows that the placement of the Media service, which SSM relies on, is the most sensitive to traffic variation. On the other hand, the probability density functions tell a different story. Figures 3 and 4, depicting the delay distributions for both scenarios, show that the delay is highly skewed to the left, which supports the viability of the approach. However, this is not the case for the FCW application, for which 20\% and 25\% of the experienced delays in Scenarios 1 and 2, respectively, exceed the tolerable threshold, beyond the 5\% permitted by the reliability requirement in Table II. In terms of the traffic effect, a slight right shift of the probability distribution is observed in Scenario 2. Additionally, it is observed that some applications have similar probability distributions. This is attributed to the fact that these applications rely on the same V2X services, and, as the results show, these shared services incur the most delay among the services the applications rely on. The dispersion of some of the probability density functions is due to the limited number of edge nodes hosting V2X services. The limited number of edge servers means that vehicles at the start and the end of the route will suffer from prolonged delay due to the distance separating the vehicles and the closest V2X basic services. In the case of continued routes, the suggested approach can be replicated along the highway to ensure that V2X services are delivered as expected. For comparison purposes and to further cement this paper\textquoteright s approach, a baseline approach that maximizes the resource utilization at each node server is compared to RDP. The baseline approach formulates a placement algorithm that takes into consideration only the available resources at each node. This baseline approach is referred to as the Resource-Aware Algorithm (RAA). The two approaches were evaluated according to the probability density functions of the delays of the ES and FCW applications. The probability densities are depicted in Figure 5.
\begin{figure} \begin{centering} \caption{RDP vs RAA} \par\end{centering} \raggedright{}\centerline{\includegraphics[scale=0.21]{4C__Users_ishaer_Desktop_backup_UWO_Research_Documents_Conference_Content_Code_RDP-RAA}} \end{figure} The baseline approach\textquoteright s density function shows promising results for the ES application, as the full delay distribution is below the tolerable threshold. However, more values are concentrated at the extremes, which makes it harder to gauge the delay whenever the application is requested. For the case of FCW, this approach fails to stay within the tolerable threshold, rendering it ineffective for mission-critical applications. This is to be expected given that the FCW application requires the CA and DEN basic services. Due to the nature of RAA, which maximizes the overall resource utilization, deploying more CA service instances would decrease the utilization; the resulting scarcity of CA instances incurs extra delay for the FCW application when requesting the services of CA. \section{Conclusion} This paper addressed the efficient placement of the V2X basic services that compose different V2X applications in an edge computing environment. To this end, an optimization function is formulated that minimizes the delay for multi-component V2X applications consisting of V2X services while considering the resource requirements of these services under different traffic conditions. The approach was evaluated under realistic scenarios where homogeneous edge servers with limited computational power and variable traffic conditions were considered. Furthermore, the approach was compared to a baseline approach that maximizes the overall resource utilization of edge servers. The results have shown that the approach guarantees an acceptable quality of service and outperforms other approaches while emulating realistic conditions. While the current work considers that each V2X application has a constant request rate, the plan is to extend the work to consider different request distributions to mimic a real-world scenario. In the same context, the deployment of V2X applications in a dynamic service availability environment is also a subject of our future work.
\section{Introduction} Phase field fracture \cite{bourdin_numerical_2000,ambati_review_2015,wu_chapter_2020} is a leading tool for investigating fracture in engineered \cite{roters_damask_2019,bui_review_2021}, geological \cite{wilson_phase-field_2016}, and biological materials \cite{shen_novel_2019}. By considering cracks as localized changes in a phase field variable, phase field fracture requires no explicit tracking of the crack front, and can thus simulate arbitrarily complex crack geometries. Phase field fracture is straightforward to generalize to different physical scenarios, with variants for dynamic \cite{karma_phase-field_2001,bourdin_time-discrete_2011} and quasi-static \cite{bourdin_numerical_2000} fracture and extensions that include plasticity \cite{roters_damask_2019,alessi_comparison_2018} and a variety of other multi-physics phenomena \cite{wilson_phase-field_2016,bilgen_phase-field_2021,svolos_thermal-conductivity_2020}. The development of phase field fracture over the last 20 years has led to a variety of different models even for the relatively simple case of quasi-static brittle fracture (see reviews \cite{ambati_review_2015,wu_chapter_2020}). For researchers interested in using phase field fracture, systematic comparisons of these models are valuable in determining what is physically appropriate for their system. However, such comparisons \cite{ambati_review_2015,kuhn_degradation_2015,linse_convergence_2017,tanne_crack_2018, de_lorenzis_nucleation_2021, zhang_assessment_2022} have so far focused on homogeneous systems, despite the growing number of phase field fracture studies that are explicitly interested in the effects of material heterogeneity \cite{chakraborty_multi-scale_2016,hansen-dorr_phase-field_2020,mesgarnejad_crack_2020,wang_modeling_2021,lotfolahpour_effects_2021}. The present study seeks to address this gap by focusing on how different phase field fracture formulations affect crack paths in a set of randomly generated, elastically heterogeneous two-dimensional (2-D) microstructures. Our interest in crack paths is motivated by the problem of predicting the geometry of fracture surfaces. Fracture surfaces are known to exhibit self-affine scaling \cite{mandelbrot_fractal_1984,maloy_experimental_1992}, and understanding this geometrical scaling has been a goal of modeling and simulation efforts for over 30 years (see e.g., Ref.\ \cite{ponson_statistical_2016} for a review). Models for fracture surface roughness in brittle materials \cite{larralde_shape_1995,ramanathan_quasistatic_1997,katzav_fracture_2007,ponson_statistical_2016} consider the evolution of a sharp crack via propagation laws based on solutions to the stress field around the crack obtained from linear elastic fracture mechanics (LEFM) \cite{zehnder_fracture_2012}. In such models, the crack propagates in the direction indicated by the principle of local symmetry, i.e., the direction in which the stress intensity factor for mode II (in-plane shear) $K_\mathrm{II}$ is zero \cite{goldstein_brittle_1974,hodgdon_derivation_1993}, when Griffith's criterion \cite{griffith_vi_1921} is met in this direction. That is, the crack grows when the elastic energy $G$ that would be released by extending the crack by a unit distance exceeds a critical value $G_c$. Other criteria for the direction and onset of crack growth exist, but differences between them are minimal for isotropic materials \cite{cotterell_slightly_1980,hutchinson_mixed_1991}.
The stress distributions obtained from LEFM enable analytical predictions \cite{larralde_shape_1995,ramanathan_quasistatic_1997,ponson_statistical_2016} and efficient simulations \cite{katzav_fracture_2007,lebihain_effective_2020}, but only for systems where the elasticity problem is analytically tractable, for example when elastic properties are uniform or their effects can be abstracted into a noise term acting on the crack path. Phase field fracture is more general than these sharp-crack evolution models in that it can simulate crack nucleation and branching in addition to propagation, and it is not limited to systems where LEFM can be applied. The formulation of Ref.~\cite{bourdin_numerical_2000}, from which most contemporary phase field fracture formulations originate, was proposed as a regularization of the variational approach to fracture \cite{francfort_revisiting_1998}, in which the crack growth criterion $G>G_c$ is recovered via variational principles from an energy functional containing both the stored elastic energy (dependent on the phase field and strain field) and the energy dissipated during propagation of the crack (dependent only on the phase field). This fracture dissipation energy contains a diffuse interface approximation for the crack measure that $\Gamma$-converges as a crack width parameter $\ell$ goes to zero \cite{ambrosio_approximation_1990,braides_approximation_1998}. This approximation was originally proposed by Ambrosio and Tortorelli \cite{ambrosio_approximation_1990} for the Mumford-Shah image segmentation problem. The limit $\ell\to 0$ was also investigated via matched asymptotic analysis by Hakim and Karma \cite{hakim_laws_2009}, who confirmed agreement with the principle of local symmetry for propagation through isotropic materials and considered anisotropic fracture toughness via simulations. In addition to describing crack propagation, phase field fracture models have been shown to describe crack nucleation in a way that accurately matches experimental systems with stress concentrations and singularities \cite{tanne_crack_2018}. Phase field fracture has also been interpreted as a form of continuum damage model \cite{pham_gradient_2011,de_borst_gradient_2016}, which provides a physical interpretation of the evolution of the phase field away from a crack. While most studies point to agreement between phase field fracture and classical theories, there are certain scenarios and formulations that are known to result in behavior that is non-physical in the context of brittle fracture. One example is the possibility of interpenetration of crack faces and of crack nucleation in compression in the initial model of Bourdin et al.\ \cite{bourdin_numerical_2000}. Multiple formulations were subsequently proposed to restrict the driving force for fracture to tensile or shear conditions and to enforce some form of elastic contact between crack faces \cite{amor_regularized_2009,lancioni_variational_2009,freddi_regularized_2010,miehe_thermodynamically_2010,zhang_assessment_2022} (see also Ref.\ \cite{wu_chapter_2020} for a review). These include non-variational formulations \cite{ambati_review_2015}, in which the governing equations for the strain field and phase field do not correspond to the same energy functional.
A second example concerns the form of the fracture dissipation energy in the original model of Ref.\ \cite{bourdin_numerical_2000}, in which the phase field evolves even at low stresses, leading to the lack of a purely elastic phase prior to fracture \cite{pham_gradient_2011}. An alternative formulation with an elastic phase leads to an improved description of crack nucleation compared to experiments \cite{tanne_crack_2018}. A related third example is the irreversibility of crack growth: constraining the entire evolution of the phase field to be irreversible, as opposed to a crack set \cite{bourdin_numerical_2000,gerasimov_penalization_2019}, can lead to poor $\Gamma$-convergence \cite{linse_convergence_2017}. In this work, we consider formulations of the elastic energy, fracture dissipation energy, and irreversibility condition as three `dimensions' in which models for quasi-static fracture can vary. As a fourth `dimension', we also consider the method for evolving the phase field. We consider three types of evolution method: minimization \cite{bourdin_numerical_2000,bourdin_numerical_2007,bourdin_variational_2008}, time-dependent evolution \cite{miehe_thermodynamically_2010,kuhn_continuum_2010}, and near-equilibrium (e.g., path-following \cite{vignollet_phase-field_2014,may_new_2016,singh_fracture-controlled_2016}) methods. To our knowledge, our study is the first comprehensive comparison between all three of these evolution methods for quasi-static phase field fracture. The different formulations considered in this work are simulated within a common numerical and computational framework. Our solvers weakly couple the phase field and elasticity sub-problems, a relatively common approach in phase field fracture (see e.g., Refs.\ \cite{bourdin_numerical_2000,miehe_phase_2010,singh_fracture-controlled_2016}). The phase field and elasticity sub-problems are discretized using spectral methods \cite{saranen_periodic_2002,zeman_finite_2017} that make use of fast Fourier transforms (FFTs), although our approach differs in certain technical aspects from previous FFT-based implementations \cite{chen_fft_2019,ernesti_fast_2020,pankowski_fourier_2020}. Notably, we apply a bound-constrained conjugate gradients algorithm \cite{vollebregt_bound-constrained_2014} to solve for the phase field while constraining it to evolve irreversibly. Our example of a path-following method is also novel for phase field fracture, and its attributes compared to previous methods \cite{vignollet_phase-field_2014,may_new_2016,singh_fracture-controlled_2016} will be briefly discussed. Overall, this work is focused on comparing model formulations with respect to crack path selection, and other questions about the relative suitability of our methods are left to future work. \section{Background \label{sec:background}} Phase field fracture models simulate damage and fracture via the evolution of the phase field $\phi$ within the entire $d$-dimensional domain. There are multiple approaches to determining this evolution, but they all at some level involve solving partial differential equations for $\phi$ and the displacement or strain field. The phase field represents both local degradation of the elastic properties of the material and the dissipation of energy due to disruption of bonds in the material via damage or formation of a crack. During fracture, evolution of the phase field becomes localized around one or more $(d-1)$-dimensional cracks.
In order to avoid healing of damage or cracks that developed at previous steps, evolution of $\phi$ must be constrained to be irreversible, either throughout the entire domain \cite{miehe_phase_2010,pham_gradient_2011} or within a crack set where $\phi$ has reached some critical value \cite{bourdin_numerical_2000}. Phase field fracture models are formulated such that $\phi$ varies smoothly between its fully damaged state (e.g., $\phi=1$) at a crack center and its value in the bulk material (e.g., $\phi=0$). This regularity around the crack provides phase field fracture with a degree of independence from its spatial discretization \cite{de_borst_gradient_2016,linse_convergence_2017}, provided that the discretization elements are sufficiently small relative to the length scale over which $\phi$ decays. The capability to simultaneously nucleate and evolve multiple cracks independently of the spatial discretization makes phase field fracture a promising method for investigating crack path selection. \subsection{Free energy functional} The usual starting point for describing phase field fracture models is a free energy functional. For quasi-static brittle fracture, this functional has two parts, \begin{equation} F[\phi, \mathbf u] := F_e[\phi, \mathbf u] + F_f[\phi], \end{equation} where $\mathbf u$ is the displacement vector, $F_e$ is the stored elastic energy, and $F_f$ is the energy dissipated during fracture. Here and in the following we use bold-faced symbols for vectors and square brackets to indicate functional dependence. The elastic energy $F_e[\phi,\mathbf u]$ is simply the integral over the domain of the elastic energy density $\psi(\phi,\mathbf{\varepsilon})$ for a material point with phase field $\phi$ and strain $\mathbf{\varepsilon}=\nabla_s \mathbf u$, where $\nabla_s$ denotes the symmetrized gradient $\nabla_s \mathbf u=(\nabla \mathbf u +\nabla^T \mathbf u)/2$, \begin{equation} \label{eq:Fe} F_e[\phi,\mathbf u] := \int_\Omega \psi\left(\phi,\mathbf{\varepsilon}(\nabla \mathbf u)\right) \dif \mathbf x, \end{equation} where $\Omega$ denotes the $d$-dimensional simulation domain $\Omega \subset \mathbb R^d$ and $\mathbf x \in \Omega$. In the simplest choice for $\psi(\phi,\mathbf{\varepsilon})$, the classical small-strain elastic energy density for an isotropic solid is multiplied by a quadratic degradation function $h(\phi)=(1-\phi)^2$ \cite{bourdin_numerical_2000,pham_gradient_2011}, \begin{equation} \label{eq:isotropic_en} \psi(\phi,\mathbf{\varepsilon}) = \frac{1}{2}\left(\lambda \mathrm{tr}(\mathbf{\varepsilon})^2 + 2\mu \sum_i^d \sum_j^d \varepsilon_{ij}^2\right) h(\phi), \end{equation} where $\mathrm{tr}(\mathbf{\varepsilon})$ is the trace of $\mathbf{\varepsilon}$ and $\lambda$ and $\mu$ are the Lam\'e parameters: $\mu=E/(2+2\nu)$ and $\lambda = E\nu/(1-\nu-2\nu^2)$ in terms of the Young's modulus $E$ and Poisson's ratio $\nu$. The degradation function $h(\phi)$ is equal to unity in the undamaged state, $h(0)=1$, and zero in the fully damaged state at the crack center, $h(1)=0$. It also has a derivative of zero at the fully damaged state, $h'(1)=0$, which means that there is no driving force for further increases in $\phi$ beyond $\phi=1$. The elastic energy density $\psi(\phi,\mathbf{\varepsilon})$ is non-convex in $\phi$ and $\mathbf{\varepsilon}$ when they are considered together, but convex in each when the other variable is held constant.
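As a concrete illustration of Eq.~\eqref{eq:isotropic_en}, the degraded isotropic energy density can be evaluated in a few lines of NumPy. The following sketch is purely illustrative (the function names and example inputs are our own and are not part of the solver described later):

\begin{verbatim}
import numpy as np

def lame_parameters(E, nu):
    # Lame parameters from Young's modulus E and Poisson's ratio nu
    mu = E / (2.0 + 2.0 * nu)
    lam = E * nu / (1.0 - nu - 2.0 * nu**2)
    return lam, mu

def isotropic_energy_density(eps, phi, E, nu):
    # Eq. (isotropic_en): classical energy density times h(phi) = (1 - phi)^2
    lam, mu = lame_parameters(E, nu)
    psi0 = 0.5 * (lam * np.trace(eps)**2 + 2.0 * mu * np.sum(eps**2))
    return psi0 * (1.0 - phi)**2

# Example: 2-D strain state at a half-damaged point (phi = 0.5)
eps = np.array([[1.0e-3, 0.0], [0.0, -3.0e-4]])
print(isotropic_energy_density(eps, phi=0.5, E=210.0e9, nu=0.3))
\end{verbatim}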
The model in Eq.~\eqref{eq:isotropic_en} is referred to as isotropic because it does not distinguish between tensile and compressive strain states \cite{bourdin_numerical_2000,miehe_thermodynamically_2010}. Thus, a crack in this model would be stress-free even under compressive strains where a contact stress would be expected physically. In order to account for the asymmetric response of a crack to tension vs.\ compression, the elastic energy density is typically split into two terms: $\psi^+_0(\mathbf{\varepsilon})$, which is affected by the degradation function $h(\phi)$, and $\psi^-_0(\mathbf{\varepsilon})$, which is not, \begin{equation} \label{eq:psi_schema} \psi(\phi,\mathbf{\varepsilon}) := \psi^+_0(\mathbf{\varepsilon})h(\phi) + \psi^-_0(\mathbf{\varepsilon}). \end{equation} (Note that the isotropic model in Eq.~\eqref{eq:isotropic_en} also fits this schema with $\psi^-_0=0$.) In this work we consider the strain-spectral split of Miehe et al.\ \cite{miehe_phase_2010} and the volumetric-deviatoric split of Amor et al.\ \cite{amor_regularized_2009}, both formulated for an otherwise isotropic material. These are two of the most widely studied tension/compression splits (see, e.g., Refs.~\cite{ambati_review_2015,wu_chapter_2020,bilgen_crack-driving_2019,freddi_regularized_2010,de_lorenzis_nucleation_2021,zhang_assessment_2022}). The strain-spectral split has terms $\psi^+_0$ and $\psi^-_0$ of the form \begin{equation} \label{eq:strain_spectral_en} \psi^\pm_0(\mathbf{\varepsilon}) = \frac{1}{2} \lambda \left< \sum^d_{\alpha=1} \mathbf{\varepsilon}_\alpha \right>_\pm^2 + \mu \sum^d_{\alpha=1} \left< \mathbf{\varepsilon}_{\alpha} \right>_\pm^2, \end{equation} where $\mathbf{\varepsilon}_\alpha$ are the eigenvalues of the strain $\mathbf{\varepsilon}$ and the angle brackets $\langle \cdot \rangle_\pm$ denote ramp functions such that $\langle x \rangle_+=x$ for $x>0$, $\langle x \rangle_-=x$ for $x<0$, and both functions are zero otherwise. The volumetric-deviatoric split takes the form \begin{equation} \label{eq:vol_dev_en} \psi^+_0(\mathbf{\varepsilon}) = \frac{1}{2}K \left< \mathrm{tr}(\mathbf{\varepsilon}) \right>_+^2 + \mu \sum_{i=1}^d \sum_{j=1}^d \left( \varepsilon_{ij} - \frac{1}{3}\delta_{ij} \mathrm{tr}(\mathbf{\varepsilon}) \right)^2, \end{equation} \[ \psi^-_0(\mathbf{\varepsilon}) = \frac{1}{2} K \left< \mathrm{tr}(\mathbf{\varepsilon}) \right>_-^2, \] where $\delta_{ij}$ is the Kronecker delta and $K$ is the bulk modulus, $K = \lambda + 2\mu/3$. As written in Eq.~\eqref{eq:vol_dev_en}, this formulation holds for 3-D as well as for 2-D cases such as plane strain and plane stress that are obtained from 3-D \cite{li_phase_2021}; Amor et al.\ \cite{amor_regularized_2009} additionally proposed a purely 2-D formulation that we will not consider here. The total dissipated fracture energy $F_f$ is formulated to approximate its theoretical equivalent for a sharp crack, \begin{equation} F_{f,\mathrm{sharp}} := \int_\Gamma G_c \dif \mathcal H^{d-1}, \end{equation} where $\Gamma$ is the set corresponding to a sharp crack, $\mathcal H^{d-1}$ is the $(d-1)$-dimensional Hausdorff measure (equivalent to length for $d=2$ or area for $d=3$ for sufficiently regular $\Gamma$), and $G_c$ is the critical energy release rate, the energy dissipated when $\Gamma$ is extended by a unit of $\mathcal H^{d-1}$ under equilibrium conditions.
Phase field models approximate $F_{f,\mathrm{sharp}}$ via an elliptic functional in $\phi$ \cite{ambrosio_approximation_1990,bourdin_numerical_2000,gerasimov_penalization_2019}, \begin{equation} \label{eq:Ff} F_f[\phi] := \frac{G_c}{\ell c_w} \int_\Omega \left[ f(\phi) + \ell^2 |\nabla \phi|^2 \right] \dif \mathbf x, \end{equation} where $\ell$ is a length scale that determines the width of the diffuse crack, and $c_w$ is a constant that takes different values depending on the form of $f(\phi)$ to ensure that $F_f$ evaluates to $G_c$ for an ideal phase field crack with unit $\Gamma$. The actual increment in $F_f$ corresponding to a unit increment in $\Gamma$ is usually larger than $G_c$ in practice, for example due to numerical error \cite{linse_convergence_2017,bleyer_dynamic_2017}. For systems in which we can easily measure $\Gamma$, we denote this `true' energy release rate by $G=\dif F_f/\dif\mathcal H^{d-1}$. We consider two forms for the local fracture energy density term $f(\phi)$ \cite{pham_gradient_2011,gerasimov_penalization_2019}, \begin{equation} \label{eq:fracturelocal} \textrm{(AT1):} \;\; f(\phi)= \phi, \; c_w=8/3; \;\;\;\textrm{(AT2):} \;\; f(\phi) = \frac{1}{2} \phi^2,\; c_w = 2. \end{equation} In combination with the quadratic degradation function $h(\phi)=(1-\phi)^2$, these forms of $f(\phi)$ correspond to the AT1 and AT2 models considered in Refs.\ \cite{tanne_crack_2018,alessi_comparison_2018}. The `AT' designation refers to Ambrosio and Tortorelli \cite{ambrosio_approximation_1990}, who provided a method to prove $\Gamma$-convergence of $F_f$ to $F_{f,\mathrm{sharp}}$ in the limit $\ell \to 0$. The AT2 model corresponds to the original phase field fracture model proposed by Bourdin et al.\ \cite{bourdin_numerical_2000}, while AT1 was proposed subsequently by Pham et al.\ \cite{pham_gradient_2011}. The AT1 and AT2 models result in different optimal profiles of $\phi(x)$ for a 1-D crack \cite{miehe_thermodynamically_2010,pham_gradient_2011}: \begin{equation} \label{eq:AT1_analytical} \mathrm{(AT1):}\;\;\;\phi(x) = \left(1- \frac{|x-x_0|}{2\ell}\right)^2, \end{equation} \begin{equation} \label{eq:AT2_analytical} \mathrm{(AT2):}\;\;\;\phi(x) = \exp\left(-\frac{|x-x_0|}{\ell}\right), \end{equation} where $x_0$ denotes the center of the crack. During simulations with the AT2 model, the phase field increases as soon as $\psi^+_0$ becomes non-zero, which prevents truly elastic behavior and leads to delocalized evolution of $\phi$ far from the eventual crack \cite{pham_gradient_2011}. This delocalized evolution results in a worse description of crack nucleation in systems that lack a strongly singular stress concentration compared to the AT1 model \cite{tanne_crack_2018}, which retains a linear elastic response until the onset of fracture. The principal disadvantage of the AT1 model is that it is ill-posed unless a constraint is imposed on $\phi$ throughout the entire domain: either the irreversibility constraint must be enforced throughout the entire domain or another constraint (e.g., $\phi \ge 0$) must be added where the irreversibility constraint is not enforced. The AT2 model has no such requirement due to $f(\phi)$ being strictly convex. Enforcing irreversible evolution of $\phi$ in the entire domain has been found to negatively affect $\Gamma$-convergence of $F_f$ with the AT2 model due to the delocalized evolution of $\phi$ prior to fracture \cite{linse_convergence_2017}.
Thus, works with the AT2 model often limit the irreversibility constraint to a crack set of points with $\phi$ greater than some threshold value \cite{bourdin_numerical_2000,gerasimov_penalization_2019}. To provide consistent notation between these constraints, we define two variants of a constraining field $\phi_\mathrm{con.}(\phi)$, \begin{equation} \label{eq:constraint_field} \textrm{(crack-set):}\;\; \phi_\mathrm{con.}(\phi) = \begin{cases} \phi& \text{if}\; \phi \geq 0.9,\\ 0 & \text{otherwise} \end{cases},\;\;\;\; \textrm{(damage):}\;\; \phi_\mathrm{con.}(\phi) = \phi, \end{equation} where the crack set has been approximated as the set of points where $\phi(\mathbf x)\geq 0.9$ and the `damage' name refers to the prevalence of irreversibility in the entire domain in interpretations of phase field fracture as a damage model \cite{pham_gradient_2011,de_borst_gradient_2016}. The irreversibility constraint can then be written as $\phi - \phi_\mathrm{con.} \ge 0$, where $\phi_\mathrm{con.}$ is obtained from Eq.~\eqref{eq:constraint_field} based on a previous iterate for $\phi$. The choice of previous iterate differs between evolution methods. We may now write the overall energy functional for the phase field fracture model as \begin{equation} \label{eq:functional} F[\phi, \mathbf u] = \int_\Omega \left[ \psi\left(\phi,\mathbf{\varepsilon}(\nabla \mathbf u)\right) + \frac{G_c}{c_w \ell}\left( f(\phi) + \ell^2 |\nabla \phi|^2 \right) \right] \dif\mathbf x. \end{equation} This work will focus on the choices of $f(\phi)$ in Eq.\ \eqref{eq:fracturelocal} and the choices of $\psi(\phi,\mathbf{\varepsilon})$ described in Eqs.\ \eqref{eq:isotropic_en}-\eqref{eq:vol_dev_en}. This selection of formulations is intended to represent the simplest and most commonly used formulations for quasi-static brittle fracture, and is not comprehensive. See for example Refs.\ \cite{pham_gradient_2011,wu_unified_2017,wu_chapter_2020,wu_length_2018} for alternative forms of the local fracture energy density $f(\phi)$ and degradation function $h(\phi)$, Ref.\ \cite{borden_higher-order_2014} for a form of $F_f$ incorporating the Laplacian of $\phi$, and Refs.\ \cite{bilgen_crack-driving_2019, de_lorenzis_nucleation_2021,zhang_assessment_2022} for alternative decompositions of the elastic energy density $\psi(\phi,\mathbf{\varepsilon})$. Instead of $F$ itself, evolution methods use $F_\phi$ and $F_{\mathbf u}$, respectively the variational derivatives of $F$ with respect to $\phi$ and $\mathbf u$. For $F$ as written in Eq.\ \eqref{eq:functional}, these variational derivatives are \begin{equation} \label{eq:F_phi} F_\phi = h'(\phi)\psi^+_0(\mathbf{\varepsilon}) + \frac{G_c}{c_w \ell}\left( f'(\phi) - \ell^2 \nabla^2 \phi \right), \end{equation} \begin{equation} \label{eq:F_strain} F_\mathbf{u} = -\nabla \cdot \mathbf{\sigma}, \;\; \mathbf{\sigma} = \frac{\partial \psi(\mathbf{\varepsilon},\phi)}{\partial \mathbf{\varepsilon}}, \end{equation} where the symmetrization operator $\partial \mathbf{\varepsilon}/\partial \nabla \mathbf{u}$ has no effect for the choices of $\psi(\mathbf{\varepsilon},\phi)$ considered here. Like the energy density itself, the stress can be expressed as a splitting of two terms modified by the degradation function $h(\phi)$, \begin{equation} \mathbf{\sigma} = \frac{\partial \psi^+_0}{\partial \mathbf{\varepsilon}}h(\phi) +\frac{\partial \psi^-_0}{\partial \mathbf{\varepsilon}} = \mathbf{\sigma}_0^+ h(\phi) + \mathbf{\sigma}_0^-.
\end{equation} The stress decompositions for the isotropic, strain-spectral, and volumetric-deviatoric models are then \begin{equation} \label{eq:stress_iso} \textrm{(isotropic):}\;\; \mathbf{\sigma}^+_0 = \lambda\, \mathrm{tr}(\mathbf{\varepsilon})\, \mathbf I + 2\mu\, \mathbf{\varepsilon},\;\; \mathbf{\sigma}^-_0 = \mathbf 0, \end{equation} \begin{equation} \label{eq:stress_spectral} \textrm{(strain-spectral):}\;\; \mathbf{\sigma}^\pm_0 = \sum_{\alpha=1}^d \left( \lambda \left< \mathrm{tr}(\mathbf{\varepsilon})\right>_\pm + 2\mu \left< \mathbf{\varepsilon}_\alpha \right>_\pm \right) \mathbf{n}^\alpha \otimes \mathbf{n}^\alpha, \end{equation} \begin{equation} \label{eq:stress_voldev} \textrm{(volumetric-deviatoric):}\;\;\mathbf{\sigma}^+_0 = K \mathbf I \left< \mathrm{tr}(\mathbf{\varepsilon})\right>_+ + 2\mu \left[\mathbf{\varepsilon} - \frac{1}{3} \mathbf I\, \mathrm{tr}(\mathbf{\varepsilon})\right],\;\; \mathbf{\sigma}^-_0 = K\mathbf I \left< \mathrm{tr}(\mathbf{\varepsilon})\right>_-, \end{equation} where $\mathbf{n}^\alpha$ is the $\alpha$-th eigenvector of $\mathbf{\varepsilon}$, $\otimes$ denotes the outer product, $\mathbf 0$ is the $d\times d$ matrix with all entries equal to zero, and $\mathbf I$ is the $d\times d$ identity matrix. Under a variety of circumstances, it can be convenient to change the terms $\partial \psi/\partial \phi$ and $\partial \psi/\partial \mathbf{\varepsilon}$ in Eqs.~\eqref{eq:F_phi} and \eqref{eq:F_strain}, respectively, such that they no longer represent derivatives of the same energy density $\psi$. The term $\partial \psi/\partial \phi$ has become known as the crack driving force \cite{bilgen_crack-driving_2019,kumar_revisiting_2020}. The different forms of the stress affect the mechanical response of the crack and other regions with non-zero $\phi$. For this reason, we refer to forms of $\partial \psi/\partial \mathbf{\varepsilon}$ as contact models, even if they fail to reproduce realistic contact physics \cite{amor_regularized_2009,freddi_regularized_2010,zhang_assessment_2022}. The earliest example of a non-variational phase field fracture model may be the use of a history function in place of $\psi_0^+$ in the crack driving force in order to satisfy a damage-type irreversibility condition \cite{miehe_phase_2010,gerasimov_penalization_2019}. Ambati et al.\ \cite{ambati_review_2015} proposed using the crack driving force from the strain-spectral split, Eq.\ \eqref{eq:strain_spectral_en}, with the stress-free contact model from the isotropic formulation, Eq.\ \eqref{eq:isotropic_en}, to save on computational effort. Other works have proposed non-variational forms of the crack driving force to better approximate experimental strength surfaces \cite{wu_unified_2017,kumar_revisiting_2020} and crack paths \cite{bilgen_crack-driving_2019}. In this work, we will only consider crack driving forces and contact models derived from the energy densities in Eqs.\ \eqref{eq:isotropic_en}-\eqref{eq:vol_dev_en}, but we will consider non-variational combinations of crack driving forces and contact models.
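To make the splits concrete, the following NumPy sketch evaluates Eqs.~\eqref{eq:stress_spectral} and \eqref{eq:stress_voldev} for a single strain tensor. It is an illustrative transcription under the stated small-strain assumptions (function names are ours), not the production implementation used in this work:

\begin{verbatim}
import numpy as np

def ramp(x, sign):
    # <x>_+ or <x>_- ramp functions
    return np.maximum(x, 0.0) if sign > 0 else np.minimum(x, 0.0)

def stress_strain_spectral(eps, lam, mu, sign=+1):
    # sigma_0^{+/-} of the strain-spectral split, Eq. (stress_spectral)
    vals, vecs = np.linalg.eigh(eps)   # eigenvalues/eigenvectors of strain
    tr = np.trace(eps)
    sig = np.zeros_like(eps)
    for a in range(eps.shape[0]):
        n = vecs[:, a]
        sig += (lam * ramp(tr, sign)
                + 2.0 * mu * ramp(vals[a], sign)) * np.outer(n, n)
    return sig

def stress_vol_dev(eps, K, mu):
    # sigma_0^+ and sigma_0^- of the volumetric-deviatoric split
    I = np.eye(eps.shape[0])
    tr = np.trace(eps)
    dev = eps - (tr / 3.0) * I         # 3-D convention, cf. Eq. (vol_dev_en)
    return K * ramp(tr, +1) * I + 2.0 * mu * dev, K * ramp(tr, -1) * I
\end{verbatim}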
\subsection{Evolution methods} \label{sec:bg_evolution} We can consider three main types of models for the evolution of $\phi$ during quasi-static brittle fracture \cite{ambati_review_2015,wu_chapter_2020}: minimization \cite{bourdin_numerical_2000}, time-dependent evolution \cite{karma_phase-field_2001,miehe_phase_2010, miehe_thermodynamically_2010}, and near-equilibrium or path-following evolution \cite{vignollet_phase-field_2014,may_new_2016,singh_fracture-controlled_2016}. In the minimization approach, the functional $F[\phi, \mathbf u]$ is minimized with respect to $\phi$ and $\mathbf u$ \cite{bourdin_numerical_2000}, \begin{equation} \label{eq:minimization} \phi, \mathbf u = \mathrm{arg} \min_{\phi',\mathbf u'} F[\phi',\mathbf u']. \end{equation} This minimization is complicated by the non-convexity of the $\psi(\phi,\mathbf{\varepsilon})$ term in $F$, which, depending on the method used, may result in non-convergence \cite{gerasimov_line_2016,heister_primal-dual_2015,farrell_linear_2017} or convergence to a local rather than a global minimizer \cite{bourdin_numerical_2007}. Bourdin \cite{bourdin_numerical_2007} discusses the issue of global vs.\ local minimizers in depth and provides a backtracking method for finding global minimizers. However, in subsequent literature it has been common to accept the local minimizers resulting from a particular optimization algorithm as the solution \cite{heister_primal-dual_2015,ambati_review_2015,gerasimov_line_2016,farrell_linear_2017,wick_modified_2017,gerasimov_penalization_2019,ernesti_fast_2020}, although finding an ensemble of local minimizers has also been proposed \cite{gerasimov_stochastic_2020}. Finding a local minimizer amounts to finding $\phi$ and $\mathbf u$ that satisfy the Karush-Kuhn-Tucker optimality conditions \cite{pham_gradient_2011}, namely the stationarity condition for $\mathbf u$, \begin{equation} \label{eq:stationarity_u} F_\mathbf{u} = 0, \end{equation} the stationarity and dual feasibility conditions for $\phi$, \begin{equation} \label{eq:stationarity_phi} F_\phi \ge 0, \end{equation} the irreversibility condition on $\phi$ (primal feasibility), \begin{equation} \label{eq:irreversibility} \phi - \phi_\mathrm{con.} \ge 0, \end{equation} and the complementary slackness condition for $\phi$, \begin{equation} \label{eq:slackness} F_\phi \left(\phi - \phi_\mathrm{con.} \right) = 0, \end{equation} where $\phi_\mathrm{con.}$ is based on the previous minimization result. Minimization allows brutal fracture, in which a minimization step results in a discontinuous change in $\phi$, often corresponding to sudden propagation of a crack through the domain \cite{francfort_revisiting_1998,bourdin_numerical_2000,bourdin_numerical_2007,bourdin_variational_2008}. For such cases, the irreversibility constraint plays a much smaller role compared to other evolution methods. Typical solution methods for minimization are Newton-based monolithic schemes \cite{heister_primal-dual_2015,gerasimov_line_2016,farrell_linear_2017,wick_modified_2017,gerasimov_penalization_2019} and alternating minimization (AM), in which the solver alternates between solving Eq.\ \eqref{eq:stationarity_u} with $\phi$ held constant and Eqs.\ \eqref{eq:stationarity_phi}-\eqref{eq:slackness} with $\mathbf u$ held constant until a convergence criterion is reached \cite{bourdin_numerical_2000,bourdin_numerical_2007,hossain_effective_2014,ambati_review_2015,farrell_linear_2017}.
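In discrete form, verifying that a candidate pair $(\phi,\mathbf u)$ is a local minimizer amounts to checking Eqs.~\eqref{eq:stationarity_phi}-\eqref{eq:slackness} pointwise. A schematic NumPy check (the tolerances and names are illustrative assumptions, not part of the solvers described below) might read:

\begin{verbatim}
import numpy as np

def kkt_satisfied(F_phi, phi, phi_con, tol=1.0e-8):
    # F_phi:   discrete variational derivative of F w.r.t. phi
    # phi_con: constraining field from Eq. (constraint_field)
    stationarity = np.all(F_phi >= -tol)          # Eq. (stationarity_phi)
    feasibility = np.all(phi - phi_con >= -tol)   # Eq. (irreversibility)
    slackness = np.all(np.abs(F_phi * (phi - phi_con)) <= tol)  # Eq. (slackness)
    return stationarity and feasibility and slackness
\end{verbatim}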
Time-dependent evolution, the second type of evolution method, can be interpreted either as a viscous regularization of the minimization method \cite{miehe_thermodynamically_2010} or a Ginzburg-Landau-type gradient flow \cite{lazzaroni_model_2011,kuhn_continuum_2010}, \begin{equation} \label{eq:time_evolution} \eta \frac{\partial \phi}{\partial t} \ge - F_\phi, \end{equation} where $\eta$ is a viscosity parameter. The displacement field is governed by Eq.\ \eqref{eq:stationarity_u}, the irreversibility condition Eq.\ \eqref{eq:irreversibility} is applied with $\phi_\mathrm{con.}$ based on the previous time step, and the equivalent of the complementary slackness condition, Eq.\ \eqref{eq:slackness}, is \begin{equation} \label{eq:slack_time} \left( \eta \frac{\partial \phi}{\partial t} + F_\phi \right)\left( \phi - \phi_\mathrm{con.} \right) = 0. \end{equation} Unlike minimization-based methods, the time-dependent method regularizes brutal fracture: in the limit of continuous time evolution, the time-dependent method results in `progressive' fracture where $\phi$ changes continuously between steps \cite{lazzaroni_model_2011,bourdin_variational_2008}. Like the choice of a specific algorithm in the minimization method, the time-dependent method evolves along a specific pathway for energy dissipation and crack growth during fracture \cite{bourdin_variational_2008}. However, even if the minimization method is applied with an iterative algorithm similar in form to Eq.\ \eqref{eq:time_evolution}, it would still be mathematically distinct from the time-dependent method because it enforces irreversibility based on the initial state of the minimization algorithm, rather than the previous update. In this sense, the staggered method proposed by Miehe et al.\ \cite{miehe_phase_2010}, in which the irreversibility condition is updated after a single iteration of the alternating minimization algorithm, can be interpreted as a time-dependent method in the limit of zero viscosity, $\eta \to 0$. We note that the time-continuous crack path will only be affected by $\eta$ if there is another source of time dependence in the system (e.g., in the loading conditions). If there is no other time dependence, then $\eta$ can be combined with the discrete time step $\Delta t$ into a numerical parameter $\Delta t/\eta$, where low $\Delta t/\eta$ corresponds to less evolution per step. The third type of evolution model is what we call near-equilibrium methods. The reason fracture simulations do not tend to remain near equilibrium is illustrated by linear elastic fracture mechanics, which predicts that for a crack in Mode I loading, the energy release rate $G$ increases linearly as a function of crack length \cite{zehnder_fracture_2012,rice_mathematical_1968}. Thus, once a crack starts to grow, $G$ will continue to increase beyond $G_c$, drawing the system further from equilibrium. Similar behavior is widely seen in mechanical systems with strain-softening properties, and is referred to as snap-back \cite{de_borst_computation_1987,singh_fracture-controlled_2016} due to the simultaneous decreases in stress and strain on an equilibrium stress-strain plot. If the reduction in loading did not occur, the system would be far from equilibrium in an overstressed state. (One could also refer to this state as overstrained, but `overstrained' is associated with plasticity more so than fracture \cite{vincent_mechanics_1992}).
Overstress is known to affect crack morphology and dissipated energy in experiments \cite{scheibert_brittle-quasibrittle_2010} and simulations of dynamic fracture \cite{bleyer_dynamic_2017}. We define near-equilibrium methods as methods where the loading conditions are adapted during evolution to remain near equilibrium, leading to progressive crack growth in which the irreversibility condition is applied between steps. The main category of near-equilibrium methods comprises the path-following or arc-length control methods. In these methods, $F$ is minimized subject to a constraint that a quantity that increases monotonically during fracture (e.g., dissipated energy \cite{gutierrez_energy_2004,vignollet_phase-field_2014,may_new_2016} or crack set measure \cite{singh_fracture-controlled_2016}) must increase by a fixed amount $\Delta \tau$. To provide the additional degree of freedom needed to satisfy this constraint, the applied boundary conditions are allowed to vary, typically via a single scaling parameter. The augmented system, composed of the original constrained minimization problem plus the path-following constraint, models progressive fracture along a path that is as close as possible to satisfying the equilibrium conditions, Eqs.\ \eqref{eq:stationarity_u}-\eqref{eq:slackness}, given the discrete increment in the control parameter $\tau$. Near-equilibrium behavior can be recovered in other evolution methods through specific choices of geometry and/or boundary conditions. For instance, Hossain et al.\ \cite{hossain_effective_2014} proposed a `surfing' boundary condition in which crack growth via any evolution method is self-limiting. These surfing boundary conditions consist of Dirichlet conditions on the displacements based on the LEFM solution for a crack tip at a given location; propagation is then driven by increasing the magnitude of the displacements and/or translating the imposed crack tip location. A large pre-existing crack normal to the loading direction will also limit snap-back by limiting the amount by which crack propagation can increase $G$ \cite{zehnder_fracture_2012}. \section{Methods} \subsection{Sub-problem solution methods} In this work, we consider numerical approaches in which the phase field and elasticity sub-problems are weakly coupled in that separate linear-algebraic problems are solved for each sub-problem. This can simplify implementation by allowing the use of standalone mechanics and/or phase field codes developed for other problems, albeit usually at the cost of performance compared to `monolithic' methods that solve both fields simultaneously \cite{gerasimov_line_2016}. In our case, weak coupling makes it easier to apply FFT-based preconditioning for the elasticity problem, which improves computational performance and enables scalability. We solve the elasticity sub-problem via a Fourier Galerkin scheme and the phase field sub-problem via a Fourier collocation scheme. Both of these schemes employ the same representations of the fields in real and Fourier space. In the Fourier Galerkin scheme, trigonometric polynomials are used as test functions and a quadrature rule is applied to solve the equations in weak form. In the collocation scheme, a trigonometric projection operator is applied to the governing equations, resulting in an expression for the strong form of the governing equations/inequalities at each grid point \cite{saranen_periodic_2002}.
Following \cite{zeman_finite_2017}, we consider a 2-D domain $\Omega$ centered at the origin with lengths $L_x$ and $L_y$ in the $x$ and $y$ directions: $\Omega = [-L_x/2,L_x/2]\times [-L_y/2,L_y/2] \subset \mathbb R^2$, with area $|\Omega| = L_x L_y$. To discretize this domain, we define a regular 2-D grid. We denote the size of the grid by the vector $\mathbf N = (N_x, N_y) \in \mathbb N^2$, where $N_x$ and $N_y$ are the numbers of points in each direction and $|\mathbf N|=N_x N_y$ is the total number of grid points. We can then define a set of grid point indices as \begin{equation} \mathbb Z_N^2 = \left\{\mathbf k = (k_x,k_y) \in \mathbb Z^2 : \frac{-N_x}{2} < k_x < \frac{N_x}{2} , \frac{-N_y}{2} < k_y < \frac{N_y}{2} \right\}. \end{equation} The vector of coordinates $\mathbf x$ for the grid point corresponding to index $\mathbf k$ is \begin{equation} \mathbf x^\mathbf{k} = \left(\frac{k_x L_x}{N_x}, \frac{k_y L_y}{N_y} \right). \end{equation} Likewise, the wavevector $\mathbf q$ corresponding to index $\mathbf k$ is \begin{equation} \mathbf q^\mathbf{k} = \left(\frac{k_x }{L_x}, \frac{k_y}{L_y} \right). \end{equation} Now we define the space of trigonometric polynomials, \begin{equation} \mathcal T_N = \left\{ \sum_{\mathbf k \in \mathbb Z_N^2} c_\mathbf{k} e^{2\pi i \mathbf q^\mathbf{k} \cdot \mathbf x}: c_\mathbf{k} \in \mathbb C, \mathbf k \in \mathbb Z_N^2 \right\}. \end{equation} For a function $v \in \mathcal T_N$, the coefficients $\hat v$ of its trigonometric polynomial are determined by its discrete Fourier transform $\mathcal F_N$, \begin{equation} \hat v (\mathbf q^\mathbf{k}) = \left(\mathcal F_N v\right)(\mathbf q^\mathbf{k}) = \frac{1}{|\mathbf N|} \sum_{\mathbf j \in \mathbb Z_N^2} v (\mathbf x^\mathbf{j}) \exp \left(- 2\pi i \mathbf q^{\mathbf k} \cdot \mathbf x^\mathbf{j} \right), \; \;\; (\mathbf j, \mathbf k \in \mathbb Z_N^2). \end{equation} (The circumflex $\hat \cdot$ is used hereafter to indicate the Fourier coefficients of a real-space field or operator.) Likewise, values of $v$ at the grid points $\mathbf x^\mathbf{j}$ can be obtained by the inverse transform $\mathcal F_N^{-1}$, \begin{equation} v (\mathbf x^\mathbf{j}) = \left(\mathcal F_N^{-1} \hat v\right)(\mathbf x^\mathbf{j}) = \sum_{\mathbf k \in \mathbb Z_N^2} \hat v (\mathbf q^\mathbf{k}) \exp \left( 2\pi i \mathbf q^{\mathbf k} \cdot \mathbf x^\mathbf{j} \right), \; \;\; (\mathbf j, \mathbf k \in \mathbb Z_N^2). \end{equation} An additional property, relevant for the Fourier Galerkin scheme, is that an inner product of functions $v,w \in \mathcal T_N$ over $\Omega$ is exactly equal to the integration of their product by the trapezoidal method, \begin{equation} \label{eq:trapezoidal} \int_\Omega v(\mathbf x) w(\mathbf x) \dif\mathbf x = \frac{|\Omega|}{|\mathbf N|} \sum_{\mathbf k \in \mathbb Z_N^2} v(\mathbf x^\mathbf{k}) w(\mathbf x^\mathbf{k}), \end{equation} when the numbers of grid points in each direction, $N_x$ and $N_y$, are both odd. For this reason, we only consider odd $N_x$ and $N_y$ here. Cf.\ Refs.\ \cite{zeman_finite_2017,leute_elimination_2022,ladecky_optimal_2022} for the general case and additional details regarding this property. \subsubsection{Elasticity sub-problem} The elasticity sub-problem is solved by a Fourier Galerkin scheme with the strain field $\mathbf{\varepsilon}$ as the principal unknown.
The strain field is considered to be $\Omega$-periodic, and it is decomposed as $\mathbf{\varepsilon} = \mathbf{\bar \varepsilon} + \mathbf{\varepsilon}^*$ into a constant term $\mathbf{\bar \varepsilon} = \frac{1}{|\Omega|}\int_\Omega \mathbf{\varepsilon} \dif\mathbf x$ and a polarization term $\mathbf{\varepsilon}^*(\mathbf x)$ that is spatially varying and has zero mean, $\int_\Omega \mathbf{\varepsilon}^* \dif\mathbf x=0$. Loading is applied by setting $\mathbf{\bar \varepsilon}$, leaving $\mathbf{\varepsilon}^*$ to be determined by the Fourier Galerkin scheme. The conditions to be satisfied are mechanical equilibrium, \begin{equation} \label{eq:elasticity} \nabla \cdot \mathbf{\sigma}=0 \qquad \text{(see also Eq.\ \eqref{eq:F_strain})}, \end{equation} and compatibility of the spatially varying strain field $\mathbf{\varepsilon}^*$: $\mathbf{\varepsilon}^*=\nabla_s \mathbf u^*$ for some $\Omega$-periodic displacement vector $\mathbf u^*$. Implicit in this definition of the compatibility condition is the fact that we are using a small-strain formulation of elasticity, which is typical for phase field fracture models. The first step towards deriving the Fourier Galerkin scheme is the statement of the weak form of Eq.\ \eqref{eq:elasticity}, \begin{equation} \label{eq:elasticity_weak} \int_\Omega \delta \mathbf{\varepsilon}^*:\mathbf{\sigma} \dif\mathbf x = 0, \end{equation} where $\delta \mathbf{\varepsilon}^*$ denotes a test function from within the space of compatible tensor fields and the stress $\mathbf{\sigma}$ is expressed in terms of $\mathbf{\varepsilon}$ in Eqs.~\eqref{eq:F_strain}-\eqref{eq:stress_voldev}. When $\delta \mathbf{\varepsilon}^*$ and $\mathbf{\sigma}$ are both members of $\mathcal T^{2\times 2}_N$, the space of rank-2 tensor fields with components in $\mathcal T_N$, Eq.\ \eqref{eq:trapezoidal} implies that the weak form in Eq.\ \eqref{eq:elasticity_weak} is equivalent to the following discrete integration: \begin{equation} \label{eq:elasticity_weak_discrete} \frac{|\Omega|}{|\mathbf N|} \sum_{\mathbf k \in \mathbb Z_N^2} \delta \mathbf{\varepsilon}^*(\mathbf x^\mathbf{k}) :\mathbf{\sigma}(\mathbf x^\mathbf{k}) = 0. \end{equation} Since it is not known a priori if an arbitrary test function $\zeta \in \mathcal T^{2\times 2}_N$ is compatible, we construct our compatible test function $\delta \mathbf{\varepsilon}^*$ as the convolution of an arbitrary test function $\zeta$ with an operator $\mathbf G$ that projects it into the subspace of $\mathcal T^{2\times 2}_N$ consisting of compatible strain fields, \begin{equation} \delta \mathbf{\varepsilon}^*(\mathbf x) = \int_\Omega \mathbf G(\mathbf x-\mathbf x') : \zeta (\mathbf x')\dif\mathbf x'. \end{equation} This convolution is symmetric and sparse in Fourier space, as the Fourier-space operator $\mathbf{\hat G}$ is block diagonal (see e.g., Refs.\ \cite{milton_variational_1988, zeman_finite_2017, leute_elimination_2022, Ladecky2022-kl} for its precise form). Now, taking the discrete Fourier transform of Eq.\ \eqref{eq:elasticity_weak_discrete} and substituting $\widehat{\delta \mathbf{\varepsilon}^*} = \hat \zeta : \mathbf{\hat G}$, we have the following discretized weak form, \begin{equation} \frac{|\Omega|}{|\mathbf N|^2} \sum_{\mathbf k \in \mathbb Z_N^2} \hat \zeta^\mathbf{k} : \mathbf{ \hat G}^\mathbf{k}: \mathbf{\hat \sigma}^\mathbf{k} = 0, \end{equation} which results in the nodal equilibrium equations \begin{equation} \label{eq:elasticity_nodal} \mathbf{\hat G}^\mathbf{k}:\mathbf{\hat \sigma}^\mathbf{k} = 0.
\end{equation} The system of nodal equations for $\mathbf{\varepsilon}^*$ may be non-linear, and thus we apply Newton's method to solve for $\mathbf{\varepsilon}^{*}$, \begin{equation} \label{eq:mech_newton} \mathbf{\varepsilon}^*_{m+1} = \mathbf{\varepsilon}^*_m + \theta_m, \end{equation} where the Newton update $\theta_m$ at step $m$ is obtained by using conjugate gradients (CG) to solve \begin{equation} \label{eq:elasticity_newton_linear} \mathbf{\hat G} :\widehat{\mathbf C_{m}:\theta_m}= -\mathbf{\hat G}:\mathbf{\hat \sigma}_m, \end{equation} where $\mathbf C_m$ is the stiffness tensor $\mathbf C = \partial \mathbf{\sigma}/\partial \mathbf{\varepsilon}$ at step $m$. The stress $\mathbf{\sigma}$ and the product $\mathbf C_m:\theta_m$ are computed at each real-space grid point, taking into account any spatial differences in material properties. Then, their FFTs are taken in order to apply the projection operator $\mathbf{\hat G}$ in Fourier space. This numerical method is highly efficient due to the sparsity of the linear operations in real space (calculation of the stresses) and Fourier space (application of the projection operator) and the efficiency of the only dense operation, the FFT \cite{zeman_finite_2017}. It also benefits from almost optimal conditioning~\cite{gergelits_laplacian_2019,ladecky_optimal_2022}. Overall, the mechanics sub-problem differs only between the mechanics models; it is unaffected by the choices of evolution method or fracture energy formulation $F_f$ considered here. For the three mechanics models we consider (isotropic, strain-spectral split, and volumetric-deviatoric split), forms of $\mathbf C$ are available in the literature \cite{li_phase_2021}. In all three cases, the nodal equations, Eq.\ \eqref{eq:elasticity_nodal}, are solved to either a relative or absolute tolerance in the $L_\infty$ norm, $||f||_\infty = \max |f|$. Since $\mathbf{\sigma}$ is linear in $\mathbf{\varepsilon}$ in the isotropic model, the Newton iteration in Eq.\ \eqref{eq:mech_newton} is terminated after a single step in which the linear problem in Eq.\ \eqref{eq:elasticity_newton_linear} is solved via CG to the final desired tolerance. For the strain-spectral split and volumetric-deviatoric split, a lower relative tolerance is used for the CG solves for the Newton updates. Our simulations employ a plane-stress formulation in which the out-of-plane strains are not represented explicitly, meaning that we work in a reduced $2\times 2$ representation of the strain. \subsubsection{Phase field sub-problem} The phase field $\phi$ is discretized in space by a Fourier collocation scheme employing the same space of basis functions $\mathcal T_N$ that the Galerkin scheme uses for the components of $\mathbf{\varepsilon}^*$. In this scheme, we solve the strong forms of the equations/inequalities \eqref{eq:stationarity_phi}-\eqref{eq:slackness} at each grid point $\mathbf x^\mathbf{k}$, $\mathbf k \in \mathbb Z^2_N$. The only term in these expressions that requires information from other grid points is the Laplacian $\nabla^2\phi$ in $F_\phi$. To evaluate the Laplacian in this discretization, we define the collocation Laplacian $\nabla^2_N$, \begin{equation} \nabla^2_N\, g = -\mathcal F_N^{-1}\left[ \| 2\pi\mathbf q \|^2\, \mathcal F_N g \right], \;\; g \in \mathcal T_N. \end{equation}
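For illustration, the collocation Laplacian can be applied with standard FFT routines. The following NumPy sketch (grid sizes and domain lengths are placeholder values) mirrors the definition above:

\begin{verbatim}
import numpy as np

def collocation_laplacian(g, Lx, Ly):
    # Apply nabla^2_N to a 2-D field g on an Nx-by-Ny grid;
    # np.fft.fftfreq(N, d=L/N) returns exactly q^k = k/L.
    Nx, Ny = g.shape
    qx = np.fft.fftfreq(Nx, d=Lx / Nx)
    qy = np.fft.fftfreq(Ny, d=Ly / Ny)
    q2 = (2.0 * np.pi)**2 * (qx[:, None]**2 + qy[None, :]**2)
    return np.real(np.fft.ifft2(-q2 * np.fft.fft2(g)))

# Example with odd grid sizes: the Laplacian of the plane wave
# sin(2*pi*x/Lx) is -(2*pi/Lx)^2 times the wave itself.
Nx, Ny, Lx, Ly = 63, 63, 1.0, 1.0
x = (np.arange(Nx) - Nx // 2) * Lx / Nx
g = np.sin(2.0 * np.pi * x)[:, None] * np.ones((1, Ny))
lap = collocation_laplacian(g, Lx, Ly)
\end{verbatim}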
In addition to the spatial discretization, a discretization in time is required for the time-dependent evolution methods for the phase field. We implement the time-dependent evolution via a backward Euler scheme, \begin{equation} \label{eq:phase_field_euler} \frac{\eta}{\Delta t}\left( \phi_{n} - \phi_{n-1} \right) \ge -F_\phi(\phi_n), \end{equation} where $\Delta t$ is the time increment and $n$ is the index of the time increment. Time-independent formulations are recovered by taking $\eta/\Delta t=0$. Inequality \eqref{eq:phase_field_euler} is linear in $\phi_n$ for the choices of $f(\phi)$ and $h(\phi)$ considered here. To formulate the linear unconstrained problem in a general way, we consider a Newton-type update $v_r = \phi_r - \phi_{r-1}$, \begin{equation} \label{eq:phase_field_update} J v_r \ge -F_\phi(\phi_{r-1}), \end{equation} where $J$ is the Jacobian matrix of Eq.\ \eqref{eq:phase_field_euler}, \begin{equation} J = \frac{\eta}{\Delta t} + h''(\phi)\psi^+_0 + \frac{G_c}{c_w \ell} \left[ f''(\phi) - \ell^2 \nabla^2_N \right], \end{equation} in which $h''(\phi)=2$ and $f''(\phi)$ is equal to zero for AT1 and to unity for AT2. The irreversibility constraint on the update $v_r$ is formulated as $v_r \ge v_\mathrm{con.}$, where $v_\mathrm{con.} = \phi_\mathrm{con.}(\phi_{r-1}) - \phi_{r-1}$, with $\phi_\mathrm{con.}$ defined in Eq.~\eqref{eq:constraint_field}. We solve Eq.\ \eqref{eq:phase_field_update} subject to the irreversibility constraint and slackness condition using a bound-constrained conjugate gradients (BCCG) algorithm, specifically the enhanced BCCG(K) algorithm introduced by Vollebregt \cite{vollebregt_bound-constrained_2014}. Convergence of this algorithm is not in general guaranteed; Vollebregt conjectured that it converges for non-negative matrices, but this is not the case for $J$ due to the Laplacian operator $\nabla^2_N$. Nevertheless, we find that it converges to the desired numerical tolerance in all cases explored here. In Algorithm \ref{alg:BCCG} below, we provide a concise statement of the BCCG algorithm as implemented in our code. \begin{algorithm}[H] \SetAlgoLined Initialize solution vector $v^0$ (e.g., $v^0=b$) and set $m=1$\\ Set $v^0 := v_\mathrm{con.}$ where $v^0 < v_\mathrm{con.}$\\ $\displaystyle r^0 := Jv^0-b$\\ Set $r^0 := 0$ where both $v^0 = v_\mathrm{con.}$ and $r^0 > 0$\\ $\displaystyle p^0 := - r^0$\\ \While{$||r^{m-1}||_2 > \mathrm{Tol}_\mathrm{PF}$}{ $\displaystyle \alpha:= -\frac{r^{m-1}\cdot p^{m-1}}{p^{m-1}\cdot J p^{m-1}}$\\ $\displaystyle v^{m} := v^{m-1} + \alpha p^{m-1}$\\ Set $v^m := v_\mathrm{con.}$ where $v^m < v_\mathrm{con.}$\\ $\displaystyle r^m := Jv^m-b$\\ Set $r^m := 0$ where both $v^m = v_\mathrm{con.}$ and $r^m > 0$\\ $\displaystyle \beta:=\frac{r^m\cdot(r^m-r^{m-1})}{\alpha p^{m-1}\cdot J p^{m-1}}$\\ $\displaystyle p^m := -r^m + \beta p^{m-1}$\\ Set $p^m := 0$ where both $v^m = v_\mathrm{con.}$ and $r^m > 0$\\ $m:=m+1$\\ } \caption{Bound-constrained CG algorithm \label{alg:BCCG}} \end{algorithm} Notation in Algorithm \ref{alg:BCCG} has been simplified from Eq.\ \eqref{eq:phase_field_update}: we have dropped the time step/outer solver index $n$ from $v$ and we denote the RHS by $b=-F_\phi(\phi_{n-1})$. The definition of the active set (points where both $v^m = v_\mathrm{con.}$ and $r^m > 0$) makes use of the fact that the complementary slackness condition can be written in terms of $v$ and the residual $r=Jv-b$ as $(v-v_\mathrm{con.})r = 0$. The notation in Algorithm \ref{alg:BCCG} interprets $v$, $r$, $p$, $Jv$, and $Jp$ as vectors with the same length (i.e., $N_xN_y$), such that $r\cdot p$ is the conventional inner product and $||r||_2$ is the $\ell_2$ norm.
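For reference, Algorithm \ref{alg:BCCG} translates almost line-by-line into NumPy. The sketch below is illustrative only: $J$ is passed as a callable computing the matrix-vector product, the names are ours, and no claim is made that this minimal version reproduces all safeguards of our implementation or of Ref.\ \cite{vollebregt_bound-constrained_2014}.

\begin{verbatim}
import numpy as np

def bccg(J, b, v_con, tol=1.0e-8, max_iter=10000):
    # Solve J v = b subject to v >= v_con with complementary slackness,
    # following Algorithm 1; J is a callable computing the product J @ v.
    v = np.maximum(b.copy(), v_con)          # v0 = b, projected onto bounds
    r = J(v) - b
    active = (v <= v_con) & (r > 0)          # bound active, pushing outward
    r = np.where(active, 0.0, r)
    p = -r
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol:
            break
        Jp = J(p)
        alpha = -(r @ p) / (p @ Jp)
        v = np.maximum(v + alpha * p, v_con) # CG step, projected onto bounds
        r_new = J(v) - b
        active = (v <= v_con) & (r_new > 0)
        r_new = np.where(active, 0.0, r_new)
        beta = (r_new @ (r_new - r)) / (alpha * (p @ Jp))
        p = np.where(active, 0.0, -r_new + beta * p)
        r = r_new
    return v
\end{verbatim}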
The matrix $J$ is never represented explicitly, as only the matrix-vector products $Jv$ and $Jp$ are used. These matrix-vector products are the most computationally intensive steps in Algorithm \ref{alg:BCCG} because the collocation Laplacian requires fast Fourier transforms that take $\mathcal O(|\mathbf N|\log |\mathbf N|)$ time. \subsection{Evolution Algorithms} In this sub-section, we describe our implementations of the evolution methods from Section \ref{sec:bg_evolution}. The previous sub-section described separate sub-problems for determining the strain field given an applied average strain $\mathbf{\bar \varepsilon}$ and $\phi$, and for determining $\phi$ given $\mathbf{\varepsilon}$, the constraining field $\phi_\mathrm{con.}$, and the time step $\Delta t$. Each sub-problem is converged to a relative or absolute tolerance based on the $\ell_2$ norm of the residual. The evolution algorithms integrate these sub-problem solvers with methods that control or adapt $\mathbf{\bar \varepsilon}$ and $\Delta t$. \subsubsection{Alternating minimization} For our minimization approach, we employ the alternating minimization algorithm (Algorithm \ref{alg:alternating-minimization}), in which the elasticity and phase field problems are solved separately one after the other. The system is solved to convergence for each strain increment, and the converged phase field from the previous strain increment is used for the irreversibility constraint of the current strain increment. The algorithm consists of an outer loop (index $s$) where $\mathbf{\bar \varepsilon}$ is updated by a tensor-valued increment $\Delta \mathbf{\bar \varepsilon}$ and an inner loop (index $n$) for the iterative minimization itself. The inner/minimization loop is considered converged when the difference in $\phi$ between consecutive inner iterations is less than a tolerance $\mathrm{Tol}_\mathrm{AM}$, $||\phi_n-\phi_{n-1}||_1 < \mathrm{Tol}_\mathrm{AM}$, where $||\cdot||_1$ is the $L^1$ norm, $||f||_1=\frac{|\Omega|}{|\mathbf N|} \sum_{\mathbf k \in \mathbb Z_N^2} |f(\mathbf x^\mathbf{k})|$. For the outer loop, the maximum number of strain steps $s_\mathrm{max.}$ is usually set such that a stiffness-based termination criterion is triggered first. This stiffness-based criterion, also used in the other evolution methods, is triggered when a measure of stiffness $\bar C$, calculated as the ratio between the largest components of the average stress $\mathbf{\bar \sigma}$ and the average strain $\mathbf{\bar \varepsilon}$, falls below a reference value $\bar C_\mathrm{broken}$ that is intended to represent the crack passing through most or all of the domain (e.g., $\bar C_\mathrm{broken}\approx 0$).
\begin{algorithm}[H] \DontPrintSemicolon Set $\mathbf{\varepsilon}_{0,0} := \mathbf 0$ everywhere\; Solve phase field sub-problem for $\phi_{0,0}$ with $\mathbf{\varepsilon}=\mathbf{\varepsilon}_{0,0}$ and $\phi_\mathrm{con.}(\phi_\mathrm{init.})$\; \For{$s \in [1,2,...,s_\mathrm{max.}]$}{ $\mathbf{\bar \varepsilon}_s := \mathbf{\bar \varepsilon}_{s-1} + \Delta \mathbf{\bar \varepsilon}$\; Set $n := 0$ and $\Delta \phi := \infty$\; \While{$\Delta \phi > \mathrm{Tol}_\mathrm{AM}$}{ Solve elasticity sub-problem for $\mathbf{\varepsilon}_{s,n}$ with $\mathbf{\bar \varepsilon}=\mathbf{\bar \varepsilon}_s$ and $\phi=\phi_{s,n}$\; Solve phase field sub-problem for $\phi_{s,n+1}$ with $\mathbf{\varepsilon} = \mathbf{\varepsilon}_{s,n}$, $\phi_\mathrm{con.}(\phi_{s,0})$, and $\eta/\Delta t = 0$\; $\Delta \phi := ||\phi_{s,n+1}-\phi_{s,n}||_1$\; $n:=n+1$ } Set $\mathbf{\varepsilon}_{s+1,0} := \mathbf{\varepsilon}_{s,n-1}$ and $\phi_{s+1,0} := \phi_{s,n}$ \; Calculate $\bar C$ from $\mathbf{\varepsilon}_{s+1,0}$ and $\phi_{s+1,0}$ \; \If{$\bar C < \bar C_\mathrm{broken}$}{\rm{Break}} } \caption{Alternating minimization \label{alg:alternating-minimization}} \end{algorithm} \subsubsection{Time-dependent evolution} The two main differences between the time-dependent evolution (Algorithm \ref{alg:time-discretized-nocontrol} below) and alternating minimization are that the factor $\eta/\Delta t$ is non-zero and that $\phi_\mathrm{con.}$ is updated after each pair of sub-problem solves rather than after the convergence of an outer loop. Our implementation of this method limits the amount of crack growth per step by adapting $\Delta t$ via the inner while-loop in Alg.\ \ref{alg:time-discretized-nocontrol}. Phase field sub-problem solves are only accepted once the time step $\Delta t$ has been lowered such that $\Delta \phi = ||\phi_{n+1}-\phi_{n}||_1$ is less than an upper bound $(\Delta \phi)_\mathrm{max.}$ or $\Delta t$ has reached its own lower bound $(\Delta t)_\mathrm{min.}$. The time step is allowed to increase again once $\Delta \phi < (\Delta \phi)_\mathrm{max.}/2$, up to a maximum of $(\Delta t)_\mathrm{max.}$, and we do not increment $\mathbf{\bar \varepsilon}$ again until we have both a large time step ($\Delta t_n \ge (\Delta t)_\mathrm{max.}$) and a small change in $\phi$ ($\Delta \phi < (\Delta \phi)_\mathrm{min.}$). This method is able to accommodate large changes in $\Delta t$ because nothing in our simulations depends on the value of $t$ itself. By incrementing $\mathbf{\bar \varepsilon}$ independently of the value of $t$, this method avoids a type of strain-rate-dependent overstress commonly observed in the literature \cite{miehe_thermodynamically_2010,bilgen_crack-driving_2019}, but it can introduce a `stepping' phenomenon into stress-strain curves when $\phi$ evolves significantly before fracture. These choices in the design of our time-discretized algorithm are intended to efficiently approach time-continuous fracture and thereby provide a clearer contrast with the near-equilibrium method, in which evolution is also limited by adaptive changes to $\mathbf{\bar \varepsilon}$.
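The accept/reject logic for the time step can be sketched as follows; the solver callable and norm are assumed as in the previous sketch, and the cap at $(\Delta t)_\mathrm{max.}$ follows the description above (compare the inner loop of Algorithm \ref{alg:time-discretized-nocontrol} below).
\begin{verbatim}
def phase_field_step(solve_phi, phi_n, eps_n, dt, norm1,
                     dphi_max, dt_min, dt_max):
    # Halve dt until the phase-field increment is acceptable (or dt
    # hits its floor); allow dt to grow again when evolution is slow.
    while True:
        phi_next = solve_phi(eps_n, phi_n, dt)   # hypothetical signature
        dphi = norm1(phi_next - phi_n)
        if dphi < dphi_max or dt <= dt_min:
            if dphi < dphi_max / 2:
                dt = min(2 * dt, dt_max)         # cap per the text
            return phi_next, dt, dphi
        dt = dt / 2
\end{verbatim}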
\begin{algorithm}[H] \DontPrintSemicolon Set $n:=0$, $\mathbf{\bar \varepsilon}_{0} := \mathbf 0$, $\Delta t_0 := (\Delta t)_\mathrm{max.}$, and $\bar C \gg \bar C_\mathrm{broken}$\; \While{$\bar C > \bar C_\mathrm{broken}$}{ Solve elasticity sub-problem for $\mathbf{\varepsilon}_{n}$ with $\mathbf{\bar \varepsilon}=\mathbf{\bar \varepsilon}_n$ and $\phi=\phi_{n}$\; \While{\rm{True}}{ Solve phase field sub-problem for $\phi_{n+1}$ with $\mathbf{\varepsilon} = \mathbf{\varepsilon}_{n}$, $\phi_\mathrm{con.}(\phi_{n})$, and $\Delta t = \Delta t_n$\; $\Delta \phi := ||\phi_{n+1}-\phi_{n}||_1$\; \eIf{$\Delta \phi < (\Delta \phi)_\mathrm{max.}$ \rm{or} $\Delta t_n \le (\Delta t)_\mathrm{min.}$} { \If{$\Delta \phi < (\Delta \phi)_\mathrm{max.}/2$} {$\Delta t_n := 2\Delta t_n$\;} \rm{Break}\;} {$\Delta t_n := \Delta t_n/2$} } \If{$\Delta \phi < (\Delta \phi)_\mathrm{min.}$ \rm{and} $\Delta t_n \ge (\Delta t)_\mathrm{max.}$}{ {$\mathbf{\bar \varepsilon}_{n+1} := \mathbf{\bar \varepsilon}_{n} + \Delta \mathbf{\bar \varepsilon}$\;}} Set $\Delta t_{n+1} :=\Delta t_n$ and calculate $\bar C$ from $\mathbf{\varepsilon}_n$ and $\phi_{n+1}$\; $n:=n+1$\; } \caption{Time-discretized algorithm \label{alg:time-discretized-nocontrol}} \end{algorithm} \subsubsection{Near-equilibrium algorithm} Instead of a path-following algorithm in which the entire problem is directly coupled to a path-following constraint, we employ a heuristic algorithm that rescales $\mathbf{\varepsilon}$ via an explicit formula intended to keep the driving force for evolution of $\phi$ in Eq.\ \eqref{eq:time_evolution}, $-F_\phi$, at or below an upper bound $(-F_\phi)_\mathrm{max.}$. Algorithm \ref{alg:near-equilibrium} below describes the overall control flow for our near-equilibrium evolution method, and the rescaling procedure is in lines 4-10. The rescaling is based on the values of $-F_\phi$ and the crack driving force term $-h'(\phi)\psi^+_0(\mathbf{\varepsilon})$ at the grid point $\mathbf x^*$ where $-F_\phi$ is at a maximum. Taking advantage of the fact that $\psi^+_0(\mathbf{\varepsilon})$ is degree-two homogeneous in $\mathbf{\varepsilon}$ (i.e., that $\psi^+_0(\gamma \mathbf{\varepsilon}) = \gamma^2 \psi^+_0(\mathbf{\varepsilon})$) in this small-strain context, line 8 solves for the scaling factor $\gamma$ that sets $-F_\phi = (-F_\phi)_\mathrm{max.}$ at $\mathbf x^*$ if the entire strain field undergoes the rescaling $\mathbf{\varepsilon}_n := \gamma \mathbf{\varepsilon}_n$ in line 10. Homogeneity also explains why this rescaling produces valid solutions to Eq.\ \eqref{eq:stationarity_u}: despite being highly non-linear, the expressions for the stresses in Eqs.\ \eqref{eq:stress_spectral} and \eqref{eq:stress_voldev} are still degree-one homogeneous in $\mathbf{\varepsilon}$. In line 9, the maximum increase in a component of $\mathbf{\bar \varepsilon}$ via rescaling is limited to be less than or equal to the largest component of the strain increment $\Delta \mathbf{\bar \varepsilon}$. Lines 4-5 allow the rescaling to be triggered only after the crack driving force term $-h'(\phi)\psi_0^+(\mathbf{\varepsilon})$ reaches a threshold value.
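The rescaling step itself reduces to a few lines. The sketch below (Python; array names and precomputed fields are our own assumptions) uses the homogeneity argument above, writing $-F_\phi$ as the sum of the driving term $D=-h'(\phi)\psi^+_0$ and a strain-independent remainder $R$, so that $\gamma^2 D + R = (-F_\phi)_\mathrm{max.}$ at $\mathbf x^*$:
\begin{verbatim}
import numpy as np

def rescale_strain(F_neg, drive, F_max, eps, eps_bar, d_eps_bar):
    # F_neg: field of -F_phi; drive: field of -h'(phi)*psi0_plus(eps)
    # (both precomputed arrays; hypothetical names). Assumes the
    # driving term at x* dominates the excess over F_max.
    k = np.argmax(F_neg)                  # grid point x*
    R = F_neg.flat[k] - drive.flat[k]     # strain-independent remainder
    gamma = np.sqrt((F_max - R) / drive.flat[k])
    gamma = min(gamma,                    # limit growth of eps_bar
                1.0 + np.max(np.abs(d_eps_bar)) / np.max(np.abs(eps_bar)))
    return gamma * eps, gamma * eps_bar
\end{verbatim}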
\begin{algorithm}[H] \DontPrintSemicolon Set $n:=0$, $\mathbf{\bar \varepsilon}_{0} := \mathbf 0$, $\phi_0 := \phi_\mathrm{init.}$, Flag := False, and $\bar C \gg \bar C_\mathrm{broken}$\; \While{$\bar C > \bar C_\mathrm{broken}$}{ Solve elasticity sub-problem for $\mathbf{\varepsilon}_{n}$ with $\mathbf{\bar \varepsilon}=\mathbf{\bar \varepsilon}_n$ and $\phi=\phi_{n}$\; \If{$\max \left[-F_\phi (\phi_n, \mathbf{\varepsilon}_n)\right] > (-F_\phi)_\mathrm{max.}$ \rm{and} $\max \left[-h'(\phi_n)\psi_0^+(\mathbf{\varepsilon}_n)\right] > (-h'\psi_0^+)_\mathrm{thresh.}$} {Flag := True} \If{$\max \left[-F_\phi (\phi_n, \mathbf{\varepsilon}_n)\right] > (-F_\phi)_\mathrm{max.}$ \rm{and Flag is True}} { Find $\mathbf x^* = \mathrm{arg} \max_{\mathbf{x}} \left[-F_\phi(\mathbf x)\right]$\; $\gamma := \sqrt{\frac{(-F_\phi)_\mathrm{max.} - \left[ -F_\phi(\phi_n, \mathbf{\varepsilon}_n) +h'(\phi_n)\psi^+_0(\mathbf{\varepsilon}_n)\right]|^{\mathbf x^*}}{-h'(\phi_n)\psi^+_0(\mathbf{\varepsilon}_n) |^{\mathbf x^*}} }$\; $\gamma := \min\left(\gamma, 1+ \frac{\max(|\Delta \mathbf{\bar \varepsilon}|)}{\max (|\mathbf{\bar \varepsilon}_n|)} \right)$\; Set $\mathbf{\varepsilon}_n := \gamma \mathbf{\varepsilon}_n$ and $\mathbf{\bar \varepsilon}_n := \gamma \mathbf{\bar \varepsilon}_n$\; } Solve phase field sub-problem for $\phi_{n+1}$ with $\mathbf{\varepsilon} = \mathbf{\varepsilon}_{n}$, $\phi_\mathrm{con.}(\phi_{n})$, and $\Delta t = (\Delta t)_\mathrm{max.}$\; Calculate $\bar C$ from $\mathbf \varepsilon_n$ and $\phi_{n+1}$\; \If{$||\phi_{n+1}-\phi_{n}||_1 < (\Delta \phi)_\mathrm{min.}$} {$\mathbf{\bar \varepsilon}_{n+1} := \mathbf{\bar \varepsilon}_{n} + \Delta \mathbf{\bar \varepsilon}$\;} $n:=n+1$ } \caption{Near-equilibrium algorithm \label{alg:near-equilibrium}} \end{algorithm} If $\mathbf x^*$ remains the point with the largest value of $-F_\phi$ after rescaling, then Algorithm \ref{alg:near-equilibrium} enforces an upper bound on $-F_\phi$ within the entire system, limiting the extent to which it can be shifted out of equilibrium. It is not difficult to construct theoretical counterexamples where this bound would be violated, but such behavior was rarely observed in our simulations. We did observe snap-back events in the stress-strain curve that appeared to be spurious (e.g., during otherwise stable crack growth in a homogeneous domain), but these events temporarily inhibit the evolution of $\phi$ and thus should not affect the crack path. Another concern is getting stuck in a cycle of loading and unloading with exclusively reversible evolution (e.g., with crack-set irreversibility), but this was not encountered in the simulations shown here. The relationship between $(-F_\phi)_\mathrm{max.}$ and global measures of evolution such as $||\phi_{n+1}-\phi_{n}||_1$ is variable and depends on both the choice of model parameters in $F_\phi$ and the grid resolution. Our approach is in some respects related to the staggered path-following method introduced by Singh et al.\ \cite{singh_fracture-controlled_2016}; we would characterize our approach as simpler to implement (since the strain is rescaled outside of the sub-problem solvers), but subject to the drawbacks noted above. A direct comparison of path-following approaches is outside the scope of this work. \subsection{Non-Dimensionalization and Simulation Parameters} \label{sec:parameters} Since this work is focused on comparing methods rather than examining a particular material system, we consider all dimensional quantities in terms of model parameters rather than physical units.
We scale length by the regularization parameter $\ell$. Per Eqs.\ \eqref{eq:AT1_analytical} and \eqref{eq:AT2_analytical}, $\ell$ is the inverse of the magnitude of the slope of $\phi(x)$ at the crack center, and thus $2\ell$ can be considered an approximate width for the highly damaged `core' of the crack. There are multiple energy densities that are relevant for scaling, but the most convenient are the fracture energy density $G_c/\ell$ and a reference Young's modulus $E_0=10^4 G_c/\ell$. The high ratio $E_0/(G_c/\ell)$ is intended to ensure that fracture occurs at small strains. In our 2-D systems, integrated energies such as $F_f$ are scaled by $G_c \ell$. The characteristic time scale for the time-dependent models is $\tilde t = \eta/(G_c/\ell)$. We scale stresses and strains by the maximum values $\sigma_M$ and $\varepsilon_M$ that they could attain in a homogeneous material with Young's modulus $E_0$ \cite{pham_gradient_2011}. In general, these quantities depend on the phase field fracture model (both AT1 vs.\ AT2 and the choice of mechanics model) and the applied loading. The most relevant case for this work is the AT1 model subject to a strain in which $\varepsilon_{22}$ is positive and the only non-zero component. In this case, the mechanics models in Eqs.\ \eqref{eq:isotropic_en}-\eqref{eq:vol_dev_en} behave identically, and the damage onset condition reads \begin{equation} \label{eq:tension_M} -\psi^+_0 (\varepsilon_M) h'(0) = \frac{3G_c}{8\ell}f'(0), \end{equation} which, using $\psi^+_0(\varepsilon_M)=\frac{1}{2}(\lambda+2\mu)\varepsilon_M^2$, $h'(0)=-2$, and $f'(0)=1$, becomes \[ (\lambda + 2\mu) \varepsilon_M^2 = \frac{3G_c}{8\ell}, \] so that \begin{equation} \label{eq:epsilon_M} \varepsilon_M = \sqrt{\frac{3G_c}{8\ell(\lambda +2\mu)}} \end{equation} \begin{equation} \label{eq:sigma_M} \sigma_M = \sqrt{\frac{3G_c(\lambda +2\mu)}{8\ell}} \end{equation} For $\nu=0.2$, we have $(\lambda + 2\mu) = \frac{10}{9}E_0$, which results in $\varepsilon_M = \sqrt{27/(8\times 10^5)}\approx0.005809$ and $\sigma_M = \sqrt{10^5/24}\,G_c/\ell\approx 64.55\,G_c/\ell$. We simulate fracture in 2-D domains of size $L_x=L_y=100\ell$, $L_x=L_y=200\ell$, and $L_x=L_y=400\ell$, with grid sizes that are respectively $N_x=N_y=511$, $N_x=N_y=1023$, and $N_x=N_y=2047$. These grids result in $\ell/\Delta x \approx 5$, which is comparable to best-practice resolutions for finite element discretizations of phase field fracture \cite{bleyer_dynamic_2017}. For the smaller simulations ($L_x \le 200\ell$), we use relative and absolute tolerances of $10^{-6}$ for both sub-problem solvers and $(\Delta \phi)_\mathrm{min.} = \mathrm{Tol}_\mathrm{AM} = 10^{-3}$ for all three evolution methods. The time-dependent method additionally has $(\Delta t)_\mathrm{max.}=2^{16}\tilde t\approx 6.55\times 10^4\tilde t$, $(\Delta t)_\mathrm{min.}=2^{-16}\tilde t\approx 1.53\times 10^{-5}\tilde t$, and $(\Delta \phi)_\mathrm{max.} = 1.5$, while the near-equilibrium method has $(\Delta t)_\mathrm{max.}=2^{16}\tilde t$, $(-F_\phi)_\mathrm{max.}=0.7G_c/\ell$, and $(-h'\psi_0^+)_\mathrm{thresh.}=1G_c/\ell$. Relaxed tolerances and a larger limiting driving force $(-F_\phi)_\mathrm{max.}$ were used for simulations with $L_x=L_y=400\ell$. Since we only show crack paths for one set of such simulations, we give these modified conditions alongside the description of the simulations in Section \ref{sec:evolution_random}. For simulations of tensile fracture, the termination criterion $\bar C_\mathrm{broken}$ has been set to $0.01E_0$.
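As a quick numerical check of these scalings (a standalone Python snippet; setting $G_c=\ell=1$ is our arbitrary choice of units here):
\begin{verbatim}
import numpy as np

Gc, ell, E0, nu = 1.0, 1.0, 1.0e4, 0.2
lam = E0 * nu / ((1 + nu) * (1 - 2 * nu))   # Lame constants
mu = E0 / (2 * (1 + nu))
M = lam + 2 * mu                            # = (10/9) E0 for nu = 0.2
eps_M = np.sqrt(3 * Gc / (8 * ell * M))     # -> 0.005809...
sigma_M = np.sqrt(3 * Gc * M / (8 * ell))   # -> 64.55... (in Gc/ell)
print(eps_M, sigma_M)
\end{verbatim}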
\subsection{Microstructure Generation} \label{sec:structure_gen} In this work, we compare the crack paths generated by phase field fracture models in three different types of periodic structure. The first type of structure consists of domains with uniform material properties into which a crack or defect is incorporated via the initial condition of the phase field. We consider via this method a periodic version of the standard single-edge notched tension and shear tests \cite{ambati_review_2015,chen_fft_2019} as well as tensile fracture initiated at a small void. The second and third types of structure employ spatially varying Young's moduli of the form $E(\mathbf x) = E_0 \xi(\mathbf x)$, where $\xi(\mathbf x)$ is constructed from a Gaussian random field to have a mean of approximately unity. Our second type of structure employs a random field for $\xi$ directly, while the third type thresholds $\xi$ into a two-phase structure, a construction sometimes called a ``slit island'' analysis~\cite{mandelbrot_fractal_1984}. Such two-phase structures have been considered as surrogates for random two-phase systems in materials science, and their geometric characteristics have been extensively studied \cite{Teubner1991random,soyarslan20183d}. To generate our random structures, we initialize $\xi(\mathbf x)$ at each grid point with values sampled from a Gaussian distribution with zero mean and unit variance. We then apply a low-pass filter to eliminate Fourier modes with wavelengths smaller than a cutoff wavelength $L_\mathrm{cut}$. This step determines the distribution of size scales (e.g., interfacial curvatures) present in the microstructure \cite{Teubner1991random}. To apply the low-pass filter, we take the discrete Fourier transform of $\xi$, \begin{equation} \hat \xi (\mathbf q) = \mathcal F_N \left[ \xi (\mathbf x)\right], \end{equation} and set to zero the Fourier components of $\xi$ that have $|\mathbf q|>2\pi/L_\mathrm{cut}$, \begin{equation} \hat \xi(\mathbf q) := 0,\;\; \forall \mathbf q: |\mathbf q| > \frac{2\pi}{L_\mathrm{cut}}. \end{equation} The inverse Fourier transform is applied to the filtered $\hat \xi$, resulting in a smoothly varying Gaussian random field with zero mean. The remaining steps differ between the random field structure and the two-phase structure. For the random field structure, we multiply the filtered field by a scalar to achieve a specific target standard deviation $\mathrm{STD}_t$ and add one to achieve our target mean, \begin{equation} \xi_r := \xi \frac{\mathrm{STD}_t}{\mathrm{STD}(\xi)} + 1, \end{equation} where $\mathrm{STD}(\xi)$ denotes the standard deviation of $\xi$ before rescaling. Since this procedure does not guarantee that $\xi_r$ is greater than zero, we apply an algebraic sigmoid function to values of $\xi_r$ less than one to smoothly enforce $\xi_r \ge \xi_\mathrm{min.}$, \begin{equation} \label{eq:sigmoid} \xi_r := 1 - \frac{|\xi_r-1|\, |\xi_\mathrm{min.}-1|}{\left(|\xi_r-1|^{10} + |\xi_\mathrm{min.}-1|^{10}\right)^{1/10}}, \;\; \forall \xi_r < 1, \end{equation} where $\xi_\mathrm{min.}$ is a minimum value for $\xi_r$, taken to be 0.01 here. This procedure results in a smooth, positive field $\xi_r$ that is primarily characterized by the spectral cutoff wavelength $L_\mathrm{cut}$ and the target standard deviation $\mathrm{STD}_t$. The field is no longer Gaussian-random due to Eq.\ \eqref{eq:sigmoid}, but that fact is of no consequence to the simulations.
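The full generation procedure for the smooth random structure fits in a short NumPy sketch; the function and variable names are ours, and the sigmoid floor is the form of Eq.\ \eqref{eq:sigmoid}:
\begin{verbatim}
import numpy as np

def smooth_random_field(N, L, L_cut, std_t, xi_min=0.01, seed=None):
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal((N, N))              # white Gaussian noise
    q = 2 * np.pi * np.fft.fftfreq(N, d=L / N)    # angular wavenumbers
    qmag = np.sqrt(q[:, None]**2 + q[None, :]**2)
    xi_hat = np.fft.fft2(xi)
    xi_hat[qmag > 2 * np.pi / L_cut] = 0.0        # low-pass filter
    xi = np.fft.ifft2(xi_hat).real
    xi = xi * std_t / xi.std() + 1.0              # target std and mean
    low = xi < 1.0                                # sigmoid floor at xi_min
    u, m = np.abs(xi[low] - 1.0), abs(xi_min - 1.0)
    xi[low] = 1.0 - u * m / (u**10 + m**10)**0.1
    return xi
\end{verbatim}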
For the two-phase structure, we threshold $\xi$ at each point such that \begin{equation} \xi_p := \mathrm{sgn}\;\xi, \end{equation} where the sign function $\mathrm{sgn}(\cdot)$ indicates that $\xi_p=1$ for positive $\xi$, $\xi_p=0$ for $\xi=0$, and $\xi_p = -1$ for negative $\xi$. To avoid ringing artifacts that can occur in spectral discretizations with discontinuous changes in properties between pixels, we smooth the segmented field $\xi_p$ with a finite-difference iteration based on the Allen-Cahn equation \cite{allen_microscopic_1979, bueno-orovio_spectral_2006}. This iteration can be expressed as \begin{equation} \label{eq:ACsmooth} \xi_\mathbf{k}^{n+1}:= \xi_\mathbf{k}^n - 0.1 \left[ \left( (\xi_\mathbf{k}^n)^3 -\xi_\mathbf{k}^n \right) + \left( 4\xi^n_{\mathbf k} - \xi^n_{\mathbf k- (1,0)} - \xi^n_{\mathbf k- (0,1)} - \xi^n_{\mathbf k+ (1,0)} - \xi^n_{\mathbf k+ (0,1)}\right) \right], \end{equation} for $\mathbf k \in \mathbb Z_N^2$ ($\xi$ at indices outside of $\mathbb Z_N^2$ is known based on periodicity) and $n=0,1,2,...,20$. The iteration in Eq.\ \eqref{eq:ACsmooth} results in a structure consisting of two phases with $\xi_p=-1$ and $\xi_p=1$ separated by a diffuse interface approximately $2\Delta x$ wide. This structure is then scaled to its final values according to \begin{equation} \xi_p := a \xi_p +1, \end{equation} where the scalar $a < 1$ determines the ratio $(1+a)/(1-a)$ of the Young's moduli of the bulk phases. We consider three realizations of the smooth random structure and two-phase structure at size $L_x=L_y=100\ell$ and two realizations of the two-phase structure at size $L_x=L_y=400\ell$. All of these structures have $L_\mathrm{cut}=6\ell$. The smooth random structures have a target standard deviation for $\xi$ of $\mathrm{STD}_t=0.3$, while the two-phase structures have $a=0.875$, resulting in a ratio of 15 between the Young's moduli of the phases. To investigate convergence of the crack path with respect to the ratio $L_\mathrm{cut}/\ell$ in already-generated structures, we upscale $(100\ell)^2$ structures via bivariate cubic spline interpolation. In this process, we use the SciPy RectBivariateSpline class to interpolate from a $513^2$ grid (the original $N_x=N_y=511$ grid plus extra layers of points to ensure periodicity) to a $1023^2$ or $2047^2$ grid corresponding respectively to a larger $(200\ell)^2$ or $(400\ell)^2$ domain and thus a larger value of $L_\mathrm{cut}$. The results below will typically present only a single realization for a given condition. Cases where observations do not generalize to the other realizations are specifically noted. \section{Results and Discussion} \subsection{Evolution Method} \label{sec:results_evolution} In this sub-section, we consider the different evolution methods for phase field fracture and compare how they affect crack paths in elastically heterogeneous materials. To inform our discussion of the heterogeneous case (and to provide some validation for our numerical methods), we first examine simpler systems that are homogeneous except for a single crack or flaw. All fracture simulations in this section were carried out with the AT1 phase field formulation, damage-type irreversibility, the strain-spectral crack driving force (Eq.\ \eqref{eq:strain_spectral_en}), and the stress-free (isotropic) contact model (Eq.\ \eqref{eq:stress_iso}). \subsubsection{Homogeneous Material}\label{subsec:homog_ev} Consider a homogeneous domain of size $L_x=L_y=100\ell$ ($N_x=N_y=511$) with $E = E_0 = 10^4G_c/\ell$ and $\nu = 0.2$.
The crack was introduced into the initial condition of the phase field via a generalization of the analytical solution in Eq.\ \eqref{eq:AT1_analytical}, \begin{equation} \label{eq:init_crack} \phi_\mathrm{init.}(\mathbf x)= \begin{cases} \left(1-\frac{1}{2\ell}||\mathbf r(\mathbf x)||_2\right)^2, & \text{if } ||\mathbf r||_2 \leq 2\ell\\ 0, & \text{otherwise} \end{cases} \end{equation} where $\mathbf r = ( \max(|x|-L_\mathrm{crack}/2,\,0),\, y)$ with $L_\mathrm{crack}=50\ell$ the length of the crack. The small void was introduced using Eq.\ \eqref{eq:init_crack} and $\mathbf r = (x, y)$, making it effectively a crack with zero length. Uniaxial tensile strain was applied to this domain with increments of \begin{equation} \label{eq:loading_tensile} \Delta \mathbf{ \bar \varepsilon} =\left\{\begin{matrix} 0 & 0 \\ 0 & 10^{-4} \end{matrix}\right\}. \end{equation} Figure \ref{fig:homogeneous_tension_phi} compares the crack initial condition (IC) with the small void IC, showing for each the initial condition for $\phi$ (Fig.~\ref{fig:homogeneous_tension_phi}a and d), the final state for the time-dependent evolution method (Fig.~\ref{fig:homogeneous_tension_phi}b and e), and the profiles of $\phi$ for completed simulations with each evolution method along the vertical line $x=50\ell$ at the right edge of the domain (Fig.~\ref{fig:homogeneous_tension_phi}c and f). The final states for $\phi$ for the alternating minimization and near-equilibrium cases are not shown, as they agree with the analytical solution in Eq.\ \eqref{eq:AT1_analytical}: relative errors $\int_\Omega |\phi - \phi_\mathrm{anal.}|\dif\mathbf x/\int_\Omega |\phi_\mathrm{anal.}|\dif\mathbf x$ for the crack and small-void ICs are respectively 5.6\% and 6.6\% for the alternating minimization method and 6.5\% and 6.8\% for the near-equilibrium method. This agreement with the analytical solution is illustrated qualitatively in Fig.\ \ref{fig:homogeneous_tension_phi}c and f, in which the analytical solution is drawn for comparison. For the time-dependent method, the final phase field for the crack IC, shown in Fig.\ \ref{fig:homogeneous_tension_phi}b, also agrees reasonably well with the analytical solution, with a relative error of 10.4\% that is higher than those of the other two methods. The main difference from the analytical solution in this case is that the crack becomes wider as it nears the domain boundary in Fig.\ \ref{fig:homogeneous_tension_phi}b, resulting in a profile wider than the analytical profile in Fig.\ \ref{fig:homogeneous_tension_phi}c. A similar behavior is observed with much greater magnitude for the time-dependent method with the small void IC, shown in Fig.\ \ref{fig:homogeneous_tension_phi}e. This crack becomes increasingly broad from the initial void at the center of the domain to the domain boundaries, and the profile of $\phi$ at the domain boundary (Fig.\ \ref{fig:homogeneous_tension_phi}f) is much broader than for any other solution. The relative difference in $\phi$ between the time-dependent case in Fig.\ \ref{fig:homogeneous_tension_phi}e and the analytical solution is large, at 186\%. \begin{figure} \centering \includegraphics[width = 15.5cm]{figures/point_crack_phi.pdf} \caption{Initial conditions and selected final phase fields for simulations via the three evolution methods (alternating minimization, time dependent, and near equilibrium) in domains with spatially uniform elastic properties and an initial crack (a-c) or small void (d-f) in the phase field.
Pseudocolor plots of the phase field are shown for the initial conditions (a,d) and the final states of the time-dependent simulations (b,e), while the profile of $\phi$ at its final state along the rightmost boundary of the domain (the line $x=50\ell$) is shown for all evolution methods (c,f). For these profiles, the alternating minimization solution is indicated by circles, the time-dependent solution by squares, and the near-equilibrium solution by triangles. Results for alternating minimization and near equilibrium perfectly coincide in (c,f). The analytical solution is indicated by a solid black line. } \label{fig:homogeneous_tension_phi} \end{figure} To provide insight into the differences between the time-dependent method and the other methods observed in Fig.\ \ref{fig:homogeneous_tension_phi}, Fig.\ \ref{fig:homogeneous_tension_stats} plots the stress-strain curves, the evolution of the total dissipated fracture energy in the system $F_f$ vs.\ iteration, and the fracture energy per unit length $G/G_c$ vs.\ $x/\ell$ for both the crack and small void ICs. Average stress $\bar \sigma_{22}$ and strain $\bar \varepsilon_{22}$ are scaled by $\sigma_M$ and $\varepsilon_M$ obtained from Eqs.\ \eqref{eq:sigma_M} and \eqref{eq:epsilon_M}, respectively. Iteration in Fig.\ \ref{fig:homogeneous_tension_stats}b and e denotes the time step $n$ in Algorithms \ref{alg:time-discretized-nocontrol} and \ref{alg:near-equilibrium} for the time-dependent and near-equilibrium methods, respectively, while for the alternating minimization method, Algorithm \ref{alg:alternating-minimization}, it refers to the total number of inner iterations (indexed by $n$) over both the current strain step (indexed by $s$) and all previous strain steps. The alternating minimization and time-dependent simulations have the same stress-strain curves in Fig.\ \ref{fig:homogeneous_tension_stats}a and d: stress increases linearly with strain until it reaches its maximum value, at which point it decreases to zero at fixed strain. The slope of the linear regime (i.e., the homogenized elastic constant $\bar C_{2222}=\bar \sigma_{22}/\bar \varepsilon_{22}$ prior to fracture) for the small void IC in Fig.\ \ref{fig:homogeneous_tension_stats}d is nearly $1\,\sigma_M/\varepsilon_M$, consistent with a nearly homogeneous domain, and the peak stress is relatively high, at $0.77\sigma_M$. The crack IC results in a less stiff domain, with $\bar C_{2222} =0.65 \sigma_M/\varepsilon_M$, and fractures at a much lower stress of $0.16\sigma_M$. This difference in fracture stresses is expected, as a crack should induce a singularity in the stress field while a round void should not. The stress-strain curves for the near-equilibrium case exhibit snap-back, where strain decreases during fracture instead of remaining constant. We can track the effects of material degradation by noting that the homogenized stiffness $\bar C_{2222}$ at a partially fractured state is the slope of the line between a point on the stress-strain curve and the origin. For the small void IC case in Fig.\ \ref{fig:homogeneous_tension_stats}d, significant snap-back (almost a factor-of-three reduction in $\bar \varepsilon_{22}$) occurs with very little change in stiffness, whereas the stiffness for the crack IC in Fig.\ \ref{fig:homogeneous_tension_stats}a decreases by almost 50\% before significant snap-back occurs.
This behavior could be expected, as the crack that nucleates from the small void during fracture initiation introduces a new stress singularity, greatly reducing the critical value of $\bar \varepsilon_{22}$ needed for crack growth. Since the time-dependent and alternating minimization simulations experience much higher strains in the small void case than the near-equilibrium simulation when at the same average stiffness, we can say that they undergo crack propagation under overstressed conditions. Since all of the simulations with the crack IC have no change in strain as stress (and thus stiffness) decreases initially, we can say that they are all similarly close to equilibrium for a large initial part of their evolution. The near-equilibrium stress-strain curve for the crack IC eventually diverges from those of the other evolution methods, but the difference in applied strain between them is still small compared to the small void IC case. To understand how overstress might affect evolution, consider the plots of the fracture energy $F_f$ vs.\ iteration in Figs.\ \ref{fig:homogeneous_tension_stats}b and e. For the crack IC case in Fig.\ \ref{fig:homogeneous_tension_stats}b, all three evolution methods give essentially the same amount of crack growth per iteration for the first 600 iterations. The alternating minimization and time-dependent cases have almost identical evolution thereafter, with the main distinction being a slight drop in $F_f$ at the end of the alternating minimization simulation, which brings it closer to the ideal value of $100G_c\ell$. The near-equilibrium case has slower evolution at the end compared to the other two methods, requiring approximately 30\% more iterations to reach its end state. For the void IC case in Fig.\ \ref{fig:homogeneous_tension_stats}e, the evolution of $F_f$ is very different between the three evolution methods. In the alternating minimization case, $F_f$ peaks at $227.7G_c\ell$ after only 184 iterations before declining rapidly to $103.0G_c\ell$ at 196 iterations. In the time-dependent case, $F_f$ increases monotonically to $166.2G_c\ell$ over 526 iterations. In the near-equilibrium case, $F_f$ increases monotonically and more slowly than in the time-dependent case, but it stops closer to the ideal value, reaching $102.6 G_c\ell$ after 3024 iterations. The alternating minimization solution does not appear to suffer any ill effects from its rapid evolution, however, as $G/G_c$ in Fig.\ \ref{fig:homogeneous_tension_stats}c and f remains near unity over the length of the crack, just as it does in the near-equilibrium case. Consistent with the profiles of $\phi(50\ell,y)$ shown in Fig.\ \ref{fig:homogeneous_tension_phi}c and f, $G/G_c$ for the time-dependent simulation only differs from unity near the domain boundary ($x/\ell>45$) for the crack IC in Fig.\ \ref{fig:homogeneous_tension_stats}c, while for the void IC in Fig.\ \ref{fig:homogeneous_tension_stats}f it increases significantly starting from the initial void. \begin{figure} \centering \includegraphics[width = 15.5cm]{figures/point_crack_compare.pdf} \caption{Stress-strain plots (a,d), plots of fracture energy $F_f$ vs.\ iteration (b,e), and plots of energy released per unit length $G$ vs.\ $x/\ell$ (c,f) for simulations conducted with the three evolution methods (alternating minimization, time-dependent, and near-equilibrium) in domains with spatially uniform elastic properties and an initial crack (a-c) or small void (d-f) in the phase field.
Iteration in (b,e) refers to the time step for the near-equilibrium and time-dependent cases and to the inner iteration, indexed cumulatively over all load steps, for the alternating minimization method. In all plots, the alternating minimization case is indicated by a solid orange line, the time-dependent case by a thick green dashed line, and the near-equilibrium case by a blue finely dashed line. The analytical final value of $F_f$ in (b,e) is $100\,G_c\ell$, which is indicated in (e) by a thin solid black line. } \label{fig:homogeneous_tension_stats} \end{figure} \subsubsection{Randomly Heterogeneous Structures} \label{sec:evolution_random} Figure \ref{fig:paths_small_evolution} compares crack paths between evolution methods for a smooth random structure and a two-phase structure, both of size $L_x=L_y=100\ell$. For the smooth random structure, each evolution method produces a qualitatively different crack path. Both the alternating minimization (Fig.\ \ref{fig:paths_small_evolution}a) and time-dependent (Fig.\ \ref{fig:paths_small_evolution}b) crack paths avoid propagating through regions with high Young's modulus even if doing so requires them to change direction. This is in contrast to the near-equilibrium crack path (Fig.\ \ref{fig:paths_small_evolution}c), which deviates only slightly from a straight horizontal line. The time-dependent crack path is notably thicker than both the alternating minimization and near-equilibrium crack paths, which contributes to it having a higher scaled fracture energy $F_f/(G_c \ell)$ at the end of the simulation, with $157.2$ compared to $104.3$ for the near-equilibrium case and $126.5$ for the alternating minimization case. The time-dependent crack also evolved in both directions simultaneously (as indicated by the intermediate states in Fig.\ \ref{fig:paths_small_evolution}b), while the near-equilibrium crack grew primarily from right to left, eventually re-entering the right side of the periodic domain and continuing to the original initiation site. For the two-phase structure in Fig.\ \ref{fig:paths_small_evolution}d-f, the crack paths for the different evolution methods are in better qualitative agreement than for the smooth random structure in Fig.\ \ref{fig:paths_small_evolution}a-c. All of the crack paths in Fig.\ \ref{fig:paths_small_evolution}d-f have evolved significantly in the vertical direction, yielding convoluted crack paths that closely track microstructural features. In particular, the crack nucleates within regions of low-$E$ phase that separate regions of high-$E$ phase in the vertical direction, and it tends to propagate through the low-$E$ phase where possible. All of the crack paths agree for approximately half of their extent, deviating eventually because the near-equilibrium crack in Fig.\ \ref{fig:paths_small_evolution}f extends to the lower right while the cracks in Figs.\ \ref{fig:paths_small_evolution}d and e extend to the upper right. In both the time-dependent (Fig.\ \ref{fig:paths_small_evolution}e) and near-equilibrium (Fig.\ \ref{fig:paths_small_evolution}f) cases, secondary cracks are observed to nucleate and grow, and the old primary crack tip may join with the secondary crack (as in Fig.\ \ref{fig:paths_small_evolution}e and the left side of Fig.\ \ref{fig:paths_small_evolution}f) or bypass it and go in a different direction (as in the right side of Fig.\ \ref{fig:paths_small_evolution}f).
The alternating minimization crack path in Fig.\ \ref{fig:paths_small_evolution}d is missing secondary crack tips that remain in the time-dependent crack path in Fig.\ \ref{fig:paths_small_evolution}e; otherwise both evolution methods produce very similar final crack paths. The final values of $F_f/(G_c \ell)$ are closer for the two-phase structure than for the smooth random structure, with $137.2$ for the alternating minimization method, $151.2$ for the time-dependent method, and $162.7$ for the near-equilibrium method. (For the time-dependent and near-equilibrium evolution methods, final values for $F_f$ correspond to the largest values in the legends for $F_f$ in Fig.~\ref{fig:paths_small_evolution} and similar figures.) These differences are due primarily to the different crack paths: the only systematic difference in $F_f$ across all realizations of the two-phase structure is that the time-dependent case has higher $F_f$ than the alternating minimization case. Differences in crack paths are also more subtle in other realizations compared to Fig.~\ref{fig:paths_small_evolution}d-f. \begin{figure} \centering \includegraphics[width = 15.5cm]{figures/paths_small_evolution.pdf} \caption{Pseudocolor plots of the scaled Young's modulus $E(\mathbf x)/E_0$ for two randomly heterogeneous structures, one smooth structure (a-c) and one two-phase structure (d-f), overlaid with crack paths from the three evolution methods: alternating minimization (a,d), time-dependent evolution (b,e), and near-equilibrium evolution (c,f). Both structures have size $L_x=L_y=100\ell$. The crack paths consist of filled contours that depict areas with $\phi\ge0.5$. The crack paths for the time-dependent and near-equilibrium evolution methods (b, c, e, f) are shaded from light to dark red to show the progression of crack growth, with each level corresponding to the fracture energy $F_f$ depicted in the legend. The alternating minimization crack paths (a,d) are shown with a solid red color. Images are centered on the crack initiation site (the first point with $\phi>0.95$) for the near-equilibrium evolution method (c,f). } \label{fig:paths_small_evolution} \end{figure} To further examine possible differences between evolution methods, we also consider fracture of a much larger two-phase structure, with $L_x=L_y=400\ell$ rather than $L_x=L_y=100\ell$. These simulations use less restrictive convergence tolerances of $10^{-4}$ for the sub-problem solvers, $(\Delta \phi)_\mathrm{min.} = \mathrm{Tol}_\mathrm{AM} = 10^{-2}$ for all evolution methods, and $(-F_\phi)_\mathrm{max.}=1G_c/\ell$ for the near-equilibrium method. Additionally, the increment of the average strain is smaller, with $\Delta \bar \varepsilon_{22}=2\times 10^{-5}$. Crack paths for this structure for the three evolution methods are shown in Figure \ref{fig:paths_large_evolution}. As in the smaller structure in Fig.\ \ref{fig:paths_small_evolution}d-f, there is initially a region of agreement between all three crack paths, but it is much smaller (${\sim20}\%$) relative to the overall crack length. The alternating minimization and time-dependent crack paths (Fig.\ \ref{fig:paths_large_evolution}b and c, respectively) agree for longer, ${\sim} 40\%$ of their length. The alternating minimization crack path in Fig.\ \ref{fig:paths_large_evolution}b contains two long secondary cracks that are separated from the longer primary crack.
One secondary crack overlaps with the other cracks over its entire length, while at one location the other secondary crack completely encircles a feature of high-$E$ phase. For the time-dependent crack path in Fig.\ \ref{fig:paths_large_evolution}c, there are many small secondary cracks, particularly once the primary crack has progressed away from its nucleation site. In contrast, the near-equilibrium crack path in Fig.\ \ref{fig:paths_large_evolution}d has no visible secondary cracks at all, resulting in a lower fracture energy $F_f=486.7G_c\ell$ compared to $643.5G_c\ell$ for the alternating minimization case and $686.9G_c\ell$ for the time-dependent case. \begin{figure} \centering \includegraphics[width = 15.5cm]{figures/paths_large_evolution_arxiv.pdf} \caption{Pseudocolor plots of the scaled Young's modulus $E(\mathbf x)/E_0$ for a two-phase structure with size $L_x=L_y=400\ell$ overlaid with crack paths corresponding to simulations with the three evolution methods: (b) alternating minimization, (c) time-dependent evolution, and (d) near-equilibrium evolution. The entire structure is plotted in (a) with a black box that indicates the area shown in (b-d). Centering of the images and depiction of the crack paths are as in Fig.\ \ref{fig:paths_small_evolution}. } \label{fig:paths_large_evolution} \end{figure} Figure \ref{fig:stress_strain_evolution} depicts stress-strain curves corresponding to the crack paths in Figs.\ \ref{fig:paths_small_evolution} and \ref{fig:paths_large_evolution}. As in Fig.\ \ref{fig:homogeneous_tension_stats}, all three evolution methods share the same linear regime prior to fracture, after which the near-equilibrium method undergoes unloading while the alternating minimization and time-dependent methods evolve the crack at fixed $\bar \varepsilon_{22}$. The two-phase structures have similar stiffnesses ($\bar C_{2222}=0.44\sigma_M/\varepsilon_M$ and $\bar C_{2222}=0.42\sigma_M/\varepsilon_M$ for the large and small structures, respectively) and fracture stresses ($0.22\sigma_M$ and $0.24\sigma_M$), while the smooth random structure has a significantly higher stiffness of $0.94\sigma_M/\varepsilon_M$ and fracture stress of $0.69\sigma_M$. The near-equilibrium stress-strain curves for the large (Fig.\ \ref{fig:stress_strain_evolution}c) and small (Fig.\ \ref{fig:stress_strain_evolution}b) two-phase structures are qualitatively different in that the large structure undergoes more snap-back than the smaller structure. In this way the large two-phase structure is qualitatively similar to the smooth random structure (Fig.\ \ref{fig:stress_strain_evolution}a). The lesser snap-back in the small two-phase structure can be interpreted as a more rapid degradation of its stiffness. The large structure might experience relatively less degradation due to the presence of more high-stiffness features along the line of crack growth: the near-equilibrium crack path crosses the high-$E$ phase 14 times in the large structure (Fig.\ \ref{fig:paths_large_evolution}d) and only three times in the small structure (Fig.\ \ref{fig:paths_small_evolution}f). The small two-phase structure undergoes very high strains and low stresses in Fig.\ \ref{fig:stress_strain_evolution}b at the end of fracture because the tips of the main crack are far apart vertically, and oscillations in the stress-strain curve in this regime are thought to be an artifact of our evolution method.
\begin{figure} \centering \includegraphics[width = 15.5cm]{figures/stress_strain_evolution.pdf} \caption{Stress-strain plots comparing the three evolution methods (alternating minimization, time-dependent evolution, and near-equilibrium evolution) for three different structures: (a) the smooth random structure from Fig.\ \ref{fig:paths_small_evolution}a-c, (b) the two-phase random structure from Fig.\ \ref{fig:paths_small_evolution}d-f, and (c) the large two-phase random structure in Fig.\ \ref{fig:paths_large_evolution}. In all plots, the alternating minimization case is indicated by a solid orange line, the time-dependent case by a thick green dashed line, and the near-equilibrium case by a blue finely dashed line. } \label{fig:stress_strain_evolution} \end{figure} \subsubsection{Discussion} Our results indicate that the evolution method is an important factor in determining the crack path obtained by phase field simulations of quasi-static brittle fracture in heterogeneous materials. To understand the origin of the differences between evolution methods, and possible physical interpretations for the different methods, we connect them to our observations for the simple homogeneous examples in Section \ref{subsec:homog_ev}. Specifically, we note that for the homogeneous structures, all methods resulted in similar cracks when near equilibrium (with the crack IC), but the time-dependent method produced a thicker crack than the others when significant overstresses were present (with the small void IC). For the heterogeneous structures, all methods behave somewhat similarly near equilibrium (in the small two-phase structure), but the alternating minimization and time-dependent methods behave differently from the near-equilibrium method when significant overstresses are present (away from the crack nucleation site in the smooth random and large two-phase structures). Recalling the structure of the alternating minimization algorithm and its behavior in Fig.\ \ref{fig:homogeneous_tension_stats}e with the small void IC, we can explain its behavior as follows: as overstress increases, the alternating minimization algorithm evolves $\phi$ increasingly rapidly and non-locally until the two crack tips meet each other (signifying complete fracture), at which point $\phi$ decreases until a local minimizer is obtained. In our homogeneous examples, only a single local minimizer is available, and the alternating minimization algorithm obtains it successfully. In the heterogeneous examples, a multiplicity of local minima are available. Due to its use of a staggered inner iteration, the alternating minimization method selects a crack path (i.e., local minimizer) that may resemble those of the other methods, particularly the time-dependent method. This is especially true when crack propagation with the alternating minimization method occurs close to equilibrium conditions, as in the homogeneous crack IC example and the small two-phase structure. When propagation occurs far from equilibrium in a heterogeneous structure, it is not clear that the crack path obtained by the alternating minimization method has a specific physical interpretation. Given the difficulty of interpreting the minimization approach, there is little value in viewing the time-dependent evolution as a regularized minimization approach.
Its interpretation as a Ginzburg-Landau-type gradient flow is more useful, primarily because the same interpretation exists for the evolution of the phase field in certain models of dynamic fracture \cite{karma_phase-field_2001,hakim_laws_2009}. The quasi-static time-dependent evolution can be obtained from such dynamic fracture models as the limit of high crack viscosity and/or negligible inertial effects. Indeed, our simulations with the time-dependent evolution show qualitative features, such as crack widening and branching, that have been observed in phase field simulations of dynamic fracture at high overstress \cite{bleyer_dynamic_2017}. Crack branching in dynamic models for phase field fracture is a desirable feature, as it is consistent with experiments. We conjecture that nucleation of small secondary cracks in the large two-phase structure occurs via a similar mechanism, namely delocalized evolution of the phase field due to overstress (see, e.g., Ref.~\cite{scheibert_brittle-quasibrittle_2010} for a possible analog in experiments). The real questions regarding the time-dependent evolution method are 1) whether the high-viscosity/negligible-inertia limit is realistic and 2) whether it is appropriate to label evolution with such a method as `quasi-static'. Regarding the first question, we note only that the zero-viscosity limit is more commonly considered in recent work on dynamic fracture \cite{bourdin_time-discrete_2011,bleyer_dynamic_2017}. Regarding the second, the time-dependent method clearly contains physics corresponding to overstress that is absent from the near-equilibrium method but present in dynamic fracture. Perhaps `quasi-dynamic' fracture would be a more appropriate term for the time-dependent method. The near-equilibrium evolution appears to be the only method among those considered here for obtaining accurate crack paths for quasi-static fracture of the types of heterogeneous structure we have examined. We do not wish to overstate the applicability of this result. Deviation from the near-equilibrium crack path appears to depend on overstress and the heterogeneity of the structure, and there may be broad classes of heterogeneous structures that do not induce the differences between evolution methods that we have observed. Furthermore, minimization methods that seek a global rather than local minimizer for the phase field fracture system \cite{bourdin_numerical_2007,bourdin_variational_2008} provide a fundamentally different piece of information than the quasi-static crack path, one which can be interpreted as a lower bound on fracture toughness. We note that the global minimizer for heterogeneous elasticity with homogeneous local fracture energy is a straight crack. Our simulations show that none of the evolution methods evolve towards this global minimizer for the two-phase structure; the systems instead appear to naturally evolve to local minimizers with substantially higher dissipated fracture energies than the $L_x G_c$ expected for a straight crack. \subsection{Mechanics Formulation} We consider five mechanics models in total: the three variational models introduced in the Background section (isotropic, strain-spectral splitting, and volumetric-deviatoric splitting), plus two non-variational models in which we pair the strain-spectral crack driving force with contact formulations based on the isotropic and volumetric-deviatoric elastic energy densities.
We will refer to these non-variational models by abbreviations of the form `driving force model/contact model', resulting in, respectively, the strain-spectral/stress-free model and the strain-spectral/vol.-dev.~model. Unless specified otherwise, simulations in this section are conducted with the AT1 model with damage irreversibility evolved by the near-equilibrium method. Before considering the simulation results themselves, we briefly examine how the mechanics formulations behave analytically when exposed to different strain states. \subsubsection{Analysis of Mechanics Formulations} Consider a system in plane stress with principal strains $\mathbf{\varepsilon}^1 = a$ and $\mathbf{\varepsilon}^2 = -a-b$ in the plane, where $a>0$ and $b\ge 0$. This corresponds to the superposition of a shear strain $a$ and a uniaxial compressive strain $b$. Since all three variational mechanics formulations have the same degradation function $h(\phi)$, the difference between their crack driving forces $\psi(\phi,\mathbf{\varepsilon})$ lies in their values for the coupled part of the elastic energy, $\psi^+_0$ in Eq.~\eqref{eq:psi_schema}. These are $\psi^+_0 = \frac{1}{2}\lambda b^2+2\mu(a^2 + ab + b^2/2)$ for the isotropic model, $\psi^+_0 = \mu a^2$ for the strain-spectral split, and $\psi^+_0 = 2\mu(a^2 + ab + 5b^2/18)$ for the volumetric-deviatoric split. For this loading, one can say generically that $(\psi^+_0)_\mathrm{iso.} \ge (\psi^+_0)_\mathrm{vol.-dev.} > (\psi^+_0)_\mathrm{spectral}$, with equality between the isotropic and volumetric-deviatoric driving forces for pure shear ($b=0$). The strain-spectral driving force is the only one with no contribution from the compressive strain $b$. This is desirable from a theoretical perspective, since classical theories for the direction of crack propagation \cite{hutchinson_mixed_1991,hodgdon_derivation_1993} only allow propagation in directions subject to tension. For undamaged material ($\phi=0$), all three models return the same stresses for a given strain. We therefore compare the models at a point where $\phi=1$, where the stresses contain only their $\partial \psi_0^-/\partial \mathbf{\varepsilon}$ term. Such a point corresponds to the center of a crack, and the stresses there correspond to a model for contact of the crack faces \cite{amor_regularized_2009,freddi_regularized_2010}. These contact models are limited because they lack explicit information about the crack's direction or its surface normal vector, but we can still assess their effects in the context of contact by aligning the system coordinates to the normal vector of the crack surfaces. Consider a straight crack normal to the $y$-axis. Instead of simulating the entire domain for this scenario (see, e.g., Ref.~\cite{zhang_assessment_2022} for this case), we consider analytically the response of a point with $\phi=1$ to an imposed local strain. Given a compressive strain along the $y$-axis, one would expect a contact model to yield a compressive stress.
This is the case for the strain-spectral and volumetric-deviatoric splits, but not for the isotropic model, which is stress-free: \[ \mathbf{\varepsilon}=\left\{\begin{matrix} 0 & 0 \\ 0 & -b \end{matrix} \right\} \] \begin{equation} \mathbf{\sigma}_\mathrm{iso.}=\mathbf 0,\;\; \mathbf{\sigma}_\mathrm{spectral}=\left\{\begin{matrix} -\lambda b & 0 \\ 0 & -(\lambda+2\mu) b \end{matrix} \right\},\;\; \mathbf{\sigma}_\mathrm{vol.-dev.}= (\lambda + 2\mu/3) \left\{\begin{matrix} -b & 0 \\ 0 & -b \end{matrix} \right\}. \end{equation} The strain-spectral model is also the only one whose stress response matches that of the undamaged material, since the volumetric-deviatoric model results in a lower $\sigma_{yy}$ stress. For a mixed strain state with tensile strain along the $y$-axis, one would expect zero stress because the tensile strain would bring the crack faces out of contact. This is the case for the isotropic model and the volumetric-deviatoric split, \[ \mathbf{\varepsilon}=\left\{\begin{matrix} 0 & b \\ b & 2b \end{matrix} \right\} \] \begin{equation} \mathbf{\sigma}_\mathrm{iso.}=\mathbf 0,\;\; \mathbf{\sigma}_\mathrm{spectral}= \frac{\mu}{\sqrt{2}}\left\{\begin{matrix} -b & (\sqrt{2}-1)b \\ (\sqrt{2}-1)b & (2\sqrt{2}-3)b \end{matrix} \right\},\;\; \mathbf{\sigma}_\mathrm{vol.-dev.}= \mathbf 0. \end{equation} The strain-spectral split, on the other hand, retains a significant amount of positive shear stress and introduces new compressive axial stresses in both the $x$- and $y$-directions. Since the strain-spectral split does not remove shear stresses regardless of the presence of tensile strains, we consider it a `fixed' contact, in contrast to the frictionless behavior of the volumetric-deviatoric split \cite{amor_regularized_2009} and the stress-free crack simulated by the isotropic model. While it does not model contact in compression, the stress-free crack is in fact a common assumption for stress analysis of cracks \cite{zehnder_fracture_2012,rice_mathematical_1968} and sharp-crack models for fracture \cite{larralde_shape_1995,ramanathan_quasistatic_1997,katzav_fracture_2007,lebihain_effective_2020}. Our analytical observations here are consistent with recent simulations of simple compression and shear by Zhang et al.~\cite{zhang_assessment_2022}. \subsubsection{Mode II Fracture of a Homogeneous Material} \label{sec:modeII_homogeneous} Comparisons of mechanics models in the literature often examine fracture of a pre-cracked specimen with uniform properties under in-plane shear (mode II) loading \cite{ambati_review_2015, bilgen_crack-driving_2019,zhang_assessment_2022}. Conditions for these simulations are difficult to replicate directly with periodic boundary conditions. Fortunately, loading via an applied average shear strain is similar to a classic experiment by Erdogan and Sih \cite{erdogan_crack_1963}, in which a distributed shear was applied to a cracked PMMA plate away from the crack (Fig.\ 9 ibid.). To match this experiment, we simulate fracture within a domain with $L_x = L_y=200\ell$ containing a horizontal crack of length $20\ell$ imposed in either the phase field or the Young's modulus $E(\mathbf x)$. The crack in $E(\mathbf x)$ is obtained by taking $E(\mathbf x) = h(\phi_\mathrm{init.})E_0$ with $\phi_\mathrm{init.}$ from Eq.\ \eqref{eq:init_crack}, while the phase field crack uses $\phi_\mathrm{init.}$ as the initial condition directly. Apart from the crack, elastic properties are uniform with $E=E_0$ and $\nu=0.4$, which is more representative of PMMA than $\nu=0.2$.
The domain is then strained in pure shear in increments of \[ \Delta \mathbf{\bar{\varepsilon}} = \left\{\begin{matrix} 0 & 5\times10^{-5} \\ 5\times 10^{-5} & 0 \end{matrix}\right\}. \] This strain state implies different values of $\sigma_M$ and $\varepsilon_M$ for the stress and strain for fracture of a homogeneous material than were computed in Eqs.\ \eqref{eq:tension_M}-\eqref{eq:sigma_M} for a pure tensile strain. For pure shear with the strain-spectral split for the elastic energy density, we have \begin{align} 2 \mu \bar \varepsilon_{12,M}^2 &= \frac{3G_c}{8\ell}, \label{eq:shear_epsilon_M} \\ \bar \varepsilon_{12,M} &= \frac{1}{4}\sqrt{\frac{3G_c}{\ell \mu}}, \\ \bar\sigma_{12,M} &= \frac{1}{2}\sqrt{\frac{3G_c\mu}{\ell}}, \end{align} which for $\nu=0.4$ and $E=10^4G_c/\ell$ yields $\bar \varepsilon_{12,M}=0.007246$ and $\bar \sigma_{12,M}=51.75G_c/\ell$. Figure \ref{fig:shear_pf_compare} compares the crack paths resulting from the phase field initial crack to a trace of the three cracks (one pre-crack and two mode II cracks) present in Fig.\ 9 of Erdogan and Sih \cite{erdogan_crack_1963}. Interestingly, none of the variational models (Fig.\ \ref{fig:shear_pf_compare}a-c) matches the experimental crack path, but the two non-variational models (Fig.\ \ref{fig:shear_pf_compare}d and e) both fit it very well. The strain-spectral variational model (Fig.\ \ref{fig:shear_pf_compare}a) results in cracks at a $45^\circ$ angle relative to the pre-crack, which disagrees with both the experimental crack path and the angle of $70^\circ$ predicted in Ref.~\cite{erdogan_crack_1963}. This result does, however, match previous shear fracture simulations with periodic boundary conditions \cite{chen_fft_2019} and one set of FEM-based simulations \cite{bilgen_crack-driving_2019}. The isotropic model (Fig.\ \ref{fig:shear_pf_compare}b) results in growth of the pre-crack followed by nucleation of two crack branches per initial crack tip (i.e., four crack branches in total). The nucleation of these spurious crack branches is expected behavior for the isotropic model \cite{miehe_thermodynamically_2010, ambati_review_2015, bilgen_crack-driving_2019}. The volumetric-deviatoric model (Fig.\ \ref{fig:shear_pf_compare}c) results initially in growth in the same direction as the pre-crack, but the crack paths eventually change direction and take a path that resembles a scaled-up version of the experimental crack path. With both the strain-spectral/stress-free and strain-spectral/vol.-dev.\ non-variational models (Fig.\ \ref{fig:shear_pf_compare}d and e, respectively), the mode II cracks propagate directly from the pre-crack with the same angle, overall trajectory, and scale relative to the initial crack as the experimental crack path. These models also fractured at similar stresses $\bar \sigma_{12}$ of $0.36\sigma_{12,M}$ for the strain-spectral/stress-free model and $0.37\sigma_{12,M}$ for the strain-spectral/vol.-dev.\ model. This compares to a much higher fracture stress of $0.56\sigma_{12,M}$ for the strain-spectral variational model and lower fracture stresses of $0.31\sigma_{12,M}$ and $0.33\sigma_{12,M}$ for the volumetric-deviatoric and isotropic variational models, respectively.
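The analytical contact stresses given in the analysis sub-section above are straightforward to verify numerically. The following NumPy sketch (our own illustration, with $\lambda=\mu=b=1$ chosen arbitrarily) reconstructs $\mathbf{\sigma}_\mathrm{spectral}$ at a fully damaged point for the mixed strain state:
\begin{verbatim}
import numpy as np

def sigma_spectral_damaged(eps, lam, mu):
    # Stress at phi = 1 for the strain-spectral split: only the
    # negative part of the spectral decomposition contributes.
    w, V = np.linalg.eigh(eps)
    eps_neg = (V * np.minimum(w, 0.0)) @ V.T   # negative-strain part
    tr_neg = min(np.trace(eps), 0.0)
    return lam * tr_neg * np.eye(2) + 2 * mu * eps_neg

b, lam, mu = 1.0, 1.0, 1.0
eps = np.array([[0.0, b], [b, 2.0 * b]])
print(sigma_spectral_damaged(eps, lam, mu))
# matches (mu/sqrt(2)) * [[-b, (sqrt(2)-1) b],
#                         [(sqrt(2)-1) b, (2 sqrt(2)-3) b]]
\end{verbatim}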
\begin{figure} \centering \includegraphics[width = 15.5cm]{figures/shear_nu4_pf.pdf} \caption{Pseudocolor plots of the phase field resulting from shearing of a homogeneous material with an initial crack in the phase field according to five different mechanics formulations: (a) the strain-spectral variational model, (b) the isotropic variational model, (c) volumetric-deviatoric variational model, (d) the non-variational strain-spectral/stress-free model, and (e) the non-variational strain-spectral/vol.-dev.~model. Images depict the central area $[-100\ell,100\ell]^2$, one quarter of the simulation domain. The red dashed lines indicate a trace of the cracks (one pre-crack and two mode II cracks) in Fig.\ 9 of Erdogan and Sih \cite{erdogan_crack_1963} that has been rotated and rescaled while preserving its aspect ratio. All sub-figures use the same scaling for this trace. } \label{fig:shear_pf_compare} \end{figure} To provide insight into the differences in crack path and fracture stress between the simulations in Fig.\ \ref{fig:shear_pf_compare}, Fig.\ \ref{fig:shear_stress} presents pseudocolor plots of the stresses induced by the three contact models prior to significant evolution of the phase field. In particular, Fig.\ \ref{fig:shear_stress}a-c plots the sum of the principal stresses $\sigma_1$ and $\sigma_2$ (i.e., the trace of the stress tensor), while the difference $\sigma_1-\sigma_2$ is plotted in Fig.\ \ref{fig:shear_stress}d-f. Both quantities are scaled by the expected far-field value for $\sigma_1-\sigma_2$ based on the applied strain, $2\sigma_\infty=4\mu\bar \varepsilon_{12}$, which is equal to $14.29G_c/\ell$ in this case. The stress distribution for the strain-spectral model (Fig.\ \ref{fig:shear_stress}a and c) matches the scenario for a point with $\phi=1$ outlined in the previous sub-section: the applied shear strain induces significant stresses within the crack that correspond to a mixed state of shear and compression, with large negative $\sigma_1+\sigma_2$ and positive $\sigma_1-\sigma_2$. Net tensile stresses ($\sigma_1+\sigma_2>0$) are concentrated at the crack tips, but the distribution is qualitatively different from the isotropic model and volumetric-deviatoric split in Fig.\ \ref{fig:shear_stress}b and e and \ref{fig:shear_stress}c and f, respectively. These contact models result in stress-free crack centers and stress concentrations exclusively at the crack tips. The isotropic model results in a distribution of $\sigma_1+\sigma_2$ that is anti-symmetric about both the $x$- and $y$-axes. The volumetric-deviatoric split results in a qualitatively similar stress distribution compared to the isotropic model, particularly for $\sigma_1-\sigma_2$, but it has significantly smaller positive (tensile) peak values for the trace $\sigma_1+\sigma_2$. This difference in stress distribution between the volumetric-deviatoric and isotropic models seems to have had only a minor effect on fracture stress, however, and no noticeable effect on crack path. \begin{figure} \centering \includegraphics[width = 15.5cm]{figures/shear_stresses_nu4.pdf} \caption{Pseudocolor plots of the trace of the stress $\sigma_1+\sigma_2$ (a-c) and the difference between principal stresses $\sigma_1-\sigma_2$ (d-f) for a phase field crack subjected to an imposed average shear with three mechanics models: (a,d) strain-spectral split, (b,e) isotropic (no split), and (c,f) volumetric-deviatoric split. 
The scaling factor $2\sigma_\infty=4\mu \bar\varepsilon_{12}$ corresponds to the far-field value of $\sigma_1-\sigma_2$ induced by the applied average strain in a homogeneous material. } \label{fig:shear_stress} \end{figure} Simulations with the initial crack in $E(\mathbf x)$ rather than $\phi(\mathbf x)$ serve to further refine our distinction between effects of contact model and crack driving force. In this case, the contact model contained in the phase field model will affect the mode II cracks that emerge during fracture, but the initial crack in $E(\mathbf x)$ is always stress-free. The results of these simulations in Fig.\ \ref{fig:shear_cx_compare} are qualitatively similar to those in Fig.\ \ref{fig:shear_pf_compare} with one major exception: the strain-spectral variational model in Fig.\ \ref{fig:shear_cx_compare}a now matches the experimental crack path with the same fidelity as the two non-variational models in Fig.\ \ref{fig:shear_cx_compare}d and e. All three of these models now have very similar fracture stresses, at $0.47\sigma_{12,M}$ for the variational strain-spectral model and $0.46\sigma_{12,M}$ for both non-variational models. The large difference in fracture stress between initial conditions in the non-variational cases may be due to the need for nucleation of the new crack when the initial crack is in $E(\mathbf x)$. Nucleation may also be responsible for more subtle differences between Fig.\ \ref{fig:shear_pf_compare} and Fig.\ \ref{fig:shear_cx_compare}: the scaling factor for the trace of the experimental path is 11\% larger in the latter, and the fits between simulated and experimental crack paths are slightly worse in Fig.\ \ref{fig:shear_cx_compare}a, d, and e than in Fig.\ \ref{fig:shear_pf_compare}d and e. Fracture stresses for the isotropic and volumetric-deviatoric variational models were both $0.34\sigma_{12,M}$, consistent with the similarity between their crack driving forces under shear that was noted in the previous sub-section. Note that $\sigma_1-\sigma_2$, which corresponds to shear stress, is maximized in Fig.\ \ref{fig:shear_stress}e and f along the same axis as the initial crack, and it is at this location that the crack grows initially in the isotropic and volumetric-deviatoric models in Figs.~\ref{fig:shear_pf_compare} and \ref{fig:shear_cx_compare}. \begin{figure} \centering \includegraphics[width = 15.5cm]{figures/shear_nu4_cx.pdf} \caption{Pseudocolor plots of the phase field resulting from shearing of a homogeneous material with an initial crack in $E(\mathbf x)$ according to five different mechanics formulations: (a) the strain-spectral variational model, (b) the isotropic variational model, (c) volumetric-deviatoric variational model, (d) the non-variational strain-spectral/stress-free model, and (e) the non-variational strain-spectral/vol.-dev.~model. Images depict the same sub-domain as in Fig.\ \ref{fig:shear_pf_compare}. The experimental crack path in this figure (red dashed line) has been uniformly rescaled to be 11\% larger than that in Fig.\ \ref{fig:shear_pf_compare}. } \label{fig:shear_cx_compare} \end{figure} \subsubsection{Mixed-Loading Fracture of a Randomly Heterogeneous Material} We now consider how different mechanics models affect crack paths in heterogeneous structures. Our main interest in this study is tensile fracture, but the differences between mechanics models are greatest for compressive stress states \cite{de_lorenzis_nucleation_2021}.
As a compromise, we consider a mixed loading state with an applied strain increment of \[ \Delta \mathbf{\bar \varepsilon} = \left\{\begin{matrix} -5\times10^{-5} & 0 \\ 0 & 10^{-4} \end{matrix}\right\}. \] (Poisson's ratio is set to 0.2, as in all other simulations except those of Section \ref{sec:modeII_homogeneous}.) Figure \ref{fig:paths_mixed_elasticity} shows crack paths for the different mechanics models for a two-phase random structure of size $L_x=L_y=100\ell$. Crack paths for the non-variational models with the strain-spectral driving force in Fig.~\ref{fig:paths_mixed_elasticity}d and e are essentially identical. The variational strain-spectral crack in Fig.~\ref{fig:paths_mixed_elasticity}a follows the same path as the non-variational models for much of its evolution, but the crack grows wider in certain locations as the simulation progresses. This widening results in a substantially higher final fracture energy $F_f$ than in the non-variational models; it also smooths the crack path, leaving fewer high-curvature features (kinks or corners). The isotropic and volumetric-deviatoric variational models (Fig.~\ref{fig:paths_mixed_elasticity}b and c, respectively) initially nucleate a crack at the same location as the models with the strain-spectral driving force, but growth of this crack is arrested and the domain is eventually perforated by crack growth from other nucleation sites. Both cases have the same secondary nucleation sites in the upper left of the domain, but their crack paths bifurcate due to crack growth from an additional nucleation site away from the main crack plane in the volumetric-deviatoric case (Fig.~\ref{fig:paths_mixed_elasticity}c). Crack growth from secondary nuclei also occurs in the cases with the strain-spectral driving force, but close enough to the primary crack that the cracks are able to coalesce. Figure \ref{fig:paths_mixed_elasticity} represents the greatest contrast in crack paths between different crack driving forces out of the realizations of the two-phase structure that we have simulated; we typically observed smaller differences in other realizations. However, the similarity between the two non-variational models and the crack widening phenomenon for the variational strain-spectral model were observed consistently across realizations, as well as in simulations with purely tensile average strains. \begin{figure} \centering \includegraphics[width = 15.5cm]{figures/paths_mixed_elasticity.pdf} \caption{Crack paths resulting from mixed tensile-compressive loading of a two-phase random structure of size $L_x=L_y=100\ell$ via phase field fracture with five different mechanics formulations: (a) the variational model for the strain-spectral split, (b) the variational model for the isotropic formulation (no split), (c) the variational model for the volumetric-deviatoric split, (d) the strain-spectral/stress-free non-variational model, and (e) the strain-spectral/vol.-dev.~non-variational model. } \label{fig:paths_mixed_elasticity} \end{figure} Figure \ref{fig:stress_strain_mechanics} depicts the stress-strain curves for the simulations whose crack paths are plotted in Fig.\ \ref{fig:paths_mixed_elasticity}. All stress-strain curves show a linear elastic regime followed by a jagged pattern of snap-back events and reloading that is typical for the near-equilibrium evolution method in a heterogeneous structure.
In Fig.\ \ref{fig:stress_strain_mechanics}a, which compares the variational mechanics models, the isotropic and volumetric-deviatoric models result in qualitatively similar stress-strain curves that differ significantly from the strain-spectral curve after the initial snap-back event and from each other after ${\sim}3$ additional snap-back events. This is an expected consequence of the difference in crack paths in Fig.\ \ref{fig:paths_mixed_elasticity}. In Fig.\ \ref{fig:stress_strain_mechanics}b, all of the models with the strain-spectral crack driving force have the same pattern of snap-back events for much of their evolution. The stress-strain curves for the two non-variational models are essentially identical, but they differ from the variational strain-spectral model by having consistently lower stresses, a difference that increases as fracture progresses. The end of the stress-strain curve for the strain-spectral variational model is characterized by oscillations between low and high strain, and higher strains are needed for fracture compared to the other models. This oscillatory behavior, not present with the other models, is likely an artifact of our near-equilibrium algorithm and not representative of the equilibrium path. All of the mechanics models have similar peak $\bar \sigma_{yy}$ stresses, with $0.27\sigma_M$ for the volumetric-deviatoric variational model and $0.26\sigma_M$ for the other models. This is contrary to the behavior of these models in a homogeneous material, where the compressive strain component would contribute to the crack driving force in the isotropic and volumetric-deviatoric models but not the strain-spectral model. Following the methodology in Eq.~\eqref{eq:sigma_M}, the strain-spectral crack driving force would have a fracture stress in a homogeneous material that is 12\% higher than those of the isotropic and volumetric-deviatoric models. \begin{figure} \centering \includegraphics[width = 15.0cm]{figures/stress_strain_mixed.pdf} \caption{Stress-strain plots corresponding to fracture simulations of the random two-phase structure shown in Fig.\ \ref{fig:paths_mixed_elasticity} with (a) variational mechanics models and (b) mechanics models with the strain-spectral crack driving force. The curve for the strain-spectral variational model is shown as a thin solid blue line in both plots. (a) also depicts curves for the volumetric-deviatoric split (solid olive line) and isotropic model (pink dashed line). (b) also depicts curves for the strain-spectral/stress-free model (solid orange line) and the strain-spectral/vol.-dev.~model (red dashed line). } \label{fig:stress_strain_mechanics} \end{figure} Since the crack widening/smoothing phenomenon observed for the strain-spectral variational model in Fig.\ \ref{fig:paths_mixed_elasticity}a is not observed in any other model, we have conducted simulations with additional formulations and evolution methods to test its generality. Widening/smoothing similar to that in Fig.\ \ref{fig:paths_mixed_elasticity}a is observed for crack-set irreversibility, the AT2 model, and the time-dependent evolution method. When alternating minimization is applied to smooth random structures with the strain-spectral variational model, we find minimal widening but substantial smoothing compared to the strain-spectral/stress-free case. An example of this effect is shown in Fig.~\ref{fig:paths_alternating_grf}.
\begin{figure} \centering \includegraphics[width = 10.0cm]{figures/paths_alternating_grf.pdf} \caption{Crack paths (filled $\phi=0.5$ contours) for simulations of fracture of a smooth random structure via the alternating minimization method with (a) the strain-spectral/stress-free mechanics model and (b) the strain-spectral variational model. } \label{fig:paths_alternating_grf} \end{figure} \subsubsection{Discussion} Our results suggest that the distinction between crack driving force (the form of $\partial \psi(\phi,\mathbf{\varepsilon})/\partial \phi$ employed in the phase field evolution equation) and contact model (the form of $\partial \psi(\phi,\mathbf{\varepsilon})/\partial \mathbf{\varepsilon}$ employed in the mechanical equilibrium equation) is the key to arriving at an acceptable mechanics formulation based on the tension-compression splits of $\psi(\mathbf{\varepsilon})$ that are commonly used in the literature. The simulations of in-plane shear (mode II) fracture of a homogeneous domain demonstrate that the crack driving force corresponding to the strain-spectral split of $\psi(\mathbf{\varepsilon})$ is in reasonable agreement with canonical experimental results and therefore the sharp-crack theories of fracture that are based upon them \cite{erdogan_crack_1963,zehnder_fracture_2012}. The other crack driving forces we consider, the isotropic model with no splitting and the volumetric-deviatoric split, do not pass this simple test. The agreement between the experimental crack path and our mode II fracture simulations with the strain-spectral crack driving force holds for all cases except one, Fig.\ \ref{fig:shear_pf_compare}a, with the variational strain-spectral model and a phase field initial crack. This exception is the only case out of Figs.~\ref{fig:shear_pf_compare} and \ref{fig:shear_cx_compare} in which the strain-spectral contact model is active and exposed to non-tensile strains. The fixed contact resulting from this model significantly alters the stress distribution for mode II loading compared to the stress-free contact model, as shown in Fig.\ \ref{fig:shear_stress}, which likely results in the disagreement seen in Fig.\ \ref{fig:shear_pf_compare}a. The strain-spectral contact model weakens the material completely only for purely tensile strains, and thus it retains significant stresses for shear strains. Cracks that are not orthogonal to the tensile loading direction (i.e., not horizontal in Fig.\ \ref{fig:paths_mixed_elasticity}) are likely to contain shear strains, leading to the higher stresses for the strain-spectral contact model noted in Fig.~\ref{fig:stress_strain_mechanics}b. We anticipate that these exaggerated `frictional' stresses lead to the artificial widening and smoothing of cracks in heterogeneous structures observed in Figs.~\ref{fig:paths_mixed_elasticity} and \ref{fig:paths_alternating_grf}. The two models in which both the crack driving force and contact model are satisfactory are both non-variational: they combine the strain-spectral crack driving force with either the stress-free contact model or the volumetric-deviatoric contact model (a schematic sketch of this combination is given below). One would expect to find differences between these two models under compression, where the contact model is likely to have a greater effect on the stress distribution and overall stiffness of the structure. For the load cases we consider here, however, we observe no significant differences in crack paths or stress-strain curves between the non-variational models.
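In implementation terms, such a non-variational combination simply evaluates different energy splits in the two governing equations. The following schematic sketch (our illustration in Python with NumPy, not the production code of this work; two-dimensional, with illustrative Lam\'e parameters) pairs the strain-spectral driving force with the stress-free (isotropic) contact model:
\begin{verbatim}
import numpy as np

lam, mu = 1.0, 1.0   # illustrative Lame parameters

def spectral_positive_energy(eps):
    # Positive part of the elastic energy density (strain-spectral split).
    w = np.linalg.eigvalsh(eps)              # principal strains
    tr_pos = max(np.trace(eps), 0.0)
    return 0.5 * lam * tr_pos**2 + mu * np.sum(np.maximum(w, 0.0)**2)

def driving_force(eps, dh_dphi):
    # d psi / d phi for the phase field equation: strain-spectral split.
    return dh_dphi * spectral_positive_energy(eps)

def stress(eps, h):
    # d psi / d eps for mechanical equilibrium: stress-free contact model,
    # i.e., the entire isotropic stress is degraded by h(phi).
    return h * (lam * np.trace(eps) * np.eye(2) + 2.0 * mu * eps)
\end{verbatim}
In a variational model, by contrast, both derivatives would be taken from one and the same split energy density.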
The stress-free contact model is computationally advantageous because it is linear \cite{ambati_review_2015}, and thus we use it with the strain-spectral crack driving force for the simulations in other sections of this paper. It is perhaps unsatisfying that our results favor non-variational models. While such models are increasingly popular \cite{ambati_review_2015,bilgen_crack-driving_2019} and can more easily match empirical strength surfaces for macroscopically homogeneous materials \cite{wu_unified_2017,kumar_revisiting_2020}, variational models have an appealing theoretical coherence. We suspect that the free energy functionals that have been proposed for phase field fracture are intrinsically too simple to correctly model contact in a variational model because they lack information about the orientation of the crack. The crack/surface normal vector is essential to models of static friction such as Coulomb's law \cite{popov_coulombs_2017}, which we presume to be the desired physics for a non-healing crack after fracture. In contrast, phase field formulations determine the stress-strain response based purely on the state of strain and the pointwise value of the phase field. Such formulations lack the angular information provided by the crack normal vector, and thus will only ever coincidentally match models for frictional or frictionless contact. In the absence of a unified variational model that captures both contact and fracture, it becomes a reasonable strategy to mix and match the parts of existing models that are least objectionable for the task at hand. Finally, we consider our results in the context of other efforts to critically examine mechanics formulations. Works that focus on strength surfaces \cite{kumar_revisiting_2020,de_lorenzis_nucleation_2021} are largely orthogonal to ours. However, comparisons of crack paths from mode II fracture simulations are provided in Refs.~\cite{ambati_review_2015,bilgen_crack-driving_2019,zhang_assessment_2022}. Curiously, these studies find different crack paths for the strain-spectral model: Refs.~\cite{ambati_review_2015,zhang_assessment_2022} found a crack path similar to our stress-free initial crack, while Ref.~\cite{bilgen_crack-driving_2019} found one similar to our phase field initial crack (i.e., in poor agreement with experiments). For Ref.~\cite{zhang_assessment_2022}, it appears that a stress-free initial crack was in fact used, but the choice of initial crack is not given explicitly in Refs.~\cite{ambati_review_2015,bilgen_crack-driving_2019}. In additional simulations, we found moderate effects on the mode II crack path from the length of the initial crack and Poisson's ratio that do not affect our conclusions, but which would affect qualitative comparisons to other works. The use of periodic vs.~fixed boundary conditions could also have an effect, but this is difficult to check with our methods. Due to the lack of heterogeneity, we do not expect effects from other differences in formulation (e.g., AT1 vs.~AT2 and near-equilibrium vs.~minimization). \subsection{Phase Field Formulation} In this section, we consider effects of three aspects of phase field fracture models that have no direct equivalent in models for propagation of sharp cracks: the form of the pointwise fracture energy density $f(\phi)$ (AT1 vs.\ AT2), the choice of irreversibility criterion for $\phi$ (crack-set vs.\ damage), and the ratio between the microstructural length scale $L_\mathrm{cut}$ and the crack width parameter $\ell$.
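For orientation, in the usual normalization of these models (see, e.g., Refs.~\cite{pham_gradient_2011,tanne_crack_2018}; we state the standard constants here rather than repeating our own formulation), the fracture energy functionals read \[ F_f[\phi]=\frac{3G_c}{8\ell}\int_\Omega\left(\phi+\ell^2|\nabla\phi|^2\right)\mathrm{d}\Omega\;\;\text{(AT1)},\qquad F_f[\phi]=\frac{G_c}{2\ell}\int_\Omega\left(\phi^2+\ell^2|\nabla\phi|^2\right)\mathrm{d}\Omega\;\;\text{(AT2)}. \] Thus $f(\phi)$ is linear for AT1 and quadratic for AT2; the linear form yields an elastic stage in which $\phi$ remains zero below a stress threshold, whereas the quadratic form allows $\phi$ to grow at any nonzero load.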
Effects of these aspects of the model have been extensively considered for homogeneous materials (see, e.g., Refs.~\cite{pham_gradient_2011,tanne_crack_2018} for AT1 vs.\ AT2 and Ref.~\cite{tanne_crack_2018} for effects of the irreversibility condition and $\ell$). Our focus is therefore on qualitative differences in crack paths in randomly heterogeneous structures. Simulations in this section are conducted with the near-equilibrium evolution method using a uniaxial tensile applied strain with the increment given in Eq.~\eqref{eq:loading_tensile}. We focus primarily on the strain-spectral/stress-free mechanics model, but cross-effects with other mechanics models are also illustrated. \subsubsection{Model Comparison} Figure \ref{fig:paths_formulation} considers how crack paths are affected by three different aspects of the model formulation: AT1 vs.\ AT2, damage vs.\ crack-set irreversibility, and the choice between the isotropic, volumetric-deviatoric, and strain-spectral/stress-free mechanics models. (We exclude the strain-spectral variational model due to its non-physical crack widening and the strain-spectral/vol.-dev.\ model due to its similarity to the strain-spectral/stress-free model.) Figure \ref{fig:paths_formulation} demonstrates a striking sensitivity of the crack path to all three aspects of the formulation. For the strain-spectral/stress-free model in Fig.~\ref{fig:paths_formulation}a-d.1, we see three different crack paths, with only the AT1 damage and AT2 damage cracks closely resembling each other (Fig.~\ref{fig:paths_formulation}a.1 and c.1, respectively). However, even these two crack paths have different intermediate states and different final values for the fracture energy $F_f$, with $137.8G_c\ell$ and $160.5G_c\ell$ for the AT1 and AT2 damage cases, respectively. Considering the other mechanics models, we see at least five distinct crack paths, with Fig.~\ref{fig:paths_formulation}a.1, b.1, d.1, c.2, and d.2 being typical examples. Combined with results for other structures (not included here), Fig.\ \ref{fig:paths_formulation} suggests that there is no clear pattern for how these three aspects of the model formulation affect the crack path. Between similar crack paths, we note that the damage-type irreversibility results in higher final fracture energies $F_f$ than the crack-set irreversibility (within Fig.~\ref{fig:paths_formulation}, compare a.2 to b.2, or c.3 to d.3, for example), and the AT2 damage model in particular usually has the highest values of $F_f$ overall. \begin{figure} \centering \includegraphics[width = 16.0cm]{figures/paths_formulation.pdf} \caption{Crack paths for the two-phase structure from Fig.~\ref{fig:paths_mixed_elasticity} simulated under a uniaxial tensile applied strain with the (a,b) AT1 and (c,d) AT2 phase field formulations with (a,c) damage and (b,d) crack-set irreversibility criteria using the (1) strain-spectral/stress-free, (2) volumetric-deviatoric, and (3) isotropic mechanics models. } \label{fig:paths_formulation} \end{figure} To illustrate a key difference between the AT1 and AT2 models, Fig.~\ref{fig:phi_formulation} shows the phase field during fracture initiation in simulations corresponding to Fig.~\ref{fig:paths_formulation}a.1 and d.1. Fig.~\ref{fig:phi_formulation}a and b thus correspond to the AT1 damage and AT2 crack-set models, respectively, but the irreversibility condition should not affect the distribution of $\phi$ prior to fracture initiation.
In the AT1 model, evolution of $\phi$ is localized to a peak at the primary nucleation site in the center and 4--5 smaller peaks in other locations, while the rest of the structure is undamaged. In the AT2 model, $\phi$ is non-zero within the entire structure, and broad regions exist with moderate damage ($0.2 < \phi < 0.6$). With crack-set irreversibility, these regions have the opportunity to `heal' after fracture initiation, but with damage irreversibility they affect the structure permanently. One of these regions in the lower right of Fig.~\ref{fig:phi_formulation}b matches the location of a secondary crack in the AT2 damage case in Fig.~\ref{fig:paths_formulation}c.1 that eventually merges with the primary crack. \begin{figure} \centering \includegraphics[width = 9cm]{figures/phi_nucleation.pdf} \caption{Pseudocolor images of the phase field $\phi$ at simulation steps corresponding to fracture initiation with (a) the AT1 model and (b) the AT2 model for the two-phase structure depicted in Fig.~\ref{fig:paths_formulation}. } \label{fig:phi_formulation} \end{figure} To understand differences in evolution over the course of the entire simulation, Fig.~\ref{fig:iteration_formulation} plots the evolution of the average value of $\phi$ and the fracture energy $F_f$ vs.\ iteration for the simulations of the two-phase structure shown in Fig.~\ref{fig:paths_formulation}a-d.1. In Fig.~\ref{fig:iteration_formulation}a, simulations with the AT2 model show a large initial increase in average $\phi$, which should correspond to the state shown in Fig.~\ref{fig:phi_formulation}b. With the crack-set irreversibility condition, this increase is followed by a large decrease and additional oscillations until the end of the simulation. With the damage irreversibility condition, average $\phi$ continues to increase monotonically at a slower rate, leading to a very high final average $\phi$ compared to the other three cases. With the AT1 model, both irreversibility conditions result in similar steady growth, with occasional slight decreases in average $\phi$ observed for crack-set irreversibility. Compared to average $\phi$, there is less of a difference between AT1 and AT2 in the evolution of $F_f$ in Fig.~\ref{fig:iteration_formulation}b. This is consistent with the delocalized evolution of $\phi$ in Fig.~\ref{fig:phi_formulation}b because low values of $\phi$ contribute less to $F_f$ in the AT2 model due to the quadratic form of $f(\phi)$. All four cases show steady increases in $F_f$, with monotonic growth for damage irreversibility and occasional slight decreases for crack-set irreversibility. \begin{figure} \centering \includegraphics[width = 16cm]{figures/iteration_formulation.pdf} \caption{Plots of (a) average $\phi$ and (b) fracture energy $F_f$ vs.\ iteration for simulations of fracture of a two-phase structure with the AT1 model with crack-set (thin blue line) and damage (thick orange line) irreversibility and the AT2 model with crack-set (thin red line) and damage (thick cyan line) irreversibility. Simulations plotted correspond to the crack paths in Fig.~\ref{fig:paths_formulation}a-d.1. } \label{fig:iteration_formulation} \end{figure} Evolution of $\phi$ in the AT2 model prior to fracture is well known to affect the mechanical response \cite{pham_gradient_2011}. Figure \ref{fig:stress_strain_formulation} shows stress-strain plots for the simulations in Fig.~\ref{fig:paths_formulation}a-d.1. In Fig.~\ref{fig:stress_strain_formulation}b, the AT2 model results in a decrease in stiffness prior to fracture.
This in turn results in a lower fracture stress of $0.22\sigma_M$ compared to $0.27\sigma_M$ for the AT1 model in Fig.~\ref{fig:stress_strain_formulation}a, with $\sigma_M$ calculated for the AT1 model from Eq.~\eqref{eq:sigma_M}. Sawtooth-like features prior to fracture with the AT2 model correspond to relaxation of $\phi$ before the next strain increment is applied. Stress-strain curves for the two irreversibility conditions are the same prior to fracture, but eventually they deviate due in part to the differences in crack path shown in Fig.~\ref{fig:paths_formulation}. Stress-strain curves for the crack-set cases show signs of `stiffening' (increases in average stiffness), likely due to healing of $\phi$ where it is below the crack-set threshold of 0.9. At the location labeled `1' in Fig.~\ref{fig:stress_strain_formulation}b, this stiffening occurs during a decrease in applied strain, resulting in a nearly horizontal segment of the stress-strain curve. At location 2, stiffening coincides with an increase in applied strain, resulting in a snap-back event with a cusp appearing `inside' another snap-back event. Our control algorithm for the near-equilibrium method handles these examples gracefully, but in general additional precautions may be needed to prevent simulations with crack-set irreversibility from being trapped in cycles of loading and unloading that lack irreversible evolution. \begin{figure} \centering \includegraphics[width = 13cm]{figures/stress_strain_formulation.pdf} \caption{Stress-strain plots for fracture of a two-phase structure with (a) the AT1 model with crack-set (thin blue line) and damage (thick orange line) irreversibility and (b) the AT2 model with crack-set (thin red line) and damage (thick cyan line) irreversibility. Simulations plotted correspond to the crack paths in Fig.~\ref{fig:paths_formulation}a-d.1. Selected instances of `stiffening' (increases in average stiffness) with crack-set irreversibility are highlighted with black arrows. The scaling stress $\sigma_M$ for both plots is based on the AT1 model, i.e., Eq.~\eqref{eq:sigma_M}. } \label{fig:stress_strain_formulation} \end{figure} \subsubsection{Convergence with Respect to Microstructural Length Scale} We now consider how differences between the AT1 and AT2 models and the damage and crack-set irreversibility conditions change as the microstructural length scale $L_\mathrm{cut}$ is increased relative to the crack width parameter $\ell$. This can be interpreted as an evaluation of $\Gamma$-convergence, since our use of $\ell$ as a characteristic length scale prevents us from investigating the limit $\ell\to 0$ directly. As noted in Section \ref{sec:structure_gen}, we change $L_\mathrm{cut}$ by interpolating structures generated with $L_\mathrm{cut}=6\ell$ in a domain of size $L_x=L_y=100\ell$ onto a larger grid with $1023^2$ or $2047^2$ points compared to the original grid size of $N_x=N_y=511$. These larger grids in turn represent larger domain sizes, $L_x=L_y=200\ell$ or $L_x=L_y=400\ell$, resulting in $L_\mathrm{cut}=12\ell$ or $L_\mathrm{cut}=24\ell$, respectively. Figure \ref{fig:paths_ell} shows crack paths resulting from simulations under the same conditions as in Fig.~\ref{fig:paths_formulation}a-d.1 with the structure upscaled from $L_\mathrm{cut}=6\ell$ to $L_\mathrm{cut}=12\ell$ (the interpolation step is sketched below).
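The interpolation itself can be realized in several ways; a minimal sketch (assuming periodic fields and Fourier interpolation via spectral zero-padding; the scheme actually used in Section~\ref{sec:structure_gen} may differ in detail):
\begin{verbatim}
import numpy as np

def upscale_periodic(field, new_n):
    # Interpolate a periodic 2-D field onto a finer new_n x new_n grid
    # by embedding its spectrum in a larger (zero-padded) grid.
    n = field.shape[0]
    F = np.fft.fftshift(np.fft.fft2(field))
    pad = (new_n - n) // 2                 # (1023 - 511) // 2 = 256
    F = np.pad(F, pad)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F))) * (new_n / n) ** 2

E = np.random.rand(511, 511)      # stand-in for a generated structure
E_up = upscale_periodic(E, 1023)  # same structure; L_cut goes from 6l to 12l
\end{verbatim}
The upscaled field represents the same microstructure; only its length scales relative to $\ell$ (and the grid resolution) change.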
The crack paths in Fig.~\ref{fig:paths_ell} are thinner than their equivalents with $L_\mathrm{cut}=6\ell$ (a result of our use of the $\phi=0.5$ contour for visualization), and exhibit sharp changes in direction that might have appeared smoother in similar cracks at $L_\mathrm{cut}=6\ell$. In their overall structure, three of the crack paths (corresponding to the AT2 model and the AT1 damage case) now agree with each other for much of their length. The evolution of these crack paths most closely resembles the AT2 crack-set case in Fig.~\ref{fig:paths_formulation}d.1, while the final crack path also resembles the AT1 crack-set case in Fig.~\ref{fig:paths_formulation}b.1 and similar crack paths with the AT1 model and other mechanics formulations. Meanwhile, the AT1 crack-set case in Fig.~\ref{fig:paths_ell}b is changed significantly from Fig.~\ref{fig:paths_formulation}b.1: the initial crack now stops growing and the structure is fractured by a secondary crack initiated in the upper right corner. Consistent with Fig.~\ref{fig:paths_ell}, results for other realizations of the two-phase structure show more agreement between crack paths with different model formulations at $L_\mathrm{cut}/\ell=12$ than at $L_\mathrm{cut}/\ell=6$. These other realizations also show agreement between the $L_\mathrm{cut}/\ell=12$ and $L_\mathrm{cut}/\ell=6$ crack paths for the same model formulation more often than Fig.~\ref{fig:paths_ell} alone would suggest. However, there does not appear to be any pattern in this agreement between forms of $f(\phi)$ or irreversibility conditions. \begin{figure} \centering \includegraphics[width = 16.4cm]{figures/paths_ell_arxiv.pdf} \caption{Crack paths for the structure from Fig.~\ref{fig:paths_formulation} upscaled to size $L_x=L_y=200\ell$ such that the cutoff length scale $L_\mathrm{cut}$ describing the microstructure is $12\ell$ instead of $6\ell$. Simulation conditions are otherwise the same as Fig.~\ref{fig:paths_formulation}a-d.1. } \label{fig:paths_ell} \end{figure} A clearer picture emerges when $L_\mathrm{cut}/\ell$ is increased in the smooth random structures. These do not exhibit much variation in crack path with the near-equilibrium evolution method (most are nearly flat, as in Fig.~\ref{fig:paths_small_evolution}c), but the location where the crack nucleates can differ between the AT1 and AT2 models. Figure \ref{fig:nucleation_grf} compares nucleation sites between the AT1 and AT2 models for three smooth random structures at two or three levels of $L_\mathrm{cut}/\ell$. Nucleation sites in Fig.~\ref{fig:nucleation_grf} are designated as the location where $\phi$ first exceeds 0.9 in a given simulation. In all three of the original structures with $L_\mathrm{cut}=6\ell$, the AT2 model nucleates the crack at a different location from the AT1 model. (For comparison, this was only observed in one of the three two-phase structures for $L_\mathrm{cut}=6\ell$.) In two out of three cases, the nucleation site for the AT2 model converges to that of the AT1 model as $L_\mathrm{cut}/\ell$ increases, with agreement at $L_\mathrm{cut}/\ell=24$ in Fig.~\ref{fig:nucleation_grf}a and $L_\mathrm{cut}/\ell=12$ in Fig.~\ref{fig:nucleation_grf}b. In Fig.~\ref{fig:nucleation_grf}c, the nucleation site of the AT1 model is itself not converged at $L_\mathrm{cut}/\ell=6$, but the AT2 model still nucleates at this `old' site at $L_\mathrm{cut}/\ell=12$, before nucleating at yet another site when $L_\mathrm{cut}/\ell=24$.
The AT1 model in Fig.~\ref{fig:nucleation_grf} uses damage irreversibility while the AT2 model uses crack-set irreversibility, but we do not expect the irreversibility condition to affect the nucleation site. \begin{figure} \centering \includegraphics[width = 13.6cm]{figures/nucleation_grf.pdf} \caption{Comparison of crack nucleation sites, indicated by red plus signs, between the AT1 and AT2 models for three smooth random structures (a, b, and c) that are upscaled to create three size scales: $L_\mathrm{cut}=6\ell$, $L_x=L_y=100\ell$ (original); $L_\mathrm{cut}=12\ell$, $L_x=L_y=200\ell$; and $L_\mathrm{cut}=24\ell$, $L_x=L_y=400\ell$. The nucleation location appears to converge at lower values of $L_\mathrm{cut}/\ell$ for the AT1 model compared to the AT2 model. } \label{fig:nucleation_grf} \end{figure} \subsubsection{Discussion} One way to interpret the differences between crack paths in Fig.~\ref{fig:paths_formulation} is that one model (and thus crack path) is more correct than the others. For the mechanics formulations and evolution methods, we evaluated models based in part on simple simulations that are easier to analyze than the heterogeneous structures. In this section, we primarily refer to analyses already present in the literature. Linse et al.~\cite{linse_convergence_2017} find that damage irreversibility prevents convergence of the dissipated fracture energy $F_f$ to its ideal value because diffuse evolution of $\phi$ prior to nucleation of the crack is not allowed to heal. This diffuse evolution of $\phi$ is not eliminated by decreasing $\ell$ relative to the length of a reduced-stiffness region in their 1D domain, and thus they find that the damage irreversibility condition is incompatible with $\Gamma$-convergence of $F_f$. In our own results, we find a difference in $F_f$ between the damage and crack-set irreversibility conditions for the AT2 model (e.g., in Fig.~\ref{fig:paths_formulation}) that matches the findings of Linse et al., but it is not clear whether the damage irreversibility condition harms $\Gamma$-convergence of the crack path (in our case, convergence in the limit $L_\mathrm{cut}/\ell \to \infty$). In the AT1 model, which they do not consider, we find a much smaller effect of damage irreversibility on $F_f$ compared to the AT2 model. Tann\'e et al.~\cite{tanne_crack_2018} fixed $\ell$ based on the fracture stress and the toughness $G_c$ and found that the AT1 model successfully approximates fracture across a range of weak and strong stress singularities (introduced by V-notches) and concentrations (introduced by U-shaped notches). The AT2 model was found to successfully model crack nucleation at strong stress concentrations/singularities, but it diverged from experiments for weaker concentrations/singularities. This agrees quite well with our experience, where the two-phase structures have stronger stress concentrations than the smooth random structures (per the lower fracture stress and strain in Fig.~\ref{fig:stress_strain_evolution}b compared to Fig.~\ref{fig:stress_strain_evolution}a) and result in less of a difference in nucleation behavior between the AT1 and AT2 models. In the smooth random structures, faster convergence of the nucleation location with respect to $L_\mathrm{cut}/\ell$ for the AT1 model supports its use instead of the AT2 model for weak stress concentrations even when $\ell$ is not fixed based on material properties.
(The relationship between $\ell$, $G_c$, and fracture stress can be manipulated by changing $f(\phi)$ and $h(\phi)$, for example \cite{wu_length_2018}.) Overall, our findings are generally consistent with those of Tann\'e et al.~\cite{tanne_crack_2018} and Linse et al.~\cite{linse_convergence_2017}, which taken together suggest that the AT1 model and crack-set irreversibility should be preferred. However, the high sensitivity of crack paths in the two-phase structure to any change in model formulation suggests that focusing on a single `correct' path may not be a desirable approach. For one thing, none of the methods is consistently converged with respect to $L_\mathrm{cut}/\ell$ for $L_\mathrm{cut} < 12\ell$, suggesting that studying only converged `correct' paths may be computationally challenging. Furthermore, the crack paths shown in Figs.~\ref{fig:paths_formulation} and \ref{fig:paths_ell} are all qualitatively similar, with no clear systematic difference due to the irreversibility condition, form of $f(\phi)$, or mechanics formulation. Even increasing $L_\mathrm{cut}/\ell$ only appears to systematically affect the crack path at small length scales close to the crack width. If crack propagation in the two-phase structures is interpreted as a highly sensitive or chaotic process \cite{gerasimov_stochastic_2020}, then all of these aspects may simply be influencing which crack path is selected out of several statistically indistinguishable realizations. The real question then becomes which aspects of model formulation have systematic effects on statistical descriptors of the crack path, such as its power spectrum \cite{ponson_statistical_2016}. This question is substantially different from the qualitative approach taken in this work; we speculate on which aspects of phase field fracture models will result in such quantitative effects in the following section. \subsection{Discussion} In Section \ref{sec:background}, we examined several different ways in which phase field models for quasi-static brittle fracture can be formulated. In the previous three sections, we have tested the effects of different formulations on crack path selection in elastically heterogeneous microstructures. Our results indicate that the near-equilibrium evolution method and non-variational mechanics models with the strain-spectral driving force are better than their alternatives at modeling quasi-static brittle fracture. We argue based on our simulation results and previous studies that the AT1 model should be preferred to the AT2 model and the crack-set irreversibility condition to the damage irreversibility condition. Crack path selection in our two-phase structures appears to be highly sensitive to the aspects of model formulation that we consider, and an `ideal' model for quasi-static brittle fracture under tension would be composed of these preferred variants. However, it is not clear whether many of the differences in crack path result from differences in the formulations themselves or from the high sensitivity of the crack path selection process in the two-phase structures. Quantitative analysis of a larger dataset of crack paths for an ensemble of statistically identical microstructures would provide a stronger basis for evaluating systematic differences between model variants, and would be an interesting approach for future work. This would be in a way similar to a stochastic approach proposed by Gerasimov et al.~\cite{gerasimov_stochastic_2020} for systems with homogeneous material properties. 
In the meantime, we can speculate about quantitative effects based on our qualitative observations. We expect that sufficiently high overstress would result in systematic and statistically significant effects on crack paths simulated with the time-dependent and minimization evolution methods in heterogeneous structures. Likewise, the crack smoothing and widening observed for the strain-spectral variational mechanics formulation systematically affects the crack path and the dissipated fracture energy $F_f$. Damage irreversibility in the AT2 model also systematically affects $F_f$, although we cannot yet say whether it systematically affects the crack path. A minimal recommendation from this study is that these expected systematic effects on crack path and $F_f$ should be avoided when investigating randomly heterogeneous materials. \section{Conclusions} We have presented a comprehensive overview of how popular variants of the phase field model for quasi-static brittle fracture affect crack path selection in systems with both uniform and randomly heterogeneous elastic properties. We consider four ways in which phase field models for quasi-static brittle fracture can vary: in how the phase field is evolved, in the formulation of the coupling between the elastic field and the phase field, in the form of the phase field approximation to the crack length/area (AT1 or AT2 \cite{tanne_crack_2018}), and in the conditions under which evolution of the phase field is considered to be irreversible (everywhere, as in damage models, or only within a crack set). We probe the effects of these variants in simulations with spatially uniform elastic properties, random two-phase structures with contrasting Young's moduli, and random structures with smoothly varying Young's modulus. For the random structures, we examine how crack paths (and their sensitivity to model variants) change as the size scale of the structure is changed relative to the crack width parameter $\ell$ in the phase field model. We consider all of these variants within a common numerical approach that combines novel and standard aspects. We identify three types of evolution method for the phase field: minimization, time evolution, and near-equilibrium evolution. Our implementation of the near-equilibrium method is novel but the others are standard, and for all three methods we employ staggered solutions of the phase field and mechanics sub-problems. We use an FFT-accelerated strain-based micromechanics solver for the mechanics sub-problem~\cite{zeman_finite_2017,leute_elimination_2022,ladecky_optimal_2022}, and implement a bound-constrained conjugate gradients algorithm~\cite{vollebregt_bound-constrained_2014} to solve for the phase field while directly enforcing irreversibility constraints. We find that crack paths in heterogeneous structures differ significantly between the near-equilibrium evolution method and the minimization and time-dependent evolution methods under overstressed conditions. Such conditions occur when the near-equilibrium method undergoes unloading (i.e., snap-back) but the time-dependent and minimization methods do not. This effect on crack path in the minimization method relies on the interaction between overstress and material heterogeneity, and thus it is not apparent in simulations without heterogeneity. Effects of overstress with the time-dependent method, such as crack broadening and branching, resemble effects of overstress in dynamic fracture models \cite{bleyer_dynamic_2017}.
The near-equilibrium method most closely resembles classical models for quasi-static fracture and avoids overstress-related effects present in the other methods. In our examination of different mechanics formulations, we find distinct effects due to the choice of driving force (elasticity formulation in the phase field governing equation) and the choice of contact model (elasticity formulation in the mechanical equilibrium equation). We consider elastic energy densities with no splitting between tension and compression (i.e., the isotropic model), and splittings based on volumetric-deviatoric and spectral decompositions of the strain tensor. Of these, only the driving force for the strain-spectral split results in agreement with an experimental mode II (in-plane shear) crack path. However, the contact model for the strain-spectral split results in a crack that bears significant shear stresses, leading to artificial widening and smoothing of cracks in heterogeneous microstructures. Desirable combinations of driving force and contact model for predominantly tensile loading can be obtained via non-variational mechanics formulations that combine the driving force from the strain-spectral split with the contact model from another formulation. We find that crack paths in the heterogeneous structures are sensitive to multiple aspects of the formulation besides evolution method and mechanics formulation. Sensitivity to aspects that do not have an equivalent in sharp-crack models of crack propagation (e.g., AT1 vs.~AT2 and crack-set vs.~damage irreversibility) is reduced when the length scale of the microstructure is larger relative to the crack width parameter $\ell$. Our results along with previous work \cite{tanne_crack_2018} suggest that the AT1 model is advantageous for $\Gamma$-convergence in the presence of relatively weak stress concentrations. Likewise, the crack-set irreversibility condition is preferable for use with the AT2 model \cite{linse_convergence_2017}. A potential approach for future studies would be to test for systematic differences between methods via statistical characterization of the crack path. \section*{Acknowledgements} We thank Till Junge and Jan Zeman for useful discussion, and Ali Falsafi, Richard Leute, Antoine Sanner, and Sindhu Singh for assistance in code development and deployment. We used \textsc{$\mu$Spectre} (\url{https://gitlab.com/muspectre/muspectre}) to solve the mechanical problem. Funding was provided by the European Research Council (StG-757343) and by the Deutsche Forschungsgemeinschaft under Germany’s Excellence Strategy (EXC-2193/1 – 390951807). Numerical simulations were performed on bwForCluster NEMO (University of Freiburg, DFG grant INST 39/963-1 FUGG). \section*{Competing interests} The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
\section{\label{sec:Intro}Introduction} Quantum random walks describe a walker, often a particle, that explores some state space using coherent wave-like dynamics. These non-classical dynamics allow the walker to propagate faster in structures with certain symmetries. Quantum walks have been shown to be universal for quantum computation \cite{Childs09}, and may deliver computational speed-ups for many different problems \cite{Childs03, Shenvi03, Childs07, Ambainis07}. They have also been proposed as a possible mechanism for energy transfer in biological systems such as photosynthetic molecules \cite{Sension07}. Since the original theoretical proposal \cite{Aharonov93}, many variants of quantum random walks have been introduced. Discrete-time quantum walks can use different coins \cite{Tregenna03}, or be defined without coins via alternative quantization procedures \cite{Szegedy04}, or equivalently, using tessellations of the state space \cite{Portugal16}. For a review of both continuous- and discrete-time quantum walks, see \cite{Venegas-Andraca12}. Quantum random walks have been implemented in various systems: neutral trapped atoms in optical lattices \cite{Karski09}, trapped ions \cite{Schmitz09}, and nuclear magnetic resonance \cite{Du03,Ryan05}. Photonic implementations are among the most promising approaches in this regard, using either bulk optics or integrated photonic chips. Previously implemented photonic quantum random walks include walks on a 1D linear graph \cite{Perets08, Broome2010,Biggerstaff2016}, on a circle \cite{Bian2017,Nejadsattari2019}, and in two-dimensional, cycle-free (tree) graphs \cite{Caruso2016,Boada2017,Chen2018,Tang2018,Wang2020}, using either continuous time or discrete time steps. Quantum walks with more than one walking particle have also been implemented using photonics in a way that mimics bosonic or fermionic particle behaviors \cite{Sansoni12}. Photonic Ring Resonators (PRRs) are ring-like coupled waveguides first introduced as narrow-band optical filters. \acrshort{rr}s have been extensively studied \cite{Rabus2007,Bogaerts2012} and find applications ranging from sensors \cite{Kim2016} to micro-lasers \cite{Stern2017} to fast optoelectronic circuits \cite{Moazeni2017}. \acrshort{rr}s can be coupled to other \acrshort{rr}s in complex \acrshort{rr} configurations \cite{Bachman2015} or other photonic elements, becoming one of the most widely used building blocks of today's photonic integrated circuits. In this work, we model the coherent dynamics of light in coupled \acrshort{rr}s as quantum random walks in these structures. We describe a family of graphs that model light propagation along multiple series-coupled \acrshort{rr}s and calculate predictions from classical and quantum propagation dynamics on these graphs. The quantum model corresponds to coherent light propagation, whereas the corresponding classical model has the same hopping probability between each pair of coupled waveguides but no coherent effects. Comparison between the quantum and classical results enables us to find sufficient conditions for demonstrating quantum advantage in the transport efficiency over these structures. We show that the classical model is recovered from the quantum model predictions when we average over the phase acquired when photons go around each ring. Besides the theoretical modeling, we also report preliminary experimental feasibility studies regarding the implementation of \acrshort{rr}s using polymeric waveguides in air. This paper is organized as follows.
In Section~\ref{sec:qrw}, we describe our model for quantum random walks on series-coupled photonic ring waveguides. The classical model calculations are described in Section~\ref{sec:CRW}, with the quantum model calculations presented in Section~\ref{sec:QRW} and a comparison between the two in Section~\ref{sec:CRWvsQRW}. Section~\ref{sec:Experimental} presents a feasibility study based on different materials and designs, highlighting some of the experimental challenges associated with polymeric devices. A discussion of the findings is presented in Section~\ref{sec:discussion}, with some concluding remarks in Section~\ref{sec:conclusion}. \section{Modeling walks in series-coupled photonic ring resonators} \label{sec:qrw} In this section, we model light propagation in multiple series-coupled photonic ring resonators, as depicted in Fig.~\ref{fig:concept}(a). The walk begins at the node representing the input port of the system (top-left), where the light enters the device, and ends when the walker reaches either of the output ports: the \textit{Through} node $T$ (abbreviated to \textit{Thru}) or the \textit{Drop} node $D$. On the way, the walker hops between half-rings, each of which is represented by an intermediate node $P_i$. Let $p(a \to b)$ express the probability of the walker hopping from node $a$ to node $b$. The $\kappa_i$ and $\tau_i$ coefficients define the coupling between adjacent waveguides, so that $|\kappa_i|^2+|\tau_i|^2=1$. Note that the hopping probability depends both on where the walker is and where it is going. The physical system composed of coupled photonic ring resonators, as shown in Fig.~\ref{fig:concept}(a), can thus be modeled by the graph in Fig.~\ref{fig:concept}(b), where we identify the input port, the two output ports, all half-rings of the structure, and the hopping probabilities between them. The loss coefficient $\alpha$ quantifies the ring round-trip transmission. It is omitted from Fig.~\ref{fig:concept}(b) but is considered in the following calculations. In the proposed photonic implementation, the light injected at the input port propagates continuously through the system, acquiring a phase shift $\theta_i$ around each ring. The phase shift depends on the ring radius and wavelength and determines the constructive and destructive interference responsible for the differences between the quantum and classical walks on the same graph structures. The physical process of light propagation is continuous in time, but it can be modeled as a discrete process, where each step corresponds to the time it takes for light to propagate around one half-ring. This enables analyzing the differences between the quantum and classical solutions in their discrete-time dynamics. To compare the dynamics of classical and quantum random walks in these structures, we consider a hypothetical puzzle whose goal is to reach the \textit{Drop} port, and we analyze how well the classical and quantum models can be tuned, through their parameters, to maximize the probability of reaching that goal. We present two scalable methods to calculate the probabilities of propagation to the \textit{Thru} and \textit{Drop} ports; both extend to any number of series-coupled rings and can yield the probability of propagation to any node in the graph. For the sake of simplicity, we solve and discuss the simplest single-ring configuration (Fig.~\ref{fig:concept:1ring}). The double-ring case results are presented in Appendix \ref{ap:CRW:2rings}.
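Before deriving these results, it may help to see the second (transfer-matrix) method in action. The following minimal sketch (Python with NumPy; the coupling and loss values are arbitrary illustrations) iterates the single-step transfer matrix of the single-ring graph of Fig.~\ref{fig:concept:1ring}(b), anticipating Eq.~(\ref{eq:Markcov:classical:T}), and compares the resulting \textit{Drop}-port probability with the closed-form result of Eq.~(\ref{eq:CRW:1ring:p0-pd-final}):
\begin{verbatim}
import numpy as np

k1, k2, alpha = 0.4, 0.7, 0.98        # illustrative couplings and loss
t1, t2 = 1.0 - k1, 1.0 - k2
ra = np.sqrt(alpha)                   # loss factor per half-ring

# Single-step transfer matrix over the nodes (P0, P1, P2, D, T)
T = np.array([[0.0, 0.0,     0.0,     0.0, 0.0],
              [k1,  0.0,     t1 * ra, 0.0, 0.0],
              [0.0, t2 * ra, 0.0,     0.0, 0.0],
              [0.0, k2 * ra, 0.0,     1.0, 0.0],
              [t1,  0.0,     k1 * ra, 0.0, 1.0]])

p = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # walker starts at the input node P0
for _ in range(400):                     # enough steps to converge
    p = T @ p

p_D_closed = k1 * k2 * ra / (1.0 - alpha * t1 * t2)
print(p[3], p_D_closed)               # both print 0.3365...
\end{verbatim}
The iterated \textit{Drop}-port probability converges geometrically, since each extra round trip contributes a factor $t_1 t_2 \alpha < 1$.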
\begin{figure} \includegraphics[width=8.1cm]{Figures/Fig05-Concept-line} \caption{\label{fig:concept} Random walks in series-coupled ring resonators. (a) Series-coupled multi-ring resonator. $\kappa_i$ and $\alpha$ are, respectively, the coupling coefficients between the indicated pairs of waveguides and the loss coefficient. The indicated values correspond to a walker moving consecutively from one ring to the next without looping. (b) Graph proposed to model propagation in the series-coupled ring resonators. $k_i = |\kappa_i|^2$ are the walker hopping probabilities associated with each pair of nodes. The loss coefficient $\alpha$ is omitted from the figure. } \end{figure} \subsection{Classical Random Walk} \label{sec:CRW} Let $p_C(P_i \to P_j)$ be the classical probability of a walker moving from point $P_i$ to $P_j$. If $P_i$ and $P_j$ are non-adjacent, $p_C(P_i \to P_j)$ represents the sum over the probabilities of all possible paths between those end-points. For conciseness, let $p_C^D$ and $p_C^T$ denote the probabilities that a walker starting at the input node $P_0$ reaches the \textit{Drop} and \textit{Thru} ports, respectively. $p_C^D$ and $p_C^T$ can be calculated analytically using two general approaches: 1) by explicitly summing the probabilities associated with all possible paths from $P_0$ to $P_D$ and $P_T$ (note that the number of paths is infinite, as there is no bound on the number of loops the walker can take around each ring; as we will see, this is a classical analogue of the quantum-mechanical Feynman path sum calculation); 2) using a transfer matrix-based Markov chain method. The first method clarifies the differences between quantum and classical walks over these graphs, whereas the second method is systematic and practical for systems of multiple rings. \subsubsection{Summing over paths} Here we will consider the sum-over-paths solution for the simplest configuration of a single ring coupled to two linear waveguides (see Fig.~\ref{fig:concept:1ring}). Let us begin by analyzing the probability of reaching the \textit{Drop} port. Starting at node $P_0$, the possible paths to reach $P_D$ differ only in the number of loops the walker takes around the ring. In particular, the walker can reach it after $1/2, 3/2, 5/2, \dots$ turns around the ring, where the associated path probability is reduced by a factor of $\alpha^\frac{1}{2}$ for each half turn around the ring. To simplify notation, let us define $t_i = 1-k_i$, and use $n$ to represent the number of random walk steps. Also, let $f_C(n)$ describe the classical probability of the walker, having started at node $P_0$, reaching one of the output nodes exactly at step $n$. Note the difference between $f_C(n)$ and the cumulative probability $p_C(n)$ of the walker having arrived at $P_D$ or $P_T$ in any number of time steps up to $n$. Since the walker cannot return from the output nodes $P_D$ or $P_T$ back to any other graph node, the cumulative probabilities $p_C^D(n)$ and $p_C^T(n)$ increase monotonically with $n$, and are determined by summing the probabilities of the walker reaching the particular output at each step up to $n$. The walk begins at $n=1$. At this stage, the walker either hops directly to the \textit{Thru} port (in which case the walk is over) or enters the ring. Hence, $f_C^T(1) = t_1$ and $f_C^D(1) = 0$. At $n=2$, provided the walker entered the ring, it can either hop to the \textit{Drop} port or remain in the ring. Hence, $f_C^T(2) = 0$ and $f_C^D(2) = k_1k_2\alpha^\frac{1}{2}$.
Similarly, at $n = 3$, the walker can hop to the \textit{Thru} port but not to the \textit{Drop} port. Hence, $f_C^T(3) = k_1^2t_2\alpha$ and $f_C^D(3) = 0$. As the walk progresses, the probability functions $f_C^D(n)$ and $f_C^T(n)$ can be expressed as \begin{multline} f_C^D(n) = \left\{ \begin{array}{ll} 0 & ,n = 1,3,5,\dots \\ k_1k_2\alpha^\frac{1}{2}(t_1t_2\alpha)^{\frac{n-2}{2}} & , n = 2,4,6,\dots \end{array} \right. \end{multline} \noindent and \begin{multline} f_C^T(n) = \\ \left\{ \begin{array}{ll} t_1 &, n = 1\\ 0 & ,n = 2,4,6,\dots \\ k_1^2t_2\alpha(t_1t_2\alpha)^\frac{n-3}{2} & , n = 3,5,7,\dots \end{array} \right. \end{multline} The cumulative probability $p_C^D(n)$ of the walker reaching the \textit{Drop} port after $n$ steps can be explicitly calculated as a geometric series sum. For $n = 2,4,6,\dots$: \begin{align} p_C^D(n) = & \sum_{j=1}^{n} f_C^D(j) \\ = & k_1 k_2\alpha^\frac{1}{2} \left[1+t_1 t_2\alpha + \dots + (t_1 t_2\alpha)^\frac{n-2}{2}\right] \label{eq:CRW:1ring:po-pd-sum} \end{align} The total (cumulative) probability after an infinite number of steps $p_C^D \equiv p_C^D(n\to\infty)$ can thus be written as \begin{align} p_C^D = k_1 k_2\alpha^\frac{1}{2}\sum_{m=0}^{\infty} (t_1 t_2\alpha)^m = & \frac{k_1 k_2\alpha^\frac{1}{2}}{1-\alpha t_1 t_2}, \label{eq:CRW:1ring:p0-pd-final} \end{align} \noindent where $m = (n-2)/2$, for $n = 2,4,6,\dots$. Similarly, for the \textit{Thru} port, for $n = 1,3,5,\dots$: \begin{align} p_C^T(n) = & \sum_{j=1}^{n} f_C^T(j) \\ = & t_1 + \alpha k_1^2 t_2 \left[ 1+\alpha t_1 t_2 + \dots +(\alpha t_1 t_2)^\frac{n-3}{2} \right] \label{eq:CRW:1ring:po-pt-sum} \end{align} The total (cumulative) probability after an infinite number of steps $p_C^T \equiv p_C^T(n\to\infty)$ can thus be written as \begin{equation} p_C^T = t_1 + \alpha k_1^2 t_2 \sum_{m=0}^\infty (\alpha t_1 t_2)^m = \frac{t_1 + t_2\alpha - 2t_1 t_2 \alpha}{1 - t_1 t_2 \alpha}, \label{eq:CRW:1ring:p0-pt-final} \end{equation} \noindent where $m = (n-3)/2$ for $n = 3,5,7,\dots$. It is easy to check that $p_C^T + p_C^D = 1$ for the lossless case ($\alpha = 1$). As the number $n$ of walk steps increases, the $f_C(n)$ functions alternate between zero and non-zero probabilities, depending on the parity of $n$. This leads $p_C^D(n)$ and $p_C^T(n)$ to increase monotonically with $n$, with non-zero increases respectively for even and odd $n$. This method mirrors the quantum-mechanical calculation we will show later by explicitly adding the probabilities associated with all possible paths. Nevertheless, considering all possible paths becomes unwieldy as a calculation method, as shown in \textit{Appendix}~\ref{ap:CRW:2rings}, where we use it to calculate the probabilities associated with the two-ring configuration. A more convenient approach relies on the Markov Chain method, described next. \begin{figure} \includegraphics[width=7.9cm]{Figures/Fig07-Concept-line-1Ring} \caption{\label{fig:concept:1ring} Schematic of the random walk on a single-ring system, and its model. (a) Single-ring waveguide configuration, coupling $\kappa_i, i = \{1,2\}$, and loss $\alpha$ coefficients. (b) Graph used to model random walks for the single-ring system. $k_i = |\kappa_i|^2, i = \{1,2\}$ are the walker hopping probabilities between the node pairs indicated. The loss coefficient $\alpha$ is omitted in the figure but considered in the calculations.
} \end{figure} \subsubsection{Markov Chain method} Let the tuple $p^{(n)}$ describe a list of probabilities that the walker will be found in each of the nodes after $n$ steps: \begin{equation} p^{(n)} = T^np^{(0)} \end{equation} where $p^{(0)}$ describes the initial probability distribution of the walker, and $T^n$ is the transfer matrix describing the node transition probabilities after $n$ steps. Now, note that the same matrix $P$ diagonalizes $T^n$ for any power $n$: \begin{equation} T=PDP^{-1} \to T^n=(PDP^{-1})^n=P D^n P^{-1}. \end{equation} So \begin{equation} D^n = P^{-1}T^nP. \end{equation} The transfer matrix $T^1$ describes the hopping probabilities corresponding to the first step and can be directly read out from the graph of Fig.~\ref{fig:concept:1ring}(b). We can then obtain the matrix $P$ that diagonalizes $T$, i.e., find $P$ and diagonal $D$ such that $D=P^{-1}TP$, and calculate the probability distribution over the nodes after $n$ steps as \begin{equation} \label{eq:Markov:pNmain} p^{(n)} = T^n p^{(0)}=P D^{n} P^{-1} p^{(0)} \end{equation} We can use Eq.~(\ref{eq:Markov:pNmain}) to find the probability distribution for the walker for any number of rings and any number $n$ of steps, with an arbitrary initial probability distribution of the walker over the nodes. For the case of the quantum random walk that we will describe later, the approach will be very similar, except that the tuples represent probability amplitudes rather than probabilities, with $T$ encoding transition amplitudes instead. For the graph corresponding to the single-ring configuration of Fig.~\ref{fig:concept:1ring}(b) and a walker starting at $P_0$, we can write \begin{eqnarray} p_C^{(n)} = \begin{pmatrix} p^0\\p^1\\p^2\\p^D\\p^T \end{pmatrix}_C^{(n)} = T^n \begin{pmatrix} 1\\0\\0\\0\\0 \end{pmatrix}, \label{eq:Markcov:classical:P} \\ T^1 = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ k_1 & 0 & t_1\alpha^\frac{1}{2}& 0 & 0 \\ 0 & t_2\alpha^\frac{1}{2}& 0 & 0 & 0 \\ 0 & k_2\alpha^\frac{1}{2}& 0 & 1 & 0 \\ t_1 & 0 & k_1\alpha^\frac{1}{2}& 0 & 1 \end{pmatrix}, \label{eq:Markcov:classical:T} \end{eqnarray} where the $\alpha^\frac{1}{2}$ terms account for the losses in each half-ring. Plugging Eqs.~(\ref{eq:Markcov:classical:P}) and (\ref{eq:Markcov:classical:T}) into Eq.~(\ref{eq:Markov:pNmain}) for $n\to\infty$ recovers the $p_C^D$ and $p_C^T$ obtained in the previous section, Eqs.~(\ref{eq:CRW:1ring:p0-pd-final}) and (\ref{eq:CRW:1ring:p0-pt-final}), respectively. To help compare the different random walk dynamics, let us set $p_g=2/3$ as an arbitrary threshold probability for $P_D$, and analyze the parameter regimes that are sufficient to reach that threshold. Fig.~\ref{fig:CRW} shows the $p_C^T$ and $p_C^D$ distributions for the classical random walk, sweeping over the hopping probabilities $k_1$ and $k_2$. The black dashed lines delimit the $(k_1,k_2)$ region for which the goal is achieved, i.e., $p_C^D > p_g=2/3$. This region of success is relatively small, and $p_C^D = 1$ only for $k_1=k_2=1$. Since $p_C^T$ and $p_C^D$ add up to one for the lossless case ($\alpha=1$), all \textit{Thru} port observations are complementary to the \textit{Drop} ones and thus all $p_C^T$ results shall henceforth be omitted from the figures. \begin{figure} \centering \includegraphics[width=8.6cm]{Figures/Fig21-CRW-1Ring} \caption{Probabilities of getting to the \textit{Thru} and \textit{Drop} ports for the classical random walk on the single-ring graph of Fig. \ref{fig:concept:1ring}(b).
(a,b) Probability of reaching $P_T$ (a) and $P_D$ (b), as a function of the two hopping probabilities $k_1$ and $k_2$. } \label{fig:CRW} \end{figure} \subsection{Quantum Random Walk} \label{sec:QRW} While the final probability distributions of a classical walk depend only on the initial state and the hopping probabilities $k_i$, photons display constructive and destructive interference effects resulting in different dynamics. For ease of comparison with the classical case, let us assume all ring waveguides are identical, having the same radii and index of refraction. The probability amplitude acquired after each ring round is \begin{equation} \label{eq:QRW:gamma} \gamma=\alpha e^{i \theta}. \end{equation} The loss coefficient $\alpha = e^{-(\alpha_{t} (L + L_c) + \alpha_{b}L)}$ depends on the material absorption $\alpha_{t}$ and waveguide bending loss $\alpha_{b}$, where $L = 2\pi r$ and $L_c$ are the ring and coupler lengths ($L_c > 0$ for a \textit{racetrack} ring system). The phase $\theta$ acquired by a photon after going around the loop once is given by \begin{equation} \label{eq:QRW:theta} \theta = 2\pi n_{eff} \frac{L}{\lambda}, \end{equation} \noindent where $n_{eff}$ and $\lambda$ are the waveguide effective refractive index and photon wavelength, respectively. The assumption that all rings have the same radius will allow us to characterize the \textit{Drop} and \textit{Thru} probabilities as a function of time, parameterized in units corresponding to the time taken for light to go around one half-ring, $\Delta t= \pi r n_{eff}/c$. As we will see, interference effects will change the \textit{Drop} and \textit{Thru} probabilities and can be tuned by changing the photon wavelength $\lambda$, or ring radius $r$ (in the latter case, the unit of time will also change accordingly). There are several ways to obtain the probabilities that a quantum walker starting at the input port $P_0$ will arrive at the \textit{Drop} and \textit{Thru} ports, respectively $p_Q^D$ and $p_Q^T$. Again, in this section, we will obtain those for the single-ring configuration, leaving the case of two or more rings for discussion in Appendix~\ref{ap:QRW:2rings}. The first method is the Feynman path sum over amplitudes corresponding to all possible paths. We refer to the Fig.~\ref{fig:concept:1ring}(b) schematic, with the understanding that now we must represent probability amplitudes $a(P_i \to P_j)$ for the different transitions, rather than hopping probabilities. The phases must be chosen so that the couplings between waveguides are described by a unitary transformation; for the single-ring configuration, the simplest choice is to set all amplitude phases equal to zero, except $a(P_0 \to P_1)= -k_1^\frac{1}{2}$. As in the classical case, we define the probability amplitudes $f_Q^D(n)$ and $f_Q^T(n)$ associated with the walker getting to either output port at step $n$: \begin{multline} f_Q^D(n) =\\ \left\{ \begin{array}{ll} 0 & ,n = 1,3,5,\dots \\ -(k_1k_2\gamma)^\frac{1}{2}\left((t_1t_2)^\frac{1}{2}\gamma\right)^{\frac{n-2}{2}} & , n = 2,4,6,\dots \end{array} \right. \end{multline} and \begin{multline} f_Q^T(n) =\\ \left\{ \begin{array}{ll} t_1^\frac{1}{2} &, n = 1\\ 0 & ,n = 2,4,6,\dots \\ -k_1t_2^\frac{1}{2}\gamma\left((t_1t_2)^\frac{1}{2}\gamma\right)^\frac{n-3}{2} & , n = 3,5,7,\dots \end{array} \right. \end{multline} The cumulative probability amplitudes of getting to the output ports in any number of steps can be obtained by mirroring the calculation for the classical case, except now we are summing amplitudes.
For the \textit{Drop} case: \begin{align} a^D(n) = & \sum_{j=1}^{n} f_Q^D(j) \\ = & -(k_1 k_2 \gamma)^\frac{1}{2} \left[1 + (t_1 t_2)^\frac{1}{2}\gamma + \dots + \right.\nonumber\\ & \hspace{3.625cm} \left. + ((t_1 t_2)^\frac{1}{2}\gamma)^\frac{n-2}{2} \right] \label{eq:QRW:1ring:po-pd-sum_a} \end{align} The total (cumulative) probability amplitude $a^D \equiv a^D(n\to\infty)$ after an infinite number of steps can thus be written as \begin{equation} a^D= - (k_1 k_2\gamma)^\frac{1}{2} \sum_{m=0}^\infty\left((t_1t_2)^\frac{1}{2}\gamma\right)^m = -\frac{(k_1 k_2 \gamma)^\frac{1}{2}}{1- (t_1 t_2)^\frac{1}{2}\gamma}. \label{eq:qdrop1ring} \end{equation} \noindent where $m = (n-2)/2$ for $n = 2,4,6,\dots$. The calculation for $a^T$ also follows the same lines as the classical sum over paths calculation: \begin{align} a^T(n) = & \sum_{j=1}^{n} f_Q^T(j) \\ = & t_1^\frac{1}{2} - k_1 t_2^\frac{1}{2}\gamma \left[1 + (t_1t_2)^\frac{1}{2}\gamma + \dots + \right. \nonumber\\ & \hspace{3.2cm} + \left.\left((t_1t_2)^\frac{1}{2}\gamma\right)^\frac{n-3}{2} \right] \label{eq:QRW:1ring:po-pt-sum_a} \end{align} The cumulative probability amplitude of getting to the \textit{Thru} port then becomes \begin{align} a^T = t_1^\frac{1}{2} - k_1 t_2^\frac{1}{2}\gamma \sum_{m=0}^\infty\left((t_1t_2)^\frac{1}{2}\gamma\right)^m = \frac{t_1^\frac{1}{2} -t_2^\frac{1}{2}\gamma}{1 - (t_1 t_2)^\frac{1}{2}\gamma}, \label{eq:qt1ring} \end{align} \noindent where $m = (n-3)/2$ for $n = 3,5,7,\dots$. We see that the \textit{Drop} and \textit{Thru} probabilities for the classical case (and probability amplitudes for the quantum case) are both given as a sum over probabilities (or amplitudes) corresponding to each of the infinite possible paths from $P_0$ to $P_D$ and $P_T$. We can obtain the expressions for the quantum amplitudes $a^D$ and $a^T$ by substituting amplitudes for probabilities in the classical walk result, as shown by a comparison between Eqs.~\ref{eq:CRW:1ring:p0-pd-final} and \ref{eq:qdrop1ring}, and Eqs.~\ref{eq:CRW:1ring:p0-pt-final} and~\ref{eq:qt1ring}, respectively. The substitutions are: \begin{align} k_i \to & k_i^\frac{1}{2}e^{i \phi_i}\\ t_i \to & t_i^\frac{1}{2} \label{eq:CRWvsQRW:subs:t_i}\\ \alpha \to & \gamma = \alpha e^{i\theta} \end{align} \noindent where the phases $\phi_i$ must be chosen to guarantee unitarity of all waveguide couplings. To obtain the quantum probabilities of a walker going from $P_0$ to $P_D$ or $P_T$, as always in quantum mechanics, we must take the absolute value squared of the amplitudes obtained above: \begin{align} p_Q^D = & \frac{k_1 k_2\alpha^\frac{1}{2}}{1 + t_1 t_2 \alpha - 2(t_1 t_2)^\frac{1}{2} \alpha \cos \theta} \label{eq:QRW:1ring:Id/I0} \\ p_Q^T = & \frac{t_1 + t_2\alpha - 2(t_1 t_2)^\frac{1}{2} \alpha \cos \theta}{1 + t_1 t_2 \alpha - 2(t_1 t_2)^\frac{1}{2} \alpha \cos \theta} \label{eq:QRW:1ring:It/I0} \end{align} For the case of no loss ($\alpha=1$), it is easy to check that $p_Q^D + p_Q^T= 1$, as expected. The expressions above give us the quantum probabilities of reaching either output port after an arbitrarily long time. Suppose we want to know this probability after $n$ random walk steps. In that case, we can truncate the sums above at the corresponding term, or equivalently, use the Markov chain approach described in Appendix~\ref{ap:QRW:1ring:Markov}. Another way of obtaining the steady-state \textit{Drop} and \textit{Thru} probabilities involves appealing to the classical/quantum correspondence principle.
The intensity ratios predicted by the classical electrodynamic description of the problem must match the quantum mechanical probability calculations, so $p_Q^D=I_D/I_0$ and $p_Q^T=I_T/I_0$, where $I_0, I_D, I_T$ represent the input, \textit{Drop} and \textit{Thru} steady-state intensities predicted by classical electrodynamics. This way of doing the calculation allows us to use the boundary conditions at each coupling region to solve Maxwell's equations, and from the solution, obtain the quantum-mechanical result. The quantum probabilities for the \textit{Drop} and \textit{Thru} ports we obtained above match previously reported results \cite{Rabus2007}. \begin{figure} \includegraphics[width=8.6cm]{Figures/Fig29-SingleRR-Sweep-nwl} \caption{Quantum random walk implemented by coherent light propagation in a single-ring resonator. (a-b) Probability $p_Q^D$ of reaching the \textit{Drop} port, sweeping over hopping probabilities $k_1,k_2$ and single-ring acquired phase $\theta = 0$ (a) and $\theta = \pi$ (b). (c-d) $p_Q^D$ for sweeping $k_1$ and $\theta$, with fixed $k_2 = \frac{1}{2}$ (c) and for matching $k_2=k_1$ (d). The black dashed lines indicate the goal-hitting threshold for $p_Q^D = 2/3$. All plots share the same intensity color scale. } \label{fig:QRW:PD} \end{figure} Fig.~\ref{fig:QRW:PD} plots $p_Q^D$ for different combinations of $k_1,k_2$ and $\theta$. Unlike the classical random walk, the response depends strongly on the new parameter $\theta$, the phase acquired after one loop around the ring. The goal-hitting region in parameter space, corresponding to $p_Q^D > p_g=2/3$, is shown in Fig.~\ref{fig:QRW:PD}(a,b) for $\theta = \{0,\pi\}$, respectively. As expected, the resonant ($\theta = 2\pi n, n \in \mathbb{Z}$) and antiresonant ($\theta = \pi + 2\pi n$) phases give maximum and minimum conditions for $p_Q^D$. The $\theta$ sweeps of Fig.~\ref{fig:QRW:PD}(c,d) exhibit the coherent phase effect while sweeping over $k_1$, with a fixed $k_2 = 1/2$ value (Fig.~\ref{fig:QRW:PD}(c)) and with the matching $k_1=k_2$ condition (Fig.~\ref{fig:QRW:PD}(d)). Again, the latter is confirmed to maximize $p_Q^D$, and thus the goal-hitting chance. The observations from Figs.~\ref{fig:CRW} and \ref{fig:QRW:PD} indicate that the coherent phase $\theta$ can be tuned to increase the goal-hitting chance. In the next section, we explore the comparison between quantum and classical random walks on coupled ring resonators in more detail. \subsection{Comparison between classical and quantum random walks} \label{sec:CRWvsQRW} In the previous two sections, we calculated the classical and quantum solutions for the random walk on the graph of Fig.~\ref{fig:concept:1ring}(b), which models a ring resonator. Quantum random walks can achieve faster propagation than classical walks over different graphs. In this section, we consider the goal of traversing the graph from the starting node $P_0$ to the \textit{Drop} port $P_D$, and compare the performance of quantum and classical walks, for any finite number of steps $n$, but also in the steady state, obtained as the number of steps $n \to \infty$. \subsubsection{Steady-state solutions} Section~\ref{sec:CRW} demonstrates that apart from the loss coefficient $\alpha$, the only tunable parameters for the classical random walk are the hopping probabilities $k_i$, cf. Eqs.~\ref{eq:CRW:1ring:p0-pd-final} and \ref{eq:CRW:1ring:p0-pt-final}.
On the other hand, the quantum random walk dynamics over the same structure depend not only on the coupling coefficients but also on the ratio between ring radius $r$ and wavelength $\lambda$, which can be conveniently parameterized by the phase $\theta$ acquired after a single ring round-trip. Even the loss coefficient $\alpha$, which in the classical regime only harms the goal-hitting chance, \textit{can} sometimes be used to optimize the ring resonance contrast \cite{Rabus2007} in the quantum regime. The steady-state comparison between the classical and quantum regimes can be done by analyzing the results from Eqs.~\ref{eq:CRW:1ring:p0-pd-final} and \ref{eq:QRW:1ring:Id/I0}, once more focusing on the lossless case $\alpha = 1$. Comparison between $p_C^D$ and $p_Q^D$ immediately shows that for the resonant condition $\theta = 2\pi n, n\in \mathbb{Z}$, $p_Q^D \geqslant p_C^D$ for any choice of $(k_1,k_2)$. Fig.~\ref{fig:QRW-CRQ:comparison} shows the difference between the quantum and classical probabilities of reaching the \textit{Drop} port $p_Q^D-p_C^D$ as a function of the coupling coefficients (Fig.~\ref{fig:QRW-CRQ:comparison}(a)) and as a function of the ring round-trip phase $\theta$ for matching coupling constants (Fig.~\ref{fig:QRW-CRQ:comparison}(b)). The color scale shows parameter combinations for which the quantum walk outperforms the classical one in shades of orange, and vice versa in shades of blue. The black and blue line patterns highlight the parameter combinations for which the probability of reaching the \textit{Drop} port is larger than the chosen goal-hitting threshold $p^D > p_g=2/3$. Since Fig.~\ref{fig:QRW-CRQ:comparison}(a) is obtained for the resonant condition $\theta = 0$, $p_Q^D \geqslant p_C^D$ for all $(k_1, k_2)$ combinations. On the other hand, Fig.~\ref{fig:QRW-CRQ:comparison}(b) shows that for non-resonant conditions, $p_Q^D$ can often be lower than the classical counterpart $p_C^D$, with a minimum difference of $\approx -0.25$. Further, for $(k_1 = k_2) > 4/5$ and $\theta \in [\pi/2 + 2\pi m,\,3\pi/2+2\pi m],\ m\in\mathbb{Z}$, the classical walk outperforms its quantum counterpart \textit{and} overcomes the goal-hitting threshold ($p_C^D > p_Q^D$ and $p_C^D > p_g=2/3$). \begin{figure} \centering \includegraphics[width=8.6cm]{Figures/Fig34-SingleRR-Classical-k1k2sweep} \caption{Comparison between classical and quantum probabilities of reaching the \textit{Drop} port of the single-ring resonator. (a-b) Difference between quantum $p_Q^D$ and classical $p_C^D$ distributions, sweeping over coupling coefficients $k_1,k_2$ with a fixed coherent phase $\theta = 0$ (a), and sweeping $k_1, \theta$, for matching $k_2=k_1$ (b). The black and blue patterns highlight the parameter combinations for which $p_Q^D$ and $p_C^D$ overcome the goal-hitting threshold $p_g=2/3$. The color scale is shared by both images. } \label{fig:QRW-CRQ:comparison} \end{figure} This oscillation of $p_Q^D$ is a wave phenomenon, resulting in a quantum \textit{Drop} probability that may be either higher or lower than the classical counterpart.
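This steady-state comparison is straightforward to reproduce numerically. The following minimal sketch (our own illustration, not code from the paper) evaluates the lossless forms of Eqs.~(\ref{eq:CRW:1ring:p0-pd-final}) and (\ref{eq:QRW:1ring:Id/I0}) on a $(k_1,k_2)$ grid and confirms that, at resonance, the quantum \textit{Drop} probability is never below the classical one:
\begin{verbatim}
# Minimal sketch (ours): lossless steady-state Drop probabilities.
import numpy as np

def p_drop_classical(k1, k2):
    t1, t2 = 1 - k1, 1 - k2
    return k1 * k2 / (1 - t1 * t2)            # classical result, alpha = 1

def p_drop_quantum(k1, k2, theta):
    t1, t2 = 1 - k1, 1 - k2
    den = 1 + t1 * t2 - 2 * np.sqrt(t1 * t2) * np.cos(theta)
    return k1 * k2 / den                      # quantum result, alpha = 1

k = np.linspace(0.01, 1.0, 200)
K1, K2 = np.meshgrid(k, k)
diff = p_drop_quantum(K1, K2, 0.0) - p_drop_classical(K1, K2)
assert diff.min() >= -1e-12   # p_Q^D >= p_C^D everywhere at resonance
\end{verbatim}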
A simple calculation shows that in the absence of losses ($\alpha=1$), if we average $p_Q^D$ over the acquired phase $\theta$, we recover the classical result $p_C^D$ (lossless case): \begin{align} \left< p_Q^D \right> = & \frac{1}{2\pi}\int_0^{2\pi} p_Q^D(\theta) \mathrm{d}\theta\\ = & \frac{1}{2\pi}\int_0^{2\pi} \frac{k_1k_2}{1+t_1t_2-2(t_1t_2)^\frac{1}{2}\cos \theta} \mathrm{d}\theta\\ = & \frac{k_1k_2}{1-t_1t_2} = p_C^D. \label{eq:QRW-average-Id} \end{align} As $p_Q^D+p_Q^T=1$, averaging $p_Q^T$ over $\theta$ will also recover the classical \textit{Thru} probability. This simple calculation shows how the classical behavior is recovered from the quantum behavior as an average over the only quantum parameter in this model, the phase $\theta$. It is an alternative way of finding the result that is expected if we perform the experiment with incoherent light, for which there will be no definite phase acquired after each ring round trip. \subsubsection{Time-domain solutions and goal-hitting time} \begin{figure} \centering \includegraphics[width=8.6cm]{Figures/Fig57-TimeDomain} \caption{Comparison between the dynamic classical ($p_C^D$) and quantum ($p_Q^D$) random walk probabilities of reaching the \textit{Drop} port of the single-ring resonator. (a) Dynamic $p_C^D$ (black) and $p_Q^D$ (color) over the random walk step $n$ for variable coherent phases $\theta$ and fixed coupling coefficients $k_1=k_2=1/2$. Horizontal dashed line: Goal-hitting threshold $p_g$. (b) Difference between $p_Q^D$ and $p_C^D$ distributions over the random walk step, as a function of the matched coupling coefficients $k_1 = k_2$. (c) Log-log plot of the goal-hitting time (defined at $p_g = 2/3$) as a function of $k_1 = k_2$ for the classical (black) and quantum (color) regimes for variable coherent phases $\theta$.} \label{fig:QRW-CRW:TimeDomain} \end{figure} Thus far, we have only used Eq.~(\ref{eq:Markov:pNmain}) to analyze the steady-state random walk solutions ($n \to \infty$), whose output probabilities agree with the electromagnetic steady-state solutions for the intensity obtained from Maxwell's equations. As long as all rings have the same radius, it is also possible to study the time-dependent solution, corresponding to the output probabilities in multiples of the time it takes light to traverse one half-ring. One can read the time-dependent solutions from our formulations for finite $n$: Eqs.~\ref{eq:CRW:1ring:po-pd-sum} and \ref{eq:CRW:1ring:po-pt-sum} for the classical walk and the absolute square of Eqs.~\ref{eq:QRW:1ring:po-pd-sum_a} and \ref{eq:QRW:1ring:po-pt-sum_a} for the quantum one. Fig.~\ref{fig:QRW-CRW:TimeDomain}(a) shows the time evolution of the \textit{Drop} probabilities for the classical and quantum random walks on a single-ring configuration, considering the balanced $k_1 = k_2 = 1/2$ case, for several values of the coherent phase $\theta$. Whereas the classical \textit{Drop} probability converges monotonically to the steady-state solution, the quantum \textit{Drop} probability shows fluctuations that depend on $\theta$. The fluctuations become more significant when we are close to the antiresonant condition $\theta \to \pi + 2n\pi, n \in \mathbb{Z}$, and virtually vanish close to the resonant condition $\theta \to 2n\pi, n \in \mathbb{Z}$. It is worth recalling that for both classical and quantum walks, the \textit{Thru} and \textit{Drop} probabilities only vary at odd and even time-step values, respectively.
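These time-domain curves can be reproduced by iterating Eq.~(\ref{eq:Markov:pNmain}) step by step. The sketch below is our own illustration (not code from the paper): the classical transfer matrix is that of Eq.~(\ref{eq:Markcov:classical:T}), and the quantum analogue propagates amplitudes with the phase convention introduced above, each half-ring hop carrying $\gamma^{1/2}=\alpha^{1/2}e^{i\theta/2}$:
\begin{verbatim}
# Minimal sketch (ours): step-by-step evolution and goal-hitting time.
import numpy as np

def hitting_time(k1, k2, theta=0.0, alpha=1.0, p_goal=2/3,
                 quantum=True, nmax=10_000):
    t1, t2 = 1 - k1, 1 - k2
    if quantum:                    # amplitude transfer matrix
        g = np.sqrt(alpha) * np.exp(1j * theta / 2)  # per half-ring
        T = np.array([[0, 0, 0, 0, 0],
                      [-np.sqrt(k1), 0, np.sqrt(t1) * g, 0, 0],
                      [0, np.sqrt(t2) * g, 0, 0, 0],
                      [0, np.sqrt(k2) * g, 0, 1, 0],
                      [np.sqrt(t1), 0, np.sqrt(k1) * g, 0, 1]],
                     dtype=complex)
    else:                          # probability transfer matrix
        s = np.sqrt(alpha)
        T = np.array([[0, 0, 0, 0, 0],
                      [k1, 0, t1 * s, 0, 0],
                      [0, t2 * s, 0, 0, 0],
                      [0, k2 * s, 0, 1, 0],
                      [t1, 0, k1 * s, 0, 1]], dtype=float)
    v = np.zeros(5, dtype=T.dtype)
    v[0] = 1                       # walker starts at P0
    for n in range(1, nmax + 1):
        v = T @ v
        pD = abs(v[3])**2 if quantum else v[3]   # Drop occupation
        if pD >= p_goal:
            return n
    return None                    # goal never reached

print(hitting_time(0.5, 0.5, theta=0.0))      # 6 steps at resonance
print(hitting_time(0.5, 0.5, quantum=False))  # None: classical never hits
\end{verbatim}
For $k_1=k_2=1/2$ this returns a goal-hitting time of $n=6$ at resonance, while the classical walk never reaches $p_g$, in agreement with the discussion below.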
For small values of $\theta$ we are close to the resonance condition, and the quantum random walk performs better than its classical counterpart. At resonance, and with $k_1=k_2=0.5$, the threshold \textit{Drop} probability goal $p_g=2/3$ is reached in $n=6$ time steps, whereas the classical random walk never reaches this goal, unless $(k_1=k_2) \geqslant 4/5$ (as can be checked with Eq.~\ref{eq:CRW:1ring:po-pd-sum}). Fig.~\ref{fig:S:TimeDomain} of \textit{Appendix}~\ref{ap:QRW:1ring} shows the individual classical and quantum \textit{Drop} time evolution for variable $\theta$ and $k_1 = k_2$ conditions. The \textit{Drop} and \textit{Thru} probabilities for finite $n$ do not add up to 1, even in the lossless case, as the walker is still filling the waveguide nodes on its way towards the steady-state regime, reached as $n \to \infty$. Fig.~\ref{fig:QRW-CRW:TimeDomain}(b) compares the time evolution of the classical random walk and the resonant quantum random walk while sweeping the values of $k_1 = k_2$. The black and blue line patterns highlight the parameter space regions for which $p_Q^D(n) \geqslant p_g$ and $p_C^D(n) \geqslant p_g$, respectively, where $p_g = 2/3$. One can see that the quantum \textit{Drop} probability is much higher than its classical counterpart around the resonances (as indicated by the orange shade), reaching the goal-hitting threshold $p_g$ after 6 steps for $\theta = 0$, while the classical random walk (for $k_1 = k_2 = 1/2$) never does. Also, the quantum walk hits the goal for all $(k_1 = k_2) > 0$ values, with a generally higher \textit{Drop} probability (thus larger goal-hitting chance) than the classical analogue, particularly for low $(k_1 = k_2)$ values. Furthermore, the numerical results indicate that at resonance ($\theta = 0$), $p_Q^D(n) \geqslant p_C^D(n)$ for any $k_1=k_2$, meaning that in these conditions, the cumulative probability of the walker having reached the \textit{Drop} port in the resonant quantum random walk regime is never smaller than in the classical case. We define the goal-hitting time as the number of walking steps required to achieve the goal-hitting probability threshold $p_g=2/3$. Fig.~\ref{fig:QRW-CRW:TimeDomain}(c) plots the classical and quantum goal-hitting times as a function of $k_1 = k_2$ for variable coherent phase $\theta$ values. The goal-hitting $k_1 = k_2$ range increases as $\theta$ approaches the resonant condition, taking an increasingly longer hitting time towards the lower $k_1 = k_2$ values. Even though Eq.~(\ref{eq:QRW:1ring:Id/I0}) shows that $p_Q^D(\theta = 0) > p_g$ for all $k_1 = k_2>0$, Fig.~\ref{fig:QRW-CRW:TimeDomain}(c) demonstrates that there is an approximate power law between the goal-hitting time and $k_1 = k_2$. \section{Feasibility Study} \label{sec:Experimental} The control of coherent and resonant effects in \acrshort{rr}s relies strongly on the waveguide materials and fabrication accuracy. \acrshort{rr}s have been around for a long time and have become one of the most commonly used photonic integrated chip (PIC) building blocks in Si photonics. However, Si is not transparent in the visible wavelength range and thus is usually operated in the infrared around $\lambda = 1.550$~$\mathrm{\mu}$m{}. Polymers designed for photonics provide good transparency in the visible and near-infrared wavelength ranges.
Still, their lower refractive indices (usually between $1.3$ and $1.7$ in the visible \cite{Liu2009}, up to $1.936$ \cite{Ritchie2021}) compared to Si ($n_g=3.6\ @\lambda=1.550$~$\mathrm{\mu}$m{}) provide weaker mode confinement and thus larger waveguide bending losses. As a result, the \acrshort{rr} dimensions must be much larger, which narrows the free spectral range between resonance peaks, thus reducing the control over the device output. While infrared Si photonic elements are already used routinely in \acrshort{pic}s, polymeric ones are not. This, together with the theoretical models we have described, motivated us to perform preliminary feasibility tests of the proposed quantum random walk implementation using polymer materials (EpoCore/EpoClad) in the visible wavelength range. In particular, we assess the experimental control over the coupling coefficients $k_i$ by theoretically calculating and experimentally verifying them as a function of the coupler length $L_s$ and distance $d$. One can find sample preparation and characterization details in \textit{Appendix}~\ref{ap:QRW:Feasibility}. Fig.~\ref{fig:S:exp-Couplers}(a) shows the top-view light scattering of fiber side-coupled polymeric directional couplers ($L_s = 100$~$\mathrm{\mu}$m{}) for three coupler distances $d = \{0, 0.14, 2.5\}$~$\mathrm{\mu}$m{}. The $d$ dependence can be qualitatively observed as total, partial, and no coupling from the input (top) to the transport (bottom) waveguide. The coupling coefficients (intensity splitting) are quantitatively characterized by the intensity at the waveguide outputs (measured at the chip edge). Fig.~\ref{fig:S:exp-Couplers}(b,c) shows the intensity splitting obtained for coupler distances $d=0.12$~$\mathrm{\mu}$m{} (b) and $d=0.14$~$\mathrm{\mu}$m{} (c), as a function of the coupler length. The coupling coefficients were theoretically predicted using a mode solver software \cite{Fallahkhair2008} and Eqs.~\ref{eq:ap-exp-Coupler-Lb}-\ref{eq:ap-exp-Coupler-k} of \textit{Appendix}~\ref{ap:QRW:Feasibility}. We observe a general agreement with the theory but with large standard deviations that indicate the experimental challenge. \begin{figure} \centering \includegraphics[width=8.6cm]{Figures/Fig70-Exp-Couplers} \caption{Experimental characterization of the coupling coefficient in parallel planar polymeric waveguides. (a) Top-view light-scattering intensity of optical fiber-coupled polymeric-waveguide directional couplers with coupler distances $d = \{0, 0.14, 2.5\}$~$\mathrm{\mu}$m{}. (b,c) Experimental and theoretical intensity splitting (coupling coefficients) as a function of the coupler length for coupler spacing $d=0.12$~$\mathrm{\mu}$m{} (b) and $d=0.14$~$\mathrm{\mu}$m{} (c). The error bars indicate the experimental $\pm$ standard deviation. Excitation wavelength $\lambda=635$~nm. } \label{fig:S:exp-Couplers} \end{figure} As mentioned above, the maximum free spectral range achievable with a \acrshort{rr} depends on the minimum bending radius supported by the waveguide. Fig.~\ref{fig:exp:matBeningLosses} analyzes the simulated bending losses associated with example waveguide configurations with core/cladding materials: EpoCore/EpoClad Fig.~\ref{fig:exp:matBeningLosses}(a), Si/SiO\tsb{2} Fig.~\ref{fig:exp:matBeningLosses}(b) and Si\tsb{3}N\tsb{4}/SiO\tsb{2} (abbreviated to SiN/SiO\tsb{2}) Fig.~\ref{fig:exp:matBeningLosses}(c-d).
The waveguide designs consist of either a core material on a cladding substrate, surrounded by air (Fig.~\ref{fig:exp:matBeningLosses}(a)), or completely surrounded by the cladding material (Fig.~\ref{fig:exp:matBeningLosses}(b-d)). The simulations were performed for wavelengths $\lambda = 0.635$~$\mathrm{\mu}$m{} (Fig.~\ref{fig:exp:matBeningLosses}(a,c)), and $\lambda = 1.55$~$\mathrm{\mu}$m{} (Fig.~\ref{fig:exp:matBeningLosses}(b,d)) and the geometries were optimized for single-mode waveguiding in each material-wavelength combination. Fig.~\ref{fig:exp:matBeningLosses}(e) plots the bending losses associated with each waveguide design, including the fully surrounded EpoCore/EpoClad configuration. The plot shows how the different materials support strikingly different minimum waveguiding bending radii, spanning from $\approx 1$~$\mathrm{\mu}$m{} for Si/SiO\tsb{2} up to $700$~$\mathrm{\mu}$m{} for EpoCore/EpoClad. \begin{figure} \centering \includegraphics[width=8.6cm]{Figures/Fig90-Exp-Implementation} \caption{Single-mode waveguiding for different materials and wavelengths. (a-d) TM waveguiding mode supported by core/cladding waveguides of EpoCore/EpoClad (a), Si/SiO\tsb{2} (b), and Si\tsb{3}N\tsb{4}/SiO\tsb{2} (abbreviated to SiN/SiO\tsb{2}) (c,d), supporting wavelengths $\lambda = 0.635$~$\mathrm{\mu}$m{} (a,c) and $\lambda = 1.55$~$\mathrm{\mu}$m{} (b,d). (e) Waveguide bending losses (transmission per $90^{\circ}$) associated with (a-d). ECore/Eclad (violet curve) relates to an EpoCore core completely surrounded by EpoClad. } \label{fig:exp:matBeningLosses} \end{figure} We characterize the bending losses in an EpoCore/EpoClad/Air configuration and compare them against the simulation (Fig.~\ref{fig:S:exp-BendLosses} of \textit{Appendix}~\ref{ap:QRW:Feasibility}). The substantial transmission variance indicates challenging experimental reproducibility beyond the simulation predictions. \section{Discussion} \label{sec:discussion} This paper proposes using coupled ring resonators to implement quantum random walks. We model light propagation in series-coupled ring resonators as random walks on an appropriate family of graphs and calculate the finite-time and steady-state solutions for both classical and quantum random walks on these graphs. The probability of a walker reaching either of the two output ports (labeled as \textit{Thru} and \textit{Drop}) can be obtained using a classical random walk formalism by considering the hopping probabilities at each node. On the other hand, the physical implementation of the system using \acrshort{rr}s depends on coherent and resonant effects that bias the walker ``decisions,'' leading to modified output probabilities, which are proportional to the intensities expected to be measured experimentally. The analysis presented has been devoted mainly to the simplest single add-drop photonic ring resonator (\acrshort{rr}) case, which allows drawing the most important observations from the system with reduced calculation complexity. We have also obtained steady-state solutions for the two-ring configurations. Multi-ring systems add complexity and output tunability (as more parameters come into play). Still, the main observation remains that one can use the coherent and resonant \acrshort{rr} effects to tune (quantum bias) the response of a random walk graph and improve its algorithmic efficiency. The two proposed calculation methods provide an intuitive and a scalable approach to calculate the \textit{Thru} and \textit{Drop} output probabilities.
While the Feynman path sum becomes unwieldy to tackle systematically for multi-ring systems, as inter-ring loop combinations need to be considered (see the double-ring system analysis in \textit{Appendix}~\ref{ap:RW:2rings}), the Markov chain method provides a straightforward numerical calculation for any number of rings, yielding both steady-state and dynamic probability distributions at each random walk node. Analytical solutions can be obtained from the Markov chain method using symbolic computational solvers such as \textit{Wolfram Mathematica}. \acrfull{fdtd} and \acrfull{fdfd} photonic simulation models can be used to simulate the quantum random walk scenario. However, such numerical simulation approaches will require considerable (and perhaps prohibitive) computation power and time resources. Eq.~\ref{eq:QRW-average-Id} reveals that the coherent phase average of the quantum \textit{Drop} and \textit{Thru} port probabilities converges towards the classical solutions. This means the quantum regime corresponds to a field redistribution over the graph, as a function of the coherent effects and ring resonances. In terms of goal-hitting rate and speed, there must always be a trade-off between resonant and antiresonant conditions, so that on average, the quantum random walk behaves just like the classical random walk. The key is thus to optimize the coherent parameters (ring radii and wavelength) to improve the algorithmic efficiency relative to the classical regime. It should be noted that even though we have only addressed series-coupled \acrshort{rr} arrays, the same type of modeling can be applied to more complex 2D coupling configurations \cite{Bachman2015}. This suggests that 2D arrays of series- and parallel-coupled ring resonators may provide a flexible and experimentally accessible model for coherent light propagation in two-dimensional structures. The control of coherent and resonant effects in \acrshort{rr}s relies strongly on the waveguide materials and fabrication accuracy. In particular, the effective waveguide refractive index determines the waveguiding mode confinement, which affects single-mode waveguide dimension requirements and its associated bending losses. In turn, the bending losses limit the minimum ring radius supported by the waveguide, which plays a key role in the free spectral range between resonance peaks. Polymer-based photonic elements have recently arisen as a low-cost, versatile alternative to Si-based photonic integrated chips, with outstanding transparency in the visible and near-infrared wavelength ranges. Polymer photonics boasts interesting properties such as biocompatibility, mechanical flexibility, and photosensitivity, which allows \acrfull{dlw}-based 3D patterning. However, the lower refractive indices in polymers (usually between $1.4$ and $1.7$ in the visible) compared to Si ($n_g=3.6\ @\lambda=1.550$~$\mathrm{\mu}$m{}) provide weaker mode confinement and thus larger waveguide bending losses. As a result, the \acrshort{rr} dimensions must be much larger, which narrows the free spectral range, thus reducing the control over the device output. Very narrow free spectral ranges (e.g., narrower than the bandwidth of the light source) make it difficult to distinguish resonances experimentally and thus introduce decoherence. As the device output becomes the average over multiple wavelength contributions, the fine control described in Section~\ref{sec:QRW} is harmed or lost.
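In this fully dephased limit, the device output approaches the classical prediction of Eq.~(\ref{eq:QRW-average-Id}); the short sketch below (our own illustration, with an arbitrary $k_1=k_2=0.3$) checks this collapse numerically:
\begin{verbatim}
# Minimal sketch (ours): dephasing recovers the classical Drop probability.
import numpy as np

k1 = k2 = 0.3
t1, t2 = 1 - k1, 1 - k2
theta = np.linspace(0, 2 * np.pi, 100_001)
pq = k1 * k2 / (1 + t1 * t2 - 2 * np.sqrt(t1 * t2) * np.cos(theta))
print(np.trapz(pq, theta) / (2 * np.pi))   # ~0.17647 (phase average)
print(k1 * k2 / (1 - t1 * t2))             # 0.17647... (classical value)
\end{verbatim}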
Previous works have reported on the impact of decoherence in quantum random walk systems and the importance of controlling it~\cite{Svozilik2012,Biggerstaff2016,Broome2010}. Eq.~(\ref{eq:QRW-average-Id}) shows that introducing decoherence (phase averaging) in the proposed system drives the output distributions away from the quantum regime and towards the classical random walk solution. The effect can be seen as a further output optimization parameter, but also as a constraint, since photonic integrated chip nanofabrication and device operation incur experimental errors, to which narrow free spectral range systems are more sensitive. Even in perfect experimental conditions, the large minimum radius supported by the polymeric waveguides leads to an extremely reduced free spectral range ($10$-$100$~pm). Consequently, accurate experimental characterization would require either an extremely narrow-band light source or a high-precision optical spectrum analyzer. The decoherence effects (as a consequence of many averaged resonance steps) would likely frame the polymeric waveguide implementation closer to the classical regime than to the quantum one. Therefore, any of the semiconductor-based configurations discussed here is more likely to provide a successful device implementation than the polymeric ones. \section{Conclusion} \label{sec:conclusion} We have proposed series-coupled photonic ring resonators as a platform to implement controllable quantum random walks. We have modeled light propagation in these structures using a suitable family of graphs and analyzed quantum and classical random walk behavior using this model. Light coherence and resonance effects in the quantum random walk provide output tunability based on the wavelength and ring radii. Both classical and quantum random walk regimes can be analyzed using Feynman path sums or Markov chain approaches, where the latter provides an efficient method for both analytical and numerical output probability calculations for graphs with any number of rings. The coherent phase average of the quantum random walk tends towards the corresponding classical random walk, imposing a trade-off between resonant and non-resonant conditions. When trying to maximize the transport efficiency across the graph, from the initial input node to the \textit{Drop} output node, resonant coherent conditions in a single-ring system allow enhanced transport far beyond its classical counterpart, for a much wider range of coupling constant combinations. The time-domain analysis shows slightly slower convergence rates in the quantum domain, but successful goal-hitting for all matching $k_1 = k_2$ conditions, something the corresponding classical walks are incapable of doing for small $k_1=k_2$. Mode confinement and waveguide bending loss simulations revealed the importance of material choice and its impact on decoherence effects. Preliminary feasibility tests in the visible wavelength range using polymers revealed implementation challenges from the points of view of both the free spectral range between observable resonances and control over device parameters. The proposed analysis of coupled ring resonators as a platform for quantum random walks has the potential for photonic algorithm implementations that are not restricted to linear ring configurations but could be expanded to complex 2D arrangements. These could be used to experimentally model coherent energy transfer in 2D arrays. \begin{acknowledgments} We wish to acknowledge Dr.
Jérôme Borme for his support with e-beam lithography during sample nanofabrication. We acknowledge funding from the INL Seed Grant project ``Coherent light propagation in photonic quantum walk chips''. EFG acknowledges funding of the Portuguese institution FCT – Fundação para Ciência e Tecnologia via project CEECINST/00062/2018. This work was supported by the ERC Advanced Grant QU-BOSS (GA no.: 884676). RA acknowledges the Laser Photonics \& Vision Ph.D. program, U. Vigo. MCG acknowledges funding of the H2020 Marie Skłodowska-Curie Actions (713640).
\end{acknowledgments}
\section{Introduction.} A {\it Poisson structure} on a smooth algebraic variety $X$ is given by a bivector field $\omega\in H^0(X,{\wedge}^2 T_X)$, which satisfies the condition $$ [\omega, \omega]=0, \;\;\;\;\;\;\;\;\;\; (\star ) $$ where $[\; , \; ]$ denotes the Schouten bracket.\\ It is natural to consider the variety of Poisson structures $\mathcal P$ on $X$, which is the subvariety in ${\mathbb P}(H^0 (X,{\wedge}^2T_X))$ given by the equations $(\star )$.\\ Given any point $\omega \in \mathcal P$, one defines Poisson cohomology $H_{Poisson}^\star (X,\omega)$ of $X$ with respect to $\omega$ \cite{Lichnerowicz}.\\ Explicit computation of Poisson cohomology for various algebraic varieties is of interest (see \cite{HongXu}). In \cite{HongXu} Wei Hong and Ping Xu computed Poisson cohomology of del Pezzo surfaces. The condition $(\star )$ is automatically satisfied in that case for dimension reasons. This is not the case for Fano threefolds.\\ Poisson Fano threefolds with Picard number $1$ were classified in \cite{Loray1} and \cite{Loray2}. They are exactly the following (see \cite{Loray1}, \cite{Loray2}): \begin{itemize} \item the projective space ${\mathbb P}^3$, \item the quadric $Q\subset {\mathbb P}^4$, \item a sextic hypersurface $X\subset {\mathbb P} (1,1,1,2,3)$, \item a quartic hypersurface $X\subset {\mathbb P} (1,1,1,1,2)$, \item a cubic hypersurface $X\subset {\mathbb P}^4$, \item a complete intersection of two quadrics in ${\mathbb P}^5$, \item the del Pezzo quintic threefold $X\subset {\mathbb P}^6$, \item the Mukai-Umemura threefold. \end{itemize} In each case Loray, Pereira and Touzet described the dimensions of the irreducible components of the variety of Poisson structures, their smoothness, and some other properties (see \cite{Loray1}, \cite{Loray2}).\\ In this note we consider (from a different viewpoint) two families of Poisson Fano threefolds from the above list: cubic threefolds and the del Pezzo quintic threefold. For each of them we describe explicitly the variety of Poisson structures (thus reobtaining the results of \cite{Loray1}, \cite{Loray2} in a special case) and compute Poisson cohomology.\\ Our main results are the following. For the explicit computations of matrices $A_{\omega}$, $B_{\omega}$ and $C_{\omega}$ we refer the reader to sections 2.3 and 3.2.\\ {\bf Theorem 1 (Loray, Pereira, Touzet, \cite{Loray1}, \cite{Loray2}).} {\it Let $X$ be a smooth cubic threefold. Then the variety of Poisson structures $\mathcal P\subset {\mathbb P}(H^0(X,{\wedge}^2T_X))$ on $X$ is the Grassmannian $G(2,5)$ of lines in ${\mathbb P}^4$ and $\mathcal P\subset {\mathbb P}(H^0(X,{\wedge}^2T_X))$ is the Pl{\" u}cker embedding.}\\ {\bf Theorem 2.} {\it Let $X$ be a smooth cubic threefold and $\omega\in \mathcal P\subset {\mathbb P}(H^0(X,{\wedge}^2T_X))$ a Poisson structure on $X$. Then the dimensions of the Poisson cohomology groups are as follows: \begin{itemize} \item $dim(H^0_{Poisson}(X,\omega))=1$, \item $dim(H^1_{Poisson}(X,\omega))=0$, \item $dim(H^2_{Poisson}(X,\omega))=20-rk(C_{\omega})$, \item $dim(H^3_{Poisson}(X,\omega))=15-rk(C_{\omega})$. \end{itemize}} {\bf Theorem 3 (Loray, Pereira, Touzet, \cite{Loray1}, \cite{Loray2}).} {\it Let $X$ be the (smooth) del Pezzo quintic threefold.
Then the variety of Poisson structures $\mathcal P\subset {\mathbb P}(H^0(X,{\wedge}^2T_X))$ on $X$ is the disjoint union of the Grassmannian $G(2,7)$ of lines in ${\mathbb P}^6$ (embedded into ${\mathbb P}(H^0(X,{\wedge}^2T_X))$ by the Pl{\" u}cker embedding) and a smooth conic. The plane spanned by the conic does not intersect the Grassmannian.}\\ {\bf Theorem 4.} {\it Let $X$ be the (smooth) del Pezzo quintic threefold and $\omega\in \mathcal P\subset {\mathbb P}(H^0(X,{\wedge}^2T_X))$ a Poisson structure on $X$. Then the dimensions of the Poisson cohomology groups are as follows: \begin{itemize} \item $dim(H^0_{Poisson}(X,\omega))=1$, \item $dim(H^1_{Poisson}(X,\omega))=3-rk(A_{\omega})$, \item $dim(H^2_{Poisson}(X,\omega))=21-rk(A_{\omega})-rk(B_{\omega})$, \item $dim(H^3_{Poisson}(X,\omega))=23-rk(B_{\omega})$. \end{itemize}} The identification of the irreducible components of the varieties of Poisson structures with Grassmannians was proved in greater generality in \cite{Loray1}, \cite{Loray2}. In the cases we are considering we reobtain these results with a different method. Thus Theorem 1 and Theorem 3 are entirely results of Loray, Pereira and Touzet. Note that in the case of the del Pezzo quintic threefold the curve component is shown to be smooth and rational in \cite{Loray1}, \cite{Loray2}. The fact that it is a conic was not mentioned explicitly there, but Loray, Pereira and Touzet knew it thanks to the joint work of Jorge Pereira and Carlo Perrone \cite{Pereira}. We thank Jorge Pereira for the last comment as well as for the following Remark.\\ {\bf Remark.} Still, note that the arguments of \cite{Loray1}, \cite{Loray2} do not shed light on the scheme structure of the space of Poisson structures.\\ The key ingredient of our computation of Poisson cohomology for cubics and the del Pezzo quintic threefold is the same as in \cite{HongXu}: the spectral sequence of Laurent-Gengoux, Sti{\'e}non and Xu \cite{Stienon}.\\ We work over $k=\mathbb C$ throughout.\\ \section{Schouten bracket.} The Schouten bracket on a smooth algebraic variety $X$ gives a way to extend the structure of a Lie algebra on vector fields on $X$ to the structure of a graded Lie algebra on all multivector fields on $X$.\\ A convenient local definition of the Schouten bracket is given in a preprint by Bondal \cite{Bondal}:\\ {\bf Definition (Bondal, \cite{Bondal}). } Let $A$, $B$ be multivector fields on $X$ of degrees $n$ and $m$ respectively. Then their Schouten bracket $[A,B]$ is a multivector field of degree $n+m-1$ such that for any $(n+m-1)-$form $\omega$ we have $$ [A,B](\omega)=(-1)^{m(n-1)}A(dB(\omega))+(-1)^{n}B(dA(\omega))-AB(d\omega ). $$ When $dim(X)=3$, one can use the isomorphism of vector bundles ${\wedge}^2T_X\cong {\Omega}^1_X(-K_X)$ and reformulate this definition as follows.\\ {\bf Proposition 1.} {\it Assume that $X$ is a smooth projective algebraic threefold. Let ${\omega}_A, {\omega}_B \in H^0(X,{\Omega}^1_X(-K_X))$ be the forms corresponding to bivector fields $A,B\in H^0(X,{\wedge}^2T_X)$ under the isomorphism ${\wedge}^2T_X\cong {\Omega}^1_X(-K_X)$. Then the Schouten bracket $[{\omega}_A, {\omega}_B]$ is equal to $$ [{\omega}_A, {\omega}_B]=\frac{1}{Vol}\cdot \left( {\omega}_A \wedge d{\omega}_B + d{\omega}_A \wedge {\omega}_B \right).
$$ }\\ Note that in this case the Schouten bracket $$ [\; , \; ]\colon H^0(X,{\wedge}^2T_X) \times H^0(X,{\wedge}^2T_X)\rightarrow H^0(X,{\wedge}^3T_X)\cong H^0(X,-K_X) $$ has its image in the space of global sections of the anticanonical line bundle $-K_X$, and the division by $Vol$ in the formula above signifies the isomorphism ${\Omega}^3_X\cong K_X$. In other words, if $dim(X)=3$, then in terms of forms the Schouten bracket $$ [\; , \; ]\colon H^0(X,{\Omega}^1_X(-K_X)) \times H^0(X,{\Omega}^1_X(-K_X))\rightarrow H^0(X,-K_X) $$ is the composition of the exterior derivatives and the exterior product: \begin{multline*} H^0(X,{\Omega}^1_X(-K_X)) \times H^0(X,{\Omega}^1_X(-K_X))\xrightarrow{1\times d+d\times 1} [ H^0(X,{\Omega}^1_X(-K_X)) \times H^0(X,{\Omega}^2_X(-K_X)) ] \oplus \\ [ H^0(X,{\Omega}^2_X(-K_X)) \times H^0(X,{\Omega}^1_X(-K_X)) ] \xrightarrow{\wedge} H^0(X,{\Omega}^3_X(-2K_X))\cong H^0(X,-K_X). \end{multline*} {\it Proof:} This follows from Bondal's definition by a local computation.\\ Let $x_1,x_2,x_3$ be local coordinates on $X$. As an element of $H^0(X,{\wedge}^2T_X)$, ${\omega}_A$ maps $dx_i\wedge dx_j$ to $\frac{1}{Vol}\cdot {\omega}_A \wedge dx_i\wedge dx_j$, where $Vol=dx_1\wedge dx_2 \wedge dx_3$.\\ Hence \begin{multline*} [A,B](dx_1\wedge dx_2 \wedge dx_3)=A\left( d\frac{1}{Vol}({\omega}_B \wedge dx_1\wedge dx_2)\wedge dx_3 - d \frac{1}{Vol} ({\omega}_B \wedge dx_1\wedge dx_3)\wedge dx_2 +\right. \\ \left. +d\frac{1}{Vol} ({\omega}_B \wedge dx_2\wedge dx_3)\wedge dx_1 \right) +(A\leftrightarrow B)=\\ =\frac{1}{Vol}\cdot {\omega}_A \wedge [dc_1 \wedge dx_1 + dc_2 \wedge dx_2 + dc_3 \wedge dx_3]+(A\leftrightarrow B), \end{multline*} where $c_i$ are such that ${\omega}_B=\sum_{i=1}^{3} c_i\cdot dx_i$.\\ This expression is exactly equal to $$ \frac{1}{Vol} {\omega}_A \wedge d{\omega}_B + (A\leftrightarrow B). $$ {\it QED}\\ If $X$ is a Fano threefold of index $2$ (the only case we are considering), then $K_X\cong {\mathcal O}_X(-2)$. In this case, the Schouten bracket is a bilinear map $$ [\; , \; ]\colon H^0(X,{\Omega}^1_X(2))\times H^0(X,{\Omega}^1_X(2))\rightarrow H^0(X,{\mathcal O}_X(2)). $$ From now on $X$ will always denote a smooth Fano threefold of index $2$. We will always identify $H^0(X,{\wedge}^2T_X)$ and $H^0(X,{\Omega}^1_X(2))$ (as well as $H^0(X,-K_X)$ and $H^0(X,{\mathcal O}(2))$) by the isomorphism ${\wedge}^2T_X\cong {\Omega}^1_X(-K_X)$.\\ \section{Cubic threefolds.} Let $X\subset {\mathbb P}^4$ be a smooth cubic threefold. Let us denote by $Z_0,Z_1,Z_2,Z_3,Z_4$ the homogeneous coordinates in ${\mathbb P}^4$ and by $$ F=F(Z_0,Z_1,Z_2,Z_3,Z_4)=0 $$ the equation of $X$.\\ \subsection{Cohomology computations.} In order to describe the variety of Poisson structures on $X$ and compute Poisson cohomology of $X$, we need to find handy descriptions for the spaces of multivector fields $H^0(X,{\wedge}^jT_X)$ on $X$ as well as to compute (or check the vanishing) of the higher cohomology groups $H^i(X,{\wedge}^jT_X), i\geq 1$. This is the subject of the present subsection. The methods we are using are standard and the results are well-known.\\ {\bf Lemma 1.} {\it Let $X\subset {\mathbb P}^4$ be a smooth cubic threefold. 
Then \begin{itemize} \item[(a)] $H^0(X,{\mathcal O}_X)=k$, $H^i(X,{\mathcal O}_X)=0$ for any $i\geq 1$, \item[(b)] $H^i(X,T_X)=0$ for any $i\neq 1$, \item[(c)] $H^1(X,T_X)$ is a $10$-dimensional vector space, which can be identified with the cokernel of the homomorphism $$ H^0({\mathbb P}^4,\mathcal O)\oplus H^0({\mathbb P}^4,\mathcal O (1))^{\oplus 5} \rightarrow H^0({\mathbb P}^4,{\mathcal O}(3)), (a,(b_0,b_1,b_2,b_3,b_4))\mapsto aF+ \sum_{i=0}^{4} b_i \frac{\partial F}{\partial Z_i}, $$ \item[(d)] $H^i(X,{\wedge}^2T_X)=0$ for any $i\geq 1$, \item[(e)] $H^0(X,{\wedge}^2T_X)$ is a $10$-dimensional vector space, which can be identified with the kernel of the (surjective) homomorphism $$ H^0({\mathbb P}^4,\mathcal O (1))^{\oplus 5} \rightarrow H^0({\mathbb P}^4,{\mathcal O}(2)), (a_0,a_1,a_2,a_3,a_4)\mapsto \sum_{i=0}^{4} a_i Z_i, $$ \item[(f)] $H^i(X,{\wedge}^3T_X)=0$ for any $i\geq 1$, \item[(g)] $H^0(X,{\wedge}^3T_X)\cong H^0({\mathbb P}^4,{\mathcal O}(2))$ has dimension $15$. \end{itemize}} {\it Proof:}\\ (a) The short exact sequence of sheaves on ${\mathbb P}^4$ \begin{equation} \label{SES1} 0\rightarrow {\mathcal O}(-3)\rightarrow \mathcal O \rightarrow {\mathcal O}_X \rightarrow 0 \end{equation} gives the long exact sequence of cohomology groups $$ 0\rightarrow H^0({\mathbb P}^4,{\mathcal O}(-3))\rightarrow H^0({\mathbb P}^4,{\mathcal O}) \rightarrow H^0(X,{\mathcal O}_X) \rightarrow H^1({\mathbb P}^4, {\mathcal O}(-3))\rightarrow \ldots $$ Since $H^i({\mathbb P}^4, {\mathcal O}(-3))=0$ for any $i\in \mathbb Z$, we obtain the isomorphisms $$ H^i({\mathbb P}^4,{\mathcal O}) \cong H^i(X,{\mathcal O}_X),\;\;\; i\in \mathbb Z. $$ Since $H^i({\mathbb P}^4, {\mathcal O})=0$ for any $i\geq 1$, we conclude.\\ (b)(c) Multiplying the exact sequence~(\ref{SES1}) by ${\mathcal O}(d)$, $d\in\mathbb Z$, we get \begin{equation} \label{SES2} 0\rightarrow {\mathcal O}(d-3)\rightarrow \mathcal O (d) \rightarrow {\mathcal O}_X (d) \rightarrow 0. \end{equation} Since $H^i({\mathbb P}^4, {\mathcal O} (d))=0$ for $0< i < 4$, $d\in \mathbb Z$ and for $i=4, d\geq -4$, the long exact sequence of cohomology groups implies that \begin{gather*} H^i(X, {\mathcal O}_X (d))=0 \;\mbox{for }\; 0<i<3, d\in \mathbb Z,\\ H^3(X, {\mathcal O}_X (d))=0 \;\mbox{for }\; d\geq -1,\\ H^4(X, {\mathcal O}_X (d))=0 \;\mbox{for }\; d\geq -4.\\ \end{gather*} Moreover, we get the short exact sequences \begin{equation} \label{SES3} 0\rightarrow H^0({\mathbb P}^4,{\mathcal O}(d-3)) \rightarrow H^0({\mathbb P}^4,{\mathcal O}(d)) \rightarrow H^0(X,{\mathcal O}_X(d)) \rightarrow 0, \end{equation} which compute the spaces of global sections of line bundles ${\mathcal O}_X(d)$ on $X$ in terms of those on ${\mathbb P}^4$.\\ In particular, for $d\leq 2$ we get isomorphisms $$ H^0({\mathbb P}^4,{\mathcal O}(d))\cong H^0(X,{\mathcal O}_X(d)). $$ Let $j\colon X\hookrightarrow {\mathbb P}^4$ denote the embedding. Consider the normal bundle sequence on $X$ \begin{equation} \label{SES5} 0\rightarrow T_X\rightarrow j^{\star}T_{{\mathbb P}^4} \rightarrow {\mathcal O}_X (3) \rightarrow 0 \end{equation} and the pullback to $X$ of the Euler exact sequence on ${\mathbb P}^4$ \begin{equation} \label{SES6} 0\rightarrow {\mathcal O}_X\rightarrow {\mathcal O}_X (1)^{\oplus 5} \rightarrow j^{\star}T_{{\mathbb P}^4} \rightarrow 0. \end{equation} Since $H^i(X,{\mathcal O}_X(d))=0$ for any $i\geq 1$ and any $d\geq 0$, we conclude that $$ H^i(X,j^{\star}T_{{\mathbb P}^4})=0 \;\;\; \mbox{for any}\; i\geq 1. $$ It is shown in \cite{KodairaSpencer} that $H^0(X,T_X)=0$.
Then the long exact sequence of cohomology groups of~(\ref{SES5}) gives that $$ H^i(X,T_X)=0, i\geq 2 $$ and also the short exact sequence \begin{equation} \label{SES7} 0\rightarrow H^0(X,j^{\star}T_{{\mathbb P}^4}) \rightarrow H^0(X,{\mathcal O}(3)) \rightarrow H^1(X,T_X) \rightarrow 0. \end{equation} The space of global sections of $j^{\star}T_{{\mathbb P}^4}$ on $X$ can be computed from~(\ref{SES6}) as follows: $$ 0\rightarrow H^0(X,{\mathcal O}_X) \rightarrow H^0(X,{\mathcal O}_X(1))^{\oplus 5} \rightarrow H^0(X,j^{\star}T_{{\mathbb P}^4}) \rightarrow 0. $$ Since $H^0(X,{\mathcal O}_X) = H^0({\mathbb P}^4,{\mathcal O})$ and $H^0(X,{\mathcal O}_X (1)) \cong H^0({\mathbb P}^4,{\mathcal O} (1))$, using Euler's exact sequence again we conclude that $$ H^0(X,j^{\star}T_{{\mathbb P}^4}) \cong H^0({\mathbb P}^4,T_{{\mathbb P}^4}) $$ is the cokernel of the following homomorphism of vector spaces $$ H^0({\mathbb P}^4,{\mathcal O}) \rightarrow H^0({\mathbb P}^4,{\mathcal O}(1))^{\oplus 5}, 1\mapsto (Z_0,Z_1,Z_2,Z_3,Z_4). $$ The long exact sequence of~(\ref{SES3}) computes $H^0(X,{\mathcal O}_X(3))$ as follows: $$ 0\rightarrow H^0({\mathbb P}^4,{\mathcal O}) \rightarrow H^0({\mathbb P}^4,{\mathcal O}(3)) \rightarrow H^0(X,{\mathcal O}_X(3)) \rightarrow 0. $$ The homomorphism $H^0(X,j^{\star}T_{{\mathbb P}^4}) \rightarrow H^0(X,{\mathcal O}_X(3))$ lifts to $$ H^0({\mathbb P}^4,{\mathcal O}(1))^{\oplus 5} \rightarrow H^0({\mathbb P}^4,{\mathcal O}(3)), (a_0,a_1,a_2,a_3,a_4)\mapsto \sum_{i=0}^{4} a_i\cdot \frac{\partial F}{\partial Z_i}. $$ Indeed, the element of $H^0(X,j^{\star}T_{{\mathbb P}^4})$ corresponding to $(a_0,a_1,a_2,a_3,a_4)\in H^0({\mathbb P}^4, {\mathcal O}(1))^{\oplus 5}$ is the vector field $\sum_{i=0}^{4} a_i\cdot \frac{\partial }{\partial Z_i} {\mid}_X$. Its image in $H^0(X,{\mathcal O}_X(3))$ is the corresponding section of the normal bundle $N_{X / {\mathbb P}^4}\cong {\mathcal O}_X(3)$ to $X$ in ${\mathbb P}^4$.\\ This proves part (c).\\ (d)(e) Take the dual of the normal bundle sequence~(\ref{SES5}): \begin{equation} \label{SES8} 0\rightarrow {\mathcal O}(-3) \rightarrow j^{\star}\Omega_{{\mathbb P}^4}^1 \rightarrow \Omega_{X}^1 \rightarrow 0. \end{equation} The long exact sequence of cohomology groups of~(\ref{SES2}) implies that $H^0(X,{\mathcal O}_X(d))=0$ for $d\leq -1$. Note that $H^i(X,{\mathcal O}_X(-1))=0$ for any $i\in \mathbb Z$. Hence~(\ref{SES8}) gives (after multiplying by $\mathcal O (2)$ and taking cohomology) $$ H^i(X,j^{\star}\Omega_{{\mathbb P}^4}^1(2))\cong H^i(X,\Omega_{X}^1(2))\; \mbox{for any }\; i\in \mathbb Z. $$ Taking the dual of the short exact sequence~(\ref{SES6}) and multiplying the result by $\mathcal O (2)$, we find \begin{equation} \label{SES9} 0 \rightarrow j^{\star}\Omega_{{\mathbb P}^4}^1(2) \rightarrow {\mathcal O}_{X}(1)^{\oplus 5}\rightarrow {\mathcal O}_X(2) \rightarrow 0. \end{equation} Since $H^i(X,{\mathcal O}_X(1))=H^i(X,{\mathcal O}_X(2))=0,\;\; i\geq 1$, this implies immediately that $$ H^i(X,j^{\star}\Omega_{{\mathbb P}^4}^1(2))=0,\;\;\; i\geq 2. $$ Moreover, by taking cohomology in~(\ref{SES9}) we obtain an exact sequence $$ 0\rightarrow H^0(X,j^{\star}\Omega_{{\mathbb P}^4}^1(2)) \rightarrow H^0(X,{\mathcal O}_X(1))^{\oplus 5}\rightarrow H^0(X,{\mathcal O}_X(2))\rightarrow H^1(X,j^{\star}\Omega_{{\mathbb P}^4}^1(2) )\rightarrow 0. 
$$ The homomorphism in the middle can be identified with $$ H^0({\mathbb P}^4,\mathcal O (1))^{\oplus 5} \rightarrow H^0({\mathbb P}^4,{\mathcal O}(2)), (a_0,a_1,a_2,a_3,a_4)\mapsto \sum_{i=0}^{4} a_i Z_i. $$ It is clearly surjective. Hence $H^1(X,j^{\star}\Omega_{{\mathbb P}^4}^1(2))=0$.\\ This gives the vanishing $H^i(X,{\wedge}^2T_X)\cong H^i(X,{\Omega}^1_X(2))\cong H^i(X,j^{\star}\Omega_{{\mathbb P}^4}^1(2))=0$ for any $i\geq 1$ as well as the description of $H^0(X,{\wedge}^2T_X)$ in part (e).\\ (f)(g) Note that ${\wedge}^3T_X\cong ({\Omega}^3_X)^{\vee}\cong {\mathcal O}(-K_X)\cong {\mathcal O}_X(2)$ since $X$ has index $2$.\\ Hence $H^i(X, {\wedge}^3T_X)\cong H^i(X, {\mathcal O}_X(2))\cong H^i({\mathbb P}^4, {\mathcal O}(2))$ for any $i\in \mathbb Z$. {\it QED}\\ {\bf Corollary 1.} {\it Let $X$ be a smooth cubic threefold. Then $H^0(X,{\wedge}^2T_X)$ can be identified (as a vector space) with $\mathfrak{so}(5)$, which is the Lie algebra of skew-symmetric $5\times 5$ matrices.\\ As a basis of $H^0(X,{\wedge}^2T_X) \cong H^0(X,{\Omega}_X^1(2))$ one can take $1$-forms $$ {\epsilon}_{ij}=Z_jdZ_i-Z_idZ_j, \; i<j. $$ }\\ {\it Proof:} In Lemma 1(e) we described $H^0(X,{\wedge}^2T_X) \cong H^0(X,{\Omega}_X^1(2))$ as the kernel of the homomorphism $$ H^0({\mathbb P}^4,\mathcal O (1))^{\oplus 5} \rightarrow H^0({\mathbb P}^4,{\mathcal O}(2)),\;\;\; (a_0,a_1,a_2,a_3,a_4)\mapsto \sum_{i=0}^{4} a_i\cdot Z_i. $$ As a basis of this kernel one can take sequences of the form $$ {\epsilon}_{ij}=(0,\cdots , Z_j,\cdots , -Z_i ,\cdots ,0), \; i<j, $$ where we have $Z_j$ in the $i$-th position, $-Z_i$ in the $j$-th position and $0$ in the remaining positions.\\ Then an arbitrary element of the kernel will have the form $\sum_{0\leq i < j \leq 4} {\beta}_{ij}\cdot {\epsilon}_{ij}$, where $({\beta}_{ij})\in \mathfrak{so}(5)$.\\ A sequence $(a_0,a_1,a_2,a_3,a_4)\in H^0({\mathbb P}^4,\mathcal O (1))^{\oplus 5}$ such that $\sum_{i=0}^{4} a_i\cdot Z_i=0$ corresponds to the form $\sum_{i=0}^{4} a_i \cdot dZ_i$ on $X$. Hence ${\epsilon}_{ij}$ are exactly the forms $Z_jdZ_i-Z_idZ_j$. {\it QED}\\ More invariantly, this leads to an identification $H^0(X,{\wedge}^2T_X)\cong {\wedge}^2V$, where $V=H^0({\mathbb P}^4, {\mathcal O}(1))$ is the $5$-dimensional vector space such that $X\subset {\mathbb P}(V)$ is the vanishing locus of $F\in Sym^3V$.\\ We will see later (and this was shown earlier by Loray, Pereira, Touzet, \cite{Loray1}, \cite{Loray2}) that the variety of Poisson structures $\mathcal P$ on any smooth cubic threefold $X$ does not depend on the holomorphic structure of $X$ and as a projective variety $\mathcal P \subset {\mathbb P}(H^0(X,{\wedge}^2T_X))\cong {\mathbb P}({\wedge}^2V)$ is isomorphic to the Grassmannian $G(2,V)$ of lines in ${\mathbb P}(V)$. Moreover, $\mathcal P\subset {\mathbb P}({\wedge}^2V)$ is the Pl{\" u}cker embedding.\\ \subsection{Variety of Poisson structures.} In order to describe equations $[\omega,\omega]=0$ defining the variety of Poisson structures $\mathcal P\subset {\mathbb P}(H^0(X,{\wedge}^2T_X))\cong {\mathbb P}(\mathfrak{so}(5))$, we need to compute the Schouten bracket. 
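Before turning to the bracket computations, let us record a mechanical consistency check of Lemma 1(e) and Corollary 1 (a pure illustration, used nowhere in the proofs; the script and all names in it are ours). The following Python/SymPy sketch builds the coefficient matrix of the homomorphism $(a_0,\ldots ,a_4)\mapsto \sum_{i} a_iZ_i$, verifies its surjectivity, and checks that the ${\epsilon}_{ij}$ form a basis of its $10$-dimensional kernel:
\begin{verbatim}
import sympy as sp
from itertools import combinations, combinations_with_replacement

Z = sp.symbols('Z0:5')
# Basis of H^0(P^4, O(2)): the 15 quadratic monomials Z_i*Z_j with i <= j.
monomials = [Z[i]*Z[j] for i, j in combinations_with_replacement(range(5), 2)]

def as_row(q):
    # Coefficient row of a quadratic form with respect to `monomials`.
    p = sp.Poly(sp.expand(q), *Z)
    return [p.coeff_monomial(m) for m in monomials]

# Domain basis of H^0(O(1))^{+5}: the linear form Z_v placed in slot s;
# such a basis vector maps to the quadric Z_v*Z_s.
M = sp.Matrix([as_row(Z[v]*Z[s]) for s in range(5) for v in range(5)])
assert M.rank() == 15         # surjective, hence dim ker = 25 - 15 = 10

def eps(i, j):
    # eps_{ij} = (0, ..., Z_j, ..., -Z_i, ..., 0) as a coordinate vector.
    vec = [sp.Integer(0)]*25
    vec[5*i + j] = 1          # Z_j in slot i
    vec[5*j + i] = -1         # -Z_i in slot j
    return vec

E = sp.Matrix([eps(i, j) for i, j in combinations(range(5), 2)])
assert E.rank() == 10                 # the eps_{ij} are linearly independent
assert (E*M) == sp.zeros(10, 15)      # and they lie in the kernel
print("kernel has dimension 10 and is spanned by the eps_{ij}")
\end{verbatim}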
Since the Schouten bracket is bilinear, it is sufficient to compute the numbers $$ C_{ijkl}=\frac{1}{2}\cdot [{\epsilon}_{ij},{\epsilon}_{kl}], $$ where ${\epsilon}_{ij}=Z_jdZ_i-Z_idZ_j, i<j$ are elements of the basis of $H^0(X,{\wedge}^2T_X)$ from Corollary 1.\\ Note that $C_{ijkl}\in H^0(X,{\mathcal O}_X(2))\cong H^0({\mathbb P}^4,{\mathcal O}(2))$.\\ {\bf Lemma 2.} {\it $C_{ijkl}$ is totally antisymmetric with respect to its indices. It is completely determined by the following values: \begin{gather*} C_{0123}=\frac{\partial F}{\partial Z_4},\; C_{0124}=-\frac{\partial F}{\partial Z_3},\; C_{0134}=\frac{\partial F}{\partial Z_2}, \; C_{0234}=-\frac{\partial F}{\partial Z_1},\; C_{1234}=\frac{\partial F}{\partial Z_0}. \end{gather*} }\\ {\it Proof:} It is enough to work locally assuming, for example, that $Z_0=1$. Let us denote by $$ X_1=\frac{Z_1}{Z_0},\;\;\;\; X_2=\frac{Z_2}{Z_0},\;\;\;\; X_3=\frac{Z_3}{Z_0},\;\;\;\; X_4=\frac{Z_4}{Z_0} $$ the remaining affine coordinates and by $$ f(X_1,X_2,X_3,X_4)=\frac{F(Z_0,Z_1,Z_2,Z_3,Z_4)}{Z_0^3} $$ the restriction of the cubic form $F$ defining $X\subset {\mathbb P}^4$. Since $X$ is smooth, without loss of generality we can assume that $\frac{\partial f}{\partial X_4}\neq 0$.\\ Then by Proposition 1 we have \begin{multline*} C_{ijkl}=\frac{1}{2\, Vol}\cdot \left( (X_jdX_i-X_idX_j)\wedge (dX_l\wedge dX_k-dX_k\wedge dX_l) + \right. \\ \left. +(dX_j\wedge dX_i-dX_i\wedge dX_j)\wedge (X_ldX_k-X_kdX_l) \right)=\\ =\frac{1}{Vol}\cdot \left( X_i\cdot dX_j\wedge dX_k\wedge dX_l - X_j\cdot dX_i\wedge dX_k\wedge dX_l + X_k\cdot dX_i\wedge dX_j\wedge dX_l - \right.\\ \left. - X_l\cdot dX_i\wedge dX_j\wedge dX_k \right). \end{multline*} The fact that $C_{ijkl}$ is totally antisymmetric is now evident. Hence it is enough to compute $C_{ijkl}$ assuming that $i<j<k<l$.\\ Note that by the adjunction formula we can take $$ Vol=\frac{dX_1\wedge dX_2 \wedge dX_3}{{\partial f}/{\partial X_4}}. $$ If $i=0$, then $$ C_{ijkl}=\frac{dX_j\wedge dX_k \wedge dX_l}{Vol}. $$ Hence \begin{gather*} C_{0123}=\frac{\partial f}{\partial X_4},\;\; C_{0124}=\frac{\partial X_4}{\partial X_3}\cdot \frac{\partial f}{\partial X_4}=- \frac{\partial f}{\partial X_3},\;\; C_{0134}=-\frac{\partial X_4}{\partial X_2}\cdot \frac{\partial f}{\partial X_4}= \frac{\partial f}{\partial X_2},\\ C_{0234}=\frac{\partial X_4}{\partial X_1}\cdot \frac{\partial f}{\partial X_4}= -\frac{\partial f}{\partial X_1}. \end{gather*} Finally, \begin{multline*} C_{1234}=\frac{\partial f}{\partial X_4}\cdot \frac{1}{dX_1\wedge dX_2 \wedge dX_3}\cdot \left( X_1\cdot dX_2\wedge dX_3\wedge dX_4 - X_2\cdot dX_1\wedge dX_3\wedge dX_4 +\right. \\ \left. + X_3\cdot dX_1\wedge dX_2\wedge dX_4 - X_4\cdot dX_1\wedge dX_2\wedge dX_3 \right)=\\ =\frac{\partial f}{\partial X_4}\cdot \left( X_1\cdot \frac{\partial X_4}{\partial X_1} + X_2\cdot \frac{\partial X_4}{\partial X_2} + X_3\cdot \frac{\partial X_4}{\partial X_3} - X_4 \right)=-\sum_{i=1}^{4} X_i\cdot \frac{\partial f}{\partial X_i}=\frac{\partial F}{\partial Z_0}.
\end{multline*} {\it QED}\\ Now let $\omega=\sum_{i<j} a_{ij}\cdot {\epsilon}_{ij}$ be a point of ${\mathbb P}(H^0(X,{\wedge}^2T_X))\cong {\mathbb P}(\mathfrak{so}(5))$ with coordinates $(a_{ij})$.\\ Then $$ [\omega,\omega]=\sum_{i<j} \sum_{k<l} a_{ij}a_{kl}\cdot [{\epsilon}_{ij}, {\epsilon}_{kl}]=4\sum_{i<j<k<l} {\alpha}_{ijkl} \cdot C_{ijkl}, $$ where ${\alpha}_{ijkl}=a_{ij}a_{kl}-a_{ik}a_{jl}+a_{il}a_{jk}$.\\ {\bf Theorem 1 (Loray, Pereira, Touzet, \cite{Loray1}, \cite{Loray2}).} {\it Let $X\subset {\mathbb P}^4$ be a smooth cubic threefold. Then the variety of Poisson structures $\mathcal P\subset {\mathbb P}(\mathfrak{so}(5))$ on $X$ is isomorphic to the Grassmannian $G(2,5)$ of lines in ${\mathbb P}^4$ embedded into ${\mathbb P}(\mathfrak{so}(5))\cong {\mathbb P}({\wedge}^2k^{\oplus 5})$ via the Pl{\" u}cker embedding.}\\ {\it Proof:} It follows from Lemma 2 that $$ \frac{1}{4}[\omega,\omega]={\alpha}_{1234}\cdot \frac{\partial F}{\partial Z_0}-{\alpha}_{0234}\cdot \frac{\partial F}{\partial Z_1}+{\alpha}_{0134}\cdot \frac{\partial F}{\partial Z_2}-{\alpha}_{0124}\cdot \frac{\partial F}{\partial Z_3}+{\alpha}_{0123}\cdot \frac{\partial F}{\partial Z_4}. $$ Since $X\subset {\mathbb P}^4$ is smooth (and $char(\mathbb C)=0$), this expression is $0$ as an element of $H^0(X,{\mathcal O}_X(2))\cong H^0({\mathbb P}^4,{\mathcal O}(2))$ if and only if ${\alpha}_{ijkl}=0$ for any $i<j<k<l$. Hence $\mathcal P \subset {\mathbb P}(\mathfrak{so}(5))$ is the intersection of the following $5$ quadrics: $$ {\alpha}_{0123},\; {\alpha}_{0124},\; {\alpha}_{0134},\; {\alpha}_{0234},\; {\alpha}_{1234}. $$ These are exactly the Pl{\" u}cker quadrics defining $G(2,5)\subset {\mathbb P}^9$. {\it QED}\\ As we mentioned in the introduction, this theorem was proved earlier by Loray, Pereira, Touzet (see \cite{Loray1}) more generally by a different method.\\ \subsection{Poisson cohomology.} Let us compute Poisson cohomology of $X\subset {\mathbb P}^4$ for any Poisson structure $\omega \in \mathcal P$.\\ According to \cite{Stienon}, Corollary 4.26, Poisson cohomology is equal to the total cohomology of the following double complex ${\Omega}^{\star,\star}$:\\ $$ \begin{CD} \cdots @. \cdots @. \cdots @. \\ @AAA @AAA @AAA @. \\ {\Omega}^{0,0}(X,{\wedge}^2T_X) @>\bar{\partial}>> {\Omega}^{0,1}(X,{\wedge}^2T_X) @>\bar{\partial}>> {\Omega}^{0,2}(X,{\wedge}^2T_X) @>\bar{\partial}>> \cdots \\ @AAd_{\omega}A @AAd_{\omega}A @AAd_{\omega}A @. \\ {\Omega}^{0,0}(X,T_X) @>\bar{\partial}>> {\Omega}^{0,1}(X,T_X) @>\bar{\partial}>> {\Omega}^{0,2}(X,T_X) @>\bar{\partial}>> \cdots \\ @AAd_{\omega}A @AAd_{\omega}A @AAd_{\omega}A @. \\ {\Omega}^{0,0}(X,{\mathcal O}_X) @>\bar{\partial}>> {\Omega}^{0,1}(X,{\mathcal O}_X) @>\bar{\partial}>> {\Omega}^{0,2}(X,{\mathcal O}_X) @>\bar{\partial}>> \cdots \end{CD} $$ Here the vertical maps $d_{\omega}\colon {\wedge}^iT_X\rightarrow {\wedge}^{i+1}T_X$ are given by the Schouten bracket with $\omega$: $$ d_{\omega}(\nu)=\frac{1}{2}[\omega,\nu]. $$ \par Hence Poisson cohomology $H^{\star}_{Poisson}(X,\omega)$ is computed by the spectral sequence of this double complex: $$ E_2^{p,q}= {^{v}H}^{p} {^{h}H}^{q}({\Omega}^{\star,\star}) \Rightarrow H^{p+q}_{Poisson}(X,\omega). 
$$ The cohomology groups of horizontal complexes in the double complex ${\Omega}^{\star,\star}$ are exactly the cohomology groups of sheaves of multivector fields on $X$.\\ Hence from Lemma 1 it follows that $^{h}H^{q}({\Omega}^{p,\star})$ has the following shape:\\ \begin{center} \begin{tabular}{ccc} 0 & 0 & 0 \\ $H^0(X,{\mathcal O}_X(2))$ & 0 & 0 \\ $\uparrow d_{\omega}$ & & \\ $H^0(X,{\wedge}^2T_X)$ & 0 & 0 \\ 0 & $H^1(X,T_X)$ & 0 \\ $H^0(X,{\mathcal O}_X)$ & 0 & 0 \\ \end{tabular} \end{center} The matrix $C_{\omega}$ of the linear map $$ d_{\omega}\colon \mathfrak{so}(5)\cong H^0(X,{\wedge}^2T_X)\rightarrow H^0(X,{\mathcal O}_X(2))\cong H^0({\mathbb P}^4,{\mathcal O}(2)) $$ can be computed explicitly. This is done in the next Proposition 2.\\ {\bf Proposition 2.} {\it Let $\omega=\sum_{i<j}a_{ij}\cdot {\epsilon}_{ij}\in \mathcal P$. Then the images of the basis elements ${\epsilon}_{kl}\in \mathfrak{so}(5)$ under $d_{\omega}$ are as follows: \begin{gather*} d_{\omega}({\epsilon}_{01})=a_{23}\frac{\partial F}{\partial Z_4}-a_{24}\frac{\partial F}{\partial Z_3}+a_{34}\frac{\partial F}{\partial Z_2},\; d_{\omega}({\epsilon}_{02})=-a_{13}\frac{\partial F}{\partial Z_4}+a_{14}\frac{\partial F}{\partial Z_3}-a_{34}\frac{\partial F}{\partial Z_1}, \\ d_{\omega}({\epsilon}_{03})=a_{12}\frac{\partial F}{\partial Z_4}-a_{14}\frac{\partial F}{\partial Z_2}+a_{24}\frac{\partial F}{\partial Z_1},\; d_{\omega}({\epsilon}_{04})=-a_{12}\frac{\partial F}{\partial Z_3}+a_{13}\frac{\partial F}{\partial Z_2}-a_{23}\frac{\partial F}{\partial Z_1}, \\ d_{\omega}({\epsilon}_{12})=a_{03}\frac{\partial F}{\partial Z_4}-a_{04}\frac{\partial F}{\partial Z_3}+a_{34}\frac{\partial F}{\partial Z_0},\; d_{\omega}({\epsilon}_{13})=-a_{02}\frac{\partial F}{\partial Z_4}+a_{04}\frac{\partial F}{\partial Z_2}-a_{24}\frac{\partial F}{\partial Z_0}, \\ d_{\omega}({\epsilon}_{14})=a_{02}\frac{\partial F}{\partial Z_3}-a_{03}\frac{\partial F}{\partial Z_2}+a_{23}\frac{\partial F}{\partial Z_0},\; d_{\omega}({\epsilon}_{23})=a_{01}\frac{\partial F}{\partial Z_4}-a_{04}\frac{\partial F}{\partial Z_1}+a_{14}\frac{\partial F}{\partial Z_0}, \\ d_{\omega}({\epsilon}_{24})=-a_{01}\frac{\partial F}{\partial Z_3}+a_{03}\frac{\partial F}{\partial Z_1}-a_{13}\frac{\partial F}{\partial Z_0},\; d_{\omega}({\epsilon}_{34})=a_{01}\frac{\partial F}{\partial Z_2}-a_{02}\frac{\partial F}{\partial Z_1}+a_{12}\frac{\partial F}{\partial Z_0}. \end{gather*} }\\ {\it Proof:} $[\omega,{\epsilon}_{kl}]=\sum_{i<j}a_{ij}\cdot [{\epsilon}_{ij},{\epsilon}_{kl}]=2\cdot \sum_{i<j}a_{ij}\cdot C_{ijkl}$.\\ In particular, \begin{gather*} \frac{1}{2}[\omega,{\epsilon}_{01}]=a_{23}C_{0123}+a_{24}C_{0124}+a_{34}C_{0134},\; \frac{1}{2}[\omega,{\epsilon}_{02}]=-a_{13}C_{0123}-a_{14}C_{0124}+a_{34}C_{0234}, \\ \frac{1}{2}[\omega,{\epsilon}_{03}]=a_{12}C_{0123}-a_{14}C_{0134}-a_{24}C_{0234},\; \frac{1}{2}[\omega,{\epsilon}_{04}]=a_{12}C_{0124}+a_{13}C_{0134}+a_{23}C_{0234}, \\ \frac{1}{2}[\omega,{\epsilon}_{12}]=a_{03}C_{0123}+a_{04}C_{0124}+a_{34}C_{1234},\; \frac{1}{2}[\omega,{\epsilon}_{13}]=-a_{02}C_{0123}+a_{04}C_{0134}-a_{24}C_{1234}, \\ \frac{1}{2}[\omega,{\epsilon}_{14}]=-a_{02}C_{0124}-a_{03}C_{0134}+a_{23}C_{1234},\; \frac{1}{2}[\omega,{\epsilon}_{23}]=a_{01}C_{0123}+a_{04}C_{0234}+a_{14}C_{1234}, \\ \frac{1}{2}[\omega,{\epsilon}_{24}]=a_{01}C_{0124}-a_{03}C_{0234}-a_{13}C_{1234},\; \frac{1}{2}[\omega,{\epsilon}_{34}]=a_{01}C_{0134}+a_{02}C_{0234}+a_{12}C_{1234}. \end{gather*} Now one uses Lemma 2. 
{\it QED}\\ Hence the second sheet $E_2^{\star,\star}$ of the Laurent-Gengoux-Sti{\' e}non-Xu spectral sequence has the following shape:\\ \begin{center} \begin{tabular}{ccc} 0 & 0 & 0 \\ $coker(d_{\omega})$ & 0 & 0 \\ $ker(d_{\omega})$ & 0 & 0 \\ 0 & $H^1(X,T_X)$ & 0 \\ $k$ & 0 & 0 \\ \end{tabular} \end{center} Since all the differentials vanish, we have $$ E_{\infty}^{p,q}=E_2^{p,q}. $$ This proves the following result.\\ {\bf Theorem 2.} {\it Let $X\subset {\mathbb P}^4$ be a smooth cubic threefold and $\omega\in \mathcal P\subset {\mathbb P}(H^0(X,{\wedge}^2T_X))$ a Poisson structure on $X$. Then \begin{itemize} \item $H^0_{Poisson}(X,\omega)\cong H^0(X,{\mathcal O}_X)=k$, \item $H^1_{Poisson}(X,\omega)=0$, \item $H^2_{Poisson}(X,\omega)$ is an extension of $ker(d_{\omega})$ by $H^1(X,T_X)$, \item $H^3_{Poisson}(X,\omega)\cong coker(d_{\omega})$. \end{itemize}} In particular, $$ dim(H^2_{Poisson}(X,\omega))=dim(ker(d_{\omega})) + dim(H^1(X,T_X))=10-rk(C_{\omega})+10=20-rk(C_{\omega}) $$ and $$ dim(H^3_{Poisson}(X,\omega))=15-rk(C_{\omega}). $$ \section{Del Pezzo quintic threefold.} Let $X$ be the (smooth) del Pezzo quintic threefold. For the standard properties of $X$ (as well as of Fano varieties in general) we refer the reader to \cite{Fano} and to the papers of Iskovskikh and Mukai.\\ The description of $X$ most important for us realizes it as a general codimension $3$ linear section of the Grassmannian $G(2,5)\subset {\mathbb P}^9$ of lines in ${\mathbb P}^4$ in its Pl{\" u}cker embedding.\\ {\bf Remark.} In what follows we use essentially the same approach (via cohomology and spaces of global sections of vector bundles) as in Section 2 in order to work with Poisson structures on $X$. For this the description of $X$ as a linear section of the Grassmannian is essential. Alternatively, one can use the fact that $X$ is a compactification of ${\mathbb C}^3$. This allows one to work on ${\mathbb C}^3$ instead of $X$ (provided that one keeps track of which (bi)vector fields extend to the whole of $X$). Indeed, using, for example, \cite{Kimura}, section 11, it is straightforward to describe $X$ in terms of equations in ${\mathbb P}^6$ and to find explicitly a scroll (with one double line) on $X$, whose complement is exactly ${\mathbb C}^3$.\\ Let us denote by $Z_0,Z_1,Z_2,Z_3,Z_4,Z_5,Z_6,Z_7,Z_8,Z_9$ the homogeneous coordinates on ${\mathbb P}^9$. Then $G(2,5)\subset {\mathbb P}^9$ is given by the intersection of the following quadrics (see \cite{Mukai}, for example): \begin{gather*} p_1=Z_0Z_7-Z_1Z_5+Z_2Z_4,\;\; p_2=Z_0Z_8-Z_1Z_6+Z_3Z_4,\;\; p_3=Z_0Z_9-Z_2Z_6+Z_3Z_5,\\ p_4=Z_1Z_9-Z_2Z_8+Z_3Z_7,\;\;\;\;\;\; p_5=Z_4Z_9-Z_8Z_5+Z_6Z_7. \end{gather*} We will use the following hyperplanes in order to cut out $X$ from the Grassmannian (see \cite{Kimura}, section 11): \begin{gather*} {\lambda}_1=Z_0+Z_7,\;\; {\lambda}_2=Z_4+Z_9,\;\; {\lambda}_3=Z_1+Z_6. \end{gather*} (The further intersection with one of the coordinate hyperplanes gives the scroll mentioned in the Remark above, whose complement is ${\mathbb C}^3$.)\\ \subsection{Cohomology computations.} Let $G=G(2,5)\subset {\mathbb P}^9$ and ${\Pi}_i\subset {\mathbb P}^9, i=1,2,3$ be the hyperplanes (given by the equations ${\lambda}_i=0$ as above). Let $X_0=G$, $X_1=G\cap {\Pi}_1$, $X_2=G\cap {\Pi}_1 \cap {\Pi}_2$, $X_3=G\cap {\Pi}_1 \cap {\Pi}_2 \cap {\Pi}_3=X$. Then we have a chain of embeddings $$ X=X_3\subset X_2\subset X_1 \subset X_0=G\subset {\mathbb P}^9.
$$ As in the case of cubic threefolds, we need to show the vanishing of higher cohomology groups and describe the spaces of global sections of sheaves of multivector fields on $X$.\\ Let $S$ and $Q$ denote the tautological (of rank $2$) and the quotient (of rank $3$) vector bundles on $G=G(2,5)$.\\ We will use the exact sequence \begin{equation} \label{TES1} 0\rightarrow {\mathcal O}_{X_i}(-1)\rightarrow {\mathcal O}_{X_i} \rightarrow {\mathcal O}_{X_{i+1}} \rightarrow 0 \end{equation} on $X_i$, the tautological exact sequence \begin{equation} \label{TES2} 0\rightarrow S \rightarrow {\mathcal O}_{G}^{\oplus 5} \rightarrow Q \rightarrow 0 \end{equation} on $G$ and the conormal bundle sequence \begin{equation} \label{TES1pp} 0\rightarrow N_{X/G}^{\vee} \rightarrow {\Omega}_{G}^{1} {\mid}_X \rightarrow {\Omega}_{X}^{1} \rightarrow 0 \end{equation} on $X$. In order to compute cohomology of various vector bundles on $G$ we will use general formulas from \cite{Kapranov} (section 3) and \cite{Fonarev} (section 2.2). In the notation of \cite{Kapranov} $S^{\bot}=Q^{\vee}$.\\ Note that $T_G\cong S^{\vee}\otimes Q$, ${\Omega}^1_{G}\cong S\otimes Q^{\vee}$, $N_{G/{\mathbb P}^9}\cong Q^{\vee} (2)$ (see \cite{Manivel}, section 5.4) and $N_{X/G}\cong \oplus_{i=1}^3 {\mathcal O}_X(1)$.\\ In the next Lemma we collect some vanishing results for cohomology of certain vector bundles on the Grassmannian $G=G(2,5)$. They are all consequences of the Borel-Weil-Bott theorem (via the general formulas of Fonarev \cite{Fonarev} and Kapranov \cite{Kapranov}) and are well known.\\ {\bf Lemma 3.} {\it Let $G=G(2,5)$. Then \begin{itemize} \item[(a)] $H^i(G,{\mathcal O}(-d))=0$ for any $i\geq 1$, $d\leq 3$, \item[(b)] $H^i(G,{\Omega}^1_G (2-k))=0$ for any $i\geq 1$, $k=0,1,3,4$, $H^i(G,{\Omega}^1_G )=0$ for $i\neq 1$ and $H^1(G,{\Omega}^1_G)\cong k$, \item[(c)] $H^0(G,{\Omega}^1_G (d))=0$ for $d=-1,0,1$, \item[(d)] $H^0(G,{\Omega}^1_G (2))\cong k^{\oplus 45}$, \item[(e)] $H^1(G,{\mathcal O}(d))=0$ for any $d\in \mathbb Z$, \item[(f)] $H^0(G,{\mathcal O}(2))\cong k^{\oplus 50}$. \end{itemize}} {\it Proof:} In the notation of \cite{Fonarev} and \cite{Kapranov} ${\mathcal O}(-d)={\Sigma}^{-d,-d}S^{\vee}$ and ${\Omega}^1_{G}(d)={\Sigma}^{d,d-1}S^{\vee}\otimes {\Sigma}^{1,0,0}S^{\bot}$.\\ (a) follows, for example, from \cite{Kapranov}, Lemma 3.2, and (b), (c), (e) follow from \cite{Fonarev}, section 2.2.\\ (d) According to \cite{Fonarev}, the dual of $H^0(G, {\Omega}^1_G (2))$ is the irreducible representation of $GL(5)$ corresponding to the Young diagram \begin{Young} &\cr \cr \cr \end{Young}. By the hook formula, its dimension is $45$.\\ (f) According to \cite{Fonarev}, section 2.2, the dual of $H^0(G, {\mathcal O}(2))$ is the irreducible representation of $GL(5)$ corresponding to the Young diagram \begin{Young} &\cr &\cr \end{Young}. By the hook formula, its dimension is $50$. {\it QED}\\ The following observation will also be employed.\\ {\bf Remark.} Let $\mathcal E$ be a vector bundle on $G$. Then the vanishing of $H^i(X,\mathcal E)$ follows from the vanishing of $H^{i+k}(G,{\mathcal E}(-k))$ for all $k=0,1,2,3$. Indeed, one multiplies~(\ref{TES1}) by $\mathcal E$, takes cohomology and applies induction.\\ {\bf Lemma 4.} {\it Let $X$ be the (smooth) del Pezzo quintic threefold.
Then \begin{itemize} \item[(a)] $H^0(X,{\mathcal O}_X)=k$, $H^i(X,{\mathcal O}_X)=H^i(X,{\mathcal O}_X(1))=0$ for any $i\geq 1$, \item[(b)] $H^0(X,T_X)\cong k^{\oplus 3}$, $H^i(X,T_X)=0$ for any $i\geq 1$, \item[(c)] $H^0(X,{\wedge}^2T_X)\cong k^{\oplus 21}$, $H^i(X,{\wedge}^2T_X)=0$ for any $i\geq 1$, \item[(d)] $H^0(X,-K_X)\cong k^{\oplus 23}$, $H^i(X,-K_X)=0$ for any $i\geq 1$. \end{itemize}} {\it Proof:} (a) This follows from the Remark above and Lemma 3(a).\\ (b) $H^i(X,T_X)\cong H^i(X,{\Omega}^2_X(2))=0$ for $i\geq 2$ by the Kodaira-Nakano vanishing theorem, while $H^1(X,T_X)=0$ expresses the well-known rigidity of $X$: the smooth quintic del Pezzo threefold is unique up to isomorphism. $H^0(X,T_X)\cong k^{\oplus 3}$ is well known (see \cite{Mukai2}, for example).\\ (c) Note that ${\wedge}^2T_X\cong {\Omega}^1_X(2)$.\\ Multiplying~(\ref{TES1pp}) by ${\mathcal O}(2)$ and taking cohomology, we obtain the short exact sequence \begin{equation} \label{TES5} 0\rightarrow H^0(X,{\mathcal O}(1))^{\oplus 3} \rightarrow H^0(X,{\Omega}_{G}^{1}(2)) \rightarrow H^0(X,{\Omega}_{X}^{1}(2)) \rightarrow 0 \end{equation} and isomorphisms $$ H^i(X,{\Omega}_{G}^{1}(2))\cong H^i(X,{\Omega}_{X}^{1}(2))\;\; \mbox{for any}\;\; i\geq 1. $$ By the Remark above the vanishing of $H^i(X,{\Omega}_{G}^{1}(2))$, $i\geq 1$ follows from the vanishing of $H^{i+k}(G,{\Omega}_{G}^{1}(2-k))$ for any $k=0,1,2,3$ and any $i\geq 1$. This follows from Lemma 3(b).\\ Moreover, by~(\ref{TES5}) $$ dim(H^0(X,{\Omega}_{X}^{1}(2)))=dim(H^0(X,{\Omega}_{G}^{1}(2)))-3dim(H^0(X,{\mathcal O}(1))). $$ It follows from part (a) that $dim(H^0(X,{\mathcal O}(1)))=dim(H^0(G,{\mathcal O}(1)))-3=7$. Let us compute $dim(H^0(X,{\Omega}_{G}^{1}(2)))$.\\ Taking cohomology of the short exact sequences \begin{gather*} 0\rightarrow {\Omega}_{G}^{1}(1) {\mid}_{X_2} \rightarrow {\Omega}_{G}^{1}(2) {\mid}_{X_2} \rightarrow {\Omega}_{G}^{1}(2) {\mid}_{X} \rightarrow 0,\\ 0\rightarrow {\Omega}_{G}^{1}(1) {\mid}_{X_1} \rightarrow {\Omega}_{G}^{1}(2) {\mid}_{X_1} \rightarrow {\Omega}_{G}^{1}(2) {\mid}_{X_2} \rightarrow 0,\\ 0\rightarrow {\Omega}_{G}^{1}(1) \rightarrow {\Omega}_{G}^{1}(2) \rightarrow {\Omega}_{G}^{1}(2) {\mid}_{X_1} \rightarrow 0,\\ 0\rightarrow {\Omega}_{G}^{1} {\mid}_{X_1} \rightarrow {\Omega}_{G}^{1}(1) {\mid}_{X_1} \rightarrow {\Omega}_{G}^{1}(1) {\mid}_{X_2} \rightarrow 0,\\ 0\rightarrow {\Omega}_{G}^{1} \rightarrow {\Omega}_{G}^{1}(1) \rightarrow {\Omega}_{G}^{1}(1) {\mid}_{X_1} \rightarrow 0,\\ 0\rightarrow {\Omega}_{G}^{1}(-1) \rightarrow {\Omega}_{G}^{1} \rightarrow {\Omega}_{G}^{1} {\mid}_{X_1} \rightarrow 0, \end{gather*} and applying the vanishing observations from Lemma 3, one obtains the following exact sequences: \begin{gather*} 0\rightarrow H^0(X_2,{\Omega}_{G}^{1}(1)) \rightarrow H^0(X_2,{\Omega}_{G}^{1}(2)) \rightarrow H^0(X,{\Omega}_{G}^{1}(2)) \rightarrow 0,\\ 0\rightarrow H^0(X_1,{\Omega}_{G}^{1}(1)) \rightarrow H^0(X_1,{\Omega}_{G}^{1}(2)) \rightarrow H^0(X_2,{\Omega}_{G}^{1}(2)) \rightarrow 0,\\ 0\rightarrow H^0( G, {\Omega}_{G}^{1}(2)) \rightarrow H^0(X_1,{\Omega}_{G}^{1}(2)) \rightarrow 0,\\ 0\rightarrow H^0(X_1,{\Omega}_{G}^{1}) \rightarrow H^0(X_1,{\Omega}_{G}^{1}(1)) \rightarrow H^0(X_2,{\Omega}_{G}^{1}(1)) \rightarrow H^1(X_1,{\Omega}_{G}^{1}) \rightarrow 0,\\ 0\rightarrow H^0(X_1,{\Omega}_{G}^{1}(1))\rightarrow H^1(G,{\Omega}_{G}^{1}) \rightarrow 0,\\ 0\rightarrow H^0(X_1,{\Omega}_{G}^{1}) \rightarrow 0,\\ 0\rightarrow H^1(G,{\Omega}_{G}^{1})\rightarrow H^1(X_1,{\Omega}_{G}^{1}) \rightarrow 0.
\end{gather*} This implies that $$ dim(H^0(X,{\Omega}_{G}^{1}(2)))=dim(H^0(X_1,{\Omega}_{G}^{1}(2)))-3 dim (H^0(X_1,{\Omega}_{G}^{1}(1)))=dim (H^0(G,{\Omega}_{G}^{1}(2)))-3=42. $$ Hence $dim(H^0(X,{\Omega}_{X}^{1}(2)))=42-3\cdot 7=21$.\\ (d) Since $\mathcal O (-K_X)\cong {\mathcal O}(2)$ on $X$, part (d) follows from the Remark above and Lemma 3.\\ In order to compute $dim(H^0(X,{\mathcal O}(2)))$, one can use the same method as in part (c), i.e., the induction along the chain $X=X_3\subset X_2\subset X_1\subset X_0=G$. Then using Lemma 3 one obtains that $$ dim(H^0(X,{\mathcal O}(2)))=dim(H^0(G,{\mathcal O}(2)))-3\cdot dim(H^0(G,{\mathcal O}(1)))+3=50-3\cdot 10+3=23. $$ {\it QED}\\ Now let us give descriptions of the spaces of global sections $H^0(X,T_X)$, $H^0(X,{\wedge}^2T_X)$ and $H^0(X,-K_X)$.\\ Let $X$ be the del Pezzo quintic threefold as above (i.e., the intersection of the quadrics $p_1,p_2,p_3,p_4,p_5$ and the hyperplanes ${\lambda}_1,{\lambda}_2,{\lambda}_3$ in ${\mathbb P}^9$).\\ {\bf Lemma 5.} {\it As a basis of $H^0(X,T_X)\cong \mathfrak{sl}(2)$ one can take the restrictions to $X\subset G\subset {\mathbb P}^9$ of the following vector fields on ${\mathbb P}^9$: \begin{itemize} \item $v_1=2Z_1\cdot (\frac{\partial}{\partial Z_1}-\frac{\partial}{\partial Z_6})-Z_2\cdot \frac{\partial}{\partial Z_2}+3Z_3\cdot \frac{\partial}{\partial Z_3}+Z_4\cdot (\frac{\partial}{\partial Z_4}-\frac{\partial}{\partial Z_9})-2Z_5\cdot \frac{\partial}{\partial Z_5}+4Z_8\cdot \frac{\partial}{\partial Z_8}$, \item $v_2=Z_2\cdot (\frac{\partial}{\partial Z_0}-\frac{\partial}{\partial Z_7})+3Z_4\cdot (\frac{\partial}{\partial Z_1}-\frac{\partial}{\partial Z_6})+3Z_5\cdot \frac{\partial}{\partial Z_2}-5Z_1\cdot \frac{\partial}{\partial Z_3}+2Z_0\cdot (\frac{\partial}{\partial Z_4}-\frac{\partial}{\partial Z_9})-Z_3\cdot \frac{\partial}{\partial Z_8}$, \item $v_3=-3Z_4\cdot (\frac{\partial}{\partial Z_0}-\frac{\partial}{\partial Z_7})+Z_3\cdot(\frac{\partial}{\partial Z_1}-\frac{\partial}{\partial Z_6})-5Z_0\cdot \frac{\partial}{\partial Z_2}+3Z_8\cdot \frac{\partial}{\partial Z_3}-2Z_1\cdot (\frac{\partial}{\partial Z_4}-\frac{\partial}{\partial Z_9})-Z_2\cdot \frac{\partial}{\partial Z_5}$. \end{itemize}} {\it Proof:} The fact that the $v_i$ restrict to vector fields on $X$ follows from the vanishing $$ v_i(dp_j)=0, \;\;\; j=1,2,3,4,5\;\;\mbox{and }\;\; v_i(d{\lambda}_k)=0, \;\;\; k=1,2,3. $$ The fact that $v_1,v_2,v_3$ are linearly independent can be checked locally on $X$ (in the open set $Z_8=1$, for example; see Lemma 8 below). {\it QED}\\ {\bf Lemma 6.} {\it One can identify $H^0(X,{\wedge}^2T_X)$ with the Lie algebra (viewed merely as a vector space) $\mathfrak{so}(7)$ of skew-symmetric $7\times 7$ matrices.\\ (Or with ${\wedge}^2V$, where $V=H^0(X,{\mathcal O}(1))$.)\\ As a basis of $H^0(X,{\wedge}^2T_X)\cong H^0(X,{\Omega}^1_X(2))$ one can take the restrictions to $X\subset G\subset {\mathbb P}^9$ of the forms ${\epsilon}_{ij}=Z_jdZ_i-Z_idZ_j$ on ${\mathbb P}^9$, where $0\leq i < j \leq 5$ or $0\leq i \leq 5$, $j=8$.}\\ {\it Proof:} The fact that the restrictions of ${\epsilon}_{ij}$ to $X$ give rise to $21$ linearly independent global sections of ${\Omega}^1_X(2)$ can be checked locally on $X$ (in the open set $Z_8=1$, for example; see Lemma 8 below). Since $dim(H^0(X,{\wedge}^2T_X))=21$ by Lemma 4, the result follows.
{\it QED}\\ From now on we will always assume that the range of indices $i<j$ of ${\epsilon}_{ij}$ is the same as in Lemma 6.\\ {\bf Lemma 7.} {\it As a basis of $H^0(X,-K_X)\cong H^0(X,{\mathcal O}_X(2))$ one can take the restrictions to $X\subset G\subset {\mathbb P}^9$ of the quadratic forms ${z}_{ij}=Z_iZ_j$ on ${\mathbb P}^9$, where $0\leq i \leq j \leq 9$, $i,j \neq 6,7,9$ and $(ij)\neq (04),(14),(24),(34),(44)$.}\\ {\it Proof:} Since $dim(H^0(X,{\mathcal O}_X(2)))=23$ and we are given precisely $23$ elements $z_{ij}$, it is enough to check locally that they are linearly independent. {\it QED}\\ \subsection{Matrix elements of Schouten brackets.} Given an element $\omega\in H^0(X,{\wedge}^2T_X)$, we will need to know the matrices of the linear maps $$ {\alpha}_{\omega}\colon H^0(X,T_X)\rightarrow H^0(X,{\wedge}^2T_X)\cong H^0(X,{\Omega}^1_X(2)), \;\;\; \nu\mapsto [\nu ,\omega] $$ and $$ {\beta}_{\omega}\colon H^0(X,{\wedge}^2T_X)\rightarrow H^0(X,-K_X)\cong H^0(X,{\mathcal O}(2)), \;\;\; \nu\mapsto [\nu ,\omega] $$ Let us denote these matrices by $A_{\omega}$ and $B_{\omega}$ respectively. In this subsection we compute them relative to the bases in $H^0(X,{\wedge}^jT_X)$ introduced in Lemma 5, Lemma 6 and Lemma 7.\\ In order to do this, it is sufficient to compute the images ${\alpha}_{\omega}(v_i)$, $i=1,2,3$ and ${\beta}_{\omega}({\epsilon}_{ij})$, $i<j$.\\ If $\omega=\sum_{i<j}a_{ij}\cdot {\epsilon}_{ij}$, then $$ {\alpha}_{\omega}(v_i)=\sum_{j<k}a_{jk}\cdot [v_i,{\epsilon}_{jk}]\;\; \mbox{and}\;\; {\beta}_{\omega}({\epsilon}_{ij})=\sum_{k<l}a_{kl}\cdot [{\epsilon}_{ij},{\epsilon}_{kl}]. $$ Hence $A_{\omega}$ and $B_{\omega}$ are determined by $A_{ijk}=[v_i,{\epsilon}_{jk}]$ and $B_{ijkl}=\frac{1}{2}[{\epsilon}_{ij},{\epsilon}_{kl}]$. Let us compute them.\\ {\bf Lemma 8.} {\it \begin{gather*} A_{101}=0,\; A_{102}=-3{\epsilon}_{02},\; A_{103}={\epsilon}_{03},\; A_{104}=-{\epsilon}_{04},\; A_{105}=-4{\epsilon}_{05},\; A_{108}=2{\epsilon}_{08},\\ A_{112}=-{\epsilon}_{12},\; A_{113}=3{\epsilon}_{13},\; A_{114}={\epsilon}_{14},\; A_{115}=-2{\epsilon}_{15},\; A_{118}=4{\epsilon}_{18},\; A_{123}=0,\\ A_{124}=-2{\epsilon}_{24},\; A_{125}=-5{\epsilon}_{25},\; A_{128}={\epsilon}_{28},\; A_{134}=2{\epsilon}_{34},\; A_{135}=-{\epsilon}_{35},\\ A_{138}=5{\epsilon}_{38},\; A_{145}=-3{\epsilon}_{45},\; A_{148}=3{\epsilon}_{48},\; A_{158}=0,\; A_{201}=3{\epsilon}_{04}-{\epsilon}_{12},\\ A_{202}=3{\epsilon}_{05},\; A_{203}=-5{\epsilon}_{01}+{\epsilon}_{23},\; A_{204}={\epsilon}_{24},\; A_{205}={\epsilon}_{25},\; A_{208}=-{\epsilon}_{03}+{\epsilon}_{28},\\ A_{212}=3{\epsilon}_{15}-3{\epsilon}_{24},\; A_{213}=-3{\epsilon}_{34},\; A_{214}=-2{\epsilon}_{01},\; A_{215}=3{\epsilon}_{45},\\ A_{218}=-{\epsilon}_{13}+3{\epsilon}_{48},\; A_{223}=5{\epsilon}_{12}-3{\epsilon}_{35},\; A_{224}=-2{\epsilon}_{02}-3{\epsilon}_{45},\; A_{225}=0,\\ A_{228}=-{\epsilon}_{23}+3{\epsilon}_{58},\; A_{234}=-3{\epsilon}_{03}-5{\epsilon}_{14},\; A_{235}=-5{\epsilon}_{15},\; A_{238}=-5{\epsilon}_{18},\\ A_{245}=2{\epsilon}_{05},\; A_{248}=3{\epsilon}_{08}+{\epsilon}_{34},\; A_{258}={\epsilon}_{35},\; A_{301}={\epsilon}_{03}+3{\epsilon}_{14},\; A_{302}=3{\epsilon}_{24},\\ A_{303}=3{\epsilon}_{08}+3{\epsilon}_{34},\; A_{304}=-2{\epsilon}_{01},\; A_{305}=-{\epsilon}_{02}-3{\epsilon}_{45},\; A_{308}=-3{\epsilon}_{48},\\ A_{312}=5{\epsilon}_{01}-{\epsilon}_{23},\; A_{313}=3{\epsilon}_{18},\; A_{314}={\epsilon}_{34},\; A_{315}=-{\epsilon}_{12}+{\epsilon}_{35},\; A_{318}={\epsilon}_{38},\\ 
A_{323}=-5{\epsilon}_{03}+3{\epsilon}_{28},\; A_{324}=-5{\epsilon}_{04}+2{\epsilon}_{12},\; A_{325}=-5{\epsilon}_{05},\; A_{328}=-5{\epsilon}_{08},\\ A_{334}=2{\epsilon}_{13}-3{\epsilon}_{48},\; A_{335}={\epsilon}_{23}-3{\epsilon}_{58}, A_{338}=0,\; A_{345}=-2{\epsilon}_{15}+{\epsilon}_{24},\\ A_{348}=-2{\epsilon}_{18},\; A_{358}=-{\epsilon}_{28}. \end{gather*}} {\it Proof:} It is enough to work locally. Without loss of generality we can assume that $Z_8=1$. Let us denote by $x_i=\frac{Z_i}{Z_8}, 0\leq i\leq 9, i\neq 8$ the affine coordinates on this open subset of ${\mathbb P}^9$.\\ Over this open set $X$ is isomorphic to the affine space ${\mathbb C}^3$ with coordinates $x_1,x_3,x_4$ and we have on $X$ the following relations: \begin{gather*} x_0=-x_1^2-x_3x_4,\;\; x_2=-x_1x_4+x_3x_1^2+x_3^2x_4,\;\; x_5=-x_1^3-x_4^2-x_1x_3x_4,\\ x_9=-x_4,\;\;\;\;\; x_7=-x_0,\;\;\;\;\; x_6=-x_1. \end{gather*} The restrictions of the vector fields $v_i$, $i=1,2,3$ from Lemma 5 to this open subset have the following form: \begin{itemize} \item $v_1=-2x_1\frac{\partial}{\partial x_1}-x_3\frac{\partial}{\partial x_3}-3x_4\frac{\partial}{\partial x_4},$ \item $v_2=(x_1x_3+3x_4)\frac{\partial}{\partial x_1}+(x_3^2-5x_1)\frac{\partial}{\partial x_3}-(2x_1^2+x_3x_4)\frac{\partial}{\partial x_4},$ \item $v_3=x_3\frac{\partial}{\partial x_1}+3\frac{\partial}{\partial x_3}-2x_1\frac{\partial}{\partial x_4}.$ \end{itemize} The restrictions of 1-forms ${\epsilon}_{ij}=Z_jdZ_i-Z_idZ_j$, $i<j$ from Lemma 6 to this open subset are just $x_jdx_i-x_idx_j$. One obtains: \begin{itemize} \item ${\epsilon}_{01}=(x_3x_4-x_1^2)dx_1-x_1x_4dx_3-x_1x_3dx_4,$ \item ${\epsilon}_{02}=(x_1^2x_4-x_3x_4^2)dx_1+(x_1^4+x_3^2x_4^2+2x_1^2x_3x_4+x_1x_4^2)dx_3-x_1^3dx_4,$ \item ${\epsilon}_{03}=-2x_1x_3dx_1+x_1^2dx_3-x_3^2dx_4,$ \item ${\epsilon}_{04}=-2x_1x_4dx_1-x_4^2dx_3+x_1^2dx_4,$ \item ${\epsilon}_{05}=(2x_1x_4^2-2x_1^2x_3x_4-x_1^4-x_3^2x_4^2)dx_1+x_4^3dx_3-(x_3x_4^2+2x_1^2x_4)dx_4,$ \item ${\epsilon}_{08}=-2x_1dx_1-x_4dx_3-x_3dx_4,$ \item ${\epsilon}_{12}=(x_3^2x_4-x_1^2x_3)dx_1-(x_1^3+2x_1x_3x_4)dx_3+(x_1^2-x_1x_3^2)dx_4,$ \item ${\epsilon}_{13}=x_3dx_1-x_1dx_3,$ \item ${\epsilon}_{14}=x_4dx_1-x_1dx_4,$ \item ${\epsilon}_{15}=(2x_1^3-x_4^2)dx_1+x_1^2x_4dx_3+(x_1^2x_3+2x_1x_4)dx_4,$ \item ${\epsilon}_{18}=dx_1,$ \item ${\epsilon}_{23}=(2x_1x_3^2-x_3x_4)dx_1+(x_3^2x_4+x_1x_4)dx_3+(x_3^3-x_1x_3)dx_4,$ \item ${\epsilon}_{24}=(2x_1x_3x_4-x_4^2)dx_1+(x_1^2x_4+2x_3x_4^2)dx_3-x_1^2x_3dx_4,$ \item ${\epsilon}_{25}=(x_1^4x_3-2x_1x_3x_4^2+x_4^3-2x_1^3x_4+2x_1^2x_3^2x_4+x_3^3x_4^2)dx_1-(x_1^5+2x_1^2x_4^2+2x_1^3x_3x_4+2x_3x_4^3+x_1x_3^2x_4^2)dx_3+(x_1^4+2x_1^2x_3x_4-x_1x_4^2+x_3^2x_4^2)dx_4,$ \item ${\epsilon}_{28}=(2x_1x_3-x_4)dx_1+(x_1^2+2x_3x_4)dx_3+(x_3^2-x_1)dx_4,$ \item ${\epsilon}_{34}=x_4dx_3-x_3dx_4,$ \item ${\epsilon}_{35}=(3x_1^2x_3+x_3^2x_4)dx_1-(x_1^3+x_4^2)dx_3+(2x_3x_4+x_1x_3^2)dx_4,$ \item ${\epsilon}_{38}=dx_3,$ \item ${\epsilon}_{45}=(3x_1^2x_4+x_3x_4^2)dx_1+x_1x_4^2dx_3+(x_4^2-x_1^3)dx_4,$ \item ${\epsilon}_{48}=dx_4,$ \item ${\epsilon}_{58}=-(3x_1^2+x_3x_4)dx_1-x_1x_4dx_3-(2x_4+x_1x_3)dx_4.$ \end{itemize} In order to compute $A_{ijk}=[v_i,{\epsilon}_{jk}]$ one has to know $[a\frac{\partial}{\partial x_i}, bdx_j]$, where $i,j\in \{ 1,3,4 \}$.\\ The isomorphism ${\wedge}^2T_X\cong {\Omega}^1_X(2)$ identifies $dx_1$ with $\frac{\partial}{\partial x_3}\wedge \frac{\partial}{\partial x_4}$, $dx_3$ with $-\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_4}$ and $dx_4$ with $\frac{\partial}{\partial x_1}\wedge 
\frac{\partial}{\partial x_3}$.\\ By definition of the Schouten bracket \cite{Bondal} one has for $j<k$ $$ \left[ a\frac{\partial}{\partial x_i}, b\frac{\partial}{\partial x_j}\wedge \frac{\partial}{\partial x_k}\right]=a\frac{\partial b}{\partial x_i} \left(\frac{\partial}{\partial x_j}\wedge \frac{\partial}{\partial x_k}\right)-b\frac{\partial a}{\partial x_j} \left(\frac{\partial}{\partial x_i}\wedge \frac{\partial}{\partial x_k}\right)+b\frac{\partial a}{\partial x_k} \left(\frac{\partial}{\partial x_i}\wedge \frac{\partial}{\partial x_j}\right), $$ if $i\neq j, i\neq k$ and $$ \left[ a\frac{\partial}{\partial x_i}, b\frac{\partial}{\partial x_j}\wedge \frac{\partial}{\partial x_k}\right]=\left(a\frac{\partial b}{\partial x_i}-b\frac{\partial a}{\partial x_i} \right)\left(\frac{\partial}{\partial x_j}\wedge \frac{\partial}{\partial x_k}\right), $$ if $i= j$ or $i= k$.\\ These formulas allow one to check the expressions for $A_{ijk}$ stated in the Lemma. {\it QED}\\ {\bf Lemma 9.} {\it $B_{ijkl}$ is totally antisymmetric with respect to its indices. In particular, $B_{ijkl}$ are determined completely by the following values: \begin{gather*} B_{0123}=-z_{01}+z_{23},\;\; B_{0124}=-z_{15},\;\; B_{0125}=-z_{25},\;\; B_{0128}=-2z_{03}-z_{28},\\ B_{0134}=-z_{08},\;\; B_{0135}=2z_{12}+z_{35},\;\; B_{0138}=z_{38},\;\; B_{0145}=z_{45},\;\; B_{0148}=-z_{48},\\ B_{0158}=2z_{01}-2z_{58},\;\; B_{0234}=-z_{12}+z_{35},\;\; B_{0235}=z_{05}-z_{22},\;\; B_{0238}=z_{08}+3z_{11},\\ B_{0245}=z_{55},\;\; B_{0248}=-z_{01}-z_{58},\;\; B_{0258}=-z_{02}+2z_{45},\;\; B_{0345}=-z_{00}-2z_{15},\\ B_{0348}=2z_{18},\;\; B_{0358}=3z_{03}+4z_{28},\;\; B_{0458}=z_{12}+z_{35},\;\; B_{1234}=-z_{03}+z_{28},\\ B_{1235}=-3z_{00}-z_{15},\;\; B_{1238}=-z_{18}+z_{33},\;\; B_{1245}=-2z_{05},\;\; B_{1248}=2z_{08}+z_{11},\\ B_{1258}=-3z_{12}-4z_{35},\;\; B_{1345}=z_{01}+z_{58},\;\; B_{1348}=-z_{88},\;\; B_{1358}=z_{13}+2z_{48},\\ B_{1458}=z_{03}+z_{28},\;\; B_{2345}=-2z_{02}-z_{45},\;\; B_{2348}=-2z_{13}+z_{48},\\ B_{2358}=-5z_{01}-z_{23}+2z_{58},\;\; B_{2458}=2z_{00}-z_{15},\;\; B_{3458}=-z_{08}+2z_{11}. \end{gather*} } {\it Proof:} We use the same set up and notation as in the proof of Lemma 8 and also restrict to the open set $Z_8=1$.\\ By Proposition 1, \begin{multline*} B_{ijkl}=\frac{1}{2}[{\epsilon}_{ij},{\epsilon}_{kl}]=\\ =\frac{1}{2}[x_jdx_i-x_idx_j,x_ldx_k-x_kdx_l]=\frac{1/2}{Vol}\left( (x_jdx_i-x_idx_j)\wedge (dx_l\wedge dx_k-dx_k\wedge dx_l)+\right.\\ +\left.(dx_j\wedge dx_i-dx_i\wedge dx_j)\wedge (x_ldx_k-x_kdx_l) \right)=\frac{1}{Vol}\left( x_i\cdot dx_j\wedge dx_k\wedge dx_l-\right.\\ \left. -x_j\cdot dx_i\wedge dx_k\wedge dx_l+x_k\cdot dx_i\wedge dx_j\wedge dx_l-x_l\cdot dx_i\wedge dx_j\wedge dx_k \right). \end{multline*} We can take $Vol=dx_1\wedge dx_3\wedge dx_4$. Then one obtains the expressions stated in the Lemma. {\it QED}\\ \subsection{Variety of Poisson structures.} Let $\omega=\sum_{i<j} a_{ij}{\epsilon}_{ij}$ be a point in ${\mathbb P}(H^0(X,{\wedge}^2T_X))\cong {\mathbb P}(\mathfrak{so}(7))$. 
Let us find the equations of the variety of Poisson structures $\mathcal P\subset {\mathbb P}(H^0(X,{\wedge}^2T_X))$ on $X$ in terms of the homogeneous coordinates $a_{ij}$.\\ $$ [\omega,\omega]=\sum_{i<j}\sum_{k<l}a_{ij}a_{kl}[{\epsilon}_{ij},{\epsilon}_{kl}]=4\cdot \sum_{i<j<k<l} {\alpha}_{ijkl}\cdot B_{ijkl}, $$ where ${\alpha}_{ijkl}=a_{ij}a_{kl}-a_{ik}a_{jl}+a_{il}a_{jk}$.\\ Using Lemma 9 one can find the components of $[\omega,\omega]\in H^0(X,{\mathcal O}_X(2))$ relative to the basis $z_{ij}$, $i\leq j$ from Lemma 7. Then $[\omega,\omega]=0$ is equivalent to the following equations: \begin{gather*} {\alpha}_{0348}={\alpha}_{0125}={\alpha}_{0138}={\alpha}_{1348}={\alpha}_{0245}={\alpha}_{1238}={\alpha}_{0235}={\alpha}_{1245}=0,\\ {\alpha}_{0123}={\alpha}_{2358},\;\; {\alpha}_{0145}=5{\alpha}_{2345},\;\; {\alpha}_{0148}=5{\alpha}_{2348},\;\; {\alpha}_{0258}=-2 {\alpha}_{2345},\\ {\alpha}_{1358}=2{\alpha}_{2348},\;\; {\alpha}_{0345}+3{\alpha}_{1235}=2{\alpha}_{2458},\;\; {\alpha}_{0248}+6{\alpha}_{2358}=2{\alpha}_{0158}+{\alpha}_{1345},\\ 2{\alpha}_{0128}+{\alpha}_{1234}=3{\alpha}_{0358}+{\alpha}_{1458},\;\; {\alpha}_{0134}-{\alpha}_{0238}-2{\alpha}_{1248}+{\alpha}_{3458}=0,\\ -2{\alpha}_{0135}+{\alpha}_{0234}-{\alpha}_{0458}+3{\alpha}_{1258}=0,\;\; {\alpha}_{0124}+2{\alpha}_{0345}+{\alpha}_{1235}+{\alpha}_{2458}=0,\\ {\alpha}_{0128}=4{\alpha}_{0358}+{\alpha}_{1234}+{\alpha}_{1458},\;\; {\alpha}_{0135}+{\alpha}_{0234}+{\alpha}_{0458}-4{\alpha}_{1258}=0,\\ {\alpha}_{0248}+2{\alpha}_{0158}-{\alpha}_{1345}-2{\alpha}_{2358}=0,\;\; {\alpha}_{1248}=-3{\alpha}_{0238}-2 {\alpha}_{3458}. \end{gather*} Note that the equations ${\alpha}_{ijkl}=0$, $i<j<k<l$ define the Grassmannian $G(2,7)\subset {\mathbb P}(\mathfrak{so}(7))$ embedded by the Pl{\" u}cker embedding. Provided that one knows that the variety of Poisson structures $\mathcal P$ has two irreducible components of dimensions $10$ and $1$ (this fact is stated in \cite{Loray1}), one concludes immediately that the $10$-dimensional component is exactly this $G(2,7)$, because $dim(G(2,7))=10$. This is, of course, proved more generally in \cite{Loray2}.\\ {\bf Theorem 3 (Loray, Pereira, Touzet, \cite{Loray1}, \cite{Loray2}).} {\it Let $X$ be the (smooth) del Pezzo quintic threefold. Then the variety of Poisson structures $\mathcal P\subset {\mathbb P}(H^0(X,{\wedge}^2T_X))$ on $X$ is the disjoint union of the Grassmannian $G(2,7)\subset {\mathbb P}(H^0(X,{\wedge}^2T_X))$ (embedded by the Pl{\" u}cker embedding) and a smooth conic in ${\mathbb P}(H^0(X,{\wedge}^2T_X))$. The plane spanned by the conic does not intersect the Grassmannian.}\\ {\it Proof:} Let us take the plane $\Pi\subset {\mathbb P}(H^0(X,{\wedge}^2T_X))\cong {\mathbb P}(\mathfrak{so}(7))$ defined by the following linear equations: \begin{gather*} a_{02}=a_{05}=a_{08}=a_{13}=a_{15}=a_{18}=a_{24}=a_{25}=a_{34}=a_{38}=a_{45}=a_{48}=0,\\ a_{01}=\frac{5}{2}a_{23},\;\; a_{58}=\frac{9}{2}a_{23},\;\; a_{12}=\frac{5}{3}a_{35},\;\; a_{04}=5a_{35},\;\; a_{03}=\frac{5}{3}a_{28},\;\; a_{14}=-5a_{28}. 
\end{gather*} Then the intersection $\Pi \cap \mathcal P$ is the conic given by the equation $(a_{23})^2=\frac{8}{9}a_{28}a_{35}$ in this plane.\\ Since ${\alpha}_{2358}=a_{23}a_{58}+a_{28}a_{35}=\frac{45}{8}(a_{23})^2$, ${\alpha}_{0345}=-a_{04}a_{35}=-5(a_{35})^2$ and ${\alpha}_{0134}=-a_{03}a_{14}=\frac{25}{3}(a_{28})^2$ never simultaneously vanish on the conic, we conclude that the plane $\Pi$ spanned by the conic does not intersect the Grassmannian $G(2,7)\subset {\mathbb P}(\mathfrak{so}(7))$. {\it QED}\\ \subsection{Poisson cohomology.} Since $H^i(X,{\wedge}^jT_X)=0$ for any $i\geq 1$ for any $j$ by Lemma 4, it follows from \cite{Stienon} (see Lemma 3.3 in \cite{HongXu}) that Poisson cohomology of $X$ with respect to $\omega\in \mathcal P\subset {\mathbb P}(H^0(X,{\wedge}^2T_X))$ is the cohomology of the following complex: $$ 0 \rightarrow H^0(X,{\mathcal O}_X) \xrightarrow{d_{\omega}=0} H^0(X,T_X) \xrightarrow{d_{\omega}={\alpha}_{\omega}} H^0(X,{\wedge}^2T_X) \xrightarrow{d_{\omega}={\beta}_{\omega}} H^0(X,{\mathcal O}_X(2)) \rightarrow 0. $$ This implies the following Theorem.\\ {\bf Theorem 4.} {\it Let $X$ be the (smooth) del Pezzo quintic threefold and $\omega\in \mathcal P$ a point on the variety of Poisson structures on $X$. Then \begin{itemize} \item $H^0_{Poisson}(X,\omega)\cong H^0(X,{\mathcal O}_X)=k$, \item $H^1_{Poisson}(X,\omega)\cong ker({\alpha}_{\omega})$, \item $H^2_{Poisson}(X,\omega)\cong ker({\beta}_{\omega})/ im({\alpha}_{\omega})$, \item $H^3_{Poisson}(X,\omega)\cong H^0(X,{\mathcal O}_X(2))/ im({\beta}_{\omega})$. \end{itemize}} In particular, using matrices $A_{\omega}$ and $B_{\omega}$ computed in Lemma 8 and Lemma 9 we have: \begin{itemize} \item $dim(H^0_{Poisson}(X,\omega))=1$, \item $dim(H^1_{Poisson}(X,\omega))=3-rk(A_{\omega})$, \item $dim(H^2_{Poisson}(X,\omega))=dim(ker({\beta}_{\omega}))-rk({\alpha}_{\omega})=21-rk(A_{\omega})-rk(B_{\omega})$, \item $dim(H^3_{Poisson}(X,\omega))=23-rk(B_{\omega})$. \end{itemize} \section{Acknowledgement.} This project started after we read the paper \cite{HongXu}. We thank Jorge Pereira for comments. \bibliographystyle{ams-plain}
\section{Introduction} \label{intro} The thermodynamics of the confined phase of QCD is commonly modeled with the hadron resonance gas (HRG)~\cite{BraunMunzinger:2003zd,tawfik1,tawfik2,reseqsf,andronic,kapusta,kapusta1,gorenstein}. The equation of state for strongly interacting matter at finite temperature is well described by this model, formulated with a discrete mass spectrum of the experimentally confirmed particles and resonances. This finding was verified by recent results of lattice QCD (LQCD)~\cite{Bazavov:2012jq, Borsanyi:2013bia,missing,Borsanyi:2011sw}. However, LQCD also reveals that, when considering fluctuations and correlations of conserved charges, there are clear limitations in the HRG description~\cite{missing}. This is particularly evident in the strange sector, where the second-order baryon-strangeness correlations $\chi_{\rm BS}$ and the strangeness fluctuations $\chi_{\rm SS}$ are larger in LQCD than in the HRG model~\cite{missing,Bazavov:2012jq}. Such deviations were attributed to the missing resonances in the Particle Data Group (PDG) database~\cite{missing}. Different extensions of the HRG model have been proposed to quantify the LQCD equation of state. They account for a possible repulsive interaction among constituents and/or a continuously growing exponential mass spectrum~\cite{andronic,gorenstein,Majumder:2010ik,kapusta}. The latter was first introduced by Hagedorn~\cite{Hagedorn:1965st} within the statistical bootstrap model (SBM)~\cite{Hagedorn:1971mc,Frautschi:1971ij,raf1}, and was then studied in dual string and bag models~\cite{Huang:1970iq,Cudell:1992bi,Johnson:1975sg}. For large masses, the Hagedorn spectrum $\rho(m)$ is parametrized as \mbox{$\rho(m)\simeq m^a e^{m/T_H}$}, where $T_H$ is the Hagedorn limiting temperature and $a$ is a model parameter. The main objective of this paper is to analyze LQCD data on fluctuations and correlations of conserved charges within the HRG model. In particular, we examine whether the missing resonances contained in the asymptotic Hagedorn mass spectrum are sufficient to quantify LQCD results. We focus on the susceptibilities $\chi_{\rm BS}$ and $\chi_{\rm SS}$, where LQCD indicates the largest deviations from HRG, in spite of their agreement on the equation of state in the hadronic phase. To calculate fluctuations of conserved charges within HRG, one needs to identify the hadron mass spectrum for different quantum numbers. For a continuous mass spectrum $\rho(m)$, this issue was addressed in Refs.~\cite{Broniowski:2000bj} and~\cite{Broniowski:2004yh}, where the parameters of $\rho(m)$ in different hadronic sectors were extracted by fitting the spectra to the established hadronic states in the PDG database~\cite{pdg}. It was shown in Ref.~\cite{Broniowski:2004yh} that the Hagedorn temperatures for mesons $T_H^M$ and baryons $T_H^B$ are different, with $T_H^M>T_H^B$. The $T_H^B\simeq 140 \, {\rm MeV}$ found in~\cite{Broniowski:2000bj} is clearly below the LQCD crossover temperature $T_c = 155(1)(8) \, {\rm MeV}$ from the hadronic to the quark-gluon plasma phase~\cite{tcb,tcw,tc}. This, however, is inconsistent with LQCD, as it implies a large fluctuation of the net-baryon number deep in the hadronic phase, which is not observed in lattice simulations. In this study we have reanalyzed the Hagedorn mass spectrum in different sectors of quantum number, in the context of the PDG data, and have shown that there is a common Hagedorn temperature for mesons and baryons in different strange sectors.
We have applied our newly calculated $\rho(m)$ in the HRG model to explore different thermodynamic observables; in particular, fluctuations of conserved charges. The results are compared with LQCD for the strangeness and net-baryon number fluctuations, as well as for the baryon-strangeness correlations. We show that HRG, adopting a continuous mass spectrum with its parameters fitted to the PDG data, can partially account for the missing resonances needed to quantify LQCD results. To fully identify the missing resonance states, we motivate a matching of LQCD and HRG to extract a continuous mass spectrum $\rho(m)$. In the strange-baryonic sector, this $\rho(m)$ is shown to be consistent with all known and expected states listed by the PDG. However, the mass spectrum for strange mesons requires some additional resonances in the intermediate mass range beyond those listed in the PDG compilation. The paper is organized as follows: In Sec.~\ref{HRG}, we introduce the HRG thermodynamics with a discrete mass spectrum. In Sec.~\ref{HRG_LQCD}, we discuss HRG model comparisons with LQCD. In Sec.~\ref{Hagedorn_LQCD}, we extract the continuous $\rho(m)$ in different sectors of quantum number and discuss fluctuations of conserved charges in conjunction with LQCD findings. Finally, Sec.~\ref{conclusions} is devoted to the summary and conclusions. \section{Equation of state of hadronic matter} \label{HRG} To formulate a phenomenological model of hadronic matter at finite temperature and density, one needs to identify the relevant degrees of freedom and their interactions. In the confined phase of QCD the medium is composed of hadrons and resonances. The HRG model, in its simplest form, treats the medium constituents as point-like and independent~\cite{BraunMunzinger:2003zd}. Thus, in such a model setup, the interactions of hadrons and the resulting widths of resonances are neglected. Hence, the composition of the medium and its properties emerge through a discrete mass spectrum, \begin{eqnarray} \label{DEF:rho_pdg}\label{eq2} \rho^{\rm{HRG}}(m) = \sum_{i} d_i \delta\left(m - m_i\right) \textrm{,} \end{eqnarray} where $d_i = (2J_i+1)$ is the spin degeneracy factor of a particle $i$ with mass $m_i$ and the sum is taken over all stable particles and resonances. The mass spectrum in Eq.~(\ref{eq2}) can be identified experimentally or can be calculated within LQCD. In both cases our knowledge is far from complete. LQCD can determine the masses of hadronic ground states and low-lying excited states with fairly high precision~\cite{spectrum}. However, the higher excited states are still not well controlled in lattice calculations. \begin{figure*}[htp!] \centering\subfigure[]{\includegraphics[width=1\columnwidth]{hadrons_fit_pdg.pdf} \label{fig:cumulant_a}} \centering\subfigure[]{\includegraphics[width=1\columnwidth]{barmes_fit_pdg.pdf} \label{fig:cumulant_b}} \centering\subfigure[]{\includegraphics[width=1\columnwidth]{mesons_fit_pdg.pdf} \label{fig:cumulant_c}} \centering\subfigure[]{\includegraphics[width=1\columnwidth]{baryons_fit_pdg.pdf} \label{fig:cumulant_d}} \caption{(Color online) Cumulants of the PDG mass spectrum in different sectors of quantum number: (a) all hadrons; (b) mesons and baryons; (c) mesons of different strangeness; (d) baryons of different strangeness.
The lines are obtained from the fit of Eqs.~(\ref{eq10}) and~(\ref{eq13}) to the PDG data with the parameters listed in Table~\ref{tab:1} (see text).} \label{fig:cumulant}\label{fig1} \end{figure*} The spectrum of experimentally established hadrons, summarized by the PDG~\cite{pdg}, accounts for all identified particles and resonances, i.e., confirmed mesons and baryons granted a three- or four-star status, of masses up to $m_M \simeq 2.4 \, {\rm GeV}$ and $m_B \simeq 2.6 \, {\rm GeV}$, respectively. The investigation of higher excited states remains a significant challenge for the experiments due to the complicated decay properties and large widths of the resonances. Instead of the hadron mass spectrum~(\ref{DEF:rho_pdg}), the medium composition can be characterized by the cumulant~\cite{Broniowski:2000bj} \begin{eqnarray} \label{DEF:cumulant_pdg}\label{eq3} N^{\rm{HRG}}(m) = \sum_{i} d_i \theta\left(m - m_i\right) \textrm{, } \end{eqnarray} such that \begin{eqnarray}\label{eq4} \rho^{\rm{HRG}} = {{\partial N^{\rm{HRG}}}\over { \partial m}}. \end{eqnarray} Thus, $N^{\rm{HRG}}(m)$ counts the number of degrees of freedom with masses below $m$. Since the spectrum~(\ref{DEF:rho_pdg}) is additive in different particle species, it can be decomposed into a sum of contributions from mesons and baryons, as well as into a sum over particles with definite strangeness. Figure~\ref{fig:cumulant} shows the cumulants in different sectors of hadronic quantum number with inputs from the PDG. The cumulant of all hadrons is seen in Fig.~\ref{fig:cumulant_a} to rapidly increase with mass. For $m\leq 2 \, {\rm GeV}$ this increase is almost linear on a logarithmic scale, indicating that the hadron mass spectrum is exponential, as predicted by Hagedorn in the context of the SBM~\cite{Hagedorn:1965st,Hagedorn:1971mc}. A rapid increase in the number of states is also seen, in Figs.~\ref{fig:cumulant_b} and~\ref{fig:cumulant_c}, to appear separately for the mesonic and baryonic sectors, as well as for the strange and non-strange mesons with $m<2 \, {\rm GeV}$. Baryons of different strangeness, as illustrated in Fig.~\ref{fig:cumulant_d}, follow a similar trend with the exception of the $|S|= 3$ baryons, which consist only of the $\Omega$ hyperons. For an uncorrelated gas of particles (and antiparticles) with a mass spectrum $\rho(m)$, the thermodynamic pressure $\hat{P}=P/T^4$ is obtained as \begin{align} \hat{P}(T,V,\vec{\mu}) =\pm & \int \mathrm{d} m \, \rho(m) \; \int \, \frac{ \mathrm{d}\hat{p}}{2\pi^2} \; \hat{p}^2 & \nonumber \\ &\times\left[\ln (1\pm \lambda \, e^{-\hat{\epsilon}})+ \ln (1\pm \lambda^{-1} e^{-\hat{\epsilon}}) \right]\textrm{,} \label{eq5} \end{align} where $\hat{p} = p/T$, $\hat{m} = m/T$, $\hat{\epsilon}=\sqrt{\hat{p}^2+\hat{ m}^2}$, and the $(\pm)$ sign refers to fermions and bosons, respectively. For a particle of mass $m$, carrying baryon number $B$, strangeness $S$ and electric charge $Q$, the fugacity $\lambda$ reads \begin{align} \lambda(T,\vec\mu )= \exp \left({{B\hat{\mu}_B+S\hat{\mu}_S+Q\hat{\mu}_Q} }\right)\textrm{,} \label{eq6} \end{align} where $\hat{\mu} = \mu/T$. Note that, for scalar particles with vacuum quantum numbers, the antiparticle term should be dropped to avoid double counting.
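For orientation, the momentum integral in Eq.~(\ref{eq5}) is easily evaluated numerically. The following Python sketch is purely illustrative and not part of our analysis; the chosen species, degeneracy and temperature are arbitrary, and the quantum statistics of Eq.~(\ref{eq5}) is kept exactly:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def partial_pressure(m, T, d=1, fermion=False, mu=0.0):
    # P/T^4 of a single species: mass m and temperature T in GeV,
    # degeneracy d; particle and antiparticle terms of Eq. (5) included.
    s = 1.0 if fermion else -1.0
    lam, mhat = np.exp(mu/T), m/T
    def integrand(phat):
        eps = np.sqrt(phat**2 + mhat**2)
        return phat**2*(np.log(1.0 + s*lam*np.exp(-eps))
                        + np.log(1.0 + s*np.exp(-eps)/lam))/(2.0*np.pi**2)
    val, _ = quad(integrand, 0.0, np.inf)
    return d*s*val

# Example: nucleons (d = 4 for p, n with spin) at T = 150 MeV and mu = 0.
print(partial_pressure(m=0.939, T=0.150, d=4, fermion=True))
\end{verbatim}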
Expanding the logarithm and performing the momentum integration in Eq.~(\ref{eq5}) with the discrete mass spectrum $\rho^{\rm HRG}$ in Eq.~(\ref{eq2}), one obtains \begin{align} \hat{P}=\sum_i{{d_i}\over{2\pi^2}} & \sum_{k=1}^\infty {{(\pm 1)^{k+1}}\over {k^2}} \, \hat{ m}_i^2 \, K_2(k\hat{m}_i) \, \lambda_i^k \, , \label{eq7} \end{align} \noindent where the first sum over $i$ includes the contributions of all known hadrons and antihadrons, $\lambda_i$ is the fugacity~(\ref{eq6}) of species $i$, and $K_2$ is the modified Bessel function. The upper and lower signs are for bosons and fermions, respectively. The Boltzmann approximation corresponds to retaining only the first term in the summation over $k$. The thermodynamic pressure in Eq.~\eqref{eq5}, through the mass spectrum $\rho$, contains all the relevant information about the distribution of mass and quantum numbers of the medium. Thus, it allows for the study of different thermodynamic observables, including fluctuations of conserved charges. Furthermore, one can turn this argument around and explore the implications of the thermodynamic observables for the composition of the medium. This approach has been applied, for example, in pure gauge theory to extract an effective glueball mass and spectrum based on the lattice results for the pressure and the trace anomaly \cite{Meyer:2009tq, Caselle:2015tza}. For the case of QCD, the constraint imposed by the trace anomaly on the hadronic spectrum has been investigated \cite{Arriola:2014bfa, Arriola:2015gra}. In this work, we focus on the impact of recent LQCD data on the fluctuations of conserved charges. \section{Hadron Resonance Gas and LQCD} \label{HRG_LQCD} LQCD provides a theoretical framework to calculate the equation of state and bulk properties of strongly interacting matter at finite temperature. The first comparison of the equation of state calculated on the lattice with that derived from Eq.~(\ref{eq7}) has shown that the thermodynamics of hadronic matter is well approximated by the HRG with the mass spectrum generated on the corresponding lattice~\cite{tawfik1,tawfik2}. Presently, we have a rather detailed picture of the thermodynamics of hadronic matter, thanks to the advancement of LQCD simulations with physical quark masses and to the progress in extrapolating observables to the continuum limit~\cite{Borsanyi:2013bia,eqsf1,eqsf2}. Thus, a direct comparison of the equation of state from Eq.~(\ref{eq5}) and LQCD can be performed with the physical mass spectrum~\cite{reseqsf,Bazavov:2012jq,ratti}. \begin{figure*}[htp!] \centering\subfigure[]{\centering\includegraphics[width=1\columnwidth]{hagedorn_pressure.pdf} \label{pressure}} \centering\subfigure[]{\includegraphics[width=1\columnwidth]{hagedorn_bb.pdf} \label{bb}} \caption{(Color online) Lattice QCD results of the HotQCD Collaboration~\cite{eqsf1,Bazavov:2012jq} and the Budapest-Wuppertal Collaboration~\cite{Borsanyi:2013bia,Borsanyi:2011sw} for different observables in dimensionless units: (a) the thermodynamic pressure; (b) the net-baryon number fluctuations $\hat{\chi}_{\rm BB}$. Also shown are the HRG results for the discrete PDG mass spectrum (dashed line) and for the effective mass spectrum in Eq.~(\ref{eq12}), which contains a continuous part to describe the effects of massive resonances (continuous line).} \label{pressure_bb} \end{figure*} In Fig.~\ref{pressure} we show the temperature dependence of the thermodynamic pressure obtained recently in lattice simulations with physical quark masses~\cite{eqsf1,Borsanyi:2013bia}. The bands of the LQCD result indicate the systematic errors due to the continuum extrapolation.
The vertical band marks the temperature $T_c = 155(1)(8) \, {\rm MeV}$, which is the chiral crossover temperature from the hadronic phase to the quark-gluon plasma~\cite{tc}. These LQCD results are compared in Fig.~\ref{pressure} with predictions of the HRG model for the mass spectrum~(\ref{DEF:rho_pdg}), which includes all known hadrons and resonances listed by the PDG~\cite{pdg}. The HRG and LQCD results for the equation of state are in clear agreement at low temperatures. The pressure increases strongly with temperature towards the chiral crossover. This behavior is well understood within HRG as a consequence of the growing contributions from the rapidly increasing number of higher resonances. Although HRG formulated with a discrete mass spectrum does not exhibit any critical behavior, it nevertheless reproduces remarkably well the lattice results in the hadronic phase. This agreement has now been extended to the fluctuations and correlations of conserved charges~\cite{Bazavov:2012jq,Borsanyi:2011sw,ejiri1,ejiri2}. In a thermal medium, the second-order fluctuations and correlations of conserved charges are quantified by the generalized susceptibilities \begin{eqnarray} \label{sus1}\label{eq8} \hat{ \chi}_{xy} = \frac{\partial^{2}\hat P}{\partial \hat \mu_x\partial \hat \mu_y} \textrm{,} \end{eqnarray} where $(x,y)$ are conserved charges, which in the following are restricted to the baryon number $B$ and strangeness $S$. For HRG with a discrete mass spectrum of Boltzmann particles, $\hat{\chi}_{xy}$ is obtained from Eq.~(\ref{eq7}) as \begin{eqnarray} \label{eq9} \hat{ \chi}^{\rm HRG}_{xy}\Big|_{\hat \mu_x = \hat\mu_y=0} = \frac{1}{\pi^2}\sum_{i} d_i {\hat{m}_i^2}K_2\left({\hat{m}_i}\right)x_iy_i \textrm{.} \end{eqnarray} The susceptibilities~(\ref{sus1}) and in particular~(\ref{eq9}) are observables sensitive to the quantum numbers of medium constituents. Thus, $\hat\chi_{xy}$ can be used to identify contributions of different particle species to QCD thermodynamics~\cite{ejiri1,ejiri2}. \begin{figure*}[htp!] \centering\subfigure[]{\centering\includegraphics[width=1\columnwidth]{hagedorn_bs.pdf} \label{bs}} \centering\subfigure[]{\centering\includegraphics[width=1\columnwidth]{hagedorn_ss.pdf} \label{ss}} \caption{ (Color online) As in Fig.~\ref{pressure_bb}, but for baryon-strangeness correlations $\hat{\chi}_{\rm BS}$ (a) and for strangeness fluctuations $\hat{\chi}_{\rm SS}$ (b). Also shown are the corresponding results obtained from the least-squares fits to lattice data up to $T\simeq 156 \, {\rm MeV}$.} \label{bs_ss} \end{figure*} Recent LQCD calculations of the HotQCD Collaboration~\cite{Bazavov:2012jq} and Budapest-Wuppertal Collaboration~\cite{Borsanyi:2013bia, Borsanyi:2011sw} provide results on different fluctuations and correlations of conserved charges. Thus, the apparent agreement of HRG and LQCD, seen on the level of the equation of state, can be further tested within different hadronic sectors~\cite{Bazavov:2012jq}. In Figs.~\ref{bb} and~\ref{bs_ss} we show the LQCD results for the fluctuations of the net-baryon number and strangeness, as well as for the baryon-strangeness correlations. They are compared to the HRG model, formulated with the PDG mass spectrum. From Fig.~\ref{bb}, it is clear that the net-baryon number fluctuations in the hadronic phase are well described by HRG, whereas the strangeness fluctuations $\hat\chi_{\rm SS}$ in Fig.~\ref{ss} and the baryon-strangeness correlations $\hat\chi_{\rm BS}$ in Fig.~\ref{bs} are underestimated in the low-temperature phase.
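The structure of Eq.~(\ref{eq9}) can be made explicit with a short numerical sketch (schematic only: a handful of ground-state hadrons with spin-isospin degeneracies is listed here, whereas the HRG curves in the figures sum over the full PDG spectrum):
\begin{verbatim}
import numpy as np
from scipy.special import kn

# (mass in GeV, spin*isospin degeneracy, B, S); antiparticles added below.
states = [(0.939, 4, 1,  0),   # N
          (1.116, 2, 1, -1),   # Lambda
          (1.193, 6, 1, -1),   # Sigma
          (1.318, 4, 1, -2),   # Xi
          (1.672, 4, 1, -3),   # Omega
          (0.494, 2, 0,  1)]   # K

def chi(x, y, T):
    # chi_xy/T^2 of Eq. (9); x, y select the charge B or S of each state.
    total = 0.0
    for m, d, B, S in states:
        q = {'B': B, 'S': S}
        mhat = m/T
        # factor 2: the antiparticle carries (-B, -S) and contributes equally
        total += 2.0*d*mhat**2*kn(2, mhat)*q[x]*q[y]/np.pi**2
    return total

T = 0.150  # GeV
print(chi('B', 'B', T), chi('B', 'S', T), chi('S', 'S', T))
\end{verbatim}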
Following an analysis of the relations between different susceptibilities of conserved charges, it was argued in Ref.~\cite{Bazavov:2012jq} that deviations seen in Fig.~\ref{bs} can be attributed to the missing resonances in the strange-baryonic sector. In view of Fig.~\ref{ss}, similar conclusion can be drawn for the strange mesons. In general, the contributions of heavy resonances in HRG are suppressed due to the Boltzmann factor. However, the relative importance of these states depends on the observable. In the hadronic phase, the pressure is dominated by the low-lying particles. At temperature $T=150 \, {\rm MeV}$, the contribution to the pressure from particles and resonances with mass $M>1.5 \, {\rm GeV}$ is of the order of 7$\%$. However, in the fluctuations of the net-baryon number and baryon-strangeness correlations, such a contribution is already significant and amounts to 26$\%$ and 33$\%$, respectively. Contributions from missing heavy states could be the potential origin of the observed differences between LQCD results and HRG predictions on fluctuations and correlations of conserved charges in the strange sector, shown in Figs.~\ref{bs} and~\ref{ss}. \section{Hagedorn mass spectrum and LQCD fluctuations} \label{Hagedorn_LQCD} \subsection{The Hagedorn mass spectrum} To account for the unknown resonance states at large masses we adopt the continuous Hagedorn mass spectrum with the parametrization \begin{eqnarray} \label{DEF:rho_hagedorn}\label{eq10} \rho^H(m) = \frac{a_0\;}{\left(m^2+m_0^2\right)^{5/4}} e^{m/T_H}\textrm{,} \end{eqnarray} and its corresponding cumulant \begin{eqnarray} \label{DEF:cumulant_hag}\label{eq11} N^H(m) = \int\limits_0^m \mathrm{d} m' \; \rho^H(m') \textrm{,} \end{eqnarray} where $T_H$ is the Hagedorn limiting temperature, whereas $a_0$ and $m_0$ are additional free parameters. In general, the parameters of $\rho(m)$ can be calculated within a model, e.g., in SBM~\cite{Hagedorn:1971mc,Frautschi:1971ij}. In the following, we constrain the Hagedorn temperature and the weight parameters $(a_0,m_0)$ in Eq.~(\ref{eq10}) based on the mass spectrum of the PDG and the lattice data. In addition, we assume that the same exponential functional form holds separately for hadrons in different sectors of quantum number, i.e., for mesons and baryons with or without strangeness. \begin{table*}[htp!] 
\begin{tabular}{|l||c|c||c|c|} \hline & \multicolumn{2}{|c||}{Fit to PDG} & \multicolumn{2}{|c|}{Fit to LQCD} \\ \hline & $m_0$ [${\rm GeV}$] & $a_0(m_0)$ [${\rm GeV}^{3/2}$] & $m_0$ [{\rm GeV}] & $a_0(m_0)$ [${\rm GeV}^{3/2}$] \\ \hline\hline $\rho_H$ & 0.529(22) & 0.744(40) & 0.425(24) & 0.573(36) \\ \hline $\rho_{B}$ & 0.145(23) & 0.135(7) & 0.078(13) & 0.108(9) \\ \hline $\rho_{B}^{S=0}$ & 0.053(8) & 0.064(12) & & \\ \hline $\rho_{B}^{S=-1}$ & 0.051(12) & 0.046(6) & 0.193(96)(122) & 0.067(27) \\ \hline $\rho_{B}^{S=-2}$ & 1.453(441) & 0.023(20) & 2.469(456)(297) & 0.091(47) \\ \hline $\rho_{B}^{S=-3}$ & 0.00194(0) & 0.00027(0) & & \\ \hline $\rho_{M}$ & 0.244(17) & 0.341(19) & & \\ \hline $\rho_{M}^{S=0}$ & 0.183(19) & 0.212(17) & & \\ \hline $\rho_{M}^{S=-1}$ & 0.183(43) & 0.060(9) & 0.378(32)(95) & 0.099(24) \\ \hline \end{tabular}\qquad\qquad\qquad \begin{tabular}{|l||c|c|} \hline & \multicolumn{2}{|c|}{Constraint} \\ \hline & $m_x$ [${\rm GeV}$] & $N^{\textrm{HRG}}(m_x)$ \\ \hline\hline $\rho_H$ & 0.77526 & 18 \\ \hline $\rho_{B}$ & 1.2320 & 28 \\ \hline $\rho_{B}^{S=0}$ & 1.2320 & 40 \\ \hline $\rho_{B}^{S=-1}$ & 1.3828 & 20 \\ \hline $\rho_{B}^{S=-2}$ & 1.31486 & 2 \\ \hline $\rho_{B}^{S=-3}$ & 1.67245 & 4 \\ \hline $\rho_{M}$ & 0.77526 & 18 \\ \hline $\rho_{M}^{S=0}$ & 0.77526 & 14 \\ \hline $\rho_{M}^{S=-1}$ & 0.89166 & 5 \\ \hline \end{tabular} \caption{ (Left) Parameters of the Hagedorn mass spectra in Eqs.~(\ref{eq10}) and (\ref{eq12}), in different sectors, obtained from fits to PDG and LQCD data. The Hagedorn temperature has been set to $T_H = 180 \, {\rm MeV}$. Sectors of all hadrons, all mesons, and nonstrange mesons include both the particles' and antiparticles' contributions. In matching the LQCD results, the data for pressure and second-order fluctuations are compared with the HRG model through Eqs.~(\ref{eq5}) and (\ref{DEF:HAG_fluct}). Also shown are the errors of $m_0$ arising from the least-square fits, which induce the uncertainties in $a_0(m_0)$ through Eq.~(\ref{DEF:constraint2}). For the sectors $\rho_{B}^{S=-1}$, $\rho_{B}^{S=-2}$, and $\rho_{M}^{S=-1}$, the systematic errors in the approximation schemes are also included (see text). (Right) The constraint on the continuous mass spectrum in each sector, given in Eq.~(\ref{DEF:constraint}). } \label{tab:1} \end{table*} The analysis of the experimental hadron spectrum, in the context of Hagedorn exponential form, has been extensively discussed in the literature~\cite{Hagedorn:1971mc,raf1,raf2,Majumder:2010ik}. In one of the recent studies~\cite{Broniowski:2000bj,Broniowski:2004yh}, it was shown that, in fitting the Hagedorn spectrum to experimental data, one arrives at different limiting temperatures for mesons, baryons, and hadrons with different electric charges. In particular, with $\rho(m)$ from Eq.~(\ref{eq10}), the limiting temperature for mesons, $T_H^M\simeq 195 \, {\rm MeV}$, was extracted to be considerably larger than that for baryons, $T_H^B\simeq 140 \, {\rm MeV}$. Such Hagedorn limiting temperatures, however, are inconsistent with recent lattice results, which show that the changes from the hadronic to the quarks and gluons degrees of freedom in different sectors appear in the same narrow temperature range of the chiral crossover. Thus, the Hagedorn temperature of baryons should appear beyond the chiral crossover, i.e., $T_H^B>155 \, {\rm MeV}$. In addition, the LQCD data on $\hat\chi_{\rm BB}$ are consistent with the discrete PDG baryon mass spectrum up to $T\simeq 160 \, {\rm MeV}$. 
This seems to suggest that large contributions form heavy resonances are not expected in $\hat\chi_{\rm BB}$ at $T_H^B<155 \, {\rm MeV}$. From the above one concludes that it is very unlikely for the Hagedorn limiting temperatures in various hadronic sectors to differ substantially. Moreover, they are expected to be larger than the chiral crossover temperature. Consequently, the extracted Hagedorn temperature $T_H^B\simeq 140 \, {\rm MeV}$ for baryons in Refs.~\cite{Broniowski:2000bj,Broniowski:2004yh}, though mathematically correct, is disfavored by LQCD. The reason for the very different Hagedorn temperatures for mesons and baryons is that the extraction of the parameters in Eq.~(\ref{eq10}) has been performed over the whole mass range of the PDG data. The low-lying baryons drives the fit towards a lower $T_H$, resulting in the deviation of Hagedorn temperatures among different sectors. To avoid the above problem, we adopt Hagedorn's idea to treat the contributions of ground state particles\footnote{Particles that do not decay under the strong interaction. In this context, there are no ground states in the $|S|=2$ and $|S|=3$ baryonic sectors.} separately from the exponential mass spectrum. In addition, we start the continuous part of the spectrum from the onset of the first resonance in the corresponding sector. Therefore, we apply the following mass spectrum \begin{eqnarray} \label{DEF:rho_mix}\label{eq12} \rho(m) = \sum_{i} d_i\delta(m - m_i)+ \rho^H(m) \theta(m - m_x) \textrm{,} \end{eqnarray} and the corresponding cumulant \small \begin{eqnarray} \label{DEF:cumulant_mix}\label{eq13} N(m) = \sum_{i} d_i\theta(m - m_i)+ \theta(m - m_x)\int\limits_{m_x}^m\mathrm{d} m\;\rho^H(m) \textrm{,} \end{eqnarray} \normalsize \noindent where $\rho^H(m)$ is given by Eq.~(\ref{eq10}). The index $i$ counts the hadronic ground states, i.e., states with masses less than $m_x$ of the first resonance in the corresponding channel. With such a prescription for analyzing the hadronic spectrum, we find no practical advantage in treating the continuous $\rho^H(m)$ as a two-parameter function of ($m_0$, $a_0$). We therefore impose the following constraint on the continuous mass spectrum \begin{eqnarray} \label{DEF:constraint} N^{\textrm{HRG}}(m_x) = \int\limits_0^{m_x}\mathrm{d} m\; \rho^H(m). \end{eqnarray} \noindent This way, $\rho^H$ is reduced to a function of a single parameter $m_0$. The parameter $a_0$ can be determined by \begin{eqnarray} \label{DEF:constraint2} a_0(m_0) = N^{\textrm{HRG}}(m_x) \left[\int\limits_0^{m_x}\mathrm{d} m\; \frac{e^{m/T_H}}{{\left(m^2+m_0^2\right)^{5/4}}}\right]^{-1} \textrm{.} \end{eqnarray} The above spectrum can now be compared with the experimental data listed by the PDG, in different sectors of quantum number. From the analysis of the mass spectrum parameters of all hadrons, we find that the best description is obtained with $T_H\simeq 180 \, {\rm MeV}$. This value is consistent with that recently found in Ref.~\cite{Majumder:2010ik}. In addition, $T_H\simeq 180 \, {\rm MeV}$ is the largest temperature obtained as the solution of the Bootstrap equation~\cite{satz}. In the following, we apply the same $T_H$ for strange and nonstrange hadrons. In Fig.~\ref{fig:cumulant} we show that the spectra of PDG hadrons in different sectors are indeed properly described by the asymptotic mass spectrum~(\ref{eq12}) with a common Hagedorn temperature $T_H\simeq 180 \, {\rm MeV}$. 
The weight parameters $(a_0,m_0)$ in Eq.~(\ref{eq10}) are determined by the composition and decay properties of the resonances, hence, they are distinct for each hadronic quantum number. The optimal sets of parameters of $\rho(m)$ in Eq.~(\ref{eq10}) are summarized in Table~\ref{tab:1}. The corresponding mass spectra are shown in Fig.~\ref{fig:cumulant} as continuous lines, whereas circles indicate the lowest masses $m_x$ of the corresponding fit. Also shown, as broken lines in Fig.~\ref{fig:cumulant}, are the extrapolated cumulants below $m_x$. \begin{figure*}[htp!] \centering\subfigure[]{\includegraphics[width=1\columnwidth]{baryons_fit_lat.pdf} \label{figb}} \centering\subfigure[]{\includegraphics[width=1\columnwidth]{mesons_fit_lat.pdf} \label{figm}} \caption{(Color online) Cumulants of the discrete PDG mass spectrum (black dashed line) and the corresponding fits (red dashed line) for (a) strange baryons and (b) strange mesons. Also shown are the cumulants containing in addition the unconfirmed states (broken-dashed line). Continuous lines are obtained by matching the LQCD results to the continuous mass spectra through Eq.~(\ref{DEF:HAG_fluct}), assuming that the missing strange baryons come solely from the $|S|=1$ sector $\left[ \textrm{scheme (I)}\right]$ or $|S|=2$ sector $\left[ \textrm{scheme (II)}\right]$ (see text).} \label{fig:cumulant_fit} \end{figure*} It is important for the decomposition of the hadron mass spectrum (\ref{eq12}) into different sectors, using parameters from Table~\ref{tab:1}, to produce results that are thermodynamically consistent. Thus, e.g., the total pressure $\hat P^H$ obtained from Eq.~(\ref{eq5}) with the mass spectrum from Fig.~\ref{fig:cumulant_a} should be consistent with the sum of meson $\hat P_M$ and baryon $\hat P_B$ pressures, calculated with the mass spectra in Fig.~\ref{fig:cumulant_b}. Similar results should hold for the pressure when adding up the contributions from strange particles in different sectors. This consistency check provides further constraints on the mass spectrum parameters presented in Table~\ref{tab:1}. With the PDG mass spectrum extrapolated to the continuum, we can now test whether heavy resonances can reduce or eliminate the discrepancies between HRG and LQCD on baryon-strangeness correlations and strangeness fluctuations, seen in Figs.~\ref{bs} and~\ref{ss}. The second order cumulants $\hat\chi_{xy}$, at vanishing chemical potential, are obtained in HRG as \small \begin{subequations} \label{DEF:HAG_fluct} \begin{align} \hat \chi^{H}_{\rm BB} &= \int\limits_0^\infty \frac{\mathrm{d} m}{\pi^2} \; \rho_{{B}}(m) \hat m^2K_2\left(\hat m\right) \textrm{,}\\ \hat \chi^{H}_{\rm SS} &= \int\limits_0^\infty \frac{\mathrm{d} m}{\pi^2} \;\left[\rho_{{M}}^{S=-1}(m) + \sum_{k=1}^{3}k^2\rho_{{B}}^{S=-k}(m)\right]\hat m^2K_2\left(\hat m\right)\textrm{,}\\ \hat \chi^{H}_{\rm BS} &= - \int\limits_0^\infty \frac{\mathrm{d} m}{\pi^2} \;\left[\sum_{k=1}^{3}k\rho_{{B}}^{S=-k}(m)\right]\hat m^2K_2\left(\hat m\right)\textrm{, } \end{align} \end{subequations} \normalsize using the mass spectrum $\rho(m)$ in Eq.~(\ref{eq12}) and the parameters presented in Table~\ref{tab:1}. In Figs.~\ref{pressure_bb} and~\ref{bs_ss}, we show the contribution of the continuous Hagedorn mass spectrum to the pressure and different charge susceptibilities. The difference between the full line (fit to PDG) and the dashed line (PDG) comes from the inclusion of heavy resonances. 
The results in Figs.~\ref{bs} and~\ref{ss} indicate that heavy resonances can capture, to a large extent, the differences between HRG and LQCD for strangeness and baryon-strangeness correlations. However, at low temperatures, $\hat \chi_{\rm BS}$ still differs from the lattice. These deviations suggest that there are additional missing resonances in the PDG data in the mass range $m<2 \, {\rm GeV}$, as they begin to contribute substantially to $\hat\chi_{\rm BS}$ and $\hat\chi_{\rm SS}$ at lower temperatures. \subsection{Spectra of strange hadrons from LQCD} To identify the missing strange resonances in the Hagedorn mass spectrum, we use the LQCD susceptibility data as input for Eq.~(\ref{DEF:HAG_fluct}) to constrain $\rho(m)$ in different sectors. We begin with the strange-baryonic sector. The $\hat\chi_{\rm BS}$ data alone do not allow for a unique determination of the contribution from a particular sector. This is because the observable depends only on a linear combination of the spectra, namely $\rho_B^S = \rho_B^{S=-1} + 2\rho_B^{S=-2} + 3\rho_B^{S=-3}$. In principle, this problem could be resolved with additional lattice data on higher-order strangeness fluctuation, e.g., $\hat\chi_{{\rm BBSS}}$ and the kurtosis. For our purpose of analyzing the present data, we instead make the following simplification. The $|S|=3$ sector is restricted to those states listed by the PDG. We then make the assumption that the additional strange baryons come solely from the $|S|=1$ $\left[ \textrm{hereafter named scheme (I)}\right]$ or $|S|=2$ sector $\left[ \textrm{scheme (II)}\right]$, and treat the remaining one with the spectrum fitted to the PDG.\footnote{The errors induced by the use of the PDG-fitted spectra are introduced as systematic errors for the spectrum parameters. See Table~\ref{tab:1}.} The resulting spectrum parameters for both schemes are presented in Table~\ref{tab:1}. In Fig.~\ref{figb} we show the cumulants of the lattice-induced $\rho(m)$ under both schemes, together with the experimental spectra including the unconfirmed states from the PDG. The mass spectrum extracted with scheme $({\rm I})$ is seen in Fig.~\ref{figb} to follow the trend of the unconfirmed states of the PDG. This is not the case for scheme $({\rm II})$. Hence, the extra PDG data for the unconfirmed hyperons support the former scenario. The $\hat \chi_{\rm SS}$ fluctuation in general receives contributions from both the strange mesons and the strange baryons. However, due to Boltzmann suppression, we expect the observable to be dominated by the mesonic contribution. This can be inferred from the fact that $\hat\chi_{\rm SS} \gg \hat \chi_{\rm BS}$. To be definite, we fix the strange baryon contribution to $\hat \chi_{\rm SS}$ by the scenario dictated by scheme $({\rm I})$. The lattice data on $\hat\chi_{\rm SS}$ then directly determines the strange-mesonic spectrum. The resulting parameter is presented in Table~\ref{tab:1}, with the corresponding cumulant shown in Fig.~\ref{figm}. Similar to the case of strange baryons, the spectrum determined from LQCD requires additional states beyond the known strange mesons. From Fig.~\ref{figm}, we find that it exceeds even the trend set by the inclusion of the unconfirmed resonances. This may point to the existence of some uncharted strange mesons in the intermediate mass range. 
In addition, the general conclusion of an enhanced \mbox{lattice-motivated} strange spectra, relative to the PDG, does not depend on the chosen functional form of the continuous spectrum \eqref{DEF:rho_hagedorn}. This follows from the observation that lattice data show a stronger interaction strength in the strange sector than that expected from a free gas of known hadrons. Within the framework of HRG this implies an increase in the corresponding particle content. Nevertheless, it is important to bear in mind that such conclusion, based on an ideal resonance-formation treatment of the hadron gas, is not definitive. For example, the contribution to $\hat \chi_{\rm SS}$ from the non-strange sector is also possible through the vacuum fluctuation of $s \bar{s}$ mesons. Such an effect is neglected in the current model and a theoretical investigation is under way. \section{Summary and conclusions} \label{conclusions} Modeling the hadronic phase of QCD by the hadron resonance gas (HRG), we have examined the contribution of heavy resonances, through the exponential Hagedorn mass spectrum \mbox{$\rho(m)\simeq m^a e^{m/T_H}$}, to the fluctuation of conserved charges. A quantitative comparison between model predictions and lattice QCD (LQCD) calculations is made, with a special focus on strangeness fluctuations and baryon-strangeness correlations. We have reanalyzed the mass spectrum of all known hadrons and resonances listed in the Particle Data Group (PDG) database. A common Hagedorn temperature, $T_H\simeq 180 \, {\rm MeV}$, is employed to describe hadron mass spectra in different sectors of quantum number. This value of $T_H$ exceeds the LQCD chiral crossover temperature $T_c = 155(1)(8) \, {\rm MeV}$. The latter signifies the conversion of the hadronic medium into a quark-gluon plasma. Applying the continuum-extended mass spectrum calculated from the PDG data, we have shown that the Hagedorn asymptotic states can partly remove the disparities with lattice results in the strange sector. To fully identify the missing hadronic states, we perform a matching of LQCD data on strangeness fluctuations and baryon-strangeness correlations with HRG. The parameters of the Hagedorn mass spectrum $\rho(m)$ are well constrained by LQCD data in different sectors of strange quantum number, using the same limiting temperature $T_H\simeq 180 \, {\rm MeV} $. The mass spectra for strange baryons inferred from the existing LQCD data are shown to be consistent with the trend of the unconfirmed resonances in the PDG. This is not the case for the strange-mesonic sector, where the corresponding $\rho(m)$ exceeds the current data of the PDG, even after the unconfirmed states are included. This may point to the existence of some uncharted strange mesons in the intermediate mass range. Clearly, new data and further lattice studies are needed to clarify these issues. Moreover, such missing resonances could be important for modeling particle production yields in heavy ion collisions. It would be interesting to assess the effects of resonance width on the Hagedorn spectrum. Recent studies suggest that the implementation of low-lying broad resonances in thermal models must be handled with care~\cite{Friman:2015zua, Broniowski}. The impact on the global spectrum and consequently the thermodynamics is currently under investigation. \\ \\ \acknowledgements We acknowledge fruitful discussions with Bengt Friman. P.~M.~L. and M.~M. are grateful to E. Ruiz Arriola, W. Broniowski and M. 
Panero for their helpful comments and to M.~A.~R.~Kaltenborn for the careful reading of the manuscript. K.~R. also acknowledges fruitful discussion with A. Andronic, S. Bass, P. Braun-Munzinger, F. Karsch, M. Nahrgang, J. Rafelski, H. Satz and J. Stachel and partial support of the U.S. Department of Energy under Grant No. DE-FG02-05ER41367. C.~S. acknowledges partial support of the Hessian LOEWE initiative through the Helmholtz International Center for FAIR (HIC for FAIR). This work was partly supported by the Polish National Science Center (NCN), under Maestro Grant DEC-2013/10/A/ST2/00106. \section{Introduction} \label{intro} The thermodynamics of the confined phase of QCD is commonly modeled with the hadron resonance gas (HRG)~\cite{BraunMunzinger:2003zd,tawfik1,tawfik2,reseqsf,andronic,kapusta,kapusta1,gorenstein}. The equation of state for strongly interacting matter at finite temperature is well described by this model, formulated with a discrete mass spectrum of the experimentally confirmed particles and resonances. This finding was verified by recent results of lattice QCD (LQCD)~\cite{Bazavov:2012jq, Borsanyi:2013bia,missing,Borsanyi:2011sw}. However, LQCD also reveals that, when considering fluctuations and correlations of conserved charges, there are clear limitations in the HRG description~\cite{missing}. This is particularly evident in the strange sector, where the second-order correlations with the net-baryon number $\chi_{\rm BS}$ or strangeness fluctuations $\chi_{\rm SS}$ are larger in LQCD than those in the HRG model~\cite{missing,Bazavov:2012jq}. Such deviations were attributed to the missing resonances in the Particle Data Group (PDG) database~\cite{missing}. Different extensions of the HRG model have been proposed to quantify the LQCD equation of state. They account for a possible repulsive interaction among constituents and/or a continuously growing exponential mass spectrum~\cite{andronic,gorenstein,Majumder:2010ik,kapusta}. The latter was first introduced by Hagedorn~\cite{Hagedorn:1965st} within the statistical bootstrap model (SBM)~\cite{Hagedorn:1971mc,Frautschi:1971ij,raf1}, and was then studied in dual string and bag models~\cite{Huang:1970iq,Cudell:1992bi,Johnson:1975sg}. For large masses, the Hagedorn spectrum $\rho(m)$ is parametrized as \mbox{$\rho(m)\simeq m^a e^{m/T_H}$}, where $T_H$ is the Hagedorn limiting temperature and $a$ is a model parameter. The main objective of this paper is to analyze LQCD data on fluctuations and correlations of conserved charges within the HRG model. In particular, we examine whether the missing resonances contained in the asymptotic Hagedorn mass spectrum are sufficient to quantify LQCD results. We focus on the susceptibilities $\chi_{\rm BS}$ and $\chi_{\rm SS}$, where LQCD indicates the largest deviations from HRG, in spite of their agreement on the equation of state in the hadronic phase. To calculate fluctuations of conserved charges within HRG, one needs to identify the hadron mass spectrum for different quantum numbers. For a continuous mass spectrum $\rho(m)$, this issue was addressed in Refs.~\cite{Broniowski:2000bj} and~\cite{Broniowski:2004yh}, where the parameters of $\rho(m)$ in different hadronic sectors were extracted by fitting the spectra to the established hadronic states in the PDG database~\cite{pdg}. It was shown in Ref.~\cite{Broniowski:2004yh} that the Hagedorn temperatures for mesons $T_H^M$ and baryons $T_H^B$ are different, with $T_H^M>T_H^B$. 
The $T_H^B\simeq 140 \, {\rm MeV}$ found in~\cite{Broniowski:2000bj} is clearly below the LQCD crossover temperature $T_c = 155(1)(8) \, {\rm MeV}$ from the hadronic to the quark-gluon plasma phase~\cite{tcb,tcw,tc}. This, however, is inconsistent with LQCD, as it implies a large fluctuation of the net-baryon number deep in the hadronic phase, which is not observed in lattice simulations. In this study we have reanalyzed the Hagedorn mass spectrum in different sectors of quantum number, in the context of the PDG data, and have shown that there is a common Hagedorn temperature for mesons and baryons in different strange sectors. We have applied our newly calculated $\rho(m)$ in the HRG model to explore different thermodynamic observables; in particular, fluctuations of conserved charges. The results are compared with LQCD for the strangeness and net-baryon number fluctuations, and for baryon-strangeness correlations. We show that HRG, adopting a continuous mass spectrum with its parameters fitted to the PDG data, can partially account for the missing resonances needed to quantify LQCD results. To fully identify the missing resonance states, we motivate a matching of LQCD and HRG to extract a continuous mass spectrum $\rho(m)$. In the strange-baryonic sector, this $\rho(m)$ is shown to be consistent with all known and expected states listed by the PDG. However, the mass spectrum for strange mesons requires some additional resonances in the intermediate mass range beyond those listed in the PDG compilation. The paper is organized as follows: In Sec.~\ref{HRG}, we introduce the HRG thermodynamics with a discrete mass spectrum. In Sec.~\ref{HRG_LQCD}, we discuss HRG model comparisons with LQCD. In Sec.~\ref{Hagedorn_LQCD}, we extract the continuous $\rho(m)$ in different sectors of quantum number and discuss fluctuations of conserved charges in conjunction with LQCD findings. Finally, Sec.~\ref{conclusions} is devoted to the summary and conclusions. \section{Equation of state of hadronic matter} \label{HRG} To formulate a phenomenological model of hadronic matter at finite temperature and density, one needs to identify the relevant degrees of freedom and their interactions. In the confined phase of QCD the medium is composed of hadrons and resonances. The HRG model, in its simplest form, treats the medium constituents as point-like and independent~\cite{BraunMunzinger:2003zd}. Thus, in such a model setup, the interactions of hadrons and the resulting widths of resonances are neglected. Hence, the composition of the medium and its properties emerge through a discrete mass spectrum, \begin{eqnarray} \label{DEF:rho_pdg}\label{eq2} \rho^{\rm{HRG}}(m) = \sum_{i} d_i \delta\left(m - m_i\right) \textrm{,} \end{eqnarray} where $d_i = (2J_i+1)$ is the spin degeneracy factor of a particle $i$ with mass $m_i$ and the sum is taken over all stable particles and resonances. The mass spectrum in Eq.~(\ref{eq2}) can be identified experimentally or can be calculated within LQCD. In both cases our knowledge is far from complete. LQCD can determine the masses of hadronic ground states and low-lying excited states with fairly high precision~\cite{spectrum}. However, the higher excited states are still not well controlled in lattice calculations. \begin{figure*}[htp!]
\centering\subfigure[]{\includegraphics[width=1\columnwidth]{hadrons_fit_pdg.pdf} \label{fig:cumulant_a}} \centering\subfigure[]{\includegraphics[width=1\columnwidth]{barmes_fit_pdg.pdf} \label{fig:cumulant_b}} \centering\subfigure[]{\includegraphics[width=1\columnwidth]{mesons_fit_pdg.pdf} \label{fig:cumulant_c}} \centering\subfigure[]{\includegraphics[width=1\columnwidth]{baryons_fit_pdg.pdf} \label{fig:cumulant_d}} \caption{(Color online) Cumulants of the PDG mass spectrum in different sectors of quantum number: (a) all hadrons; (b) mesons and baryons; (c) mesons of different strangeness; (d) baryons of different strangeness. The lines are obtained from the fit of Eqs.~(\ref{eq10}) and~(\ref{eq13}) to the PDG data with the parameters listed in Table~\ref{tab:1} (see text).} \label{fig:cumulant}\label{fig1} \end{figure*} The spectrum of experimentally established hadrons, summarized by the PDG~\cite{pdg}, accounts for all identified particles and resonances, i.e., confirmed mesons and baryons granted a three- or four-star status, of masses up to $m_M \simeq 2.4 \, {\rm GeV}$ and $m_B \simeq 2.6 \, {\rm GeV}$, respectively. The investigation of higher excited states remains a significant challenge for the experiments due to the complicated decay properties and large widths of the resonances. Instead of the hadron mass spectrum~(\ref{DEF:rho_pdg}), the medium composition can be characterized by the cumulant~\cite{Broniowski:2000bj} \begin{eqnarray} \label{DEF:cumulant_pdg}\label{eq3} N^{\rm{HRG}}(m) = \sum_{i} d_i \theta\left(m - m_i\right) \textrm{, } \end{eqnarray} such that \begin{eqnarray}\label{eq4} \rho^{\rm{HRG}} = {{\partial N^{\rm{HRG}}}\over { \partial m}}. \end{eqnarray} Thus, $N^{\rm{HRG}}(m)$ counts the number of degrees of freedom with masses below $m$. Since the spectrum~(\ref{DEF:rho_pdg}) is additive in different particle species, it can be decomposed into a sum of contributions from mesons and baryons, as well as a sum over particles with definite strangeness. Figure~\ref{fig:cumulant} shows the cumulants in different sectors of hadronic quantum number with inputs from the PDG. The cumulant of all hadrons is seen in Fig.~\ref{fig:cumulant_a} to increase rapidly with mass. For $m\leq 2 \, {\rm GeV}$ the increase is almost linear, indicating that the hadron mass spectrum is exponential, as predicted by Hagedorn in the context of SBM~\cite{Hagedorn:1965st,Hagedorn:1971mc}. A rapid increase in the number of states is also seen, in Figs.~\ref{fig:cumulant_b} and~\ref{fig:cumulant_c}, to appear separately for the mesonic and baryonic sectors, as well as for the strange and non-strange mesons with $m<2 \, {\rm GeV}$. Baryons of different strangeness, as illustrated in Fig.~\ref{fig:cumulant_d}, follow a similar trend, with the exception of $|S|= 3$ baryons, which consist only of $\Omega$ hyperons. For an uncorrelated gas of particles (and antiparticles) with a mass spectrum $\rho(m)$, the thermodynamic pressure $\hat{P}=P/T^4$ is obtained as \begin{align} \hat{P}(T,V,\vec{\mu}) =\pm & \int \mathrm{d} m \, \rho(m) \; \int \, \frac{ \mathrm{d}\hat{p}}{2\pi^2} \; \hat{p}^2 & \nonumber \\ &\times\left[\ln (1\pm \lambda \, e^{-\hat{\epsilon}})+ \ln (1\pm \lambda^{-1} e^{-\hat{\epsilon}}) \right]\textrm{,} \label{eq5} \end{align} where $\hat{p} = p/T$, $\hat{m} = m/T$, $\hat{\epsilon}=\sqrt{\hat{p}^2+\hat{m}^2}$, and the $(\pm)$ sign refers to fermions and bosons, respectively.
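As an aside, the counting encoded in Eqs.~(\ref{eq3}) and (\ref{eq4}) is easily made concrete. The following minimal Python sketch (not part of the original analysis; the four states are an illustrative subset of the PDG table, with $d_i = 2J_i+1$) evaluates the cumulant for a toy spectrum:
\begin{verbatim}
# Cumulant N(m): number of degrees of freedom with mass below m.
# Toy state list (mass in GeV, spin J); d_i = 2*J_i + 1.
states = [(0.13957, 0.0),   # pion-like entry (illustrative)
          (0.49368, 0.0),   # kaon-like entry (illustrative)
          (0.77526, 1.0),   # rho-like entry (illustrative)
          (0.93827, 0.5)]   # nucleon-like entry (illustrative)

def cumulant(m):
    return sum(int(2 * J + 1) for mi, J in states if mi <= m)

for m in (0.5, 1.0, 1.5):
    print(m, cumulant(m))
\end{verbatim}
The staircase produced by such a counting is precisely what Fig.~\ref{fig:cumulant} displays for the full PDG input.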
For a particle of mass $m$, carrying baryon number $B$, strangeness $S$, and electric charge $Q$, the fugacity $\lambda$ reads \begin{align} \lambda(T,\vec\mu )= \exp \left({{B\hat{\mu}_B+S\hat{\mu}_S+Q\hat{\mu}_Q} }\right)\textrm{,} \label{eq6} \end{align} where $\hat{\mu} = \mu/T$. Note that, for scalar particles with vacuum quantum numbers, the antiparticle term should be dropped to avoid double counting. Expanding the logarithm and performing the momentum integration in Eq.~(\ref{eq5}) with the discrete mass spectrum $\rho^{\rm HRG}$ in Eq.~(\ref{eq2}), one obtains \begin{align} \hat{P}=\sum_i{{d_i}\over{2\pi^2}} & \sum_{k=1}^\infty {{(\pm 1)^{k+1}}\over {k^2}} \, \hat{ m}_i^2 \, K_2(k\hat{m}_i) \, \lambda^k \rm , \label{eq7} \end{align} \noindent where the first sum over $i$ includes the contributions of all known hadrons and antihadrons, and $K_2$ is the modified Bessel function. The upper and lower signs are for bosons and fermions, respectively. The Boltzmann approximation corresponds to retaining only the first term in the summation. The thermodynamic pressure in Eq.~\eqref{eq5}, through the mass spectrum $\rho$, contains all the relevant information about the distribution of mass and quantum numbers in the medium. Thus, it allows for the study of different thermodynamic observables, including fluctuations of conserved charges. Furthermore, one can turn this argument around and explore the implications of the thermodynamic observables for the medium composition. This approach has been applied, for example, in the pure gauge theory to extract an effective glueball mass and spectrum based on the lattice results on pressure and trace anomaly \cite{Meyer:2009tq, Caselle:2015tza}. For the case of QCD, the constraint imposed by the trace anomaly on the hadronic spectrum has been investigated \cite{Arriola:2014bfa, Arriola:2015gra}. In this work, we focus on the impact of recent LQCD data on the fluctuations of conserved charges. \section{Hadron Resonance Gas and LQCD} \label{HRG_LQCD} LQCD provides a theoretical framework to calculate the equation of state and bulk properties of strongly interacting matter at finite temperature. The first comparison of the equation of state calculated on the lattice with that derived from Eq.~(\ref{eq7}) has shown that the thermodynamics of hadronic matter is well approximated by the HRG with the mass spectrum generated on the corresponding lattice~\cite{tawfik1,tawfik2}. Presently, we have a rather detailed picture of the thermodynamics of hadronic matter, thanks to the advancement of LQCD simulations with physical quark masses and to the progress in extrapolating observables to the continuum limit~\cite{Borsanyi:2013bia,eqsf1,eqsf2}. Thus, a direct comparison of the equation of state from Eq.~(\ref{eq5}) and LQCD can be performed with the physical mass spectrum~\cite{reseqsf,Bazavov:2012jq,ratti}. \begin{figure*}[htp!] \centering\subfigure[]{\centering\includegraphics[width=1\columnwidth]{hagedorn_pressure.pdf} \label{pressure}} \centering \centering\subfigure[]{\includegraphics[width=1\columnwidth]{hagedorn_bb.pdf} \label{bb}} \caption{(Color online) Lattice QCD results of the HotQCD~\cite{eqsf1,Bazavov:2012jq} and Budapest-Wuppertal Collaborations~\cite{Borsanyi:2013bia,Borsanyi:2011sw} for different observables in dimensionless units: (a) the thermodynamic pressure; (b) the net-baryon number fluctuations $\hat{\chi}_{\rm BB}$.
Also shown are the HRG results for the discrete PDG mass spectrum (dashed line) and for the effective mass spectrum in Eq.~(\ref{eq12}), which contains a continuous part to describe the effects of massive resonances (continuous line).} \label{pressure_bb} \end{figure*} In Fig.~\ref{pressure} we show the temperature dependence of the thermodynamic pressure obtained recently in lattice simulations with physical quark masses~\cite{eqsf1,Borsanyi:2013bia}. The bands of the LQCD results indicate the systematic errors due to the continuum extrapolation. The vertical band marks the temperature $T_c = 155(1)(8) \, {\rm MeV}$, which is the chiral crossover temperature from the hadronic phase to the quark-gluon plasma~\cite{tc}. These LQCD results are compared in Fig.~\ref{pressure} with predictions of the HRG model for the mass spectrum~(\ref{DEF:rho_pdg}), which includes all known hadrons and resonances listed by the PDG~\cite{pdg}. The HRG and LQCD results for the equation of state clearly agree at low temperatures. The pressure increases strongly with temperature toward the chiral crossover. This behavior is well understood within HRG as a consequence of the growing contributions from the escalating number of higher resonances. Although HRG formulated with a discrete mass spectrum does not exhibit any critical behavior, it nevertheless reproduces remarkably well the lattice results in the hadronic phase. This agreement has now been extended to the fluctuations and correlations of conserved charges~\cite{Bazavov:2012jq,Borsanyi:2011sw,ejiri1,ejiri2}. In a thermal medium, the second-order fluctuations and correlations of conserved charges are quantified by the generalized susceptibilities \begin{eqnarray} \label{sus1}\label{eq8} \hat{ \chi}_{xy} = \frac{\partial^{2}\hat P}{\partial \hat \mu_x\partial \hat \mu_y} \textrm{,} \end{eqnarray} where $(x,y)$ are conserved charges, which in the following are restricted to the baryon number $B$ and strangeness $S$. For HRG with a discrete mass spectrum of Boltzmann particles, $\hat{\chi}_{xy}$ is obtained from Eq.~(\ref{eq7}) as \begin{eqnarray} \label{eq9} \hat{ \chi}^{\rm HRG}_{xy}\Big|_{\hat \mu_x = \hat\mu_y=0} = \frac{1}{\pi^2}\sum_{i} d_i {\hat{m}_i^2}K_2\left({\hat{m}_i}\right)x_iy_i \textrm{.} \end{eqnarray} The susceptibilities~(\ref{sus1}), and in particular~(\ref{eq9}), are observables sensitive to the quantum numbers of the medium constituents. Thus, $\hat\chi_{xy}$ can be used to identify contributions of different particle species to QCD thermodynamics~\cite{ejiri1,ejiri2}. \begin{figure*}[htp!] \centering\subfigure[]{\centering\includegraphics[width=1\columnwidth]{hagedorn_bs.pdf} \label{bs}} \centering\subfigure[]{\centering\includegraphics[width=1\columnwidth]{hagedorn_ss.pdf} \label{ss}} \caption{ (Color online) As in Fig.~\ref{pressure_bb}, but for baryon-strangeness correlations $\hat{\chi}_{\rm BS}$ (a) and for strangeness fluctuations $\hat{\chi}_{\rm SS}$ (b). Also shown are the corresponding results obtained from the least-squares fits to lattice data up to $T\simeq 156 \, {\rm MeV}$.} \label{bs_ss} \end{figure*} Recent LQCD calculations of the HotQCD Collaboration~\cite{Bazavov:2012jq} and Budapest-Wuppertal Collaboration~\cite{Borsanyi:2013bia, Borsanyi:2011sw} provide results on different fluctuations and correlations of conserved charges. Thus, the apparent agreement of HRG and LQCD, seen on the level of the equation of state, can be further tested within different hadronic sectors~\cite{Bazavov:2012jq}.
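To illustrate how Eq.~(\ref{eq9}) weighs the quantum numbers of the constituents, consider the following hedged Python sketch (the three states and their degeneracies are illustrative placeholders, not the full PDG input; particle and antiparticle contributions are folded into $d$, which leaves the products $x_i y_i$ unchanged):
\begin{verbatim}
import numpy as np
from scipy.special import kv   # modified Bessel function K_nu

# Toy states: (mass [GeV], degeneracy d, baryon number B,
# strangeness S); values are illustrative only.
states = [(0.49368, 2, 0, -1),   # kaon-like entry
          (0.93827, 4, 1,  0),   # nucleon-like entry
          (1.11568, 2, 1, -1)]   # Lambda-like entry

def chi(T, x, y):
    """Boltzmann HRG susceptibility at vanishing chemical potentials."""
    total = 0.0
    for m, d, B, S in states:
        q = {"B": B, "S": S}
        total += d * (m / T)**2 * kv(2, m / T) * q[x] * q[y]
    return total / np.pi**2

T = 0.150   # GeV
print("chi_BB =", chi(T, "B", "B"))
print("chi_BS =", chi(T, "B", "S"))
\end{verbatim}
Only states carrying both charges contribute to $\hat\chi_{\rm BS}$, which is why this correlator isolates the strange-baryonic sector.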
In Figs.~\ref{bb} and~\ref{bs_ss} we show the LQCD results on the fluctuations of the net-baryon number and strangeness, as well as the baryon-strangeness correlations. They are compared to the HRG model, formulated with the PDG mass spectrum. From Fig.~\ref{bb}, it is clear that the net-baryon number fluctuations in the hadronic phase are well described by HRG, whereas the strangeness fluctuations $\hat\chi_{\rm SS}$ in Fig.~\ref{ss} and the $\hat\chi_{\rm BS}$ correlations in Fig.~\ref{bs} are underestimated in the low-temperature phase. Following an analysis of the relations between different susceptibilities of conserved charges, it was argued in Ref.~\cite{Bazavov:2012jq} that deviations seen in Fig.~\ref{bs} can be attributed to the missing resonances in the strange-baryonic sector. In view of Fig.~\ref{ss}, a similar conclusion can be drawn for the strange mesons. In general, the contributions of heavy resonances in HRG are suppressed due to the Boltzmann factor. However, the relative importance of these states depends on the observable. In the hadronic phase, the pressure is dominated by the low-lying particles. At temperature $T=150 \, {\rm MeV}$, the contribution to the pressure from particles and resonances with mass $M>1.5 \, {\rm GeV}$ is of the order of 7$\%$. However, in the fluctuations of the net-baryon number and baryon-strangeness correlations, such a contribution is already significant and amounts to 26$\%$ and 33$\%$, respectively. Contributions from missing heavy states could be the potential origin of the observed differences between LQCD results and HRG predictions on fluctuations and correlations of conserved charges in the strange sector, shown in Figs.~\ref{bs} and~\ref{ss}. \section{Hagedorn mass spectrum and LQCD fluctuations} \label{Hagedorn_LQCD} \subsection{The Hagedorn mass spectrum} To account for the unknown resonance states at large masses, we adopt the continuous Hagedorn mass spectrum with the parametrization \begin{eqnarray} \label{DEF:rho_hagedorn}\label{eq10} \rho^H(m) = \frac{a_0\;}{\left(m^2+m_0^2\right)^{5/4}} e^{m/T_H}\textrm{,} \end{eqnarray} and its corresponding cumulant \begin{eqnarray} \label{DEF:cumulant_hag}\label{eq11} N^H(m) = \int\limits_0^m \mathrm{d} m' \; \rho^H(m') \textrm{,} \end{eqnarray} where $T_H$ is the Hagedorn limiting temperature, whereas $a_0$ and $m_0$ are additional free parameters. In general, the parameters of $\rho(m)$ can be calculated within a model, e.g., in SBM~\cite{Hagedorn:1971mc,Frautschi:1971ij}. In the following, we constrain the Hagedorn temperature and the weight parameters $(a_0,m_0)$ in Eq.~(\ref{eq10}) based on the mass spectrum of the PDG and the lattice data. In addition, we assume that the same exponential functional form holds separately for hadrons in different sectors of quantum number, i.e., for mesons and baryons with or without strangeness. \begin{table*}[htp!]
\begin{tabular}{|l||c|c||c|c|} \hline & \multicolumn{2}{|c||}{Fit to PDG} & \multicolumn{2}{|c|}{Fit to LQCD} \\ \hline & $m_0$ [${\rm GeV}$] & $a_0(m_0)$ [${\rm GeV}^{3/2}$] & $m_0$ [${\rm GeV}$] & $a_0(m_0)$ [${\rm GeV}^{3/2}$] \\ \hline\hline $\rho_H$ & 0.529(22) & 0.744(40) & 0.425(24) & 0.573(36) \\ \hline $\rho_{B}$ & 0.145(23) & 0.135(7) & 0.078(13) & 0.108(9) \\ \hline $\rho_{B}^{S=0}$ & 0.053(8) & 0.064(12) & & \\ \hline $\rho_{B}^{S=-1}$ & 0.051(12) & 0.046(6) & 0.193(96)(122) & 0.067(27) \\ \hline $\rho_{B}^{S=-2}$ & 1.453(441) & 0.023(20) & 2.469(456)(297) & 0.091(47) \\ \hline $\rho_{B}^{S=-3}$ & 0.00194(0) & 0.00027(0) & & \\ \hline $\rho_{M}$ & 0.244(17) & 0.341(19) & & \\ \hline $\rho_{M}^{S=0}$ & 0.183(19) & 0.212(17) & & \\ \hline $\rho_{M}^{S=-1}$ & 0.183(43) & 0.060(9) & 0.378(32)(95) & 0.099(24) \\ \hline \end{tabular}\qquad\qquad\qquad \begin{tabular}{|l||c|c|} \hline & \multicolumn{2}{|c|}{Constraint} \\ \hline & $m_x$ [${\rm GeV}$] & $N^{\textrm{HRG}}(m_x)$ \\ \hline\hline $\rho_H$ & 0.77526 & 18 \\ \hline $\rho_{B}$ & 1.2320 & 28 \\ \hline $\rho_{B}^{S=0}$ & 1.2320 & 40 \\ \hline $\rho_{B}^{S=-1}$ & 1.3828 & 20 \\ \hline $\rho_{B}^{S=-2}$ & 1.31486 & 2 \\ \hline $\rho_{B}^{S=-3}$ & 1.67245 & 4 \\ \hline $\rho_{M}$ & 0.77526 & 18 \\ \hline $\rho_{M}^{S=0}$ & 0.77526 & 14 \\ \hline $\rho_{M}^{S=-1}$ & 0.89166 & 5 \\ \hline \end{tabular} \caption{ (Left) Parameters of the Hagedorn mass spectra in Eqs.~(\ref{eq10}) and (\ref{eq12}), in different sectors, obtained from fits to PDG and LQCD data. The Hagedorn temperature has been set to $T_H = 180 \, {\rm MeV}$. Sectors of all hadrons, all mesons, and nonstrange mesons include both the particles' and antiparticles' contributions. In matching the LQCD results, the data for pressure and second-order fluctuations are compared with the HRG model through Eqs.~(\ref{eq5}) and (\ref{DEF:HAG_fluct}). Also shown are the errors of $m_0$ arising from the least-squares fits, which induce the uncertainties in $a_0(m_0)$ through Eq.~(\ref{DEF:constraint2}). For the sectors $\rho_{B}^{S=-1}$, $\rho_{B}^{S=-2}$, and $\rho_{M}^{S=-1}$, the systematic errors in the approximation schemes are also included (see text). (Right) The constraint on the continuous mass spectrum in each sector, given in Eq.~(\ref{DEF:constraint}). } \label{tab:1} \end{table*} The analysis of the experimental hadron spectrum, in the context of the Hagedorn exponential form, has been extensively discussed in the literature~\cite{Hagedorn:1971mc,raf1,raf2,Majumder:2010ik}. In one of the recent studies~\cite{Broniowski:2000bj,Broniowski:2004yh}, it was shown that, in fitting the Hagedorn spectrum to experimental data, one arrives at different limiting temperatures for mesons, baryons, and hadrons with different electric charges. In particular, with $\rho(m)$ from Eq.~(\ref{eq10}), the limiting temperature for mesons, $T_H^M\simeq 195 \, {\rm MeV}$, was extracted to be considerably larger than that for baryons, $T_H^B\simeq 140 \, {\rm MeV}$. Such Hagedorn limiting temperatures, however, are inconsistent with recent lattice results, which show that the changes from the hadronic to the quark and gluon degrees of freedom in different sectors appear in the same narrow temperature range of the chiral crossover. Thus, the Hagedorn temperature of baryons should lie beyond the chiral crossover, i.e., $T_H^B>155 \, {\rm MeV}$. In addition, the LQCD data on $\hat\chi_{\rm BB}$ are consistent with the discrete PDG baryon mass spectrum up to $T\simeq 160 \, {\rm MeV}$.
This suggests that the large contributions from heavy resonances implied by $T_H^B<155 \, {\rm MeV}$ are not present in $\hat\chi_{\rm BB}$. From the above, one concludes that it is very unlikely for the Hagedorn limiting temperatures in various hadronic sectors to differ substantially. Moreover, they are expected to be larger than the chiral crossover temperature. Consequently, the extracted Hagedorn temperature $T_H^B\simeq 140 \, {\rm MeV}$ for baryons in Refs.~\cite{Broniowski:2000bj,Broniowski:2004yh}, though mathematically correct, is disfavored by LQCD. The reason for the very different Hagedorn temperatures for mesons and baryons is that the extraction of the parameters in Eq.~(\ref{eq10}) has been performed over the whole mass range of the PDG data. The low-lying baryons drive the fit toward a lower $T_H$, resulting in different Hagedorn temperatures among the sectors. To avoid this problem, we adopt Hagedorn's idea to treat the contributions of ground-state particles\footnote{Particles that do not decay under the strong interaction. In this context, there are no ground states in the $|S|=2$ and $|S|=3$ baryonic sectors.} separately from the exponential mass spectrum. In addition, we start the continuous part of the spectrum from the onset of the first resonance in the corresponding sector. Therefore, we apply the following mass spectrum \begin{eqnarray} \label{DEF:rho_mix}\label{eq12} \rho(m) = \sum_{i} d_i\delta(m - m_i)+ \rho^H(m) \theta(m - m_x) \textrm{,} \end{eqnarray} and the corresponding cumulant \small \begin{eqnarray} \label{DEF:cumulant_mix}\label{eq13} N(m) = \sum_{i} d_i\theta(m - m_i)+ \theta(m - m_x)\int\limits_{m_x}^m\mathrm{d} m'\;\rho^H(m') \textrm{,} \end{eqnarray} \normalsize \noindent where $\rho^H(m)$ is given by Eq.~(\ref{eq10}). The index $i$ counts the hadronic ground states, i.e., states with masses below the mass $m_x$ of the first resonance in the corresponding channel. With such a prescription for analyzing the hadronic spectrum, we find no practical advantage in treating the continuous $\rho^H(m)$ as a two-parameter function of ($m_0$, $a_0$). We therefore impose the following constraint on the continuous mass spectrum \begin{eqnarray} \label{DEF:constraint} N^{\textrm{HRG}}(m_x) = \int\limits_0^{m_x}\mathrm{d} m\; \rho^H(m). \end{eqnarray} \noindent This way, $\rho^H$ is reduced to a function of a single parameter $m_0$. The parameter $a_0$ can be determined by \begin{eqnarray} \label{DEF:constraint2} a_0(m_0) = N^{\textrm{HRG}}(m_x) \left[\int\limits_0^{m_x}\mathrm{d} m\; \frac{e^{m/T_H}}{{\left(m^2+m_0^2\right)^{5/4}}}\right]^{-1} \textrm{.} \end{eqnarray} The above spectrum can now be compared with the experimental data listed by the PDG, in different sectors of quantum number. From the analysis of the mass spectrum parameters of all hadrons, we find that the best description is obtained with $T_H\simeq 180 \, {\rm MeV}$. This value is consistent with that recently found in Ref.~\cite{Majumder:2010ik}. In addition, $T_H\simeq 180 \, {\rm MeV}$ is the largest temperature obtained as the solution of the bootstrap equation~\cite{satz}. In the following, we apply the same $T_H$ for strange and nonstrange hadrons. In Fig.~\ref{fig:cumulant} we show that the spectra of PDG hadrons in different sectors are indeed properly described by the asymptotic mass spectrum~(\ref{eq12}) with a common Hagedorn temperature $T_H\simeq 180 \, {\rm MeV}$.
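For concreteness, Eqs.~(\ref{eq10}), (\ref{eq11}), and (\ref{DEF:constraint2}) can be evaluated numerically. The following Python sketch (our illustration, not part of the original analysis; it uses the all-hadron entries of Table~\ref{tab:1}, i.e., $T_H=0.180\,{\rm GeV}$, $m_x=0.77526\,{\rm GeV}$, and $N^{\rm HRG}(m_x)=18$) reproduces the tabulated normalization:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

T_H = 0.180                  # Hagedorn temperature [GeV]
m_x, N_mx = 0.77526, 18.0    # all-hadron constraint (Table 1)

def rho_H(m, a0, m0):
    """Continuous Hagedorn spectrum rho^H(m)."""
    return a0 * np.exp(m / T_H) / (m**2 + m0**2)**1.25

def a0_of_m0(m0):
    """Normalization fixed by N^HRG(m_x) = int_0^{m_x} rho^H dm."""
    I = quad(lambda m: np.exp(m / T_H) / (m**2 + m0**2)**1.25,
             0.0, m_x)[0]
    return N_mx / I

a0 = a0_of_m0(0.529)         # m0 from the fit to PDG (Table 1)
print("a0 =", a0)            # close to the tabulated 0.744 GeV^(3/2)
print("N_H(2 GeV) =", quad(rho_H, 0.0, 2.0, args=(a0, 0.529))[0])
\end{verbatim}
The single remaining parameter $m_0$ is then fixed by the least-squares fits described in the text.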
The weight parameters $(a_0,m_0)$ in Eq.~(\ref{eq10}) are determined by the composition and decay properties of the resonances; hence, they are distinct for each hadronic quantum number. The optimal sets of parameters of $\rho(m)$ in Eq.~(\ref{eq10}) are summarized in Table~\ref{tab:1}. The corresponding mass spectra are shown in Fig.~\ref{fig:cumulant} as continuous lines, whereas circles indicate the lowest masses $m_x$ of the corresponding fit. Also shown, as broken lines in Fig.~\ref{fig:cumulant}, are the extrapolated cumulants below $m_x$. \begin{figure*}[htp!] \centering\subfigure[]{\includegraphics[width=1\columnwidth]{baryons_fit_lat.pdf} \label{figb}} \centering\subfigure[]{\includegraphics[width=1\columnwidth]{mesons_fit_lat.pdf} \label{figm}} \caption{(Color online) Cumulants of the discrete PDG mass spectrum (black dashed line) and the corresponding fits (red dashed line) for (a) strange baryons and (b) strange mesons. Also shown are the cumulants containing, in addition, the unconfirmed states (broken-dashed line). Continuous lines are obtained by matching the LQCD results to the continuous mass spectra through Eq.~(\ref{DEF:HAG_fluct}), assuming that the missing strange baryons come solely from the $|S|=1$ sector $\left[ \textrm{scheme (I)}\right]$ or $|S|=2$ sector $\left[ \textrm{scheme (II)}\right]$ (see text).} \label{fig:cumulant_fit} \end{figure*} It is important for the decomposition of the hadron mass spectrum (\ref{eq12}) into different sectors, using parameters from Table~\ref{tab:1}, to produce results that are thermodynamically consistent. Thus, e.g., the total pressure $\hat P^H$ obtained from Eq.~(\ref{eq5}) with the mass spectrum from Fig.~\ref{fig:cumulant_a} should be consistent with the sum of meson $\hat P_M$ and baryon $\hat P_B$ pressures, calculated with the mass spectra in Fig.~\ref{fig:cumulant_b}. Similar results should hold for the pressure when adding up the contributions from strange particles in different sectors. This consistency check provides further constraints on the mass spectrum parameters presented in Table~\ref{tab:1}. With the PDG mass spectrum extrapolated to the continuum, we can now test whether heavy resonances can reduce or eliminate the discrepancies between HRG and LQCD on baryon-strangeness correlations and strangeness fluctuations, seen in Figs.~\ref{bs} and~\ref{ss}. The second-order cumulants $\hat\chi_{xy}$, at vanishing chemical potential, are obtained in HRG as \small \begin{subequations} \label{DEF:HAG_fluct} \begin{align} \hat \chi^{H}_{\rm BB} &= \int\limits_0^\infty \frac{\mathrm{d} m}{\pi^2} \; \rho_{{B}}(m) \hat m^2K_2\left(\hat m\right) \textrm{,}\\ \hat \chi^{H}_{\rm SS} &= \int\limits_0^\infty \frac{\mathrm{d} m}{\pi^2} \;\left[\rho_{{M}}^{S=-1}(m) + \sum_{k=1}^{3}k^2\rho_{{B}}^{S=-k}(m)\right]\hat m^2K_2\left(\hat m\right)\textrm{,}\\ \hat \chi^{H}_{\rm BS} &= - \int\limits_0^\infty \frac{\mathrm{d} m}{\pi^2} \;\left[\sum_{k=1}^{3}k\rho_{{B}}^{S=-k}(m)\right]\hat m^2K_2\left(\hat m\right)\textrm{, } \end{align} \end{subequations} \normalsize using the mass spectrum $\rho(m)$ in Eq.~(\ref{eq12}) and the parameters presented in Table~\ref{tab:1}. In Figs.~\ref{pressure_bb} and~\ref{bs_ss}, we show the contribution of the continuous Hagedorn mass spectrum to the pressure and different charge susceptibilities. The difference between the full line (fit to PDG) and the dashed line (PDG) comes from the inclusion of heavy resonances.
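For orientation, the integrals in Eq.~(\ref{DEF:HAG_fluct}) are straightforward to evaluate numerically. A hedged Python sketch for $\hat\chi^{H}_{\rm BB}$ follows (baryon-sector parameters from the fit to PDG in Table~\ref{tab:1}; the discrete ground-state list and its degeneracies are illustrative placeholders, with antibaryons folded into $d$):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

# Baryon sector, fit to PDG (Table 1), GeV units:
T_H, a0, m0, m_x = 0.180, 0.135, 0.145, 1.2320

# Illustrative discrete ground states below m_x:
# (mass, degeneracy with antibaryons folded in; toy values)
ground = [(0.93827, 8), (1.11568, 4), (1.18937, 12), (1.31486, 8)]

def rho_H(m):
    return a0 * np.exp(m / T_H) / (m**2 + m0**2)**1.25

def chi_BB(T, m_cut=8.0):
    """Discrete ground states plus the Hagedorn tail above m_x.
    The integrand decays like exp(m/T_H - m/T) for T < T_H, so a
    finite upper cutoff m_cut is sufficient."""
    disc = sum(d * (m / T)**2 * kv(2, m / T) for m, d in ground)
    cont = quad(lambda m: rho_H(m) * (m / T)**2 * kv(2, m / T),
                m_x, m_cut)[0]
    return (disc + cont) / np.pi**2

print(chi_BB(0.150))
\end{verbatim}
The strange-sector susceptibilities follow the same pattern, with the quantum-number weights $k$ and $k^2$ of Eq.~(\ref{DEF:HAG_fluct}) inserted.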
The results in Figs.~\ref{bs} and~\ref{ss} indicate that heavy resonances can capture, to a large extent, the differences between HRG and LQCD for strangeness fluctuations and baryon-strangeness correlations. However, at low temperatures, $\hat \chi_{\rm BS}$ still differs from the lattice. These deviations suggest that there are additional missing resonances in the PDG data in the mass range $m<2 \, {\rm GeV}$, as they begin to contribute substantially to $\hat\chi_{\rm BS}$ and $\hat\chi_{\rm SS}$ at lower temperatures. \subsection{Spectra of strange hadrons from LQCD} To identify the missing strange resonances in the Hagedorn mass spectrum, we use the LQCD susceptibility data as input for Eq.~(\ref{DEF:HAG_fluct}) to constrain $\rho(m)$ in different sectors. We begin with the strange-baryonic sector. The $\hat\chi_{\rm BS}$ data alone do not allow for a unique determination of the contribution from a particular sector. This is because the observable depends only on a linear combination of the spectra, namely $\rho_B^S = \rho_B^{S=-1} + 2\rho_B^{S=-2} + 3\rho_B^{S=-3}$. In principle, this problem could be resolved with additional lattice data on higher-order strangeness fluctuations, e.g., $\hat\chi_{{\rm BBSS}}$ and the kurtosis. For our purpose of analyzing the present data, we instead make the following simplification. The $|S|=3$ sector is restricted to those states listed by the PDG. We then make the assumption that the additional strange baryons come solely from the $|S|=1$ $\left[ \textrm{hereafter named scheme (I)}\right]$ or $|S|=2$ sector $\left[ \textrm{scheme (II)}\right]$, and treat the remaining one with the spectrum fitted to the PDG.\footnote{The errors induced by the use of the PDG-fitted spectra are introduced as systematic errors for the spectrum parameters. See Table~\ref{tab:1}.} The resulting spectrum parameters for both schemes are presented in Table~\ref{tab:1}. In Fig.~\ref{figb} we show the cumulants of the lattice-induced $\rho(m)$ under both schemes, together with the experimental spectra including the unconfirmed states from the PDG. The mass spectrum extracted with scheme $({\rm I})$ is seen in Fig.~\ref{figb} to follow the trend of the unconfirmed states of the PDG. This is not the case for scheme $({\rm II})$. Hence, the extra PDG data for the unconfirmed hyperons support the former scenario. The $\hat \chi_{\rm SS}$ fluctuation in general receives contributions from both the strange mesons and the strange baryons. However, due to Boltzmann suppression, we expect the observable to be dominated by the mesonic contribution. This can be inferred from the fact that $\hat\chi_{\rm SS} \gg |\hat \chi_{\rm BS}|$. To be definite, we fix the strange baryon contribution to $\hat \chi_{\rm SS}$ by the scenario dictated by scheme $({\rm I})$. The lattice data on $\hat\chi_{\rm SS}$ then directly determine the strange-mesonic spectrum. The resulting parameters are presented in Table~\ref{tab:1}, with the corresponding cumulant shown in Fig.~\ref{figm}. Similar to the case of strange baryons, the spectrum determined from LQCD requires additional states beyond the known strange mesons. From Fig.~\ref{figm}, we find that it exceeds even the trend set by the inclusion of the unconfirmed resonances. This may point to the existence of some uncharted strange mesons in the intermediate mass range.
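A side remark on the degeneracy noted above (our illustration, in the Boltzmann approximation of Eq.~(\ref{eq7}) and with the conventions of Eq.~(\ref{DEF:HAG_fluct})): each derivative with respect to $\hat\mu_B$ or $\hat\mu_S$ brings down one power of the corresponding charge, so second- and fourth-order baryon-strangeness cumulants probe different moments of the strange-baryon spectra,
\begin{eqnarray}
-\hat\chi_{\rm BS} &=& \int\limits_0^\infty \frac{\mathrm{d} m}{\pi^2}\left[\rho_{B}^{S=-1} + 2\rho_{B}^{S=-2} + 3\rho_{B}^{S=-3}\right]\hat m^2 K_2\left(\hat m\right) \textrm{,} \nonumber \\
\hat\chi_{\rm BBSS} &=& \int\limits_0^\infty \frac{\mathrm{d} m}{\pi^2}\left[\rho_{B}^{S=-1} + 4\rho_{B}^{S=-2} + 9\rho_{B}^{S=-3}\right]\hat m^2 K_2\left(\hat m\right) \textrm{.}
\end{eqnarray}
With the $|S|=3$ sector fixed by the PDG, lattice data on $\hat\chi_{\rm BBSS}$ would thus, in principle, separate the $|S|=1$ and $|S|=2$ contributions without recourse to schemes (I) and (II).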
In addition, the general conclusion of enhanced \mbox{lattice-motivated} strange spectra, relative to the PDG, does not depend on the chosen functional form of the continuous spectrum \eqref{DEF:rho_hagedorn}. This follows from the observation that lattice data show a stronger interaction strength in the strange sector than that expected from a free gas of known hadrons. Within the framework of HRG this implies an increase in the corresponding particle content. Nevertheless, it is important to bear in mind that such a conclusion, based on an ideal resonance-formation treatment of the hadron gas, is not definitive. For example, the contribution to $\hat \chi_{\rm SS}$ from the non-strange sector is also possible through the vacuum fluctuation of $s \bar{s}$ mesons. Such an effect is neglected in the current model and a theoretical investigation is under way. \section{Summary and conclusions} \label{conclusions} Modeling the hadronic phase of QCD by the hadron resonance gas (HRG), we have examined the contribution of heavy resonances, through the exponential Hagedorn mass spectrum \mbox{$\rho(m)\simeq m^a e^{m/T_H}$}, to the fluctuations of conserved charges. A quantitative comparison between model predictions and lattice QCD (LQCD) calculations is made, with a special focus on strangeness fluctuations and baryon-strangeness correlations. We have reanalyzed the mass spectrum of all known hadrons and resonances listed in the Particle Data Group (PDG) database. A common Hagedorn temperature, $T_H\simeq 180 \, {\rm MeV}$, is employed to describe the hadron mass spectra in different sectors of quantum number. This value of $T_H$ exceeds the LQCD chiral crossover temperature $T_c = 155(1)(8) \, {\rm MeV}$. The latter signifies the conversion of the hadronic medium into a quark-gluon plasma. Applying the continuum-extended mass spectrum calculated from the PDG data, we have shown that the Hagedorn asymptotic states can partly remove the disparities with lattice results in the strange sector. To fully identify the missing hadronic states, we perform a matching of LQCD data on strangeness fluctuations and baryon-strangeness correlations with HRG. The parameters of the Hagedorn mass spectrum $\rho(m)$ are well constrained by LQCD data in different sectors of strange quantum number, using the same limiting temperature $T_H\simeq 180 \, {\rm MeV}$. The mass spectra for strange baryons inferred from the existing LQCD data are shown to be consistent with the trend of the unconfirmed resonances in the PDG. This is not the case for the strange-mesonic sector, where the corresponding $\rho(m)$ exceeds the current data of the PDG, even after the unconfirmed states are included. This may point to the existence of some uncharted strange mesons in the intermediate mass range. Clearly, new data and further lattice studies are needed to clarify these issues. Moreover, such missing resonances could be important for modeling particle production yields in heavy ion collisions. It would be interesting to assess the effects of resonance widths on the Hagedorn spectrum. Recent studies suggest that the implementation of low-lying broad resonances in thermal models must be handled with care~\cite{Friman:2015zua, Broniowski}. The impact on the global spectrum and consequently on the thermodynamics is currently under investigation. \\ \\ \acknowledgements We acknowledge fruitful discussions with Bengt Friman. P.~M.~L. and M.~M. are grateful to E. Ruiz Arriola, W. Broniowski and M.
Panero for their helpful comments and to M.~A.~R.~Kaltenborn for the careful reading of the manuscript. K.~R. also acknowledges fruitful discussions with A. Andronic, S. Bass, P. Braun-Munzinger, F. Karsch, M. Nahrgang, J. Rafelski, H. Satz and J. Stachel, and partial support of the U.S. Department of Energy under Grant No. DE-FG02-05ER41367. C.~S. acknowledges partial support of the Hessian LOEWE initiative through the Helmholtz International Center for FAIR (HIC for FAIR). This work was partly supported by the Polish National Science Center (NCN), under Maestro Grant DEC-2013/10/A/ST2/00106.
\section{Introduction} Recently, filamentary structure in molecular clouds has attracted a great deal of attention in the context of star formation. Thanks to the high sensitivity of the Herschel satellite \citep{pilbratt2010} in the infrared (IR) and sub-mm ranges, \revII{Herschel} has found many filaments in the thermal dust emission from molecular clouds, including clouds inactive in star formation such as Polaris \citep{menshchikov2010,miville2010}, as well as active ones such as the Aquila cloud \citep{menshchikov2010}, IC 5146 \citep{arzoumanian2011}, Vela C \citep{hill2011}, and the Rosette cloud \citep{schneider2012}. This indicates that molecular clouds consist of gas filaments and that the star formation process should be studied in this context. Polarization observations of background stars beyond a molecular cloud give the geometry of the interstellar magnetic field inside and around the cloud. This is based on the fact that dust grains are aligned in the magnetic field, so that light obscured by the intervening aligned interstellar dust residing in the cloud shows a polarization such that the {\bf E}-vector of the polarization is parallel to the interstellar magnetic field. From near-IR (J, H, and Ks bands) imaging polarimetry of the Serpens South Cloud, \citet{sugitani2011} have found a well-ordered global magnetic field perpendicular to the main filament. They also found that small-scale filaments seem to run along the magnetic field. Even in the Taurus dark cloud, optical and near-IR polarimetry \citep{moneti1984} indicates that the global magnetic field seems perpendicular to the major axis of the clouds. The B211 and B213 filamentary clouds run in a direction perpendicular to the magnetic field, while many low-density striations seen outside the filament extend along the magnetic field \citep{palmeirim2013}. This geometry is sometimes believed to be an outcome of interstellar MHD (magnetohydrodynamic) turbulence \citep{li2006}; that is, a turbulent sheet or filament is formed perpendicular to the global magnetic field. There are a number of studies of filamentary gas clouds based on hydrostatic and magnetohydrostatic equilibria. When a filament is sufficiently long compared with its width, the cloud can be regarded as an infinitely long cylinder. \revII{Under the assumptions of axisymmetry and no magnetic field,} the density distribution of a cylindrical isothermal cloud with a central density $\rho_c$ is expressed analytically \citep{stodolkiewicz1963,ostriker1964} as \begin{equation} \rho(r)=\rho_c\left(1+\frac{r^2}{8H^2}\right)^{-2} \label{eqn:cyl-rho} \end{equation} where $H$ is a scale height expressed in terms of the isothermal sound speed $c_s$, the central density $\rho_c$, and the gravitational constant $G$ as $H=c_s/(4\pi G \rho_c)^{1/2}$. This leads to a mass distribution, defined as the mass contained inside radius $r$ per unit length, \begin{mathletters} \begin{eqnarray} \lambda(r) &=& \int_0^r 2\pi \rho r dr\\ &=&\frac{2c_s^2}{G}\frac{{r^2}/{8H^2}}{1+{r^2}/{8H^2}}. \label{eqn:cyl-lambda-dens} \end{eqnarray} \end{mathletters} \hspace*{-3mm} The solution is truncated at the radius $R$ where the pressure $p=\rho c_s^2$ balances the external ambient pressure $p_{\rm ext}$, that is, where $\rho \ge p_{\rm ext}/c_s^2$. Equation (\ref{eqn:cyl-lambda-dens}) shows that the cylindrical filament has a maximum line-mass (mass per unit length) $\lambda(R) \le \lambda(R=\infty)=2c_s^2/G$, which corresponds to the line-mass of a filament in a vacuum, $p_{\rm ext}=0$.
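For orientation, the following short sketch evaluates this profile and the maximum line-mass $2c_s^2/G$ numerically for a $T\sim 10\,{\rm K}$ filament with $c_s = 190\,{\rm m\,s^{-1}}$, the fiducial sound speed used later in the text; the central density chosen for the profile is illustrative only.

\begin{verbatim}
# Sketch: numerical evaluation of the Stodolkiewicz-Ostriker profile and of
# the maximum line-mass 2 c_s^2 / G for c_s = 190 m/s (SI units).
import numpy as np

G     = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30           # kg
pc    = 3.086e16           # m
c_s   = 190.0              # m/s, isothermal sound speed

lam_max = 2 * c_s**2 / G                      # kg/m
print(lam_max * pc / M_sun)                   # ~17 M_sun/pc

# density profile rho(r)/rho_c = [1 + r^2/(8 H^2)]^(-2)
rho_c = 1e-17                                 # kg/m^3 (illustrative)
H     = c_s / np.sqrt(4 * np.pi * G * rho_c)  # scale height
r     = np.linspace(0, 10 * H, 5)
print((1 + r**2 / (8 * H**2))**-2)            # rho/rho_c at sample radii
\end{verbatim}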
The character of the isothermal filament is controlled by the parameter $\lambda(R)/(2c_s^2/G)$ \citep{nagasawa1987,inutsuka1992,fischera2012}. Herschel observations of the Aquila and Polaris clouds show that the Aquila main cloud has a large line-mass, $\lambda \gtrsim 5 \times (2c_s^2/G)$, and is rich in protoclusters, but that portions with low line-mass, $\lambda \lesssim 2c_s^2/G$, are devoid of prestellar cores and protostars \citep{andre2010} (to derive the mass per unit length from the observed column density, the width of the filament is assumed constant in their paper, FWHM $\sim 14000\, {\rm AU}$ in Aquila and $\sim 9000\,{\rm AU}$ in Polaris). When the filaments have magnetic fields only parallel to their axes, $B_z$, the structure is also given analytically by equation (\ref{eqn:cyl-rho}), but in this case $H=c_s(1+\beta^{-1})^{1/2}/(4\pi G \rho_c)^{1/2}$, where the plasma beta is assumed to be constant, $\beta=c_s^2\rho/(B_z^2/8\pi)={\rm const}$ \citep{stodolkiewicz1963}. The mass distribution $\lambda(r)$ increases with the magnetic field strength as \begin{equation} \lambda(r) = \frac{2c_s^2}{G}\left(1+\beta^{-1}\right)\frac{{r^2}/{8H^2}}{1+{r^2}/{8H^2}}. \label{eqn:cyl-lambda-mag-dens} \end{equation} Comparing equations (\ref{eqn:cyl-lambda-dens}) and (\ref{eqn:cyl-lambda-mag-dens}), the line-mass increases in proportion to $1+\beta^{-1}$. The maximum line-mass supported against self-gravity (for $r \gg H$) also increases in proportion to $1+\beta^{-1}$. Similar solutions are obtained numerically for the case in which the mass-to-flux ratio is constant, $\Gamma_z\equiv B_z/\rho={\rm const}$ \citep{fiege2000a,fiege2000b}. In both cases, the poloidal magnetic field $B_z$ has the effect of increasing the mass of a static filament. \citet{fiege2000a,fiege2000b} also considered the effect of a toroidal magnetic field $B_\phi$, assuming a type of flux conservation, $\Gamma_\phi\equiv B_\phi/(r\rho)={\rm const}$. In this case, the toroidal magnetic field $B_\phi$ has the opposite effect of reducing the supported mass, because $B_\phi$ exerts a ``hoop stress'' that compresses the filament in the radial direction. \footnote{\revII{The radial Lorentz force coming from $B_\phi$ is proportional to the current in the $z$-direction, $\propto \dif{rB_\phi}{r}=\dif{r^2\rho\Gamma}{r}$. Thus, the direction of the force is determined by the density distribution, or $M\equiv \dif{\log\rho}{\log r}+2$. \citet{fiege2000a} have shown that $M>0$ for all their isothermal and logatropic models, which means this force works inwardly.}} However, the observed relation between the axis of a filament and the interstellar magnetic field is far from the simple configurations studied previously; that is, the actual filament is often perpendicular to the global magnetic field, rather than simply having poloidal and/or toroidal field components. In the present paper, we revisit the magnetohydrostatic configuration of isothermal filaments, paying attention to the polarization observations, which indicate that the global magnetic field is often perpendicular to the filament. An axisymmetric cloud threaded by a poloidal magnetic field has a similar maximum mass that depends on the magnetic flux.
\revII{From numerically obtained magnetohydrostatic configurations, it is shown that the maximum column density $\sigma_{\rm cr}$ depends on the magnetic flux density $B_0$ as $\sigma_{\rm cr}\sim 0.17 B_0/G^{1/2}$ (eq.~(4.8) of \citet{tomisaka1988b}).} This implies that the maximum mass $M_{\rm cr}$ is proportional to the magnetic flux $\phi_0$, $M_{\rm cr}\sim 0.17 \phi_0/G^{1/2}$. \revII{This maximum column density is nearly equal to the maximum stable column density against the gravitational instability of a magnetized plane-parallel sheet, $\sigma_{\rm cr} = B_0/(2\pi G^{1/2})$, obtained by \citet{nakano78}.} The filamentary structure may be in dynamical contraction \citep{inutsuka1992, kawachi1998} rather than in the hydrostatic state considered here. However, the condition for the onset of dynamical contraction is given by the hydrostatic maximum line-mass supported against self-gravity. The structure of this paper is as follows: in $\S$2, the model and the formulation for obtaining a magnetohydrostatic configuration are given. The method is a self-consistent field method similar to those of \citet{mouschovias1976a,mouschovias1976b} and \citet{tomisaka1988a,tomisaka1988b}, although these authors considered a disk-like cloud threaded perpendicularly by the magnetic field; the formulation for a filament with a lateral magnetic field is described in that section. We present the numerical results in $\S$ 3, where the structure of the filament is shown. A discussion of the effect of the magnetic field, such as how much mass is supported by the lateral magnetic field, is given in $\S$ 4. Section 5 is devoted to a summary. \section{Method and Model} \subsection{Magnetohydrostatic Equations} The basic equations used to obtain magnetohydrostatic configurations of isothermal gas are composed of three equations: the force balance among the Lorentz force, gravity, and the pressure force; the Poisson equation for the gravitational potential $\psi$; and Amp\`ere's law relating the current {\bf j} and the magnetic flux density {\bf B}: \begin{equation} \frac{1}{c}{\bf j}\times {\bf B} - \rho \nabla \psi - c_s^2 \nabla \rho =0, \label{eqn:force-balance} \end{equation} \begin{equation} \nabla^2 \psi = 4 \pi G \rho, \label{eqn:poisson-eq} \end{equation} \begin{equation} {\bf j}=\frac{c}{4\pi} \nabla \times {\bf B}, \label{eqn:ampere-law} \end{equation} where $\rho$, $c_s$, $c$, and $G$ represent, respectively, the gas density, the isothermal sound speed, the speed of light, and Newton's constant of gravity. We assume that the filament extends along the $z$-axis and is uniform in the $z$-direction in the Cartesian coordinates $(x, y, z)$. We use a flux function $\Phi(x,y)$ to calculate the magnetic flux density ${\bf B}$ as \begin{mathletters} \begin{eqnarray} B_x&=&-\difd{\Phi}{y},\\ B_y&=&\difd{\Phi}{x}. \end{eqnarray} \end{mathletters} Although we will call this flux function $\Phi$ the magnetic flux of a cylindrical cloud in the present paper, $\Phi$ has the dimension of magnetic flux density $B$ times size $L$, that is, $[\Phi]=[B][L]$, not that of an ordinary magnetic flux, $[B][L]^2$. Since $\dif{}{z}=0$, $\Phi$ is the $z$-component of the natural vector potential, $\Phi=A_z$.
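As a quick symbolic check of this representation, the sketch below verifies that the flux-function form of ${\bf B}$ is automatically divergence-free, and that the uniform-field boundary condition $\Phi=B_0 x$ used later indeed gives a field of strength $B_0$ running in the $y$-direction.

\begin{verbatim}
# Sketch: symbolic check that B = (-dPhi/dy, dPhi/dx) satisfies div B = 0,
# and that Phi = B0*x gives the uniform field (0, B0).
import sympy as sp

x, y, B0 = sp.symbols('x y B_0')
Phi = sp.Function('Phi')(x, y)

Bx = -sp.diff(Phi, y)
By =  sp.diff(Phi, x)
print(sp.simplify(sp.diff(Bx, x) + sp.diff(By, y)))      # -> 0  (div B = 0)

Phi_uniform = B0 * x                                     # far-field flux function
print(-sp.diff(Phi_uniform, y), sp.diff(Phi_uniform, x)) # -> 0, B_0
\end{verbatim}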
Assuming $\dif{}{z}=0$, from Amp\`ere's law (eq.[\ref{eqn:ampere-law}]) the electric current is given as \begin{mathletters} \begin{eqnarray} j_x&=&\frac{c}{4\pi}\difd{B_z}{y},\\ j_y&=&-\frac{c}{4\pi}\difd{B_z}{x},\\ j_z&=&\frac{c}{4\pi}\left(\difd{B_y}{x}-\difd{B_x}{y}\right)= -\frac{c}{4\pi}\Delta_2\Phi, \end{eqnarray} \end{mathletters} where $\Delta_2 \equiv \partial^2/\partial x^2+\partial^2/\partial y^2$. The $z$-component of equation (\ref{eqn:force-balance}) reduces to ${\bf j}\times {\bf B}|_z=0$ (the ``force-free'' condition) and thus \begin{equation} \left(\difd{B_z}{y}\right)\left(\difd{\Phi}{x}\right) -\left(\difd{B_z}{x}\right)\left(\difd{\Phi}{y}\right)=0. \end{equation} Since this equation can be rewritten as \begin{equation} \frac{\left(\dif{B_z}{y}\right)}{\left(\dif{\Phi}{y}\right)} =\frac{\left(\dif{B_z}{x}\right)}{\left(\dif{\Phi}{x}\right)}, \end{equation} it requires that $B_z$ depend only on the flux function, $B_z=B_z(\Phi)$. The $x$- and $y$-components of equation (\ref{eqn:force-balance}) reduce to \begin{mathletters} \begin{eqnarray} \frac{1}{4\pi}\Delta_2\Phi\difd{\Phi}{x} -\rho\difd{\psi}{x}-c_s^2\difd{\rho}{x}-\frac{1}{8\pi}\difd{B_z^2}{x}&=&0,\\ \frac{1}{4\pi}\Delta_2\Phi\difd{\Phi}{y} -\rho\difd{\psi}{y}-c_s^2\difd{\rho}{y}-\frac{1}{8\pi}\difd{B_z^2}{y}&=&0, \end{eqnarray} \end{mathletters} in which the last term on the left-hand side represents the magnetic pressure force. In this paper, we restrict ourselves to the $B_z=0$ model. In this case, the force balance simply reduces to the following equations: \begin{mathletters} \begin{eqnarray} \frac{1}{4\pi}\Delta_2\Phi\difd{\Phi}{x} &=&\rho\difd{\psi}{x}+c_s^2\difd{\rho}{x},\label{eqn:comp-force-balancea} \\ \frac{1}{4\pi}\Delta_2\Phi\difd{\Phi}{y} &=&\rho\difd{\psi}{y}+c_s^2\difd{\rho}{y}.\label{eqn:comp-force-balanceb} \end{eqnarray} \end{mathletters} Since the Lorentz force exerts no force in the direction of the magnetic field, the force balance along the magnetic field, i.e., in the direction of $(B_x,B_y)$, is expressed as \begin{equation} -\rho\difd{\psi}{s}-c_s^2\difd{\rho}{s}=0, \end{equation} where $s$ represents the distance measured along the magnetic field line. Integrating along the magnetic field line, the density is expressed as \begin{equation} \rho=\frac{q}{c_s^2}\exp{\left(-\frac{\psi}{c_s^2}\right)},\label{eqn:rho-s} \end{equation} where $q$ is an integration constant determined for each magnetic field line; thus $q(\Phi)$ is a function of $\Phi$. Using equation (\ref{eqn:rho-s}), the right-hand sides of equations (\ref{eqn:comp-force-balancea}) and (\ref{eqn:comp-force-balanceb}) become respectively $(\dif{q}{x})\exp{\left(-\psi/c_s^2\right)}$ and $(\dif{q}{y})\exp{\left(-\psi/c_s^2\right)}$. Considering that $q$ is a function of $\Phi$, equations (\ref{eqn:comp-force-balancea}) and (\ref{eqn:comp-force-balanceb}) require \begin{equation} \Delta_2\Phi = 4 \pi \Difd{q}{\Phi}\exp{\left(-\frac{\psi}{c_s^2}\right)}. \label{eqn:poisson-Phi} \end{equation} The other equation to be solved is the Poisson equation for the gravitational potential $\psi$ (eq.[\ref{eqn:poisson-eq}]), \begin{equation} \Delta_2\psi = 4 \pi G \frac{q(\Phi)}{c_s^2}\exp{\left(-\frac{\psi}{c_s^2}\right)}. \label{eqn:poisson-psi} \end{equation} Equations (\ref{eqn:poisson-Phi}) and (\ref{eqn:poisson-psi}) are the basic equations; they form a coupled elliptic system of partial differential equations for the two variables $\Phi$ and $\psi$ once $q(\Phi)$ is determined.
We search for solutions of $\psi$ and $\Phi$ that simultaneously satisfy equations (\ref{eqn:poisson-Phi}) and (\ref{eqn:poisson-psi}) by the self-consistent field method: we assume initial guesses for $\psi$ and $\Phi$ and let them converge to the true solutions. \subsection{Mass Loading} The function $q(\Phi)$ is calculated from the distribution of line-mass against the magnetic flux per unit length, which is sometimes called the mass loading. The line-mass $\Delta \lambda$ between two field lines specified by $\Phi$ and $\Phi+\Delta \Phi$ is calculated to first order in $\Delta\Phi$ as \begin{eqnarray} \Delta \lambda(\Phi) &=&2\int_0^{y_s(\Phi)}dy\int_{x(y,\Phi)}^{x(y,\Phi+\Delta \Phi)}dx \rho(x,y)\nonumber \\ &=&2\int_0^{y_s(\Phi)}dy\frac{\rho}{(\dif{\Phi}{x})}\Delta \Phi\nonumber \\ &=&2\int_0^{y_s(\Phi)}dy\frac{q(\Phi)}{c_s^2}\frac{\exp{(-\psi/c_s^2)}}{(\dif{\Phi}{x})}\Delta \Phi, \end{eqnarray} where $y_s(\Phi) > 0$ is the $y$-coordinate of the surface of the cloud, where the density equals the surface density, $\rho(y_s(\Phi))=\rho_s\equiv p_{\rm ext}/c_s^2$ (see Fig.\ref{fig1}b). (In the present paper, we assume that all physical quantities have mirror symmetry about the $x$- and $y$-axes.) Thus, the mass-to-flux ratio is calculated as \begin{equation} \Difd{\lambda}{\Phi}=\frac{2q(\Phi)}{c_s^2}\int_0^{y_s(\Phi)} \frac{\exp{(-\psi/c_s^2)}}{(\dif{\Phi}{x})}dy, \label{eqn:dlambda_dPhi} \end{equation} where the integral on the right-hand side can be evaluated for approximate solutions of $\psi$ and $\Phi$ even before they have converged. Consider a cylindrical cloud (the parent cloud) with uniform density $\rho_0$ and radius $R_0$, threaded by a uniform magnetic field $B_0$ (see Fig.\ref{fig1}). The line-mass $\Delta \lambda$ contained between two magnetic field lines specified by $\Phi$ and $\Phi+\Delta \Phi$ is \begin{equation} \Delta \lambda= 2 \left(R_0 \frac{\Delta \Phi}{\Phi_{\rm cl}}\right) \left(R_0\left[1-(\Phi/\Phi_{\rm cl})^2\right]^{1/2}\right) \rho_0, \end{equation} where $\Phi_{\rm cl}$ is the total flux per unit length of the cloud, defined as \begin{equation} \Phi_{\rm cl}=R_0B_0, \end{equation} and the flux function $\Phi$ varies from $-\Phi_{\rm cl}$ to $+\Phi_{\rm cl}$. Thus, the mass-to-flux distribution for this uniform cylinder is written as \begin{equation} \Difd{\lambda}{\Phi}=2R_0^2\rho_0 \frac{\left[1-(\Phi/\Phi_{\rm cl})^2\right]^{1/2}} {\Phi_{\rm cl}}\ \ \ \ (-\Phi_{\rm cl}\le\Phi\le\Phi_{\rm cl}). \label{eqn:model_dm_dPhi} \end{equation} Using the total line-mass $\lambda_0=2\int_0^{\Phi_{\rm cl}}(\Dif{\lambda}{\Phi})d\Phi=\pi R_0^2 \rho_0$, equation (\ref{eqn:model_dm_dPhi}) is rewritten as \begin{equation} \Difd{\lambda}{\Phi}=\frac{2\lambda_0}{\pi \Phi_{\rm cl}} \left[1-(\Phi/\Phi_{\rm cl})^2\right]^{1/2}. \label{eqn:dlambda_dPhi0} \end{equation} If we require that the line-mass distribution of the solution ($d\lambda/d\Phi$ of eq.[\ref{eqn:dlambda_dPhi}]) be equal to that of the uniform cylinder (eq.[\ref{eqn:dlambda_dPhi0}]), $q(\Phi)$ is calculated as follows: \begin{equation} q(\Phi)=\frac{c_s^2\lambda_0\left[1-(\Phi/\Phi_{\rm cl})^2\right]^{1/2}}{\pi\Phi_{\rm cl}\int_0^{y_s(\Phi)}\exp(-\psi/c_s^2)/(\dif{\Phi}{x})dy}, \label{eqn:obtain_q} \end{equation} which can be coupled with the basic equations (\ref{eqn:poisson-Phi}) and (\ref{eqn:poisson-psi}). These three equations are sufficient to describe the magnetohydrostatic configuration.
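To make the structure of the self-consistent field iteration concrete, the following runnable miniature applies a Picard-type iteration to the nonlinear Poisson equation $\Delta_2\psi = q\exp(-\psi)$ alone, with a fixed, uniform $q$ and homogeneous Dirichlet boundaries. The full scheme of this paper alternates such elliptic solves for $\psi$ and $\Phi$ with the $q(\Phi)$ update of equation (\ref{eqn:obtain_q}); those couplings are omitted here, so this is a structural sketch under simplified assumptions, not the production solver.

\begin{verbatim}
# Runnable miniature of the Picard (self-consistent field) iteration for
# Delta psi = q0 exp(-psi) on a square grid, psi = 0 on the boundary.
import numpy as np
import scipy.sparse as sps
from scipy.sparse.linalg import spsolve

n, L, q0, relax = 65, 2.0, 1.0, 0.5
h = L / (n - 1)

# 1D second-difference operator (homogeneous Dirichlet) and its 2D kron sum
d1  = sps.diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) / h**2
I   = sps.identity(n)
lap = (sps.kron(d1, I) + sps.kron(I, d1)).tocsr()

psi = np.zeros(n * n)
for it in range(200):
    rhs = q0 * np.exp(-psi)          # source evaluated at the current iterate
    psi_new = spsolve(lap, rhs)      # linear elliptic solve
    if np.max(np.abs(psi_new - psi)) < 1e-8:
        break
    psi = (1 - relax) * psi + relax * psi_new   # under-relaxation
print(it, psi.min())
\end{verbatim}

For a subcritical source amplitude the iteration converges; the under-relaxation step damps oscillations of the kind encountered in the high-central-density models discussed below.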
\revII{The structure and the line-mass are affected by the mass loading of the filament. Differences arising from the mass loading will be discussed quantitatively in a forthcoming paper.} We normalize the basic equations using the surface density $\rho_s=p_{\rm ext}/c_s^2$, the isothermal sound speed $c_s$, the free-fall time $(4\pi G \rho_s)^{-1/2}$, the scale height $H=c_s/(4\pi G \rho_s)^{1/2}$, and the unit magnetic field strength $B_u=(8\pi c_s^2\rho_s)^{1/2}$. Dependent and independent variables are normalized as follows: $\rho=\rho' \rho_s$, $\psi=\psi' c_s^2$, $\lambda=\lambda' \rho_s H^2$, $\Phi=\Phi' H B_u$, $q=q' \rho_s c_s^2$, and ${\bf r}=H {\bf r}'$, where the primed variables represent the normalized quantities. Equations (\ref{eqn:poisson-Phi}) and (\ref{eqn:poisson-psi}) reduce respectively to \begin{eqnarray} \Delta_2'\Phi'& = &-\frac{1}{2}\Difd{q'}{\Phi'}\exp{\left(-\psi'\right)}, \label{eqn:final_basic_equation_1}\\ \Delta_2'\psi'& = & q'\exp{\left(-\psi'\right)}. \label{eqn:final_basic_equation_2} \end{eqnarray} Considering the magnetic field line running along the $y$-axis, the central density $\rho_c$ is written in terms of the central gravitational potential $\psi_c$ and the central $q(\Phi=0)=q_c$ as \begin{equation} \rho_c=\frac{q_c}{c_s^2}\exp{(-\psi_c/c_s^2)}, \end{equation} and thus the central $q'_c$ is expressed as \begin{equation} q'_c=\rho_c'\exp{\psi'_c}. \label{eqn:q'} \end{equation} Using this, equation (\ref{eqn:dlambda_dPhi}) gives the mass-to-flux ratio of the central flux tube, $\Phi'=0$ and $x'=0$, as \begin{equation} \left. \Difd{\lambda'}{\Phi'}\right|_c=2q'_c \int_0^{y'_s(\Phi'=0)} \left[\exp(-\psi')/\left(\dif{\Phi'}{x'}\right)\right]_{x'=0}dy'. \label{eqn:dlambda_from_q'c} \end{equation} Equation (\ref{eqn:model_dm_dPhi}) gives the mass-to-flux ratio for $\Phi'\ne 0$ from that at $\Phi'=0$ as \begin{equation} \Difd{\lambda'}{\Phi'}=\left. \Difd{\lambda'}{\Phi'}\right|_c \left[1-\left(\frac{\Phi'}{\Phi'_{cl}}\right)^2\right]^{1/2}. \label{eqn:dm_dPhi_from_central_value} \end{equation} Finally, from equation (\ref{eqn:obtain_q}), $q'$ for $\Phi' \ne 0$ is obtained as \begin{equation} q'=\frac{\Dif{\lambda'}{\Phi'}}{2\int_0^{y'_s(\Phi')}\exp{(-\psi')}/(\dif{\Phi'}{x'})dy'}, \label{eqn:q'2} \end{equation} where $\Dif{\lambda'}{\Phi'}$ is calculated from equation (\ref{eqn:dm_dPhi_from_central_value}) using equations (\ref{eqn:q'}) and (\ref{eqn:dlambda_from_q'c}). This equation gives $q'(\Phi')$ as a function of $\rho_c$, whereas equation (\ref{eqn:obtain_q}) requires $\lambda_0$ to be specified. Equations (\ref{eqn:final_basic_equation_1}) and (\ref{eqn:final_basic_equation_2}), together with this equation for $q'$, are the basic equations, and we find solutions that simultaneously satisfy the two partial differential equations using the self-consistent field method. The outer boundary conditions for these elliptic partial differential equations are set by \begin{eqnarray} \psi&=&2G\lambda_0\log{r}+C,\\ \Phi&=&B_0 x, \end{eqnarray} where $r=(x^2+y^2)^{1/2}$ represents the distance from the center of the filament; the former condition means that the gravitational potential far from the filament is approximated by that of an infinitesimally thin filament with the same total line-mass $\lambda_0$, and the latter means that the magnetic field far from the filament is uniform, with strength $B_0$, and runs in the $y$-direction.
These are reduced to the non-dimensional form \begin{eqnarray} \psi'&=&\frac{\lambda_0'}{2\pi}\log{r'}+C',\\ \Phi'&=&\beta_0^{-1/2}x', \end{eqnarray} where $\beta_0$ represents the plasma $\beta$ outside and far from the filamentary cloud, $\beta_0=p_{\rm ext}/(B_0^2/8\pi)$. The model is specified by three non-dimensional parameters: $\beta_0$, the plasma beta outside the filament; $R_0'$, the radius of the parent filamentary cloud; and $\lambda_0'$, the total line-mass. Since it is much more convenient to choose the central density $\rho_c'=\rho_c/\rho_s$ as the last parameter rather than the total line-mass, we use $\rho_c'$ instead of $\lambda_0'$ to specify the model (see Figure \ref{fig1} for an explanation). The reason for this substitution is that $\lambda_0'$ has a maximum value above which no equilibrium solution exists, and this maximum cannot be known a priori, whereas $\rho'_c$ has no such upper limit. To change the last parameter from $\lambda'_0$ to $\rho_c'$, equation (\ref{eqn:q'2}) is derived from equation (\ref{eqn:obtain_q}); equation (\ref{eqn:q'2}) enables the determination of $q'$ as a function of $\rho'_c$ rather than $\lambda'_0$. In our calculation, the total line-mass $\lambda_0'$ is obtained after the static configuration is calculated. For simplicity, we hereafter omit the prime representing normalized quantities, unless the meaning would be unclear. Model parameters are summarized in Table \ref{table1}. Although we calculated 12 values of the central density for each model, $\rho_c=2$, 3, 5, 10, 20, 30, 50, 100, 200, 300, 500, and $10^3$, it was difficult to obtain solutions especially for the models with high central density: we encountered an oscillation, rather than smooth convergence, in the iteration scheme used to find a solution. This oscillation appears in the radially outermost regions of geometrically thin (flat) filaments, which occur in the high central density models. The last column of Table \ref{table1} gives the range of central density for which a self-consistent solution is obtained. To solve the two-dimensional Poisson equations, we applied the finite-difference method to the Laplacian $\Delta_2$ in equations (\ref{eqn:final_basic_equation_1}) and (\ref{eqn:final_basic_equation_2}). The number of finite-difference cells is 641$\times$641, and the 321st cells are located on the $x$- and $y$-axes. Compared with a low-resolution study with 161$\times$161 cells, the obtained line-mass differs by only 0.2\% for a typical model ($\rho_c=50$ of Model C3). Thus, the numerical convergence is sufficient. The ICCG (Incomplete Cholesky factorization Conjugate Gradient) algorithm \citep{barrett1994} is used to solve the Poisson equations. \section{Result} \subsection{Models with Small $R_0$} We begin with the models with small $R_0 \lesssim 1$, Models A and B. The model parameters, $R_0$, $\beta_0$, and the range of $\rho_c/\rho_s$, are summarized in Table \ref{table1}. Model A assumes a small radius, $R_0=0.5$, and a strong magnetic field, $\beta_0=0.03$. Figure \ref{fig2} illustrates the structures of three states of Model A with different central densities ({\it a}: $\rho_c=10$, {\it b}: $\rho_c=100$, and {\it c}: $\rho_c=10^3$). As is clearly shown, the vertical size is larger than the horizontal one in the models with low central density, $\rho_c \lesssim 10^2$ (Figs. \ref{fig2} [{\it a}] and [{\it b}]).
On the other hand, in the model with high central density (Fig.~2[{\it c}]) the vertical size is smaller, as is ordinarily seen for magnetized clouds. Although the magnetic field lines seem straight, their shape differs between the \revII{model} with low central density ({\it a}) and that with high central density ({\it c}). The \revII{model} with $\rho_c=10$ (Fig.\ref{fig2}[{\it a}]) has magnetic field lines that bow outwardly; that is, $(B_x,B_y)=(+,+)$ in the fourth quadrant while $(B_x,B_y)=(-,+)$ in the first quadrant. In this case, the Lorentz force is exerted radially inwardly, which contributes to a filament shape whose major axis coincides with the direction of the magnetic field. On the other hand, in the \revII{model} with $\rho_c=10^3$ (Fig.\ref{fig2}[{\it c}]), the magnetic field lines bow inwardly ($(B_x,B_y)=(-,+)$ in the fourth quadrant while $(B_x,B_y)=(+,+)$ in the first quadrant). Thus, the Lorentz force is directed outward. Since the Lorentz force has no component in the direction of the magnetic field (vertical), the filament preferentially contracts in the vertical direction. As a result, the major axis of the gas distribution is perpendicular to the direction of the magnetic field for models with high central densities. When axisymmetric clouds (not filaments) are considered, a prolate shape extending in the direction of the magnetic field is expected either in a cloud that is wrapped by a toroidal magnetic field and pinched by the magnetic hoop stress \citep{tomisaka1991,fiege2000c} or in a cloud that has a small $R_0$, where the magnetic field plays a role not in supporting but in confining the cloud \citep{caitaam2010}. In the filamentary geometry, the magnetic field plays a similar role in the models with small initial radius $R_0$. The relation between the mass and the central density indicates the stability of the cloud \revII{(for spherically symmetric polytropes, see \citet{bonnor1958}; for magnetized clouds, see \citet{tomisaka1988b})}. That is, when a filament has a line-mass that exceeds the maximum allowable one obtained from the mass--central-density relation, the filament has no magnetohydrostatic equilibrium and must undergo dynamical contraction. As shown for the disk-like cloud, when one mass corresponds to two central densities, one of the two is stable and the other is unstable (\citet{zeldovich1971}; for the specific case of isothermal magnetized clouds, see \S IVb of \citet{tomisaka1988b}). We plot the line-mass $\lambda_0$ against the central density $\rho_c$ for the filamentary cloud. From equations (\ref{eqn:cyl-rho}) and (\ref{eqn:cyl-lambda-dens}), the line-mass of the non-magnetized filament is written in terms of the normalized central density $\rho'_c\equiv \rho_c/\rho_s$ as \begin{equation} \lambda_{\rm 0}=\frac{2c_s^2}{G}\left(1-{\rho'_c}^{-1/2}\right), \label{eqn:lambda-dim} \end{equation} which is rewritten in non-dimensional form as \begin{equation} \lambda'_{\rm 0}=8\pi\left(1-{\rho'_c}^{-1/2}\right).\label{eqn:nonmag-lambda} \label{eqn:lambda-nondim} \end{equation} Figure \ref{fig3} ({\it a}) illustrates the line-mass against the normalized central density for Models A and B. Compared with the dash-dotted line, which represents equation (\ref{eqn:nonmag-lambda}), it is seen that Model A with $\rho_c\lesssim 50$ is less massive than the non-magnetized filament.
Figure \ref{fig3} ({\it a}) also indicates that even the filament of Model B is less massive than the non-magnetized one when $\rho_c \lesssim 5$. This means that the magnetic field plays a role in confining the filament in the models with small $R_0$, especially for low central density. In this case, the magnetic field acts to reduce the equilibrium mass. The density and plasma $\beta$ distributions along both the $x$- and $y$-axes are shown in Figure \ref{fig4}. The distribution in the $y$-direction (dotted line) is more extended than that in the $x$-direction (solid line) in the model with $\rho_c=10$, while the distribution in the $y$-direction (dotted line) is more compact than that in the $x$-direction (solid line) for $\rho_c=100$ and $10^3$. Figure \ref{fig4} ({\it a}) also shows that the density distribution in the $y$-direction (dotted line) is similar to that of the non-magnetized filament (dashed line), although it is slightly more compact than the non-magnetized filament. In contrast, the distribution along the $x$-axis differs considerably from that of the non-magnetized filament, especially near the surface of the filament. Although the plasma $\beta$ is small near the surface in this model, it increases toward the center. While the plasma beta at the center, $\beta_c$, is below unity for the models with $\rho_c \lesssim 30$, it exceeds unity for the models with $\rho_c \gtrsim 50$. \subsection{Standard Model} Now we move on to the models with larger $R_0$, namely $R_0=2$ and 5. In Figure \ref{fig5}, we illustrate the structure of the filament of Model C3 ($R_0=2$ and $\beta_0=1$) for three central densities, $\rho_c=10$, 100, and 300. In contrast to Figure \ref{fig2} ({\it a}) and ({\it b}), the cross-section of the filament shows a shape whose major axis is perpendicular to the magnetic field. Panels ({\it b}) and ({\it c}) show that the magnetic field is strongly squeezed inwardly near the mid-plane. The magnetic field strength is weak in the horizontally peripheral region, in other words near the outer mid-plane of the filament. In the same figure, we plot by the dotted line the radius of the non-magnetized filament with the same central density. Figure \ref{fig5} indicates that the size in the direction parallel to the magnetic field ($y$-direction) is slightly smaller than that of the non-magnetized filament, while the size in the perpendicular direction ($x$-direction) is larger. In particular, models with high central density (({\it c}): $\rho_c=300$) have a flat shape. Comparison of Models C3 -- C6 shows that the half width of the filament in the $y$-direction, $Y_s$, decreases with increasing field strength (from C3 to C6). That is, models with the same $\rho_c=100$ but different $\beta_0$ have $Y_s=0.7$ ($\beta_0=1$), 0.65 ($\beta_0=0.5$), 0.55 ($\beta_0=0.1$), and 0.475 ($\beta_0=0.01$). If we compare the respective models for $\rho_c=10$ and for $\rho_c=300$, the trend is the same for both central densities. On the other hand, the half width in the $x$-direction, $X_s$, increases with the magnetic field strength. That is, $X_s=1.3$ ($\beta_0=1$), 1.41 ($\beta_0=0.5$), 1.64 ($\beta_0=0.1$), and 1.88 ($\beta_0=0.01$), if we compare models with $\rho_c=100$. This trend is also seen at other central densities. This shows that the width in the $x$-direction \revII{seems to converge} to $X_s \rightarrow R_0=2$ \revII{(the radius of the parent cloud)} with increasing magnetic field strength.
As shown in Figure \ref{fig6}, the density distributions in both the $x$- (solid line) and $y$-directions (dotted line) are more compact for the models with higher central density. This figure also shows that the slope of the density distribution is similar to that of the non-magnetized filament (dashed line). The plasma $\beta$ distributions along the $x$- and $y$-axes are illustrated in Figure \ref{fig6} ({\it b}). This figure shows clearly that the central plasma $\beta$ is approximately constant, $\beta_c \simeq 3-4$, irrespective of the central density for $10 \le \rho_c \le 300$. On the other hand, the plasma $\beta$ in the envelope varies greatly depending on the central density $\rho_c$. Furthermore, the plasma $\beta$ increases with increasing distance from the center in the $x$-direction, while it decreases in the $y$-direction. The increase in plasma $\beta$ in the $x$-direction corresponds to the relative weakness of the magnetic field seen in the outer ($|x| \gtrsim 0.3-0.5$) mid-plane disk region. Conversely, the decrease in plasma $\beta$ in the $y$-direction is explained by the decrease in the density (and simultaneously the pressure). Note the great contrast with the models with small $R_0$ of Figure \ref{fig4} ({\it b}): for Model A, the plasma $\beta$ decreases near the surface in both the $x$- and $y$-directions. The inhomogeneity in the magnetic field is thus characteristic of the models with larger $R_0$. Figure \ref{fig7} illustrates the structure of Models D1 ({\it a}), D2 ({\it b}), and D3 ({\it c}). Comparing Models C3 (Fig. \ref{fig5}({\it b})) and D1 (Fig. \ref{fig7}({\it a})) at fixed $\rho_c$, we can see that the major-to-minor axis ratio of the cross-section of the filament increases with $R_0$. This seems to come from the fact that the size in the $x$-direction is approximately given by the initial radius $R_0$, while the width in the $y$-direction is similar to the diameter of the non-magnetized filament. Increasing the magnetic field strength from $\beta_0=1$ ({\it a}) to $\beta_0=0.01$ ({\it c}), the curvature of the magnetic field lines decreases, since a strong magnetic field resists bending. Although the horizontal size of the filament is similar, these three models have completely different line-masses, $\lambda_0=40.8$ ({\it a}), $85.9$ ({\it b}), and $169$ ({\it c}), respectively. Figure \ref{fig8}({\it a}) plots the density distributions along the $x$- and $y$-axes. The distribution along the $x$-axis (solid line) is more extended in the model with the stronger magnetic field (Model D3, $\beta_0=0.01$). On the other hand, Model D1 shows a centrally condensed density distribution, and the slope of $\rho(|x|)$ is shallower than that of the non-magnetized filament (dashed line) in the envelope region $r\gtrsim 1$. The figure also indicates that the distribution in the $y$-direction is more compact than that of the non-magnetized filament with the same central density, as is also seen in Models C. As seen in Figure \ref{fig8} ({\it b}), the central plasma $\beta$ varies as $\beta_c \simeq 2.1$ ({\it a}: $\beta_0=1$), 1.2 ({\it b}: $\beta_0=0.1$), and 0.54 ({\it c}: $\beta_0=0.01$), depending on $\beta_0$. That is, increasing $\beta_0$ by two orders of magnitude increases $\beta_c$ by a factor of $\sim 4$. Although the plasma $\beta$ decreases along the $y$-axis with increasing distance from the center (dotted line), the distribution of the plasma $\beta$ along the $x$-axis is complex (solid line).
In Model D1 ({\it a}), $\beta(|x|)$ increases from $\sim 2$ to $40$ along the $x$-axis; in Model D2 ({\it b}), $\beta(|x|)$ increases from $\simeq 1.2$, reaches $\simeq 2$, and then drops below unity; in Model D3 ({\it c}), $\beta(|x|)$ decreases monotonically from $\simeq 0.5$. This explains the fact that the magnetic field is extremely weak in the outer region of the filament, $|x| \gtrsim 3$, in Model D1, while the field strength is almost uniform in Model D3, where the Lorentz force is much stronger than the pressure force and gravity. We illustrate the relation between the line-mass and the central density for Models C and D ($R_0=2$ and 5) in Figure \ref{fig3}({\it b}), which shows that the line-mass of the magnetized filaments is always larger than that of the non-magnetized one (dash-dotted line). This means that the magnetic field plays a role in supporting the isothermal filament against self-gravity. In contrast to Models C and D, Models A and B with small $R_0$ show that the magnetic field has the effect of confining the filamentary gas, so the line-mass of the magnetized filament is sometimes smaller than that of the non-magnetized filament. As a result, the line-masses of Models C and D (the standard models) are larger than those of Models A and B (the models with small $R_0$), when comparing the respective models with the same central density. \section{Discussion} \subsection{Maximum Mass} \label{sec:4.1} Figure \ref{fig3} shows the masses of the equilibrium solutions. As is expected from the non-magnetized model, the line-mass is an increasing function of the central density, or of the central-to-surface density ratio. The non-magnetized filament has a maximum line-mass of $\lambda'_0=8\pi$ (see eq.[\ref{eqn:lambda-nondim}]), corresponding to the dimensional value $\lambda_0=2c_s^2/G$ (see eq.[\ref{eqn:lambda-dim}]), which is reached for $\rho_c\rightarrow \infty$. Similarly to the non-magnetized model, the maximum line-mass for given parameters $R_0$ and $\beta_0$ seems to be reached when $\rho_c\rightarrow \infty$, so the maximum line-mass must be estimated from the line-masses of filaments with finite central densities. We calculate $\dif{\log \lambda_0}{\log \rho_c}$ for the state with the highest central density. When this slope is as small as $ < 0.1$, the largest line-mass obtained is a good estimate of the maximum line-mass that can be supported. The largest line-masses obtained for the respective models are plotted in Figure \ref{fig9} and listed in Table \ref{table1}. The line-mass ($y$-axis) is plotted against the magnetic flux per unit length, $\Phi_{\rm cl} \equiv R_0B_0$ ($x$-axis). Asterisks represent the maximum line-mass for the more reliable models, with $\dif{\log \lambda_0}{\log \rho_c} \le 0.1$, while crosses represent the maximum line-mass for the models with $0.1 < \dif{\log \lambda_0}{\log \rho_c} \le 0.25$. From the asterisk points, we obtain an empirical formula for the maximum line-mass, \begin{equation} \lambda'_{\rm max} \simeq 4.3 \Phi_{\rm cl}' + 20.8, \label{eqn:-lambda'} \end{equation} using the least-squares method, where we add $'$ to emphasize that the quantities are normalized. Although the maximum line-mass of the non-magnetized filament given by this formula, $\lambda'_{\rm max}(\Phi_{\rm cl}'=0)\simeq 20.8$, is somewhat smaller than the analytic value $\lambda'_{\rm max}(\Phi_{\rm cl}'=0)=8\pi\simeq 25.1$, the fit is remarkably good.
Since the line-mass and the magnetic flux per unit length are normalized by $c_s^2/(4\pi G)$ and $c_s^2/(G/2)^{1/2}$ in this paper, the dimensional maximum line-mass is expressed with the dimensional flux as \begin{equation} \lambda_{\rm max} \simeq 0.24 \frac{\Phi_{\rm cl}}{G^{1/2}}+1.66 \frac{c_s^2}{G}. \label{eqn:mcr_nond} \end{equation} The critical mass-to-flux ratio has been obtained for the disk-like cloud as $(G^{1/2}M/\Phi_B)_{\rm crit}\simeq 0.18 \simeq 1/2\pi$ (eq.(4.1) of \citet{tomisaka1988b}; see also \citet{nakano78}). It should be noticed that the factor for the filament, $0.24$, is similar to that of the disk, $0.18$\footnote{ The mass-to-flux ratio ($M/\phi$) and the ratio of line-mass to flux per unit length ($\lambda/\Phi_{\rm cl}$) have the same dimension as the ratio of column density to magnetic flux density ($\sigma/B$). As the strength of the magnetic field increases, a flat structure appears near the center of the cloud, both in a disk-like cloud \citep{tomisaka1988b} and in a filamentary cloud. Thus, a flat structure threaded perpendicularly by the magnetic field appears commonly, and such a disk seems to control the maximum mass and line-mass. If the condition of criticality is related to the structure of this flat part, and thus to the $\sigma/B$ ratio of the structure, this explains the similarity of the factors appearing in the critical $M/\phi$ and in the critical $\lambda/\Phi_{\rm cl}$.}. The maximum line-mass (eq.[\ref{eqn:mcr_nond}]) is finally evaluated as \begin{equation} \lambda_{\rm max} \simeq 22.4 \left(\frac{R_0}{0.5\,{\rm pc}}\right) \left(\frac{B_0}{10\,{\rm \mu\, G}}\right) M_\odot\,{\rm pc}^{-1} +13.9\left(\frac{c_s}{190\, {\rm m\,s^{-1}}}\right)^2 M_\odot\,{\rm pc}^{-1}. \label{eqn:mcr_dim} \end{equation} Thus, when the magnetic flux per unit length is larger than $\Phi \gtrsim 3\, {\rm pc \,\mu G}(c_s/190\,{\rm m\,s^{-1}})^2$, the maximum line-mass of the filament is affected by the magnetic field (the first term is larger than the second term). The factors in front of $\Phi'_{\rm cl}$ and $\Phi_{\rm cl}$ in equations (\ref{eqn:-lambda'}) and (\ref{eqn:mcr_nond}) must depend on the form of mass loading adopted here (eq.[\ref{eqn:model_dm_dPhi}]). By analogy with the case of the disk-like cloud (\S IVb of \citet{tomisaka1988b}), the factor may increase if we choose a more uniform $d\lambda/d\Phi_{\rm cl}$ rather than the centrally concentrated one assumed in equation (\ref{eqn:model_dm_dPhi}). \subsection{Virial Analysis} In this section, we analyze the equilibrium state of the cloud using a virial analysis. The equation of motion for MHD is written \begin{equation} \rho\left[\difd{\bf u}{t} +\left({\bf u}\cdot\nabla\right){\bf u}\right]=-\nabla p -\rho\nabla\psi +\frac{1}{4\pi}\left(\nabla\times {\bf B}\right)\times {\bf B},\label{eqn:eq-motion} \end{equation} where {\bf u} represents the flow velocity. We consider an axisymmetric filament uniform in the axis direction. \revII{Using the cylindrical coordinates, we multiply by the position vector from the center and integrate over the filament. The first term of the right-hand side gives} \begin{equation} \int_0^R -\difd{p}{r}r2\pi r dr=-\left[p 2 \pi r^2\right]_0^{R}+\int_0^Rp4\pi rdr =-2\pi R^2p_s+2c_s^2\lambda,\label{eqn:virial1} \end{equation} where $R$, $p_s$, and $\lambda$ represent the radius of the filament, the pressure at the surface of the filament, and the line-mass defined as $\lambda=\int_0^R \rho 2\pi r dr$.
Treating the second term of the right-hand side of equation (\ref{eqn:eq-motion}) in a similar way provides \begin{equation} \int_0^R -\rho \difd{\psi}{r}r 2 \pi rdr =-\int_0^R 2G\lambda_r \rho 2\pi r dr =-2G\int_0^R \lambda_r d\lambda_r =-G\lambda^2, \label{eqn:virial2} \end{equation} where we use the Poisson equation for the gravity, $ -\dif{\psi}{r}=-{2G\lambda_r}/{r} $, and the definition of the line-mass contained inside the radius $r$, $ \lambda_r=\int_0^r \rho 2\pi r dr. $ Thus, for a non-magnetic filament, ${\bf B}=0$, equation (\ref{eqn:eq-motion}) gives the virial equation in equilibrium \begin{equation} 2\pi R^2 p_s = G \lambda\left(\frac{2c_s^2}{G}-\lambda\right). \label{eqn:eq-of-B=0} \end{equation} Since the right-hand side of the equation must be positive, the line-mass $\lambda$ has a maximum, $\lambda_{\rm max}=2c_s^2/G$. This critical line-mass is estimated as $\lambda_{\rm max}\simeq 17\, M_\odot{\rm pc^{-1}}(c_s/190\,{\rm m\,s^{-1}})^2$ for interstellar molecular gas with $T=10\, {\rm K}$. When the filament is laterally threaded by the magnetic field, the last term of the right-hand side of equation (\ref{eqn:eq-motion}) gives the magnetic force term, which is estimated as \begin{equation} \frac{B^2}{8\pi}\pi R^2 =\frac{\Phi_{\rm cl}^2}{8}, \label{eqn:magnetic-term} \end{equation} where $\Phi_{\rm cl}$ represents the magnetic flux per unit length, $\Phi_{\rm cl}=BR=B_0R_0$. Here we assumed that the flux $\Phi_{\rm cl}$ is \revII{the same as in the parent state} in which the radius and the magnetic field strength are equal to $R_0$ and $B_0$, respectively. Equation (\ref{eqn:eq-of-B=0}) becomes \begin{equation} 2\pi R^2 p_s = 2c_s^2\lambda-G\lambda^2+\frac{\Phi_{\rm cl}^2}{8}. \label{eqn:eq-of-B<>0} \end{equation} Thus, the maximum line-mass increases and is equal to \begin{equation} \lambda=\frac{c_s^2+(c_s^4+G\Phi_{\rm cl}^2/8)^{1/2}}{G} \sim \frac{2c_s^2}{G}+\frac{\Phi_{\rm cl}^2}{16c_s^2}, \label{eqn:mass-formula-small_phi} \end{equation} where, to derive the approximate formula \revII{for} $\lambda$ \revII{in the weakly magnetized case}, we assumed $2c_s^2/G\gg \Phi_{\rm cl}^2/(16c_s^2)$, or $\Phi_{\rm cl} \ll 3\, {\rm pc\,\mu G}(c_s/190\, {\rm m\,s^{-1}})^{2}$. The extra line-mass allowed by the magnetic field is expected to be \begin{equation} \Delta \lambda \sim 2.5\, M_\odot\,{\rm pc}^{-1} (R_0/0.1\,{\rm pc})^2(B_0/10\,{\rm \mu G})^2(c_s/190\,{\rm m\,s^{-1}})^{-2}. \end{equation} Equation (\ref{eqn:mass-formula-small_phi}) means that the maximum line-mass is expressed as $\lambda'_{\rm max}=a\Phi_{\rm cl}'^2 + b$ in the limit of small magnetic flux per unit length. Using our results for the five models with $0 \le \Phi_{\rm cl}' =R'_0/\beta_0^{1/2} < 5$ (Models A, B1, C3, C4 and the non-magnetized model), the maximum line-mass is well fitted by the formula \begin{equation} \lambda'_{\rm max}=0.82 \Phi_{\rm cl}'^2 +25, \end{equation} or, in dimensional form, \begin{equation} \lambda_{\rm max}=0.033 \Phi_{\rm cl}^2/c_s^2 + 2.0 c_s^2/G. \label{eqn:empirical-lambda-weak-B} \end{equation} Although the numerical factor in front of $\Phi_{\rm cl}^2$ is about half of that expected from the virial analysis, our empirical fit [eq.(\ref{eqn:empirical-lambda-weak-B})] seems to have a theoretical basis.
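The virial estimates above can be checked by direct arithmetic. The following sketch (in cgs/Gaussian units, consistent with the $B^2/8\pi$ convention used here) evaluates the full root of equation (\ref{eqn:eq-of-B<>0}) at $p_s=0$, its weak-field expansion, and the extra line-mass for $R_0=0.1\,{\rm pc}$, $B_0=10\,{\rm \mu G}$, and $c_s=190\,{\rm m\,s^{-1}}$.

\begin{verbatim}
# Sketch: arithmetic check (cgs units) of the virial estimates in the text.
import numpy as np

G     = 6.674e-8               # cm^3 g^-1 s^-2
pc    = 3.086e18               # cm
M_sun = 1.989e33               # g
c_s   = 1.9e4                  # cm/s  (190 m/s)
to_Mpc = pc / M_sun            # converts g/cm -> M_sun/pc

Phi_cl = (0.1 * pc) * 1.0e-5   # flux per unit length R0*B0 [G cm], B0 = 10 muG

lam_full  = (c_s**2 + np.sqrt(c_s**4 + G * Phi_cl**2 / 8)) / G
lam_weak  = 2 * c_s**2 / G + Phi_cl**2 / (16 * c_s**2)
delta_lam = Phi_cl**2 / (16 * c_s**2)

print(lam_full * to_Mpc, lam_weak * to_Mpc)   # both ~19 M_sun/pc
print(delta_lam * to_Mpc)                     # ~2.5 M_sun/pc extra line-mass
print(2 * c_s**2 / G * to_Mpc)                # ~17 M_sun/pc, non-magnetized max
\end{verbatim}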
In the case of $2c_s^2/G \ll \Phi_{\rm cl}^2/(16c_s^2)$, or $\Phi_{\rm cl} \gg 3\, {\rm pc\,\mu G}(c_s/190\, {\rm m\,s^{-1}})^{2}$, equation (\ref{eqn:eq-of-B<>0}) gives \begin{equation} \lambda \simeq \frac{c_s^2}{G}+\frac{\Phi_{\rm cl}}{2^{3/2}G^{1/2}}+\frac{2^{1/2}c_s^4}{G^{3/2}\Phi_{\rm cl}}, \end{equation} and the mass increment due to the magnetic field (the second term) is expected to be proportional to the magnetic flux per unit length for a filament with large $\Phi_{\rm cl}$: \begin{equation} \Delta \lambda \sim 30\, M_\odot\,{\rm pc}^{-1} (R_0/0.5\, {\rm pc})(B_0/10\, {\rm \mu G}). \end{equation} This reproduces the empirical mass formula obtained numerically (eq.~[\ref{eqn:mcr_dim}]). When the filament is magnetically supported, the maximum line-mass is proportional to the magnetic flux per unit length $\Phi_{\rm cl}$. Therefore, the functional form of the empirical mass formula in $\S$\ref{sec:4.1} also has a theoretical meaning. \subsection{Column Density Distribution} For modeling the density profile of a filament, Plummer-like profiles are commonly adopted \citep{nutter2008,arzoumanian2011}, \begin{equation} \rho(r)=\frac{\rho_c}{[1+(r/R_f)^2]^{p/2}}, \end{equation} which also has an analytic expression for the column density, \begin{equation} \sigma(r)=A\frac{\rho_cR_f}{[1+(r/R_f)^2]^{(p-1)/2}}, \label{eqn:plummer-sigma} \end{equation} where $A$ is a numerical factor. The distribution is determined by the central density $\rho_c$, the core radius $R_f$, and the density slope parameter $p$ \citep{nutter2008,arzoumanian2011}. However, there is a contradiction between theory and observation: Herschel observations indicate $p \sim 2$, although the isothermal hydrostatic filament has $p=4$ (eq.[\ref{eqn:cyl-rho}]). This is sometimes considered a consequence of dynamical contraction. Dynamical contraction of a filament whose pressure obeys the polytropic relation $p\propto \rho^\gamma$ is described by a self-similar solution with a slope $\rho \propto r^{-2/(2-\gamma)}$ \citep{kawachi1998}, and the infall speed expected from these solutions is reported to be consistent with observations \citep{palmeirim2013}. On the other hand, from the standpoint that the filament is hydrostatic, modifications of the gas equation of state have been proposed. Here we study instead the possibility that the column density distribution is modified by magnetic effects, and compute the column density distribution expected for the magnetized filament. In Figure \ref{fig10}, the column density integrated along the $y$-axis, $\sigma(x)=\int\rho dy$, and that along the $x$-axis, $\sigma(y)=\int\rho dx$, are plotted respectively in panels ({\it a}) and ({\it b}); the column density is calculated for the state with $\rho_c=300$ of Model C3 (Fig.\ref{fig5}{\it c}). To mimic observations, an additional background column density is added to the magnetohydrostatic solution. We assume an additional column density of 0\%, 1\%, 3\%, or 7\% of the column density observed at the center, that is, of $\sigma(x=0)$ and $\sigma(y=0)$. The four curves in this figure correspond to these different backgrounds.
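A minimal sketch of the three-point fit of the Plummer-like profile described in the next paragraph is given below: given the central column density and the radii at which the profile falls to one half and one tenth of it, the two remaining parameters $(R_f,p)$ follow from a small root-finding problem. The input radii used here are illustrative placeholders, not the values of Table \ref{table2}.

\begin{verbatim}
# Sketch: three-point fit of sigma(r) = sigma_0/[1+(r/R_f)^2]^((p-1)/2)
# to (sigma_0, r_1/2, r_1/10).  Input radii are illustrative placeholders.
import numpy as np
from scipy.optimize import fsolve

r_half, r_tenth = 0.6, 2.0     # radii of sigma_0/2 and sigma_0/10 (placeholders)

def residuals(params):
    Rf, p = params
    f = lambda r: (1 + (r / Rf)**2)**((p - 1) / 2)
    return [f(r_half) - 2.0, f(r_tenth) - 10.0]

Rf, p = fsolve(residuals, x0=[0.5, 4.0])
print(Rf, p)

# resulting normalized profile, e.g. for overplotting on a numerical solution
r = np.linspace(0, 5, 6)
print(1.0 / (1 + (r / Rf)**2)**((p - 1) / 2))
\end{verbatim}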
To estimate the parameters, we fit the distribution at three radii: at $r=0$, from $\sigma(x=0)$ and $\sigma(y=0)$ we obtain the column density at the center, $A\rho_cR_f\equiv \sigma_0$, in equation (\ref{eqn:plummer-sigma}); we then determine $r=r_{1/2}$, at which the distribution falls to half of the central column density, $\sigma(r_{1/2})=\sigma_0/2$, and $r=r_{1/10}$, at which it falls to one-tenth of the central column density, $\sigma(r_{1/10})=\sigma_0/10$. The values of $\sigma_0$, $r_{1/2}$, and $r_{1/10}$ for this model are shown in Table \ref{table2}. The parameter $r_{1/2}$ represents the size of the core, and the parameter $r_{1/10}$ is related to the steepness of the density distribution in the envelope. The parameters that fit the numerical solution with the Plummer-like distribution are also shown in Table \ref{table2}, and the fitted Plummer-like distributions are shown by the dash-dotted lines in Figure \ref{fig10}. Although a $\chi^2$-fitting might be more appropriate than this three-point fitting, Figure \ref{fig10} shows that our fitting works well. This figure shows that the parameter $p$ takes a value $p \gtrsim 4$ when the additional background is low ($\lesssim 3\%$) or absent (0\%). This means that the filaments have a $p\sim 4$ distribution even when the magnetic field is taken into account. On the other hand, if we add a relatively large background column density, $7\%$ of $\sigma_0$, a shallow slope appears in the envelope and the slope parameter is as small as $p\lesssim 3$. This means that the slope parameter $p$ depends strongly on the completeness of the background subtraction in an observation. When the background subtraction is incomplete, a shallow slope in the envelope and a small $p$ parameter are expected, even though the observed power $p \simeq 2$ may have another origin. \section{Summary} We calculated magnetohydrostatic configurations of isothermal filaments threaded laterally by a magnetic field. The magnetic field has the effect of supporting the filament, unless the radius of the parent filament $R_0$ is small. The maximum line-mass supported against self-gravity is obtained as a function of the magnetic flux threading the filament per unit length and the isothermal sound speed. When considering a filamentary cloud, we therefore have to take the magnetic field into account. \acknowledgments This work was supported in part by JSPS Grant-in-Aid for Scientific Research (A) 21244021 in FY 2009--2012. Numerical computations were in part carried out on Cray XT4 and Cray XC30 at the Center for Computational Astrophysics, CfCA, of the National Astronomical Observatory of Japan.
\section{introduction} Fundamental studies of unconventional superconductors are currently hindered by the scarcity of direct methods to determine the structure of the superconducting order parameter. Apart from Josephson junction experiments, few spectroscopic probes provide valuable information about the phase of the order parameter. In this work, we discuss how phase-sensitive coherence effects can be studied using scanning tunneling spectroscopy/microscopy (STS/STM). The key idea is that the evolution of the phase of the order parameter in momentum space can be determined from the Fourier-transformed fluctuations in the tunneling density of states. The sensitivity of these fluctuations to the scattering rates of superconducting quasiparticles manifests itself through coherence factor effects. Quasiparticles in a superconductor are coherent superpositions of electron and hole excitations. Coherence factors characterize how the scattering rate of a superconducting quasiparticle off a given scatterer differs from the scattering rate of a bare electron off the same scatterer \cite{Tinkham}. Coherence factors are determined by combinations of the Bogoliubov coefficients $u_{\bf{k}}$ and $v_{\bf{k}}$, which give the proportions of the particle and hole components that constitute a superconducting quasiparticle, \begin{eqnarray}\label{Bog} c_{{\bf{k}}\uparrow}=u_{\bf{k}} a_{{\bf{k}}\uparrow}+v_{\bf{k}} a^{\dagger }_{-{\bf{k}}\downarrow},\\ c_{{\bf{k}}\downarrow}=-v_{\bf{k}} a^{\dagger }_{-{\bf{k}}\uparrow}+u_{\bf{k}} a_{{\bf{k}}\downarrow}. \end{eqnarray} The momentum-dependent order parameter $\Delta_{\bf{k}}=|\Delta_{\bf{k}}|e^{i\phi({\bf{k}})}$ has the same sign as the Bogoliubov coefficient $v_{\bf{k}}$, so that studies of the scattering rates of quasiparticles with different momenta can delineate how the phase of the order parameter $\phi({\bf{k}})$ changes in momentum space. In studies of unconventional superconductors with a spatially varying order parameter, scanning tunneling spectroscopy provides a spectroscopic probe with real-space resolution at the atomic level. In the past, the observation of phase-sensitive coherence effects with STM has been thwarted by the problem of controlling the scatterers \cite{HoffmanThesis}. An ingenious solution to this problem has been found in the application of a magnetic field, which introduces vortices as controllable scatterers in a given system \cite{Hanaguri}. In this work, we develop a framework for the observation of coherence factor effects with Fourier Transform Scanning Tunneling Spectroscopy (FT-STS). Using this framework, we analyze the recent observations of coherence factor effects in a magnetic field to develop a phenomenological model of quasiparticle scattering in a disordered vortex array. \section{Coherence factors in STM measurement} Scanning tunneling spectroscopy, which involves tunneling of single electrons between a scanning tip and a superconducting sample, offers an opportunity to examine how the spectrum of superconducting quasiparticles responds to disorder. We now discuss how we can extract phase-sensitive information from STM data.
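For a concrete feel for these coherence factors, the sketch below evaluates $u^2_{\bf{k}}(v^2_{\bf{k}})=\frac{1}{2}(1\pm\epsilon_{\bf{k}}/E_{\bf{k}})$ (quoted later in the text) and the scalar-scatterer combination $(u_+u_--v_+v_-)^2$ for a pair of momenta with equal gap magnitude and either equal or opposite gap signs; the numerical inputs are illustrative only.

\begin{verbatim}
# Sketch: Bogoliubov coefficients and the scalar-scatterer coherence factor
# (u_+ u_- - v_+ v_-)^2 for sign-preserving vs sign-changing gaps.
import numpy as np

def uv2(eps, Delta):
    E = np.hypot(eps, Delta)                          # quasiparticle energy E_k
    return 0.5 * (1 + eps / E), 0.5 * (1 - eps / E)   # u_k^2, v_k^2

eps_p, eps_m = 0.1, -0.05        # normal-state energies at k_+ and k_- (illustrative)
for D_p, D_m in [(0.2, 0.2), (0.2, -0.2)]:  # same-sign vs opposite-sign gap
    u2p, v2p = uv2(eps_p, D_p)
    u2m, v2m = uv2(eps_m, D_m)
    # sign(v_k) follows sign(Delta_k); u_k is taken positive
    up, vp = np.sqrt(u2p), np.sign(D_p) * np.sqrt(v2p)
    um, vm = np.sqrt(u2m), np.sign(D_m) * np.sqrt(v2m)
    print((up * um - vp * vm)**2)   # coherence factor for a scalar scatterer
\end{verbatim}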
\subsection{LDOS correlators $R^{even}$ and $R^{odd}$ have well-defined coherence factors} We describe the electron field inside a superconductor by a Balian-Werthammer spinor \cite{BW} \begin{gather*} \Psi({\bf{r}},\tau )= \begin{pmatrix} &\psi_{\uparrow} ({\bf{r}},\tau)\\ &\psi_{\downarrow} ({{\bf{r}}},\tau )\\ &\psi_{\downarrow}^{\dagger} ({{\bf{r}}},\tau )\\ &-\psi_{\uparrow}^{\dagger} ({{\bf{r}}},\tau ) \label{Psi} \end{pmatrix}, \end{gather*} where ${\bf{r}}$ denotes real space coordinates and $\tau$ is imaginary time. The Nambu Green's function is defined as the time-ordered average \begin{equation}\label{matrixG} \hat {G}_{\alpha \beta }({{\bf{r}}'},{{\bf{r}}};\tau )= -\langle T_\tau \Psi_{\alpha }({\bf{r}}',\tau ) \Psi_{\beta }^{\dagger}({\bf{r}},0) \rangle . \end{equation} Tunneling measurements determine the local density of states, which is given by \begin{equation}\label{rhoNambu} \rho ({\bf{r}},\omega) = \frac{1}{\pi} ~Im ~ {\rm Tr}~\frac{1+\tau_3}{2} \left[ G ({\bf{r}},{\bf{r}}; \omega-i\delta )\right], \end{equation} where $G ({\bf{r}}',{\bf{r}};z)$ is the analytic continuation $G ({\bf{r}}',{\bf{r}};i\omega_{n})\rightarrow G ({\bf{r}}',{\bf{r}};z)$ of the Matsubara Green's function \begin{equation}\label{GMats} G ({\bf{r}}',{\bf{r}};i\omega_{n}) = \int_{0}^{\beta }G ({\bf{r}}',{\bf{r}};\tau )e^{i\omega_{n}\tau }d\tau , \end{equation} with $\omega_n=(2n+1)\pi T$. The appearance of the combination $\frac{1+\tau_3}{2}$ in (\ref{rhoNambu}) projects out the normal component of the Nambu Green's function \begin{equation} Tr~\frac{1+\tau_3}{2} ~G ({\bf{r}}',{\bf{r}};\tau )= -\sum_{\sigma} \langle T_\tau \psi_{\sigma } ({\bf{r}}',\tau )\psi ^{\dagger }_{\sigma } ({\bf{r}},0)\rangle . \end{equation} The mixture of the unit and the $\tau_3$ matrices in this expression prevents the local density of states from developing a well-defined coherence factor. We now show that the components of the local density of states that have been symmetrized or antisymmetrized in the bias voltage do have a well-defined coherence factor. The key result here is that \begin{eqnarray}\label{odd-even} \rho ({{\bf{r}}},\omega) \pm \rho ({{\bf{r}}},-\omega) = \frac{1}{\pi}~ Im~ {\rm Tr}~ \left[\left\{\begin{array}{c} 1\cr \tau_3 \end{array} \right\} G ({\bf{r}},{\bf{r}}; \omega-i\delta )\right]. \end{eqnarray} In particular, this implies that the antisymmetrized density of states has the same coherence factor as the charge density operator $\tau_{3}$. To show these results, we introduce the ``conjugation matrix'' $C= \sigma_{2}\tau_{2} $, whose action on the Nambu spinor is to conjugate the fields, \begin{equation}\label{conj} C\Psi= (\Psi ^{\dagger } )^{T}\equiv \Psi^{*} , \end{equation} effectively taking the Hermitian conjugate of each component of the Nambu spinor. This also implies that $\Psi ^{\dagger } C= \Psi^{T}$. Here $\tau_i$ are Pauli matrices acting in particle-hole space, for example, \begin{gather*} {\bf \tau_3}= \begin{pmatrix} &\underline{1}&0\\ &0&-\underline{1} \label{tau3} \end{pmatrix}, \end{gather*} and $\sigma_i$ are Pauli matrices acting in spin space, \begin{gather*} {\bf \sigma_i}= \begin{pmatrix} &\underline{\sigma_i}&0\\ &0&\underline{\sigma_i} \label{sigmai} \end{pmatrix}.
\end{gather*} Using (\ref{conj}), it follows that \begin{eqnarray}\label{l} [C G({\bf{r}}',{\bf{r}};\tau )C]_{\alpha \beta } &=& - \langle T_\tau C\Psi ({\bf{r}}',\tau)\Psi^{\dagger } ({\bf{r}},0)C\rangle_{\alpha \beta } \cr &=& - \langle T_\tau\Psi_{\alpha }^{*} ({\bf{r}}',\tau)\Psi_{\beta }^{T} ({\bf{r}},0)\rangle\cr &=& \langle T_\tau\Psi_{\beta } ({\bf{r}},0)\Psi_{\alpha }^{\dagger } ({\bf{r}}',\tau)\rangle\cr &=& - G_{\beta \alpha } ({\bf{r}},{\bf{r}}',-\tau), \end{eqnarray} or, in matrix notation, \begin{eqnarray}\label{CGC} C G({\bf{r}},{\bf{r}}';\tau )C&=& -G^{T} ({\bf{r}}',{\bf{r}};-\tau), \end{eqnarray} which in turn implies for the Matsubara Green's function (\ref{GMats}) \begin{eqnarray}\label{CGCiomega} C G({\bf{r}},{\bf{r}}';i \omega_n )C&=& -G^{T} ({\bf{r}}',{\bf{r}};-i \omega_n ). \end{eqnarray} For the advanced Green's function, which is related to the Matsubara Green's function via the analytic continuation $G ({\bf{r}},{\bf{r}}',i\omega_{n})\rightarrow G ({\bf{r}},{\bf{r}}',z)$, we obtain \begin{eqnarray}\label{CGCAdv} C G({\bf{r}},{\bf{r}}'; \omega-i\delta )C&=& -G^{T} ({\bf{r}}',{\bf{r}};-\omega+i\delta ). \end{eqnarray} Using this result and the commutation relations of the Pauli matrices, we obtain \begin{eqnarray} \rho ({{\bf{r}}},-\omega) = -\frac{1}{\pi}~ Im~ {\rm Tr}~ \frac{1+\tau_3}{2}~G ({\bf{r}},{\bf{r}}; -\omega+i\delta )=\nonumber\\ = \frac{1}{\pi}~ Im~ {\rm Tr}~ \frac{1+\tau_3}{2}~C~G^T ({\bf{r}},{\bf{r}}; \omega-i\delta)~C=\nonumber\\ = \frac{1}{\pi}~ Im~ {\rm Tr}~ \frac{1-\tau_3}{2}~G ({\bf{r}},{\bf{r}}; \omega-i\delta). \end{eqnarray} Finally, we obtain \begin{eqnarray} \rho ({{\bf{r}}},\omega) \pm \rho ({{\bf{r}}},-\omega) = \frac{1}{\pi}~ Im~ {\rm Tr}~ \left[\frac{1+\tau_3}{2}~ G ({\bf{r}},{\bf{r}}; \omega-i\delta )\pm \frac{1-\tau_3}{2}~G ({\bf{r}},{\bf{r}}; \omega-i\delta)\right] =\nonumber\\ = \frac{1}{\pi}~ Im~ {\rm Tr}~ \left[\left\{\begin{array}{c} 1\cr \tau_3 \end{array} \right\} G ({\bf{r}},{\bf{r}}; \omega-i\delta )\right]. \end{eqnarray} \subsection{Coherence factors in a BCS superconductor, T-matrix approximation} Next, applying this result to a BCS superconductor, we show that in the T-matrix approximation the coherence factors that arise in the conductance ratio $Z({\bf{q}},V)$ are given by the product of the coherence factors associated with the charge operator and the scattering potential. The T-matrix approximation \cite{Balatsky, Hirschfeld-86} allows one to compute the Green's function in the presence of multiple scattering off impurities. In terms of the bare Green's function $G({\bf{k}},\omega)$ and the impurity t-matrix ${\hat t}({\bf{k}},{\bf{k}}')$, the full Green's function is given by \begin{eqnarray}\label{GT} {\tilde G}({\bf{k}},{\bf{k}}',\omega)=G({\bf{k}},\omega)+ G({\bf{k}},\omega){\hat t}({\bf{k}},{\bf{k}}')G({\bf{k}}',\omega)= G({\bf{k}},\omega)+ \delta G({\bf{k}},{\bf{k}}',\omega). \end{eqnarray} Using this expression, we obtain for the Fourier transformed odd fluctuations in the tunneling density of states \begin{eqnarray}\label{deltarhooddT} \delta \rho^{odd}({\bf{q}},\omega)=\frac{1}{2\pi } {\rm Im} \int_{{\bf{k}}} {\rm Tr}\Bigl[\tau_3 \delta G_{{\bf{k}}_{+},{\bf{k}}_{-}} (\omega-i\delta ) \Bigr]=\nonumber\\= \frac{1}{2\pi} {\rm Im} \int_{{\bf{k}}} {\rm Tr}\Bigl[\tau_3 G_{{\bf{k}}_{-}} (\omega-i\delta )\hat{t}({\bf{q}},{\bf{k}}) G_{{\bf{k}}_{+}} (\omega-i\delta ) \Bigr].
\end{eqnarray} The Fourier transformed even fluctuations in the tunneling density of states are given by \begin{eqnarray}\label{deltarhoevenT} \delta \rho^{even}({\bf{q}},\omega)=\frac{1}{2\pi } {\rm Im} \int_{{\bf{k}}} {\rm Tr}\Bigl[ \delta G_{{\bf{k}}_{+},{\bf{k}}_{-}} (\omega-i\delta ) \Bigr]=\nonumber\\= \frac{1}{2\pi} {\rm Im} \int_{{\bf{k}}} {\rm Tr}\Bigl[ G_{{\bf{k}}_{-}} (\omega-i\delta )\hat{t}({\bf{q}},{\bf{k}}) G_{{\bf{k}}_{+}} (\omega-i\delta ) \Bigr]. \end{eqnarray} For scattering off a single impurity with a scattering potential ${\hat U}({\bf{k}},{\bf{k}}')$, the t-matrix ${\hat t}({\bf{k}},{\bf{k}}')$ denotes the infinite sum \begin{eqnarray}\label{t-matrix} {\hat t}({\bf{k}},{\bf{k}}')={\hat U}({\bf{k}},{\bf{k}}')+\sum_{{\bf{k}}''}{\hat U}({\bf{k}},{\bf{k}}'') G({\bf{k}}'',\omega){\hat U}({\bf{k}}'',{\bf{k}}')+...=\nonumber\\ ={\hat U}({\bf{k}},{\bf{k}}')+\sum_{{\bf{k}}''}{\hat U}({\bf{k}},{\bf{k}}'') G({\bf{k}}'',\omega){\hat t}({\bf{k}}'',{\bf{k}}'). \end{eqnarray} Working in the Born approximation, which amounts to keeping only the first term in the series (\ref{t-matrix}), we derive the expressions for the coherence factors associated with some common scattering processes that arise in the even and odd density-density correlators $R^{even}({\bf{q}},V)$ and $R^{odd}({\bf{q}},V)$ in a BCS superconductor (see Table 1). We use the following expression for the BCS Green's function for an electron with a normal-state dispersion $\epsilon_{{\bf{k}} }$ and a gap function $\Delta_{{\bf{k}} }$: \begin{eqnarray}\label{GBCS} G_{{\bf{k}} } (\omega)= [\omega -\epsilon_{{\bf{k}} }\tau_{3}-\Delta_{{\bf{k}} }\tau_{1}]^{-1}. \end{eqnarray} Here $\hat{t}({\bf{q}},{\bf{k}})$ is the scattering t-matrix of the impurity potential, and ${\bf{k}}_{\pm }= {\bf{k}} \pm {\bf{q}} /2$. If the scattering potential has the t-matrix $\hat{t}({\bf{q}},{\bf{k}})=T_3({\bf{q}})~\tau_3$, corresponding to a weak scalar (charge) scatterer, the change in the odd part of the Fourier transformed tunneling density of states becomes $\delta \rho^{odd}_{scalar}({\bf{q}} ,\omega)= T_3({\bf{q}})~\Lambda^{odd}_{scalar} ({\bf{q}} ,\omega)$ with \begin{eqnarray}\label{Lambdascodd} \Lambda^{odd}_{scalar} ({\bf{q}} ,\omega)= \frac{1}{2\pi }{\rm Im}\int_k~ \Bigl[ \frac{z^2+\epsilon_{{\bf{k}}_+}\epsilon_{{\bf{k}}_-}-\Delta_{{\bf{k}}_+}\Delta_{{\bf{k}}_-}} {(z^2-E_{{\bf{k}}_+}^2) (z^2-E_{{\bf{k}}_-}^2)}\Bigr]_{z=\omega-i\delta }, \end{eqnarray} where $E_{{\bf{k}}}=[\epsilon_{{\bf{k}}}^2+\Delta_{{\bf{k}}}^2]^{\frac{1}{2}}$ is the quasiparticle energy. Expressed in terms of the Bogoliubov coefficients $u_{\bf{k}}$ and $v_{\bf{k}}$, given by $u^2_k(v^2_k)=\frac{1}{2}(1\pm\epsilon_k/E_k)$, the expression under the integral in (\ref{Lambdascodd}) is proportional to $(u_+u_--v_+v_-)^2$. Fluctuations in the even part of the Fourier transformed tunneling density of states due to scattering off a scalar impurity are substantially smaller, $R^{even}_{scalar}({\bf{q}} ,\omega)\ll R^{odd}_{scalar}({\bf{q}} ,\omega)$, where $R^{even(odd)}_{scalar}({\bf{q}} ,\omega)$ is defined by (\ref{correven-odd}). Here $\delta \rho^{even}_{scalar}({\bf{q}} ,\omega)= T_3({\bf{q}})~\Lambda^{even}_{scalar} ({\bf{q}} ,\omega)$ with \begin{eqnarray}\label{Lambdasceven} \Lambda^{even}_{scalar} ({\bf{q}} ,\omega)= \frac{1}{2\pi }{\rm Im}\int_k~ \Bigl[ \frac{z(\epsilon_{{\bf{k}}_+}+\epsilon_{{\bf{k}}_-})} {(z^2-E_{{\bf{k}}_+}^2) (z^2-E_{{\bf{k}}_-}^2)}\Bigr]_{z=\omega-i\delta }.
\end{eqnarray} Expressed in terms of the Bogoliubov coefficients $u_{\bf{k}}$ and $v_{\bf{k}}$, the expression under the integral in (\ref{Lambdasceven}) is proportional to $(u_+u_-+v_+v_-)(u_+u_--v_+v_-)$, and is therefore small for the nodal quasiparticles involved, $|\Lambda^{even}_{scalar}({\bf{q}} ,\omega)|\ll |\Lambda^{odd}_{scalar}({\bf{q}} ,\omega)|$. Thus, scattering off a weak scalar impurity contributes predominantly to the odd-parity fluctuations in the density of states, $R^{odd}({\bf{q}},V)$. As a second example, consider scattering off a pair-breaking ``Andreev'' scatterer with the t-matrix $\hat{t}({\bf{q}},{\bf{k}})=T_1({\bf{q}},{\bf{k}})~\tau_1$. Here the changes in the even and odd parts of the Fourier transformed tunneling density of states are $\delta \rho_{A}^{even(odd)}({\bf{q}} ,\omega)= \Lambda_{A}^{even(odd)} ({\bf{q}} ,\omega)$ with \begin{eqnarray}\label{LambdaAeven} \Lambda_{A} ^{even}({\bf{q}} ,\omega)= \frac{1}{2\pi }{\rm Im}\int_k~T_1({\bf{q}},{\bf{k}})~ \Bigl[ \frac{z(\Delta_{{\bf{k}}_+}+\Delta_{{\bf{k}}_-})} {(z^2-E_{{\bf{k}}_+}^2) (z^2-E_{{\bf{k}}_-}^2)}\Bigr]_{z=\omega-i\delta },\\ \label{LambdaAodd} \Lambda_{A} ^{odd}({\bf{q}} ,\omega)= \frac{1}{2\pi }{\rm Im}\int_k~T_1({\bf{q}},{\bf{k}})~ \Bigl[ \frac{\epsilon_{{\bf{k}}_+}\Delta_{{\bf{k}}_-}+\epsilon_{{\bf{k}}_-}\Delta_{{\bf{k}}_+}} {(z^2-E_{{\bf{k}}_+}^2) (z^2-E_{{\bf{k}}_-}^2)}\Bigr]_{z=\omega-i\delta }. \end{eqnarray} In terms of the Bogoliubov coefficients $u_{\bf{k}}$ and $v_{\bf{k}}$, the expressions in square brackets in $\Lambda_{A} ^{even}({\bf{q}} ,\omega)$ and $\Lambda_{A} ^{odd}({\bf{q}} ,\omega)$ are proportional to $(u_+u_-+v_+v_-)(u_+v_-+v_+u_-)$ and $(u_+u_--v_+v_-)(u_+v_-+v_+u_-)$, respectively. For the nodal quasiparticles involved, the latter expression is substantially smaller than the former, $|\Lambda^{odd}_{A}({\bf{q}} ,\omega)|\ll |\Lambda^{even}_{A}({\bf{q}} ,\omega)|$. Thus, scattering off an Andreev scatterer gives rise to mainly even-parity fluctuations in the density of states, $R^{even}({\bf{q}},V)$. We summarize the coherence factors arising in $R^{even}({\bf{q}},V)$ and $R^{odd}({\bf{q}},V)$ for some common scatterers in Table 1. The dominant contribution for a particular type of scatterer is given in bold. \begin{table}[h!b!p!] \caption{Coherence factors $C({\bf{q}})$ in $R^{even}({\bf{q}},V)$ and $R^{odd}({\bf{q}},V)$ for some common scatterers. } \begin{tabular}{llllll} \hline T-matrix~ ~& Scatterer & $C({\bf{q}})$ in $R^{even}({\bf{q}},V)$ & $C({\bf{q}})$ in $R^{odd}({\bf{q}},V)$ &Enhanced $q_i$ &Enhances ``++''?\\ \hline ${\bf \tau_3}$~ & Weak Scalar~& $(uu'+vv')(uu'-vv')$ ~& ${\bf (uu'-vv')^2}$~& 2,3,6,7 ~&No\\ ${\bf \sigma \cdot m}$~ & Weak Magnetic~& 0 ~& 0 ~& None ~& No\\ i sgn~$\omega$~${\bf \hat {1}}$~ & Resonant ~& ${\bf (uu'+vv')^2} $ ~& $(uu'+vv')(uu'-vv')$~& 1,4,5 ~& Yes\\ ${\bf \tau_1}$~ & Andreev ~& $ {\bf (uu'+vv')(uv'+vu')}$~& $(uu'-vv')(uv'+vu')$ ~&1,4,5 ~& Yes\\ \hline \end{tabular} \label{table1} \end{table} \noindent From Table 1, we see that the odd correlator $R^{odd}({\bf{q}},V)$ is determined by a product of the coherence factors associated with the charge operator and the scattering potential, while the even correlator $R^{even}({\bf{q}},V)$ is determined by a product of the coherence factors associated with the unit operator and the scattering potential.
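The dominance pattern in Table 1 can be checked numerically. The sketch below is an illustration only: the dispersion and gap are again those of the numerical section below, and the three test momenta are assumed points close to the Fermi surface near a gap node. It evaluates the coherence factor combinations for one sign-reversing and one sign-preserving momentum transfer:
\begin{verbatim}
import numpy as np

# Coherence factor combinations of Table 1 for two scattering events
# (illustrative; parameters and test momenta are assumptions).
t, tp, tpp, mu, D0 = 1.0, -0.227, 0.168, 0.486, 0.2

def uv(kx, ky):
    e = (-2*t*(np.cos(kx) + np.cos(ky)) - 4*tp*np.cos(kx)*np.cos(ky)
         - 2*tpp*(np.cos(2*kx) + np.cos(2*ky)) + mu)
    d = 0.5*D0*(np.cos(kx) - np.cos(ky))
    E = np.hypot(e, d)
    return np.sqrt(0.5*(1 + e/E)), np.sign(d)*np.sqrt(0.5*(1 - e/E))

def factors(k1, k2):
    (u1, v1), (u2, v2) = uv(*k1), uv(*k2)
    return {'scalar, odd ch.':   (u1*u2 - v1*v2)**2,
            'resonant, even ch.': (u1*u2 + v1*v2)**2,
            'Andreev, even ch.':  (u1*u2 + v1*v2)*(u1*v2 + v1*u2)}

ka = (0.45*np.pi, 0.37*np.pi)    # near-nodal Fermi-surface point, Delta < 0
kb = (0.37*np.pi, 0.45*np.pi)    # opposite gap sign (sign-reversing pair)
kc = (-0.45*np.pi, 0.37*np.pi)   # same gap sign (sign-preserving pair)

print('sign-reversing: ', factors(ka, kb))   # scalar factor dominates
print('sign-preserving:', factors(ka, kc))   # resonant/Andreev dominate
\end{verbatim}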
\subsection{Conductance ratio as a measure of the LDOS} An STM experiment measures the differential tunneling conductance $\frac{dI}{dV}({\bf{r}},V)$ at a location ${\bf{r}}$ and voltage $V$ \cite{NewReview}. In a simplified model of the tunneling, \begin{equation} \label{sigma} \frac{dI}{dV}({\bf{r}},V) \propto \int_{-eV}^0 d\omega [-f'(\omega-eV)]\int d{\bf{r}}_1d{\bf{r}}_2 M({\bf{r}}_1,{\bf{r}})M^*({\bf{r}}_2,{\bf{r}})A({\bf{r}}_2,{\bf{r}}_1,\omega), \end{equation} where $A({\bf{r}}_2,{\bf{r}}_1,\omega)=\frac{1}{\pi}Im~ G({\bf{r}}_2,{\bf{r}}_1,\omega-i\delta)$ is the single-electron spectral function and $f(\omega)$ is the Fermi function. Here ${\bf{r}}_1$, ${\bf{r}}_2$ and ${\bf{r}}$ are the two-dimensional coordinates of the incoming and outgoing electrons, and the position of the tip, respectively. $M({\bf{r}}_1,{\bf{r}})$ is the spatially dependent tunneling matrix element, which includes contributions of the sample wave function around the tip. Assuming that the tunneling matrix element is local, we write $M({\bf{r}}_1,{\bf{r}})=M({\bf{r}})\delta^{(2)}({\bf{r}}_1-{\bf{r}})$, where $M({\bf{r}})$ is a smooth function of position ${\bf{r}}$. In the low-temperature limit $T\rightarrow 0$, the derivative of the Fermi function is replaced by a delta-function, $-f'(\omega-eV)=\delta(\omega-eV)$. With these simplifications, we obtain \begin{equation} \label{sigmaLoc} \frac{dI}{dV}(r,V)\propto |M({\bf{r}})|^2\rho({\bf{r}},V), \end{equation} where $\rho({\bf{r}},V)=A({\bf{r}},{\bf{r}},V)$ is the single-particle density of states. In the WKB approach the tunneling matrix element is given by $|M({\bf{r}})|^2=e ^{-2\gamma({\bf{r}})}$ with $\gamma({\bf{r}})= \int_0^{s({\bf{r}})} dx\sqrt{\frac{2m\psi({\bf{r}})}{\hbar^2}} =\frac{s({\bf{r}})}{\hbar}\sqrt{2m\psi({\bf{r}})}$, where $s({\bf{r}})$ is the barrier width (tip-sample separation), $\psi({\bf{r}})$ is the barrier height, which is a mixture of the work functions of the tip and the sample, and $m$ is the electron mass \cite{NewReview,HoffmanThesis}. Thus, the tunneling conductance is a measure of the thermally smeared local density of states (LDOS) of the sample at the position of the tip. To filter out the spatial variations in the tunneling matrix elements $M({\bf{r}})$, originating from local variations in the barrier height $\psi$ and the tip-sample separation $s$, the conductance ratio is taken: \begin{equation} \label{Z} Z(r,V)=\frac{\frac{dI}{dV}(r,+V)}{\frac{dI}{dV}(r,-V)}= \frac {\rho (r,+V)}{\rho (r,-V)}= \frac { \rho_0(+V)+\delta\rho(r,+V) }{\rho_0(-V)+\delta\rho(r,-V) }. \end{equation} For small fluctuations of the local density of states, $\delta\rho(r,\pm V)\ll\rho_0(\pm V)$, $Z(r,V)$ is given by a linear combination of the positive and negative energy components of the tunneling density of states, \begin{equation} \label{Zexpanded} Z(r,V)\simeq Z_0 (V)~\Bigl[1+ \frac { \delta\rho(r,+V) }{\rho_0(+V) }- \frac { \delta\rho(r,-V) }{\rho_0(-V) }\Bigr] \end{equation} with $Z_0 (V)\equiv \frac{\rho_0(+V) }{\rho_0(-V) }$. The Fourier transform of this quantity contains a single delta function term at ${\bf{q}}=0$ plus a diffuse background, \begin{equation} \label{Z(q,V)} Z({\bf{q}},V)= Z_0 (V)(2\pi)^2 \delta^2({\bf{q}})+Z_0 (V)~\Bigl[ \frac { \delta\rho({\bf{q}},+V) }{\rho_0(+V) }- \frac { \delta\rho({\bf{q}},-V) }{\rho_0(-V) }\Bigr]. \end{equation} Interference patterns produced by quasiparticle scattering off impurities are observed in the diffuse background described by the second term. Clearly, linear response theory is only valid when the fluctuations in the local density of states are small compared with its average value, $\overline{\delta\rho(r,\pm V)^{2}}\ll\rho_0(\pm V)^{2}$.
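The cancellation of the tunneling matrix elements in (\ref{Z}) is easily demonstrated with synthetic data. In the minimal sketch below (all input maps are made up for illustration), a common random factor $|M({\bf{r}})|^2$ multiplies the assumed LDOS maps at $\pm V$ and drops out of the ratio exactly:
\begin{verbatim}
import numpy as np

# Synthetic demonstration that Z(r,V) filters out |M(r)|^2 (Eq. (Z)).
rng = np.random.default_rng(0)
N = 64
M2 = np.exp(rng.normal(0.0, 0.3, (N, N)))   # |M(r)|^2: barrier variations
rho_p = 1.0 + 0.05*rng.normal(size=(N, N))  # rho(r,+V), small fluctuations
rho_m = 0.8 + 0.05*rng.normal(size=(N, N))  # rho(r,-V)

g_p, g_m = M2*rho_p, M2*rho_m               # dI/dV at +V and -V
Z = g_p/g_m                                 # matrix elements cancel here

print(np.allclose(Z, rho_p/rho_m))          # True: Z depends on LDOS only
Zq = np.fft.fft2(Z - Z.mean())              # diffuse part of Z(q,V), q != 0
\end{verbatim}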
In the clean limit, the condition $\overline{\delta\rho(r,\pm V)^{2}}\ll\rho_0(\pm V)^{2}$ is satisfied at finite and sufficiently large bias voltages $|V|>0$. At zero bias voltage $V\rightarrow 0$, however, the fluctuations in the local density of states become larger than the vanishing density of states in the clean limit, $|\delta\rho(r,\pm V)|>\rho_0(\pm V)$, and linear response theory can no longer be applied. At finite bias voltages, $|V|>0$, fluctuations in the conductance ratio $Z({\bf{q}},V)$ are given by a sum of two terms, even and odd in the bias voltage: \begin{equation} \label{Z-even-odd} Z({\bf{q}},V)|_{{\bf{q}}\neq 0}=Z_0 (V)~\Bigl[ \delta\rho^{even}({\bf{q}},V)(\frac {1}{\rho_0(+V) }-\frac {1}{\rho_0(-V) })+ \delta\rho^{odd}({\bf{q}},V)(\frac {1}{\rho_0(+V) }+\frac {1}{\rho_0(-V) })\Bigr], \end{equation} where $\delta\rho^{even(odd)}({\bf{q}},V)\equiv ( \delta\rho({\bf{q}},+V) \pm \delta\rho({\bf{q}},-V) )/2$. Depending on the particle-hole symmetry properties of the sample-averaged tunneling density of states $\rho_0(V)$, one of these terms can dominate. For example, if, at the bias voltages used, the sample-averaged tunneling density of states $\rho_0(V)$ is approximately particle-hole symmetric, $\rho_0(-V)\approx\rho_0(+V) =\rho_0(V)$, then $Z({\bf{q}},V)$ is dominated by the part of the LDOS fluctuations that is odd in the bias voltage $V$, \begin{equation} \label{Zphsym} Z({\bf{q}},V)|_{{\bf{q}}\neq 0}\simeq Z_0 (V)~\frac {2}{\rho_0(V) } \delta\rho^{odd}({\bf{q}},V). \end{equation} In general, when we average over the impurity positions, the Fourier transformed fluctuations in the tunneling density of states, $\delta \rho ({{\bf{q}}},V)$, vanish. However, the variance of the density of states fluctuations is non-zero and is given by the correlator \begin{eqnarray}\label{corr} R({\bf{q}},V)&=&\overline{\delta \rho ({{\bf{q}}},V)\delta \rho^*({-{\bf{q}}},V)}. \end{eqnarray} Defining \begin{eqnarray} \label{correven-odd} R^{even(odd)}({\bf{q}},V)&=&\overline{\delta \rho^{even(odd)}({{\bf{q}}},V)\delta \rho^{*even(odd)} ({-{\bf{q}}},V)}, \end{eqnarray} we obtain that for ${\bf{q}}\neq 0$ \begin{equation} \label{Zcorrodd} |Z({\bf{q}},V)|^2=\frac{4|Z_0(V)|^2}{\rho_0^2(V)}R^{odd}({\bf{q}},V). \end{equation} \subsection{Observation of coherence factor effects in QPI: coherence factors and the octet model} In high-$T_c$ cuprates the quasiparticle interference (QPI) patterns, observed in the Fourier transformed tunneling conductance $dI/dV({\bf{q}},V)\propto \rho({\bf{q}},V)$, are dominated by a small set of wavevectors $q_{1-7}$ connecting the ends of the banana-shaped constant energy contours \cite{Hoffman, Howald,DHLee}. This observation has been explained by the so-called ``octet'' model, which suggests that the interference patterns are produced by elastic scattering off random disorder between the regions of the Brillouin zone with the largest density of states, so that the scattering between the ends of the banana-shaped constant energy contours, where the joint density of states is sharply peaked, gives the dominant contribution to the quasiparticle interference patterns. In essence, the octet model assumes that the fluctuations in the Fourier transformed tunneling density of states are given by the following convolution: \[ \delta\rho({\bf{q}},\omega)\propto\int_{\bf{k}}\rho({\bf{k}}_+,\omega)\rho({\bf{k}}_-,\omega).
\] While this assumption allows for a qualitative description, it is technically incorrect \cite{Pereg-Barnea-Franz,Scalapino}: the correct expression for the change in the density of states involves the imaginary part of a product of Green's functions, rather than the product of the imaginary parts of the Green's functions written above. In this section, we show that the fluctuations in the conductance ratio at wavevector ${\bf{q}}$, given by $Z({\bf{q}},V)$, are nevertheless related to the joint density of states via a Kramers-Kronig transformation, so that the spectra of the conductance ratio $Z({\bf{q}},V)$ can still be analyzed using the octet model. As we have discussed, fluctuations in the density of states $\delta\rho({\bf{q}},V)$ are determined by scattering off impurity potentials and have the basic form (\ref{deltarhooddT}). This quantity involves the imaginary part of a product of two Green's functions and, as it stands, it is not proportional to the joint density of states. However, we can relate the two quantities by a Kramers-Kronig transformation, as we now show. We write the Green's function as \begin{eqnarray} G_{\bf{k}}(E-i\delta)=\int\frac{d\omega}{\pi}\frac{1}{E-\omega-i\delta} G''_{\bf{k}}(\omega-i\delta), \end{eqnarray} where $G''_{\bf{k}}(\omega-i\delta)=\frac{1}{2i}(G_{\bf{k}}(\omega-i\delta)- G_{\bf{k}}(\omega+i\delta))$. Substituting this form in (\ref{deltarhooddT}), we obtain \begin{eqnarray}\label{deltarhooddTJ} &\delta \rho^{odd}({\bf{q}},E)= &\frac{1}{2\pi^2} \int dE'~\Bigl[\frac{1}{E-E'}\sum_{\bf{k}} {\rm Tr}\bigl[\tau_3 G''_{{\bf{k}}-}(E){\hat t}({\bf{q}},{\bf{k}}) G''_{{\bf{k}}+}(E')\bigr]-[E\leftrightarrow E']\Bigr]. \end{eqnarray} Introducing the joint density of states, \begin{eqnarray}\label{JDOSgen} &J({\bf{q}},E,E') =\frac{1}{\pi^2}\sum_{\bf{k}} Tr[\tau_3 G''_{{\bf{k}}-}(E){\hat t}({\bf{q}},{\bf{k}}) G''_{{\bf{k}}+}(E')], \end{eqnarray} we can write (\ref{deltarhooddTJ}) as \begin{eqnarray}\label{deltarhooddTJgen} &\delta \rho^{odd}({\bf{q}},E)= &\frac{1}{2} \int dE'~\frac{1}{E-E'}[J({\bf{q}},E,E')+J({\bf{q}},E',E)]. \end{eqnarray} The Fourier transformed conductance ratio $Z({\bf{q}},E)$ given by (\ref{Zphsym}) now becomes (for ${\bf{q}}\neq 0$) \begin{eqnarray}\label{Z2} &Z({\bf{q}},E) = \frac{1}{\rho_0(E)}\int dE'~\frac{1}{E-E'}[J({\bf{q}},E,E')+J({\bf{q}},E',E)]. \end{eqnarray} Substituting the expression for the BCS Green's function (\ref{GBCS}) in (\ref{JDOSgen}), we obtain \begin{eqnarray}\label{JDOSBCS} J({\bf{q}},E,E') =&\frac{1}{4}\sum_{\bf{k}} \frac{1}{E_{{\bf{k}}+}E_{{\bf{k}}-}} Tr[\tau_3 (E+\epsilon_{{\bf{k}}-}\tau_3+\Delta_{{\bf{k}}-}\tau_1){\hat t}({\bf{q}},{\bf{k}}) (E'+\epsilon_{{\bf{k}}+}\tau_3+\Delta_{{\bf{k}}+}\tau_1)]\nonumber\\ &\times [\delta(E-E_{{\bf{k}}-})- \delta(E+E_{{\bf{k}}-})] [\delta(E'-E_{{\bf{k}}+})- \delta(E'+E_{{\bf{k}}+})] \cdot {\rm sgn}\, E\cdot {\rm sgn}\, E', \end{eqnarray} where $E_{{\bf{k}}_\pm}\equiv\sqrt{\epsilon^2_{{\bf{k}}_\pm}+\Delta^2_{{\bf{k}}_\pm}}$.
Provided both energies are positive, $E,E'>0$, we obtain \begin{eqnarray}\label{JDOS} &J({\bf{q}},E,E') =\sum_{{\bf{k}}_1,{\bf{k}}_2} C({\bf{k}}_1,{\bf{k}}_2)\delta(E- E_{{\bf{k}}_1})\delta(E'-E_{{\bf{k}}_2})\delta^{(2)} ({\bf{k}}_1-{\bf{k}}_2-{\bf{q}}), \end{eqnarray} where the coherence factor is \begin{eqnarray}\label{C} C({\bf{k}}_1,{\bf{k}}_2)\equiv \frac{1}{4} \frac{1}{E_{{\bf{k}}_1}E_{{\bf{k}}_2}} Tr[\tau_3 (E+\epsilon_{{\bf{k}}_1}\tau_3+\Delta_{{\bf{k}}_1}\tau_1){\hat t}({\bf{k}}_1,{\bf{k}}_2) (E'+\epsilon_{{\bf{k}}_2}\tau_3+\Delta_{{\bf{k}}_2}\tau_1)]. \end{eqnarray} The fluctuations in the conductance ratio at wavevector ${\bf{q}}$ are now given by \begin{eqnarray}\label{Z-JDOS} Z({\bf{q}},E)|_{{{\bf{q}}\neq 0}}\propto \int \frac{dE'}{E-E'}\int d{\bf{k}}_1d{\bf{k}}_2C({\bf{k}}_1,{\bf{k}}_2) \delta(E- E_{{\bf{k}}_1})\delta(E'-E_{{\bf{k}}_2})\delta^{(2)} ({\bf{k}}_1-{\bf{k}}_2-{\bf{q}}). \end{eqnarray} Thus, the fluctuations in the conductance ratio $Z({\bf{q}},E)$ are determined by a Kramers-Kronig transform of the joint density of states with a well-defined coherence factor. Conventionally, coherence factors appear in dissipative responses, such as (\ref{JDOS}). The appearance of a Kramers-Kronig transform reflects the fact that the tunneling conductance is determined by the non-dissipative component of the scattering. The validity of the octet model depends on the presence of sharp peaks in the joint density of states. We now argue that if the joint density of states contains sharp peaks at well-defined points in momentum space, then these peaks survive the Kramers-Kronig procedure, so that they still appear in the conductance ratio $Z({\bf{q}},E)$ with a non-Lorentzian profile, but with precisely the same coherence factors. We can illustrate this point both numerically and analytically. Fig. 1 contrasts the joint density of states with the Fourier transformed conductance ratio $Z({\bf{q}},E)$ for scattering off a weak scalar impurity, showing the appearance of the ``octet'' scattering wavevectors in both plots. Similar comparisons have been made by earlier authors \cite{Pereg-Barnea-Franz,Scalapino}. Let us now repeat this analysis analytically. Suppose $J({\bf{q}},E_1,E_2)$ (\ref{JDOS}) has a sharp peak at an octet ${\bf{q}}$ vector, ${\bf{q}}={\bf{q}}_i$ ($i=1-7$), defined by the delta function $J({\bf{q}},E_1=E,E_2=E)=C_i\delta^{(2)}({\bf{q}}-{\bf{q}}_i)$, where $C_i$ is the energy-dependent coherence factor for the $i$th octet scattering process. When we vary the energy $E_2$ away from $E$, the position of the characteristic octet vector will drift according to \begin{eqnarray} &{\bf{q}}_i(E_1,E_2)={\bf{q}}_i(E)-\nabla_{E_1}{\bf{q}}_i(E_1-E)+\nabla_{E_2}{\bf{q}}_i(E_2-E), \end{eqnarray} where $\nabla_{E_1}{\bf{q}}_i=\frac{ 1}{v_\Delta}\hat{{\bf n}}_1 (i) $ and $\nabla_{E_2}{\bf{q}}_i=\frac{1}{v_\Delta}\hat{{\bf n}}_2 (i)$ are directed along the initial and final quasiparticle velocities, and $v_\Delta$ is the quasiparticle group velocity.
Carrying out the integral over $E'$ in (\ref{Z-JDOS}) we now obtain \begin{eqnarray}\label{Z-exp} Z({\bf{q}},E)&\propto& \int dE'~\frac{C_i}{E-E'}\Bigl[ \delta({\bf{q}}-{\bf{q}}_i(E)-\frac{\hat{n_2}}{v_\Delta}(E'-E))+ \delta ({\bf{q}}-{\bf{q}}_i(E)+\frac{\hat{n_1}}{v_\Delta}(E'-E))\Bigr]\nonumber \\ &=&C_i\Bigl[ \frac{1}{({\bf{q}}-{\bf{q}}_i)_{\| 1}}\delta\bigl(({\bf{q}}-{\bf{q}}_i)_{\perp 1} \bigr )- \frac{1}{({\bf{q}}-{\bf{q}}_i)_{\| 2}}\delta\bigl (({\bf{q}}-{\bf{q}}_i)_{\perp 2} \bigr )\Bigr], \end{eqnarray} where \[ ({\bf{q}} -{\bf{q}}_{i})_{\| 1,2} = ({\bf{q}} -{\bf{q}}_{i})\cdot \hat {\bf n}_{1,2} (i) \] denotes the component of $({\bf{q}} -{\bf{q}}_{i})$ parallel to the initial/final quasiparticle velocity and \[ ({\bf{q}} -{\bf{q}}_{i})_{\perp 1,2} = ({\bf{q}} -{\bf{q}}_{i})\cdot[{\hat {\bf z}} \times \hat {\bf n}_{1,2} (i)] \] denotes the component of $({\bf{q}} -{\bf{q}}_{i})$ {\sl perpendicular} to the initial/final quasiparticle velocity, where $\hat {\bf{z}}$ is the normal to the plane. Thus, a single sharp peak in the joint density of states produces an enhanced dipolar distribution in the conductance ratio $Z({\bf{q}},E)$, with the axes of the dipoles aligned along the directions of the initial and final quasiparticle velocities. The above analysis can be further refined by considering a Lorentzian broadening of the quasiparticle interference peaks, with the same qualitative conclusions. To summarize, the conductance ratio $Z({\bf{q}},E)$ is a spectral probe for fluctuations in the quasiparticle charge density in response to disorder. $Z({\bf{q}},E)$ is characterized by the joint coherence factors of the charge ($\tau_3$) and the scattering potential. Provided the original joint density of states is sharply peaked at the octet vectors ${\bf{q}}_i,~i=1-7$, the conductance ratio $Z({\bf{q}},E)$ is also peaked at the octet vectors ${\bf{q}}_i,~i=1-7$. \begin{figure} \centering \subfigure[] { \label{jdos} \includegraphics[width=0.45\linewidth]{0315-figs/0315-JDOS.eps} } \hspace{.3in} \subfigure[] { \label{sca} \includegraphics[width=0.45\linewidth]{0315-figs/0315-odd-scalar.eps} } \caption{(Color online) Observation of coherence factor effects in the squared joint density of states $|J({\bf{q}},V,V)|^2$ and in the squared Fourier transformed conductance ratio $|Z({\bf{q}},V)|^2$. Fig. (a) shows the squared joint density of states $|J({\bf{q}},V,V)|^2$ at the bias voltage $V=\Delta_0/2$; Fig. (b) shows the squared Fourier transformed conductance ratio $|Z({\bf{q}},V)|^2$ produced by a weak scalar scattering potential ${\hat t}({\bf{q}})={\hat\tau_3} $. Red lines label the positions of the sign-reversing q-vectors $q= q_{2,3,6,7}$, where weak scalar scattering is peaked. Blue lines label the positions of the sign-preserving q-vectors $q= q_{1,4,5}$, where weak scalar scattering is minimal.} \label{J-scalar} \end{figure} \section{Model for quasiparticle interference in a vortex lattice} Next, we discuss the recent experiments by Hanaguri et al.\cite{Hanaguri} on the underdoped cuprate superconductor calcium oxychloride, $Ca_{2-x}Na_xCuO_2Cl_2$ (Na-CCOC), which successfully observed coherence factor effects with Fourier Transform Scanning Tunneling Spectroscopy (FT-STS) in a magnetic field. The main observations are: \begin{itemize} \item {\bf A selective enhancement of sign-preserving and depression of sign-reversing scattering events.}
In a field, Hanaguri et al.\cite{Hanaguri} observe a selective enhancement of the scattering events between parts of the Brillouin zone with the same gap sign, and a selective depression of the scattering events between parts of the Brillouin zone with opposite gap signs, so that the sign-preserving q-vectors ${\bf{q}}_{1,4,5}$ are enhanced, and the sign-reversing q-vectors ${\bf{q}}_{2,3,6,7}$ are depressed. \item {\bf Large vortex cores} with a core size $\xi\sim 10a$ of order ten lattice constants. Experimentally, vortex cores are imaged as regions of shallow gap \cite{Hanaguri}. The figure $\xi\sim 10a$ is consistent with magnetization and angle-resolved photoemission (ARPES) measurements \cite{largecores}. \item {\bf High momentum transfer scattering} involving momentum transfer over a large fraction of the Brillouin zone size at $q_{4,5}\sim k_F$. A paradoxical feature of the observations is the enhancement of high momentum transfer $q\sim \pi/a$ scattering by objects that are of order ten lattice spacings in diameter. The enhanced high momentum scattering clearly reflects sub-structure on length scales much smaller than the vortex cores. \item {\bf Core-sensitivity}. Fourier mask analysis reveals that the scattering outside the vortex core regions differs qualitatively from scattering inside the vortex core regions. In particular, the enhancement of the sign-preserving scattering events is associated with the signal inside the ``vortex cores'', whereas the depression of the sign-reversing scattering events is mainly located outside the vortex regions. \end{itemize} Recently, T. Pereg-Barnea and M. Franz \cite{Pereg-Barnea-Franz2} have proposed an initial interpretation of these observations in terms of quasiparticle scattering off vortex cores. Their model explains the enhancement of the sign-preserving scattering in the magnetic field in terms of scattering off vortex cores, provided the vortex cores are small, with $\xi\sim a$, as in the high-temperature superconductor $Bi_2Sr_2CaCu_2O_{8+\delta}$ (Bi2212). However, given the large vortex core size of $Ca_{2-x}Na_xCuO_2Cl_2$, this mechanism cannot account for the field-driven enhancement of the high momentum scattering. Motivated by this observation, we have developed an alternative phenomenological model to interpret the high-momentum scattering. In our model, vortices bind to individual impurities, incorporating them into their cores and modifying their scattering potentials. This process replaces random potential scattering off the original impurities with gap-sign-preserving Andreev reflections off order parameter modulations in the vicinity of the pinned vortices. The high-momentum transfer scattering, involved in the selective enhancement and suppression, originates from the impurities whose scattering potentials are modified by the presence of the vortex lattice. Rather than attempt a detailed microscopic model for the pseudo-gap state inside the vortex cores and the impurities bound therein, our approach characterizes the scattering in terms of phenomenological form factors that can be measured and extracted from the data. \subsection{Construction of the model} In the absence of a field, random fluctuations in the tunneling density of states are produced by the original impurities.
We assume that scattering off the impurities is mutually independent, permitting us to write the change in the density of states as a sum of contributions from each impurity \begin{equation}\label{B0} \delta \rho ({{\bf{r}}},V,B=0) = \sum_j \delta \rho_{i} ({{\bf{r}}-{\bf{r}}_j},V), \end{equation} where ${\bf{r}}_{j}$ denote the positions of the impurities. If \[ n_{i}= \hbox{original concentration of impurities in the absence of magnetic field}, \] then we obtain \begin{eqnarray}\label{corrB0} R({\bf{q}},V,B=0)= &n_{i}~\overline{ {\delta \rho_{i} ({{\bf{q}}},V) \delta \rho^*_{i}({-{\bf{q}}},V)}}. \end{eqnarray} Next we consider how the quasiparticle scattering changes in the presence of a magnetic field. Pinned vortices arising in the magnetic field act as new scatterers. In the experiment \cite{Hanaguri}, vortices are pinned to the preexisting disorder, so that in the presence of a magnetic field there are essentially three types of scatterers: \begin{itemize} \item bare impurities, \item vortices, \item vortex-decorated impurities. \end{itemize} Vortex-decorated impurities are impurities lying within a coherence length of the center of a vortex core. We assume that these three types of scattering centers act as independent scatterers, so that the random variations in the tunneling density of states are given by the sum of independent contributions from each type of scattering center: \begin{equation} \delta \rho ({{\bf{r}}},V,B) = \sum_j \delta \rho_{V} ({{\bf{r}}-{\bf{r}}_j},V)+ \sum_l \delta \rho_{DI} ({{\bf{r}}-{\bf{r}}'_l},V)+ \sum_m \delta \rho_{I} ({{\bf{r}}-{\bf{r}}''_m},V), \end{equation} where ${\bf{r}}_{j},{\bf{r}}'_{l},{\bf{r}}''_{m}$ denote the positions of vortices, decorated impurities and bare impurities, respectively. In a magnetic field, the concentration of vortices is given by \[ n_{V}= \hbox{concentration of vortices}=\frac{2eB}{h}. \] In each vortex core, there will be $n_{core}= n_{i}\pi (\xi/2)^{2}$ impurities, where $\pi (\xi/2)^{2}$ is the area of a vortex core and $n_{i}$ is the original concentration of bare scattering centers in the absence of a field. The concentration of vortex-decorated impurities is then given by \[ n_{DI}= \hbox{concentration of vortex-decorated impurities}=n_{core}n_{V} =\frac{2eB}{h}n_i\pi (\xi/2)^2. \] Finally, the residual concentration of ``bare'' scattering centers is given by \begin{equation}\label{n_BI} n_{I}= n_{i} - n_{DI} = \hbox{concentration of residual ``bare'' impurities}. \end{equation} Treating the three types of scatterers as independent, we write \begin{eqnarray}\label{corrB} R({\bf{q}},V,B)= & n_{V}~ \overline{{\delta \rho_{V} ({{\bf{q}}},V)\delta \rho_{V}^{*} ({-{\bf{q}}},V)}} +n_{DI} ~\overline{ {\delta \rho_{DI}({{\bf{q}}},V)\delta \rho^{*}_{DI}({-{\bf{q}}},V)}}\nonumber\\ +&\left(n_{i}-n_{DI} \right)~\overline{ {\delta \rho_{I}({{\bf{q}}},V)\delta \rho^{*}_{I}({-{\bf{q}}},V)}}. \end{eqnarray} The first term in (\ref{corrB}) accounts for the quasiparticle scattering off the vortices, the second term accounts for the quasiparticle scattering off the vortex-decorated impurities, and the third term accounts for the quasiparticle scattering off the residual bare impurities in the presence of the superflow.
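For orientation, these concentrations are easily tabulated. The short sketch below uses the coherence length $\xi_0=44$~\AA\ and the one-impurity-per-core assumption $n_{core}=1$ adopted in the comparison with experiment in the next section, and reproduces the numbers quoted there:
\begin{verbatim}
import math

# Scatterer concentrations entering the vortex model (illustration).
PHI0 = 2.07e-15                  # flux quantum h/2e, in Wb
xi0  = 44e-10                    # coherence length, in m (Ref. [Kim])
A_V  = math.pi*(xi0/2)**2        # vortex area pi*(xi/2)^2

for B in (5.0, 11.0):            # field in tesla
    n_V   = B/PHI0               # vortex concentration 2eB/h, per m^2
    alpha = n_V*A_V              # fraction of impurities decorated
    # with n_core = n_i*A_V = 1, the impurity/vortex ratio is 1/alpha
    print(f"B = {B:4.1f} T: {100*alpha:.1f}% decorated, "
          f"n_i/n_V = {1/alpha:.0f}")
\end{verbatim}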
From (\ref{corrB}) and (\ref{Zcorrodd}) it follows that \begin{equation}\label{Zfull} |Z(q,V,B)|^{2} = \frac{2eB}{h}~|Z_{V}(q,V,B)|^{2}+ \frac{2eB}{h}n_{core}|Z_{DI}(q,V,B)|^{2}+ (n_i - \frac{2eB}{h}n_{core})~ |Z_{I}(q,V,B)|^{2}, \end{equation} where $Z(q,V,B)$ is given by (\ref{Zphsym}), averaged over the vortex configurations, and $Z_{V}(q,V,B)$, $Z_{DI}(q,V)$ and $Z_{I}(q,V)$ are the Fourier images of the Friedel oscillations in the tunneling density of states induced by the vortices, the vortex-decorated impurities and the bare impurities in the presence of the superflow. Our goal here is to model the quasiparticle scattering phenomenologically, without recourse to a specific microscopic model of the scattering in the vortex interior. To achieve this goal, we introduce $Z_{VI}(q,V,B)$, a joint conductance ratio of the vortex-impurity composite, which encompasses the scattering off a vortex core and off the impurities decorated by the vortex core, \begin{eqnarray}\label{ZVI} |Z_{VI}|^2=|Z_{V}|^2+n_{core}|Z_{DI}|^2, \end{eqnarray} so that we obtain \begin{equation}\label{Z(q,V,B)} |Z(q,V,B)|^{2} = \frac{2eB}{h}~|Z_{VI}(q,V,B)|^{2}+ (n_i - \frac{2eB}{h}n_{core}) |Z_{I}(q,V)|^{2}. \end{equation} This expression describes quasiparticle scattering in a clean superconductor in low magnetic fields in a model-agnostic way: it is valid regardless of the choice of the detailed model of quasiparticle scattering in the vortex region. $Z_{VI}(q,V,B)$ here describes the scattering off the vortex-impurity composites, which we now proceed to discuss. \subsection{Impurities inside the vortex core: calculating $Z_{VI}$} As observed in the conductance ratio $Z({\bf{q}},V,B)$, the intensity of scattering between parts of the Brillouin zone with the same sign of the gap grows in the magnetic field, which implies that the scattering potential of a vortex-impurity composite has a predominantly sign-preserving coherence factor. We now turn to a discussion of the scattering mechanisms that can enhance sign-preserving scattering inside the vortex cores. Table 1 shows a list of scattering potentials and their corresponding coherence factor effects. Weak potential scattering is immediately excluded. Weak scattering off magnetic impurities can also be excluded, since the changes in the density of states of the up and down electrons cancel. This leaves two remaining contenders: Andreev scattering off a fluctuation in the gap function, and multiple scattering, which generates a t-matrix proportional to the unit matrix. We can, in fact, envisage both scattering mechanisms being active in the vortex core. Take first the case of a resonant scattering center. In the bulk superconductor, the effects of a resonant scatterer are severely modified by the presence of the superconducting gap \cite{Balatsky}. When the same scattering center is located inside the vortex core, where the superconducting order parameter is depressed, we envisage that the resonant scattering will be enhanced. On the other hand, we cannot rule out Andreev scattering. A scalar impurity in a d-wave superconductor scatters the gapless quasiparticles, giving rise to Friedel oscillations in the order parameter that act as Andreev scattering centers \cite{Nunner,Pereg-Barnea-Franz,Pereg-Barnea-Franz2}. Without a detailed model for the nature of the vortex scattering region, we cannot say whether this type of scattering is enhanced by embedding the impurity inside the vortex.
For example, if, as some authors have suggested \cite{WignerSuperSolid}, the competing pseudo-gap phase is a Wigner supersolid, then the presence of an impurity may lead to enhanced oscillations in the superconducting order parameter inside the vortex core. With these considerations in mind, we include both sources of scattering, \begin{equation} \hat t ({\bf{q}} ,{\bf{k}} ,i\omega_n) = \hat t_{A}({\bf{q}} ,{\bf{k}} ,i\omega_n) + \hat t_{R}({\bf{q}} ,{\bf{k}} ,i\omega_n), \end{equation} where \begin{align*} \hat{t}_A({\bf{q}},{\bf{k}},i\omega_n)= \frac{1}{2}\Delta_0 f_A({\bf{q}})(\chi_{{\bf{k}}_+}+\chi_{{\bf{k}}_-})\hat{\mbox{\boldmath{$\tau$}}}_1 \qquad \qquad (\hbox{Andreev scattering}) \end{align*} describes the Andreev scattering. Here $\chi_{\bf{k}}=c_x-c_y$ is the d-wave form factor with $c_{x,y}\equiv\cos k_{x,y}$. The resonant scattering is described by \[ {\hat t}_R({\bf{q}},{\bf{k}} ,i\omega_n)=i\Delta_{0} \hbox{sgn} (\omega_n)~f_R({\bf{q}}){\bf 1}. \qquad \qquad (\hbox{Resonant scattering}) \] Using the T-matrix approximation, we obtain for the even and odd components of the Fourier transformed fluctuations in the local density of states due to scattering off the vortex-impurity composite \begin{eqnarray}\label{deltarhoeven-oddV} &\delta \rho^{even}_{VI}({\bf{q}},\omega)=\frac{1}{2\pi } &{\rm Im} \int_{{\bf{k}}} {\rm Tr}\Bigl[ {G}_{{\bf{k}}_{-}} (\omega-i\delta )\ \hat{t}({\bf{q}},{\bf{k}},\omega-i\delta ) \ { G}_{{\bf{k}}_{+}} (\omega-i\delta ) \Bigr],\\ &\delta \rho^{odd}_{VI}({\bf{q}},\omega)=\frac{1}{2\pi } &{\rm Im} \int_{{\bf{k}}} {\rm Tr}\Bigl[\tau_3 {G}_{{\bf{k}}_{-}} (\omega-i\delta )\ \hat{t}({\bf{q}},{\bf{k}},\omega-i\delta ) \ { G}_{{\bf{k}}_{+}} (\omega-i\delta ) \Bigr], \end{eqnarray} where ${\bf{k}}_{\pm }= {\bf{k}} \pm {\bf{q}} /2$ and $G_{{\bf{k}} } (\omega)= [\omega -\epsilon_{{\bf{k}} }\tau_{3}-\Delta_{{\bf{k}} }\tau_{1}]^{-1}$ is the Nambu Green's function for an electron with normal-state dispersion $\epsilon_{{\bf{k}} }$ and gap function $\Delta_{{\bf{k}} }$. We now obtain \[ \delta \rho^{even(odd)}_{VI}({\bf{q}} ,\omega)= f_A({\bf{q}})\Lambda_A^{even(odd)}({\bf{q}} ,\omega)+ f_R({\bf{q}})\Lambda_R^{even(odd)}({\bf{q}} ,\omega) \] with \begin{eqnarray}\label{Lambda-even} \Lambda^{even}_{A} ({\bf{q}} ,\omega)&=& \frac{\Delta_{0} }{4\pi }{\rm Im}\int_k~ (\chi_{{\bf{k}}_+}+\chi_{{\bf{k}}_-})~\Bigl[ \frac{z(\Delta_{{\bf{k}}_+}+\Delta_{{\bf{k}}_-})} {(z^2-E_{{\bf{k}}_+}^2) (z^2-E_{{\bf{k}}_-}^2)}\Bigr]_{z=\omega-i\delta },\\ \Lambda^{even}_{R} ({\bf{q}} ,\omega)&=& \frac{\Delta_{0} }{2\pi }{\rm Im}\int_k~ ~\Bigl[ \frac{-i(z^2+\epsilon_{{\bf{k}}_+}\epsilon_{{\bf{k}}_-}+\Delta_{{\bf{k}}_+} \Delta_{{\bf{k}}_-})} {(z^2-E_{{\bf{k}}_+}^2) (z^2-E_{{\bf{k}}_-}^2)}\Bigr]_{z=\omega-i\delta }. \end{eqnarray} The substantially smaller odd components are \begin{eqnarray}\label{Lambda-odd} \Lambda^{odd}_{A} ({\bf{q}} ,\omega)&=& \frac{\Delta_{0} }{4\pi }{\rm Im}\int_k~ (\chi_{{\bf{k}}_+}+\chi_{{\bf{k}}_-})~\Bigl[ \frac{\epsilon_{{\bf{k}}_+}\Delta_{{\bf{k}}_-}+\epsilon_{{\bf{k}}_-}\Delta_{{\bf{k}}_+}} {(z^2-E_{{\bf{k}}_+}^2) (z^2-E_{{\bf{k}}_-}^2)}\Bigr]_{z=\omega-i\delta },\\ \Lambda^{odd}_{R} ({\bf{q}} ,\omega)&=& \frac{\Delta_{0} }{2\pi }{\rm Im}\int_k~ ~\Bigl[ \frac{-i~z(\epsilon_{{\bf{k}}_+}+\epsilon_{{\bf{k}}_-})} {(z^2-E_{{\bf{k}}_+}^2) (z^2-E_{{\bf{k}}_-}^2)}\Bigr]_{z=\omega-i\delta }, \end{eqnarray} where $E_{{\bf{k}}}=[\epsilon_{{\bf{k}}}^2+\Delta_{{\bf{k}}}^2]^{\frac{1}{2}}$ is the quasiparticle energy.
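Integrals of this type are straightforward to evaluate on a discrete Brillouin-zone grid. The sketch below is an illustration only: the grid size and the finite broadening $\delta$ standing in for the infinitesimal in $z=\omega-i\delta$ are arbitrary numerical choices, and the overall normalization of $\int_k$ is left unspecified. It evaluates $\Lambda^{even}_{A}({\bf{q}},\omega)$ of (\ref{Lambda-even}):
\begin{verbatim}
import numpy as np

# Brute-force evaluation of Lambda^even_A(q, omega), Eq. (Lambda-even).
# Grid size N and broadening delta are numerical assumptions.
t, tp, tpp, mu, D0 = 1.0, -0.227, 0.168, 0.486, 0.2
N, delta = 256, 0.02

k = 2*np.pi*(np.arange(N) - N//2)/N
kx, ky = np.meshgrid(k, k, indexing='ij')

def eps(kx, ky):
    return (-2*t*(np.cos(kx) + np.cos(ky)) - 4*tp*np.cos(kx)*np.cos(ky)
            - 2*tpp*(np.cos(2*kx) + np.cos(2*ky)) + mu)

def gap(kx, ky):
    return 0.5*D0*(np.cos(kx) - np.cos(ky))

def lam_even_A(qx, qy, w):
    z = w - 1j*delta
    ep, em = eps(kx + qx/2, ky + qy/2), eps(kx - qx/2, ky - qy/2)
    dp, dm = gap(kx + qx/2, ky + qy/2), gap(kx - qx/2, ky - qy/2)
    chi = (dp + dm)*2/D0                  # chi_{k+} + chi_{k-}
    num = z*(dp + dm)
    den = (z**2 - (ep**2 + dp**2))*(z**2 - (em**2 + dm**2))
    return (D0/(4*np.pi))*np.imag((chi*num/den).mean())  # BZ average

print(lam_even_A(0.5, 0.5, D0/2))         # response at one (q, omega)
\end{verbatim}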
The vortex contribution to the Fourier transformed conductance ratio (\ref{Z(q,V,B)}) is then \begin{eqnarray}\label{Z_VI} Z_{VI} ({\bf{q}} ,V,B) = n_{V}( Z_{A} ({\bf{q}} ,V,B)+ Z_{R} ({\bf{q}} ,V,B)), \end{eqnarray} where \begin{eqnarray}\label{Z_A} Z_{A} ({\bf{q}} ,V,B)=f_A({\bf{q}})~\biggl[ (\frac{1}{\rho_0(V)}-\frac{1}{\rho_0(-V)})\Lambda_A ^{even}({\bf{q}} ,V)+(\frac{1}{\rho_0(V)}+\frac{1}{\rho_0(-V)})\Lambda_A ^{odd}({\bf{q}} ,V)\biggr] \end{eqnarray} and \begin{eqnarray}\label{Z_R} Z_{R} ({\bf{q}} ,V,B)=f_R({\bf{q}})~\biggl[ (\frac{1}{\rho_0(V)}-\frac{1}{\rho_0(-V)})\Lambda_R ^{even}({\bf{q}} ,V)+(\frac{1}{\rho_0(V)}+\frac{1}{\rho_0(-V)})\Lambda_R ^{odd}({\bf{q}} ,V)\biggr]. \end{eqnarray} \section{Numerical simulation} In this section we compare the results of our phenomenological model with the experimental data by numerically computing $Z_{VI}({\bf{q}},V,B)$ (\ref{Z_VI}) for Andreev (\ref{Z_A}) and resonant (\ref{Z_R}) scattering. In these calculations we took a BCS superconductor with a d-wave gap $\Delta_{{\bf{k}}}=(\Delta_0/2)(\cos k_x-\cos k_y)$, with $\Delta_0=0.2 t$, and a dispersion introduced to fit the Fermi surface of an underdoped $Ca_{2-x}Na_xCuO_2Cl_2$ sample with $x=0.12$ \cite{ShenThesis}: \[ \epsilon_{\bf{k}}=-2t(\cos k_x+\cos k_y)-4t'\cos k_x\cos k_y- 2t''(\cos 2k_x+\cos 2k_y)+\mu, \] where $t=1$, $t'=-0.227$, $t''=0.168$, $\mu=0.486$. \subsection{Evaluation of $Z_{VI}$} In the absence of a microscopic model for the interior of the vortex core, we model the Andreev and the resonant scattering in the vortex region by constants, $f_A ({\bf{q}})=f_A$ and $f_R ({\bf{q}})=f_R$. Fig. 2 shows the results of calculations using these assumptions. Our simple model reproduces the enhancement of the sign-preserving q-vectors $q_{1,4,5}$ as a result of Andreev and resonant scattering off vortex-impurity composites. Some care is required in interpreting Fig. 2, because the squared conductance ratio $|Z({\bf{q}} ,V)|^{2}$ contains weighted contributions from both even and odd fluctuations in the density of states, with the weighting factor favoring {\sl odd} fluctuations, especially near $V=0$. Both Andreev and resonant scattering contribute predominantly to the even fluctuations of the density of states (see Table 1), and give rise to the signals at $q_{1,4,5}$. In the case of resonant scattering, we observe an additional peak at $q_{3}$. From Table 1, we see that the Andreev and the resonant scattering potentials also produce a signal in the odd channel, which experiences no coherence factor effect and thus contributes to all the octet q-vectors; this odd-channel signal enters the conductance ratio $Z({\bf{q}} ,V)$ given by (\ref{Z-even-odd}) with a substantial weighting factor. This is the origin of the peak at $q_{3}$ in Fig. 2(b). \begin{figure} \centering \subfigure[] { \label{Andr} \includegraphics[width=0.45\linewidth]{0315-figs/0315-Andr.eps} } \hspace{.3in} \subfigure[] { \label{Res} \includegraphics[width=0.45\linewidth]{0315-figs/0315-Res.eps} } \caption{(Color online) Quasiparticle interference produced by the Andreev and the resonant scattering potentials, the primary candidates for producing the experimentally observed enhancement of sign-preserving scattering. Fig. (a) displays a density plot of the squared Fourier transformed conductance ratio $|Z_{A}({\bf{q}},V)|^2$ predicted by (\ref{Z_A}) at a bias voltage $V= \Delta_{0}/2$ produced by pure Andreev scattering ($f_{A}\neq 0$, $f_{R}=0$).
Fig. (b) displays a density plot of the squared Fourier transformed conductance ratio $|Z_{R}({\bf{q}},V)|^2$ predicted by (\ref{Z_R}) at a bias voltage $V= \Delta_{0}/2$ produced by pure resonant scattering ($f_{R}\neq 0$, $f_{A}=0$). Blue lines label the positions of the sign-preserving q-vectors $q= q_{1,4,5}$, where both Andreev and resonant scattering is peaked. Red lines label the positions of the sign-reversing q-vectors $q= q_{2,3,6,7}$, where both Andreev and resonant scattering is minimal.} \end{figure} \subsection{Comparison with experimental data} The results of the calculation of the full squared conductance ratio $|Z(q,V,B)|^2$ are obtained by combining the scattering off the impurities inside the vortex core, $Z_{VI}$, with the contribution from scattering off impurities outside the vortex core, $Z_{I}$, according to equation (\ref{Z(q,V,B)}), reproduced here: \begin{equation}\label{Z(q,V,B)again} |Z(q,V,B)|^{2} = \frac{2eB}{h}~|Z_{VI}(q,V,B)|^{2}+ (n_i - \frac{2eB}{h}n_{core})~ |Z_{I}(q,V,B)|^{2}, \end{equation} where $n_{core}= n_{i}\pi (\xi/2)^{2}$ is the number of impurities per vortex core. Fig. 3 displays a histogram of the computed field-induced change in the conductance ratio $|Z({\bf{q}}_{i},V,B)|^2-|Z({\bf{q}}_{i},V,B=0)|^2$ at the octet q-vectors. In these calculations, we took an equal strength of Andreev and resonant scattering, $f_{R}=f_{A}$, with a weak scalar scattering outside the vortex core of strength $f_{I}= f_{R}=f_{A}$. In all our calculations, we find that Andreev and resonant scattering are equally effective in qualitatively modelling the observations. The main effect governing the enhancement of the sign-preserving wavevectors $q_{1,4,5}$ and the depression of the sign-reversing wavevectors $q_{2,3,6,7}$ derives from the change in the impurity scattering potential that results from embedding the impurity inside the vortex core. We estimated the percentage of the impurities decorated by the vortices from the fraction of sample area covered by the vortices. The concentration of vortices is $n_V(B)=2eB/h=B/\Phi_0$, where $\Phi_0=h/(2e)=2.07\times 10^{-15}$~Wb is the superconducting magnetic flux quantum. The area of a vortex region is estimated as $A_V=\pi(\xi_0/2)^2$ with the superconducting coherence length $\xi_0=44$~\AA~\cite{Kim}, so that the percentage of the original impurities that are decorated by vortices in the presence of the magnetic field is $\alpha(B)=n_V(B)~A_V$. Using these values, we obtain $\alpha(B=5~{\rm T})\approx 3.7\%$ and $\alpha(B=11~{\rm T})\approx 8.1\%$. For simplicity, we assume that a vortex core is pinned to a single impurity, $n_{core}= n_{i}\pi (\xi/2)^{2}=1$, so that the ratio of the impurity and vortex concentrations is $n_i/n_V(B)=n_{core}/[A_V~n_V(B)]$, which gives $n_i/n_V\approx 27$ at $B=5$~T and $n_i/n_V\approx 12$ at $B=11$~T. \begin{figure} \centering \subfigure[] { \label{histT-Andr} \includegraphics[width=0.45\linewidth]{0315-figs/0315-histTAndr.eps} } \hspace{.3in} \subfigure[] { \label{histT-Res} \includegraphics[width=0.45\linewidth]{0315-figs/0315-histTRes.eps} } \hspace{.3in} \subfigure[] { \label{Experiment} \includegraphics[width=0.45\linewidth]{0315-figs/0315-histE.eps} } \caption{(Color online) Comparison between the results of the model calculations and the experimental data.
Figs. (a) and (b) show the change in the squared Fourier transformed conductance ratio $\delta Z^2\equiv|Z({\bf{q}},V,B)|^2-|Z({\bf{q}},V,B=0)|^2$ at ${\bf{q}}=q_{1-7}$, computed for a magnetic field of $B=$5 T (grey bars) and 11 T (red bars) at a bias voltage $V=\Delta_{0}/2$, assuming the origin of the selective enhancement is the Andreev (Fig. (a)) or the resonant (Fig. (b)) scattering in the vortex core region. Here a vortex, pinned to a scalar impurity, transforms its original scattering potential with enhanced scattering at $q=q_{2,3,6,7}$ into an Andreev (Fig. (a)) or into a resonant (Fig. (b)) scattering potential with enhanced scattering at $q=q_{1,4,5}$ (see Table 1). Fig. (c) shows the experimentally observed change in the squared Fourier transformed conductance ratio $\delta Z^2\equiv|Z({\bf{q}},V,B)|^2-|Z({\bf{q}},V,B=0)|^2$ at ${\bf{q}}=q_{1-7}$, in a magnetic field of $B=$5 T (grey bars) and 11 T (red bars) at a bias voltage $V=4.4$ meV.} \end{figure} As Fig. 3 shows, the Andreev and the resonant scattering scenarios are equally effective in qualitatively modelling the observations: our model reproduces the experimentally observed enhancement of the sign-preserving scattering and the depression of the sign-reversing scattering. \section{Discussion} In this work, we have shown how scanning tunneling spectroscopy can serve as a phase-sensitive probe of the superconducting order parameter. In particular, we find that the even and odd components of the density of states fluctuations can each be associated with a well-defined coherence factor. The measured Fourier transformed conductance ratio $Z({\bf{q}},V)$, obtained from $Z({\bf{r}},V)=\frac{dI/dV({\bf{r}},+V)}{ dI/dV({\bf{r}},-V)}$, is a weighted combination of these two terms, and in the limit of particle-hole symmetry it is dominated by the odd component of the density of states fluctuations. Observation of coherence factor effects with scanning tunneling spectroscopy requires the presence of controllable scatterers. In the study by Hanaguri et al. \cite{Hanaguri} these controllable scatterers are vortices. Our phenomenological model of quasiparticle scattering in the presence of vortices is able to qualitatively reproduce the observed coherence factor effects under the assumption that impurity scattering centers inside the vortex cores acquire an additional Andreev or resonant scattering component. This study raises several questions for future work. In particular, can a detailed model of a d-wave vortex core provide a microscopic justification for the modification of the impurity scattering potential? One issue that cannot be resolved from the current analysis is whether the enhanced Andreev scattering originates in the core of the pure vortex ($|Z_{V}|^{2}$) or from the decoration of impurities that are swallowed by the vortex core ($n_{core}|Z_{DI}|^{2}$). This is an issue that may require a combination of more detailed experimental analysis and detailed modelling of vortex-impurity composites using the Bogoliubov-de Gennes equations. Another open question is whether it is possible to discriminate between the Andreev and the resonant scattering mechanisms, which appear to be equally effective in accounting for the coherence factor effects. There are several aspects of the experimental observations that lie beyond our current work.
For example, experimentally it is possible to mask the Fourier transform data, spatially resolving the origin of the scattering. These masked data provide a wealth of new information. In particular, most of the enhancement of the sign-preserving scattering is restricted to the vortex core region, as we might expect from our theory. However, extending our phenomenology to encompass the masked data requires that we compute the fluctuations of the density of states as a function of distance from the vortex core, \begin{equation} R ({\bf{r}},{\bf{r}}';{\bf{r}}_V,V)=\langle \delta \rho ({\bf{r}} - {\bf{r}}_{V},V) \delta \rho ({\bf{r}}' - {\bf{r}}_{V},V)\rangle, \end{equation} a task which requires a microscopic model of the vortex core. In our theory we have used the bulk quasiparticle Green's functions to compute the scattering off the vortex-decorated impurities. Experiment does indeed show that the quasiparticle scattering off impurities inside the vortex cores is governed by the quasiparticle dispersion of the bulk: can this be given a more microscopic understanding? The penetration of superconducting quasiparticles into the vortex core is a feature that does not occur in conventional s-wave superconductors. It is not clear at present to what extent this phenomenon can be accounted for in terms of a conservative d-wave superconductor model, or whether it requires a more radical interpretation. One possibility here is that the quasiparticle fluid in both the pseudo-gap phase and inside the vortex cores is described in terms of a ``nodal liquid'' \cite{Balents-Fisher-Nayak}. Beyond the cuprates, scanning tunneling spectroscopy in a magnetic field appears to provide a promising phase-sensitive probe of the symmetry of the order parameter in unconventional superconductors. One opportunity this raises is the possibility of using STM in a field to probe the gap phase of the newly discovered iron-based high-temperature superconductors. According to one point of view \cite{Mazin}, the iron-based pnictide superconductors possess an $s_\pm$ order parameter symmetry, in which the order parameter has opposite signs on the hole pockets around $\Gamma$ and the electron pockets around $M$. If this is indeed the case, then in a magnetic field quasiparticle scattering between parts of the Fermi surface with the same gap sign should exhibit an enhancement, while scattering between parts of the Fermi surface with opposite gap signs should be suppressed. This is a point awaiting future theoretical and experimental investigation. We are indebted to Hide Takagi and Tetsuo Hanaguri for providing the experimental data. We thank Hide Takagi, Tetsuo Hanaguri, J.C. Seamus Davis, Ali Yazdani, Tami Pereg-Barnea, Marcel Franz, Peter Hirschfeld, Zlatko Tesanovic, Eduardo Fradkin, Steven Kivelson, Jian-Xin Zhu, Sasha Balatsky and Lev Ioffe for helpful discussions. This research was supported by the National Science Foundation grant DMR-0605935.
\section{Introduction} Formally, the notion of pattern avoidance is defined as follows. \begin{mydef} An $n$-permutation $\sigma$ {\bf contains} a $k$-permutation $\pi$ iff there exist integers $1 \leq x_1 < x_2 < \cdots < x_k \leq n$ such that $$\pi(i)<\pi(j) \Leftrightarrow \sigma(x_i)<\sigma(x_j)$$ for all $i,j$. Otherwise, we say $\sigma$ {\bf avoids} $\pi$. \end{mydef} In the late 1980s/early 1990s, Richard P. Stanley and Herbert Wilf independently conjectured that for every permutation $\pi$, there exists a constant $c_\pi$ such that the number of $n$-permutations avoiding $\pi$ is less than $c_\pi^n$ for all $n$. As there are $n!$ $n$-permutations, this exponential bound is non-trivial. To generalize the Stanley-Wilf conjecture, we generalize this notion of pattern avoidance. \begin{mydef} Let $\Lambda$ be a $k$-uniform hypergraph on vertex set $\{1,2,\cdots,n\}$. We say an $n$-permutation $\sigma$ {\bf $\Lambda$-contains} a $k$-permutation $\pi$ iff there exist integers $1 \leq x_1 < x_2 < \cdots < x_k \leq n$ such that $$\pi(i)<\pi(j) \Leftrightarrow \sigma(x_i)<\sigma(x_j)$$ for all $i,j$ {\bf AND} $\{x_1,\cdots,x_k\} \in E(\Lambda)$. Otherwise, we say $\sigma$ {\bf $\Lambda$-avoids} $\pi$. \end{mydef} In this paper, we analyze the generalized $\Lambda$-avoidance problem for both random hypergraphs and fixed hypergraphs, a problem originally posed by Asaf Ferber. When $\Lambda$ is a random hypergraph with edge density $\alpha$, we show that, for every permutation $\pi$, the number of $\Lambda$-avoiding $n$-permutations is $\exp(O(n))\alpha^{-\frac{n}{k-1}}$ in expectation. We also show that, for fixed $\Lambda$, the number of $n$-permutations $\Lambda$-avoiding $\pi$ is $O\p{\frac{n\log^{2+\epsilon}n}{L}}^n$ for all $\epsilon > 0$, as long as $\Lambda$ is $k$-uniform and satisfies the following:\\ $\Lambda$ contains a collection of $L$-vertex cliques where each of the $n$ vertices belongs to at least $\delta(\Lambda) = \Omega(1)$ cliques in the collection and at most $\Delta(\Lambda) = O(1)$ of them.\\ We see that, for $L=n^{O(1)}$, these bounds are non-negligible improvements on the trivial $O(n^n)$ count of all $n$-permutations.\\ A few years after the proposal of Stanley-Wilf, in 1992, Zolt\'an F\"uredi and P\'eter Hajnal proposed a similar conjecture \cite{FH} that extended the notion of pattern-avoiding permutations to pattern-avoiding matrices. Essentially, an $n \times n$ $0\mhyphen 1$ matrix $A$ contains a $k \times k$ $0\mhyphen 1$ matrix $P$ if there exists a $k \times k$ submatrix of $A$ that has 1-entries at all the locations where $P$ has 1-entries. Formally, \begin{mydef} For an $n \times n$ $0\mhyphen 1$ matrix $A$ and a $k \times k$ $0\mhyphen 1$ matrix $P$, we say that $A$ {\bf contains} $P$ iff there exist row indices $1 \leq x_1 < x_2 < \cdots < x_k \leq n$ and column indices $1 \leq y_1 < y_2 < \cdots < y_k \leq n$ such that $$P_{ij} = 1 \Rightarrow A_{x_iy_j} = 1$$ for all $i,j$. Otherwise, we say $A$ {\bf avoids} $P$. We note that, for $A$ to contain $P$, we do not require that $P$ be a submatrix of $A$, but only that the 1-entries of $P$ be present in a submatrix of $A$. \end{mydef} The F\"uredi-Hajnal conjecture states that, if an $n \times n$ $0\mhyphen 1$ matrix $A$ avoids a permutation matrix $P_\pi$, then it has fewer than $c_P n$ 1-entries for some constant $c_P$ depending only on $\pi$. Progress was first made on these conjectures by Martin Klazar in 2000 \cite{K}, who showed that the F\"uredi-Hajnal conjecture implies the Stanley-Wilf conjecture.
Then, in 2004, Adam Marcus and G\'abor Tardos proved the F\"uredi-Hajnal conjecture \cite{MT}. Combined with Klazar's arguments, a proof of the Stanley-Wilf conjecture was finally achieved.\\ This notion of pattern-avoiding matrices parallels that of pattern-avoiding permutations, as a permutation $\sigma$ contains a permutation $\pi$ if and only if the permutation matrix $P_\sigma$ contains the permutation matrix $P_\pi$. The notion of $\Lambda$-avoidance can also be extended to this matrix context, where $A$ must only avoid $P$ on submatrices whose columns correspond to an edge in $\Lambda$. Viewing pattern avoidance in this matrix context was the key to proving the Stanley-Wilf conjecture and will be one of the main insights in our analysis.\\ \section{Main Results} When $\Lambda$ is a random hypergraph, we will prove the following bound. \begin{theorem}\label{randomcasecor} Let $k\in\mathbb{Z}$ with $k>1$, and take $\pi\in S_k$. Then there is some constant $C=C(\pi)$ such that if $\Lambda$ is the $k$-uniform Erd\H{o}s-R\'{e}nyi random hypergraph on $n$ vertices with edge probability $\alpha$, then the expected number of $\sigma\in S_n$ that $\Lambda$-avoid $\pi$ is at most \[\exp(Cn)\alpha^{-\frac{n}{k-1}}.\] Furthermore, this bound is sharp to within an exponential factor; that is, up to a modification in $C$. \end{theorem} Due to linearity of expectation, Theorem \ref{randomcasecor} reduces to bounding the number of permutations containing few copies of $\pi$, for which we will require bounds on the maximal number of ones in $0\mhyphen 1$ matrices containing few copies of the permutation matrix $A_{\pi}$. Both of these bounds may be of independent interest as they give sharp first-order approximations. \begin{theorem}\label{01matrices} Let $k\in\mathbb{Z}^+$, $\pi\in S_k$, and let $A_{\pi}$ be the $k\times k$ permutation matrix corresponding to $\pi$. There exist constants $C=C(\pi)$ and $C'=C'(\pi)>0$ such that if $M$ is an $n\times n$ $0\mhyphen 1$ matrix containing $a$ ones, with $Cn\leq a\leq n^2$, then $M$ contains at least $C'\frac{a^{2k-1}}{n^{2k-2}}$ copies of $A_{\pi}$. Furthermore, for $n\leq a\leq n^2$ this bound is sharp to within a constant factor (depending on $\pi$). \end{theorem} \begin{theorem}\label{permsupersaturation} Let $k\in\mathbb{Z}^+$, $k>1$ and $\pi\in S_k$. There exists some constant $C=C(\pi)$ such that for all $m,n\in\mathbb{Z}^{\geq 0}$, $m\leq\binom{n}{k}$, the number of permutations in $S_n$ containing at most $m$ copies of $\pi$ is at most \[\exp(Cn)\max\left(1,\left(\frac{m}{n}\right)^{\frac{n}{k-1}}\right).\] Furthermore, this bound is sharp to within an exponential factor (that is, up to a change in $C$). \end{theorem} In Section \ref{cordeduction}, we will make the easy deduction of Theorem \ref{randomcasecor} as a corollary of Theorem \ref{permsupersaturation}. In Section \ref{01boundsection}, we will prove Theorem \ref{01matrices}, and deduce an upper bound on the number of $0\mhyphen 1$ matrices satisfying the conditions of Theorem \ref{01matrices}. Finally, in Section \ref{randomproof} we will prove Theorem \ref{permsupersaturation}. We will also consider the case when $\Lambda$ is a fixed hypergraph with particular structure. In particular, we will show the following.
\begin{theorem}\label{fixedtheorem} For every permutation $\pi$, the number of $n$-permutations $\Lambda$-avoiding $\pi$ is $O\p{\frac{n\log^{2+\epsilon}n}{L}}^n$ for all $\epsilon > 0$, as long as $\Lambda$ is $k$-uniform and satisfies the following:\\ $\Lambda$ contains a collection of $L$-vertex cliques where each of the $n$ vertices belongs to at least $\delta(\Lambda) = \Omega(1)$ cliques in the collection and at most $\Delta(\Lambda) = O(1)$ of them. \end{theorem} In Sections \ref{fixedcase} to \ref{genbound}, we will prove Theorem \ref{fixedtheorem}. The main tool in our analysis will be the hypergraph containers method. The containers method enables us to distribute the vertices of a hypergraph into containers such that every independent set in the hypergraph belongs to one of the containers. We can apply this method recursively, breaking each container down further into more containers in a branching fashion, to bound the total number of independent sets in a hypergraph.\\ We will set up a hypergraph whose vertices represent the 1-entries in a matrix and whose edges represent the entries in a submatrix containing $P_\pi$ whose columns form an edge of $\Lambda$. In this context, independent sets correspond to $\Lambda$-avoiding matrices. Using the hypergraph containers method, we bound the number of independent sets corresponding to permutation matrices, utilizing F\"uredi-Hajnal to show that the conditions needed to apply the method hold.\\ In Section \ref{fixedcase} we introduce this fixed $\Lambda$ case and motivate the $L$-vertex clique constraint on $\Lambda$ with $L=n^{O(1)}$, showing that a fixed $\Lambda$ with $O(1)$ maximal clique size can contain $\Theta(n^k)$ edges while some pattern $\pi$ is still $\Lambda$-avoided by $O(n)^n$ $n$-permutations. In Section \ref{formulation}, we establish the matrix/hypergraph formulation of the problem. In Section \ref{HC}, we formally introduce the hypergraph containers lemma and investigate the necessary conditions to apply the lemma in a recursive branching fashion. In Sections \ref{RL} and \ref{SRB}, we verify that these conditions are met using two additional lemmas. Finally, in Section \ref{genbound}, we apply the branching hypergraph containers and prove Theorem \ref{fixedtheorem}.\\ Many of the arguments in these sections parallel those presented in a paper \cite{FMS} by Asaf Ferber, Gweneth Anne McKinley, and Wojciech Samotij. Additionally, the application of the hypergraph container lemma in a recursive branching fashion is adopted from a paper \cite{MS} by Morris and Saxton.\\ Lastly, in Section \ref{conclusion}, we will compare Theorems \ref{randomcasecor} and \ref{fixedtheorem} and summarize our results. \section{Linearity of Expectation}\label{cordeduction} Suppose $\Lambda$ is a random hypergraph with each edge chosen independently at random with edge probability $\alpha$. In this case, we may simplify the problem by making use of linearity of expectation. In particular, let us define \[Av_{n,\Lambda}(\pi):=\{\sigma\in S_n:\sigma\text{ }\Lambda\text{-avoids }\pi\}.\] Then by linearity of expectation, we have that \[\mathbb{E}_{\Lambda}[|Av_{n,\Lambda}(\pi)|]=\displaystyle\sum_{\sigma\in S_n}\Pr[\sigma\text{ }\Lambda\text{-avoids }\pi].\] This latter probability is simply the probability that none of the copies of $\pi$ in $\sigma$ correspond to edges of $\Lambda$, which is $(1-\alpha)^{\#\text{ of copies of }\pi\text{ in }\sigma}$.
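This identity is easy to sanity-check numerically for small $n$: only the witnessing index sets matter, and each is an edge of $\Lambda$ independently with probability $\alpha$. A minimal Monte Carlo sketch (illustrative only; helper names are ours): \begin{verbatim}
import random
from itertools import combinations

def witnesses(sigma, pi):
    # 0-based index sets on which sigma realises the pattern pi
    n, k = len(sigma), len(pi)
    return [xs for xs in combinations(range(n), k)
            if all((pi[i] < pi[j]) == (sigma[xs[i]] < sigma[xs[j]])
                   for i in range(k) for j in range(k))]

def empirical_avoidance(sigma, pi, alpha, trials=20000):
    # estimates Pr[sigma Lambda-avoids pi]; each witnessing index set
    # fails to be an edge with probability 1 - alpha, independently,
    # so the estimate should approach (1 - alpha) ** len(ws)
    ws = witnesses(sigma, pi)
    hits = sum(all(random.random() > alpha for _ in ws)
               for _ in range(trials))
    return hits / trials
\end{verbatim}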
Therefore, \begin{equation}\label{linexp} \mathbb{E}_{\Lambda}[|Av_{n,\Lambda}(\pi)|]=\displaystyle\sum_{\sigma\in S_n}(1-\alpha)^{\#\text{ of copies of }\pi\text{ in }\sigma}. \end{equation} Thus bounds on the number of permutations containing few copies of $\pi$, as given in Theorem \ref{permsupersaturation}, will give us bounds on our desired quantity $\mathbb{E}_{\Lambda}[|Av_{n,\Lambda}(\pi)|]$. We now make this argument rigorous. \begin{proof}[Deduction of Theorem \ref{randomcasecor} from Theorem \ref{permsupersaturation}] We first prove the upper bound. By (\ref{linexp}), \begin{align*} \mathbb{E}_{\Lambda}[|Av_{n,\Lambda}(\pi)|] & =\displaystyle\sum_{\sigma\in S_n}(1-\alpha)^{\#\text{ of copies of }\pi\text{ in }\sigma} \\ & \leq\displaystyle\sum_{m=0}^{\binom{n}{k}}(1-\alpha)^m\cdot |\{\sigma\in S_n:\sigma\text{ contains at most }m\text{ copies of }\pi\}|, \end{align*} since each $\sigma$ is counted, with the correct weight, in the term $m$ equal to its number of copies of $\pi$, and all other terms are nonnegative. By Theorem \ref{permsupersaturation}, there exists $C=C(\pi)$ such that this is at most \begin{align*} \displaystyle\sum_{m=0}^{\binom{n}{k}}(1-\alpha)^m\exp(Cn)\max\left(1,\left(\frac{m}{n}\right)^{\frac{n}{k-1}}\right) & =\displaystyle\sum_{m=0}^n(1-\alpha)^m\exp(Cn) \\ & +\displaystyle\sum_{m=n+1}^{\binom{n}{k}}(1-\alpha)^m\exp(Cn)\left(\frac{m}{n}\right)^{\frac{n}{k-1}} \\ & \leq (n+1)\exp(Cn)+\frac{\exp(Cn)}{n^{\frac{n}{k-1}}}\displaystyle\sum_{m=n+1}^{\binom{n}{k}} (1-\alpha)^m m^{\frac{n}{k-1}} \\ & \leq (n+1)\exp(Cn)+\binom{n}{k}\frac{\exp(Cn)}{n^{\frac{n}{k-1}}}\displaystyle\max_{m\in\mathbb{R}^+}(1-\alpha)^m m^{\frac{n}{k-1}} \\ & \leq n^k\exp(Cn)\left(1+n^{-\frac{n}{k-1}}\displaystyle\max_{m\in\mathbb{R}^+}(1-\alpha)^m m^{\frac{n}{k-1}}\right) \\ & \leq\exp((C+k)n)\left(1+n^{-\frac{n}{k-1}}\displaystyle\max_{m\in\mathbb{R}^+}e^{-\alpha m} m^{\frac{n}{k-1}}\right), \end{align*} where we are simply bounding our sums by their numbers of terms times their maximum terms, and using the trivial bounds $n+1\leq n^k$, $n<e^n$ and $1-\alpha\leq e^{-\alpha}$. Now, $e^{-\alpha m} m^{\frac{n}{k-1}}$ is maximized when $m=\frac{n}{(k-1)\alpha}$, whereupon $e^{-\alpha m} m^{\frac{n}{k-1}}=\left(\frac{n}{e(k-1)\alpha}\right)^{\frac{n}{k-1}}$. Substituting, \begin{align*} \mathbb{E}_{\Lambda}[|Av_{n,\Lambda}(\pi)|] & \leq\exp((C+k)n)\left(1+n^{-\frac{n}{k-1}}\left(\frac{n}{e(k-1)\alpha}\right)^{\frac{n}{k-1}}\right) \\ & =\exp((C+k)n)\left(1+\left(\frac{1}{e(k-1)\alpha}\right)^{\frac{n}{k-1}}\right) \\ & \leq\exp((C+k)n)\left(1+\alpha^{-\frac{n}{k-1}}\right) \\ & \leq\exp((C+k+1)n)\alpha^{-\frac{n}{k-1}}. \end{align*} Replacing $C+k+1$ by $C$, we have deduced the upper bound. For the lower bound, let $m=\left\lceil\frac{n}{\alpha}\right\rceil$ in the lower bound of Theorem \ref{permsupersaturation}. We obtain that there are at least $\exp(C'n)\alpha^{-\frac{n}{k-1}}$ permutations in $S_n$ containing at most $\left\lceil\frac{n}{\alpha}\right\rceil$ copies of $\pi$ for some $C'=C'(\pi)$. Thus \begin{align*} \mathbb{E}_{\Lambda}[|Av_{n,\Lambda}(\pi)|] & \geq\exp(C'n)\alpha^{-\frac{n}{k-1}}(1-\alpha)^{\left\lceil\frac{n}{\alpha}\right\rceil} \\ & \geq\exp(C'n)\alpha^{-\frac{n}{k-1}}\exp\left(-\frac{\alpha}{1-\alpha}\cdot\frac{2n}{\alpha}\right) \\ & \geq\exp\left(\left(C'-\frac{2}{1-\alpha}\right)n\right)\alpha^{-\frac{n}{k-1}}, \end{align*} where in the second line we used the inequality $\log(1-\alpha)\geq-\frac{\alpha}{1-\alpha}$ (an easy consequence of Taylor expansion) and $\left\lceil\frac{n}{\alpha}\right\rceil\leq\frac{2n}{\alpha}$ (immediate as $\alpha\leq 1$ and $n\geq 1$).
This proves the lower bound with a constant of $C=C'-4$ when $\alpha\leq\frac{1}{2}$. With $\alpha\geq\frac{1}{2}$, note that $\mathbb{E}_{\Lambda}[|Av_{n,\Lambda}(\pi)|]\geq 1\geq\exp(-n)\alpha^{-\frac{n}{k-1}}$ (as either the all-increasing or all-decreasing permutation avoids $\pi$ over any hypergraph), so the constant $-1$ suffices. So letting $C=\min(C'-4,-1)$ is sufficient to prove the lower bound, completing our argument. \end{proof} \section{Bounds on $0\mhyphen 1$ Matrices}\label{01boundsection} As in the proof strategy of \cite{MT}, before we prove our result for permutations we first pass to the domain of $0\mhyphen 1$ matrices. Since we would like to bound the number of permutations with few copies of $\pi$, we first show that a matrix that contains few copies of the corresponding permutation matrix $A_{\pi}$ must have few ones. The technique we use to prove Theorem \ref{01matrices} is a classic method for proving supersaturation results; that is, we show that a random submatrix of $M$ will with non-negligible probability contain at least one copy of $A_{\pi}$, so $M$ as a whole must contain many copies of $A_{\pi}$. In \cite{MT}, Marcus and Tardos famously proved the following result, previously known as the F\"uredi-Hajnal conjecture. \begin{theorem}[Marcus-Tardos]\label{FurHaj} There exists a constant $c_{\pi}$ such that for all $n$, any $n\times n$ $0\mhyphen 1$ matrix containing at least $c_{\pi}n$ ones contains a copy of $A_{\pi}$. \end{theorem} From this, we can immediately deduce the following (extremely weak) supersaturation result, which we will bootstrap using sampling into our stronger results. \begin{lemma}\label{easybound} With $c_{\pi}$ as in Theorem \ref{FurHaj}, any $n\times n$ $0\mhyphen 1$ matrix with $m$ ones contains at least $m-c_{\pi}n$ copies of $A_{\pi}$. \end{lemma} \begin{proof}[Proof of Lemma \ref{easybound}] Suppose for the sake of contradiction that our matrix has fewer than $m-c_{\pi}n$ copies of $A_{\pi}$. Take one $1$-entry from each of those copies and change it to a $0$. Now the matrix still has at least $c_{\pi}n$ ones, but by assumption has no copies of $A_{\pi}$, contradicting Theorem \ref{FurHaj}. \end{proof} We are now ready to prove Theorem \ref{01matrices}. \begin{proof}[Proof of Theorem \ref{01matrices}] Let $c_{\pi}$ be as given by Theorem \ref{FurHaj} and Lemma \ref{easybound}. Take an $r\times r$ submatrix $R$ of $M$, with $r$ to be chosen later. Let the density of ones in $R$ (that is, the number of ones in $R$ divided by $r^2$) be $1(R)$. Similarly, let the density of $A_{\pi}$ in $R$ (that is, the number of copies of $A_{\pi}$ in $R$ divided by $\binom{r}{k}^2$) be $\pi(R)$. Define $1(M)$ and $\pi(M)$ similarly; in particular, $1(M)=\frac{a}{n^2}\geq\frac{C}{n}$ by assumption. In this notation, Lemma \ref{easybound} tells us that \[\binom{r}{k}^2\pi(R)\geq r^21(R)-c_{\pi}r,\] or rearranging, \begin{equation}\label{Rbound} 1(R)\leq\frac{\binom{r}{k}^2}{r^2}\pi(R)+\frac{c_{\pi}}{r}. \end{equation} Now, let $R$ be a \emph{random} $r\times r$ submatrix of $M$ (we choose a random subset of size $r$ of the rows and similarly for the columns). Now, for each copy of $A_{\pi}$ in $M$ (defined by $k$ rows and $k$ columns), there is a $\frac{\binom{r}{k}^2}{\binom{n}{k}^2}$ probability that all rows and columns corresponding to this copy of $A_{\pi}$ are chosen to be in $R$.
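For completeness, the sampling probability can be computed directly: the probability that $k$ fixed rows all lie among the $r$ sampled rows is \[\frac{\binom{n-k}{r-k}}{\binom{n}{r}}=\frac{(n-k)!\,r!}{n!\,(r-k)!}=\frac{\binom{r}{k}}{\binom{n}{k}},\] and the columns independently contribute the same factor, giving the factor $\binom{r}{k}^2/\binom{n}{k}^2$ above.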
Thus the expected number of copies of $A_{\pi}$ in $R$ is $\frac{\binom{r}{k}^2}{\binom{n}{k}^2}$ times the number of copies of $A_{\pi}$ in $M$, and therefore \[\mathbb{E}[\pi(R)]=\pi(M).\] Similarly, each entry of $M$ has equal probability of appearing in $R$, and so \[\mathbb{E}[1(R)]=1(M).\] Now that we have $1(M)$ and $\pi(M)$ expressed in terms of $1(R)$ and $\pi(R)$, (\ref{Rbound}) applied to $R$ will give an inequality between $1(M)$ and $\pi(M)$. Explicitly, \begin{equation}\label{1pirelation} 1(M)=\mathbb{E}[1(R)]\leq\mathbb{E}\left[\frac{\binom{r}{k}^2}{r^2}\pi(R)+\frac{c_{\pi}}{r}\right]=\frac{\binom{r}{k}^2}{r^2}\pi(M)+\frac{c_{\pi}}{r}. \end{equation} We now choose the value of $r$. Clearly, we need $1(M)>\frac{c_{\pi}}{r}$ for (\ref{1pirelation}) to be useful, so we choose $r=\left\lfloor\frac{3c_{\pi}}{1(M)}\right\rfloor$. We require $r\leq n$ (so that we can sample $r\times r$ submatrices), but this holds as long as $1(M)\geq\frac{3c_{\pi}}{n}$; that is, $M$ has at least $3c_{\pi}n$ ones. Thus taking $C=3c_{\pi}$ in the statement of Theorem \ref{01matrices} is sufficient to satisfy $r\leq n$. Note that since $c_{\pi}\geq 1$ and $1(M)\leq 1$, we have $r\geq\frac{2c_{\pi}}{1(M)}$. Thus $1(M)-\frac{c_{\pi}}{r}\geq\frac{1(M)}{2}$. Substituting into (\ref{1pirelation}), \begin{align*} \frac{1(M)}{2} & \leq 1(M)-\frac{c_{\pi}}{r} \\ & \leq\frac{\binom{r}{k}^2}{r^2}\pi(M). \end{align*} Now, since $n\geq r$, and the function $\frac{\binom{x}{k}}{x^k}$ is increasing for $x>k$ (and $0$ on integers less than $k$), we have that $\binom{r}{k}\leq\frac{r^k}{n^k}\binom{n}{k}$. Substituting this yields \begin{align*} 1(M) & \leq 2\frac{\binom{r}{k}^2}{r^2}\pi(M) \\ & \leq 2\frac{r^{2k-2}}{n^{2k}}\binom{n}{k}^2\pi(M) \\ & \leq 2\frac{(3c_{\pi})^{2k-2}}{(1(M))^{2k-2}n^{2k}}\binom{n}{k}^2\pi(M). \end{align*} Letting $C'=C'(\pi):=\frac{1}{2(3c_{\pi})^{2k-2}}$, we have shown that \begin{equation}\label{randomresult} C'\cdot 1(M)^{2k-1}n^{2k}\leq\binom{n}{k}^2\pi(M). \end{equation} Now, $1(M)=\frac{a}{n^2}$ by definition. Furthermore, the right hand side of (\ref{randomresult}) is simply the number of copies of $A_{\pi}$ in $M$ (by the definition of $\pi(M)$). Thus we have shown that $M$ contains at least $C'\frac{a^{2k-1}}{n^{2k-2}}$ copies of $A_{\pi}$, so taking this value of $C'$ and $C=3c_{\pi}$ (as above), we have proven our upper bound. To show that this bound is sharp, take $n\leq a\leq n^2$. We may modify $a$ and $n$ by at most a constant factor so that $n|a$ and $a|n^2$. Now, suppose $\pi(1)>\pi(k)$ without loss of generality. Divide our $n\times n$ matrix $M$ into blocks of side length $\frac{a}{n}$ (so there are $\frac{n^2}{a}$ blocks on each side). Consider the $\frac{n^2}{a}$ blocks along the upper-left to lower-right diagonal. Fill each of these blocks with ones, and fill the rest of $M$ with zeroes. How many copies of $A_{\pi}$ are contained in $M$? Recall that a copy of $A_{\pi}$ is given by a set of $k$ $1$-entries of $M$, say at indices $(i_1,j_1),\ldots,(i_k,j_k)$, with $i_1<\cdots<i_k$ and the relative ordering of the $j_i$'s given by $\pi$. There are $a$ ones in $M$ (as required) and so at most $a$ choices for $(i_1,j_1)$. Let $B$ be the $\frac{a}{n}\times\frac{a}{n}$ block containing $(i_1,j_1)$. Now, $i_1<i_k$ and $j_1>j_k$ (since $\pi(1)>\pi(k)$), so we are looking for a point to the lower-left of $(i_1,j_1)$.
But since all blocks containing ones are on the upper-left to lower-right diagonal, $(i_k,j_k)$ must be contained in $B$ as well. Now, since for all $r$, $i_1\leq i_r\leq i_k$, and $B$ is the only block in its row containing ones, all other entries $(i_r,j_r)$ must be contained in $B$. So for all of the remaining $k-1$ entries $(i_r,j_r)$ with $r>1$, there are at most $|B|=\left(\frac{a}{n}\right)^2$ choices. So in total there are at most \[a\left(\frac{a}{n}\right)^{2k-2}=\frac{a^{2k-1}}{n^{2k-2}}\] copies of $A_{\pi}$ in $M$. Since we only had to adjust $a,n$ by a constant factor at the start, this proves the desired sharpness bounds, completing the proof of Theorem \ref{01matrices}. \end{proof} En route to the proof of Theorem \ref{permsupersaturation} in the next section, we bound the number of $0\mhyphen 1$ matrices containing few copies of $A_{\pi}$. \begin{proposition}\label{matrixcountprop} Let $k\in\mathbb{Z}^+$ with $k>1$, $\pi\in S_k$ be fixed. There is a constant $C=C(\pi)$ such that for all $m,n\geq 0$, the number of $n\times n$ $0\mhyphen 1$ matrices containing at most $m$ copies of $A_{\pi}$ is at most \[\exp\left(C\left(n+\sqrt[2k-1]{mn^{2k-2}}\right)\right).\] \end{proposition} \begin{proof} Let $S(n,m)$ be the set of $n\times n$ $0\mhyphen 1$ matrices containing at most $m$ copies of $A_{\pi}$, and let $f(n,m)=|S(n,m)|$. For an $n\times n$ $0\mhyphen 1$ matrix $M$ with $2|n$, let the $2$-contraction of $M$ be the $n/2\times n/2$ $0\mhyphen 1$ matrix $M'$ such that $M'_{i,j}=0$ if and only if $M_{2i-1,2j-1}=M_{2i-1,2j}=M_{2i,2j-1}=M_{2i,2j}=0$. Now, for each copy of $A_{\pi}$ in $M'$, there is at least one corresponding copy of $A_{\pi}$ in $M$. This is because a copy of $A_{\pi}$ in $M'$ corresponds to a choice of $k$ $1$-entries of $M'$ with relative row- and column- ordering given by $\pi$, and each $1$-entry of $M'$ corresponds (in an order-preserving way) to at least one $1$-entry of $M$. Thus $M$ contains at least as many copies of $A_{\pi}$ as its $2$-contraction $M'$, so $M'$ must also contain at most $m$ copies of $A_{\pi}$. Therefore, if $M\in S(n,m)$, then we must have $M'\in S(n/2,m)$, where $M'$ is the $2$-contraction of $M$. Thus \begin{equation}\label{01recursiontechnique} f(n,m)=|S(n,m)|\leq\displaystyle\sum_{M'\in S(n/2,m)}\left|\{M:M'\text{ is the }2\text{-contraction of }M\}\right|. \end{equation} Now, given a matrix $M'$, how many matrices $M$ $2$-contract to $M'$? For every $0$-entry of $M'$, the corresponding four entries of $M$ must be $0$, so there are no choices to be made. For every $1$-entry of $M'$, the corresponding four entries of $M$ may be either $1$ or $0$ (but not all $0$), so there are $15$ choices for those entries of $M$. Thus there are at most $15^{(\#\text{ of ones in }M')}$ matrices that $2$-contract to $M'$. Combining this with (\ref{01recursiontechnique}), we obtain that \begin{equation}\label{01recursiontechnique2} f(n,m)\leq\displaystyle\sum_{M'\in S(n/2,m)}15^{(\#\text{ of ones in }M')}\leq f(n/2,m)\cdot 15^{\displaystyle\max_{M'\in S(n/2,m)}(\#\text{ of ones in }M')} \end{equation} We now apply Theorem \ref{01matrices}. For $M'\in S(n/2,m)$, we know that $M'$ has at most $m$ copies of $A_{\pi}$ by definition, so by Theorem \ref{01matrices} it must have at most $O(n+\sqrt[2k-1]{mn^{2k-2}})$ ones. Substituting into (\ref{01recursiontechnique2}), \begin{equation*} f(n,m)\leq f(n/2,m)\cdot \exp(C_0(n+\sqrt[2k-1]{mn^{2k-2}})). \end{equation*} for some $C_0=C_0(\pi)$.
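The $2$-contraction map itself is simple to realise concretely; the following NumPy sketch (illustrative only, written for $n$ divisible by the block size, and directly generalising to the block size $b$ used in the next section) may help fix ideas. \begin{verbatim}
import numpy as np

def contraction(M, b=2):
    # b-contraction of an n x n 0-1 array, for b dividing n: each
    # b x b block of M collapses to a single entry, which is 1 iff
    # the block contains at least one 1
    n = M.shape[0]
    assert n % b == 0
    blocks = M.reshape(n // b, b, n // b, b)
    return (blocks.sum(axis=(1, 3)) > 0).astype(int)
\end{verbatim}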
This recursion is fairly easy to solve; we see that for $a\in\mathbb{Z}^{\geq 0}$ \begin{align*} \log(f(2^a,m)) & \leq\log(f(1,m))+C_0\displaystyle\sum_{i=1}^a\left(2^i+\sqrt[2k-1]{m2^{i(2k-2)}}\right) \\ & \leq 1+C_0\left(2^{a+1}+\sqrt[2k-1]{m}\cdot \frac{2^{\frac{(a+1)(2k-2)}{2k-1}}}{2^{\frac{2k-2}{2k-1}}-1}\right) \\ & \leq (C_0+1)\left(2^{a+1}+2\sqrt[2k-1]{m\cdot 2^{(a+1)(2k-2)}}\right), \end{align*} where we simply summed the geometric series and used that $\log(f(1,m))\leq\log(2)\leq 1$ and that $2^{\frac{2k-2}{2k-1}}\geq\frac{3}{2}$ for $k\geq 2$. Now, $f(n,m)$ is nondecreasing in $n$ (as we may `pad' any $n\times n$ matrix with zeroes to form an $n'\times n'$ matrix with the same number of copies of $A_{\pi}$, and this process is injective). For any $n$, take $a\in\mathbb{Z}^{\geq 0}$ such that $2^{a-1}<n\leq 2^a$. Then by the previous computation, \begin{align*} \log(f(n,m)) & \leq\log(f(2^a,m)) \\ & \leq (C_0+1)\left(2^{a+1}+2\sqrt[2k-1]{m\cdot 2^{(a+1)(2k-2)}}\right) \\ & \leq (C_0+1)\left(4n+2\sqrt[2k-1]{m\cdot (4n)^{2k-2}}\right) \\ & \leq 8(C_0+1)\left(n+\sqrt[2k-1]{mn^{2k-2}}\right). \end{align*} Letting $C=8(C_0+1)$ completes the proof of Proposition \ref{matrixcountprop}. \end{proof} Now that we have bounded the total number of $0\mhyphen 1$ matrices that contain few copies of $A_{\pi}$, in the next section we may bound the number of permutations that contain few copies of $\pi$. \section{Permutations with Few Copies of $\pi$}\label{randomproof} This section will be devoted to the proof of Theorem \ref{permsupersaturation}. Let \[S_n(m,\pi):=\{\sigma\in S_n:\sigma\text{ contains at most }m\text{ copies of }\pi\}.\] We would like to show that \[|S_n(m,\pi)|=\exp(O(n))\max\left(1,\left(\frac{m}{n}\right)^{\frac{n}{k-1}}\right).\] First suppose $m<n$, so that the max is dominated by the first term. Then Proposition \ref{matrixcountprop} guarantees that the number of $n\times n$ $0\mhyphen 1$ matrices containing at most $m$ copies of $A_{\pi}$ is at most $\exp\left(C\left(n+\sqrt[2k-1]{mn^{2k-2}}\right)\right)\leq\exp(2Cn)$, since $m<n$. Since each $\sigma\in S_n$ containing at most $m$ copies of $\pi$ gives rise to a distinct permutation matrix $A_{\sigma}$ containing at most $m$ copies of $A_{\pi}$, we see that $|S_n(m,\pi)|=\exp(O(n))$, as desired. Now suppose $m\geq n$. Take $b=\sqrt[2k-2]{\frac{m}{n}}$. Just as we took the $2$-contraction of a matrix in the proof of Proposition \ref{matrixcountprop}, we will define the $b$-contraction of any $n\times n$ $0\mhyphen 1$ matrix. The $b$-contraction of such a matrix $A$ is the $0\mhyphen 1$ matrix $B$ such that the dimensions of $B$ are $\left\lceil\frac{n}{b}\right\rceil\times\left\lceil\frac{n}{b}\right\rceil$, and such that $B_{i,j}=1$ if and only if there exists $i',j'$ with $\left\lceil\frac{i'}{b}\right\rceil=i$ and $\left\lceil\frac{j'}{b}\right\rceil=j$ such that $A_{i',j'}=1$ (so if $A_{i',j'}=0$ for all such $i',j'$, then $B_{i,j}=0$). Let \[n':=\left\lceil\frac{n}{b}\right\rceil=\left\lceil\sqrt[2k-2]{n^{2k-1}m^{-1}}\right\rceil\] so that $B$ is here an $n'\times n'$ matrix. Similarly to before, any occurrence of $A_{\pi}$ in $B$ will correspond to at least one occurrence of $A_{\pi}$ in $A$. This again comes from, for each $1$-entry in $B$ appearing in that occurrence of $A_{\pi}$, choosing a corresponding $1$-entry of $A$, and realizing that these $1$-entries have the same relative row- and column-ordering. For all $\sigma\in S_n$, let $B_{\sigma}$ be the $b$-contraction of $A_{\sigma}$.
Now, we have shown that each occurrence of $A_{\pi}$ in $B_{\sigma}$ gives rise to at least one occurrence of $A_{\pi}$ in $A_{\sigma}$, and the occurrences of $A_{\pi}$ in $A_{\sigma}$ correspond to occurrences of $\pi$ in $\sigma$. Thus we must have that for all $\sigma\in S_n(m,\pi)$, $B_{\sigma}$ contains at most $m$ copies of $A_{\pi}$. But by Proposition \ref{matrixcountprop}, there are at most $\exp\left(C\left(n'+\sqrt[2k-1]{m{n'}^{2k-2}}\right)\right)\leq\exp\left(2C\sqrt[2k-1]{m{n'}^{2k-2}}\right)$ such matrices of the correct dimension (for $C=C(\pi)$; here we used the fact that $m>n\geq n'$, so that ${n'}^{2k-1}\leq m{n'}^{2k-2}$ and the first term in the exponent is dominated by the second). So as $\sigma$ ranges over all elements of $S_n(m,\pi)$, $B_{\sigma}$ ranges over at most \begin{align*} \exp\left(2C\sqrt[2k-1]{m{n'}^{2k-2}}\right) & \leq\exp\left(2C\sqrt[2k-1]{m\left(2\sqrt[2k-2]{n^{2k-1}m^{-1}}\right)^{2k-2}}\right) \\ & =\exp\left(2C\sqrt[2k-1]{2^{2k-2}mn^{2k-1}m^{-1}}\right) \\ & \leq\exp(4Cn) \end{align*} different matrices (where we used the fact that $n'=\left\lceil\sqrt[2k-2]{n^{2k-1}m^{-1}}\right\rceil\leq 2\sqrt[2k-2]{n^{2k-1}m^{-1}}$ as $m\leq\binom{n}{k}<n^{2k-1}$). Therefore, \begin{align} |S_n(m,\pi)| & =\displaystyle\sum_{B}\left|\{\sigma\in S_n(m,\pi):B_{\sigma}=B\}\right| \\ & \label{contractionbound}\leq\exp(4Cn)\displaystyle\max_B\left|\{\sigma\in S_n(m,\pi):B_{\sigma}=B\}\right|. \end{align} Now, since $A_{\sigma}$ is a permutation matrix, it has $n$ ones. By the definition of $b$-contraction, $B_{\sigma}$ must have at most $n$ ones. So in computing $\displaystyle\max_B\left|\{\sigma\in S_n(m,\pi):B_{\sigma}=B\}\right|$ we may assume $B$ is an $n'\times n'$ matrix with at most $n$ ones. Let $B$ be such a matrix, and suppose there are $a_i$ ones in the $i^{th}$ row of $B$. Then $\displaystyle\sum_{i=1}^{n'}a_i\leq n$. How many choices are there for $\sigma$ such that $B_{\sigma}=B$? Consider the first row of $A_{\sigma}$, in which there is exactly one $1$. This $1$, when we take the $b$-contraction, must correspond to a $1$ of $B$ in the first row of $B$. There are $a_1$ such ones in the first row of $B$, and each one corresponds to at most $\left\lceil b\right\rceil$ entries in the first row of $A_{\sigma}$. Thus there are at most $\left\lceil b\right\rceil\cdot a_1$ ways to choose the position of the $1$ in the first row of $A_{\sigma}$--in other words, to choose $\sigma(1)$. Similarly, the $1$-entry in the $i^{th}$ row of $A_{\sigma}$ must correspond to a $1$-entry in the $\left\lceil\frac{i}{b}\right\rceil^{th}$ row of $B$, so there are at most $\left\lceil b\right\rceil\cdot a_{\left\lceil\frac{i}{b}\right\rceil}$ ways to choose the value of $\sigma(i)$. This implies that the total number of choices for $\sigma$ such that $B_{\sigma}=B$ is at most \begin{equation}\label{aiproductbound} \displaystyle\prod_{i=1}^{n}\left\lceil b\right\rceil\cdot a_{\left\lceil\frac{i}{b}\right\rceil}=\left\lceil b\right\rceil^n\displaystyle\prod_{i=1}^{n}a_{\left\lceil\frac{i}{b}\right\rceil}. \end{equation} Now, in the sum \begin{equation}\label{aisum} \displaystyle\sum_{i=1}^{n}a_{\left\lceil\frac{i}{b}\right\rceil}, \end{equation} every particular $a_j$ occurs at most $\left\lceil b\right\rceil$ times, once for every $i$ such that $bj-b<i\leq bj$. Thus (\ref{aisum}) is bounded by $\left\lceil b\right\rceil\displaystyle\sum_{j=1}^{n'}a_j\leq\left\lceil b\right\rceil\cdot n$.
So by the AM-GM inequality, \[\displaystyle\prod_{i=1}^{n}a_{\left\lceil\frac{i}{b}\right\rceil}\leq\left(\frac{1}{n}\displaystyle\sum_{i=1}^{n}a_{\left\lceil\frac{i}{b}\right\rceil}\right)^n\leq\left\lceil b\right\rceil^n.\] Substituting into (\ref{aiproductbound}), we see that there are at most $\left\lceil b\right\rceil^{2n}$ choices for $\sigma$ such that $B_{\sigma}=B$. Finally, substituting into (\ref{contractionbound}), we have derived that \[|S_n(m,\pi)|\leq\exp(4Cn)\left\lceil b\right\rceil^{2n}.\] Now by definition, $b=\sqrt[2k-2]{\frac{m}{n}}$, and $m\geq n$, so $b\geq 1$ and $\left\lceil b\right\rceil\leq 2b=2\sqrt[2k-2]{\frac{m}{n}}$. Therefore, \[\left\lceil b\right\rceil^{2n}\leq4^n\left(\frac{m}{n}\right)^{\frac{n}{k-1}}.\] This implies that \[|S_n(m,\pi)|\leq\exp((4C+2)n)\left(\frac{m}{n}\right)^{\frac{n}{k-1}},\] and replacing $4C+2$ by $C$ finishes the proof of the upper bound in Theorem \ref{permsupersaturation}. It remains to show that this bound is sharp to within an exponential. Suppose without loss of generality that $\pi(1)>\pi(k)$. For $m\leq n$ the all-increasing permutation avoids $\pi$, so we get a lower bound of $1$, which is sufficient. Now suppose $m>n$. Note that $S_n(m,\pi)$ is nondecreasing in $m$ and that changing $m$ by at most a constant multiple does not change our desired lower bound by more than an exponential factor. Thus we may without loss of generality modify $m$ by a constant multiple. In particular, we may assume without loss of generality that $\frac{m}{n}$ is a $(k-1)^{st}$ power, say $a^{k-1}=\frac{m}{n}$, $a\in\mathbb{Z}^+$. Let $S_{n,a}$ be the set of permutations $\sigma\in S_n$ such that: \begin{itemize} \item $\sigma(1),\ldots,\sigma(a)$ is a permutation of $1,\ldots,a$, \item $\sigma(a+1),\ldots,\sigma(2a)$ is a permutation of $a+1,\ldots,2a$, \item[] $\qquad\vdots$ \item $\sigma\left(\left(\left\lfloor\frac{n}{a}\right\rfloor-1\right)a+1\right),\ldots,\sigma\left(\left\lfloor\frac{n}{a}\right\rfloor a\right)$ is a permutation of $\left(\left\lfloor\frac{n}{a}\right\rfloor-1\right)a+1,\ldots,\left\lfloor\frac{n}{a}\right\rfloor a$, \item $\sigma\left(\left\lfloor\frac{n}{a}\right\rfloor a+1\right),\ldots,\sigma(n)$ is a permutation of $\left\lfloor\frac{n}{a}\right\rfloor a+1,\ldots,n$. \end{itemize} Let $n=qa+r$, $q,r\in\mathbb{Z}^{\geq 0}$, $r<a$. Then $|S_{n,a}|=\left(a!\right)^q\cdot r!$. Since $t!\geq\left(\frac{t}{e}\right)^t$ for all $t\in\mathbb{Z}^{\geq 0}$ (using $0^0=1$), we see that \begin{align*} |S_{n,a}| & \geq\left(\frac{a}{e}\right)^{qa}\left(\frac{r}{e}\right)^r \\ & =\left(\frac{r}{a}\right)^r\left(\frac{a}{e}\right)^n, \end{align*} as $qa+r=n$. Now, the function $x^x$ is minimized for $x\in[0,1]$ when $x=\frac{1}{e}$, so $x^x\geq e^{-\frac{1}{e}}$. Thus $\left(\frac{r}{a}\right)^r=\left(\frac{r}{a}\right)^{a\frac{r}{a}}\geq \exp(-\frac{a}{e})$. Now, $m\leq\binom{n}{k}<n^k$, and therefore $a<n$. Thus \[\left(\frac{r}{a}\right)^r\geq\exp(-n).\] Therefore, \[|S_{n,a}|\geq a^n\exp(-2n)=\exp(-2n)\left(\frac{m}{n}\right)^{\frac{n}{k-1}}.\] This is, up to an exponential, our desired bound, so it suffices to show that $S_{n,a}\subseteq S_n(m,\pi)$. Suppose $\sigma\in S_{n,a}$. At what indices can $\pi$ occur in $\sigma$? Let $\pi$ occur at some set of $k$ indices $i_1<\cdots<i_k$. Then since $\pi(1)>\pi(k)$, we must have $\sigma(i_1)>\sigma(i_k)$, while of course $i_1<i_k$. By the definition of $S_{n,a}$, this can only occur when $\left\lceil\frac{i_1}{a}\right\rceil=\left\lceil\frac{i_k}{a}\right\rceil$. Since $i_1<\cdots<i_k$, this means that there is some $t$, $0\leq t\leq q$, such that $ta+1\leq i_1<\cdots<i_k\leq (t+1)a$ (where again $qa+r=n$, $r<a$).
Given a particular value of $t$, there are thus at most $\binom{a}{k}$ choices for $(i_1,\ldots,i_k)$. However, if $t=q$, we have that $qa+1\leq i_1<\cdots<i_k\leq qa+r=n$, so there are in this case only at most $\binom{r}{k}$ choices for $(i_1,\ldots,i_k)$. Thus the total number of occurrences of $\pi$ in $\sigma$ is at most \begin{align*} q\binom{a}{k}+\binom{r}{k} & <qa^k+r^k \\ & \leq qa^k+ra^{k-1} \\ & \leq (qa+r)a^{k-1} \\ & =na^{k-1} \\ & =m. \end{align*} Thus $S_{n,a}\subseteq S_n(m,\pi)$, so we have proved the lower bound and we are done. \section{The Fixed Hypergraph Case}\label{fixedcase} Fix $k,L\in\mathbb{Z}^+$ and $\Lambda$ $k$-uniform on $n$ vertices such that $\Lambda$ contains a collection of $L$-vertex cliques where each of the $n$ vertices belongs to at least $\delta(\Lambda) = \Omega(1)$ cliques in the collection and at most $\Delta(\Lambda) = O(1)$ of them. That is, $\Lambda$ satisfies the hypotheses of Theorem \ref{fixedtheorem}. We would like to show that for every permutation $\pi\in S_k$, \[|Av_{n,\Lambda}(\pi)|=O\left(\p{\frac{n\log^{2+\epsilon}n}{L}}^n\right)\] for all $\epsilon > 0$. For $L=\Theta(n^c)$ with $c \in (0,1]$, this bound is a strict improvement on the $n!$ total $n$-permutations. For $L=n$, we are a logarithmic factor off from the Stanley-Wilf conjecture.\\ It may seem unnatural at first to restrict our arguments only to hypergraphs containing polynomially large cliques. However, we see that there are very dense hypergraphs $\Lambda^*$ with $O(1)$ maximal clique size for which the number of $n$-permutations $\Lambda^*$-avoiding $\pi$ is $O(n)^n$. Namely, the worst case is a multipartite hypergraph. Consider partitioning the vertices of $\Lambda^*$ into two parts, $\{1,\cdots,n/2\}$ and $\{n/2+1,\cdots,n\}$, and adding an edge to $\Lambda^*$ for every collection of $k$ vertices not entirely lying in a single part. This hypergraph will be very dense, containing ${n \choose k}-2{n/2 \choose k} \approx \left(1-2^{1-k}\right){n \choose k}$ edges. However, there is a large class of $n$-permutations avoiding $\pi$ on these edges. Say WLOG $\pi(1)<\pi(k)$. We see that all $n$-permutations $\sigma$ in which the $n/2$ largest elements occupy the first $n/2$ indices and the $n/2$ smallest elements occupy the last $n/2$ indices necessarily $\Lambda^*$-avoid $\pi$. Each edge of $\Lambda^*$ corresponds to a subpermutation $(\sigma(x_1),\cdots,\sigma(x_k))$ in which $\sigma(x_1)>\sigma(x_k)$, and so it cannot be a copy of $\pi$. There are $(n/2)!^2\approx \p{\frac{n}{2e}}^n = O(n)^n$ such permutations, and so there is no meaningful bound we can prove on the number of $\Lambda^*$-avoidant $n$-permutations.\\ Importantly, multipartite hypergraphs are characterized by their small maximal cliques. The bipartite example we considered has maximal clique size $2(k-1)$, taking $k-1$ vertices from each part. Thus the requirement that $\Lambda$ contain large cliques is necessary for our bounds on $\Lambda$-avoidance. \section{Hypergraph Formulation of Pattern-Avoidance}\label{formulation} We consider a $k$-uniform hypergraph $H$ on an $n \times n$ grid of vertices $V(H)$, which we index $v(i,j)$. Define a canonical set to be a subset of $V(H)$ of size $n$ containing exactly one vertex from each row and each column. We see that a canonical set corresponds bijectively to an $n$-permutation $\sigma$. We add edges to $H$ in such a way that a canonical set is independent if and only if the corresponding $n$-permutation $\sigma$ $\Lambda$-avoids the $k$-permutation $\pi$.
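Concretely (anticipating the explicit description in the next paragraph), the edge set of $H$ can be generated by brute force. In the following minimal Python sketch (illustrative only), an edge of $\Lambda$ is a sorted tuple of $k$ indices, $\pi$ is a tuple of values, and a vertex $v(i,j)$ is encoded as the pair $(i,j)$: \begin{verbatim}
from itertools import combinations

def edges_of_H(n, pi, lambda_edges):
    # one edge of H per placement of pi: the k indices xs must form
    # an edge of Lambda, and the k values ys are arranged so that the
    # chosen vertices realise the pattern pi
    k = len(pi)
    E = []
    for xs in lambda_edges:                  # xs = (x_1 < ... < x_k)
        for ys in combinations(range(1, n + 1), k):
            E.append(frozenset((xs[i], ys[pi[i] - 1])
                               for i in range(k)))
    return E
\end{verbatim}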
Essentially, we add an edge for each copy of $\pi$ whose columns form an edge of $\Lambda$. For all $1 \leq x_1 < x_2 < \cdots < x_k \leq n$ with $\{x_1,\cdots,x_k\} \in E(\Lambda)$ and all $1 \leq y_1< \cdots < y_k \leq n$, we have $\{v(x_1,y_{\pi(1)}),v(x_2,y_{\pi(2)}),\cdots,v(x_k,y_{\pi(k)})\} \in E(H)$. We see that a canonical set containing the vertices of this edge would correspond to a permutation $\sigma$ that contains a copy of $\pi$ at indices $x_1,\cdots,x_k$, as desired.\\ We want to show that the number of $n$-permutations that $\Lambda$-avoid $\pi$ is $O\p{\frac{n\log^{2+\epsilon}n}{L}}^n$. Since each permutation corresponds to a single canonical set, we want to show that the number of independent canonical sets is $O\p{\frac{n\log^{2+\epsilon}n}{L}}^n$. In fact, our goal will be to prove a stronger claim, that the number of independent sets of size $n$, of which the independent canonical sets are a subset, is $O\p{\frac{n\log^{2+\epsilon}n}{L}}^n$. \section{The Hypergraph Containers Lemma}\label{HC} We introduce a version of the hypergraph container lemma due to Balogh, Morris, and Samotij \cite{BMS}. Essentially, the container lemma is a means of placing the vertices of a hypergraph into a collection of containers $\mathcal{C}$ in such a way that each independent set in the hypergraph belongs to one of the containers. Additionally, we ensure that no individual container contains too many vertices and that the number of containers isn't too large. We let $\Delta_{\ell}(\mathcal{H})$ be the maximum number of hyperedges of $\mathcal{H}$ that contain a given set of $\ell$ vertices.\\ \begin{proposition}[Container lemma \cite{BMS}]\label{containerlemma} Let $\mathcal{H}$ be a $k$-uniform hypergraph and let $K$ be a constant. There exists a constant $g$ depending only on $k$ and $K$ such that the following holds. Suppose that for some $p\in(0,1)$ and all $\ell \in \{1, \dotsc, k\}$, $$\Delta_{\ell}(\mathcal{H}) \le K\cdot p^{\ell-1}\cdot\frac{e(\mathcal{H})}{v(\mathcal{H})}$$ Then, there exists a family $\mathcal{C}\subseteq \mathcal P(V(\mathcal{H}))$ of \emph{containers} with the following properties: \begin{enumerate} \item $|\mathcal{C}| \leq \binom{v(\mathcal{H})}{k p v(\mathcal{H})} \leq \left(\frac{e}{kp}\right)^{k p v(\mathcal{H})}$, \item $|G| \leq (1-g) \cdot v(\mathcal{H})$ for each $G \in \mathcal{C}$, \item each independent set of $\mathcal{H}$ is contained in some $G \in \mathcal{C}$. \end{enumerate} \end{proposition} This lemma is extremely useful in bounding the number of independent sets of a hypergraph, as the number of independent sets is upper bounded by the sum of the number of independent sets in each container. Or, in our context, the number of independent sets of size $n$ in $H$ is upper bounded by the total number of independent sets of size $n$ over all the containers. However, a single application of the container lemma to our problem will not be strong enough for our purposes, as a single container can still contain $(1-g)|V(\mathcal{H})|=(1-g)n^2$ vertices and potentially have ${(1-g)n^2 \choose n} = O(n)^n$ many independent sets of size $n$. So, we will apply the lemma recursively. Each time we encounter a container with too many vertices, we apply the lemma to the subgraph induced by the vertices of the container and further break it up into more containers. We do this until all the containers are sufficiently small.
Namely, we will attempt to apply the container lemma recursively until all the containers have $\leq U = \frac{Cn^2 \log^{2+\epsilon} n}{L}$ vertices, for a constant $C$ depending only on $k$ and $\pi$.\\ Unfortunately, since we know nothing about the structure of the containers, we have no guarantee that the necessary $\Delta_\ell$ bounds will hold, which are required to apply the lemma to a container. To overcome this problem we employ a strategy similar to that used by Morris and Saxton \cite{MS}. Consider a subgraph $G$ of $H$ induced by some subset/container of the vertices. If we remove some of the edges of $G$ to produce a new subgraph $G'$, then every independent set of $G$ will also be an independent set of $G'$. So, if we apply the containers lemma to $G'$, the resulting containers will also cover all the independent sets of $G$. This will be our approach: for each container subgraph $G$ with $>U$ vertices, we find a subgraph $G' \subseteq G$ on the same vertex set such that $G'$ satisfies the $\Delta_\ell$ bounds for a sufficiently small $p$. We break up $G'$ into containers with the lemma and continue to recurse, guaranteeing that all of the independent sets in the original $H$ are preserved. We ensure that we will not have too many containers in the end because we keep $p$ small.\\ \section{The Recursive Lemma}\label{RL} \begin{lemma}\label{recursivelemma} Let $\gamma = \frac{1}{1-g}$, where $g$ is defined in Proposition \ref{containerlemma}. Consider a subgraph $G \subseteq H$ induced by some subset of the vertices, where $$Cn\gamma^{t-1} < |V(G)| \leq Cn\gamma^{t}$$ for some $t \geq t_0+1$ with $Cn\gamma^{t_0}=U$. There exists a subgraph $G' \subseteq G$ on the same vertex set such that $$\Delta_{\ell}(G') \le K\cdot p_t^{\ell-1}\cdot\frac{|E(G')|}{|V(G)|}$$ for all $\ell \in \{1,\cdots,k\}$, where $p_t = \frac{n}{t^{2+\epsilon}|V(G)|}$.\\ \end{lemma} \begin{proof} We notice that $\Delta_{k}(G')=1$ for any such subgraph, as a set of $k$ vertices belonging to multiple edges would imply that we have duplicate edges. Therefore, for the $\Delta_k$ bound to hold we need $1 \le K\cdot p_t^{k-1}\cdot\frac{|E(G')|}{|V(G)|}$, and so we must have $|E(G')| \geq \frac{|V(G)|}{Kp_t^{k-1}}$. We define $$N = \frac{|V(G)|}{Kp_t^{k-1}}$$ and so we will construct a $G'$ with $|E(G')|=N$, satisfying the $\Delta_k$ bound.\\ For each of the $L$-cliques in $\Lambda$, we define its ``block" to be the subgraph induced on the set of vertices in $V(G)$ belonging to the $L$ columns corresponding to this clique. We call a block $B$ rich if $$|V(B)| \geq d = \sqrt{Lt^{2 + \epsilon}|V(G)|}.$$ We will show that for every rich block $B$, there exists a subgraph $B'$ of $B$, on the same vertex set, with $$|E(B')|=N_B=\frac{2\Delta(\Lambda)}{\delta(\Lambda)}\cdot \frac{|V(B)|}{Kp_t^{k-1}}.$$ We also require $$\Delta_{\ell}(B') \le \frac{1}{\Delta(\Lambda)}\cdot K\cdot p_t^{\ell-1}\cdot\frac{N}{|V(G)|}.$$ If we can prove that such a $B'$ exists for every rich block $B$, then we can construct $G'$ by taking the union of all the $B'$ and then removing edges until only $N$ remain. We see that, for any collection of $\ell$ vertices $v_1,\cdots,v_\ell$, $$\deg_{G'}(v_1,\cdots,v_\ell) \le \sum_{\text{rich blocks }B} \deg_{B'}(v_1,\cdots,v_\ell) \leq \Delta(\Lambda) \cdot \frac{1}{\Delta(\Lambda)}\cdot K\cdot p_t^{\ell-1}\cdot\frac{N}{|V(G)|}$$ since any collection of $\ell$ vertices, as well as any single vertex, belongs to at most $\Delta(\Lambda)$ blocks.
And so, $$\Delta_{\ell}(G') \le \Delta(\Lambda) \cdot \frac{1}{\Delta(\Lambda)}\cdot K\cdot p_t^{\ell-1}\cdot\frac{N}{|V(G)|} = K\cdot p_t^{\ell-1}\cdot\frac{N}{|V(G)|}$$ as desired. We also see that we will have at least $N$ edges in the union of the $B'$, because each edge lies in at most $\Delta(\Lambda)$ blocks and so \begin{align*} \left|\bigcup_{\text{rich blocks }B} E(B')\right| &\geq \frac{1}{\Delta(\Lambda)} \sum_{\text{rich blocks }B} |E(B')|\\ &= \frac{1}{\Delta(\Lambda)} \sum_{\text{rich blocks }B} N_B\\ &= \frac{1}{\Delta(\Lambda)} \sum_{\text{rich blocks }B} \p{\frac{2\Delta(\Lambda)}{\delta(\Lambda)} \cdot \frac{N}{|V(G)|}} |V(B)|\\ &= \frac{2}{\delta(\Lambda)} \cdot \frac{N}{|V(G)|} \cdot \sum_{\text{rich blocks }B} |V(B)|\\ \end{align*} and we see $$\sum_{\text{rich blocks }B} |V(B)| = \sum_{\text{blocks }B} |V(B)| - \sum_{\text{unrich blocks }B} |V(B)|$$ where $$\sum_{\text{blocks }B} |V(B)| \geq \delta(\Lambda) |V(G)|$$ since each vertex belongs to at least $\delta(\Lambda)$ blocks, and $$\sum_{\text{unrich blocks }B} |V(B)| \leq d\cdot(\text{number of unrich blocks}) \leq d\cdot(\text{number of blocks}).$$ Now, since $|V(G)| \leq n^2$, $$d=\sqrt{Lt^{2 + \epsilon}|V(G)|} \leq \sqrt{L|V(G)|\log_\gamma^{2 + \epsilon} \p{\frac{\gamma |V(G)|}{Cn}}} \leq \sqrt{L|V(G)|\log_\gamma^{2 + \epsilon} n}$$ for $C \geq \gamma$, where the first inequality uses $t < \log_\gamma\p{\frac{\gamma|V(G)|}{Cn}}$. And since each of the $n$ vertices in $\Lambda$ belongs to at most $\Delta(\Lambda)$ of the size $L$ cliques, the number of $L$-cliques, which is the number of blocks, is at most $\Delta(\Lambda)n/L$. So, \begin{align*} \sum_{\text{rich blocks }B} |V(B)| &\geq \sum_{\text{blocks }B} |V(B)| -d\cdot(\text{number of blocks})\\ & \geq \delta(\Lambda) |V(G)| - \p{\sqrt{L |V(G)| \log_\gamma^{2 + \epsilon} n}}(\Delta(\Lambda)n/L)\\ & \geq \delta(\Lambda) |V(G)| /2 \end{align*} because \begin{align*} |V(G)| &\geq U = \frac{Cn^2 \log^{2+\epsilon} n}{L}\\ \therefore \sqrt{|V(G)|} &\geq \sqrt{\frac{Cn^2 \log^{2+\epsilon} n}{L}}\\ \therefore |V(G)| &\geq \frac{n}{L} \cdot \sqrt{|V(G)| \cdot CL \log^{2+\epsilon} n}\\ \therefore \delta(\Lambda) |V(G)| /2 &\geq \p{\sqrt{L |V(G)| \log_\gamma^{2 + \epsilon} n}}(\Delta(\Lambda)n/L)\\ \end{align*} for $C \geq \frac{\Delta(\Lambda)^2}{(\delta(\Lambda)/2)^2\log^{2+\epsilon}(\gamma)}$, which is not in terms of $n$ and is therefore a valid bound on the constant $C$.
And so, \begin{align*} \left|\bigcup_{\text{rich blocks }B} E(B')\right| &\geq \frac{2}{\delta(\Lambda)} \cdot \frac{N}{|V(G)|} \cdot \sum_{\text{rich blocks }B} |V(B)|\\ &\geq \frac{2}{\delta(\Lambda)} \cdot \frac{N}{|V(G)|} \cdot \delta(\Lambda) |V(G)| /2\\ & = N \end{align*} as desired.\end{proof} \section{Supersaturation on the Rich Blocks} \label{SRB} From the previous section, we showed that, to prove Lemma \ref{recursivelemma}, it was sufficient to show the following: \begin{lemma} For a block subgraph $B \subseteq G \subseteq H$ with $$|V(B)| \geq d =\sqrt{Lt^{2 + \epsilon}|V(G)|}$$ and $$Cn\gamma^{t-1} < |V(G)| \leq Cn\gamma^{t}$$ for some $t \geq t_0+1$, there exists a subgraph $B' \subseteq B$ on the same vertex set such that $$|E(B')|=N_B=\frac{2\Delta(\Lambda)}{\delta(\Lambda)}\cdot \frac{|V(B)|}{Kp_t^{k-1}}$$ and $$\Delta_{\ell}(B') \le \frac{1}{\Delta(\Lambda)}\cdot K\cdot p_t^{\ell-1}\cdot\frac{N}{|V(G)|}$$ for all $\ell \in \{1,\cdots,k\}$, where $p_t = \frac{n}{t^{2+\epsilon}|V(G)|}$ and $\gamma = \frac{1}{1-g}$, where $g$ is defined in Proposition \ref{containerlemma}.\\ \end{lemma} \begin{proof} We see $$\frac{N_B}{|V(B)|} = \frac{2\Delta(\Lambda)}{\delta(\Lambda)}\cdot \frac{1}{Kp_t^{k-1}} = \frac{2\Delta(\Lambda)}{\delta(\Lambda)}\frac{N}{|V(G)|}$$ So, the $\Delta_\ell$ bound is equivalent to $$\Delta_{\ell}(B') \le \frac{2}{\delta(\Lambda)}\cdot K\cdot p_t^{\ell-1}\cdot\frac{N_B}{|V(B)|}$$ We start our construction of $B'$ with the hypergraph $B_0$ on the vertices of $B$ with no edges. We then iteratively construct $B_1,B_2,\cdots,B_{N_B}$ where we construct $B_{i+1}$ by adding an edge to $B_i$. $B_{N_B}$ will be our $B'$.\\ For every $\ell \in [1,k-1]$ and every $i \in [0,N_B-1]$, we define the dangerous set $D_\ell(B_i)$ to be the set of all sets of $\ell$ vertices $\{v_1,\cdots,v_\ell\}$ where $$|\{E \in E(B_i) | \{v_1,\cdots,v_\ell\} \subseteq E\}| \geq \frac{1}{\delta(\Lambda)}\cdot K\cdot p_t^{\ell-1}\cdot\frac{N_B}{|V(B)|}$$ We can bound $|D_\ell(B_i)|$ by double counting $F,E$ pairs where $$F = \{v_1,\cdots,v_\ell\} \subseteq E \in E(B_i)$$ For an upper bound, we know there are $i \leq N_B$ ways to choose $E \in E(B_i)$ and there are ${k \choose \ell} \leq 2^k$ ways to choose an $F$ belonging to that $E$. For a lower bound, each $F \in D_{\ell}(B_i)$ belongs to at least $\frac{1}{\delta(\Lambda)}\cdot K\cdot p_t^{\ell-1}\cdot\frac{N_B}{|V(B)|}$ many edges and each $F \not \in D_{\ell}(B_i)$ belongs to at least 0 edges. So, $$2^kN_B \geq \text{number of }F,E\text{ pairs} \geq |D_\ell(B_i)|\p{\frac{1}{\delta(\Lambda)}\cdot K\cdot p_t^{\ell-1}\cdot\frac{N_B}{|V(B)|}}$$ $$\therefore |D_\ell(B_i)| \leq \frac{2^{k}\delta(\Lambda)|V(B)|}{Kp_t^{\ell-1}}$$ Now, we say that an edge $E \in E(B)$ is $i$-safe if $F \not \in D_{|F|}(B_i)$ for every nonempty, strict subset $F \subset E$. Our goal for all $i$ will be to construct $B_{i+1}$ by adding an $i$-safe edge to $B_i$ that is not already in $E(B_i)$. If this is always possible, we see that, for all $\ell \in \{1,\cdots,k-1\}$, $$\Delta_\ell(B_{i+1}) \leq \max\p{\Delta_\ell(B_i),\frac{1}{\delta(\Lambda)} \cdot K\cdot p_t^{\ell-1}\cdot\frac{N_B}{|V(B)|}+1}$$ $$\leq \max\p{\Delta_\ell(B_i), \frac{2}{\delta(\Lambda)}\cdot K\cdot p_t^{\ell-1}\cdot\frac{N_B}{|V(B)|}}$$ and therefore, we can show inductively that the $B_{N_B}$ we construct will satisfy $\Delta_\ell(B_{N_B}) \leq \frac{2}{\delta(\Lambda)}\cdot K\cdot p_t^{\ell-1}\cdot\frac{N_B}{|V(B)|}$ and be a valid choice for $B'$, as desired.
In order to show there is always an $i$-safe edge $E$ not already in $E(B_i)$, it is sufficient to show that the number of $i$-safe edges is $\geq N_B$, meaning that, by pigeonhole, one of them is not already in $E(B_i)$.\\ Let $Z$ be the number of $i$-safe edges in $B$. We want to show $Z \geq N_B$. The vertices of $B$ belong to an $n \times L$ matrix grid. We define $S$ to be the set of vertices in $B$ that belong to a random submatrix, selecting each column independently with probability $q$ and each row independently with probability $\frac{Lq}{n}$, for a fixed $q \in (0,1]$. That is, the probability that a single vertex is included in $S$ is $\frac{Lq^2}{n}$, as both its row and column need to be selected. Then, we generate another vertex subset $S' \subseteq S$. We start with $S' = S$ and, for each subset $F \subseteq S'$, if $F \in D_{|F|}(B_i)$, we remove one of the vertices in $F$ from $S'$.\\ Now, we consider the subgraph $R$ induced by $S'$ and define the random variable $X$ to be the number of $i$-safe edges in $R$. Since we removed a vertex from every dangerous $F$ in $S'$, there will be no dangerous $F$ in $V(R)$ and every edge in $R$ is $i$-safe. So, we have $X = |E(R)|$.\\ The probability that any $i$-safe edge in $B$ belongs to $R$ is $\leq \p{\frac{Lq^2}{n}}^{k}$, as each of the $k$ vertices in the edge belongs to $S$ with probability $\frac{Lq^2}{n}$ and $S' \subseteq S$. So, by linearity of expectation, we can upper bound $$\mathbb{E}[X] \leq Z\p{\frac{Lq^2}{n}}^{k}$$ The F\"uredi-Hajnal conjecture \cite{FH}, proven by Marcus and Tardos, states that any $x \times x$ $0\mhyphen 1$ matrix $A$ that avoids a permutation matrix $P$ can have at most $c_Px$ 1-entries, for a constant $c_P$ only in terms of $P$. The analog of this conjecture in our hypergraph formulation is the following: for a hypergraph with an $x \times x$ grid of vertices and edges corresponding to the copies of $P$ on this grid, any independent set of this graph has at most $c_Px$ vertices. Using this, we can lower bound the number of edges in $R$ using a supersaturation argument.\\ Let $x = \max(\text{number of rows selected in }S, \text{number of columns selected in }S)$. So, all of the vertices in $R$ belong to an $x \times x$ subgrid. We claim that, by F\"uredi-Hajnal, $|E(R)| \geq |V(R)|-c_Px$. While $R$ has more than $c_Px$ vertices, we can find an edge in $R$ and delete one of the vertices in that edge. This decreases $|V(R)|$ by 1, and decreases $|E(R)|$ by at least 1. Repeating this process until the number of vertices left in $R$ is $c_Px$, we must have removed at least $|V(R)|-c_Px$ edges which were originally in $R$.
Thus, by linearity of expectation, $$\mathbb{E}[|E(R)|] \geq \mathbb{E}[|V(R)|-c_Px] = \mathbb{E}[|V(R)|]-c_P\mathbb{E}[x]$$ Now, \begin{align*} \mathbb{E}[|V(R)|] &= \mathbb{E}[|S'|] = \mathbb{E}[|S| - \text{at most 1 for each dangerous set in }S]\\ &\geq \mathbb{E}[|S|] - \sum_{\ell = 1}^{k-1}\sum_{F \in D_\ell(B_i)} \text{Pr}[F \subseteq S]\\ &= \frac{Lq^2}{n} |V(B)| - \sum_{\ell = 1}^{k-1} |D_\ell(B_i)| \cdot \p{\frac{Lq^2}{n}}^{\ell}\\ \end{align*} and \begin{align*} \mathbb{E}[x] &= \mathbb{E}[\max(\text{number of rows selected}, \text{number of columns selected})]\\ &\leq \mathbb{E}[\text{number of rows selected}]+\mathbb{E}[ \text{number of columns selected}]\\ &= \frac{Lq}{n}\cdot n + q\cdot L = 2qL \end{align*} Therefore, $$Z\p{\frac{Lq^2}{n}}^{k} \geq \mathbb{E}[|E(R)|] \geq \frac{Lq^2}{n} |V(B)| - \sum_{\ell = 1}^{k-1} |D_\ell(B_i)| \cdot \p{\frac{Lq^2}{n}}^{\ell} - 2qc_PL$$ We take $C > 16c_P^2$, which is only in terms of $\pi$ and is therefore a valid bound on the constant $C$. Then, since $B$ is rich and $|V(G)| \geq U$, we have $|V(B)| \geq d = \sqrt{Lt^{2+\epsilon}|V(G)|} \geq \sqrt{Lt^{2+\epsilon}U} \geq n\sqrt{C} > 4c_Pn$, so we can set $q = \frac{4c_Pn}{|V(B)|} < 1$ and thus $$\frac{Lq^2}{n} |V(B)| - 2qc_PL \geq \frac{Lq^2}{2n} |V(B)|$$ and $$Z\p{\frac{Lq^2}{n}}^{k} \geq \mathbb{E}[|E(R)|] \geq \frac{Lq^2}{2n} |V(B)| - \sum_{\ell = 1}^{k-1} |D_\ell(B_i)| \cdot \p{\frac{Lq^2}{n}}^{\ell}$$ So, in order to show $Z \geq N_B$, it is sufficient to show $$\frac{Lq^2}{2n} |V(B)| - \sum_{\ell = 1}^{k-1} |D_\ell(B_i)| \cdot \p{\frac{Lq^2}{n}}^{\ell} \geq N_B\p{\frac{Lq^2}{n}}^{k}= \frac{2\Delta(\Lambda)}{\delta(\Lambda)}\cdot \frac{|V(B)|}{Kp_t^{k-1}}\p{\frac{Lq^2}{n}}^{k}$$ Substituting in our bound for $|D_\ell(B_i)|$, it is sufficient to show $$\frac{Lq^2}{2n} |V(B)| - \sum_{\ell = 1}^{k-1} \frac{2^{k}\delta(\Lambda)|V(B)|}{Kp_t^{\ell-1}} \cdot \p{\frac{Lq^2}{n}}^{\ell} \geq \frac{2\Delta(\Lambda)}{\delta(\Lambda)}\cdot \frac{|V(B)|}{Kp_t^{k-1}}\p{\frac{Lq^2}{n}}^{k}$$ $$\Leftrightarrow \p{\frac{Lq^2|V(B)|}{n}}\cdot \frac{1}{2} - \p{\frac{Lq^2|V(B)|}{n}}\sum_{\ell = 1}^{k-1} \frac{2^{k}\delta(\Lambda)}{K} \cdot \p{\frac{Lq^2}{np_t}}^{\ell-1} \geq \p{\frac{Lq^2|V(B)|}{n}}\frac{2\Delta(\Lambda)}{K\delta(\Lambda)}\p{\frac{Lq^2}{np_t}}^{k-1}$$ $$\Leftrightarrow \frac{1}{2} - \sum_{\ell = 1}^{k-1} \frac{2^{k}\delta(\Lambda)}{K} \cdot \p{\frac{Lq^2}{np_t}}^{\ell-1} \geq \frac{2\Delta(\Lambda)}{K\delta(\Lambda)}\p{\frac{Lq^2}{np_t}}^{k-1}$$ and it is sufficient to show $$\frac{K}{2\max\p{2^{k}\delta(\Lambda),\frac{2\Delta(\Lambda)}{\delta(\Lambda)}}} \geq \sum_{\ell = 1}^{k} \p{\frac{Lq^2}{np_t}}^{\ell-1} $$ We can take $$K \geq (16c_P^2)^{k-1} \cdot k \cdot 2\max\p{2^{k}\delta(\Lambda),\frac{2\Delta(\Lambda)}{\delta(\Lambda)}}$$ which is not in terms of $n$ and is therefore a valid bound on the constant $K$. So, all that remains to show is $\frac{Lq^2}{np_t} \leq 16c_P^2$. We see $$\frac{Lq^2}{np_t} = \frac{L}{n}\cdot\frac{16c_P^2n^2}{|V(B)|^2}\cdot \frac{1}{p_t}=\frac{16c_P^2Ln}{p_t|V(B)|^2} \leq 16c_P^2$$ $$\Leftrightarrow |V(B)|^2 \geq \frac{Ln}{p_t}$$ and $$|V(B)|^2 \geq Lt^{2 + \epsilon}|V(G)|=Ln/\p{\frac{n}{t^{2 + \epsilon}|V(G)|}} = \frac{Ln}{p_t}$$ as desired.
\end{proof} \section{Applying Recursive Hypergraph Containers}\label{genbound} \begin{proof}[Proof of Theorem \ref{fixedtheorem}] To recap our hypergraph formulation of the problem, to prove Theorem \ref{fixedtheorem} it is sufficient to prove that the hypergraph $H$ has at most $O\p{\frac{n\log^{2+\epsilon}n}{L}}^n$ independent sets of size $n$.\\ Lemma \ref{recursivelemma} has shown that, for a general container $G$, we can apply the container lemma for a certain $p=p_t$ depending on the size of $G$, and further split $G$ into more containers. Starting from the original graph $H$, we can repeat this process recursively until all of our containers have $\leq U = \frac{Cn^2 \log^{2+\epsilon} n}{L}$ vertices. We are trying to count the number of independent sets of size $n$ in the original hypergraph and we know every independent set in the original graph is a subset of one of these containers. Each container of size $\leq U$ has $\leq {U \choose n} \leq \p{\frac{eU}{n}}^n = O\p{\frac{n\log^{2+\epsilon}n}{L}}^n$ subsets of size $n$, and so the number of independent sets of size $n$ in this container is also bounded by this amount. Therefore, all that remains to show is that the number of containers is singly exponential in $n$. If we can show this, then we will have \begin{align*} \text{number of size-$n$ independent sets in $H$} &\leq \sum_{\text{containers }C}\text{number of size-$n$ independent sets in $C$}\\ &\leq (\text{number of containers})\cdot O\p{\frac{n\log^{2+\epsilon}n}{L}}^n = c^n \cdot O\p{\frac{n\log^{2+\epsilon}n}{L}}^n = O\p{\frac{n\log^{2+\epsilon}n}{L}}^n\\ \end{align*} Say we encounter a container $G$ with $Cn\gamma^{t-1} < v(G) \leq Cn\gamma^{t}$ and $t \geq t_0+1$. From our lemma, we know we can apply the container lemma with $p=p_t = \frac{n}{t^{2+\epsilon}|V(G)|}$ and split $G$ into at most $$\p{\frac{e}{kp_t}}^{kp_tv(G)} = \p{\frac{et^{2+\epsilon}|V(G)|}{kn}}^{kn/t^{2+\epsilon}} \leq \p{\frac{et^{2+\epsilon}Cn\gamma^t}{kn}}^{kn/t^{2+\epsilon}} = \p{\frac{et^{2+\epsilon}C\gamma^t}{k}}^{kn/t^{2+\epsilon}}$$ containers. Additionally, we know that all of the resulting containers will contain at most $(1-g)v(G) \leq Cn\gamma^{t-1}$ vertices. We will subsequently break down these child containers using $p=p_s$ for some $s \leq t-1$. Say $T=\log_\gamma(n/C)$ or equivalently $Cn\gamma^T = n^2$. In the worst case, after we break up $H$ with $p=p_T$, we break up all of $H$'s child containers with $p=p_{T-1}$, all of $H$'s grandchild containers with $p=p_{T-2}$, etc., all the way to $p=p_{t_0+1}$. However, we can never encounter two consecutive generations of containers on which we apply the containers lemma with the same $p_t$; $t$ is always strictly decreasing. Thus, the number of containers we have at the end is at most $$\prod_{t=t_0+1}^T \p{\frac{e}{kp_t}}^{kp_tv(G)} \leq \prod_{t=t_0+1}^T \p{\frac{et^{2+\epsilon}C\gamma^t}{k}}^{kn/t^{2+\epsilon}}\leq \prod_{t=t_0+1}^T (A^t)^{kn/t^{2+\epsilon}} = A^{kn\sum_{t=t_0+1}^T \frac{1}{t^{1+\epsilon}}}$$ for a suitable constant $A$, which is singly exponential in $n$, as $\sum_{t=1}^\infty \frac{1}{t^{1+\epsilon}}$ is a convergent sum and $A$ is in terms of $C$ and $\gamma$, which are in terms of $\pi$ and $k$, as desired. \end{proof} \section{Conclusion}\label{conclusion} We have managed to show that the number of $n$-permutations $\Lambda$-avoiding $\pi$ is $O\p{\frac{n\log^{2+\epsilon}n}{L}}^n$ only relying on the fact that $\Lambda$ contains a certain collection of size-$L$ cliques. This bound holds for positive $\epsilon$ arbitrarily close to $0$.
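For ease of comparison, the two main bounds of this paper read \[\underbrace{\exp(O(n))\,\alpha^{-\frac{n}{k-1}}}_{\text{random }\Lambda,\text{ edge probability }\alpha}\qquad\text{and}\qquad\underbrace{O\p{\frac{n\log^{2+\epsilon}n}{L}}^n}_{\text{fixed }\Lambda\text{ with a suitable collection of }L\text{-cliques}}.\]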
When $L$ is polynomial in $n$, that is $L=\Theta(n^c)$ with $c \in (0,1]$, this bound is a strict improvement on the $n!=O(n)^n$ total $n$-permutations. For $L=n$, we are a logarithmic factor off from the Stanley-Wilf conjecture.\\ Our bound of $\exp(O(n))\alpha^{-\frac{n}{k-1}}$ for when $\Lambda$ is a random hypergraph with edge probability $\alpha$ is therefore in some ways more general, as w.h.p. there are no cliques of polynomial size in $n$ in such a random hypergraph. This is expected, as the weakest part of our argument came from the deterministic nature of $\Lambda$. When we are bounding the sum of the vertices in the rich blocks, $$\sum_{\text{rich blocks }B} |V(B)| = \sum_{\text{blocks }B} |V(B)| - \sum_{\text{unrich blocks }B} |V(B)|$$ \noindent the best bound for the unrich blocks $$\sum_{\text{unrich blocks }B} |V(B)| \leq d\cdot(\text{number of unrich blocks}) \leq d\cdot(\text{number of blocks})$$ assumes that all the blocks are unrich, accounting for the worst deterministic case. When the locations of the blocks are randomized, we can make a stronger statement in expectation. However, such a reliance on large cliques in the fixed $\Lambda$ case is necessary to achieve any meaningful bound, as we exhibited dense multipartite hypergraphs $\Lambda^*$, with constant maximal clique size, for which $O(n)^n$ $n$-permutations $\Lambda^*$-avoid $\pi$. This gives us hope that the conditions we place on the fixed $\Lambda$ are relatively tight.\\ An open problem is to remove the $\log^{2+\epsilon}n$ term from the bound. The term comes from the use of hypergraph containers in a recursive branching fashion. Each container in the tree is broken down using the containers lemma as a black box, necessitating this term. It may be removable by reworking the arguments of the containers lemma to tailor them to this recursive usage, which would improve our bound especially for $L=\Theta(n)$. \section{Acknowledgements} The authors would like to thank Asaf Ferber for his mentorship throughout this research. His teachings and advice were invaluable.
\section{Introduction} Observations of ultra-luminous X-ray sources and simulations of globular cluster dynamics suggest the existence of intermediate-mass black holes (IMBHs). However, observational evidence for their existence is still under debate, see e.g.~\cite{MillerLISA:2009,MillerIMBH:2004}. Gravitational waves from binary coalescences involving IMBHs with masses \unit{50}{\smass} $\lesssim M \lesssim$ \unit{500}{\smass} are potentially detectable by advanced detectors -- including Advanced LIGO \cite{Harry:2010}, Advanced Virgo \cite{aVirgo}, and KAGRA \cite{KAGRA} -- with a low frequency cutoff of around \unit{10}{\hertz}. If IMBHs do exist, one likely contribution to gravitational-wave detections is believed to be through the coalescence of a compact stellar-mass companion (black hole or neutron star) with an IMBH, at a possible rate of up to $\sim$\unit{10}{\yr^{-1}} \cite{LSCVrates:2010,BrownIMRI:2007,MandelIMRI:2008}. We will denote these signals as intermediate mass-ratio coalescences (IMRACs)\footnote{ In the literature, the term frequently used for this class of objects is \textit{intermediate mass-ratio inspirals} or IMRIs, see e.g.~\cite{BrownIMRI:2007,MandelIMRI:2008}. However, in the context of ground-based observations, in particular with second-generation instruments, we will show that the full coalescence is important for these systems, and it therefore seems more appropriate to call them IMRACs.}. Given that IMBHs in this mass range have proved extremely difficult to observe in the electromagnetic spectrum, gravitational-wave detections may provide the first unambiguous observations of such objects through the robust measurement of their masses. Such observations would form an important channel for probing the dynamical history of globular clusters. Furthermore, Advanced LIGO/Virgo (aLIGO/Virgo) may be able to provide measurements of the quadrupole moment of a black hole \cite{BrownIMRI:2007, Rodriguez:2012}, which would allow a null-hypothesis test of the Kerr metric for IMBHs. The gravitational waveform from the coalescence of two compact objects can be divided into three phases: a gradual inspiral, a rapid merger, and the quasi-normal ringdown of the resulting black hole. The relative contribution to the expected coalescence signal from inspiral, merger and ringdown is an important consideration for gravitational-wave searches. To leading Newtonian order the gravitational-wave frequency at the innermost stable circular orbit (ISCO) is \beq f_{\mathrm{ISCO}} = \unit{4.4}{\kilo\hertz}\ \left(\frac{M_{\odot}}{M}\right)\,, \label{eq:fisco} \end{equation} where $M$ is the total mass of the binary. For advanced detectors with a low frequency cut-off of $\sim 10\,\mathrm{Hz}$, we may only have access to either the very late stages of the inspiral, or solely merger and ringdown for the heaviest IMRAC systems. While the power in the merger and ringdown is suppressed by a factor of the mass ratio relative to the power in the inspiral, the fact that IMRAC systems are liable to merge either in-band, or at the low frequency limit of the bandwidth, means that merger and ringdown may be significant over a large portion of the detectable mass-space. Additionally, for cases where IMRAC waveforms are inspiral-dominated, it is useful to know where inspiral-only searches could be targeted. Detecting IMRACs through gravitational waves will require template gravitational-waveform families adapted to highly asymmetrical mass-ratio systems.
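To make Eq.~(\ref{eq:fisco}) concrete, the short Python sketch below (our illustration; the \unit{10}{\hertz} cutoff is taken from the text, while the sample masses are chosen freely) tabulates $f_{\mathrm{ISCO}}$ across the IMRAC mass range:
\begin{verbatim}
# Schwarzschild ISCO gravitational-wave frequency, Eq. (fisco):
# f_ISCO ~ 4.4 kHz * (Msun / M); 10 Hz is the assumed detector cutoff.
def f_isco_hz(total_mass_msun):
    return 4.4e3 / total_mass_msun

for M in (35.0, 100.0, 300.0, 440.0):
    f = f_isco_hz(M)
    regime = "late inspiral in band" if f > 10.0 else "merger/ringdown only"
    print(f"M = {M:5.0f} Msun:  f_ISCO = {f:6.1f} Hz  ({regime})")
\end{verbatim}
At $M \simeq$ \unit{440}{\smass} the ISCO frequency reaches the \unit{10}{\hertz} cutoff, so for heavier systems essentially only merger and ringdown remain in band.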
However, the development of numerical relativity simulations and perturbative techniques in this regime is at an early stage, which is potentially problematic (see, e.g. \cite{NRpert} for a discussion of this issue). The issue of appropriate template waveform families is thus central to the detection of IMRACs through gravitational waves. The effective-one-body approach, calibrated to numerical relativity, has led to template waveforms, known as EOBNR \cite{PanEOBNR:2011}, that describe the full inspiral, merger and ringdown coalescence signal for comparable mass-ratio binaries; EOBNR waveforms should also be accurate at extreme mass ratios. However, to date only one full numerical simulation exists for mass-ratio $q=1/100$ binaries \cite{RIT:2011}. Furthermore, although EOBNR waveforms have been constructed to reproduce the dynamical evolution of binaries with extreme mass-ratios, they have not yet been compared to numerical relativity simulations at such mass ratios, so their validity in the IMRAC regime remains to be demonstrated. Meanwhile, in the context of extreme mass-ratio binaries, several authors have modelled the two-body motion by computing radiative and conservative self-force corrections to Kerr geodesic motion \cite{GG:2006, BFGGH:2007}. Waveforms computed within this scheme are inspiral-only and are only developed to lowest order in the mass ratio. These waveforms have been adapted to describe intermediate mass-ratio inspirals by including higher-order-in-mass-ratio corrections in \cite{HuertaGair:2009} and have been used to study the detection of intermediate mass-ratio inspirals in the context of the proposed third-generation ground-based gravitational-wave interferometer, the Einstein Telescope \cite{HG:2011}. We refer to these intermediate mass-ratio inspiral waveforms as the ``Huerta-Gair'' (HG) waveform family after its authors. This waveform family should be physically well motivated to describe the inspiral of IMRACs. Typically one does not have an exact representation of ``true'' gravitational-wave signals but requires templates which are sufficiently effective at filtering such signals. A common figure of merit for quantifying how effective approximate waveform families are at filtering gravitational-wave signals is known as the ``effectiveness'', or fitting factor \cite{Buonanno:2009}. This measures the fraction of the theoretical maximum signal-to-noise ratio (SNR) that could be recovered by using non-exact template waveforms. The work in this paper proceeds as follows. Firstly, by computing the effectiveness of inspiral-only template waveforms at filtering the full coalescence signal, we determine the relative importance of the inspiral and merger-ringdown phases. We identify three regions in the component mass plane in which: $(a)$ inspiral-only searches are feasible with losses in detection rates $L$ in the range $10\% \lesssim L \lesssim 27\%$, $(b)$ searches are limited by the lack of merger and ringdown in template waveforms and are liable to incur losses in detection rates in the range $27\% \lesssim L \lesssim 50\%$, and $(c)$ merger and ringdown are essential for searches in order to prevent losses in detection rates greater than $50\%$. Secondly, to gain insight into the accuracy of the inspiral portion of IMRAC waveforms, we compute the effectiveness of the inspiral-only portion of EOBNR waveforms at filtering gravitational-wave signals as described by the HG waveform family.
We find that there is a non-negligible discrepancy between EOBNR and HG inspirals in the regime where inspiral-only searches could be considered sufficient. For reference we also compare EOBNR inspirals to a post-Newtonian (PN) waveform family known as TaylorT4 \cite{Buonanno:2009}. The PN expansion is liable to be a poor choice of approximant for IMRACs because of the large number of cycles spent at small radii. We find that EOBNR and HG are in better agreement with each other than with TaylorT4, as might be expected from the previous observation. Our approach does not directly address the accuracy of template waveforms, because none of the waveforms considered have been matched to full numerical waveforms. However, assuming that the waveform families we consider ``bracket'' the correct gravitational waveforms in the IMRAC regime, this approach provides a useful heuristic for quantifying the effectiveness of existing gravitational waveforms for IMRAC searches. Further numerical relativity simulations will be important in the continuing development of accurate template waveforms for IMRACs. Our analysis improves upon previous work to determine the detectability of IMRAC sources \cite{MandelGair:2009}, which only considered the so-called ``faithfulness'' of template waveforms, i.e., the effectiveness of template waveforms evaluated at the signal parameters. Additionally, that study only considered inspiral-only waveforms and focused on low frequency observations, e.g. with the proposed Laser Interferometer Space Antenna (LISA). This paper is organized as follows. In Sec.~\ref{sec:waveforms} we describe the waveform families used in our study. In Sec.~\ref{sec:SNR} we compute the contributions to the SNR from the inspiral and merger and ringdown phases of EOBNR waveforms in the intermediate mass-ratio regime. In Sec.~\ref{sec:IMR} we study the effectiveness of inspiral-only waveforms to filter full coalescence signals from IMRAC sources and identify the three regions in which different searches could be conducted. In Sec.~\ref{sec:insp_only} we compare the inspiral portion of EOBNR waveforms to HG and TaylorT4 waveforms. In Sec.~\ref{sec:conclusion} we consider the implications of our results for future searches in advanced detectors. \section{Waveforms} In this section we summarise the key concepts entering the construction of the waveforms used in this study. Throughout the paper, for a binary system with individual component masses $m_1$ and $m_2$ (with $m_{2} < m_{1}$) we define the total mass as $M \equiv m_{1}+m_{2}$, and mass ratio and symmetric mass ratio as $ q \equiv m_2/m_1$ and $\eta \equiv m_{1}m_{2}/(m_{1}+m_{2})^{2}$, respectively. We consider the family of waveforms constructed by calibrating the effective-one-body approach to numerical relativity (EOBNR) \cite{PanEOBNR:2011}. The EOBNR family describes the full inspiral-merger-ringdown signal; it is currently used in searches that reach the IMBH mass range, so far up to \unit{100}{\smass} \cite{s6Highmass:2012}. The free parameters in the family have been fitted to comparable mass ratio numerical relativity simulations, and by construction this family is deemed to be faithful in the test particle limit. For this work, we use the implementation provided by the LIGO Scientific Collaboration Algorithm Library (LAL) that corresponds to the approximant EOBNRv2.
We also consider a waveform family based on test particle motion in Kerr/Schwarzschild space-time with radiative and conservative self-force corrections, which we refer to as the Huerta-Gair (HG) family~\cite{HG:2011}. The approximation scheme is constructed specifically to handle highly-asymmetrical mass-ratio binaries and is therefore a physically well motivated approximation scheme for intermediate mass-ratio inspirals. These waveforms have been compared against Teukolsky-based waveforms for inspiralling test particles on geodesic orbits, and the match exceeds $95\%$ over a large portion of the parameter space \cite{HuertaGair:2009}. These waveforms have been used to study detection of intermediate mass-ratio inspirals with the Einstein Telescope in \cite{HG:2011}. The Huerta-Gair waveforms describe only the inspiral portion of the coalescence signal. There is no corresponding LAL approximant. The gravitational-wave polarization states can be computed from Eqs.~(14) and (15) of \cite{HG:2011}. For our study, effects of orientation of the gravitational-wave source are irrelevant and we can consider only circularly-polarized face-on binaries. We fix the spin parameter to zero. Finally, as a reference we use the standard inspiral-only post-Newtonian approximation corresponding to the LAL approximant TaylorT4, which includes corrections to the phase of the waveform up to 3.5PN order~\cite{Buonanno:2009}. The TaylorT4 waveforms used here are computed in the so-called ``restricted'' amplitude approximation, which assumes the waveform amplitude to be zeroth post-Newtonian order and only includes the leading second harmonic of the orbital phase. We only include the leading second harmonic of the orbital phase in the EOBNR waveforms. We do not consider the effects of spin or eccentricity in any of the waveform families, as we restrict our attention to circular orbits and non-spinning black holes. The HG and TaylorT4 families are inspiral-only time-domain waveforms and are terminated when the gravitational waveforms reach the ISCO frequency. \label{sec:waveforms} \section{SNR from inspiral, merger and ringdown} In this section we consider the relative contributions of the different portions of the gravitational-wave coalescence signal to the SNR as a function of the IMRAC's mass. We work in the frequency domain and define the Fourier transform of the gravitational-wave strain signal, $\tilde{h}(f)$, as \beq\label{ft} \tilde{h}(f) = \int^{+\infty}_{-\infty} dt\ h(t)e^{-2\pi i ft}\,, \end{equation} where $h(t)$ is the time-domain strain signal. We define the noise-weighted inner product as \beq (a|b) = 4\Re\left[\int^{f_{\mathrm{max}}}_{f_{\mathrm{min}}}df \ \frac{\tilde{a}(f)\tilde{b}^{*}(f)}{S_{n}(f)}\right],\ \end{equation} \noindent where $S_{n}(f)$ is the instrument noise power spectral density (PSD), which we will take to be the Advanced LIGO high-power, zero-detuned noise PSD \cite{aLIGOpsd}. The limits of integration correspond to the bandwidth of the detector.
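As a concrete (and deliberately simplified) illustration of this inner product, the following Python sketch evaluates $(a|b)$ on a discrete one-sided frequency grid; the flat PSD and toy signal are placeholders of our own choosing, not the zero-detuned aLIGO curve used in the paper:
\begin{verbatim}
import numpy as np

def inner_product(a_f, b_f, psd, df):
    # (a|b) = 4 Re sum a(f) conj(b(f)) / Sn(f) df on a common grid
    return 4.0 * np.real(np.sum(a_f * np.conj(b_f) / psd)) * df

f = np.arange(10.0, 1024.0, 0.25)      # detector band above 10 Hz
df = f[1] - f[0]
psd = np.full_like(f, 1e-46)           # placeholder flat PSD
h = 1e-23 * f**(-7.0/6.0) * np.exp(-2j * np.pi * f * 0.1)  # toy signal
print("optimal SNR:", np.sqrt(inner_product(h, h, psd, df)))
\end{verbatim}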
The expectation value of the optimal matched filtering SNR, in the case when the signal and template waveforms are identical, is given by \cite{CutFlan:93} \bea \Big(\frac{S}{N}\Big)_{\mathrm{max}} & = & (h|h)^{1/2}\,, \nonumber\\ & = & \Bigg[4\Re\int \left(\frac{f |\tilde{h}(f)|}{\sqrt{f S_n(f)}}\right)^2 d\ln f\Bigg]^{1/2}.\ \label{eq:max_snr} \end{eqnarray} Writing the maximum SNR in the form above clearly separates it into contributions from the signal strain, $f |\tilde{h}(f)|$, and the root-mean-squared (rms) noise spectral amplitude, $\sqrt{f S_n(f)}$, which is the strain signal associated with the detector noise. One can gain insight into the relative contributions to the SNR from inspiral, merger and ringdown by comparing the gravitational-wave strain to the noise rms value. In Fig.~\ref{fig:RMS} we show the strain for selected overhead and face-on (i.e., optimally-located and oriented) IMRAC sources at a fiducial distance of 1 Gpc as described by the EOBNR waveform family, and the noise rms amplitude. The ISCO frequency of each signal is shown as a vertical line. The strain from merger and ringdown is thus the portion after the ISCO frequency. The contribution to the strain from merger and ringdown from binaries with component masses $(m_{1}, m_{2}) = [\unit{(200, 20)}{\smass}, \unit{(200, 2)}{\smass}]$ is greater than that of the noise rms amplitude (black curve with triangles). In general, systems with ISCO frequencies between \unit{30}{\hertz} and \unit{100}{\hertz} merge in the ``bucket'' of the noise curve, i.e., where the detector is most sensitive. For example, for the $(m_{1}, m_{2}) = \unit{(200, 20)}{\smass}$ system (red dotted curve in Fig.~\ref{fig:RMS}), the merger and ringdown contribute the bulk of the SNR. Conversely, the inspiral contribution to the SNR is strongly suppressed for such massive systems. \begin{figure*} \includegraphics[scale=0.5]{./rms_strain.png} \caption{Strain of optimally-located and oriented IMRAC sources at a fiducial distance of 1 Gpc as described by EOBNR waveforms, and aLIGO noise. The corresponding ISCO frequency of each signal is shown as a vertical dashed line. The strain from the merger and ringdown from each source contributes after the ISCO frequency. For the sources with component masses $(m_{1}, m_{2}) = [\unit{(200, 20)}{\smass}, \unit{(200, 2)}{\smass}]$, the strain from merger and ringdown sits above the noise spectrum. The SNR from the full EOBNR waveform and from its inspiral-only portion are shown in Fig.~\ref{fig:SNRs}.} \label{fig:RMS} \end{figure*} In Fig.~\ref{fig:SNRs} we show the maximum SNR, Eq.~(\ref{eq:max_snr}), as a function of the binary's total mass produced by inspiral-only and full EOBNR waveforms at four different mass-ratios in the range $1/200 \le q \le 1/10$. We construct inspiral-only EOBNR waveforms by Fourier transforming the full waveform into the frequency domain and truncating it at the ISCO frequency. We have considered the SNR for optimally-located and oriented sources at a fiducial distance of 1 Gpc. The lower-bound mass of the smaller body is set to $m_{2} = \unit{1.4}{\smass}$, which is the canonical neutron-star mass. The lowest total mass for the $q=1/50,\ 1/100$ and $1/200$ subplots is set by fixing the mass of the smaller body to $m_{2} = \unit{1.4}{\smass}$. For the $q=1/10$ subplot in Fig.~\ref{fig:SNRs} the smallest total mass is set to $M=$ \unit{35}{\smass} as the inspiral phase accounts for the vast majority of the SNR below this value.
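The qualitative behaviour of these SNR curves can be mimicked with a toy calculation (ours, not the EOBNR-based one of Fig.~\ref{fig:SNRs}): taking a Newtonian inspiral amplitude $|\tilde{h}(f)| \propto f^{-7/6}$ and a crude analytic stand-in for the noise bucket, the fraction of the band-limited $\rho^{2}$ accumulated below $f_{\mathrm{ISCO}}$ falls rapidly with total mass:
\begin{verbatim}
import numpy as np

f = np.arange(10.0, 2048.0, 0.25)
psd = (f / 50.0)**-4 + 2.0 + (f / 50.0)**2  # toy "bucket" noise shape
hf2 = f**(-7.0/3.0)                         # |h(f)|^2, Newtonian inspiral

total = np.sum(hf2 / psd)
for M in (35.0, 100.0, 300.0):
    f_isco = 4.4e3 / M                      # Eq. (fisco)
    frac = np.sum(hf2[f < f_isco] / psd[f < f_isco]) / total
    print(f"M = {M:5.0f} Msun: fraction of rho^2 below f_ISCO ~ {frac:.2f}")
\end{verbatim}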
The lower limit of integration of Eq.~(\ref{eq:max_snr}) is \unit{10}{\hertz} and the upper limit is \unit{2048}{\hertz}, which is the Nyquist frequency of discretely sampled EOBNR waveforms generated in the time domain at a sampling rate of $4096$\,Hz, i.e., a sampling interval of $\Delta t = 1/4096$\,s. We only consider systems with total masses such that the ISCO frequency is greater than \unit{10}{\hertz} (our low frequency cut-off). The highest total mass for each of the subplots in Fig.~\ref{fig:SNRs} is set to $M=$ \unit{300}{\smass}, which ensures the ISCO frequency is greater than \unit{10}{\hertz}. As anticipated from Fig.~\ref{fig:RMS} there is a significant difference in the SNR between inspiral-only and full EOBNR waveforms that can be seen at all four mass-ratios. We also note that for systems with mass-ratios of $q=1/10$ with total masses below around $M=$ \unit{35}{\smass} the inspiral phase is the dominant source of SNR. If we consider 3\% as a fiducial value of the difference between the full SNR and the one associated with the inspiral-only waveform -- which leads to a loss in detection rates of $10\%$ -- this happens at $M\approx$ \unit{35}{\smass} for $q=1/10$. For binaries with $q=1/50,\ 1/100$ and $1/200$, the minimum difference in SNR between inspiral-only and full waveforms is $\approx 6\%,\ 15\%$ and $40\%$ respectively for the mass ranges considered in Fig.~\ref{fig:SNRs}. In summary, we have shown that inspiral-only templates will miss a significant portion of the total SNR of IMRAC signals over the bulk of the detectable mass-space. Future searches will therefore require templates that can match the full inspiral-merger-ringdown. However, there is a small region of the parameter space for which inspiral-only templates may suffice for searches, without inducing drastic losses in detection rates. In the following section we quantify the effectiveness of inspiral-only templates for searching for full coalescence signals in aLIGO. \begin{figure*} \centering \begin{subfigure} \centering \includegraphics[scale=0.16]{./SNRq_1_10.png} \end{subfigure} \begin{subfigure} \centering \includegraphics[scale=0.16]{./SNRq_1_50.png} \\ \end{subfigure} \begin{subfigure} \centering \includegraphics[scale=0.16]{./SNRq_1_100.png} \end{subfigure} \begin{subfigure} \centering \includegraphics[scale=0.16]{./SNRq_1_200.png} \end{subfigure} \caption{SNR of optimally-located and oriented IMRAC sources at a fiducial distance of 1 Gpc vs total mass for four different mass ratios; $q= 1/10, 1/50, 1/100, 1/200$. The solid line is the SNR from full EOBNR waveforms and the dashed line from inspiral-only EOBNR waveforms truncated at the ISCO frequency in the frequency domain. The lower-bound mass of the smaller body is set to $m_{2} = \unit{1.4}{\smass}$, which is the canonical neutron-star mass. The lowest total mass for the $q=1/50,\ 1/100$ and $1/200$ subplots is set by fixing the mass of the smaller body to $m_{2} = \unit{1.4}{\smass}$. For the $q=1/10$ subplot the smallest total mass is set to $M=$ \unit{35}{\smass} as the inspiral accounts for the vast majority of the SNR below this value. We only consider systems with total masses such that the ISCO frequency is greater than \unit{10}{\hertz} (our low frequency cut-off). The highest total mass for each of the subplots is set to $M=$ \unit{300}{\smass}, which ensures that the ISCO frequency is greater than \unit{10}{\hertz}.
We find that there is a non-negligible contribution to the SNR from merger and ringdown in IMRAC signals above a total mass of around $M=$ \unit{35}{\smass}. The difference in SNR between inspiral-only and full waveforms is at the $3\%$ level at around $M=$ \unit{35}{\smass} at $q=1/10$. For binaries with $q=1/50, 1/100\ \mathrm{and}\ 1/200$, the minimum losses in SNR are at the $6 \%$, $15\%$ and $40\%$ levels, respectively, in our mass range of interest. For IMRACs of astrophysical interest, more extreme mass ratios correspond to greater total mass, which can place the merger and ringdown at a frequency where the detector has the greatest sensitivity.} \label{fig:SNRs} \end{figure*} \label{sec:SNR} \section{Effectiveness of inspiral-only templates for IMRAC searches} We have shown in Sec.~\ref{sec:SNR} that the SNR from merger and ringdown will provide a significant contribution to the total SNR over a broad portion of the IMRAC mass-space, c.f. Fig.~\ref{fig:SNRs}. There is however a small portion of the parameter space where the SNR is dominated by the inspiral phase. This can be seen in Fig.~\ref{fig:SNRs} for $q=1/10$ binaries with total masses $M\leq$ \unit{35}{\smass}. Thus it is important to quantify the effect of using inspiral-only templates to search for IMRAC signals which contain inspiral, merger and ringdown phases. The use of template waveforms that are not exact representations of the signals they filter degrades the SNR, as the optimal SNR can be recovered only when the template waveform corresponds exactly to $h$, see Eq.~(\ref{eq:max_snr}). In practice however, we do not have access to an exact representation of $h$. Using a non-exact template waveform $T$ to filter $h$ caps the maximum recoverable SNR to \bea\nonumber \Big(\frac{S}{N}\Big) &=& \max_{\vec{\theta}} \frac{(h(\vec{\lambda}) | T(\vec{\theta}))} {(T(\vec{\theta})|T(\vec{\theta}))^{1/2}}\,,\\ &=&\epsilon \ \Big(\frac{S}{N}\Big)_{\mathrm{max}}\,, \end{eqnarray} where $\vec{\lambda}$ and $\vec{\theta}$ represent the parameter vector of the signal and template, respectively. We define $\epsilon$ as the \textit{effectiveness} of a template waveform family $T$ at recovering the maximum SNR from a gravitational-wave signal $h$; by definition $0 \le \epsilon \le 1$. This quantity is also referred to as the ``fitting factor'' in the literature \cite{Apo:1995}. It is convenient to define waveforms normalized to unit norm as $\mathbf{\hat{a}}(f)=\tilde{a}(f)/(a|a)^{1/2}$ so that $(\mathbf{\hat{h}}|\mathbf{\hat{h}})= (\mathbf{\hat{T}}|\mathbf{\hat{T}})=1$ and the effectiveness can be written succinctly as \cite{Apo:1995} \beq \epsilon = \max_{\vec{\theta}} (\mathbf{\hat{h}}(\vec{\lambda})|\mathbf{\hat{T}}(\vec{\theta}))\,. \label{epsilon} \end{equation} Using normalized waveforms also has the advantage of eliminating the dependence of the waveforms on the source orientation and distance, which enter as an overall scaling. Calculating the effectiveness, Eq.~(\ref{epsilon}), requires maximizing over the component masses ($m_{1}, m_{2}$) and the time and phase at coalescence.
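A compact numerical sketch of this maximization is given below (our illustration: toy stationary-phase-like waveforms, a flat placeholder PSD, and a three-template ``bank''; the time and phase maximization uses the inverse-FFT trick described in the next paragraph):
\begin{verbatim}
import numpy as np

f = np.arange(10.0, 512.0, 0.25)
df = f[1] - f[0]
psd = np.ones_like(f)                   # flat placeholder PSD

def norm(x):
    return np.sqrt(4.0 * df * np.sum(np.abs(x)**2 / psd))

def overlap(h, T):
    # |z(t_c)| maximized over t_c; taking |.| also maximizes the phase
    z = 4.0 * df * len(f) * np.fft.ifft(h * np.conj(T) / psd)
    return np.max(np.abs(z)) / (norm(h) * norm(T))

def waveform(chirp):                    # toy frequency-domain inspiral
    return f**(-7.0/6.0) * np.exp(1j * chirp * f**(-5.0/3.0))

h = waveform(900.0)                     # "signal"
bank = [waveform(c) for c in (870.0, 900.0, 930.0)]
print("effectiveness ~", max(overlap(h, T) for T in bank))
\end{verbatim}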
We can efficiently maximize over the time and phase by Fourier transforming the integrand of the noise-weighted inner-product \cite{findchirp:2005}, \beq z(t_{c}) = 4\int^{f_{max}}_{f_{min}}df\ \frac{\tilde{a} (f)\tilde{b}^{*}(f)}{S_{n}(f)}\ e^{2\pi ift_{c}},\ \end{equation} which yields a complex time series whose elements correspond to the inner-product of $a$ and $b$ as one of the signals is time-shifted with respect to the other. We can efficiently find the time at coalescence, $t_{c}$, by finding the time at which the norm of this time series is a maximum. The phase at coalescence $\phi_{c}$ is then automatically given by the argument of the time series at its peak amplitude. We thus modify the inner product $(a|b)$: \beq (a|b) \rightarrow (a|b)^{\prime} = \max_{t_{c}}\left|z(t_{c})\right|,\ \label{eq:mod_innerprod} \end{equation} which we will adopt as the definition of the inner-product for the remainder of this paper. To compute the effectiveness of an inspiral-only IMRAC search we evaluate Eq.~(\ref{epsilon}) for signals covering the IMRAC mass space. We take as our signal waveform, $h$, the full inspiral-merger-ringdown EOBNR waveform. We take the template, $T$, to be an \textit{inspiral-only} EOBNR waveform, formed by truncating the full EOBNR waveform at the ISCO frequency in the frequency domain. With such signals and templates the effectiveness provides a measure of the maximum SNR which could be achieved through using an inspiral-only template to filter full coalescence-signals. To get a broad coverage of the IMRAC mass space we compute Eq.~(\ref{epsilon}) for signals whose source masses cover the ranges \unit{1.4}{\smass} $\leq m_{2} \leq$ \unit{18.5}{\smass} and \unit{24}{\smass} $\leq m_{1} \leq$ \unit{200}{\smass}, with mass ratios spanning the range $q:=m_{2}/m_{1} \in [1/140, 1/10]$. For each signal we evaluate the effectiveness, Eq.~(\ref{epsilon}), where the template $T$ describes the inspiral-only portion of an EOBNR waveform. We maximize over time and phase by maximizing the inner product of the signal with an inspiral-only EOBNR template, Eq.~(\ref{eq:mod_innerprod}). The maximization over the masses is performed by finding the largest inner product between the signal and a bank of template waveforms constructed so that the minimal inner product between (normalized) neighbouring templates is $99\%$. The template bank is characterised by intrinsic parameters $(m_1,m_2)$ and spans an extended mass range $m_{1,2} \times (1 \pm 0.1)$, where $m_{1,2}$ are the masses associated with each signal waveform. The results of the effectiveness of an inspiral-only IMRAC search are shown in Fig.~\ref{fig:ff_eobnrs_rates}. Inspiral-only templates are $\sim 98\%$ effective at filtering full coalescence signals for total masses $M \lesssim$ \unit{30}{\smass}. Such systems have an ISCO frequency $f_{\mathrm{ISCO}} \gtrsim$ \unit{150}{\hertz}, which is well within the peak sensitivity of the noise curve. However, for the bulk of the mass space the effectiveness is below $75\%$. This is unsurprising given the SNR curves in Fig.~\ref{fig:SNRs}, which clearly show the importance of the contribution of merger and ringdown to the SNR. \begin{figure*} \includegraphics[scale=0.3]{./eob_eob_comp_grey.png} \caption{Effectiveness of inspiral-only EOBNR templates to filter full inspiral, merger and ringdown EOBNR signals as a function of the source component masses and corresponding loss in detection rates. The diagonal corresponds to a mass-ratio $q=1/10$.
Inspiral-only EOBNR templates are constructed by truncating the full waveform at the ISCO frequency in the frequency-domain. For the bulk of the parameter space inspiral-only templates are $\lesssim 75\%$ effective at filtering inspiral, merger and ringdown signals. Inspiral-only templates are $\sim 97-98\%$ effective for total masses $M \lesssim$ \unit{30}{\smass}. Inspiral-only templates within the $90\%$-effectiveness contour should be sufficient for IMRAC searches without incurring greater than $30\%$ losses in detection rates.} \label{fig:ff_eobnrs_rates} \end{figure*} The loss in SNR incurred through using inspiral-only templates directly affects detection rates. Because the SNR scales inversely with the distance, the observable volume will scale with the cube of the effectiveness. Assuming that GW sources are isotropically distributed in the sky, the fractional loss in detection rates will be $1-\epsilon^{3}$. The percentage loss in detection rates through using inspiral-only EOBNR templates to recover the full coalescence signal is also shown in Fig.~\ref{fig:ff_eobnrs_rates}. Over a broad portion of the mass-space inspiral-only templates incur losses in detection rates between $60\%$ and $85\%$. As the total mass of the binary approaches \unit{440}{\smass}, the Schwarzschild ISCO frequency, Eq.~(\ref{eq:fisco}), approaches \unit{10}{\hertz}, which is near the low-frequency cut-off of the detectors. Hence the relative contribution of the inspiral phase to the coalescence signal of heavier systems diminishes until the only contribution is from the merger and ringdown. This is a striking indication of the need for merger and ringdown in IMRAC template waveforms. This suggests the importance of full numerical simulations in this regime in order to construct a reliable waveform family including inspiral, merger, and ringdown phases. \label{sec:IMR} We identify three regions in the $m_{1}$-$m_{2}$ plane in which various searches could be constructed. The regions are delimited by contours of constant effectiveness, which are approximately described by curves of constant $\mathcal{C} = (m_{1}/M_{\odot})\sqrt{m_{2}/M_{\odot}}$; this parametrization is found purely empirically and holds for $\unit{1.4}{\smass} \leq m_{2} \leq \unit{18.5}{\smass}$ and mass-ratios $q \in [1/140, 1/10]$. The effectiveness is related to $\mathcal{C}$ by $\epsilon \approx 1/100\times (1.6\ \mathcal{C} - 7.3\times10^{-3}\ \mathcal{C}^{2})$. Between the $97\%$- and $90\%$-effectiveness contours, the losses in detection rates are between $10\% \lesssim L \lesssim 27\%$, and so an inspiral-only search could be sufficient without incurring drastic losses in detections. The region bounded from below in effectiveness by the $90\%$-effectiveness contour is defined by $\mathcal{C} \leq 100$, with the effectiveness increasing with decreasing $\mathcal{C}$. Between the $90\%$- and $80\%$-effectiveness contours the losses in detection rates are around $27\% \lesssim L \lesssim 50\%$. Thus, within this region searches will be limited by the lack of merger and ringdown in template waveforms, though an inspiral-only search would be feasible in principle. This region is defined by $100 \lesssim \mathcal{C} \lesssim 150$. Below the $80\%$-effectiveness contour, inspiral-only searches will incur losses in detection rates $L > 50\%$, and so merger and ringdown will be crucial for searches. This region is bounded from above in effectiveness by the $80\%$-effectiveness contour and is defined by $150 \lesssim \mathcal{C}$, with effectiveness decreasing with increasing $\mathcal{C}$.
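The classification just described is straightforward to apply; the following sketch (the function name is ours, and the thresholds are the empirical contour values quoted above) assigns a candidate source to one of the three regions:
\begin{verbatim}
def search_region(m1_msun, m2_msun):
    # Empirical contour parameter C = (m1/Msun) * sqrt(m2/Msun)
    C = m1_msun * m2_msun**0.5
    if C <= 100.0:
        return C, "inspiral-only search feasible (10% < L < 27%)"
    if C <= 150.0:
        return C, "limited by missing merger/ringdown (27% < L < 50%)"
    return C, "merger and ringdown essential (L > 50%)"

for m1, m2 in [(50.0, 1.4), (100.0, 1.4), (100.0, 5.0), (200.0, 5.0)]:
    C, verdict = search_region(m1, m2)
    print(f"(m1, m2) = ({m1:.0f}, {m2:.0f}) Msun: C = {C:5.1f} -> {verdict}")
\end{verbatim}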
The results are summarized in Table~\ref{table:search_regions}. \begin{table*}[!htp] \begin{center} \begin{tabular}{ | p{3cm} | p{3.cm} | p{4.8cm} |p{5.cm} |} \hline Effectiveness of inspiral-only search, $\epsilon(\%)$ & Loss in detection rates, $L(\%)$& Contours of constant effectiveness in the $m_{1}$-$m_{2}$ plane $(\mathcal{C} = (m_{1}/M_{\odot})\sqrt{m_{2}/M_{\odot}})$ within mass range of interest & Implication for searches\\ \hline $ 90\% \lesssim \epsilon \lesssim 97\%$ & $ 10\% \lesssim L \lesssim 27\%$ & $\mathcal{C} \lesssim 100$ & Inspiral-only search sufficient but with non-negligible loss in detection rates.\\ \hline $80\% \lesssim \epsilon \lesssim 90\%$ & $27\% \lesssim L \lesssim 50\%$ & $100 \lesssim \mathcal{C} \lesssim 150$ & Inspiral-only search possible but limited by lack of merger and ringdown. Could potentially lose half of signals with inspiral-only templates.\\ \hline $\epsilon \lesssim 80\%$ & $50\% \lesssim L $ & $150 \lesssim \mathcal{C}$ & Merger and ringdown crucial for searches. Will miss over half of signals with inspiral-only templates.\\ \hline \end{tabular} \caption{Effectiveness of inspiral-only searches, the corresponding loss in detection rates, and the regions in the $m_{1}$-$m_{2}$ plane bounded by constant-effectiveness contours. For each region we summarize the implications for IMRAC searches.} \label{table:search_regions} \end{center} \end{table*} \section{Comparison of inspiral-only waveforms} \label{sec:insp_only} We have shown that merger and ringdown are crucial for effective searches over a large portion of the IMRAC mass space, though there is a small region in which an inspiral-only search could be constructed without incurring losses in detection rates greater than around $27\%$. For this region, it is therefore important to study whether currently available waveforms are sufficiently accurate. The inspiral phase can be computed using perturbative expansions and thus it is interesting to quantify the consistency of different expansions. To assess the effectiveness of the EOBNR inspiral, we employ a waveform family designed to approximate intermediate mass-ratio inspirals which we refer to as ``Huerta-Gair'' (HG) waveforms~\cite{HG:2011}. HG waveforms describe only the inspiral portion of the coalescence signal. This waveform family has no corresponding LAL approximant. We repeat the study done in the previous section, now using the HG waveform family as the signal $h$ and inspiral-only EOBNR as the template $T$. The results are reported over the whole parameter space in Fig.~\ref{fig:hg_tt4_eobnr}. For completeness, in Table~\ref{table:summary} we also show the values of the effectiveness, Eq.~(\ref{epsilon}), for selected mass combinations of EOBNR inspiral-only templates for filtering full EOBNR and HG signals, respectively.
\begin{figure*} \centering \mbox{\subfigure{\includegraphics[scale=0.78]{./eob_hg_grey.pdf}}\quad \subfigure{\includegraphics[scale=0.78]{./eob_tt4_grey.pdf} }} \caption{Effectiveness of inspiral-only EOBNR templates at filtering HG signals (left) and TaylorT4 signals (right) as a function of the source component masses encoded in the signal.} \label{fig:hg_tt4_eobnr} \end{figure*} \begin{table}[!htp] \begin{center} \begin{tabular}{|c | c | c c c |} \hline $m_1$ & $m_2$ & \multicolumn{3}{c|}{Signal waveforms} \\ (${\smass}$) & (${\smass}$) & full EOBNR & Huerta-Gair & TaylorT4 \\ \hline 50 & 5 & 0.90 & 0.95 & 0.96\\ 100 & 5 & 0.76 & 0.97 & 0.97\\ 200 & 5 & 0.53 & 0.99 & 0.99\\ 50 & 1.4 & 0.96 & 0.96 & 0.90\\ 100 & 1.4 & 0.86 & 0.98 & 0.94\\ 200 & 1.4 & 0.67 & 0.98 & 0.98\\ \hline 5 & 5 & 0.99 & 0.89 & 0.99\\ 20 & 20 & 0.99 & 0.92 & 0.99\\ 100 & 100 & 0.52 & 0.51 & 0.50\\ \hline \end{tabular} \caption{Summary of effectiveness of \textit{inspiral-only} EOBNR template waveforms in recovering signals modelled using different waveform families -- full EOBNR, Huerta-Gair and TaylorT4 -- for selected component masses. Merger and ringdown become more prominent in the coalescence signal as the total mass of the system is increased. The EOBNR inspiral is typically better at matching HG signals in the IMRAC regime than TaylorT4 signals. Results for equal-mass systems are shown for reference below the horizontal line.} \label{table:summary} \end{center} \end{table} For the high-mass part of the mass-space the effectiveness of the EOBNR inspirals with respect to the HG waveforms is close to $100\%$. This is perhaps unsurprising because very high mass systems will have short inspirals and possible differences in the waveforms will not produce a significant degradation of SNR when matched over a small number of wave cycles. However, for lighter systems the effectiveness can be as low as $90\%$, which occurs in the region of mass space in which inspiral-only searches would be most feasible (see Table~\ref{table:search_regions}). For reference we also compare inspiral-only EOBNR templates to TaylorT4 signal waveforms (which are inspiral-only). We construct signal waveforms on the same grid in the $m_{1}$--$m_{2}$ plane as for HG waveforms and use the same template bank of inspiral-only EOBNR waveforms. The results are summarized in the right panel of Fig.~\ref{fig:hg_tt4_eobnr}, and in Table~\ref{table:summary} for selected masses. We find that the EOBNR inspiral has good filtering efficiency for the TaylorT4 waveform family. However, EOBNR is clearly a better match to the HG waveform family over a larger range of masses and mass ratios than to TaylorT4. This can be seen more clearly by comparing the subplots in Fig.~\ref{fig:hg_tt4_eobnr}. This is unsurprising given that the PN expansion is unreliable at high velocities and highly asymmetrical mass-ratios. For orbital velocities $v/c = (\pi M f)^{1/3} \gtrsim 0.2$ (in units $G=c=1$) the PN energy flux deviates significantly from numerical results, see \cite[e.g.,][]{Poisson:1995, Yunes:2008}. A binary at its ISCO frequency has $v/c \sim 0.4$, which is well beyond the region of validity. \section{Discussion and Conclusion} We have shown that over the bulk of the IMRAC mass space, merger and ringdown contribute significantly to the gravitational-wave coalescence signal. This happens despite the suppression of the power in the merger and ringdown in signals from binaries with very asymmetric mass ratios.
The importance of merger and ringdown is due to the greater sensitivity to these waveform portions for high-mass signals, for which most of the inspiral may fall at frequencies below the detector's sensitive band. However, there is a relatively large patch in mass space in which the inspiral-only waveforms are more than $90\%$ effective. We identified three regions in which different searches could be considered appropriate based on thresholds of acceptable losses in detection rates. The mass space splits into a region in which inspiral-only searches could be feasible, incurring losses in detection rates of up to $\sim 27\%$; a region in which searches would be limited by lack of merger and ringdown in template waveforms, incurring losses in detection rates up to $50\%$; and a region in which merger and ringdown are critical to prevent losses in detection rates over $50\%$. The search regions are summarized in Table~\ref{table:search_regions}. We have further shown that in the region of the IMRAC mass space in which inspiral-only searches are feasible, approximants adapted to asymmetric mass-ratio binaries are important, as here the binary is liable to have highly relativistic velocities $v/c \gtrsim 0.2$. We considered a waveform family designed to describe intermediate mass-ratio binaries which we referred to as the ``Huerta-Gair'' (HG) waveform family. By computing the effectiveness of inspiral-only EOBNR waveforms to filter signals described by the HG waveform family, we showed that losses in recovered SNR could be as great as $10\%$. In Table~\ref{table:summary} we summarize the effectiveness of the signal--template combinations used in the paper. We believe that template waveforms for IMRAC searches will benefit from calibration to several numerical simulations. We note that there already exists one very short numerical waveform of a $q=1/100$ binary which we have not used in our study, and which EOBNR is not currently calibrated to \cite{RIT:2011}. \label{sec:conclusion} \section{Acknowledgements} We thank Jonathan Gair for useful discussions and help in implementing the HG waveform family and Chad Hanna for useful discussions. We would also like to thank Eliu Antonio Huerta Escudero for reading a draft of the manuscript and providing us with useful comments. Research at Perimeter Institute is supported through Industry Canada and by the Province of Ontario through the Ministry of Research \& Innovation. This document has LIGO document number LIGO-P1300009. \bibliographystyle{apsrev}
\section{Introduction} \label{Sect:Intro} In the past two decades much effort has been made to calculate QCD corrections to weak processes. The indispensable renormalization group improvement of perturbatively calculated Feynman amplitudes requires their factorization into Wilson coefficients and matrix elements, which are obtained from an effective field theory containing four-fermion interactions. When calculating QCD radiative corrections to these four-fermion operators using dimensional regularization ($D=4-2 \varepsilon $) one faces evanescent Dirac structures such as \begin{eqnarray} \gamma_{\mu} \gamma_{\nu} \gamma_{\vartheta} (1-\gamma_5) \otimes \gamma^{\vartheta} \gamma^{\nu} \gamma^{\mu } (1-\gamma_5) - (4+a \varepsilon) \gamma_{\mu} (1-\gamma_5) \otimes \gamma^{\mu} (1-\gamma_5), \label{introex} \end{eqnarray} which vanish in $D=4$ dimensions, but appear with a factor of $1/\varepsilon$ in counterterms to physical operators. By introducing the parameter $a$ in \eq{introex} we have displayed the arbitrariness in the definition of the evanescent operators: A priori one can add any multiple of $\varepsilon $ times any physical operator to a given evanescent operator. In the literature one indeed finds different definitions of the latter. The consequences of this arbitrariness for renormalization group improved Green's functions are one of the main subjects of this paper. The role of evanescent operators in perturbation theory has been investigated since the pioneering era of dimensional regularization \cite{b,c,bm}. When perturbative results are to be improved by means of the operator product expansion and renormalization group (RG) techniques to sum large logarithms, new subtleties arise: First, the matrix elements of evanescent operators can affect the matching equation determining the Wilson coefficients which multiply the effective four-fermion operators \cite{bw}. Second, the appearance of evanescent operators in counterterms to physical operators and vice versa leads to the mixing of physical and evanescent operators during the RG evolution \cite{c,dg}. In \cite{c,bw} a finite renormalization of the evanescent operators has been proposed to render their matrix elements zero. By this the Wilson coefficients of the evanescent operators become irrelevant at the matching scale. For this to be true at any scale it is important that simultaneously the evanescent operators do not mix into physical ones. In \cite{dg} it has been proven for a very special definition of the bare evanescent operators that this is indeed the case, if the finite renormalization proposed in \cite{c,bw} is performed. But does this feature hold for any definition of the evanescent operators, i.e.\ for any choice of $a$ in \eq{introex}? We will answer this question in the affirmative in section~\ref{Sect:Triang} after setting up our notations and describing and generalizing the commonly used definitions of evanescent operators in section~\ref{Sect:Prelim}. It is well-known that a change in the renormalization prescription of the composite operators affects the Wilson coefficients and the anomalous dimension matrix in the next-to-leading order (NLO) and beyond. In section~\ref{Sect:Scheme} we will find that a change in the definition of the evanescent operators, i.e.\ a change of $a$ in \eq{introex}, leads to a different form of the {\em physical} part of the anomalous dimension matrix. Hence a different $a$ corresponds to a different renormalization scheme.
This result is of utmost practical importance for any calculation beyond leading logarithms, as it shows that it is meaningless to state a result for some anomalous dimension matrix without mentioning the definition of the evanescent operators used in the calculation. If one wants to combine some anomalous dimension matrix with Wilson coefficients or perturbative matrix elements calculated with a different definition of the evanescent operators, one clearly needs scheme transformation formulae for the Wilson coefficients and the anomalous dimension matrix. We will derive these scheme transformation formulae in the next-to-leading order in section~\ref{Sect:Scheme}, too. When studying particle-antiparticle mixing or rare decays one faces Green's functions with two insertions of four-fermion operators. The second main subject of this paper is to work out the correct treatment of these Green's functions when one or both inserted operators are evanescent. In section~\ref{Sect:Double} we extend the results of \cite{bw,dg} and of sections~\ref{Sect:Triang} and \ref{Sect:Scheme} to this case of double insertions. Then in section~\ref{Sect:Inclusive} inclusive decays are discussed. We close our paper with our conclusions. \section{Preliminaries and Notation} \label{Sect:Prelim} Let $\{ \hat{Q}_k=\overline{\psi} q_k \psi \cdot \overline{\psi} \tilde{q}_k \psi, \; k=1,2,3,\ldots \}$ be a set of physical dimension-six four-quark operators. We are interested in the Green's functions of an SU(N) gauge theory with insertions of $\hat{Q}_k$ renormalized by minimal subtraction (${\rm MS}$). The arguments are easily generalized to other mass-independent renormalization schemes like $\overline{\rm MS}$. The Dirac structures $Q_k = q_k \otimes \tilde{q}_k$ corresponding to $\hat{Q}_{k}$ are considered to form a basis of the space of Lorentz singlets and pseudosinglets for $D=4$. We display neither the Lorentz indices of $q_k$ and $\tilde{q}_k$ nor any flavour or colour indices, as they are irrelevant for the discussion of the subject. $\left[ \Gamma \otimes 1 \right] Q_k \left[ 1 \otimes \Gamma ^\prime \right]$ means $\Gamma q_k \otimes \tilde{q}_k \Gamma ^\prime$. Frequently we will use the example \begin{eqnarray} Q&=&\gamma_{\mu } \lt( 1-\g_5 \rt) \otimes \gamma^{\mu } \lt( 1-\g_5 \rt) . \label{q} \end{eqnarray} The matrix elements of $\hat{Q}_k$ have some perturbative expansion in the gauge coupling $g$: \begin{eqnarray} Z_\psi ^2 \langle \hat{Q}_k^{\mbox{\tiny bare} } \rangle &=& \sum _{j \geq 0} \left( \frac{g^2}{16 \pi ^2} \right)^j \langle \hat{Q}_k ^{\mbox{\tiny bare} } \rangle ^{(j) }. \label{me} \end{eqnarray} Here $Z_{\psi}$ is the quark wave function renormalization constant. The right hand side of (\ref{me}) still contains divergences, which are to be removed by the renormalization of the operators $\hat{Q}_{k}$ \cite{zim}.
\begin{figure}[htb] \centerline{ \rotate[r]{\epsfysize=12cm \epsffile{fig1.ps }} } \caption{Diagrams contributing to $\protect\langle \protect\hat{Q}_k ^{\protect\mbox{\tiny bare} } \protect\rangle ^{(1)}$.} \label{Fig:1} \end{figure} Now the insertion of $\hat{Q}_k$ into the one-loop diagrams of fig.~\ref{Fig:1} yields a linear combination of the $\hat{Q}_l$'s and a new operator with the Dirac structure $Q_k^\prime= \left[ \gamma_\rho \gamma_\sigma \otimes 1 \right] Q_k \left[ 1 \otimes \gamma^\sigma \gamma^\rho \right]$: \begin{eqnarray} \langle \hat{Q}_k ^{\mbox{\tiny bare} } \rangle ^{(1) } &=& d^{(1)}_{kl} \langle \hat{Q}_l \rangle ^{(0)} + d^{(1)}_{k , Q^\prime _k} \langle \hat{Q}^\prime _k \rangle ^{(0)} \quad \mbox{no sum on } k \label{d1}, \end{eqnarray} where $\langle \ldots \rangle^{(0)}$ denote tree level matrix elements. Both coefficients have a term proportional to $1/\varepsilon $ and a finite part. $\hat{Q}_k^\prime$ is now decomposed into a linear combination of the $\hat{Q}_l$'s and an evanescent operator: \begin{eqnarray} \hat{Q}_k^{\prime \mbox{\tiny bare} } &=& \left( f_{kl} + a_{kl} \varepsilon \right) \hat{Q}_l ^{\mbox{\tiny bare} } + \hat{E}_1[ Q_k]^{\mbox{\tiny bare} } \label{e1} +O(\varepsilon ^2). \label{DefEvan1} \end{eqnarray} Here the $f_{kl}$'s are uniquely determined by the Dirac basis decomposition in $D=4$ dimensions. The $a_{kl}$'s, however, are arbitrary, and a different choice for the $a_{kl}$'s corresponds to a different definition of $\hat{E}_1 [ Q_k ]=\hat{E}_1 [ Q_k, \{ a_{rs} \} ]$. When going beyond the one-loop order new evanescent operators $\hat{E}_2 [ Q_k ], \hat{E}_3 [ Q_k ], \ldots $ will appear. Their precise definition is irrelevant for the moment and will be given after (\ref{defe2}). Now in the framework of dimensional regularization the renormalization of some physical operator $\hat{Q}_k$ requires counterterms proportional to physical and evanescent operators: We define the renormalization matrix $Z$ by \begin{eqnarray} Z_\psi ^2 \langle \hat{Q}_k ^{\mbox{\tiny bare} } \rangle &=& Z_{kl} \langle \hat{Q}_l ^{\mbox{\tiny ren} } \rangle + Z_{k,E_{jm}} \langle \hat{E}_j [Q_m] ^{\mbox{\tiny ren} } \rangle \quad. \label{z} \end{eqnarray} Here and in the following we will distinguish the renormalization constants related to some evanescent operator $E_j[Q_m]$ by denoting the corresponding index with $E_{jm}$. (\ref{d1}) and (\ref{e1}) imply that $Z_{kl}^{(1)}$ depends on the $a_{rs}$'s, while $Z_{k,E_{jm}}^{(1)}$ is independent of them. We define the coefficients in the expansion of $Z$ in terms of the gauge coupling constant $g$ and in terms of $\varepsilon $ by \begin{eqnarray} Z &=& 1+ \sum _{j} \left( \frac{g^2 }{16 \pi ^2} \right)^j Z ^{(j)} , \quad \quad Z^{(j)} \, = \, \sum _{k=0}^{j} \frac{1}{\varepsilon ^k} Z ^{(j)} _{k} \label{exp}. \end{eqnarray} The first analysis of evanescent operators in the context of RG improved QCD corrections to electroweak transitions has been done by Buras and Weisz \cite{bw}. 
They have determined the $a_{kl}$'s by choosing some set of Dirac structures $M= \{ \gamma^{(1)} \otimes \tilde{\gamma} ^{(1)}, \ldots \gamma^{(10)} \otimes \tilde{\gamma} ^{(10)} \} $, which forms a basis for $D=4$, and contracting all elements in $M$ with $Q_k^\prime$ and $Q_l$ in (\ref{e1}): \begin{eqnarray} \mbox{tr} \left( \gamma^{(m)} q_k ^\prime \tilde{\gamma} ^{(m)} \tilde{q}_k ^\prime \right) &=& (f_{kl} + a_{kl} \varepsilon ) \, \mbox{tr} \left( \gamma^{(m)} q_l \tilde{\gamma}^{(m)} \tilde{q}_l \right) +O(\varepsilon ^2) , \nonumber \\ && \quad \quad \quad \mbox{no sum on } k \mbox{ and on $m$\/=1,\ldots , 10}. \label{gp} \end{eqnarray} The solution of the equations (\ref{gp}) uniquely defines the $f_{kl} + a_{kl} \varepsilon $. In other words, $E_1[Q_k]$ obeys the equations: \begin{eqnarray} E_1[Q_k]_{ijrs} \gamma^{(m)}_{si} \tilde{\gamma}^{(m)}_{jr} &=& O ( \varepsilon^2 ) \quad \quad \quad \mbox{ for $m$\/=1, \ldots ,10}, \end{eqnarray} where $i,j,r,s$ are Dirac indices. Our arguments will not depend on the scheme used for the treatment of $\gamma_5$. In the examples we will use a totally anticommuting $\gamma_5$. This does not cause any ambiguity in the trace operation in (\ref{gp}), because all Lorentz indices are contracted, so that the traced Dirac string is a linear combination of $\gamma_5$ and the unit matrix. E.g.\ the choice of \begin{eqnarray} M&=& \{ 1\otimes1, 1\otimes \gamma_5, \gamma_5 \otimes 1, \gamma_5 \otimes \gamma _5 , \gamma_{\mu} \otimes \gamma ^{\mu}, \gamma_{\mu} \otimes \gamma ^{\mu} \gamma_5, \nonumber \\ && \quad \gamma_5 \gamma_{\mu} \otimes \gamma^{\mu}, \gamma_5 \gamma_{\mu} \otimes \gamma^{\mu} \gamma_5, \sigma_{\mu \nu } \otimes \sigma ^{\mu \nu }, \gamma_5 \sigma_{\mu \nu } \otimes \sigma ^{\mu \nu } \} \label{m} \end{eqnarray} gives for $Q$ in (\ref{q}) \begin{eqnarray} Q^\prime &\! = \! & \gamma_\rho \gamma_\sigma \gamma_{\mu } \lt( 1-\g_5 \rt) \otimes \gamma^{\mu } \gamma^\sigma \gamma^\rho \lt( 1-\g_5 \rt) = ( 4- 8 \varepsilon ) Q + E_1[Q] +O(\varepsilon^2) \label{ex} \end{eqnarray} as in \cite{bw}. We remark that this choice $a=-8$ respects the Fierz symmetry, which relates the first to the second diagram in fig.~\ref{Fig:1}. A basis different from $M$ in (\ref{m}) yields the same $f_{kl}$'s, but different $a_{kl}$'s. For example by replacing the sixth and eighth element of $M$ in (\ref{m}) by $\gamma_\alpha \gamma_\beta \gamma_\delta \otimes \gamma ^\alpha \gamma^\beta \gamma^\delta $ and $\gamma_5 \gamma_\alpha \gamma_\beta \gamma_\delta \otimes \gamma ^\alpha \gamma^\beta \gamma^\delta $ one finds \begin{eqnarray} Q^\prime &=& 4 Q + 16 \varepsilon (1+\gamma_5) \otimes (1-\gamma_5) + E_1^\prime [Q] +O(\varepsilon^2), \nonumber \end{eqnarray} instead of (\ref{ex}), i.e.\ a different evanescent operator. The Dirac algebra is infinite dimensional for non-integer $D$ and is spanned by $M$ and an infinite set of evanescent Dirac structures. Hence one can reverse the above procedure and first arbitrarily choose the $a_{kl}$'s and then add properly adjusted linear combinations of the evanescent structures to the elements of $M$ such as to obtain the chosen $a_{kl}$'s.
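The four-dimensional coefficient $f=4$ in (\ref{ex}) can be checked numerically with explicit Dirac matrices. The following Python sketch (an illustration we add; being strictly four-dimensional, it cannot probe the $O(\varepsilon)$ piece and hence says nothing about $a$) builds both sides of (\ref{ex}) as $16\times 16$ tensor products:
\begin{verbatim}
import numpy as np

# gamma^mu in the Dirac representation; metric eta = diag(+,-,-,-)
I2, O2 = np.eye(2), np.zeros((2, 2))
sig = [np.array(m, dtype=complex) for m in
       ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
gU = [np.block([[I2, O2], [O2, -I2]])] + \
     [np.block([[O2, s], [-s, O2]]) for s in sig]
eta = [1.0, -1.0, -1.0, -1.0]
gD = [eta[mu] * gU[mu] for mu in range(4)]   # lower-index gamma_mu
g5 = 1j * gU[0] @ gU[1] @ gU[2] @ gU[3]
P = np.eye(4) - g5                           # (1 - gamma_5)

ksum = lambda pairs: sum(np.kron(a, b) for a, b in pairs)

# Q' = gamma_r gamma_s gamma_m (1-g5) (x) gamma^m gamma^s gamma^r (1-g5)
Qp = ksum([(gD[r] @ gD[s] @ gD[m] @ P, gU[m] @ gU[s] @ gU[r] @ P)
           for r in range(4) for s in range(4) for m in range(4)])
Q = ksum([(gD[m] @ P, gU[m] @ P) for m in range(4)])

print(np.allclose(Qp, 4 * Q))                # True: f = 4 in Eq. (ex)
\end{verbatim}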
Yet the evanescent operators defined in this way do not decouple from the physics in four dimensions: In \cite{c,bw} it has been observed that their one-loop matrix elements generally have nonvanishing components proportional to the physical operators $Q_k$: \begin{eqnarray} \langle \hat{E}_1 [Q_k ] ^{\mbox{\tiny bare} } \rangle ^{(1)} &=& \left[ Z^{(1)}_0 \right]_{E_{1k},l} \langle \hat{Q}_l \rangle ^{(0)} + \frac{1}{\varepsilon } \left[ Z^{(1)}_1 \right]_{E_{1k},E_{1k} } \langle \hat{E}_1 [Q_k ] \rangle ^{(0)} \nonumber \\ && + \frac{1}{\varepsilon } \left[ Z^{(1)}_1 \right]_{E_{1k},E_{2k} } \langle \hat{E}_2 [Q_k ] \rangle ^{(0)} +O(\varepsilon) \label{e2}. \end{eqnarray} Here a second evanescent operator $\hat{E}_2$, which will be discussed in a moment, has appeared. Clearly no sum on $k$ is understood in (\ref{e2}) and in the following analogous places. In (\ref{e2}) $[Z^{(1)}_0]_{E_{1k},l}$ is local, because it originates from the local $1/\varepsilon$--pole of the tensor integrals and a term proportional to $\varepsilon$ stemming from the evanescent Dirac algebra. For the same reason there is no divergence in the term proportional to $\langle \hat{Q}_l \rangle ^{(0)} $. Now in \cite{c,bw} it has been proposed to renormalize $\hat{E}_{1}$ by a finite amount to cancel this component: \begin{eqnarray} \hat{E}_{1} [Q_k ] ^{ \mbox{\tiny ren} } &=& \hat{E}_{1} [Q_k ] ^{ \mbox{\tiny bare} } + \frac{g^2}{16 \pi ^2 } \left\{ - \left[ Z^{(1)}_0 \right]_{E_{1k},l} \hat{Q}_l \right. \nonumber \\ && \quad \quad - \frac{1}{\varepsilon } \left[ Z^{(1)}_1 \right]_{E_{1k},E_{1k} } \hat{E}_1 [Q_k] \nonumber \\ && \quad \quad \left. - \frac{1}{\varepsilon } \left[ Z^{(1)}_1 \right]_{E_{1k},E_{2k} } \hat{E}_2 [Q_k ] \right\} +O \left( g^4 \right). \label{fin} \end{eqnarray} With (\ref{fin}) the renormalized matrix elements of the evanescent operators are $O(\varepsilon )$, so that they do not contribute to the one-loop matching of some Green's function $G^{\mbox{\tiny ren} }$ in the full renormalizable theory with matrix elements in the effective theory: \begin{eqnarray} i G^{\mbox{\tiny ren} } &=& C_l \langle \hat{Q}_{l} \rangle ^{\mbox{\tiny ren} } + C_{ E_{1k} } \langle \hat{E}_{1} [Q_k] \rangle ^{\mbox{\tiny ren} } + O \left( g^4 \right), \label{match} \end{eqnarray} i.e.\ the coefficients $C_{ E_{1k} }$ are irrelevant, because they multiply matrix elements which vanish for $D=4$. In \cite{bw} it has been further noticed that $ Z ^ {(1)}_0 $ in (\ref{fin}) influences the two-loop anomalous dimension matrix of the {\em physical} operators, so that the presence of evanescent operators indeed has an impact on physical observables. In a different context this has also been observed in \cite{bos}.
Next we discuss $\hat{E}_2[Q_k]$, which has entered the scene in (\ref{e2}): When inserting $\hat{E}_1[Q_k]$ defined in (\ref{e1}) into the one-loop diagrams of fig.~\ref{Fig:1}, one involves \begin{eqnarray} Q_k^{\prime \prime} &=& \left[ \gamma_\rho \gamma_\sigma \otimes 1 \right] Q_k^\prime \left[ 1 \otimes \gamma^\sigma \gamma^\rho \right] \nonumber \\ &=& \left[ f +a \varepsilon \right]^2_{\, kl} Q_l + \left( f_{kl} + a_{kl} \varepsilon \right) E_1[Q_l] \nonumber \\ && + \left[ \gamma_\rho \gamma_\sigma \otimes 1 \right] E_1 [ Q_k ] \left[ 1 \otimes \gamma^\sigma \gamma^\rho \right] \label{ins} \\ &=& \left\{ \left[ f + a \varepsilon \right]^2 _{kl} + b_{kl} \varepsilon \right\} Q_l + E_2 [Q_k] + O\left( \varepsilon ^2 \right) , \label{defe2} \end{eqnarray} which defines $E_2[Q_k]= E_2[Q_k,\{ a_{rs} \} , \{ b_{rs} \} ]$. Only the last term in (\ref{ins}) can contribute to the new coefficients $b_{kl}$. If the projection is performed with e.g.\ $M$ defined in (\ref{m}), one finds $b_{kl}=0$ \footnote{This is the case for any basis $M$ in which for each $\gamma^{(m)} \otimes \tilde{\gamma}^{(m)} \in M$ the quantity $\gamma_\rho \gamma_\sigma \gamma^{(m)} \gamma^\sigma \gamma^\rho \otimes \tilde{\gamma}^{(m)}$ is a linear combination of the elements in $M$.}. In our discussion we will keep $b_{kl}$ arbitrary. Clearly, one has a priori to deal with the mixing of an infinite set of evanescent operators $\left\{ \hat{E}_j[Q_k] \right\}$ for each physical operator $\hat{Q}_k$, where $\hat{E}_{j+1}[Q_k]$ denotes the new evanescent operator appearing first in the one-loop matrix elements of $\hat{E}_j[Q_k]$. With the finite renormalization of $\hat{E}_1[Q_k]$ in (\ref{fin}) the evanescent operators do not affect the physics at the matching scale, at which (\ref{match}) holds. In order that this will be true at any scale $\mu $, however, one must also ensure that the evanescent operators do not mix into the physical ones. This has been noticed first by Dugan and Grinstein in \cite{dg}. For the operator basis $(\vec{Q},\vec{E}) =(\hat{Q}_1,\ldots \hat{Q}_n , \, \hat{E}_1 \left[ Q_1 \right], \ldots \hat{E}_j \left[ Q_k \right], \ldots )$ this means that the anomalous dimension matrix \begin{eqnarray} \gamma &=& \left( \begin{array}{cc} \gamma_{QQ} & \gamma_{QE} \\ \gamma_{EQ} & \gamma_{EE} \end{array} \right) \nonumber \end{eqnarray} has an upper block-triangular form with $\gamma_{EQ}=0$. The authors of \cite{dg} have introduced another way to define the evanescent operators, which is also frequently used: It is easy to see that one can restrict the operator basis $\{Q_k\}$ to the set of operators whose Dirac structures $q_k, \tilde{q}_k$ are completely antisymmetric in their Lorentz indices. This is the normal form of Dirac strings introduced in \cite{bm}. Dirac strings that are antisymmetric in more than four indices vanish in four dimensions and are therefore evanescent. Operators with five antisymmetrized indices correspond to $\hat{E}_1$ in our notation, and $\hat{E}_2$ would be expressed in terms of a linear combination of Dirac structures with seven and with five antisymmetrized indices. Clearly this method also corresponds to some special choice for the $a_{kl}$'s and $b_{kl}$'s in (\ref{e1}) and (\ref{defe2}). Now in \cite{dg} the authors have proven that with the use of those definitions and a finite renormalization analogous to (\ref{fin}) the anomalous dimension matrix indeed has the desired block-triangular form, so that the evanescent operators do not mix into the physical ones.
While the anomalous dimension matrix is trivially block-triangular at the one-loop level, the proof for the two-loop level was given in \cite{dg} by use of the above-mentioned special definition of the evanescent operators. The latter, however, has some very special features, which are absent in the general case with arbitrary $a_{kl}$'s and $b_{kl}$'s; e.g.\ the definition used in \cite{dg} automatically yields an anomalous dimension matrix which is tridiagonal in the evanescent sector. Consider now a definition of the evanescent operators different from the one used in \cite{dg}: By inserting the definition (\ref{e1}) of $\hat{E}_1 \left[ Q_k \right]$ into (\ref{e2}) one realizes that $\left[ Z^{(1)}_0 \right]_{E_{1k},l}$ depends on the $a_{kl}$'s. Similarly at the two-loop level $\left[ Z_1^{(2)} \right]_{E_{jk},l} $ depends on the definition (\ref{e1}), so that one has to wonder which choices for the $a_{kl}$'s lead to the desired block-triangular form of $\gamma$ with $\gamma_{EQ}=0$. In the following section we will prove that any choice is permissible. Further we will find that the $b_{kl}$'s, too, may be chosen completely arbitrarily. On the other hand the physical submatrix $\gamma_{QQ}$ of the anomalous dimension matrix depends on the $a_{kl}$'s, as we will show in section~\ref{Sect:Scheme}. Hence the freedom in the definition of the bare evanescent operators induces a renormalization scheme dependence in the physical sector of the operator basis. This feature has not been discussed in the literature so far. As emphasized in the introduction it is of practical importance for NLO calculations to know the scheme transformation formulae for the physical submatrix $\gamma_{QQ}$ and the Wilson coefficients. We will come back to this point in section~\ref{Sect:Scheme}. \section{Block Triangular Anomalous Dimension Matrix} \label{Sect:Triang} Consider some set of physical operators $\{ \hat{Q}_k \}$ which closes under renormalization together with the corresponding evanescent operators $\{ \hat{E}_j[Q_k] : j \geq 1 \}$. Their $O(\varepsilon)$--parts $a_{rs},b_{rs}, \ldots $ are chosen arbitrarily. We want to show that the block of the anomalous dimension matrix describing the mixing of $\hat{E}_j[Q_k]$ into $\hat{Q}_l$ equals zero, \begin{eqnarray} \left[ \gamma \right]_{E_{jk},l} &=& 0, \label{nomix} \end{eqnarray} provided one uses the finite renormalization described in (\ref{fin}). Our sketch will follow the outline of \cite{dg}, where (\ref{nomix}) has been proven by complete induction. At the one-loop level (\ref{nomix}) is trivial, and the induction starts at two-loop order: The next-to-leading order contribution to the anomalous dimension matrix $\gamma = \frac{g^2}{16\pi^2} \gamma^{\left(0\right)} + \left(\frac{g^2}{16\pi^2}\right)^2 \gamma^{\left(1\right)} + \ldots$ reads \cite{bw}: \begin{eqnarray} \gamma^{(1)} &=& -4 Z_1^{(2)} -2 \beta_0 Z_0^{(1)} + 2 \left\{ Z_1^{(1)} , Z_0^{(1)} \right\}. \label{twomatrix} \end{eqnarray} The nonzero contributions to (\ref{nomix}) in two-loop order are \begin{eqnarray} \left[ \gamma^{(1)} \right]_{ E_{jk},l } &=& -4 \left[ Z_1^{(2)} \right]_{ E_{jk},l } -2 \beta_0 \left[ Z_0^{(1)} \right] _{ E_{jk},l } \nonumber \\ & & + 2 \left\{ \left[ Z_1^{(1)} \right]_{ E_{jk},E_{rs} } \left[ Z_0^{(1)}\right] _{ E_{rs},l } + \left[ Z_0^{(1)} \right] _{ E_{jk},m } \left[ Z_1^{(1)} \right] _{ m l } \right\}.
\label{twoloop} \end{eqnarray} Here (\ref{twoloop}) contains terms which are absent when the special definition of the evanescent operators in \cite{dg} is used: In \cite{dg} one has $\left[ Z^{(1)} \right]_{E_{jk},l}=0 $ for $j \geq 2$, in contrast to the general case, where any evanescent operator can have counterterms proportional to physical operators. Next we look at $\left[ Z_1^{(2)} \right]_{E_{jk},l}$, which stems from the $1/\varepsilon$--term of the $O(g^4)$--matrix elements of $\hat{E}_j[Q_k]$. As discussed in \cite{dg}, these $1/\varepsilon$--terms originate from $1/\varepsilon^2$--poles in the tensor integrals multiplying a factor proportional to $\varepsilon$ stemming from the evanescent Dirac algebra. Now in each two-loop diagram the former are related to the corresponding one-loop counterterm diagrams by a factor of 1/2, because the non-local $1/\varepsilon$--poles cancel in their sum \cite{ho}. For this to hold it is crucial that the one-loop counterterms are properly adjusted, i.e.\ that they cancel the $1/\varepsilon$--poles in the one-loop tensor integrals. In the one-loop matrix elements of evanescent operators the latter are multiplied by $\varepsilon$ originating from the Dirac algebra. Hence the proper one-loop renormalization of the evanescent operators must be such as to give matrix elements of order $\varepsilon$, as shown for $E_1[Q_k]$ in (\ref{fin}). {From} the one-loop counterterm graphs one simply reads off: \begin{eqnarray} \left[ Z_1^{(2)} \right]_{E_{jk},l} \!\!\! &=& \!\! \frac{1}{2} \left\{ \left[ Z_0^{(1)} \right]_{E_{jk},m} \! \left[ Z_1^{(1)} \right]_{ ml} \! \! + \left[ Z_1^{(1)} \right]_{E_{jk},E_{rs}} \! \left[ Z_0^{(1)} \right]_{E_{rs},l} \! \! - \beta_0 \left[ Z_0^{(1)} \right]_{E_{jk},l} \right\}, \nonumber \end{eqnarray} which yields the desired result when inserted into (\ref{twoloop}). Here the first two terms stem from insertions of physical and evanescent counterterms to $E_j[Q_k]$, while the term involving the coefficient of the one-loop $\beta$--function $\beta (g) = - g^3/(16 \pi^2) \beta_0 $ originates from the diagrams with coupling constant counterterms. The terms involving the wave function renormalization constants cancel with those stemming from the factor $Z_\psi^2$ in (\ref{z}). The inductive step in \cite{dg} proving (\ref{nomix}) to any loop order does not use any special definition of the evanescent operators and therefore applies here unchanged. \section{Evanescent Scheme Dependences} \label{Sect:Scheme} In this section we will analyze the dependence of the physical part of $\gamma^{(1)}$ given in (\ref{twomatrix}) and of the one-loop Wilson coefficients on $a_{il}$ and $b_{il}$. In practical next-to-leading order calculations one often has to combine Wilson coefficients and anomalous dimension matrices obtained with different definitions of the evanescent operators, and it is therefore important to have formulae allowing one to switch between them (see e.g.\ appendix B of \cite{m}). We start with the investigation of the dependence of $\gamma ^{(1)}$ on $a_{il}$. The bare one-loop matrix element \begin{eqnarray} \langle \hat{Q}_k ^{\mbox{\tiny bare} } \rangle^{(1) } &=& \left\{ \frac{1}{\varepsilon} \left[ Z_1^{(1)} \right]_{kj} + \left[ d_0^{(1)} \right]_{kj} \right\} \langle \hat{Q}_j \rangle^{(0)} + \frac{1}{\varepsilon} \left[ Z_1^{(1)} \right]_{k,E_{1k}} \langle \hat{E}_1 [Q_k] \rangle ^{(0)} \nonumber \\ &&+ O( \varepsilon) \label{q1} \end{eqnarray} is independent of $a_{il}$, which is evident from (\ref{d1}).
$E_1[Q_k]$ depends linearly on $a_{il}$ through its definition (\ref{e1}) with the coefficient \begin{eqnarray} \frac{\partial }{\partial a_{il}} \hat{E}_1[Q_k] &=& - \varepsilon \, \delta _{ki} \, \hat{Q}_l, \label{DiffE1} \end{eqnarray} so that (\ref{q1}) gives: \begin{eqnarray} \frac{\partial }{\partial a_{il}} \left[ d_0^{(1)} \right]_{kj} &=& \left[ Z_1^{(1)} \right]_{k,E_{1k}} \delta _{ki} \delta_{lj}, \label{d0} \end{eqnarray} while $Z_1^{(1)}$ is independent of $a_{il}$. In the same way one can obtain the $a_{kl}$--dependence of $Z_1^{(2)}$. To two-loop order, (\ref{z}) reads (cf.~(\ref{me})): \begin{eqnarray} \langle \hat{Q}_k ^{\mbox{\tiny bare} } \rangle^{(2) } &=& Z_{kj}^{(2)} \langle \hat{Q}_j \rangle^{(0)} + Z_{k,E_{1m}}^{(2)} \langle \hat{E}_1 [Q_m] \rangle^{(0) } + Z_{k,E_{2m}}^{(2)} \langle \hat{E}_2 [Q_m] \rangle^{(0) } \nonumber \\ && + Z_{kr}^{(1)} \langle \hat{Q}_r ^{\mbox{\tiny ren} } \rangle^{(1) } + Z_{k,E_{1k}}^{(1)} \langle \hat{E}_1 [Q_k] ^{\mbox{\tiny ren} } \rangle^{(1)} + \langle \hat{Q}_k ^{\mbox{\tiny ren} } \rangle^{(2) }. \label{q2} \end{eqnarray} {From} (\ref{defe2}) we know \begin{eqnarray} \frac{\partial }{\partial a_{il}} \hat{E}_2 [Q_m] &=& - \varepsilon \left[ f_{mi} \delta _{lj} + \delta_{mi} f_{lj} \right] \hat{Q}_j , \end{eqnarray} and from (\ref{q1}) one reads off: \begin{eqnarray} \langle \hat{Q}_r ^{\mbox{\tiny ren} } \rangle^{(1) } &=& \left[ d_0^{(1)} \right]_{rj} \langle \hat{Q}_j \rangle^{(0)} \label{qren}. \end{eqnarray} These relations and (\ref{d0}) allow one to calculate the derivative of (\ref{q2}) with respect to $a_{il}$. Keeping in mind that the evanescent matrix elements are $O(\varepsilon )$, the $O(1/ \varepsilon)$--part of the derivative yields: \begin{eqnarray} \frac{\partial }{\partial a_{il}} \left[ Z_1^{(2)} \right] _{kj} &=& - \left[ Z_1^{(1)} \right]_{ki} \left[ Z_1^{(1)} \right]_{i,E_{1i}} \delta_{lj} + \left[ Z_2^{(2)} \right]_{k,E_{1i}} \delta_{lj} \nonumber \\ && + \left[ Z_2 ^ {(2)} \right]_{k,E_{2m}} \left( \delta_{mi} f_{lj} + f_{mi} \delta_{lj} \right) ,\quad \mbox{no sum on } i. \label{z2} \end{eqnarray} Again $\left[ Z_2^{(2)} \right]$ can be extracted from the one-loop counterterm diagrams as described in the preceding section: \begin{eqnarray} \left[ Z_2^{(2)} \right]_{k,E_{1i}} &=& \frac{1}{2} \left[ Z_1^{(1)} \right]_{ki} \left[ Z_1^{(1)} \right]_{i,E_{1i} } + \frac{1}{2} \left[ Z_1^{(1)} \right]_{i,E_{1i}} \left[ Z_1^{(1)} \right]_{E_{1i},E_{1i} } \delta _{ki} \nonumber \\ && - \, \frac{1}{2} \beta _0 \left[ Z_1^{(1)} \right]_{i,E_{1i} } \delta_{ki} , \quad \quad \mbox{no sum on } i \nonumber \\ \left[ Z_2^{(2)} \right]_{k,E_{2m}} &=& \frac{1}{2} \left[ Z_1^{(1)} \right]_{k,E_{1k}} \left[ Z_1^{(1)} \right]_{ E_{1k},E_{2k} } \delta_{km} , \quad \quad \mbox{no sum on } k. \label{coun} \end{eqnarray} After inserting (\ref{coun}) into (\ref{z2}) we want to substitute the last term in (\ref{z2}). For this we differentiate both sides of (\ref{e2}) with respect to $a_{il}$, giving: \begin{eqnarray} \lefteqn{ \left[ Z_1^{(1)} \right]_{E_{1k} ,E_{2k} } \left( \delta_{ki} f_{lj} + f_{ki} \delta_{lj} \right) \; =} \nonumber \\ && \frac{\partial }{\partial a_{il}} \left[ Z_0^{(1)} \right]_{E_{1k},j} + \left[ Z_1^{(1)} \right]_{lj} \delta_{ki} - \left[ Z_1^{(1)} \right]_{E_{1k} ,E_{1k} } \delta_{ki} \delta_{lj}, \label{z0} \quad \mbox{no sum on } k.
\end{eqnarray} Finally one has to insert the expression for (\ref{z2}) obtained by the described substitutions into \begin{eqnarray} \frac{\partial }{\partial a_{il}} \left[ \gamma^ {(1)} \right]_{kj} &=& -4 \frac{\partial }{\partial a_{il}} \left[ Z_1^{(2)} \right]_{kj} + 2 \left[ Z_ 1 ^ {(1)} \right]_{k,E_{1k}} \frac{\partial }{\partial a_{il}} \left[ Z_0^{(1)} \right] _{E_{1k},j}, \quad \mbox{no sum on } k , \nonumber \end{eqnarray} which follows from (\ref{twomatrix}). The result reads: \begin{eqnarray} \frac{\partial }{\partial a_{il}} \left[ \gamma ^{(1)} \right]_{kj} &=& - 2 \left[ Z_1^{(1)} \right]_{lj} \left[ Z_1^{(1)} \right]_{i,E_{1i}} \delta _{ki} + 2 \left[ Z_1^{(1)} \right]_{ki} \left[ Z_1^{(1)} \right]_{i,E_{1i}} \delta_{lj} \nonumber \\ && + 2 \beta_0 \left[ Z_1^{(1)} \right]_{i,E_{1i}} \delta_{ki} \delta_{lj}, \quad \quad \mbox{no sum on } i. \label{dag} \end{eqnarray} Since the quantities on the right hand side of (\ref{dag}) do not depend on $a_{il}$, one can easily integrate (\ref{dag}) to find the desired relation between two $\gamma$'s corresponding to different choices for $a_{kl}$ in (\ref{e1}). To write the result in matrix form we recall the expression for the physical one-loop anomalous dimension matrix \begin{eqnarray} \left[ \gamma ^{(0)} \right]_{lj} &=& -2 \left[ Z_1^{(1)} \right] _{lj} \nonumber \end{eqnarray} and introduce the diagonal matrix $D$ with \begin{eqnarray} D_{im} &=& \left[ Z_1^{(1)} \right] _{i,E_{1i}} \delta_{im} , \quad \quad \quad \mbox{no sum on } i. \label{DiagMatrix1} \end{eqnarray} Hence \begin{eqnarray} \gamma ^{(1)} ( a^\prime)&=& \gamma ^{(1)} (a ) + \left[ D \cdot ( a^\prime -a ), \gamma^{(0)} \right] + 2 \, \beta_0 \, D \cdot (a^\prime -a) , \label{result} \end{eqnarray} where the summation in the row and column indices only runs over the physical submatrices. (\ref{result}) exhibits the familiar structure of the scheme dependence of $\gamma ^{(1)}$ \cite{bjlw}. Usually scheme dependences are analyzed for a fixed definition of the bare operators and different subtraction procedures. Our situation, however, is more complicated, because we investigate the scheme dependence associated with different definitions of the {\em bare} operator basis (i.e.\ of the bare evanescent operators). The dependence of the one-loop matrix elements on $a$ can be found easily from (\ref{qren}) and (\ref{d0}): \begin{eqnarray} \langle \vec{Q} \rangle ^{\mbox{\tiny ren} } (a^\prime) &=& \left[ 1+ \frac{g^2}{16 \pi^2} D \cdot (a^\prime-a) \right] \langle \vec{Q} \rangle ^{\mbox{\tiny ren} } (a) + O( g^4). \label{depma} \end{eqnarray} Since in (\ref{match}) $G$ does not depend on $a$ and the evanescent matrix element is $O(\varepsilon )$, the corresponding relation for the Wilson coefficients at the matching scale reads: \begin{eqnarray} \vec{C}^T (a^\prime) &=& \vec{C}^T (a) \left[ 1 - \frac{g^2}{16 \pi^2} D \cdot (a^\prime-a) \right] + O( g^4). \label{SchemeWC1} \end{eqnarray} Hence we can apply the result of \cite{bjlw}, which shows that in the renormalization group improved Wilson coefficient the scheme dependence in (\ref{result}) cancels the one in (\ref{depma}), so that physical observables are scheme independent, provided the hadronic matrix elements are defined scheme independently. Let us add a few remarks on the results \eq{result}, \eq{depma} and \eq{SchemeWC1}: In general one would expect scheme transformation formulae involving the full operator basis $(\vec{Q},\vec{E})$.
Yet all summations only run over the indices corresponding to the physical operators, the only ingredient from the evanescent sector being the matrix $D$. This is why we could not simply deduce \eq{result} from \eq{depma} using the results of \cite{bjlw}. Possible contributions from summations over evanescent operator indices in the matrix products in \eq{result} cannot be inferred from \eq{depma}, because there they would multiply vanishing matrix elements. In the same way one can investigate the dependence of $\gamma^{(1)}$ on $b_{il}$ given in (\ref{defe2}): While $Z_1^{(2)}$ and $Z_0^{(1)}$ depend on $b_{il}$, this dependence cancels in (\ref{twomatrix}). Hence neither $\gamma^{(1)}$ nor the one-loop Wilson coefficient is affected by the choice of $b_{il}$. In general $\gamma^{(0)}$ and $\gamma^{(1)}$ do not commute, so that one has to cope with complicated matrix equations in order to solve the renormalization group equation in next-to-leading order \cite{bjlw}. Now one can use (\ref{result}) to simplify $\gamma^{(1)}$: By going to the diagonal basis for \mbox{$\gamma^{(0)}=\mbox{diag}\left(\gamma^{(0)}_i\right)$} one can easily find solutions for $a^\prime - a$ in (\ref{result}) which even give \mbox{$\gamma^{(1)}(a^\prime)=0$} provided that all $Z_{k,E_{1k}}$'s are nonzero and all eigenvalues of $\gamma^{(0)}$ satisfy \mbox{$\left| \gamma^{(0)}_i -\gamma^{(0)}_j \right| \neq 2 \beta_0$}. Componentwise, (\ref{result}) then requires \mbox{$\left[ D \cdot ( a^\prime -a ) \right]_{ij} = - \gamma^{(1)}_{ij}(a) / ( \gamma^{(0)}_j - \gamma^{(0)}_i + 2 \beta_0 )$}. We will exemplify this in a moment. A choice for $a_{il}$ which leads to a $\gamma^{(1)}$ commuting with $\gamma^{(0)}$ has been made implicitly in \cite{bw}: There the mixing of the two operators $Q_+ = Q \left( {\rm 1} + \tilde{\rm 1} \right) /2 $ and $Q_- = Q \left( {\rm 1} - \tilde{\rm 1} \right) /2 $ has been considered, where ${\rm 1}$ and $\tilde{\rm 1}$ denote colour singlet and antisinglet and $Q$ was introduced in (\ref{q}). Now $Q_+$ is self-conjugate under the Fierz transformation, while $Q_-$ is anti-self-conjugate, so that $\gamma^{(0)}$ is diagonal to maintain the Fierz symmetry in the leading order renormalization group evolution. As remarked after (\ref{ex}), the definition of $E_1[Q]$ in (\ref{ex}) is necessary to ensure the Fierz symmetry in the one-loop matrix elements. Consequently with (\ref{ex}) also $\gamma^{(1)}$ has to obey the Fierz symmetry preventing the mixing of $Q_+$ and $Q_-$, i.e.\ yielding a diagonal $\gamma^{(1)}$. A different definition of $E_1[Q]$ would result in non-Fierz-symmetric matrix elements, but in renormalization scheme independent expressions they would combine with a non-diagonal $\gamma^{(1)}$ such as to restore Fierz symmetry. Let us consider the example above to demonstrate that one can pick $a^\prime$ such that $\gamma ^{(1)} (a ^\prime ) =0$: In \cite{bw} the definitions \begin{eqnarray} E_1[ Q_\pm] & =& \left( \pm 1- \frac{1}{N} \right) \left[ \gamma_\rho \gamma_\sigma \gamma_\nu (1-\gamma_5) \otimes \gamma^\nu \gamma^\sigma \gamma^\rho (1-\gamma_5) \right. \nonumber \\ && \quad \quad \left. - (4 - 8 \varepsilon)\;\; \gamma_\nu (1-\gamma_5) \otimes \gamma ^\nu (1-\gamma_5) \right], \nonumber \end{eqnarray} i.e.\ $a_{++}=8 (1/N-1), a_{--}=8 (1/N+1), a_{+-}=a_{-+}=0 $, were adopted, yielding a diagonal $\gamma^{(1)}(a)=$diag$\left(\gamma^{(1)}_{+}(a),\gamma^{(1)}_{-}(a)\right)$. {From} the insertion of $Q_+$ and $Q_-$ into the diagrams of fig.~\ref{Fig:1} one finds $Z_{+,E_{1+}}=Z_{-,E_{1-}}=1/4$.
Hence if we pick $a^\prime_{\pm \pm}=a_{\pm \pm} - 2\gamma^{(1)}_{\pm} (a)/\beta_0$ and $a^\prime_{\pm \mp}=0$, (\ref{result}) will imply $\gamma^{(1)}(a^\prime)=0$. \section{Double Insertions} \label{Sect:Double} \subsection{Motivation} \label{Sect:DoubleMotiv} In the following we will investigate Green functions with two insertions of local operators. Consider first the effective Lagrangian to first order written in terms of bare local operators \begin{eqnarray} {\cal L}^{\rm I} &=& C_{k} Z^{-1}_{kl} \hat{Q}^{\ba}_{l} + C_{k} Z^{-1}_{k E_{rl}} \hat{E}_{r}\left[Q_{l}\right]^{\ba} \nonumber \\ & & + C_{E_{jk}} Z^{-1}_{E_{jk} l} \hat{Q}^{\ba}_{l} + C_{E_{jk}} Z^{-1}_{E_{jk} E_{rl}} \hat{E}_{r}\left[Q_{l}\right]^{\ba} . \label{Lagr1} \end{eqnarray} According to the procedure presented in the preceding sections, the coefficients $C_{E_{jk}}$ were found to be irrelevant and therefore remained undetermined. Now consider 4-fermion Green functions with insertion of two local operators from ${\cal L}^{\rm I}$ \begin{eqnarray} \Bigl\langle 0 \Bigr| {\, \rm \bf T \,} \bar{\Psi}_{1} \bar{\Psi}_{2} \left( \frac{i}{2} \int {\rm d}^{D}y {\cal L}^{\rm I}\left(x\right) {\cal L}^{\rm I}\left(y\right) \right) \Psi_{3} \Psi_{4} \Bigl| 0 \Bigr\rangle . \label{GFdouble} \end{eqnarray} Such Green functions appear in applications like particle-antiparticle mixing or rare hadron decays. The diagram contributing to lowest order is depicted in fig.~\ref{Fig:2}. \begin{figure}[htb] \centerline{ \rotate[r]{ \epsfysize=5cm \epsffile{fig2.ps} }} \caption{The lowest order diagram contributing to the Green function in \protect\eq{GFdouble}.} \label{Fig:2} \end{figure} In general, the renormalization of such Green functions requires additional counterterms proportional to new local dimension-eight operators $\tilde{Q}_{l}$, because the diagram of fig.~\ref{Fig:2} is divergent: \begin{eqnarray} {\cal L} &=& {\cal L}^{\rm I} + {\cal L}^{\rm II} \label{Lagr} \\ {\cal L}^{\rm II} &=& C_{k} C_{k'} \left\{ Z^{-1}_{kk',l} \tilde{Q}^{\ba}_{l} + Z^{-1}_{kk',E_{rl}} \tilde{E}_{r}\left[\tilde{Q}_{l}\right]^{\ba} \right\} \nonumber \\ & & + C_{k} C_{E_{j' k'}} \left\{ Z^{-1}_{k E_{j' k'},l} \tilde{Q}^{\ba}_{l} + Z^{-1}_{k E_{j' k'},E_{rl}} \tilde{E}_{r}\left[\tilde{Q}_{l}\right]^{\ba} \right\} \nonumber \\ & & + C_{E_{j k}} C_{E_{j' k'}} \left\{ Z^{-1}_{E_{jk} E_{j' k'},l} \tilde{Q}^{\ba}_{l} + Z^{-1}_{E_{jk} E_{j' k'},E_{rl}} \tilde{E}_{r}\left[\tilde{Q}_{l}\right]^{\ba} \right\} \nonumber \\ & & + \tilde{C}_{k} \tilde{Z}^{-1}_{kl} \tilde{Q}^{\ba}_{l} + \tilde{C}_{k} \tilde{Z}^{-1}_{k E_{rl}} \tilde{E}_{r}\left[\tilde{Q}_{l}\right]^{\ba} \nonumber \\ & & + \tilde{C}_{E_{jk}} \tilde{Z}^{-1}_{E_{jk} l} \tilde{Q}^{\ba}_{l} + \tilde{C}_{E_{jk}} \tilde{Z}^{-1}_{E_{jk} E_{rl}} \tilde{E}_{r}\left[\tilde{Q}_{l}\right]^{\ba} \label{Lagr2} \end{eqnarray} Here $Z^{-1}_{..,.} \tilde{Q}_{.} $ are the local operator counterterms needed to renormalize the divergences originating purely from the double insertion. Further we have explicitly distinguished physical and evanescent operators. The renormalization constants $Z_{..,.}$, clearly being symmetric in their first two indices, give rise to an inhomogeneity in the RG equation for the Wilson coefficients $\tilde{C}_{k}$, $\tilde{C}_{E_{rs}}$, which we call the anomalous dimension tensor of the double insertion. Note that this quantity also has three indices, see \eq{AnomDimTens}.
It has become standard to define the local operator $\tilde{Q}_{l}$ with inverse powers of the coupling constant such that $Z^{-1}_{..,.}=O\left(g^2\right)$ to avoid mixing already at the tree level. As an example take $\tilde{Q}=\frac{m^2}{g^2}\; \gamma_{\mu} \left(1-\gamma_5\right) \otimes \gamma^{\mu} \left(1-\gamma_5\right)$ for which $\tilde{C}_{l}=O\left(g^2\right)$. For simplicity, we assume the $\tilde{Q}_{l}$'s to be linearly independent of the $\hat{Q}_{k}$'s \footnote{e.g.\ the $\hat{Q}_{k}$'s represent $\Delta F = 1$ operators, the $\tilde{Q}_{l}$'s denote $\Delta F = 2$ operators, where $F$ is some quantum number, which is conserved by the SU(N) interaction.}. The $E_{r}\left[\tilde{Q}_{l}\right]$ in \eq{Lagr2} are defined analogously to \eq{DefEvan1} with new coefficients $\tilde{f}_{kl}$, $\tilde{a}_{kl}$, $\tilde{b}_{kl}$, etc. Hence new arbitrary constants $\tilde{a}_{kl}$, $\tilde{b}_{kl}$ potentially causing scheme dependences enter the scene. Clearly the following questions arise here: \begin{enumerate} \item \label{Q1} Are the coefficient functions $C_{E_{jk}}$ irrelevant also for the double insertions; i.e.\ do \begin{eqnarray} \left\langle \int {\, \rm \bf T \,} \hat{E} \hat{Q} \right\rangle \hspace{1cm} \mbox{and} \hspace{1cm} \left\langle \int {\, \rm \bf T \,} \hat{E} \hat{E} \right\rangle \label{DoubleWithEva} \end{eqnarray} contribute to the matching procedure and the operator mixing? \item \label{Q2} Does one need a {\em finite} renormalization in the evanescent sector of double insertions; if yes, how does this affect the anomalous dimension tensor? \item \label{Q3} How do the $\tilde{C}_{l}$ and anomalous dimension matrices depend on the $a_{kl}$, $b_{kl}$, $\tilde{a}_{kl}$, $\tilde{b}_{kl}$ ? \item \label{Q4} Are the RG improved observables scheme independent? \end{enumerate} \subsection{Scheme Consistency} \label{Sect:DoubleConsist} In this section we will carry out the program of section~\ref{Sect:Triang} for the case of double insertions to answer questions~\ref{Q1} and \ref{Q2} (on page~\pageref{Q1}). Two cases have to be distinguished: The matrix element of the double insertion of the two local renormalized operators can be divergent or finite: \begin{eqnarray} \left\langle \frac{i}{2} \int {\, \rm \bf T \,} \hat{Q}_{k} \hat{Q}_{k'} \right\rangle &=& \left\{ \begin{array}{lcl} \mbox{divergent} & , & \mbox{case 1} \\ \mbox{finite} & , & \mbox{case 2} \end{array} \right. . \label{cases} \end{eqnarray} Case~1 is the generic one, appearing in the calculation of the coefficient $\eta_{3}$ in ${\rm K}^{0}$--$\overline{{\rm K}^{0}}$ mixing \cite{hn2} or in ${\rm K} \to \pi \nu \bar{\nu}$ \cite{bb}. Case~2 appears if the divergent parts of different contributions to \eq{cases} add such that the divergences cancel. It is realized e.g.\ in the determination of $\eta_{1}$ in ${\rm K}^{0}$--$\overline{{\rm K}^{0}}$ mixing \cite{hn}. Accordingly, we do or do not need a separate renormalization for the double insertion: \begin{eqnarray} Z^{-1}_{k k',l} \left\{ \begin{array}{clcl} \neq & 0 & , & \mbox{case 1} \\ = & 0 & , & \mbox{case 2} \end{array} \right. . \end{eqnarray} Since we need an extra renormalization in case~1, let us introduce the symbol $\left[ \frac{i}{2} \int {\, \rm \bf T \,} Q Q \right]^{\re}$ for the completely renormalized operator product constructed from two renormalized local operators $Q$ with an additional renormalization factor for the double insertion.
Let us start the discussion with the matching procedure: At some renormalization scale we have to match Green functions obtained in the full theory with Green functions calculated in the effective theory: \begin{eqnarray} -i G^{\re} &=& C_{k} C_{k'} \left\langle \left[\frac{i}{2} \int {\, \rm \bf T \,} \hat{Q}_{k} \hat{Q}_{k'} \right]^{\re} \right\rangle + C_{k} C_{E_{i' k'}} \left\langle \left[i \int {\, \rm \bf T \,} \hat{Q}_{k} \hat{E}_{i'}\left[Q_{k'}\right] \right]^{\re} \right\rangle \nonumber \\ & & + C_{E_{ik}} C_{E_{i' k'}} \left\langle \left[\frac{i}{2} \int {\, \rm \bf T \,} \hat{E}_{i}\left[Q_{k}\right] \hat{E}_{i'}\left[Q_{k'}\right] \right]^{\re} \right\rangle + \tilde{C}_{l} \left\langle \tilde{Q}_{l} \right\rangle \nonumber \\ & & + \tilde{C}_{E_{jl}} \left\langle \tilde{E}_{j}\left[\tilde{Q}_{l}\right] \right\rangle , \end{eqnarray} where $G^{\re}$ corresponds e.g.\ to a ``box'' function in the full SM. Since the coefficients $C_{E_{jk}}$ must be irrelevant for this matching procedure, one must have \begin{eqnarray} Z_{\psi}^{2} \left\langle \left[ \frac{i}{2} \int {\, \rm \bf T \,} \hat{E}_{j}\left[Q_{k}\right] \hat{Q}_{k'} \right]^{\re} \right\rangle &\stackrel{!}{=}& \left\{ \begin{array}{l@{\hspace{1cm}}l} O\left(\varepsilon^{0}\right) & \mbox{in case 1 (LO)} \\ O\left(\varepsilon^{1}\right) & \mbox{in case 1 (NLO and higher)} \\ O\left(\varepsilon^{1}\right) & \mbox{in case 2} \end{array} \right. \label{CondMatchDouble} \end{eqnarray} and analogously for two insertions of evanescent operators. To understand this recall that the purpose of RG improved perturbation theory is to sum logarithms. In case~1 the LO matching is performed by the comparison of the coefficients of logarithms of the full theory amplitude and the effective theory matrix element \eq{CondMatchDouble} (the latter being trivially related to the coefficient of the divergence), while the NLO matching is obtained from the finite part and also involves the matrix elements of the local operators \cite{hn2,bb}. In case~2 the matching is performed with the finite parts in all orders \cite{hn}. In both cases the condition \eq{CondMatchDouble} is trivially fulfilled in LO, because the evanescent Dirac algebra gives an additional $\varepsilon$ compared to the case of the insertion of two physical operators. Therefore a finite renormalization for the double insertion turns out to be unnecessary at the LO level. This statement remains valid at the NLO level only in case~2; in case~1, condition \eq{CondMatchDouble} no longer holds if one only subtracts the divergent terms in the matrix elements containing a double insertion. With the argumentation preceding \eq{fin} one finds that in this case the finite term needed to satisfy the condition \eq{CondMatchDouble} is local and therefore can be provided by a finite counterterm. The operator mixing is more complicated.
To deal with this, we need the evolution equation for the Wilson coefficient functions $\tilde{C}_{k}$, $\tilde{C}_{E_{rs}}$, which can be easily derived from the renormalization group invariance of ${\cal L}^{\rm II}$ and reads \begin{eqnarray} \mu \frac{{\rm d}}{{\rm d} \mu} \tilde{C}_{l} &=& \tilde{C}_{l'} \tilde{\gamma}_{l' l} + C_{k} C_{n} \gamma_{k n,l} \label{inh} \end{eqnarray} with the anomalous dimension tensor of the double insertion \begin{eqnarray} \gamma_{k n,l} &=& - \left[\gamma_{k k'} \delta_{n n'} +\delta_{k k'} \gamma_{n n'}\right] Z^{-1}_{k' n',l'} \tilde{Z}_{l' l} - \left[\mu \frac{{\rm d}}{{\rm d} \mu} Z^{-1}_{kn,l'}\right] \tilde{Z}_{l' l} . \label{DefAnomTensor} \end{eqnarray} Using the perturbative expansions for the renormalization constants \begin{eqnarray} Z^{-1}_{kn,l} &=& \sum_{j} \left(\frac{g^2}{16\pi^2}\right)^{j} Z^{-1,\left(j\right)}_{kn,l}, \hspace{1cm} Z^{-1,\left(j\right)}_{kn,l} = \sum_{i=0}^{j} \frac{1}{\varepsilon^{i}} \left[Z^{-1,\left(j\right)}_{i}\right]_{kn,l} \end{eqnarray} and $\tilde{Z}$ we derive the perturbative expression for \begin{eqnarray} \gamma_{kn,l} &=& \frac{g^2}{16 \pi^2} \gamma_{kn,l}^{\left(0\right)} + \left(\frac{g^2}{16 \pi^2}\right)^{2} \gamma_{kn,l}^{\left(1\right)} + \ldots \label{AnomDimTens} \end{eqnarray} in \eq{DefAnomTensor} up to NLO: \begin{eqnarray} \gamma^{\left(0\right)}_{kn,l} &=& 2 \left[Z^{-1,\left(1\right)}_{1}\right]_{kn,l} + 2 \varepsilon \left[Z^{-1,\left(1\right)}_{0}\right]_{kn,l} \nonumber \\ \gamma^{\left(1\right)}_{kn,l} &=& 4 \left[Z^{-1,\left(2\right)}_{1}\right]_{kn,l} + 2 \beta_{0} \left[Z^{-1,\left(1\right)}_{0}\right]_{kn,l} \nonumber \\ & & - 2 \left[Z^{-1,\left(1\right)}_{0}\right]_{kn,l'} \left[\tilde{Z}^{-1,\left(1\right)}_{1}\right]_{l' l} - 2 \left[Z^{-1,\left(1\right)}_{1}\right]_{kn,l'} \left[\tilde{Z}^{-1,\left(1\right)}_{0}\right]_{l' l} \nonumber \\ & & - 2 \left\{ \left[Z^{-1,\left(1\right)}_{0}\right]_{k k'} \delta_{n n'} + \delta_{k k'} \left[Z^{-1,\left(1\right)}_{0}\right]_{n n'} \right\} \left[Z^{-1,\left(1\right)}_{1}\right]_{k' n',l} \nonumber \\ & & - 2 \left\{ \left[Z^{-1,\left(1\right)}_{1}\right]_{k k'} \delta_{n n'} + \delta_{k k'} \left[Z^{-1,\left(1\right)}_{1}\right]_{n n'} \right\} \left[Z^{-1,\left(1\right)}_{0}\right]_{k' n',l} . \label{gamma1double} \end{eqnarray} The indices run over both physical and evanescent operators. The reader may have noticed that we have used the perturbative expansions of $Z^{-1}$, $\tilde{Z}^{-1}$ rather than $Z$, $\tilde{Z}$ as in the previous sections. This is more convenient for the case of double insertions. Using these equations, the finite renormalization ensuring that \eq{CondMatchDouble} holds and the locality of counterterms, one shows in complete analogy to section~\ref{Sect:Triang}: \begin{eqnarray} \gamma^{\left(0\right)}_{E_{rk}l,n} &=& \gamma^{\left(1\right)}_{E_{rk}l,n} = 0 \hspace{1cm} \mbox{and} \hspace{1cm} \gamma^{\left(0\right)}_{E_{rk}E_{sl},n} = \gamma^{\left(1\right)}_{E_{rk}E_{sl},n} = 0 , \end{eqnarray} i.e.\ a double insertion containing at least one evanescent operator does not mix into physical operators. Together with the statement that evanescent operators do not contribute to the matching this proves our method to be consistent at the NLO level. As in the case of single insertions one can pick the $\tilde{a}_{kl}$, $\tilde{b}_{kl}$,\ldots completely arbitrarily and then has to perform a finite renormalization for the double insertions containing an evanescent operator in \eq{DoubleWithEva}.
This statement remains valid also in higher orders of the SU(N) interaction, which can be proven in analogy to the proof given by Dugan and Grinstein \cite{dg} for the case of single insertions. Now we use the findings above to show the nonvanishing terms in \eq{gamma1double} explicitly for the physical submatrix: \begin{eqnarray} \gamma^{\left(1\right)}_{kn,l} &=& 4 \left[Z^{-1,\left(2\right)}_{1}\right]_{kn,l} - 2 \left[Z^{-1,\left(1\right)}_{1}\right]_{kn,E_{1 l'}} \left[\tilde{Z}^{-1,\left(1\right)}_{0}\right]_{E_{1 l'}l} \nonumber \\ & & - 2 \left[Z^{-1,\left(1\right)}_{1}\right]_{k E_{1 k'}} \left[Z^{-1,\left(1\right)}_{0}\right]_{E_{1 k'} n,l} - 2 \left[Z^{-1,\left(1\right)}_{1}\right]_{n E_{1 n'}} \left[Z^{-1,\left(1\right)}_{0}\right]_{k E_{1 n'},l} \label{physdoub} \end{eqnarray} The last equation encodes the following rule for the correct treatment of evanescent operators in NLO calculations: {\em The correct contribution of evanescent operators to the NLO physical anomalous dimension tensor is obtained by inserting the evanescent one-loop counterterms with a factor of $\frac{1}{2}$ instead of\/ $1$ into the counterterm graphs.} Hence the finding of \cite{bw} for a single operator insertion generalizes to Green's functions with double insertions. Here the second term in \eq{physdoub} corresponds to the graphs with the insertion of a local evanescent counterterm into the graphs depicted in fig.~\ref{Fig:1}, while the last two terms correspond to the diagrams of fig.~\ref{Fig:2} with one physical and one evanescent operator. \subsection{Double Insertions: Evanescent Scheme Dependences} \label{Sect:DoubleScheme} In this section we will answer questions~\ref{Q3} and \ref{Q4} from page~\pageref{Q3}. Let us first look at the dependence of the anomalous dimension tensor $\gamma_{kk',l}$ on the coefficients $a_{rs}$. First one notices that the LO $\gamma^{\left(0\right)}_{kk',l}$ is independent of the choice of the $a_{rs}$. In the NLO case one derives, in a way completely analogous to the procedure presented in section~\ref{Sect:Scheme}, the following relation \begin{eqnarray} \gamma^{\left(1\right)}_{kk',l}\left(a'\right) &=& \gamma^{\left(1\right)}_{kk',l}\left(a\right) + \left[D \cdot \left(a'-a\right)\right]_{ks} \gamma^{\left(0\right)}_{sk',l} + \left[D \cdot \left(a'-a\right)\right]_{k' s} \gamma^{\left(0\right)}_{sk,l} \label{SchemeAnomBi1} \end{eqnarray} with the diagonal matrix $D$ from \eq{DiagMatrix1}. Note that the indices only run over the physical subspace.
The variation of the anomalous dimension tensor $\gamma_{k k',l}$ with the coefficients $\tilde{a}_{rs}$ again vanishes in LO; in NLO we find the transformation \begin{eqnarray} \gamma^{\left(1\right)}_{kk',l}\left(\tilde{a}'\right) &=& \gamma^{\left(1\right)}_{kk',l}\left(\tilde{a}\right) + \gamma^{\left(0\right)}_{kk',i} \left[\tilde{Z}^{-1,\left(1\right)}_{1}\right]_{i E_{1i}} \left[\tilde{a}'-\tilde{a}\right]_{il} - 2 \beta_{0} \left[Z^{-1,\left(1\right)}_{1}\right]_{kk',\tilde{E}_{1i}} \left[\tilde{a}'-\tilde{a}\right]_{il} \nonumber \\ & & + \left[ \gamma^{\left(0\right)}_{kj} \delta_{k' j'} + \delta_{kj} \gamma^{\left(0\right)}_{k' j'} \right] \left[Z^{-1,\left(1\right)}_{1}\right]_{jj',\tilde{E}_{1i}} \left[\tilde{a}'-\tilde{a}\right]_{il} \nonumber \\ & & - \left[Z^{-1,\left(1\right)}_{1}\right]_{kk',\tilde{E}_{1i}} \left[\tilde{a}'-\tilde{a}\right]_{is} \tilde{\gamma}^{\left(0\right)}_{sl} \; . \end{eqnarray} As in the case of single insertions, up to the NLO level there exists no dependence of $\gamma$ on the coefficients $b_{rs}$ and none on the $\tilde{b}_{rs}$. This provides a nontrivial check of the treatment of evanescent operators in a practical calculation, when the $b_{rs}$, $\tilde{b}_{rs}$ are kept arbitrary: the individual renormalization factors $Z$ each exhibit a dependence on the coefficients $b_{rs}$, $\tilde{b}_{rs}$, but all of this dependence cancels when the $Z$'s are combined into $\gamma$. Next we will elaborate on the scheme independence of RG improved physical observables. First look at the solution of the inhomogeneous RG equation \eq{inh} for the local operator's Wilson coefficient: \begin{eqnarray} \tilde{C}_{l}\!\left(g\left(\mu\right)\right) &=& \tilde{U}^{\left(0\right)}_{l l'} \left(g\!\left(\mu\right),g_{0}\right) \tilde{C}_{l'}\!\left(g_{0}\right) \nonumber \\ & & + \left[\delta_{l l'} + \frac{g^{2}\left(\mu\right)}{16\pi^2} \tilde{J}_{l l'} \right] \cdot \int\limits_{g_{0}}^{g\left(\mu\right)} {\rm d} g'\; \tilde{U}^{\left(0\right)}_{l' k} \left(g\!\left(\mu\right),g'\right) \left[\delta_{k k'} - \frac{g'^{2}}{16\pi^2} \tilde{J}_{k k'} \right] \nonumber \\ & & \hspace{1cm} \cdot \left[\delta_{n n'} + \frac{g'^{2}}{16\pi^2} J_{n n'} \right] U^{\left(0\right)}_{n' t}\left(g',g_{0}\right) \left[\delta_{t t'} - \frac{g'^{2}}{16\pi^2} J_{t t'} \right] C_{t'}\!\left(g_{0}\right) \nonumber \\ & & \hspace{1cm} \cdot \left[\delta_{m m'} + \frac{g'^{2}}{16\pi^2} J_{m m'} \right] U^{\left(0\right)}_{m' v}\left(g',g_{0}\right) \left[\delta_{v v'} - \frac{g'^{2}}{16\pi^2} J_{v v'} \right] C_{v'}\!\left(g_{0}\right) \nonumber \\ & & \hspace{1cm} \cdot \left\{ - \frac{\gamma^{\left(0\right)}_{nm,k'}}{\beta_{0}} \frac{1}{g'} + \left[ \frac{\beta_{1}}{\beta_{0}^2} \gamma^{\left(0\right)}_{nm,k'} - \frac{\gamma^{\left(1\right)}_{nm,k'}}{\beta_{0}} \right] \frac{g'}{16\pi^2} \right\} . \label{inhsol} \end{eqnarray} Here the matrices $U^{\left(0\right)}$, $\tilde{U}^{\left(0\right)}$ denote the LO evolution matrices stemming from the solution of the homogeneous RG equations for the Wilson coefficients $C$, $\tilde{C}$, which reads \begin{eqnarray} U^{\left(0\right)}\left(g,g_{0}\right) &=& \left[\frac{g_{0}}{g}\right] ^{ \frac{{\gamma^{\left(0\right)}}^{T}}{\beta_{0}}} . \end{eqnarray} We have not labeled the evolution matrices with the renormalization scales $\mu$, $\mu_{0}$ but rather with the corresponding coupling constants $g\left(\mu\right)$ and $g_{0}=g\left(\mu_{0}\right)$.
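Numerically, the matrix power in $U^{(0)}$ and the matrix equation for $J$ quoted in the next paragraph both reduce to simple linear algebra in the eigenbasis of ${\gamma^{(0)}}^{T}$. The following minimal sketch (in Python; the matrices $\gamma^{(0)}$, $\gamma^{(1)}$ are generic placeholders rather than the anomalous dimensions of any specific operator basis, and ${\gamma^{(0)}}^{T}$ is assumed diagonalizable with $2\beta_0 + \lambda_i - \lambda_j \neq 0$) illustrates the construction:
\begin{verbatim}
# Minimal numerical sketch (placeholder matrices, not any specific
# operator basis): LO evolution matrix and the NLO matrix J.
import numpy as np

nf = 5
beta0 = 11.0 - 2.0*nf/3.0        # one-loop SU(3) beta coefficient
beta1 = 102.0 - 38.0*nf/3.0      # two-loop SU(3) beta coefficient

gamma0 = np.array([[-4.0, 8.0/3.0],   # placeholder LO matrix
                   [ 6.0, -2.0   ]])
gamma1 = np.array([[-10.0, 2.0],      # placeholder NLO matrix
                   [  3.0, 7.0]])

# eigenbasis of gamma0^T (assumed diagonalizable)
lam, V = np.linalg.eig(gamma0.T)
Vinv = np.linalg.inv(V)

def U0(g, g0):
    """LO evolution matrix (g0/g)^(gamma0^T/beta0)."""
    return (V * (g0/g)**(lam/beta0)) @ Vinv

# J solves  J + [gamma0^T/(2 beta0), J]
#             = -gamma1^T/(2 beta0) + beta1/(2 beta0^2) gamma0^T
rhs = -gamma1.T/(2.0*beta0) + beta1/(2.0*beta0**2)*gamma0.T
K = Vinv @ rhs @ V
J = V @ (K / (1.0 + (lam[:, None] - lam[None, :])/(2.0*beta0))) @ Vinv

# check that J indeed solves the matrix equation
lhs = J + (gamma0.T @ J - J @ gamma0.T)/(2.0*beta0)
assert np.allclose(lhs, rhs)

def U_NLO(g, g0):
    """NLO evolution (1 + g^2/(16 pi^2) J) U0 (1 - g0^2/(16 pi^2) J)."""
    a, a0 = g**2/(16.0*np.pi**2), g0**2/(16.0*np.pi**2)
    one = np.eye(len(lam))
    return (one + a*J) @ U0(g, g0) @ (one - a0*J)
\end{verbatim}
Componentwise in the eigenbasis, with $K$ the transformed right-hand side, the equation for $J$ reads $J'_{ij}\left[ 1 + (\lambda_i-\lambda_j)/(2\beta_0) \right] = K_{ij}$, which is what the code inverts; the function \texttt{U\_NLO} assembles the NLO-improved evolution in the form appearing inside \eq{inhsol}.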
The matrix $J$ is a solution of the matrix equation \cite{bjlw}: \begin{eqnarray} J + \left[ \frac{{\gamma^{\left(0\right)}}^{T}}{2 \beta_{0}}, J \right] &=& - \frac{{\gamma^{\left(1\right)}}^{T}}{2 \beta_{0}} + \frac{\beta_{1}}{2 \beta_{0}^2} {\gamma^{\left(0\right)}}^{T} . \end{eqnarray} The matrices $\tilde{U}^{\left(0\right)}$, $\tilde{J}$ are defined analogously in terms of $\tilde{\gamma}$. If $\gamma$ transforms according to \eq{result}, we know from \cite{bjlw} that $J$ transforms as \begin{eqnarray} J(a^\prime) &=& J(a) - \left[ D \cdot (a^\prime -a) \right]^{T} , \label{SchemeJ1} \end{eqnarray} which can be easily verified from \eq{depma}. Hence after inserting \eq{SchemeWC1}, \eq{SchemeAnomBi1} and \eq{SchemeJ1} into \eq{inhsol} one finds that $\tilde{C}_l$ is independent of the coefficients $a_{kl}$. In a way similar to the one described above, one treats the scheme dependence coming from the coefficients $\tilde{a}_{kl}$. Here some work has been necessary to prove the cancellation of the scheme dependence connected to $g^{\prime 2} \tilde{J}_{kk^\prime}$ and $\gamma^{(1)}_{nm,k^\prime}$ in \eq{inhsol}: Although it is not possible to perform the integration in \eq{inhsol} without transforming some of the operators to the diagonal basis, one can do the integral for the scheme dependent part of \eq{inhsol}, because the part of the integrand depending on the $\tilde{a}_{kl}$'s is a total derivative with respect to $g$. There is one important difference compared to the case of the dependence on the $a_{kl}$'s: A scheme dependence of the Wilson coefficient stemming from the lower end of the RG evolution remains. This is a well-known feature of RG improved perturbation theory \cite{bjlw}. This residual $\tilde{a}_{kl}$ dependence must be canceled by a corresponding one in the hadronic matrix element. If the matrix elements are obtained in perturbation theory, one can show that the $\tilde{a}_{kl}$ dependence of the $\tilde{C}_{l}$ indeed cancels completely. Finally, as in the case of single insertions \cite{bjlw}, one can define a scheme-independent Wilson coefficient for the local operator \begin{eqnarray} \overline{\tilde{C}_{l}\!\left(\mu\right)} &=& \left[ \delta_{l l'} + \frac{g^2\left(\mu\right)}{16\pi^2} \cdot \tilde{r}_{l' l} \right] \tilde{C}_{l'} + \frac{g^2\left(\mu\right)}{16\pi^2} \cdot \tilde{r}_{nm,l} \cdot C^{\left(0\right)}_{n}\!\left(\mu\right) C^{\left(0\right)}_{m}\!\left(\mu\right) + O\left(g^4\right) , \end{eqnarray} which multiplies a scheme-independent matrix element defined accordingly. It contains the analogue of $\hat{r}$ in \cite{bjlw} for the double insertion \begin{eqnarray} \left\langle \frac{i}{2} \int {\, \rm \bf T \,} Q_{n} Q_{m} \right\rangle^{\left(0\right)} &=& \frac{g^2}{16\pi^2} \cdot \tilde{r}_{nm,l} \cdot \left\langle \tilde{Q}_{l} \right\rangle^{\left(0\right)} . \end{eqnarray} \section{Inclusive Decays} \label{Sect:Inclusive} Inclusive decays are calculated either by computing the renormalized amplitude and performing a subsequent phase space integration and a summation over final polarizations etc.\ (referred to as method 1) or by use of the optical theorem, which corresponds to taking the imaginary part of the self-energy diagram depicted in fig.~\ref{Fig:3} (method 2).
\begin{figure}[htb] \centerline{ \rotate[r]{ \epsfysize=5cm \epsffile{fig3.ps} }} \caption{The lowest order self-energy diagram needed for the calculation of inclusive decays via the optical theorem (method~2).} \label{Fig:3} \end{figure} This figure shows that inclusive decays are in fact related to double insertions, but in contrast to the case of section~\ref{Sect:Double} they do not involve local four-quark operators as counterterms for double insertions. In fact, even local two-quark operator counterterms would only be needed to renormalize the real part, but the imaginary part of their matrix elements clearly vanishes. The only scheme dependence to be discussed is therefore the one associated with the $a_{kl}$'s, $b_{kl}$'s, etc., as there are no $\tilde{a}_{kl}$'s, $\tilde{b}_{kl}$'s, etc.\ involved. To discuss the dependence on the $a_{kl}$'s it is nevertheless advantageous to consider method~1, i.e.\ the calculation of the amplitude plus the subsequent phase space integration. {From} section~\ref{Sect:Scheme} we already know most of the properties of the RG improved amplitude: At the upper renormalization scale the properly renormalized evanescent operators do not contribute and the scheme dependence cancels. Further we know the scheme dependence of the (RG improved) Wilson coefficients at the lower renormalization scale, because with \eq{SchemeWC1} and \eq{result} we can use the result of \cite{bjlw}. What we are left with is the calculation of the matrix elements of the properly renormalized operators in perturbation theory, i.e.\ with on-shell external momenta. Clearly the form of the external states does not affect the scheme dependent terms of the matrix elements; they are again given by \eq{depma} and therefore trivially cancel between the Wilson coefficients and the matrix elements, because the scheme dependent terms are independent of the external momenta. Since we now have a finite amplitude which is scheme independent, we may continue the calculation in four dimensions and therefore forget about the evanescent operators. The remaining phase space integration and summation over final polarizations does not introduce any new scheme dependence; therefore we end up with a rate independent of the $a_{kl}$'s, $b_{kl}$'s, etc. \footnote{We discard problems due to infrared singularities and the Bloch-Nordsieck theorem. At least in NLO one can use a gluon mass, because no three-gluon vertex contributes to the relevant diagrams.} Alternatively one may use the approach via the optical theorem (method~2). Then one has to calculate the imaginary parts of the diagram in fig.~\ref{Fig:3} plus gluonic corrections. Of course the properly renormalized operators have to be plugged in: \begin{eqnarray} {\rm Im}\, \left\langle \hat{O}^{\re}_{k} \hat{O}^{\re}_{l} \right\rangle . \end{eqnarray} One immediately ends up with a finite rate. All we have to show is the consistency of the optical theorem with the presence of evanescent operators and with their arbitrary definition proposed in \eq{DefEvan1}, \eq{defe2}. This means that evanescent operators must not contribute to the rate, i.e.\ diagrams containing an insertion of one or two evanescent operators must be of order $\varepsilon$ \begin{eqnarray} {\rm Im}\, \left\langle \hat{E}_{i}\left[O_{k}\right]^{\re} \hat{O}^{\re}_{l} \right\rangle = O\left(\varepsilon\right) \hspace{0.3cm} &\mbox{and}& \hspace{0.3cm} {\rm Im}\, \left\langle \hat{E}_{i}\left[O_{k}\right]^{\re} \hat{E}_{j}\left[O_{l}\right]^{\re} \right\rangle = O\left(\varepsilon\right) .
\label{CondInclNoEva} \end{eqnarray} As in the previous sections one can discuss tensor integrals and Dirac algebra separately, leading to \eq{CondInclNoEva}. \section{Conclusions} \label{Sect:Concl} In this work we have analyzed the effect of different definitions of evanescent operators. We have shown that one may arbitrarily shift any evanescent operator by $(D-4)$ times any physical operator without affecting the block-triangular form of the anomalous dimension matrix, which ensures that properly renormalized evanescent operators do not mix into physical ones. In particular, one is not forced to use the definition of the evanescent operators proposed in \cite{dg}, whose implementation is quite cumbersome. Then we have analyzed the renormalization scheme dependence associated with the redefinition transformation at next-to-leading order in renormalization group improved perturbation theory. We stress that it is meaningless to give some anomalous dimension matrix or some Wilson coefficients beyond leading logarithms without specifying the definition of the evanescent operators used during the calculation. In physical observables, however, this renormalization scheme dependence cancels between Wilson coefficients and the anomalous dimension matrix. One may take advantage of this feature by defining the evanescent operators such as to achieve a simple form for the anomalous dimension matrix. Then we have extended the work of \cite{bw} and \cite{dg} to the case of Green's functions with two operator insertions and have also analyzed the above-mentioned renormalization scheme dependence. For this we have set up the NLO renormalization group formalism for four-quark Green's functions with two operator insertions, derived the renormalization scheme dependence of the corresponding anomalous dimension tensors, and defined scheme-independent Wilson coefficients. Finally we have analyzed inclusive decay rates. \section*{Acknowledgements} The authors thank Andrzej Buras and Miko{\l}aj Misiak for useful discussions.
\section{On the Nature of Young Globular Clusters} \label{sec1} When studying the myriads of point-like luminous sources brighter than any individual star on \emph{HST} images of \emph{ongoing} mergers (e.g., NGC 4038/39, NGC 3256), one would like to know which ones---or at least what fraction---will survive as globular clusters (GC). Yet, it is very difficult to distinguish gravitationally bound young star clusters from unbound OB associations or even spurious asterisms. As it turns out, the adopted operational definition for ``cluster'' may determine the answers to the scientific questions we ask about these objects. Modern astronomical dictionaries universally include in their definition of ``star cluster'' (open or globular) the requirement that it be \emph{gravitationally bound}, thus distinguishing it from any looser, expanding ``stellar association'' (e.g., \cite{hopk76,ridp97}). As I explain in Sect.~\ref{sec2} below, I believe that our present inability to make this distinction for many stellar aggregates younger than 10--20~$t_{\rm cr}$ (internal crossing times) in ongoing mergers leads to a notion of ``infant mortality'' that is seriously exaggerated. In recent merger \emph{remnants}, where the merger-induced starburst has subsided (e.g., NGC 3921, NGC 7252), the definition of a \emph{young globular cluster (YGC)} is easier and more secure. Any young compact stellar aggregate older than 10--20 $t_{\rm cr}$ ($\sim$20--40 Myr), more massive than a few $10^4 M_{\odot}$, and with a half-light radius $R_{\rm eff}$ comparable to that of a typical Milky-Way globular (say, $R_{\rm eff} \lapprox 10$~pc) is most likely gravitationally bound and, hence, a YGC. It is the size requirement that places stringent upper limits on any possible expansion velocity ($\lapprox$0.2--0.5 \kms) and thus guarantees that the cluster is gravitationally bound. An important result to emerge from recent \emph{HST} and follow-up studies of YGCs concerns their masses. These masses not only cover the full range observed in old Milky-Way GCs ($\sim$10$^4$ -- $5\times 10^6 \msun$), but also extend to nearly $10^8 \msun$ or $\sim$20$\times$ the mass of $\omega$~Cen at the high-mass end. The most massive YGCs are invariably found in remnants of gas-rich major mergers such as NGC 7252 \cite{ss98,mara04}, NGC 1316 \cite{bast06}, and NGC 5128 \cite{mart04}. Interestingly, dynamical masses determined from velocity dispersions agree well with photometric masses based on cluster-evolution models with normal (e.g., Salpeter, Kroupa, or Chabrier) initial mass functions (IMFs). Therefore, some earlier worries that YGCs formed in mergers may have highly unusual stellar IMFs (e.g., \cite{brod98}) seem now unfounded. Relatively little work has been done so far on the brightness profiles and detailed structural parameters (core and tidal radii) of YGCs in mergers. Yet, the subject looks promising. Radial profiles of selected YGCs in NGC 4038 suggest that the initial power-law envelopes of YGCs may be tidally stripped within the first few hundred Myr, while the core radii may grow \cite{whit99}. Correlations between core radius and cluster age are known to exist for the young cluster populations of the Magellanic Clouds (e.g., \cite{mack03}) and deserve further study via the rich cluster populations of ongoing mergers and merger remnants. \section{Formation and Early Evolution} \label{sec2} Star clusters form in giant molecular clouds (GMC), where optical extinction can be very significant.
Hence the question arises what fraction of all young clusters may be missed by ``optical'' surveys made with \emph{HST} ($0.3\lapprox\lambda\lapprox1.0\,\mu$). This question has been addressed by Whitmore \& Zhang \cite{wz02} for the ``Overlap Region'' of NGC 4038/39, which is known to harbor some of the most IR-luminous young clusters, yet appears heavily extincted at optical wavelengths while emitting brightly at 8$\mu$ \cite{wang04}. A comparison between optical clusters and strong thermal radio sources shows that 85\% of the latter have optical counterparts, whence even in this extreme region only $\sim$15\% of all clusters have been missed by \emph{HST} surveys \cite{wz02}. Measured cluster extinctions lie in the range $0.5\lapprox A_V\lapprox 7.6$ mag and diminish to $A_V\lapprox 1.0$ mag for clusters 6 Myr and older. This suggests that cluster winds disperse most of the natal gas rapidly, and that optically-derived luminosity functions for clusters older than $\sim$6 Myr should not be too incomplete. \subsection{Cluster Luminosity Functions} \label{sec21} To first order, the luminosity functions (LF) of young-cluster systems in merger galaxies are well approximated by a power law of the form\ \ $\Phi(L) dL \propto L^{-\alpha} dL$\ \ with $1.7\lapprox \alpha\lapprox 2.1$\ \ \cite{ws95,meur95,whit03}. The similarities between this power law and the power-law mass function of GMCs, including the similar observed mass ranges, strongly suggest that young clusters form from GMCs suddenly squeezed by a rapid increase in the pressure of the surrounding gas \cite{jog92,hapu94,elme97} (see also Sect.~\ref{sec23}). \begin{figure} \centering \includegraphics[width=11.6truecm]{schweizer_fig1.eps} \caption{Luminosity functions for candidate young star clusters in NGC 4038/39 from \emph{HST} observations with \textit{(left)} WFC1 \cite{ws95} and \textit{(right)} WFPC2 \cite{whit99}.} \label{fig1} \end{figure} Deep \emph{HST} observations of mergers with rich cluster systems suggest that the cluster LFs may have a break (``knee'') whose position varies from merger to merger (NGC 4038/39 \cite{whit99}; NGC 3256 \cite{zepf99}; M51 \cite{giel06}). Figure~\ref{fig1} displays for NGC 4038/39 both the original cluster LF \cite{ws95} and two versions of the deeper LF \cite{whit99} showing a break around $M_V = -10.0$ to $-$10.3. The interpretation of these breaks is presently controversial. Either the breaks reflect brightness-limited-selection effects (Whitmore et al., in prep.) or they may indicate a maximum cluster mass \cite{giel06}. In the latter case, the measured LF breaks in the above three mergers would seem to suggest that the maximum mass increases with the vehemence of the merger, presumably indicating that under increased gas pressure GMCs coagulate into more massive aggregates. \subsection{Star-Cluster Formation vs Clustered Star Formation} \label{sec22} The \emph{age distribution} of young clusters in NGC 4038/39 has recently been derived for two mass-limited subsamples defined by $M > 3\times 10^4 \msun$ and $M > 2\times 10^5 \msun$ \cite{fall05}. The masses themselves are estimates based on \emph{HST} photometry in $U\!BV\!I$ and H$\alpha$ plus Bruzual-Charlot \cite{bc03} cluster evolution models. The number distributions for both subsamples decline steeply with age $\tau$, approximately as $dN/d\tau \propto \tau^{-1}$. Thus, it would seem that $\sim$90\% of all clusters disrupt during each age decade.
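The underlying arithmetic is easily made explicit: for an (assumed) roughly constant cluster-formation rate, the number of clusters \emph{formed} with ages between $\tau$ and $10\tau$ grows linearly with $\tau$, whereas the observed distribution yields equal numbers per age decade,
\begin{equation}
N_{\rm obs}(\tau,10\tau) \propto \int_{\tau}^{10\tau} \frac{d\tau'}{\tau'} = \ln 10 = {\rm const}, \qquad N_{\rm formed}(\tau,10\tau) \propto 9\tau \; ,
\end{equation}
so that the surviving fraction must decline by a factor of $\sim$10 from each age decade to the next.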
The median age of the clusters is a mere $\sim$10$^7$ yr, which Fall et al.\ interpret as evidence for rapid disruption, dubbed ``infant mortality.'' These authors guess that ``very likely ... most of the young clusters are not gravitationally bound and were disrupted near the times they formed by the energy and momentum input from young stars to the ISM of the protoclusters.'' In my opinion, it is unfortunate that this loose, non-astronomical use of the word ``cluster'' may reinforce an increasingly popular view that most stars form in clusters. By the traditional astronomical definition of star clusters as gravitationally bound aggregates, most of the objects tallied by Fall et al.\ in The Antennae are not clusters, but likely young stellar associations. It seems to me in much better accord with a rich body of astronomical evidence gathered during the past 50 years to state that---although \emph{star formation is clearly clustered}---even in mergers gravitationally bound clusters (open and globular) form relatively rarely and \emph{contain $<$10\% of all newly-formed stars}. I believe that only with such careful distinction can we hope to study the true disruptive effects that affect any gravitationally bound star cluster over time, including mass loss due to stellar evolution and evaporation by two-body relaxation and gravitational shocks. Further reason for caution is provided by the recent discovery that even in nearby M31, four of six claimed YGCs have turned out to be spurious asterisms when studied with adaptive optics \cite{cohe05}. Clearly, there is considerable danger in calling all luminous point-like (at \emph{HST} resolution) sources in the distant NGC 4038/39 young ``clusters''! \subsection{Shocks and High Pressure} \label{sec23} Shocks and high pressure have long been suggested to be the main drivers of GC formation in gas-rich mergers and responsible for the increased specific frequency $S_N$ of GCs observed in descendent elliptical galaxies \cite{schw87,jog92,az92}. Much new evidence supports this hypothesis. \emph{Chandra} X-ray observations of the hot ISM in merger-induced starbursts, and especially in NGC 4038/39 \cite{fabb04}, show that the pressure in the hot, 10$^6$--10$^7\,$K ISM of a merger can exceed 10$^{-10}$ dyn cm$^{-2}$ and is typically 10--100 times higher than it is in the hot ISM of our local Galactic neighborhood (e.g., \cite{bald06,veil05}). Thus GMCs in mergers do indeed experience strongly increased pressure from the surrounding gas. The general pressure increase is driven principally by gravitational torques between the gas and stellar bars, which tend to brake the gas and lead to rapid inflows and density increases (e.g., \cite{bh96,mh96}). What has become clearer only recently is how much accompanying \emph{shocks} may affect the spatial distribution of star and cluster formation. As Barnes \cite{barn04} shows via numerical simulations, star-formation recipes that include not only the gas density (i.e., Schmidt--Kennicutt laws), but also the local rate of energy dissipation in shocks, lead to spatially more extended star and cluster formation that tends to occur earlier during the merger. A model with mainly shock-induced star formation for The Mice (NGC 4676) leads to significantly better agreement with the observations of H$\;$II regions and young clusters than one with only density-dependent star formation.
Shock-induced star formation may also explain why cluster formation is already so vehement and widespread in The Antennae, where the two disks---currently on their second approach---are still relatively intact. \begin{figure} \centering \includegraphics[width=11.6truecm]{schweizer_fig2.eps} \caption{Radial velocities of young clusters in NGC 4038/39, measured with \emph{HST}/STIS (at H$\alpha$) along three lines crossing 7 major regions, each with many clusters. The three slit positions are shown at \textit{upper left}, while \textit{lower left panel} shows slit position across regions D, C, and B in more detail. After gradient subtraction, the cluster-to-cluster velocity dispersion is \,$<$10--12 \kms\ \cite{whit05}.} \label{fig2} \end{figure} Are the shocks in mergers generated by high-velocity, 50--100 \kms\ cloud--cloud collisions \cite{kuma93} or more by large-scale gas motions? A high-resolution study with \emph{HST}/STIS of the radial velocities of many dozens of young clusters in 7 regions of The Antennae shows that the average cluster-to-cluster radial-velocity dispersion is\ \ $\sigma_{\rm v,cl} < 10$--12 \kms\ \cite{whit05}, as illustrated in Fig.~\ref{fig2}. This relatively low velocity dispersion argues strongly against high-velocity cloud--cloud collisions and in favor of the general pressure increase being what triggers GMCs into forming clusters \cite{jog92,elme97}. \section{Young Metal-Rich Halo Globulars} \label{sec3} There are several advantages to studying YGCs in relatively recent, about 0.3--3 Gyr old merger remnants: (1) Dust obscuration is much less of a problem than in ongoing mergers. (2) Most point-like luminous sources in such remnants are true GCs, since time has acted to separate the wheat from the chaff (= expanding associations), and clusters are now typically $>$100 Myr or $>$25--50 $t_{\rm cr}$ old. And (3), the remnants themselves appear to be evolving into bona fide early-type galaxies. Therefore, YGCs formed during the mergers can provide key evidence on processes that must have shaped GC populations in older E and S0 galaxies as well. \emph{HST} studies of recent merger remnants such as NGC 3921 \cite{schw96}, NGC 7252 \cite{mill97}, and NGC 3597 \cite{carl99} show that these galaxies typically host about 10$^2$--10$^3$ point-like sources that appear to be mostly \emph{young} GCs ($\lapprox$1 Gyr old). (It is not that there are no old GCs in these relatively distant remnants, only that the YGCs are much brighter and more easily studied.) Age-dating based on both broad-band photometry and spectroscopy shows that the majority of these YGCs formed in relatively short, 100--200 Myr time spans during the mergers. The YGCs appear strongly concentrated toward their hosts' centers, half of them lying typically within $\lapprox$5 kpc from the nucleus. The few spectroscopic studies that have so far been made of such YGCs invariably show them to be of approximately solar metallicity: $[Z] = 0.0\pm 0.1$ in NGC 7252 \cite{ss98}, $0.0\pm 0.5$ in NGC 3921 \cite{schw04}, and---for the intermediate-age, $\sim$3--5 Gyr old GCs in more advanced remnants---$[Z] = 0.0\pm 0.15$ in NGC 1316 \cite{goud01} and $-0.1\pm 0.2$ in NGC 5128 \cite{peng04}. Such near-solar metallicities in recently formed GCs are, of course, not unexpected and might not seem worth emphasizing, were it not for the fact that the YGCs with these metallicities all show \emph{halo} kinematics (see refs.\ above).
Therefore, the inevitable conclusion is that major mergers of gas-rich disk galaxies produce \emph{young metal-rich halo GCs}. The existence of significant populations of such clusters in merger remnants ranging from $\sim$0.5 Gyr to 4--5 Gyr in age, together with observational and theoretical evidence that the remnants themselves are young to intermediate-age ellipticals, provides a strong link to the old metal-rich GC populations observed in virtually all E and many S0 galaxies (see \cite{schw03} and Goudfrooij's contribution in this volume for further details). \section{Implications for Old Metal-Poor Globular Clusters} \label{sec4} Perhaps the main result from studies of GC formation in mergers is that the process is driven by strong pressure increases that squeeze GMCs into rapid cluster formation. Observations show that the pressures in the ISM can exceed 10$^{-10}$ dyn cm$^{-2}$ already early on in a merger (Sect.~\ref{sec23}), while simulations of gas-rich mergers demonstrate that most of the pressure increase is driven gravitationally \cite{bh96,mh96,barn04}. These facts raise the question of whether some nearly universal pressure increase may have caused the formation of the old metal-poor GCs that are so omnipresent in all types of galaxies and environments. Cen \cite{cen01} points out that the cosmological reionization at $z \approx 15$--7 may have provided just such a universal pressure increase. Ionization fronts driven by the external radiation field may have generated inward convergent shocks in gas-rich sub-galactic halos, which in turn triggered GMCs into forming clusters. If so, the formation of metal-poor GCs from early GMCs in many of these halos may have been nearly synchronous. If Cen's hypothesis is correct, most GCs in the universe may have formed from shocked GMCs. The first-generation GCs formed near-simultaneously from low-metallicity GMCs shocked by the pressure increase accompanying cosmological reionization. Later-generation (``second-generation'') GCs formed during subsequent galaxy mergers from metal-enriched GMCs present in the merging components and shocked by the rapid, gravitationally driven pressure increases of the mergers. Major disk mergers, some of which continue to occur to the present day, led to elliptical remnants with a mixture of first- and second-generation GCs that can still be traced by their bimodal color distributions. Finally, a minority of second-generation GCs seem to form sporadically from occasional pressure increases in calmer environments, such as in interacting irregulars and barred spirals. \section{Conclusions} \label{sec5} During mergers, increased gas pressure leads to much \emph{apparent} cluster formation, but most of the stellar aggregates are unbound and disperse. Gravitationally bound globular and open clusters are relatively \emph{rare} and seem to contain $<$10\% of all stars formed in the starbursts. Major gas-rich mergers form not only E and S0 galaxies, but also their metal-rich ``second-generation'' GCs. Specifically, in the local universe young remnants of such major mergers appear as protoelliptical galaxies with subpopulations of young metal-rich halo GCs (NGC 3921, NGC 7252; later NGC 1316, NGC 5128). The evidence is now strong that these second-generation GCs form from giant molecular clouds in the merging disks, squeezed into collapse by large-scale shocks and high gas pressure rather than by high-velocity cloud--cloud collisions.
Similarly, first-generation metal-poor GCs may have formed during cosmological reionization from low-metallicity giant molecular clouds squeezed by the reionization pressure. \\ \\ \textbf{Acknowledgement.} I thank Brad Whitmore for his permission to reproduce some figures.
\section{Introduction} Please follow the steps outlined below when submitting your manuscript to the IEEE Computer Society Press. This style guide now has several important modifications (for example, you are no longer warned against the use of sticky tape to attach your artwork to the paper), so all authors should read this new version. \subsection{Language} All manuscripts must be in English. \subsection{Dual submission} Please refer to the author guidelines on the CVPR 2017 web page for a discussion of the policy on dual submissions. \subsection{Paper length} Papers, excluding the references section, must be no longer than eight pages in length. The references section will not be included in the page count, and there is no limit on the length of the references section. For example, a paper of eight pages with two pages of references would have a total length of 10 pages. {\bf There will be no extra page charges for CVPR 2017.} Overlength papers will simply not be reviewed. This includes papers where the margins and formatting are deemed to have been significantly altered from those laid down by this style guide. Note that this \LaTeX\ guide already sets figure captions and references in a smaller font. The reason such papers will not be reviewed is that there is no provision for supervised revisions of manuscripts. The reviewing process cannot determine the suitability of the paper for presentation in eight pages if it is reviewed in eleven. \subsection{The ruler} The \LaTeX\ style defines a printed ruler which should be present in the version submitted for review. The ruler is provided in order that reviewers may comment on particular lines in the paper without circumlocution. If you are preparing a document using a non-\LaTeX\ document preparation system, please arrange for an equivalent ruler to appear on the final output pages. The presence or absence of the ruler should not change the appearance of any other content on the page. The camera ready copy should not contain a ruler. (\LaTeX\ users may uncomment the \verb'\cvprfinalcopy' command in the document preamble.) Reviewers: note that the ruler measurements do not align well with lines in the paper --- this turns out to be very difficult to do well when the paper contains many figures and equations, and, when done, looks ugly. Just use fractional references (e.g.\ this line is $095.5$), although in most cases one would expect that the approximate location will be adequate. \subsection{Mathematics} Please number all of your sections and displayed equations. It is important for readers to be able to refer to any particular equation. Just because you didn't refer to it in the text doesn't mean some future reader might not need to refer to it. It is cumbersome to have to use circumlocutions like ``the equation second from the top of page 3 column 1''. (Note that the ruler will not be present in the final copy, so is not an alternative to equation numbers). All authors will benefit from reading Mermin's description of how to write mathematics: \url{http://www.pamitc.org/documents/mermin.pdf}. \subsection{Blind review} Many authors misunderstand the concept of anonymizing for blind review. Blind review does not mean that one must remove citations to one's own work---in fact it is often impossible to review a paper unless the previous citations are known and available. Blind review means that you do not use the words ``my'' or ``our'' when citing previous work. That is all. (But see below for techreports.) 
Saying ``this builds on the work of Lucy Smith [1]'' does not say that you are Lucy Smith; it says that you are building on her work. If you are Smith and Jones, do not say ``as we show in [7]'', say ``as Smith and Jones show in [7]'' and at the end of the paper, include reference 7 as you would any other cited work. An example of a bad paper just asking to be rejected: \begin{quote} \begin{center} An analysis of the frobnicatable foo filter. \end{center} In this paper we present a performance analysis of our previous paper [1], and show it to be inferior to all previously known methods. Why the previous paper was accepted without this analysis is beyond me. [1] Removed for blind review \end{quote} An example of an acceptable paper: \begin{quote} \begin{center} An analysis of the frobnicatable foo filter. \end{center} In this paper we present a performance analysis of the paper of Smith \etal [1], and show it to be inferior to all previously known methods. Why the previous paper was accepted without this analysis is beyond me. [1] Smith, L and Jones, C. ``The frobnicatable foo filter, a fundamental contribution to human knowledge''. Nature 381(12), 1-213. \end{quote} If you are making a submission to another conference at the same time, which covers similar or overlapping material, you may need to refer to that submission in order to explain the differences, just as you would if you had previously published related work. In such cases, include the anonymized parallel submission~\cite{Authors14} as additional material and cite it as \begin{quote} [1] Authors. ``The frobnicatable foo filter'', F\&G 2014 Submission ID 324, Supplied as additional material {\tt fg324.pdf}. \end{quote} Finally, you may feel you need to tell the reader that more details can be found elsewhere, and refer them to a technical report. For conference submissions, the paper must stand on its own, and not {\em require} the reviewer to go to a techreport for further details. Thus, you may say in the body of the paper ``further details may be found in~\cite{Authors14b}''. Then submit the techreport as additional material. Again, you may not assume the reviewers will read this material. Sometimes your paper is about a problem which you tested using a tool which is widely known to be restricted to a single institution. For example, let's say it's 1969, you have solved a key problem on the Apollo lander, and you believe that the CVPR70 audience would like to hear about your solution. The work is a development of your celebrated 1968 paper entitled ``Zero-g frobnication: How being the only people in the world with access to the Apollo lander source code makes us a wow at parties'', by Zeus \etal. You can handle this paper like any other. Don't write ``We show how to improve our previous work [Anonymous, 1968]. This time we tested the algorithm on a lunar lander [name of lander removed for blind review]''. That would be silly, and would immediately identify the authors. Instead write the following: \begin{quotation} \noindent We describe a system for zero-g frobnication. This system is new because it handles the following cases: A, B. Previous systems [Zeus et al. 1968] didn't handle case B properly. Ours handles it by including a foo term in the bar integral. ... The proposed system was integrated with the Apollo lunar lander, and went all the way to the moon, don't you know. It displayed the following behaviours which show how well we solved cases A and B: ... 
\end{quotation} As you can see, the above text follows standard scientific convention, reads better than the first version, and does not explicitly name you as the authors. A reviewer might think it likely that the new paper was written by Zeus \etal, but cannot make any decision based on that guess. He or she would have to be sure that no other authors could have been contracted to solve problem B. FAQ: Are acknowledgements OK? No. Leave them for the final copy. \begin{figure}[t] \begin{center} \fbox{\rule{0pt}{2in} \rule{0.9\linewidth}{0pt}} \end{center} \caption{Example of caption. It is set in Roman so that mathematics (always set in Roman: $B \sin A = A \sin B$) may be included without an ugly clash.} \label{fig:long} \label{fig:onecol} \end{figure} \subsection{Miscellaneous} \noindent Compare the following:\\ \begin{tabular}{ll} \verb'$conf_a$' & $conf_a$ \\ \verb'$\mathit{conf}_a$' & $\mathit{conf}_a$ \end{tabular}\\ See The \TeX book, p165. The space after \eg, meaning ``for example'', should not be a sentence-ending space. So \eg is correct, {\em e.g.} is not. The provided \verb'\eg' macro takes care of this. When citing a multi-author paper, you may save space by using ``et alia'', shortened to ``\etal'' (not ``{\em et.\ al.}'' as ``{\em et}'' is a complete word.) However, use it only when there are three or more authors. Thus, the following is correct: `` Frobnication has been trendy lately. It was introduced by Alpher~\cite{Alpher02}, and subsequently developed by Alpher and Fotheringham-Smythe~\cite{Alpher03}, and Alpher \etal~\cite{Alpher04}.'' This is incorrect: ``... subsequently developed by Alpher \etal~\cite{Alpher03} ...'' because reference~\cite{Alpher03} has just two authors. If you use the \verb'\etal' macro provided, then you need not worry about double periods when used at the end of a sentence as in Alpher \etal. For this citation style, keep multiple citations in numerical (not chronological) order, so prefer \cite{Alpher03,Alpher02,Authors14} to \cite{Alpher02,Alpher03,Authors14}. \begin{figure*} \begin{center} \fbox{\rule{0pt}{2in} \rule{.9\linewidth}{0pt}} \end{center} \caption{Example of a short caption, which should be centered.} \label{fig:short} \end{figure*} \section{Formatting your paper} All text must be in a two-column format. The total allowable width of the text area is $6\frac78$ inches (17.5 cm) wide by $8\frac78$ inches (22.54 cm) high. Columns are to be $3\frac14$ inches (8.25 cm) wide, with a $\frac{5}{16}$ inch (0.8 cm) space between them. The main title (on the first page) should begin 1.0 inch (2.54 cm) from the top edge of the page. The second and following pages should begin 1.0 inch (2.54 cm) from the top edge. On all pages, the bottom margin should be 1-1/8 inches (2.86 cm) from the bottom edge of the page for $8.5 \times 11$-inch paper; for A4 paper, approximately 1-5/8 inches (4.13 cm) from the bottom edge of the page. \subsection{Margins and page numbering} All printed material, including text, illustrations, and charts, must be kept within a print area 6-7/8 inches (17.5 cm) wide by 8-7/8 inches (22.54 cm) high. Page numbers should appear in the footer, centered and 0.75 inches from the bottom of the page, and should start at your assigned page number rather than at the 4321 used in the example. To do this, find the line (around line 23) \begin{verbatim} \setcounter{page}{4321} \end{verbatim} where the number 4321 is your assigned starting page.
Make sure the first page is numbered by commenting out the command (around line 46) that leaves the first page empty: \begin{verbatim} %\thispagestyle{empty} \end{verbatim} \subsection{Type-style and fonts} Wherever Times is specified, Times Roman may also be used. If neither is available on your word processor, please use the font closest in appearance to Times to which you have access. MAIN TITLE. Center the title 1-3/8 inches (3.49 cm) from the top edge of the first page. The title should be in Times 14-point, boldface type. Capitalize the first letter of nouns, pronouns, verbs, adjectives, and adverbs; do not capitalize articles, coordinate conjunctions, or prepositions (unless the title begins with such a word). Leave two blank lines after the title. AUTHOR NAME(s) and AFFILIATION(s) are to be centered beneath the title and printed in Times 12-point, non-boldface type. This information is to be followed by two blank lines. The ABSTRACT and MAIN TEXT are to be in a two-column format. MAIN TEXT. Type main text in 10-point Times, single-spaced. Do NOT use double-spacing. All paragraphs should be indented 1 pica (approx. 1/6 inch or 0.422 cm). Make sure your text is fully justified---that is, flush left and flush right. Please do not place any additional blank lines between paragraphs. Figure and table captions should be 9-point Roman type as in Figures~\ref{fig:onecol} and~\ref{fig:short}. Short captions should be centered. \noindent Callouts should be 9-point Helvetica, non-boldface type. Initially capitalize only the first word of section titles and first-, second-, and third-order headings. FIRST-ORDER HEADINGS. (For example, {\large \bf 1. Introduction}) should be Times 12-point boldface, initially capitalized, flush left, with one blank line before, and one blank line after. SECOND-ORDER HEADINGS. (For example, { \bf 1.1. Database elements}) should be Times 11-point boldface, initially capitalized, flush left, with one blank line before, and one after. If you require a third-order heading (we discourage it), use 10-point Times, boldface, initially capitalized, flush left, preceded by one blank line, followed by a period and your text on the same line. \subsection{Footnotes} Please use footnotes\footnote {This is what a footnote looks like. It often distracts the reader from the main flow of the argument.} sparingly. Indeed, try to avoid footnotes altogether and include necessary peripheral observations in the text (within parentheses, if you prefer, as in this sentence). If you wish to use a footnote, place it at the bottom of the column on the page on which it is referenced. Use Times 8-point type, single-spaced. \subsection{References} List and number all bibliographical references in 9-point Times, single-spaced, at the end of your paper. When referenced in the text, enclose the citation number in square brackets, for example~\cite{Authors14}. Where appropriate, include the name(s) of editors of referenced books.
Many readers (and reviewers), even of an electronic copy, will choose to print your paper in order to read it. You cannot insist that they do otherwise, and therefore must not assume that they can zoom in to see tiny details on a graphic. When placing figures in \LaTeX, it's almost always best to use \verb+\includegraphics+, and to specify the figure width as a multiple of the line width as in the example below {\small\begin{verbatim} \usepackage[dvips]{graphicx} ... \includegraphics[width=0.8\linewidth] {myfile.eps} \end{verbatim} } \subsection{Color} Please refer to the author guidelines on the CVPR 2017 web page for a discussion of the use of color in your document. \section{Final copy} You must include your signed IEEE copyright release form when you submit your finished paper. We MUST have this form before your paper can be published in the proceedings. Please direct any questions to the production editor in charge of these proceedings at the IEEE Computer Society Press: Phone (714) 821-8380, or Fax (714) 761-1784. \bibliographystyle{ieee} \section{Introduction} The Variational Autoencoder (VAE) \cite{KingmaWelling2014} is a newly introduced tool for unsupervised learning of a distribution $p(\mathbf{x})$ from which a set of training samples $\mathbf{x}$ is drawn. It learns the parameters of a generative model, based on sampling from a latent variable space $\mathbf{z}$, and approximating the distribution $p(\mathbf{x}|\mathbf{z})$. By designing the latent space to be easy to sample from (e.g. Gaussian) and choosing a flexible generative model (e.g. a deep belief network), a VAE can provide a flexible and efficient means of generative modeling. One limitation of this model is that the dimension of the latent space and the number of parameters in the generative model are fixed in advance. This means that while the model parameters can be optimized for the training data, the capacity of the model must be chosen a priori, assuming some foreknowledge of the training data characteristics. In this paper we present an approach that utilizes Bayesian non-parametric models \cite{e2013,GelmanCarlinStern2014,OrbanzTeh2011,HjortHolmesMuellerEtAl2010} to produce an \emph{infinite} mixture of autoencoders. This infinite mixture is capable of growing with the complexity of the data to best capture its intrinsic structure. Our motivation for this work is the task of semi-supervised learning. In this setting, we have a large volume of unlabelled data but only a small number of labelled training examples. In our approach, we train a generative model using unlabelled data, and then use this model combined with whatever labelled data is available to train a discriminative model for classification. We demonstrate that our infinite VAE outperforms both the classical VAE and standard classification methods, particularly when the number of available labelled samples is small.
This is because the infinite VAE is able to more accurately capture the distribution of the unlabelled data. It therefore provides a generative model that allows the discriminative model, which is trained based on its output, to be more effectively learnt using a small number of samples. The main contribution of this paper is twofold: (1) we provide a Bayesian non-parametric model for combining autoencoders, in particular variational autoencoders. This bridges the gap between non-parametric Bayesian methods and deep neural networks; (2) we provide a semi-supervised learning approach that utilizes the infinite mixture of autoencoders learned by our model for prediction from a small number of labeled examples. The rest of the paper is organized as follows. In Section \ref{sec:related} we review relevant methods, while in Section \ref{sec:Variational-Auto-encoder} we briefly provide background on the variational autoencoder. In Section \ref{sec:Infinite-Mixture-of} our non-parametric Bayesian approach to an infinite mixture of VAEs is introduced. We provide the mathematical formulation of the problem and show how the combination of Gibbs sampling and variational inference can be used for efficient learning of the underlying structure of the input. Subsequently, in Section \ref{sec:Semi-Supervised-Learning-using}, we combine the infinite mixture of VAEs as an unsupervised generative approach with discriminative deep models to perform prediction in a semi-supervised setting. In Section \ref{sec:Experiments} we provide an empirical evaluation of our approach on various datasets including natural images and 3D shapes. We use various discriminative models, including the Residual Network \cite{HeZhangRenEtAl2015}, in combination with our model and show that our approach is capable of outperforming our baselines. \section{Related Work} \label{sec:related} Most successful learning algorithms, especially in deep learning, require a large volume of labeled instances for training. Semi-supervised learning seeks to utilize the unlabeled data to achieve strong generalization by exploiting a small set of labeled examples. For instance, unlabeled data from the web is used with label propagation in \cite{EbertFritzSchiele2013} for classification. Similarly, semi-supervised learning has been applied to object detection in videos \cite{MisraShrivastavaHebert2015} and in images \cite{WangHebert2015,FuSigal2016}. Most of these approaches are developed by either (a) performing a projection of the unlabeled and labeled instances to an embedding space and using nearest neighbors to utilize the distances to infer the labels, similar to label propagation in shallow \cite{KangJinSukthankar2006,wang2009multi,InKimTompkinPfisterEtAl2015} or deep networks \cite{WestonRatleMobahiEtAl2012}; or (b) formulating some variation of a joint generative-discriminative model that uses the latent structure of the unlabeled data to better learn the decision function with labeled instances. For example, ensemble methods \cite{ChenWang2008,MallapragadaJinJainEtAl2009,LeistnerSaffariSantnerEtAl2009,Zhou2011,DaiGool2013} assign pseudo-class labels based on the constructed ensemble learner and in turn use them to find a suitable new learner to be added to the ensemble. In recent years, deep generative models have gained attention with the success of Restricted Boltzmann machines (and their infinite variation \cite{CoteLarochelle2015}) and autoencoders (e.g.
\cite{KingmaMohamedRezendeEtAl2014,LarochelleMandelPascanuEtAl2012}) with their stacked variation \cite{VincentLarochelleLajoieEtAl2010}. The representations learned from these unsupervised approaches are used for supervised learning. Other approaches related to ours are adversarial networks \cite{GoodfellowPouget-AbadieMirzaEtAl2014,MiyatoMaedaKoyamaEtAl2016,MakhzaniShlensJaitlyEtAl2015}, in which the generative and discriminative models are trained jointly. This approach penalizes the generative model as long as the samples drawn from it do not perform well against the discriminative model, in a min-max optimization. Although theoretically well justified, such models have proved difficult to train. Our formulation for semi-supervised learning is also related to the Highway \cite{SrivastavaGreffSchmidhuber2015} and Memory \cite{WestonChopraBordes2014} networks, which seek to combine multiple channels of information that capture various aspects of the data for better prediction, even though their approaches mainly focus on depth. \section{Variational autoencoder\label{sec:Variational-Auto-encoder}} While autoencoders typically assume a deterministic latent space, in a variational autoencoder the latent variable is stochastic. The input $\mathbf{x}$ is generated from a variable $\mathbf{z}$ in that latent space. Since the joint distribution of the input with all the latent variables integrated out is intractable, we resort to variational inference (hence the name). The model is defined as: \begin{eqnarray*} p_{\boldsymbol{\theta}}(\mathbf{z}) & = & \mathcal{N}(\mathbf{z};0,\mathbf{I}),\\ p_{\boldsymbol{\theta}}(\mathbf{x}|\mathbf{z}) & = & \mathcal{N}(\mathbf{x};\mu(\mathbf{z}),\sigma(\mathbf{z})\mathbf{I}),\\ q_{\phi}(\mathbf{z}|\mathbf{x}) & = & \mathcal{N}(\mathbf{z};\mu(\mathbf{x}),\sigma(\mathbf{x})\mathbf{I}), \end{eqnarray*} where $\boldsymbol{\theta}$ and $\boldsymbol{\phi}$ are the parameters of the model to be found. The objective is then to minimize the following loss, \begin{eqnarray} & -\underbrace{\mathbb{E}_{\mathbf{z}\sim q(\mathbf{z}|\mathbf{x})}\left[\log p(\mathbf{x}|\mathbf{z})\right]}_{\text{reconstruction error}}+\underbrace{\text{KL}\left(q_{\phi}(\mathbf{z}|\mathbf{x})||p(\mathbf{z})\right)}_{\text{regularization}}.\label{eq:vae_loss} \end{eqnarray} \begin{figure} \centering{}\includegraphics[scale=0.5]{encoder.pdf}\caption{Variational encoder: the solid lines are direct connections and the dotted lines indicate sampling. The input layer, represented by $\mathbf{x}$, and the hidden layer $\mathbf{h}$ determine the moments of the variational distribution, from which the latent variable $\mathbf{z}$ is sampled.} \label{fig:var_encoder} \end{figure} The first term in this loss is the reconstruction error, or expected negative log-likelihood of the datapoint. The expectation is taken with respect to the encoder's distribution over the representations, approximated by taking a few samples. This term encourages the decoder to learn to reconstruct the data when using samples from the latent distribution; a large error indicates the decoder is unable to reconstruct the data. A schematic network of the encoder is shown in Figure \ref{fig:var_encoder}. As shown, a deep network learns the mean and variance of a Gaussian from which samples of $\mathbf{z}$ are subsequently generated. The second term is the Kullback-Leibler divergence between the encoder's distribution $q_{\phi}(\mathbf{z}|\mathbf{x})$ and the prior $p(\mathbf{z})$.
This divergence measures how much information is lost when $q$ is used in place of the prior over $\mathbf{z}$, and it encourages the latent distribution to stay close to the Gaussian prior. To perform inference efficiently, a reparameterization trick is employed~\cite{KingmaWelling2014} that, in combination with deep neural networks, allows the model to be trained with backpropagation. \section{Infinite Mixture of Variational autoencoders\label{sec:Infinite-Mixture-of}} \begin{figure} \begin{centering} \includegraphics[scale=0.5]{infinite_mix} \par\end{centering} \caption{The infinite mixture of variational autoencoders, shown as a block within which VAE components operate. Each latent variable $z_{i}$ (one-dimensional in this illustration) in each VAE is drawn from a Gaussian distribution. Solid lines indicate nonlinear encoding and dashed lines are decoders. In this diagram, $\boldsymbol{\phi}$ and $\boldsymbol{\theta}$ are the parameters of the encoder and decoder, respectively.} \label{fig:Infinite-mixture-of-VAE} \end{figure} An autoencoder in its classical form seeks to find an embedding of the input such that its reproduction has the least discrepancy. A variational autoencoder modifies this notion by introducing a \emph{Bayesian} view in which the latent variables, conditioned on the input, are given a distribution from whose samples the input can be reconstructed, while ensuring that the distribution of the latent variable stays close to a Gaussian with zero mean and unit variance. A single variational autoencoder has a fixed capacity and thus might not be able to capture the complexity of the input well. However, by using a collection of VAEs and adapting the number of VAEs in the collection to fit the data, we can ensure that we are able to model the data. In our \emph{infinite mixture}, we seek a mixture of these variational autoencoders whose capacity can theoretically grow to infinity. Each autoencoder is then able to capture a particular aspect of the data. For instance, one might be better at representing round structures, and another better at straight lines. This mixture intuitively represents the various underlying aspects of the data. Moreover, since each VAE models the \emph{uncertainty} of its representations through the density of the latent variable, we know how confident each autoencoder is in reconstructing the input. One advantage of our non-parametric mixture model is that we take a Bayesian approach in which the distribution of the parameters is taken into account. As such, we capture the uncertainty of the model parameters: autoencoders that are less confident about their reconstruction have less effect on the output. As shown in Figure \ref{fig:Infinite-mixture-of-VAE}, each encoder finds a distribution for the embedding variable with some probability through a nonlinear transform (convolutional or fully connected layers in a neural net). Each autoencoder in the mixture block produces a probability measure of its ability to reconstruct the input. This behavior has parallels to the brain's ability to develop specialized regions responsible for particular visual tasks and for processing particular types of image pattern. Mixture models are traditionally built using a pre-determined number of weighted components. Each weight coefficient determines how likely it is for a predictor to be successful in producing an accurate output. These coefficients are drawn from a multinomial distribution where the number of coefficients is fixed.
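To make this contrast concrete, the following sketch (ours, for illustration only; the specific counts and variable names are assumptions, not taken from the paper) draws a fixed set of mixing coefficients from a symmetric Dirichlet prior, and then computes the collapsed assignment probabilities that arise once those coefficients are integrated out, which is precisely the quantity derived next. \begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
alpha, C = 2.0, 5                  # concentration parameter and truncation level

# Finite mixture view: a fixed number of coefficients drawn from Dir(alpha/C).
pi = rng.dirichlet(np.full(C, alpha / C))

# Collapsed (Dirichlet-process) view: with pi integrated out, the prior
# assignment probability of instance i depends only on occupation numbers.
counts = np.array([12, 7, 3])      # hypothetical occupation numbers
n = counts.sum() + 1               # n instances, including instance i itself

p_existing = counts / (n - 1 + alpha)   # assign to an existing component
p_new = alpha / (n - 1 + alpha)         # open a brand-new component
assert np.isclose(p_existing.sum() + p_new, 1.0)
\end{verbatim} In the model below, the raw counts are replaced by the data-dependent occupation measure $\eta_{c}(\mathbf{x}_{i})$, so the assignment also reflects how well each VAE reconstructs the instance.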
On the other hand, to learn an infinite mixture of variational autoencoders in a non-parametric Bayesian manner, we employ a \emph{Dirichlet process}. In a Dirichlet process, unlike in traditional mixture models, we assume the probability of each component is drawn from a multinomial with a Dirichlet prior. The advantage of taking this approach is that we can integrate over all possible mixing coefficients, which allows the number of components to be determined based on the data. \begin{algorithm} \input{alg.tex}\caption{Learning an infinite mixture of variational autoencoders} \label{alg:alg1} \end{algorithm} Formally, let $\mathbf{c}$ be the assignment matrix of each instance to a VAE component (that is, which VAE best reconstructs instance $i$) and $\boldsymbol{\pi}$ be the mixing coefficient prior for $\mathbf{c}$. For $n$ unlabeled instances we model the infinite mixture of VAEs as, {\small \begin{eqnarray*} p(\mathbf{c}, \boldsymbol{\pi},\boldsymbol{\theta},\mathbf{x}_{1,\ldots,n},\alpha)\!\!\!\! & = &\!\!\!\! p(\mathbf{c}|\boldsymbol{\pi})p(\boldsymbol{\pi}|\alpha) \int p_{\boldsymbol{\theta}}(\mathbf{x}_{1,\ldots,n}|\mathbf{c},\mathbf{z})p(\mathbf{z})d\mathbf{z} \end{eqnarray*} } We assume the mixing coefficients are drawn from a Dirichlet distribution with parameter $\alpha$ (see Figure \ref{fig:dir_alpha} for examples), \begin{eqnarray*} \pi_{1},\ldots,\pi_{C}\,|\,\alpha & \sim & \text{Dir}(\alpha/C). \end{eqnarray*} To determine the membership of each instance in one of the components of the mixture model, i.e. the likelihood that each variational autoencoder is able to encode the input and reconstruct it with minimum loss, we compute the conditional probability of membership. This conditional probability of each instance belonging to an autoencoder component is computed by integrating over all mixing components $\boldsymbol{\pi}$, that is \cite{Rasmussen2000,RasmussenGhahramani2002}, {\small \begin{eqnarray*} p(\mathbf{c},\boldsymbol{\theta},\mathbf{x}_{1,\ldots,n},\alpha) & = & \int\!\!\int\prod_{i}^{n}p_{\boldsymbol{\theta}_{\mathbf{c}_{i}}}(\mathbf{x}_{i}|\mathbf{z}_{\mathbf{c}_{i}})p(\mathbf{z}_{\mathbf{c}_{i}})p(\mathbf{c}|\boldsymbol{\pi})p(\boldsymbol{\pi}|\alpha)d\boldsymbol{\pi}\,d\mathbf{z}_{\mathbf{c}_{i}} \end{eqnarray*} } This integration accounts for \emph{all possible} membership coefficients over all assignments of the instances to VAEs. The distribution of $\mathbf{c}$ is multinomial, for which the Dirichlet distribution is the conjugate prior, and as such this integration is tractable. To perform inference for the parameters $\boldsymbol{\theta}$ and $\mathbf{c}$ we perform block Gibbs sampling, iterating between optimizing $\boldsymbol{\theta}$ for each VAE and updating the assignments in $\mathbf{c}$. The optimization uses the variational autoencoder's reparameterization trick, minimizing the loss in Equation \ref{eq:vae_loss}. To update $\mathbf{c}$, we perform the following Gibbs sampling: \begin{itemize} \item The conditional probability that an instance $i$ belongs to VAE $c$: \begin{eqnarray} p(\mathbf{c}_{i}=c|\mathbf{c}_{{\backslash}i},\mathbf{x}_{i},\alpha) & = & \frac{\eta_{c}(\mathbf{x}_{i})}{n-1+\alpha}\label{eq:current_label_prob} \end{eqnarray} where $\eta_{c}(\mathbf{x}_{i})$ is the \emph{occupation number} of cluster $c$, computed excluding instance $i$ from the $n$ instances.
We define \begin{eqnarray*} \eta_{c}(\mathbf{x}_{i}) & = & (n-1)\,p_{\boldsymbol{\theta}_{c}}(\mathbf{c}_{i}=c|\mathbf{x}_{i}), \end{eqnarray*} and {\footnotesize \begin{eqnarray*} p_{\boldsymbol{\theta}_{c}}(\mathbf{c}_{i}=c|\mathbf{x}_{i}) & = & \frac{\exp\left(\mathbb{E}_{\mathbf{z}_{c}\sim q_{\phi_{c}}(\mathbf{z}|\mathbf{x})}\left[\log p_{\boldsymbol{\theta}_{c}}(\mathbf{x}_{i}|\mathbf{z}_{c})\right]\right)}{\sum_{j}\exp\left(\mathbb{E}_{\mathbf{z}_{j}\sim q_{\phi_{j}}(\mathbf{z}|\mathbf{x})}\left[\log p_{\boldsymbol{\theta}_{j}}(\mathbf{x}_{i}|\mathbf{z}_{j})\right]\right)}\label{eq:label_assign} \end{eqnarray*}} which evaluates how likely an instance $\mathbf{x}_{i}$ is to be assigned to the $c$th VAE, using latent samples $\mathbf{z}_{c}$. \item The probability that instance $i$ is not well represented by any of the existing autoencoders and a new encoder has to be generated: \end{itemize} \begin{eqnarray} p(\mathbf{c}_{i}=c|\mathbf{c}_{{\backslash}i},\mathbf{x}_{i},\alpha) & = & \frac{\alpha}{n-1+\alpha}.\label{eq:new_label_prob} \end{eqnarray} \begin{figure}[t] \begin{centering} \subfigure[$\alpha=0.99$]{\includegraphics[scale=0.14]{images/dir/dir_0.990.990.99}}\subfigure[$\alpha=2$]{\includegraphics[scale=0.14]{images/dir/dir_222}}\subfigure[$\alpha=50$]{\includegraphics[scale=0.14]{images/dir/dir_505050}} \par\end{centering} \caption{Dirichlet distribution for various values of $\alpha$. Smaller values of $\alpha$ tend to concentrate the mass in the corners (in this simplex example, and in general as the dimension increases). These smaller values reduce the chance of generating new autoencoder components.} \label{fig:dir_alpha} \end{figure} Note that in principle $\eta_{c}(\mathbf{x}_{i})$ is a measure calculated by excluding the $i$th instance from the observations, so that its membership is calculated with respect to its ``similarity'' to other members of the cluster. However, here we use the $c$th VAE's reconstruction probability as an estimate of this occupation number for performance reasons. This is justified so long as the influence of a single observation on the latent representation of an encoder is negligible. When a sample for the new assignment is drawn from the multinomial distribution in Equation \ref{eq:current_label_prob}, there is a chance for a completely different VAE to fit this new instance. If the new VAE is not successful in fitting it, the instance will be assigned back to its original VAE with high probability in the subsequent iteration. The entire learning process is summarised in Algorithm \ref{alg:alg1}. To improve performance, at each iteration of our approach we keep track of the changes in the $c$th VAE's assignments in a set $A_{c}$. This allows us to efficiently update each VAE using a backpropagation operation for the new assignments. We perform two operations after the VAE assignments are done: (1) \emph{forget}, and (2) \emph{learn}. In the forgetting stage, we unlearn the instances that are no longer assigned to the given VAE. This is done by performing a gradient update with a negative learning rate, i.e. \emph{reverse backpropagation}. In the learning stage, on the other hand, we update the parameters of the given VAE with a positive learning rate for the newly assigned instances, as is commonly done using backpropagation. This alternation allows structurally similar instances that can share latent variables to be learned by a single VAE, while those that are not well suited are forgotten.
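As a minimal sketch of one sweep of this procedure (illustrative only: \texttt{expected\_log\_likelihood} and \texttt{sgd\_step} are assumed hooks on a VAE object, not real library calls, and pre-training and bookkeeping are omitted): \begin{verbatim}
import numpy as np

def gibbs_assign(x_i, vaes, alpha, n, rng):
    """Sample a (possibly new) VAE index for instance x_i."""
    # Softmax over expected log-likelihoods gives p(c_i = c | x_i).
    log_lik = np.array([v.expected_log_likelihood(x_i) for v in vaes])
    resp = np.exp(log_lik - log_lik.max())
    resp /= resp.sum()
    eta = (n - 1) * resp                              # occupation numbers
    probs = np.append(eta, alpha) / (n - 1 + alpha)   # last entry: new VAE
    return rng.choice(len(probs), p=probs)

def update_vae(vae, gained, lost, lr=1e-3):
    """Forget instances that left this VAE, then learn the new ones."""
    for x in lost:                        # reverse backpropagation
        vae.sgd_step(x, learning_rate=-lr)
    for x in gained:                      # ordinary backpropagation
        vae.sgd_step(x, learning_rate=+lr)
\end{verbatim} Returning an index equal to \texttt{len(vaes)} corresponds to Equation \ref{eq:new_label_prob}, i.e. spawning a fresh VAE for the instance.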
To reconstruct an input $\mathbf{x}$ with an infinite mixture, the expected reconstruction is defined as: \begin{equation} \mathbb{E}[\mathbf{x}]=\sum_{c}p_{\boldsymbol{\theta}_{c}}(\mathbf{c}_{i}=c|\mathbf{x}_{i})\,\mathbb{E}_{q_{\phi_{c}}(\mathbf{z}_{c}|\mathbf{x})}\left[\mathbf{x}|\mathbf{z}_c\right].\label{eq:expected_x} \end{equation} That is, we use each VAE to reconstruct the input and weight the result with the probability of that VAE (this probability is inversely proportional to the variance of each VAE). \section{Semi-Supervised Learning using Infinite autoencoders\label{sec:Semi-Supervised-Learning-using}} Many of deep neural networks' greatest successes have been in supervised learning, which depends on the availability of large labeled datasets. However, in many problems such datasets are unavailable and alternative approaches, such as a combination of generative and discriminative models, have to be employed. In semi-supervised learning, where the number of labeled instances is small, we employ our infinite mixture of VAEs to assist supervised learning. Inspired by the \emph{mixture of experts} \cite[Chapter 11]{Murphy2012}, we formulate the problem of predicting output $y^{*}$ for the test example $\mathbf{x}^{*}$ as, \begin{eqnarray*} p(y^{*}|\mathbf{x}^{*}) & = & \sum_{c}^{C}\underbrace{p(y^{*}|\mathbf{x}^{*},\boldsymbol{\omega}_{c})}_{\text{deep discriminative}}\times\,\,\underbrace{p_{\boldsymbol{\theta}_{c}}(\mathbf{c}^{*}=c|\mathbf{x}^{*})}_{\text{deep generative}}. \end{eqnarray*} This formulation for prediction combines the discriminative power of a deep learner with parameter set $\boldsymbol{\omega}_{c}$ and a flexible generative model. For a given test instance $\mathbf{x}^{*}$, each discriminative expert produces a tentative output that is then weighted by the generative model. As such, each discriminative expert learns to perform better on instances that are more structurally similar from the generative model's perspective. During training we minimize the negative log of the discriminative term (log loss) weighted by the generative weight. Each instance's weight\textendash as calculated by the infinite autoencoder\textendash acts as an additional coefficient for the gradient in backpropagation. This leads to similar instances receiving stronger weights in the neural net during training. Moreover, it should be noted that the generative and discriminative models can share the deep parameters $\boldsymbol{\omega}_{c}$ and $\boldsymbol{\theta}_{c}$ at some level. In particular, in our implementation we only consider the parameters of the last layer to be distinct for each discriminative and generative component. We summarize our framework in Figure \ref{fig:complete_model}. While combining an unsupervised generative model with a supervised discriminative model is not itself novel, in our approach the generative model can grow to capture the complexity of the data. In addition, since we share the parameters of the discriminative and generative models, each unsupervised learner does not need to learn all the aspects of the input. In fact, in many classification problems with images, individual pixel values hardly matter in the final decision. As such, by sharing parameters, the unsupervised model incurs a heavier loss when the distribution of the latent variables does not encourage the correct final decision. This sharing is done by reusing the parameters that are initialized with labels.
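A minimal sketch of this prediction rule follows (ours; \texttt{experts} and \texttt{vae\_weights} are assumed callables standing in for the trained softmax experts and the generative gating term, respectively): \begin{verbatim}
import numpy as np

def moe_predict(x, experts, vae_weights):
    """p(y|x) = sum_c p(y | x, w_c) * p(c | x)."""
    gates = np.asarray(vae_weights(x))           # generative term, shape (C,)
    preds = np.stack([e(x) for e in experts])    # discriminative terms, (C, K)
    return gates @ preds                         # class probabilities, (K,)
\end{verbatim} During training, the same gates scale each labeled instance's log-loss gradient for expert $c$, so an expert is pulled most strongly toward the instances its associated VAE represents best.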
\begin{figure} \includegraphics[scale=0.5]{all} \caption{Our framework for an infinite mixture of VAEs and semi-supervised learning. We share the parameters of the discriminative model at the lower levels for more efficient training and prediction. For each VAE in the mixture we have an expert (e.g. a softmax) before the output. Thicker arrows indicate more probable connections.} \label{fig:complete_model} \end{figure} \section{Experiments \label{sec:Experiments}} In this section, we examine the performance of our approach for semi-supervised classification on various datasets. We investigate how the combination of the generative and discriminative networks is able to perform semi-supervised learning effectively. Since convergence of Gibbs sampling can be very slow, we first pre-train the base VAE with all the unlabeled examples. Each autoencoder is trained with a two-dimensional latent variable $\mathbf{z}$ and initialized randomly. Hence each new VAE is already capable of reconstructing the input to a certain extent. During the sampling steps, this VAE becomes more specialized in a particular structure of the inputs. To further facilitate sampling, we set the initial number of clusters equal to the number of classes and use $100$ random labeled examples to fine-tune the VAE assignments. At each iteration, if there is no instance assigned to a VAE, it is removed. As such, the mixture grows and shrinks with each iteration as instances are assigned to VAEs. We report the results over 3 trials. To compare the autoencoders' ability to internally capture the structure of the input, we compared the latent representation obtained by a single VAE with the expected latent representation from our approach in Equation \ref{eq:expected_x}, and subsequently trained a support vector machine (SVM) on each. For computing expectations, we used $20$ samples from the latent variable space. Once the generative model is learned with all the unlabelled instances using the infinite mixture model in Section \ref{sec:Infinite-Mixture-of}, we randomly select a subset of labeled instances for training the discriminative model. Throughout the experiments, we share the parameters in the discriminative architecture from the input to the last layer, so that each expert is represented by a softmax. We report classification results on various problems including handwritten binary images, natural images, and 3D shapes. Although the performance of our semi-supervised learning approach depends on the choice of the discriminative model, we observe that our approach outperforms the baselines, particularly with fewer labeled instances. For all training runs\textendash either discriminative or generative\textendash we set the maximum number of iterations to $1000$ with batch size $500$ for stochastic gradient descent with a constant learning rate of $0.001$. For the VAEs we use Adam \cite{KingmaBa2014} updates with $\beta_{1}=0.9$, $\beta_{2}=0.999$. However, we set a threshold on the changes in the loss to detect convergence and stop the training. Except for the binary images, where we use a binary decoder ($p_{\boldsymbol{\theta}}(\mathbf{x}|\mathbf{z})$ is binomial), our decoder is continuous ($p_{\boldsymbol{\theta}}(\mathbf{x}|\mathbf{z})$ is Gaussian), in which samples from the latent space are used to regenerate the input to compute the loss. In problems where the input is too complex for the autoencoder to perform well, we share the output of the last layer of the discriminative model with the VAEs.
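For concreteness, the optimization settings quoted above can be collected as follows (a sketch; the dictionary layout and the specific tolerance value are ours, since the text only states that a threshold on the loss change is used): \begin{verbatim}
config = dict(
    max_iters=1000,       # maximum SGD iterations per training run
    batch_size=500,
    lr=1e-3,              # constant learning rate
    adam_beta1=0.9,       # Adam moment decay rates for the VAE updates
    adam_beta2=0.999,
    latent_dim=2,         # dimension of z in each VAE
    n_latent_samples=20,  # samples used for latent expectations
    loss_tol=1e-4,        # assumed value for the convergence threshold
)

def converged(prev_loss, loss, tol=config["loss_tol"]):
    # Stop early once the change in loss falls below the threshold.
    return abs(prev_loss - loss) < tol
\end{verbatim}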
\subsection{MNIST Dataset} \begin{figure*} \input{mnist_images.tex} \caption{An illustration of the autoencoder's input reconstruction. The first row shows the original images. Reconstructions in Figures \ref{fig:mnist_reconst_2} and \ref{fig:mnist_reconst_3} are obtained using a single VAE. Images in the last row are obtained from the proposed mixture model of $18$ VAEs, each with $50$ hidden units. As seen, the reconstructed images are clearer in Figure \ref{fig:mnist_reconst_4}.} \label{fig:mnist_recons} \end{figure*} The MNIST dataset\footnote{http://yann.lecun.com/exdb/mnist/} contains $60,000$ training and $10,000$ test images of size $28\times28$ of handwritten digits. Some random images from this dataset are shown in Figure \ref{fig:mnist_reconst_1}. We use the original VAE algorithm (a single VAE) with $100$ iterations and $50$ hidden variables to learn a representation for these digits, with a binary distribution for the input $p_{\boldsymbol{\theta}}(\mathbf{x}|\mathbf{z})$. As shown in Figure \ref{fig:mnist_reconst_2}, these reconstructions are very unclear and at times wrong (6th column, where $7$ is wrongly reconstructed as $9$). Using this VAE as the base, we train an infinite mixture of our generative model. After $10$ iterations with $\alpha=2$, the expected reconstruction $\mathbb{E}\left[\mathbf{x}\right]$ is depicted in Figure \ref{fig:mnist_reconst_4}. We use 2 samples to compute $\mathbb{E}[\mathbf{x}]$ for the $c$th VAE. As observed, this reconstruction is visually better and the mistake in the 6th column is fixed. Further, Figure \ref{fig:mnist_reconst_3} shows reconstructions from a single VAE with $1024$ hidden units. It is interesting to note that even though our proposed model has a smaller total number of hidden units ($900$ vs $1024$), the reconstruction is better using our model. In Table \ref{tbl:reconstruction} we summarize the reconstruction error (that is, $\|\mathbf{x}-\mathbb{E}\left[\mathbf{x}\right]\|$) for our approach versus the original VAE. Our approach performs similarly to the single VAE when the total numbers of hidden units are comparable ($1000$ vs $1024$), and with a higher number of VAEs we are able to reduce the reconstruction error significantly. \begin{table} \centering{}{\footnotesize{}}% \begin{tabular}{|c|c|c|c|} \hline {\footnotesize{}Method} & {\footnotesize{}$C$} & {\footnotesize{}\# hidden units} & {\footnotesize{}Error}\tabularnewline \hline \hline \multirow{3}{*}{{\footnotesize{}Infinite Mixture}} & {\footnotesize{}2} & {\footnotesize{}100} & {\footnotesize{}$9.17$}\tabularnewline \cline{2-4} & {\footnotesize{}10} & {\footnotesize{}100} & {\footnotesize{}$5.12$}\tabularnewline \cline{2-4} & {\footnotesize{}17} & {\footnotesize{}100} & {\footnotesize{}$4.9$}\tabularnewline \hline \multirow{2}{*}{{\footnotesize{}VAE}} & {\footnotesize{}1} & {\footnotesize{}100} & {\footnotesize{}$5.92$}\tabularnewline \cline{2-4} & {\footnotesize{}1} & {\footnotesize{}1024} & {\footnotesize{}$5.1$}\tabularnewline \hline \end{tabular}\caption{Reconstruction error for the MNIST dataset, given as the norm of the difference between the input image and the expected reconstruction, comparing our approach with the original VAE. } \label{tbl:reconstruction} \end{table} To test our approach in a semi-supervised setting, we use a deep Convolutional Neural Net (CNN). Our deep CNN architecture consists of two convolutional layers, each with $32$ filters of size $5\times5$ and Rectified Linear Unit (ReLU) activations, with $2\times2$ max-pooling after each one.
We added a fully connected layer with $256$ hidden units followed by a dropout layer and then the softmax output layer. As shown in Table \ref{tbl:mnist_semi}, our infinite mixture with $17$ base VAEs is able to outperform most of the state-of-the-art methods. Only the recently proposed virtual adversarial network \cite{MiyatoMaedaKoyamaEtAl2016} performs better than ours with few labeled training examples. \begin{table} {\footnotesize{}}% \begin{tabular}{|c|c|c|c|} \hline {\footnotesize{}Method/Labels} & {\footnotesize{}100} & {\footnotesize{}1000} & {\footnotesize{}All}\tabularnewline \hline \hline {\footnotesize{}Pseudo-label \cite{Lee2013}} & {\footnotesize{}$10.49$} & {\footnotesize{}$3.64$} & {\footnotesize{}$0.81$}\tabularnewline \hline {\footnotesize{}EmbedNN \cite{WestonRatleMobahiEtAl2012}} & {\footnotesize{}$16.9$} & {\footnotesize{}$5.73$} & {\footnotesize{}$3.59$}\tabularnewline \hline {\footnotesize{}DGN \cite{KingmaMohamedRezendeEtAl2014}} & {\footnotesize{}$3.33\pm0.14$} & {\footnotesize{}$2.40\pm0.02$} & {\footnotesize{}$0.96$}\tabularnewline \hline {\footnotesize{}Adversarial \cite{GoodfellowPouget-AbadieMirzaEtAl2014}} & & & {\footnotesize{}$0.78$}\tabularnewline \hline {\footnotesize{}Virtual Adversarial \cite{MiyatoMaedaKoyamaEtAl2016}} & {\footnotesize{}$2.66$} & {\footnotesize{}$1.50$} & {\footnotesize{}$0.64\pm0.03$}\tabularnewline \hline {\footnotesize{}AtlasRBF \cite{PitelisRussellAgapito2014}} & {\footnotesize{}$8.10\pm0.95$} & {\footnotesize{}$3.68\pm0.12$} & {\footnotesize{}$1.31$}\tabularnewline \hline {\footnotesize{}PEA \cite{BachmanAlsharifPrecup2014}} & {\footnotesize{}$5.21$} & {\footnotesize{}$2.64$} & {\footnotesize{}$2.30$}\tabularnewline \hline {\footnotesize{}$\Gamma\text{-Model}$ \cite{RasmusValpolaHonkalaEtAl2015}} & {\footnotesize{}$4.34\pm2.31$} & {\footnotesize{}$1.71\pm0.07$} & {\footnotesize{}$0.79\pm0.05$}\tabularnewline \hline \hline {\footnotesize{}Baseline CNN} & {\footnotesize{}$8.62\pm1.87$} & {\footnotesize{}$4.16\pm0.35$} & \textbf{\footnotesize{}$0.68\pm0.02$}\tabularnewline \hline {\footnotesize{}Infinite Mixture} & {\footnotesize{}$3.93\pm0.5$} & \textbf{\footnotesize{}$2.29\pm0.2$} & \textbf{\footnotesize{}$0.6\pm0.02$}\tabularnewline \hline \end{tabular}\caption{Test error for MNIST with 17 clusters and 100 hidden variables. Only \cite{MiyatoMaedaKoyamaEtAl2016} reports better performance than ours.} \label{tbl:mnist_semi} \end{table} \begin{figure*} \centering{}\subfigure{\includegraphics[scale=0.18]{images/mean_vars_var_c_2}}\hspace{-3mm}\subfigure{\includegraphics[scale=0.18]{images/mean_vars_var_c_4}}\hspace{-3mm}\subfigure{\includegraphics[scale=0.18]{images/mean_vars_var_c_5}}\hspace{-3mm}\subfigure{\includegraphics[scale=0.18]{images/mean_vars_var_c_7}}\hspace{-3mm}\subfigure{\includegraphics[scale=0.18]{images/mean_vars_var_c_9}}\caption{Two-dimensional latent space found from training our infinite mixture of VAEs on the dogs dataset. We randomly selected 5 dog images and 5 images of anything else and plotted their latent representation in each VAE ($z_{1}$ for the first dimension and $z_{2}$ for the second one). The position of each circle represents the mean of the density for the given image in this space and its radius is the variance ($\mu$ and $\sigma$ in Equation \ref{eq:vae_loss}, respectively). As shown, the representations of non-dogs (blue circles) are generally clustered far away from those of the dogs (red circles).
Moreover, dogs have smaller variances than non-dogs; hence the VAEs are uncertain about the representations of images that were not seen during training.} \label{fig:dogs} \end{figure*} \begin{table*} \begin{centering} {\footnotesize{}}% \begin{tabular}{|c|c|c|c|c|} \hline {\footnotesize{}Method/Labels} & {\footnotesize{}100} & {\footnotesize{}1000} & {\footnotesize{}4000} & {\footnotesize{}All}\tabularnewline \hline \hline {\footnotesize{}AlexNet }\cite{KrizhevskySutskeverHinton2012} & {\footnotesize{}$69.59\pm3.21$} & {\footnotesize{}$86.72\pm0.66$} & {\footnotesize{}$89.88\pm0.03$} & {\footnotesize{}$90.26\pm0.25$}\tabularnewline \hline {\footnotesize{}Infinite Mixture} & {\footnotesize{}$75.81\pm1.83$} & {\footnotesize{}$89.28\pm0.19$} & {\footnotesize{}$90.68\pm0.05$} & {\footnotesize{}$91.69\pm0.17$}\tabularnewline \hline \hline {\footnotesize{}Latent VAE+SVM} & {\footnotesize{}$49.81\pm1.87$} & {\footnotesize{}$63.28\pm0.64$} & {\footnotesize{}$74.8\pm0.2$} & {\footnotesize{}$79.6\pm0.7$}\tabularnewline \hline {\footnotesize{}Latent Mixture+SVM} & {\footnotesize{}$58.1\pm2.63$} & {\footnotesize{}$72.28\pm0.2$} & {\footnotesize{}$79.8\pm0.18$} & {\footnotesize{}$83.9\pm0.24$}\tabularnewline \hline \end{tabular}{\footnotesize{}} \par\end{centering}{\footnotesize \par} \caption{Test accuracy of AlexNet on the dogs dataset compared to our proposed approach in the first two rows. The second two rows compare the latent representations obtained from a single VAE and from our approach.} \label{tbl:dogs} \end{table*} \subsection{Dogs Experiment} ImageNet is a dataset containing $1,461,406$ natural images manually labeled according to the WordNet hierarchy into 1000 classes. We select a subset of $10$ breeds of dogs for our experiment. These $10$ breeds are: ``Maltese dog, dalmatian, German shepherd, Siberian husky, St Bernard, Samoyed, Border collie, bull mastiff, chow, Afghan hound'', with $10,400$ training and $2,600$ test images. For an illustration of the latent space and how the mixture of VAEs is able to represent the uncertainty in the hidden variables, we use this dogs subset. We fine-tune a pre-trained AlexNet \cite{KrizhevskySutskeverHinton2012} as the base discriminative model and share the parameters with the generative model. In particular, we use the $4096$-dimensional output of the $7$th fully connected layer (fc7) as the input for both the softmax experts and the VAE autoencoders. We trained the generative model with all the unlabeled dog instances, used 1000 hidden units for each VAE, set $\alpha=2$, and stopped with $14$ autoencoders. We randomly select 5 images of dogs (from this ImageNet subset) and $5$ images of anything else (non-dogs from Flickr with Creative Commons License) for the illustration in Figure \ref{fig:dogs}. We plot the 2-dimensional latent representation of these images in $5$ VAEs of the learnt mixture. In each plot, the mean of the density of the latent variable $\mathbf{z}$ determines the position of the center of the circle and the variance is shown as its radius (we use the mean variance of the bivariate Gaussian for better illustration in a circle). These values are calculated from each VAE network as $\mu$ and $\sigma$ in Equation \ref{eq:vae_loss}. As shown, the images of non-dogs are generally clustered together in this latent space, which indicates they are recognized as different. In addition, the variances of the non-dogs are generally higher than those of the dogs.
As such, even when the means of the non-dogs are not discriminative enough (the dogs and non-dogs are not sufficiently well clustered apart in that VAE), we are \emph{uncertain} about the representations that are not dogs. This uncertainty leads to a lower probability for the assignment to the given VAE (from Equation \ref{eq:label_assign}) and subsequently smaller weights when learning the mixture of experts model. In Table \ref{tbl:dogs} the accuracy of AlexNet on this dogs subset is shown and compared with our infinite mixture approach. As seen, the infinite mixture performs better, particularly with fewer labeled instances. In addition, the latent representation of the infinite mixture (computed as an expectation), when used in an SVM, significantly outperforms that of a single VAE. This illustrates the ability of our model to better capture the underlying representations. \subsection{CIFAR Dataset} \begin{table} {\footnotesize{}}% \begin{tabular}{|c|c|c|c|} \hline {\footnotesize{}Method/Labels} & {\footnotesize{}1000} & {\footnotesize{}4000} & {\footnotesize{}All}\tabularnewline \hline \hline {\footnotesize{}Spike-and-slab \cite{GoodfellowCourvilleBengio2012}} & & {\footnotesize{}31.9} & \tabularnewline \hline {\footnotesize{}Maxout \cite{GoodfellowWarde-FarleyMirzaEtAl2013}} & & & {\footnotesize{}$9.38$}\tabularnewline \hline {\footnotesize{}GDI \cite{PuYuanStevensEtAl2015}} & & & {\footnotesize{}$8.27$}\tabularnewline \hline {\footnotesize{}Conv-Large \cite{RasmusValpolaHonkalaEtAl2015,SpringenbergDosovitskiyBroxEtAl2014}} & & {\footnotesize{}$23.3\pm30.61$} & {\footnotesize{}$9.27$}\tabularnewline \hline {\footnotesize{}$\Gamma\text{-Model}$ \cite{RasmusValpolaHonkalaEtAl2015}} & & {\footnotesize{}$20.09\pm0.46$} & {\footnotesize{}$9.27$}\tabularnewline \hline \hline {\footnotesize{}Residual Network \cite{HeZhangRenEtAl2015}} & {\footnotesize{}$10.08\pm1.12$} & {\footnotesize{}$8.04\pm.21$} & {\footnotesize{}$7.5\pm0.01$}\tabularnewline \hline {\footnotesize{}Infinite Mixture of VAEs} & {\footnotesize{}$8.72\pm0.45$} & {\footnotesize{}$7.78\pm0.13$} & {\footnotesize{}$7.5\pm0.02$}\tabularnewline \hline \end{tabular}\caption{Test error on CIFAR10 with various numbers of labeled training examples. The results reported in \cite{RasmusValpolaHonkalaEtAl2015} did not include image augmentations, although the original approach in \cite{SpringenbergDosovitskiyBroxEtAl2014} seems to offer up to $2\%$ error reduction with augmentation.} \label{tbl:cifar10} \end{table} The CIFAR-10 dataset \cite{KrizhevskyHinton2009} is composed of 10 classes of natural $32\times32$ RGB images, with $50,000$ images for training and $10,000$ images for testing. Our experiments show that a single VAE does not perform well at encoding this dataset, as is also confirmed in \cite{LarsenSoenderbyLarochelleEtAl2016}. However, since our objective is to perform semi-supervised learning, we use the Residual Network (ResNet) \cite{HeZhangRenEtAl2015}, a successful model for image representation, as the discriminative learner and share its parameters with our generative model. This approach is useful for complex problems where the unsupervised model alone may not be sufficient. In addition, autoencoders seek to preserve the distribution of the pixel values required for reconstructing the images, while this information has minimal impact on the final classification prediction. Therefore, such parameter sharing, in which the generative model is combined with the classifier, is necessary for better prediction.
As such, we fine-tune a ResNet and use the output of its $127$th layer as the input for the VAE. We use $2000$ hidden nodes and $\alpha=2$ to train an infinite mixture with $15$ VAEs. For training, we augmented the training images by padding them with $4$ pixels on each side and randomly cropping. Table \ref{tbl:cifar10} reports the test error of running our approach on this dataset. As shown, our infinite mixture of VAEs combined with the powerful discriminative model outperforms the state-of-the-art on this dataset. When all the training instances are used, the performance of our approach is the same as that of the discriminative model. This is because with larger labeled training sizes, the instance weights provided by the generative model are averaged out and lose their impact; therefore, all the experts become similar. With fewer labeled examples, on the other hand, each softmax expert specializes in a particular aspect of the data. \subsection{3D ModelNet} \begin{figure} \centering{}\includegraphics[scale=0.35]{images/shapenet10}\caption{ModelNet10 compared to 3D Shapenet \cite{WuSongKhoslaEtAl2015} and DeepPano \cite{ShiBaiZhouEtAl2015}, averaged over 3 trials.} \label{fig:modelnet} \end{figure} The ModelNet datasets were introduced in \cite{WuSongKhoslaEtAl2015} to evaluate 3D shape classifiers. ModelNet has $151,128$ 3D models classified into $40$ object categories, and ModelNet10 is a subset based on classes in the NYUv2 dataset \cite{SilbermanHoiemKohliEtAl2012}. The 3D models are voxelized to fit a $30\times30\times30$ grid and augmented by $12$ rotations. For the discriminative model we use a convolutional architecture similar to that of \cite{MaturanaScherer2015}, where we have a 3D convolutional layer with $32$ filters of size $5$ and stride $2$, a convolution of size $3$ and stride $1$, a max-pooling layer of size $2$, and a $128$-dimensional fully connected layer. Similar to the CIFAR-10 experiment, we share the parameters of the last fully connected layer between the infinite mixture of VAEs and the discriminative softmax. As shown in Figure \ref{fig:modelnet}, when using the whole dataset, our infinite mixture and the best result from \cite{MaturanaScherer2015} match at $92\%$ accuracy. However, as we reduce the number of labeled training examples, it is clear that our approach outperforms a single softmax classifier. Additionally, Table \ref{tbl:modelnet_svm} compares the accuracy of the latent representations obtained from our infinite mixture and from a single VAE, as measured by the performance of an SVM (a sketch of this evaluation follows below). As seen, the expected latent representation in our approach is significantly more discriminative and outperforms the single VAE. This is because we take into account the variations in the input and adapt to its complexity. While a single VAE has to capture the dataset in its entirety, each component of our approach is free to specialize in the instances assigned to it.
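The sketch below shows how such an expected latent representation can be computed and fed to an SVM; the encoder interface and names are illustrative assumptions, not our released code.
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC

def expected_latent(x, encoders, vae_weights):
    """E[z] = sum_c p(c|x) mu_c(x): the mixture responsibilities weight
    the posterior means returned by each VAE's encoder (assumed API)."""
    w = vae_weights(x)                            # shape (C,)
    mus = np.stack([enc(x) for enc in encoders])  # shape (C, dim_z)
    return w @ mus

def fit_svm_on_latents(X_labeled, y_labeled, encoders, vae_weights):
    Z = np.stack([expected_latent(x, encoders, vae_weights)
                  for x in X_labeled])
    return SVC().fit(Z, y_labeled)                # classifier on E[z]
\end{verbatim}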
\begin{table} \centering {\footnotesize \begin{tabular}{|c|c|c|c|} \hline Method/Labels & 100 & 1000 & All\tabularnewline \hline \hline VAE latent+SVM & 64.21 & 79.09 & 82.71\tabularnewline \hline Mixture latent+SVM & 74.01 & 83.26 & 85.68\tabularnewline \hline \end{tabular}}\caption{ModelNet10 accuracy of the latent variable representation for training an SVM, using a single VAE versus the expected latent variable in our approach.} \label{tbl:modelnet_svm} \end{table} Our experiments with both 2D and 3D images show that the initial convolutional layers play a crucial role in enabling the VAEs to encode the input into a latent space where the mixture of experts performs best. This 3D experiment further illustrates that the decision function mostly depends on the internal structure of the generative model rather than on reconstruction of the pixel values. When we share the parameters of the discriminative model with the generative infinite mixture of VAEs and learn the mixture of experts, we combine various representations of the data for better prediction. \section{Conclusion} In this paper, we employed Bayesian non-parametric methods to propose an infinite mixture of variational autoencoders that can grow to represent the complexity of the input. Furthermore, we used these autoencoders to create a mixture of experts model for semi-supervised learning. On both 2D images and 3D shapes, our approach provides state-of-the-art results on various datasets. We further showed that such mixtures, where each component learns to represent a particular aspect of the data, are able to produce better predictions using fewer total parameters than a single monolithic model. This applies whether the model is generative or discriminative. Moreover, in semi-supervised learning, where the ultimate objective is classification, parameter sharing between discriminative and generative models was shown to provide better prediction accuracy. In future work we plan to extend our approach to use variational inference rather than sampling for better efficiency. In addition, a new variational loss that minimizes the joint probability of the input and output in a Bayesian paradigm may further increase the prediction accuracy when the number of labeled examples is small. \newpage \bibliographystyle{ieee} \section{Mathematical Details of the Infinite Variational Autoencoder } We have the following \begin{eqnarray*} p(\mathbf{c},\boldsymbol{\theta},\mathbf{x}_{1,\ldots,n},\alpha) & = & \int\int p_{\boldsymbol{\theta}}(\mathbf{x}_{1,\ldots,n}|\mathbf{c},\mathbf{z})p(\mathbf{z})p(\mathbf{c}|\boldsymbol{\pi})p(\boldsymbol{\pi}|\alpha)d\boldsymbol{\pi}d\mathbf{z}\\ & = & \int p_{\boldsymbol{\theta}}(\mathbf{x}_{1,\ldots,n}|\mathbf{c},\mathbf{z})p(\mathbf{z})\bigg[\int p(\mathbf{c}|\boldsymbol{\pi})p(\boldsymbol{\pi}|\alpha)d\boldsymbol{\pi}\bigg]d\mathbf{z}\\ & = & \int\left(\bigg[\prod_{i}^{n}p{}_{\boldsymbol{\theta}_{\mathbf{c}_{i}}}(\mathbf{x}_{i}|\mathbf{z}_{\mathbf{c}_{i}})p(\mathbf{z}_{\mathbf{c}_{i}})\bigg]\bigg[\int p(\mathbf{c}|\boldsymbol{\pi})p(\boldsymbol{\pi}|\alpha)d\boldsymbol{\pi}\bigg]\right)d\mathbf{z}_{\mathbf{c}_{i}}.
\end{eqnarray*} To perform inference so that the distributions of the unknown parameters are known, we use blocked \emph{Gibbs sampling}, iterating the following two steps: \begin{enumerate} \item Sample the unknown density of the observations with parameter $\boldsymbol{\theta}$ (we drop $\alpha$ because it is conditionally independent of $\mathbf{x}$): \begin{eqnarray*} \mathbf{x}_{1,\ldots,n},\boldsymbol{\theta} & \sim & p(\mathbf{x}_{1,\ldots,n},\boldsymbol{\theta}|\mathbf{c}) \end{eqnarray*} \item Sample the base VAE assignments: \begin{eqnarray*} \qquad\quad\mathbf{c} & \sim & p(\mathbf{c}|\mathbf{x}_{1,\ldots,n},\boldsymbol{\theta},\alpha) \end{eqnarray*} \end{enumerate} For the first step, we use variational inference to find the joint probability of the input and its parameter $\boldsymbol{\theta}$ \emph{conditioned} on the current assignments. Using standard variational inference, we have \begin{eqnarray*} p(\mathbf{x}_{1,\ldots,n},\boldsymbol{\theta}|\mathbf{c}) & = & \int\prod_{i}p_{\boldsymbol{\theta}_{\mathbf{c}_{i}}}(\mathbf{x}_{i}|\mathbf{z}_{\mathbf{c}_{i}})p(\mathbf{z}_{\mathbf{c}_{i}})d\mathbf{z}_{\mathbf{c}_{i}}\\ & = & \int\prod_{i}\frac{p_{\boldsymbol{\theta}_{\mathbf{c}_{i}}}(\mathbf{x}_{i}|\mathbf{z}_{\mathbf{c}_{i}})p_{\boldsymbol{\theta}}(\mathbf{z}_{\mathbf{c}_{i}})}{q_{\phi}(\mathbf{z}_{\mathbf{c}_{i}}|\mathbf{x}_{i})}q_{\phi}(\mathbf{z}_{\mathbf{c}_{i}}|\mathbf{x}_{i})d\mathbf{z}_{\mathbf{c}_{i}}\\ & = & \int\prod_{i}p_{\boldsymbol{\theta}_{\mathbf{c}_{i}}}(\mathbf{x}_{i}|\mathbf{z}_{\mathbf{c}_{i}})\frac{p_{\boldsymbol{\theta}}(\mathbf{z}_{\mathbf{c}_{i}})}{q_{\phi}(\mathbf{z}_{\mathbf{c}_{i}}|\mathbf{x}_{i})}q_{\phi}(\mathbf{z}_{\mathbf{c}_{i}}|\mathbf{x}_{i})d\mathbf{z}_{\mathbf{c}_{i}} \end{eqnarray*} Taking the $\log$ of both sides and using Jensen's inequality, we have the following lower bound for the joint distribution of the observations conditioned on the latent variable assignments (VAE assignments in the infinite mixture): \begin{eqnarray} \log\left(p(\mathbf{x}_{1,\ldots,n},\boldsymbol{\theta}|\mathbf{c})\right) & \geq & \sum_{i}-\text{KL}\left(q_{\phi}(\mathbf{z}_{\mathbf{c}_{i}}|\mathbf{x}_{i})\|p_{\boldsymbol{\theta}}(\mathbf{z}_{\mathbf{c}_{i}})\right)+\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x}_{i})}[\log p_{\boldsymbol{\theta}_{\mathbf{c}_{i}}}(\mathbf{x}_{i}|\mathbf{z}_{\mathbf{c}_{i}})]\label{eq:vae_loss}\\ & = & \sum_{i}\mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x}_{i})}[\log p_{\boldsymbol{\theta}_{\mathbf{c}_{i}}}(\mathbf{x}_{i},\mathbf{z}_{\mathbf{c}_{i}})-\log q_{\phi}(\mathbf{z}_{\mathbf{c}_{i}}|\mathbf{x}_{i})]\nonumber \end{eqnarray} Here, $\boldsymbol{\theta}$ denotes all the parameters in the decoder network and $\phi$ all the parameters in the encoder. Now, to compute the expectations in both the KL-divergence and the conditional likelihood in the second term, we use sampling with the reparameterization trick. Thus, Equation \ref{eq:vae_loss} is rewritten as \begin{eqnarray*} \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x}_{i})}[\log p_{\boldsymbol{\theta}_{\mathbf{c}_{i}}}(\mathbf{x}_{i},\mathbf{z}_{\mathbf{c}_{i}})-\log q_{\phi}(\mathbf{z}_{\mathbf{c}_{i}}|\mathbf{x}_{i})] & \approx & \frac{1}{L}\sum_{\ell=1}^{L}\log p_{\boldsymbol{\theta}_{\mathbf{c}_{i}}}(\mathbf{x}_{i},\mathbf{z}_{\mathbf{c}_{i}}^{\ell})-\log q_{\phi}(\mathbf{z}_{\mathbf{c}_{i}}^{\ell}|\mathbf{x}_{i}) \end{eqnarray*} where $\mathbf{z}$ is taken from a differentiable function that performs a random transformation of $\mathbf{x}$.
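For instance, with a diagonal Gaussian posterior this transformation is $\mathbf{z}=\mu+\sigma\odot\epsilon$ with $\epsilon\sim\mathcal{N}(0,I)$; a minimal sketch of the resulting $L$-sample estimate is given below, where the callables are illustrative stand-ins for the encoder and decoder densities.
\begin{verbatim}
import numpy as np

def elbo_estimate(x, mu, sigma, log_p_joint, log_q, L=2):
    """Monte-Carlo estimate of E_q[log p(x,z) - log q(z|x)] using the
    reparameterization z = mu + sigma * eps with eps ~ N(0, I)."""
    total = 0.0
    for _ in range(L):
        eps = np.random.randn(*mu.shape)
        z = mu + sigma * eps          # differentiable in mu and sigma
        total += log_p_joint(x, z) - log_q(z, x)
    return total / L
\end{verbatim}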
This differentiable transformation function allows for using stochastic gradient descent in the backpropagation algorithm. We use $L=2$ in our experiments. For sampling the base VAE assignment $\mathbf{c}$, we know that \begin{eqnarray*} p(\mathbf{c}|\mathbf{x}_{1,\ldots,n},\boldsymbol{\theta},\alpha) & = & \int\underbrace{p(\mathbf{c}|\mathbf{x}_{1,\ldots,n},\boldsymbol{\theta},\boldsymbol{\pi})}_{\text{Multinomial distribution}}\underbrace{p(\boldsymbol{\pi}|\alpha)}_{\text{Dirichlet distribution}}d\boldsymbol{\pi} \end{eqnarray*} This integral corresponds to a multinomial distribution with a Dirichlet prior, where the number of components $C$ is a constant. We have \begin{eqnarray*} p(\mathbf{c}_{1}\ldots,\mathbf{c}_{C}|\mathbf{x}_{1,\ldots,n},\boldsymbol{\theta},\boldsymbol{\pi}) & = & \prod_{j}^{C}\boldsymbol{\pi}_{j}^{n_{j}},\qquad\qquad n_{j}=p_{\boldsymbol{\theta}_{j}}(\mathbf{c}_{i}=j|\mathbf{x}_{i})\times\sum_{i=1}^{n}\mathbb{I}[\mathbf{c}_{i}=j], \end{eqnarray*} Using the standard Dirichlet integration, we have \begin{eqnarray*} p(\mathbf{c}_{1}\ldots,\mathbf{c}_{C}|\mathbf{x}_{1,\ldots,n},\boldsymbol{\theta}) & = & \int p(\mathbf{c}_{1}\ldots,\mathbf{c}_{C}|\mathbf{x}_{1,\ldots,n},\boldsymbol{\theta},\boldsymbol{\pi}_{1},\ldots,\boldsymbol{\pi}_{C})p(\boldsymbol{\pi}_{1},\ldots,\boldsymbol{\pi}_{C})d\boldsymbol{\pi}_{1},\ldots,\boldsymbol{\pi}_{C}\\ & = & \frac{\Gamma(\alpha)}{\Gamma(\alpha+n)}\prod_{j=1}^{C}\frac{\Gamma(n_{j}+\alpha/C)}{\Gamma(\alpha/C)} \end{eqnarray*} where we can draw samples from the conditional probabilities as \begin{eqnarray*} p(\mathbf{c}_{i}|\mathbf{c}_{i-1},\mathbf{c}_{i+1}\ldots,\mathbf{c}_{C},\mathbf{x}_{1,\ldots,n},\boldsymbol{\theta}) & = & \frac{\eta_{j}(\mathbf{x}_{i})+\alpha/C}{n-1+\alpha}. \end{eqnarray*} When taking the number of components to infinity, $C\to\infty$, it is easy to see that the result in the paper is obtained. \section{Base Variational Autoencoder's Architecture} The base autoencoder contains the following layers: \begin{enumerate} \item Input layer: its dimensions depend on the type of the input. \item A fully connected layer with $\mathbf{h}$ hidden dimensions. This number of hidden dimensions is what is varied during training, and the infinite mixture in effect uses an unbounded number of hidden dimensions. We use batch normalization at the input of this layer, which, according to our experiments, helps with convergence and performance. The output of the batch normalization is used in $\tanh$ nonlinearity units. The original VAE paper did not use batch normalization. \item The output of the last hidden layer is used in another fully connected layer with linear units to estimate the mean of the density, $\mu$. This is the mean of the Gaussian density of the latent variable $\mathbf{z}$. Since this density is multivariate, we found a latent dimension of $10$ percent of the hidden dimensions to perform best. For the dogs experiment in the paper, we used a $2$-dimensional latent space. \item Similar to the mean layer $\mu$, we have another layer with the same dimensions for estimating the diagonal entries of the density of the latent space, $\sigma$. \item For the decoder, we need to sample from the density of the latent variable $\mathbf{z}$ to compute the reconstruction. We use two samples throughout our experiments to estimate the expected reconstruction, following the steps below for the decoder: \begin{enumerate} \item Sample from the latent multivariate Gaussian distribution with mean $\mu$ and variance $\sigma$.
\item The sample is used in another fully connected layer with $\mathbf{h}$ hidden dimensions. We use batch normalization at the input of this layer too; batch normalization helps with searching for a latent space in lower dimensions. The output of the batch normalization is used in $\tanh$ nonlinearity units. \item The output of the batch-normalized latent space is used in another fully connected layer with sigmoid nonlinearity units to reconstruct the input. \end{enumerate} \end{enumerate} It should be noted that this non-symmetric autoencoder corresponds to the binary VAE described in the original paper (with minor changes that helped with its convergence stability and performance). We found this architecture to perform better than its symmetric alternative for our semi-supervised learning application on CIFAR-10 and MNIST. In evaluating the model to compute the loss, we use the cross-entropy measure for $p(\mathbf{x}|\mathbf{z})$, since the variables are considered binary. For the 3D ModelNet and dogs datasets, we use a symmetric variant that proved to be more effective. In the symmetric version, we changed all the $\tanh$ units to softplus ($\log(1+\exp(x))$) units. The final step 5c is changed to the following: we use the hidden layer $\mathbf{h}$ to feed into two fully connected layers for the mean and variance of the decoder, $\mu_{\text{dec}}$ and $\sigma_{\text{dec}}$. We then sample from this decoding density for the reconstruction of the input. For computing the loss, we just use the log-likelihood of the reconstruction. All the code is implemented in Python using Lasagne\footnote{https://github.com/Lasagne/Lasagne} and Theano\footnote{http://deeplearning.net/software/theano/}. We will release the code along with the dogs dataset. \end{document}
\section{Introduction} The spin structure of bound states is a topic of considerable interest in recent years \cite{Filippone:2001ux}. The so-called ``proton spin crisis'' was generated by the EMC experiment~\cite{EMC}, in which it was found that only a small fraction of the total proton spin is carried by the quark spin. In Ref.~\cite{Ji:1996ek}, a gauge-invariant decomposition of the nucleon angular momentum into the quark spin, quark orbital, and gluon contributions was introduced. The connection between off-forward (or generalized) parton distributions and the quark/gluon angular momenta within the proton has been established. The total angular momentum carried by quarks is measurable through deeply-virtual Compton scattering (DVCS)~\cite{Ji:1996nm}. A large amount of work has also been done in studying the angular momentum carried by the gluons, $\Delta G$; several experiments have been designed to measure this quantity, and interesting physics has been learnt~\cite{data}. While much of the attention has been focused on the relativistic theory, it is illuminating to study angular momentum in the non-relativistic limit of QED/QCD. A clear understanding of the non-relativistic angular momentum structure is useful for many systems, including the hydrogen-like atom~\cite{Eides:2000xc} and heavy quarkonium~\cite{Hoang:2002ae} bound states. The fraction of angular momentum carried by the photon in the hydrogen atom is relevant to the gluon contribution to the spin of the proton. Since the hydrogen atom is a non-relativistic bound state, the naive perturbation theory of relativistic QED is not applicable, because multiple energy scales are present. For the ground state, the electron has orbital velocity $v \sim \alpha_{\rm _{EM}}$, where $\alpha_{\rm _{EM}}\approx 1/137$ is the electromagnetic fine-structure constant; therefore the loop expansion in $\alpha_{\rm _{EM}}/v$ does not converge~\cite{weinberg}. As another example, in the production of electron-positron pairs near threshold, the interactions are always accompanied by the famous Sommerfeld enhancement proportional to $\alpha_{\rm _{EM}}/v$. Calculating these corrections within the relativistic theory is difficult analytically, usually involving numerical solution of the Bethe-Salpeter equations. The effective field theory approach sheds new light on the problems described above. In recent years, effective theories, including non-relativistic QED (NRQED)~\cite{Caswell:1985ui} and non-relativistic QCD (NRQCD), have become standard tools for solving non-relativistic bound states. NRQED has been used to calculate the hyperfine splitting and Lamb shift of the hydrogen system with considerable simplification~\cite{Pineda:1997ie, Kinoshita:1995mt, Luke:1999kz}. NRQCD has been used in analyzing inclusive heavy quarkonium production at colliders~\cite{Beneke:1997zp} and in precision bound-state calculations on the lattice. Observing that the Coulomb interactions exchange momentum at a scale much lower than the fermion mass $m$, one can integrate out the large momentum scale, resulting in higher-dimensional effective operators suppressed by powers of ${p}/{m}\ll1$. In effective theories, relativistic and radiative corrections to bound-state problems can be systematically expanded in terms of ${p}/{m}$ and $\alpha_{\rm _{EM}}$. The non-relativistic bound-state wavefunction properly sums up all-order corrections due to Coulomb photon exchange.
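To see this counting explicitly (a standard power-counting estimate, quoted for orientation rather than derived here): each additional Coulomb exchange in a ladder diagram contributes a factor of order \begin{equation*} \alpha_{\rm _{EM}}\,\frac{m}{|\mathbf{p}|}\;\sim\;\frac{\alpha_{\rm _{EM}}}{v}\,, \end{equation*} and for bound-state momenta $|\mathbf{p}|\sim m\,\alpha_{\rm _{EM}}$ this ratio is of order one, so ladder diagrams of all orders contribute with equal strength and must be summed; this is exactly what the Coulomb wavefunction accomplishes.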
Therefore, NRQED, as an effective field theory, is capable of describing the hydrogen atom up to any definite order with the desired precision, in a language familiar from quantum mechanics. In the non-relativistic limit, the orbital angular momentum $\vec{L}$ and spin $\vec{S}$ of the electron are both conserved and can be used to classify the energy eigenstates. Moreover, the contribution from the gauge potential in the orbital part is proportional to the velocity and hence negligible. $\vec{L}$ is then the usual non-relativistic angular momentum. When taking into account the relativistic effects and quantum corrections, the angular momentum operator becomes more complicated. For example, for the relativistic Dirac Hamiltonian, neither $\vec{L}$ nor $\vec{S}$ remains a good quantum number, due to corrections of order $v^2/c^2$. These are relativistic corrections starting at order $\mathcal{O}(\alpha_{\rm _{EM}}^2)$. Radiative corrections will further contribute at order $\mathcal{O}(\alpha_{\rm _{EM}}^3)$~\cite{Chen:2009rw}. In this paper, we extend the discussions in~\cite{Chen:2009rw} and present relevant details of deriving the effective angular momentum operator in NRQED. For this purpose, we will construct a set of gauge-invariant effective operators in the non-relativistic theory. The matrix elements of the spin (orbital) angular momentum for the electron and the gauge fields are calculated in full QED and matched to the effective theory up to order ${\alpha_{\rm _{EM}}}/{m^2}$. The main results of this paper are Eqs.~(\ref{spin})--(\ref{gamma}) and Eqs.~(\ref{WilsonBilinear}), (\ref{WilsonGauge}), (\ref{cutoff_Lag}). Using the power counting in NRQED~\cite{Luke:1999kz, Bodwin:1994jh}, we have applied the one-loop matching result to calculate the angular momentum carried by radiative photons in the hydrogen atom up to $\mathcal{O}(\alpha_{\rm _{EM}}^3)$ in Ref.~\cite{Chen:2009rw}. The further extension to the angular momentum decomposition in QCD/NRQCD will be presented elsewhere. The NRQED Lagrangian was first given in Ref.~\cite{Caswell:1985ui}; in Ref.~\cite{Bodwin:1994jh}, a power-counting rule was established. \begin{eqnarray} \label{NRQED Lagrangian} \mathcal{L}_{\rm NRQED} & = & \psi^\dag\left\{iD^0 + c_2\frac{\mathbf{D}^2}{2m} + c_4\frac{\mathbf{D}^4}{8m^3} + \frac{e\,c_F}{2m}\boldgreek{\sigma}\cdot\mathbf{B} + \frac{e\,c_D}{8m^2} [\boldgreek{\nabla}\cdot\mathbf{E}] + \frac{ie\,c_S}{8m^2}\boldgreek{\sigma}\cdot(\mathbf{D}\times\mathbf{E}-\mathbf{E}\times\mathbf{D}) \right\}\psi \nonumber\\ & & - \frac{d_1}{4} F_{\mu\nu}F^{\mu\nu} + \frac{d_2}{m^2} F_{\mu\nu}\square F^{\mu\nu} + \mathcal{L}_{\rm 4-Fermi} + \cdots \ , \end{eqnarray} where $\mathcal{L}_{\rm 4-Fermi}$ represents all the four-fermion terms in the Lagrangian and will not be used in this paper. The Wilson coefficients in the effective Lagrangian are well known~\cite{Manohar:1997qy}. Up to order $\mathcal{O}(\alpha_{\rm _{EM}})$, \begin{eqnarray} &&c_2 = c_4 = 1;\;\; c_F = 1 + \frac{\alpha_{\rm _{EM}}}{2\pi};\;\; c_D = 1 - \frac{4\alpha_{\rm _{EM}}}{3\pi} \ln\frac{\mu^2}{m^2} ;\;\; c_S = 1 + \frac{\alpha_{\rm _{EM}}}{\pi};\nonumber\\ &&d_1 = 1 + \frac{\alpha_{\rm _{EM}}}{3\pi}\ln\frac{\mu^2}{m^2};\;\; d_2 = \frac{\alpha_{\rm _{EM}}}{60\pi}\ . \end{eqnarray} In the calculation, we will use the ``old-fashioned'' NRQED~\cite{holstein}, which is equivalent to the intermediate effective theory in pNRQED~\cite{Brambilla:1999xf}.
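As a rough orientation for the size of the various terms (standard NRQED power counting for the hydrogen ground state, with $|\mathbf{p}|\sim m\,\alpha_{\rm _{EM}}$), \begin{equation*} \Big\langle\frac{\mathbf{D}^{2}}{2m}\Big\rangle\sim\frac{\mathbf{p}^{2}}{m}\sim m\,\alpha_{\rm _{EM}}^{2}\,,\qquad \Big\langle\frac{\mathbf{D}^{4}}{8m^{3}}\Big\rangle\sim\frac{\mathbf{p}^{4}}{m^{3}}\sim m\,\alpha_{\rm _{EM}}^{4}\,, \end{equation*} so the $1/m^2$-suppressed operators first contribute at relative order $\alpha_{\rm _{EM}}^{2}$ (relativistic corrections), while the $\mathcal{O}(\alpha_{\rm _{EM}})$ parts of the Wilson coefficients push their effects to relative order $\alpha_{\rm _{EM}}^{3}$ (radiative corrections), consistent with the counting quoted above.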
In performing the matching, the $1/m$ expansion of the matrix elements (and likewise of the effective operators), as described in Ref.~\cite{Manohar:1997qy}, is understood. It is proved in Ref.~\cite{Grinstein:1997gv} that this agrees with the multi-pole expansion method in Ref.~\cite{Labelle:1996en}. More explanations will follow along with the detailed calculations. \section{Angular Momentum Operators in QED and NRQED and Matching Conditions} The question we are interested in is how the total angular momentum is distributed among various components, e.g., that carried by the electron and the photon, respectively. The effective non-relativistic angular momentum operators are useful in studying many non-relativistic bound-state systems, including the hydrogen-like atoms and heavy quarkonia. The purpose of the following sections will be the construction of gauge-invariant angular momentum operators in NRQED. We start by reviewing the angular momentum operator in full QED. We then use the Foldy-Wouthuysen (FW) transformation to get the corresponding NRQED operators at tree level. We then use the Noether current method to derive the total NRQED angular momentum operator from the effective NRQED Lagrangian. This method is accurate to all loops. However, it does not give us the results for the individual relativistic angular momentum components. Therefore, we write down the Wilson expansion of these angular momentum components in terms of the non-relativistic angular momentum operators. In the following sections, we determine the Wilson coefficients using the matching method. \subsection{Angular Momentum in Full QED} It is straightforward to derive the total angular momentum operator from the QED Lagrangian using the Noether current method. Starting from \begin{equation} \mathcal{L}_{\rm QED} = -\frac{1}{4} F^{\mu\nu}F_{\mu\nu} + \bar{\Psi}(i\slashed{D} - m)\Psi \ , \end{equation} the conserved QED angular momentum is~\cite{Jaffe:1989jz} \begin{eqnarray} \label{QED_J} \mathbf{J}_{\rm QED} & = & \int d^3x \left\{\Psi^\dag \frac{\boldgreek{\Sigma}}{2}\Psi + \Psi^\dag (\mathbf{x} \times \boldgreek{\pi})\Psi + \mathbf{x}\times (\mathbf{E}\times \mathbf{B})\right\} \equiv \mathbf{S}_q+\mathbf{L}_q+\mathbf{J}_\gamma \ , \end{eqnarray} in which \begin{eqnarray} \label{QED_Op} \mathbf{S}_q & \equiv & \frac{1}{2}\int d^3x \Psi^\dag \boldgreek{\Sigma} \Psi \ , \nonumber\\ \mathbf{L}_q & \equiv & \int d^3x \Psi^\dag (\mathbf{x} \times \boldgreek{\pi}) \Psi \ , \nonumber\\ \mathbf{J}_\gamma & \equiv & \int d^3x \;\mathbf{x}\times (\mathbf{E}\times \mathbf{B}) \ . \end{eqnarray} The individual relativistic operators $\mathbf{S}_q, \mathbf{L}_q, \mathbf{J}_\gamma$ are gauge invariant and are regarded as the electron spin, electron orbital angular momentum and photon angular momentum, respectively. Alternative separations can be achieved with particular gauge and frame choices. Our goal is to construct the non-relativistic counterparts to $\mathbf{S}_q, \mathbf{L}_q,$ and $\mathbf{J}_\gamma$. The result within NRQED is gauge invariant, with a manifest separation of high- and low-scale physics. \subsection{Non-Relativistic Reduction: Foldy-Wouthuysen Transformation} In this subsection, we apply the Foldy-Wouthuysen (FW) transformation to the QED operators in Eq.~(\ref{QED_Op}). We shall keep in mind that the FW transformation does not yield any information about radiative corrections, due to its quantum mechanical nature.
Consequently, one can obtain only the leading-order Wilson coefficients by transforming the relativistic operators $\mathbf{S}_q$, $\mathbf{L}_q$, and $\mathbf{J}_\gamma$, respectively. Any Wilson coefficient starting from order $\mathcal{O}(\alpha_{\rm _{EM}})$ or higher would not show up in the transformation. Furthermore, the FW transformation only rotates the electron wavefunction, leaving the photon sector $\mathbf{J}_\gamma$ unchanged. Here is a quick review of the FW transformation. The Dirac Hamiltonian describes a relativistic fermion and contains both positive and negative frequencies. At low energies, the negative-frequency part (the positron) is subdominant. For an electron state, the physics is dominated by the interactions of the electron itself, and the correction from the positron is suppressed by the mass. The latter is reflected by the off-block-diagonal elements in the Hamiltonian. The FW transformation utilizes a unitary transformation to block-diagonalize the Dirac Hamiltonian up to order $\mathcal{O}(m^{-2})$. The unitary transformation matrix is \begin{equation} \label{FoldyU} U = \exp\left[\frac{\beta \boldgreek{\alpha} \cdot \boldgreek{\pi}}{2m}\right] = \exp\left[\frac{\boldgreek{\gamma} \cdot \boldgreek{\pi}}{2m}\right] \ , \end{equation} where in the Dirac representation \begin{equation} \beta = \left[\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right], \, \, \, \, \, \boldgreek{\alpha} = \left[\begin{array}{cc} 0 & \boldgreek{\sigma} \\ \boldgreek{\sigma} & 0 \end{array}\right] \ . \end{equation} One can use $U$ to eliminate the lower components in a four-component Dirac spinor of the electron up to order $\mathcal{O}(m^{-3})$, and a quantum-mechanical operator $\hat{\mathcal{O}}$ transforms accordingly \begin{eqnarray} \label{FoldyO} u(\mathbf{p}) & \to & U \cdot u(\mathbf{p}) = {u_h(\mathbf{p}) \choose 0} + \mathcal{O}(m^{-3}) \ ,\nonumber\\ \hat{\mathcal{O}} & \to & U \cdot \hat{\mathcal{O}} \cdot U^\dag \ .
\end{eqnarray} The operators in Eq.~(\ref{QED_Op}) take the following form under the FW transformation \begin{eqnarray} \label{FW} \int d^3 \mathbf{x}\ \Psi^\dag \frac{\boldgreek{\Sigma}}{2} \Psi & \rightarrow & \int d^3 \mathbf{x} \psi^\dag\left\{ \frac{\boldgreek{\sigma}}{2} + \frac{1}{8m^2} \left[ (\boldgreek{\sigma} \times \boldgreek{\pi})\times \boldgreek{\pi} - \boldgreek{\pi} \times (\boldgreek{\sigma} \times \boldgreek{\pi}) \right] + \frac{e\mathbf{B}}{4m^2} \right\}\psi \ , \nonumber \\ \int d^3 \mathbf{x}\ \Psi^\dag \mathbf{x} \times \boldgreek{\pi} \Psi & \rightarrow & \int d^3 \mathbf{x} \psi^\dag\left\{ \mathbf{x} \times \boldgreek{\pi} - \frac{1}{8m^2} \left[ (\boldgreek{\sigma} \times \boldgreek{\pi})\times \boldgreek{\pi} - \boldgreek{\pi} \times (\boldgreek{\sigma} \times \boldgreek{\pi}) \right] - \frac{e\mathbf{B}}{4m^2} \right\}\psi \nonumber \\ & & + \int d^3 \mathbf{x} \frac{e}{8m^2} \left\{ \mathbf{x} \times [\mathbf{B} \times \boldgreek{\nabla}(\psi^\dag \psi)] + \mathbf{x} \times \psi^\dag \left[\mathbf{B} \times(\boldgreek{\sigma} \times \overleftrightarrow{\boldgreek{\pi}}) \right]\psi \right\} \ , \nonumber \\ \int d^3x \;\mathbf{x}\times (\mathbf{E}\times \mathbf{B}) & \rightarrow & \int d^3x \;\mathbf{x}\times (\mathbf{E}\times \mathbf{B}) \ , \end{eqnarray} where $\psi$ is the non-relativistic electron field and $\overleftrightarrow{\boldgreek{\pi}}$ is defined as: \begin{eqnarray} \psi^\dag_{p'}\overleftrightarrow{\boldgreek{\pi}}\psi_{p} & \equiv & \psi^\dag_{p'} (\boldgreek{\pi}\psi_{p}) + \big(\boldgreek{\pi}\psi_{p'}\big)^\dag \psi_{p} = \psi^\dag_{p'} \left( \mathbf{p} + \mathbf{p}' -2e\mathbf{A}\right) \psi_{p} \ . \end{eqnarray} In the above results, the next-to-leading-order relativistic corrections have been properly taken into account. In Ref.~\cite{Chen:2009rw}, it has been shown that the relativistic correction to the orbital angular momentum of the ground-state hydrogen atom can be calculated with the above leading-order operators. As we will see in the following section, these results are also consistent with those derived from matching between QED and NRQED at order $\mathcal{O}(\alpha_{_{\rm EM}}/m^2)$. \subsection{Total Angular Momentum from NRQED Lagrangian} In this subsection, we will first derive $\mathbf{J}_{\rm NRQED}$ using the Noether current method. The NRQED Lagrangian contains a second-order-derivative term $\frac{d_2}{m^2} F_{\mu\nu}\square F^{\mu\nu}$ and therefore calls for special treatment. In Appendix A, we present the generalized formalism to derive the equation of motion and the conserved currents, including the energy-momentum current $T^{\mu\nu}$ and the angular momentum current $\mathcal{M}^{\mu\nu\lambda}$, from a general Lagrangian $\mathcal{L}=\mathcal{L}(x, \phi, \partial_\mu\phi, \partial_\mu\partial_\nu\phi)$. For NRQED, the equation of motion with respect to the $A^0$ field is \begin{eqnarray} -e\psi^\dagger\psi - \frac{e\,c_D}{8m^2}\boldgreek{\nabla}^2(\psi^\dag \psi) + \frac{ie\,c_S}{4m^2}\boldgreek{\nabla} \cdot \left[\psi^\dag \left(\boldgreek{\sigma}\times\mathbf{D}\right)\psi\right] + d_1 \partial_\mu F^{\mu 0} - \frac{4d_2}{m^2}\square\left(\partial_\mu F^{\mu 0}\right) = 0 .
\label{EoM} \end{eqnarray} and the total angular momentum is \begin{eqnarray} \label{NRQED_J} \mathbf{J}_{\rm NRQED} & = & \int d^3 \mathbf{x} \left\{ \psi^\dag \left(\frac{\boldgreek{\sigma}}{2}\right) \psi + \psi^\dag \left(\mathbf{x} \times \boldgreek{\pi}\right) \psi + \frac{e\,c_D}{8m^2} \mathbf{x} \times [\mathbf{B} \times \boldgreek{\nabla}(\psi^\dag \psi)] \rule{0cm}{6mm}\right. \nonumber\\ &+&\frac{e\,c_S}{8m^2} \mathbf{x} \times \psi^\dag \left[\mathbf{B} \times(\boldgreek{\sigma} \times \overleftrightarrow{\boldgreek{\pi}}) \right]\psi + d_1\,\mathbf{x} \times \left( \mathbf{E} \times \mathbf{B} \right) \nonumber\\ &+&\left.\left(-\frac{4d_2}{m^2}\right)\left[ - \mathbf{x} \times ( \square \mathbf{E} \times \mathbf{B} ) + \dot{\mathbf{E}}^a (\mathbf{x}\times \boldgreek{\nabla}) \mathbf{E}^a - \dot{\mathbf{B}}^a(\mathbf{x}\times \boldgreek{\nabla}) \mathbf{B}^a + \dot{\mathbf{E}}\times\mathbf{E} - \dot{\mathbf{B}}\times\mathbf{B}\right] \right\} \nonumber\\ &+& \cdots \ . \end{eqnarray} We will not consider four-fermion operators in this paper. The total angular momentum operator $\mathbf{J}_{\rm NRQED}$ in NRQED is manifestly gauge invariant. However, it is difficult to see a clear correspondence with the individual relativistic operators. Therefore, we shall use the matching method to derive the non-relativistic expansion of the relativistic operators. \subsection{Relativistic Angular Momentum Operator in NRQED} We observe that the effective Lagrangian in Eq.~(\ref{NRQED Lagrangian}) still possesses 3D rotational symmetry and therefore implies the existence of a conserved total angular momentum $\mathbf{J}_{\rm NRQED}$~\footnote[1]{The authors of Ref.~\cite{Brambilla:2003nt} have noticed this as well, and derived the angular momentum in NRQED up to order $\mathcal{O}\left(\alpha_{\rm _{EM}}^0/m^0\right)$.}. In this paper, we will derive the non-relativistic effective angular momentum operators up to order $\mathcal{O}\left(\alpha_{\rm _{EM}}/m^2\right)$, through order-by-order matching to the full QED. The general form of the angular momentum operators in NRQED can be constructed as follows. First, we write down all the axial-vector Hermitian operators respecting gauge and 3D rotational invariance. Second, we require that these operators contain no time derivatives acting on the fermion fields. The removal of $D^0$ can be done by field redefinition, as shown in~\cite{Manohar:1997qy}. Any operator containing $D^0$ can be rewritten as a linear combination of other operators by the equation of motion, without changing the on-shell matrix elements. Third, there will be non-local operators in $\mathbf{L}_q$ and $\mathbf{J}_\gamma$, containing the space coordinate $\mathbf x$ explicitly. Here we write down the general form of the effective operators, with the Wilson coefficients to be determined.
For electron spin, \begin{eqnarray}\label{spin} \mathbf{S}_q \rightarrow \mathbf{S}_q^{\rm eff} & = & \int d^3x\left\{\rule{0cm}{7mm}a_\sigma\psi^\dag\frac{\boldgreek{\sigma}}{2}\psi + \frac{a_\pi}{8m^2} \psi^\dag \left[(\boldgreek{\sigma}\times\boldgreek{\pi}) \times \boldgreek{\pi}-\boldgreek{\pi} \times (\boldgreek{\sigma}\times\boldgreek{\pi})\right]\psi + \frac{e\,a_B}{4m^2}\psi^\dag\mathbf{B}\psi \rule{0cm}{7mm}\right\} \nonumber\\ & & +\int d^3x \left\{\frac{a_{\gamma_1}}{m^2}\left[\dot{\mathbf{E}}\times\mathbf{E} - \dot{\mathbf{B}}\times\mathbf{B}\right] + \frac{a_{\gamma_2}}{m^2} \left[(\boldgreek{\nabla}\cdot \mathbf{E})\mathbf{B} - (\mathbf{B}\cdot \boldgreek{\nabla}) \mathbf{E} \rule{0cm}{4mm}\right] \rule{0cm}{7mm}\right\} + \mathcal{O}(m^{-3}) \ , \nonumber \\ \end{eqnarray} for electron orbital momentum, \begin{eqnarray}\label{orbit} \mathbf{L}_q \rightarrow \mathbf{L}_q^{\rm eff} & = & \int d^3x\left\{\rule{0cm}{7mm} d_\sigma\psi^\dag\frac{\boldgreek{\sigma}}{2}\psi + \frac{d_\pi}{8m^2} \psi^\dag \left[(\boldgreek{\sigma}\times\boldgreek{\pi}) \times \boldgreek{\pi}-\boldgreek{\pi} \times (\boldgreek{\sigma}\times\boldgreek{\pi})\right]\psi \right.\nonumber\\ & & + d_R \psi^\dag(\mathbf{x}\times \boldgreek{\pi})\psi + \frac{e\,d_B}{4m^2}\psi^\dag\mathbf{B}\psi + \frac{e\,d_D}{8m^2} \mathbf{x}\times \left[\mathbf{B}\times \boldgreek{\nabla}(\psi^\dag\psi)\right]\nonumber\\ & & + \frac{e\,d_S}{8m^2}{\mathbf x}\times \psi^\dag \left[\mathbf{B} \times(\boldgreek{\sigma} \times \overleftrightarrow{\boldgreek{\pi}}) \right]\psi + \frac{e\,d'_S}{8m^2}{\mathbf x}\times \psi^\dag \left[\boldgreek{\sigma}\times (\mathbf{B} \times \overleftrightarrow{\boldgreek{\pi}}) \right]\psi\nonumber\\ & & \left.+ \frac{e\,d_E}{4m}\mathbf{x} \times\psi^\dag\left(\boldgreek{\sigma} \times \mathbf {E}\right)\psi \rule{0cm}{7mm}\right\} + \int d^3x \left\{\rule{0cm}{7mm}d_\gamma\, \mathbf{x}\times(\mathbf{E}\times\mathbf{B}) + \frac{d_{\gamma_1}}{m^2}\left[\dot{\mathbf{E}}\times\mathbf{E} - \dot{\mathbf{B}}\times\mathbf{B}\right]\right.\nonumber\\ & &+ \frac{d_{\gamma_2}}{m^2} \left[(\boldgreek{\nabla}\cdot \mathbf{E})\mathbf{B} - (\mathbf{B}\cdot \boldgreek{\nabla}) \mathbf{E} \rule{0cm}{4mm}\right] + \frac{d_{\gamma_3}}{m^2} \left[ - \mathbf{x} \times ( \square \mathbf{E} \times \mathbf{B} ) \rule{0cm}{4mm}\right] \nonumber\\ & & \left.+ \frac{d_{\gamma_4}}{m^2}\left[\dot{\mathbf{E}}^a (\mathbf{x}\times \boldgreek{\nabla}) \mathbf{E}^a - \dot{\mathbf{B}}^a (\mathbf{x}\times \boldgreek{\nabla}) \mathbf{B}^a\right] \rule{0cm}{7mm}\right\} +\mathcal{O}(m^{-3}) \ , \end{eqnarray} and for photon angular momentum operator, \begin{eqnarray}\label{gamma} \mathbf{J}_\gamma \rightarrow \mathbf{J}_\gamma^{\rm eff} & = & \int d^3x\left\{\rule{0cm}{7mm} f_\sigma\psi^\dag\frac{\boldgreek{\sigma}}{2}\psi + \frac{f_\pi}{8m^2} \psi^\dag \left[(\boldgreek{\sigma}\times\boldgreek{\pi}) \times \boldgreek{\pi}-\boldgreek{\pi} \times (\boldgreek{\sigma}\times\boldgreek{\pi})\right]\psi \right.\nonumber\\ & & + f_R \psi^\dag({\mathbf x\times \boldgreek{\pi}})\psi + \frac{e\,f_B}{4m^2}\psi^\dag \mathbf{B} \psi + \frac{e\,f_D}{8m^2} {\mathbf x}\times \left[\mathbf B \times \boldgreek{\nabla}(\psi^\dag\psi)\right] \nonumber\\ & & + \frac{e\,f_S}{8m^2}{\mathbf x}\times \psi^\dag \left[\mathbf{B} \times(\boldgreek{\sigma} \times \overleftrightarrow{\boldgreek{\pi}}) \right]\psi + \frac{e\,f'_S}{8m^2}{\mathbf x}\times \psi^\dag \left[\boldgreek{\sigma}\times (\mathbf{B} \times \overleftrightarrow{\boldgreek{\pi}}) 
\right]\psi\nonumber\\ & & \left. + \frac{e\,f_E}{4m}\mathbf{x} \times\psi^\dag\left(\boldgreek{\sigma} \times \mathbf {E}\right)\psi \rule{0cm}{7mm}\right\} + \int d^3xf_\gamma\mathbf{x}\times(\mathbf{E}\times\mathbf{B})+ \mathcal{O}(m^{-3}) \ . \end{eqnarray} In what follows, we will perform the matching of various matrix elements between QED and NRQED, to determine all the Wilson coefficients in Eqs.~(\ref{spin})--(\ref{gamma}). The matching condition is \begin{equation} \label{MatchingCondition} \langle\mathbf{J}(\mu)\rangle_{\rm QED} = \langle\mathbf{J}^{\rm eff}(\mu)\rangle_{\rm NRQED}\ . \end{equation} Although the primary application of our NRQED effective operators involves bound states, we are not obliged to use a bound state to do the matching. For local operators, which do not contain the space coordinate $\mathbf{x}$ explicitly, we shall calculate two- and three-body matrix elements between plane-wave external states. On the other hand, when calculating the matrix elements of non-local operators, such as $\mathbf{L}_q$ and $\mathbf{J}_\gamma$, we have to replace the simple plane waves by general wave packets. An important remark is in order. The total angular momentum operator in Eq.~(\ref{NRQED_J}) originates from the underlying $SO(3)$ rotational symmetry of the NRQED Lagrangian. It has nothing to do with the perturbative expansion and must be exact. The gauge-invariant effective operators, as will be derived from the matching to the full theory, must agree with Eq.~(\ref{NRQED_J}) in the sum. This fact imposes the constraint $\mathbf{S}_q^{\rm eff} + \mathbf{L}_q^{\rm eff} + \mathbf{J}_\gamma^{\rm eff} = \mathbf{J}_{\rm NRQED}$, which in turn leads to the sum rules among the above Wilson coefficients, as given explicitly in Ref.~\cite{Chen:2009rw}. \section{Two-Body Matching Through Single-Electron States} In this section, we consider matching the relativistic and NR angular momentum operators through single-electron states. This step alone cannot determine all the Wilson coefficients in Eqs.~(\ref{spin})--(\ref{gamma}). However, the calculation is useful to illustrate the general procedure, and it shows the simple physical concepts behind the matching. The result in this section is complementary to the three-body matching in the next one. We calculate the forward matrix elements of the angular momentum operators: $\langle e_{p}| \mathcal{O}_J|e_{p}\rangle \equiv \langle\mathcal{O}_J\rangle_{p,p}$, with $\mathcal{O}_J$ representing one of the relativistic operators $\mathbf{S}_q$, $\mathbf{L}_q$ and $\mathbf{J}_\gamma$, and $|e_p \rangle$ denoting a free electron state with momentum $p$. We will perform the non-relativistic reduction of their matrix elements in QED, and match them with those in NRQED. The spin operator $\mathbf{S}_q$ is local and its contribution can be easily calculated in QED. At tree level, we have \begin{equation} \langle \mathbf{S}_q \rangle_{p,p} = \frac{1}{2} \bar u(p) \boldgreek{\Sigma} u(p) , \end{equation} where $u(p)$ is the usual Dirac spinor. The calculation of $\langle\mathbf{L}_q\rangle$ and $\langle\mathbf{J}_\gamma\rangle$ is more involved. The explicit presence of the space coordinate $\mathbf{x}$ in the operator obscures the definition of the forward matrix element. Before going into explicit calculations, we address a basic question: what is the angular momentum of a free electron? The notion of the total angular momentum of a plane wave is ill-defined. To see this, we first consider an off-forward matrix element of the operator $\mathbf{L}_q$.
After some algebra, \begin{eqnarray} \langle \mathbf{L}_q \rangle_{p',p} & = & \int d^3x \langle \mathbf{p}'| \Psi^\dag (\mathbf{x}\times\boldgreek{\pi} ) \Psi |\mathbf{p}\rangle\nonumber\\ & \equiv & \int d^3x e^{-i\mathbf{x}\cdot(\mathbf{p'}-\mathbf{p})}\mathbf{x} \times \mathbf{f}(p', p)\nonumber\\ & = & (2\pi)^3[i\boldgreek{\nabla}_{p'}\delta^3(\mathbf{p}-\mathbf{p}')] \times \mathbf{f}(p', p) \ , \end{eqnarray} where $\mathbf{f}(p',p)=\bar u(p') \mathbf{p} u(p)$ at tree level. The derivative on the delta function is singular if one takes $\mathbf{p}=\mathbf{p}'$. A more realistic question would be to calculate the angular momentum of a distribution, or a {\it wave packet} $|\Phi \rangle \equiv \int \frac{d^3 \mathbf{p'}}{(2\pi)^3} \Phi(\mathbf{p'})|\mathbf{p'}\rangle$. The result will depend not only on the intrinsic properties of the electron itself, but also on the shape of the wave packet. Such dependence arises because the angular momentum operator is a non-local operator. In momentum space, $\mathbf{x}$ can be Fourier transformed into the derivative over the conjugate momentum $\boldgreek{\nabla}_{p'}$. After integration by parts, the contribution to the total angular momentum is therefore separated into two parts: the derivative on the matrix element $\mathbf{f}(p',p)$ and that on the wave packet $\Phi(\mathbf{p'})$, \begin{eqnarray}\label{23} \langle \mathbf{L}_q \rangle_{\Phi} & = & \int d^3p \Phi^*(\mathbf{p}) \Phi(\mathbf{p})\left[i\boldgreek{\nabla}_{p'} \times \mathbf{f}(p', p)\rule{0mm}{4mm}\right]_{p'=p} + \frac{1}{2}\int d^3p \boldgreek{\nabla}_p\left[\Phi^*(\mathbf{p}) \Phi(\mathbf{p})\rule{0mm}{4mm}\right]\times \mathbf{f}(p,p). \end{eqnarray} In the first term, $[i\boldgreek{\nabla}_{p'} \times \mathbf{f}(p', p)]_{p'=p}$ is the orbital angular momentum density intrinsic to the electron and contributes in the same manner as $\langle\mathbf{S}_q\rangle$. The second term is precisely the angular momentum carried by the wave packet. Similar terms exist for the matrix elements of $\mathbf{J}_\gamma$, as well as for any non-local effective operator. In the following, we will denote the class of contributions like the first term in Eq.~(\ref{23}) as the {\it local} contribution, and the second term as the {\it non-local} contribution for a matrix element between general wave packets. For local operators like $\mathbf{S}_q$, only the local contribution is present. For non-local operators, such as $\mathbf{L}_q$ and $\mathbf{J}_\gamma$, both the local and non-local parts of the matrix elements from the full theory and the effective theory should match independently. This will provide us with enough information to obtain the relevant Wilson coefficients. For the non-relativistic reduction, we use the following convention for the four-component spinor \begin{equation}\label{nr} u(p) = \sqrt{\frac{p^0+m}{2p^0}} {u_h \choose \frac{\mathbf{p} \cdot{\boldgreek{\sigma}}}{p^0+m}u_h} \ , \end{equation} where $u_h = \sqrt{2p^0}{1\choose0}$ or $\sqrt{2p^0}{0\choose1}$ and $p^0=\sqrt{\mathbf{p}^2+m^2}$. We can then rewrite the full theory matrix elements in terms of $u_h$ and compare them with the effective theory.
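As a quick consistency check of the convention in Eq.~(\ref{nr}) (a short verification added here for clarity, using only $(\boldgreek{\sigma}\cdot\mathbf{p})^2=\mathbf{p}^2$), note that the prefactor preserves the relativistic normalization exactly, \begin{equation} u^\dag(p)\,u(p) = \frac{p^0+m}{2p^0}\left[1+\frac{\mathbf{p}^2}{(p^0+m)^2}\right]u_h^\dag u_h = \frac{(p^0+m)^2+(p^0)^2-m^2}{2p^0\,(p^0+m)}\,u_h^\dag u_h = u_h^\dag u_h = 2p^0 \ , \end{equation} so no normalization factors are lost in passing from the four-component $u(p)$ to the two-component $u_h$.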
At tree level, the non-relativistic reductions of the QED matrix elements are \begin{eqnarray} \label{2bodytree} &&\langle \mathbf{S}_q \rangle^{(0)}_{\rm local} = u_h^\dag \frac{\boldgreek{\sigma}}{2} u_h + \frac{1}{4 m^2} u_h^\dag \left[ (\boldgreek{\sigma} \times \mathbf{p}) \times \mathbf{p} \right] u_h + \mathcal{O}(m^{-3}) \nonumber \\ &&\langle \mathbf{L}_q \rangle^{(0)}_{\rm local} = - \frac{1}{4 m^2} u_h^\dag \left[ (\boldgreek{\sigma} \times \mathbf{p}) \times \mathbf{p} \right] u_h + \mathcal{O}(m^{-3}) \nonumber\\ &&\mathbf{f}(p,p)^{(0)}_{L_q,\rm non-local} = u_h^\dag\mathbf{p} u_h \ . \end{eqnarray} This simple result can be used to examine the FW-transformed $\mathbf{S}_q$ and $\mathbf{L}_q$. By calculating the matrix elements of the operators in Eq.~(\ref{FW}) between two non-relativistic spinor wave packets, we find that they indeed agree with Eq.~(\ref{2bodytree}). Therefore, we conclude \begin{equation} a_\sigma^{(0)} = a_\pi^{(0)} = -d_\pi^{(0)} = d_R^{(0)} = 1 \ , \end{equation} at the leading order. \begin{figure}[hbt] \begin{center} \includegraphics[width=4cm]{QED_2body_OneLoop_a.eps} \hspace{.5cm} \includegraphics[width=4cm]{QED_2body_OneLoop_b.eps} \hspace{.5cm} \includegraphics[width=4cm]{QED_2body_OneLoop_c.eps} \caption{One-loop corrections to $\langle e|\mathbf{J}|e\rangle$ in QED. Here $\otimes$ can be either $\mathbf{L}_q$, $\mathbf{S}_q$, or $\mathbf{J}_\gamma$: (a) for $\mathbf{S}_q$; (a)(b) for $\mathbf{L}_q$; (c) for $\mathbf{J}_\gamma$. Wave function renormalization diagrams, mass counterterms and the mirror diagrams are not shown explicitly. } \label{QED_2body} \end{center} \end{figure} The one-loop Feynman diagrams for the electron two-body matrix elements of $\mathbf{S}_q$, $\mathbf{L}_q$ and $\mathbf{J}_\gamma$ are shown in Fig.~\ref{QED_2body}. We use dimensional regularization (DR) for both infrared (IR) and ultraviolet (UV) divergences. The {\it local} contributions in the matrix elements are \begin{eqnarray} \label{2body1loopSpin} \langle \mathbf{S}_q \rangle^{(1)}_{\rm local} & = & \frac{\alpha_{\rm _{EM}}}{2 \pi} u_h^\dag\left\{ \frac{\boldgreek{\sigma}}{2} + \frac{1}{4 m^2} (\boldgreek{\sigma} \times \mathbf{p}) \times \mathbf{p} \right\} u_h \nonumber\\ \langle \mathbf{L}_q \rangle^{(1)}_{\rm local} & = & \frac{\alpha_{\rm _{EM}}}{2 \pi} u_h^\dag \left\{ \left( -\frac{4}{3\epsilon_{\rm UV}} - \frac{4}{3}\ln\frac{\mu^2}{m^2} - \frac{20}{9} \right) \frac{\boldgreek{\sigma}}{2} - \frac{1}{4 m^2} \frac{5}{3} (\boldgreek{\sigma} \times \mathbf{p}) \times \mathbf{p} \right\}u_h \nonumber\\ \langle \mathbf{J}_\gamma \rangle^{(1)}_{\rm local} & = & \frac{\alpha_{\rm _{EM}}}{2 \pi} u_h^\dag \left\{ \left( \frac{4}{3 \epsilon_{\rm UV}} + \frac{4}{3}\ln\frac{\mu^2}{m^2} + \frac{11}{9} \right) \frac{\boldgreek{\sigma}}{2} + \frac{1}{4m^2} \frac{2}{3} (\boldgreek{\sigma} \times \mathbf{p}) \times \mathbf{p} \right\}u_h \ , \end{eqnarray} where all IR divergences cancel. The {\it non-local} contributions are \begin{eqnarray} \label{2body1loopOrbital} \mathbf{f}(p,p)^{(1)}_{L_q, {\rm non-local}} & = & - \mathbf{f}(p,p)^{(1)}_{J_\gamma, {\rm non-local}} = \frac{\alpha_{\rm _{EM}}}{2 \pi} \left( -\frac{4}{3\epsilon_{\rm UV}} - \frac{4}{3}\ln\frac{\mu^2}{m^2} - \frac{17}{9} \right)u_h^\dag \mathbf{p}u_h \ .
\end{eqnarray} As a cross-check, we sum up the one-loop results \begin{eqnarray} \langle \mathbf{S}_q + \mathbf{L}_q + \mathbf{J}_\gamma \rangle^{(1)}_{\rm local} &=& 0\nonumber\\ \mathbf{f}(p,p)^{(1)}_{L_q, {\rm non-local}} + \mathbf{f}(p,p)^{(1)}_{J_\gamma, {\rm non-local}} &=& 0 \ . \end{eqnarray} The vanishing sum of the first-order contributions is expected, since the total angular momentum is a conserved quantity. The algebra it respects dictates that the expectation value must be an integer or half-integer, and should not depend on the expansion parameter $\alpha_{\rm _{EM}}$. For the NRQED diagrams, we use dimensional regularization as well. Following the argument in Ref.~\cite{Manohar:1997qy}, in DR all loop diagrams in NRQED are zero due to scaleless integrals. The lack of a scale is a natural consequence of the multipole expansion~\cite{Labelle:1996en}. In the one-loop diagrams containing ultrasoft photons in NRQED, the mass $m$ decouples and the relevant scales are the IR and UV cutoffs of the effective theory. However, in DR both divergences are regulated by the same parameter $\epsilon=4-D$, and the only scale left is $\mu$. Hence, all the NRQED loop integrals take the form \begin{equation} w_i^{(0)} \cdot \langle \mathcal{O}_{i,\rm eff}\rangle^{(1)}(\mu) = \frac{\alpha_{\rm _{EM}}}{2\pi} A_{\rm eff}\left(\frac{1}{\epsilon_{\rm UV}}-\frac{1}{\epsilon_{\rm IR}}\right) \ , \end{equation} where $w_i^{(0)}$ is the zeroth-order Wilson coefficient and $\langle \mathcal{O}_{i,\rm eff}\rangle^{(1)}$ is the first-order matrix element of the effective operators, which is scale-independent. For a general matrix element in QED, we have \begin{equation} \label{IR_in_QED} \langle\mathcal{O}_{\rm QED} \rangle^{(1)}= \frac{\alpha_{\rm _{EM}}}{2\pi}\left(\frac{A}{\epsilon_{\rm UV}} + \frac{B}{\epsilon_{\rm IR}} + (A+B)\ln\frac{\mu^2}{m^2} + C\right) \ , \end{equation} where the UV divergence reflects the renormalization property of the operator in the full theory. By imposing the matching condition, \begin{eqnarray} \label{IR_in_Eff} \langle\mathcal{O}_{\rm NRQED} \rangle^{(1)}(\mu) & = & \langle w_i\mathcal{O}_{i,\rm eff} \rangle^{(1)}(\mu) = w_i^{(0)} \cdot \langle \mathcal{O}_{i,\rm eff} \rangle^{(1)}(\mu) + w_i^{(1)} \cdot \langle \mathcal{O}_{i,\rm eff} \rangle^{(0)}(\mu) \nonumber\\ & = & \frac{\alpha_{\rm _{EM}}}{2\pi} A_{\rm eff}\left(\frac{1}{\epsilon_{\rm UV}}-\frac{1}{\epsilon_{\rm IR}}\right) + w_i^{(1)}(\mu)\langle \mathcal{O}_{i,\rm eff} \rangle^{(0)} . \end{eqnarray} Since the full and effective theories have to reproduce the same IR divergences, matching yields \begin{eqnarray} && A_{\rm eff} = -B, \, \, \, \, \,w_i^{(1)}(\mu) \langle \mathcal{O}_{i,\rm eff} \rangle^{(0)} = \frac{\alpha_{\rm _{EM}}}{2\pi} \left[(A +B)\ln\frac{\mu^2}{m^2} + C\right] \ . \end{eqnarray} Therefore, the Wilson coefficients can be obtained by subtracting all the $1/\epsilon$ poles while keeping the logarithms and finite terms in the full theory calculation. Effectively, this is equivalent to ``pulling up'' the IR divergences into the UV divergences in the Wilson coefficients. Of course, this is conceptually unsatisfactory, since the Wilson coefficients should not depend on physics in the infrared. All the IR sensitivities that appear in the full theory calculation should instead be reproduced by the matrix elements of the effective operators. A serious problem resulting from the above procedure is the obscure physical meaning of the cutoff scale $\mu$ in dimensional regularization.
It yields the wrong scaling behavior of the matrix elements, $\frac{d \langle \hat{\mathcal{O}}_J\rangle}{d \ln\mu}$, where $\hat{\mathcal{O}}_J=\mathbf{S}_q$, $\mathbf{L}_q$ and $\mathbf{J}_\gamma$. To alleviate this confusion in QED, one option is to regularize the IR divergence using a different regulator, e.g., the photon mass. In the effective theory, the scaling behavior of the matrix elements also fails to satisfy the correct evolution equation under the naive matching in dimensional regularization. In Subsection III.F, we will remedy this problem by regularizing the UV divergence with a three-momentum cutoff $\Lambda$ in NRQED, which is distinguished from the QED scale $\mu$. We will also use the photon mass to regularize the IR divergences. For the moment, for the sake of simplicity, we use DR. In NRQED, the contributions to the electron two-body matrix elements can be readily obtained from the operators in Eqs.~(\ref{spin})-(\ref{gamma}). The {\it local} parts of the matrix elements are \begin{eqnarray} \langle \mathbf{S}_q^{\rm eff} \rangle_{\rm local} & = & u_h^\dag\left\{ a_\sigma\frac{\boldgreek{\sigma}}{2} + \frac{a_\pi}{4m^2} (\boldgreek{\sigma}\times\mathbf{p}) \times\mathbf{p} \right\}u_h \ , \nonumber\\ \langle \mathbf{L}_q^{\rm eff} \rangle_{\rm local} & = & u_h^\dag\left\{ d_\sigma\frac{\boldgreek{\sigma}}{2} + \frac{d_\pi}{4m^2} (\boldgreek{\sigma}\times\mathbf{p}) \times\mathbf{p} \right\}u_h \ , \nonumber\\ \langle \mathbf{J}_\gamma^{\rm eff} \rangle_{\rm local} & = & u_h^\dag\left\{ f_\sigma\frac{\boldgreek{\sigma}}{2} + \frac{f_\pi}{4m^2} (\boldgreek{\sigma}\times\mathbf{p}) \times\mathbf{p} \right\}u_h \ , \end{eqnarray} and the {\it non-local} parts of the matrix elements are \begin{eqnarray} \mathbf{f}(p,p)_{L_q^{\rm eff}, {\rm non-local}} & = & d_R u_h^\dag \mathbf{p} u_h \ , \nonumber\\ \mathbf{f}(p,p)_{J_\gamma^{\rm eff}, {\rm non-local}} & = & f_R u_h^\dag \mathbf{p} u_h \ . \end{eqnarray} Comparing them with Eqs.~(\ref{2bodytree}), (\ref{2body1loopSpin}) and (\ref{2body1loopOrbital}), we immediately arrive at some of the Wilson coefficients in Eqs.~(\ref{spin})--(\ref{gamma}) in NRQED \begin{eqnarray} a_\sigma&=&1+\frac{\alpha_{\rm _{EM}}}{2\pi},\;\;a_\pi=1+\frac{\alpha_{\rm _{EM}}}{2\pi} \ , \nonumber \\ d_R &=& 1 + \frac{\alpha_{\rm _{EM}}}{2\pi}\left(-\frac{4}{3}\ln\frac{\mu^2}{m^2}-\frac{17}{9}\right),\;\; d_\sigma = \frac{\alpha_{\rm _{EM}}}{2\pi}\left(-\frac{4}{3}\ln\frac{\mu^2}{m^2}-\frac{20}{9}\right),\;\;d_\pi = -1 - \frac{5\alpha_{\rm _{EM}}}{6\pi} \ ,\nonumber \\ f_R &=& \frac{\alpha_{\rm _{EM}}}{2\pi}\left(\frac{4}{3}\ln\frac{\mu^2}{m^2}+\frac{17}{9}\right),\;\; f_\sigma = \frac{\alpha_{\rm _{EM}}}{2\pi}\left(\frac{4}{3}\ln\frac{\mu^2}{m^2}+\frac{11}{9}\right),\;\;f_\pi = \frac{\alpha_{\rm _{EM}}}{3\pi} \ . \end{eqnarray} These results agree with Eq.~(7) of Ref.~\cite{Chen:2009rw}. Again, these results assume dimensional regularization for the IR and UV divergences in the effective theory calculations. \section{Three-Body Matching Through Electron Scattering in a Background Field} The two-body matching is insufficient to determine all the Wilson coefficients for the effective operators: in two-body calculations, the operators involving at least one photon field give a null result. To obtain the complete expansion, we have to consider more complicated matrix elements. In this section, we consider processes in which an electron interacts with an external electromagnetic field through the angular momentum operators.
Together with the results from the previous section, we will be able to determine the Wilson coefficients for all effective operators bilinear in the electron fields. \subsection{Tree Level} We calculate the QED amplitude $\langle e_{p'}|\mathcal{O}_J|e_{p} \mathbf{A}_{q}\rangle$ with an external vector potential $\mathbf{A}$, as well as $ \langle e_{p'}|\mathcal{O}_J|e_{p} A^0_{q}\rangle$ with an external scalar potential $A^0$. The in- and out-electron states are always taken to be on shell. We further assume the electrons are non-relativistic and the virtual photons carry a small momentum $q$ compared with the electron mass, i.e., $\mathbf{q}^2\ll m^2$. \begin{figure}[htb] \begin{center} \includegraphics[width=5cm]{QED_Lq_Tree_1.eps} \hspace{1cm} \includegraphics[width=5cm]{QED_Lq_Tree_2.eps} \caption{QED tree diagrams for electron scattering in a photon background, coupled to angular momentum operators.} \label{QED_Tree} \end{center} \end{figure} In QED, the tree-level diagrams for the three-body matrix elements are shown in Fig.~\ref{QED_Tree}. Fig.~\ref{QED_Tree}(a) gives the direct contribution from the $\mathbf{x}\times\mathbf{A}$ part in $\mathbf{L}_q$. Fig.~\ref{QED_Tree}(b) gives an indirect contribution. We calculate the elastic scattering matrix element for on-shell electrons, $p^0=p'^0=\sqrt{\mathbf{p}^2+m^2}$, $|\mathbf{p}| = |\mathbf{p'}|$. The incident photon momentum is $q^\mu=(0,\mathbf{q})$, in which $\mathbf{q}=\mathbf{p}'-\mathbf{p}$. Clearly, diagrams such as Fig.~\ref{QED_Tree}(b) will be divergent if we take $p+q=p'$, since the intermediate state carries an on-shell momentum. This is a pole term. During the matching, part of this pole term will be exactly reproduced by the non-relativistic electron propagator in NRQED. There is also a finite contribution, though, due to the propagation of the positron mode. According to Eq.~(\ref{nr}), the virtual positron propagation will be far off-shell and suppressed by $\mathcal{O}(m^{-1})$. Effectively, the propagator of Fig.~\ref{QED_Tree}(b) will shrink to a point and result in a contribution from local effective operators like those in Fig.~\ref{QED_Tree}(a). The physics above the scale $m$ will then be encoded in the corresponding Wilson coefficients. To regulate the above pole term, we introduce a small off-shellness to the intermediate electron by assigning the photon momentum $q^\mu$ a time-like component, $q^\mu\rightarrow(q^0,\mathbf{q})$. Meanwhile, we rewrite the intermediate angular momentum operators as $\hat{\mathcal{O}}\rightarrow\hat{\mathcal{O}}e^{iq^0 x^0}$ to maintain four-momentum conservation. Diagrammatically, we have an additional small momentum $(q^0,0)$ flowing in from the photon and out from the angular momentum operators. Eventually, we will take $q^0\rightarrow0$. For the matrix elements, we separate them into the $1/q^0$ pole term and the part regular in $q^0$, which represent the contributions from non-local electron propagation and from local interactions in NRQED, respectively. In practice, it is enough to match only the regular part of the matrix elements between QED and NRQED. Matching the terms carrying $1/q^0$ poles can be used as a consistency check of our results.
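Before expanding the propagator, it is useful to record one kinematic identity (an intermediate step we spell out explicitly). The elastic kinematics above, $|\mathbf{p}'|=|\mathbf{p}|$ with $\mathbf{p}'=\mathbf{p}+\mathbf{q}$, gives $2\mathbf{p}\cdot\mathbf{q}=-|\mathbf{q}|^2$, so the off-shellness of the intermediate electron is controlled by $q^0$ alone: \begin{equation} (p+q)^2-m^2 = 2p\cdot q + q^2 = 2p^0q^0 + |\mathbf{q}|^2 + (q^0)^2 - |\mathbf{q}|^2 = q^0\left(2p^0+q^0\right) \ . \end{equation} Expanding $p^0 = m + \mathbf{p}^2/2m + \mathcal{O}(m^{-3})$ in this expression reproduces the Taylor series quoted below.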
In the presence of $q^0$, the denominator of the intermediate electron propagator in Fig.~\ref{QED_Tree}(b) can be Taylor expanded as~\cite{Luke:1999kz} \begin{eqnarray} \frac{1}{(p+q)^2-m^2} & = & \frac{1}{2m q^0} - \frac{1}{4m^2} + \frac{q^0}{8m^3} - \frac{\mathbf{p}^2}{4m^3q^0} + \cdots \ , \end{eqnarray} where we have used the on-shell kinematics for the electron, $p^2=p'^2=m^2$, $\mathbf{p}' = \mathbf{p} + \mathbf{q}$ and $p'^0=p^0\simeq m + \mathbf{p}^2/2m$, and made the subsequent non-relativistic expansion. We shall make the same expansion in powers of $q^0$ in the next-to-leading-order calculations as well. For non-local operators, we choose $\mathbf{x}\rightarrow i\boldgreek{\nabla}_q$, i.e., the derivative with respect to the external momentum of the photon. This has several advantages. First, the result will be symmetric with respect to the incoming and outgoing electron momenta. Second, we need not consider the derivative on the photon polarization, which does not depend on $\mathbf{q}$. This greatly simplifies the calculation. Finally, the three-momentum conservation relation $\mathbf{p'}=\mathbf{p}+\mathbf{q}$ should be understood to hold only after the replacement $\mathbf{x}\rightarrow i\boldgreek{\nabla}_q$ has been made. The tree-level non-pole contributions from Fig.~\ref{QED_Tree} in QED are \begin{eqnarray} \label{QED_Tree_Amplitude} &&\langle \mathbf{S}_q \rangle_{\rm local}^{(0)} = \frac{e}{4m^2}u_h^\dag \left[i\mathbf{q}\times\mathbf{A} + (\mathbf{P}\cdot\mathbf{A})\boldgreek{\sigma} - (\mathbf{P}\cdot\boldgreek{\sigma})\mathbf{A}\right]u_h \ ,\nonumber\\ &&\langle \mathbf{L}_q \rangle_{\rm local}^{(0)} = \frac{e}{4m^2}u_h^\dag \left[-i\mathbf{q}\times\mathbf{A} - (\mathbf{P}\cdot\mathbf{A})\boldgreek{\sigma} + (\mathbf{P}\cdot\boldgreek{\sigma})\mathbf{A}\right]u_h \ ,\nonumber\\ &&\mathbf{f}(p',p,p'-p)^{(0)}_{L_q, {\rm non-local}} = u_h^\dag \left[1 - \frac{|\mathbf{q}|^2}{8m^2} + \frac{i}{8m^2} (\mathbf{q}\times\mathbf{P})\cdot\boldgreek{\sigma}\right](-e\mathbf{A})u_h \ , \end{eqnarray} in which $\mathbf{P} \equiv \mathbf{p}'+\mathbf{p}$ and $\mathbf{q} \equiv \mathbf{p}' - \mathbf{p}$. Terms of order $\mathcal{O}(m^{-3})$ have been suppressed. \begin{figure}[!htb] \begin{center} \includegraphics[width=4cm]{NR_Lq_Tree_1.eps} \hspace{.5cm} \includegraphics[width=4cm]{NR_Lq_Tree_2a.eps} \hspace{.5cm} \includegraphics[width=4cm]{NR_Lq_Tree_3a.eps} \caption{Corresponding NRQED tree diagrams for those in Fig.~\ref{QED_Tree}.} \label{NR_Tree} \end{center} \end{figure} Next, we consider the tree diagrams in NRQED as shown in Fig.~\ref{NR_Tree}. The electron propagator in Fig.~\ref{NR_Tree}(b) and (c), \begin{equation} \frac{1}{E_p - \frac{(\mathbf{p}+\mathbf{q})^2}{2m} + i\epsilon}\to\frac{1}{E_{p'} - \frac{\mathbf{p}'^2}{2m}+i\epsilon} \ , \end{equation} is also divergent for on-shell electron external states. By introducing a small non-zero $q^0$, the fermion propagator is regularized to $\frac{i}{q^0+i\epsilon}$. Since there is no angular momentum effective operator containing time derivatives, the $1/q^0$ pole can only be canceled by Dirac and spin-orbit terms in the NRQED Lagrangian involving the electric field strength $\mathbf{E}$\ (since $\mathbf{E}^i = -F^{0i} \sim i(q^0\mathbf{A}^i-q^iA^0)$ contains positive powers of $q^0$), resulting in local matrix elements.
As a concrete example, we calculate the {\it local} part of the matrix element $\langle e|\int d^3\mathbf{x}\psi^\dag\mathbf{x}\times\boldgreek{\pi}\psi|e\mathbf{A}\rangle^{\rm eff,Tree}_{\rm local}$ as shown in Fig.~\ref{NR_Tree}, \begin{eqnarray}\label{eg} \langle e|\mathbf{x}\times\boldgreek{\pi}|e\mathbf{A}\rangle^{(0)}_{\rm local} & = & i\boldgreek{\nabla}_q\times u_h^\dag \left\{(-e\mathbf{A}) + \left(\frac{i\mathbf{p}}{q^0+i\epsilon} + \frac{i\mathbf{p}'}{- q^0 +i\epsilon}\right) \cdot \frac{ie c_D}{8m^2}\mathbf{q}\cdot(\mathbf{q}A^0 - q^0\mathbf{A}) \right.\nonumber\\ &+& \frac{ec_S}{8m^2}q^0\left(\left[(2\mathbf{p}'-\mathbf{q}) \cdot \mathbf{A} \times \boldgreek{\sigma}\right]\frac{i\mathbf{p}} {q^0+i\epsilon} + \left[(2\mathbf{p} + \mathbf{q})\cdot \mathbf{A} \times \boldgreek{\sigma}\right] \frac{i\mathbf{p}'}{-q^0 +i\epsilon} \right) \nonumber\\ &+&\left. \frac{ec_S}{8m^2}A^0\boldgreek{\sigma}\cdot(\mathbf{P}\times\mathbf{q}) \left(\frac{i\mathbf{p}}{q^0+i\epsilon} + \frac{i\mathbf{p}'}{- q^0 +i\epsilon}\right) \right\}u_h \nonumber\\ & \sim & u_h^\dag\left[-\frac{\mathbf{p'}-\mathbf{p}}{q^0} \times \frac{ie c_D}{8m^2} q^0 \mathbf{A} + \frac{\mathbf{p} + \mathbf{p}'}{q^0}\times \frac{e c_S}{8m^2}q^0(\mathbf{A}\times\boldgreek{\sigma}) \right]u_h \nonumber\\ & = & u_h^\dag\left[-\frac{iec_D}{8m^2} \mathbf{q}\times\mathbf{A} - \frac{ec_S}{8m^2}\mathbf{P} \times (\boldgreek{\sigma}\times \mathbf{A})\right]u_h \ . \end{eqnarray} In the above calculation, we have included both orderings, in which the NRQED interaction happens before or after the angular momentum operator along the momentum flow. As promised above, we only need to keep the non-pole terms [regular as $q^0\to 0$] and suppress all the $1/q^0$ poles. Following the same procedure, we have calculated the non-pole contributions from the local and non-local matrix elements for all the effective operators in Eqs.~(\ref{spin})-(\ref{gamma}). The results are listed in Table I. \begin{table}[!htb] \begin{center} \label{NRQED_BiQuark} \begin{tabular} {|c|c|c|}\hline {\rm operator} & {\it local} & {\it non-local} \\ \hline\hline $ \psi^\dag (\boldgreek{\sigma}/2) \psi$ & $-\frac{e}{8m^2}c_S u_h^\dag (\mathbf{P}\times\mathbf{A})\times\boldgreek{\sigma} u_h$ & -\\ \hline $ \psi^\dag \mathbf{x}\times\boldgreek{\pi}\psi $ & $-\frac{e}{8m^2}u_h^\dag\left[c_D (i\mathbf{q}\times\mathbf{A}) \right.$ & $\frac{e}{8m^2} u_h^\dag\left[c_D (\mathbf{q}\cdot\mathbf{A})\mathbf{q} \right.$\\ & $\left. + c_S \mathbf{P}\times(\boldgreek{\sigma}\times \mathbf{A})\right]u_h $& $\left. + ic_S\boldgreek{\sigma}\cdot(\mathbf{P}\times\mathbf{A}) \mathbf{q}\right]u_h$\\ \hline $ \psi^\dag\left[{\mathbf x} \times (\boldgreek{\sigma} \times \mathbf{E})\right]\psi $ & $2u_h^\dag(A^0\boldgreek{\sigma})u_h $ & $-i u_h^\dag (A^0\boldgreek{\sigma}\times\mathbf{q}) u_h$\\ \hline $\psi^\dag \left[(\boldgreek{\sigma}\times\boldgreek{\pi}) \times \boldgreek{\pi}\right.
$ & $e u_h^\dag \left[ \mathbf{P}\times(\boldgreek{\sigma}\times\mathbf{A})\right.$ & - \\ $\left.-\boldgreek{\pi} \times (\boldgreek{\sigma}\times\boldgreek{\pi})\right]\psi$ & $\left.+ \mathbf{A}\times(\boldgreek{\sigma}\times\mathbf{P})\right] u_h$ & \\ \hline $\psi^\dag \mathbf{B}\psi$ & $iu_h^\dag \mathbf{q}\times\mathbf{A} u_h$ & -\\ \hline $ {\mathbf x}\times \left[\mathbf B \times \boldgreek{\nabla}(\psi^\dag\psi)\right] $ & $i u_h^\dag \mathbf{q}\times\mathbf{A} u_h$ & $u_h^\dag \left[\mathbf{q}^2\mathbf{A} - (\mathbf{q}\cdot\mathbf{A})\mathbf{q}\right]u_h$\\ \hline $ {\mathbf x}\times \psi^\dag \left[\mathbf{B} \times(\boldgreek{\sigma} \times \overleftrightarrow{\boldgreek{\pi}}) \right]\psi $ & $u_h^\dag \mathbf{A}\times(\boldgreek{\sigma}\times\mathbf{P})u_h$ & $-i u_h^\dag\left[ \boldgreek{\sigma}\cdot(\mathbf{q}\times\mathbf{P})\mathbf{A} \right.$ \\ & & $\left. + \boldgreek{\sigma}\cdot(\mathbf{P}\times\mathbf{A})\mathbf{q}\right]u_h$\\ \hline ${\mathbf x}\times \psi^\dag \left[\boldgreek{\sigma}\times (\mathbf{B} \times \overleftrightarrow{\boldgreek{\pi}}) \right]\psi$ & $u_h^\dag \left[(\boldgreek{\sigma}\cdot\mathbf{P})\mathbf{A} + (\mathbf{A}\cdot\mathbf{P})\boldgreek{\sigma}\right]u_h$ & $i u_h^\dag\left[(\mathbf{q}\cdot\mathbf{P})\boldgreek{\sigma}\times\mathbf{A}\right.$\\ & & $\left. - (\mathbf{A}\cdot\mathbf{P})\boldgreek{\sigma}\times\mathbf{q}\right]u_h$\\ \hline \end{tabular} \caption{{\it Local} and {\it non-local} contributions to $\langle e|\mathcal{O}^{\rm eff}|e\mathbf{A}\rangle_{\rm Tree}$; dashes denote vanishing contributions.} \end{center} \end{table} From Eqs.~(\ref{spin}), (\ref{orbit}) and (\ref{QED_Tree_Amplitude}), we get the leading-order Wilson coefficients up to $\mathcal{O}(1)$ \begin{eqnarray} & a_\sigma^{(0)} = a_\pi^{(0)} = a_B^{(0)} = 1\ ,&\nonumber\\ & d_R^{(0)} = -d_\pi^{(0)} = -d_B^{(0)} = d_D^{(0)} = d_S^{(0)} = 1& \ . \end{eqnarray} They agree with the non-relativistic reductions through the FW transformation in Eq.~(\ref{FW}). The other Wilson coefficients in $\mathbf{S}_q^{\rm eff}$ and $\mathbf{L}_q^{\rm eff}$ are of order $\mathcal{O}(\alpha_{\rm _{EM}})$ or higher. They will show up in the one-loop matching described in the next subsection. \subsection{One-Loop Matching} In this subsection, we calculate the one-loop radiative corrections to the angular momentum operators in QED and NRQED. To find the complete set of Feynman diagrams, we first draw the one-loop QED diagrams and insert the angular momentum operators in all possible ways. The corresponding Feynman diagrams in QED are shown in Fig.~\ref{QED_OneLoop}. Here we have no diagram with $\mathbf{J}_\gamma$ on the external photon line, because this yields only a pole contribution. The photon pole contribution is related to the matching in the photon state, which will be done in the next section.
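Before turning to the loop diagrams, it is instructive to illustrate how the sum rules constrain these coefficients (a short check we add here). Since $\mathbf{S}_q^{\rm eff}+\mathbf{L}_q^{\rm eff}+\mathbf{J}_\gamma^{\rm eff}$ must reproduce $\mathbf{J}_{\rm NRQED}$, which contains no explicit $\psi^\dag\mathbf{B}\psi$ term in Eq.~(\ref{NRQED_J}), the magnetic coefficients must obey $a_B+d_B+f_B=0$ order by order. At tree level this is satisfied as $a_B^{(0)}+d_B^{(0)}=1-1=0$, with $f_B$ starting at $\mathcal{O}(\alpha_{\rm _{EM}})$; the one-loop coefficients derived below satisfy $7+\frac{143}{9}-\frac{206}{9}=0$ as well.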
\begin{figure}[!htb] \begin{center} \includegraphics[width=3cm]{QED_OneLoop_a.eps} \hspace{2mm} \includegraphics[width=3cm]{QED_OneLoop_b.eps} \hspace{2mm} \includegraphics[width=3cm]{QED_OneLoop_c.eps} \hspace{2mm} \includegraphics[width=3cm]{QED_OneLoop_d.eps}\\ \vspace{2mm} \includegraphics[width=3cm]{QED_OneLoop_e.eps} \hspace{2mm} \includegraphics[width=3cm]{QED_OneLoop_f.eps} \hspace{2mm} \includegraphics[width=3cm]{QED_OneLoop_g.eps} \hspace{2mm} \includegraphics[width=3cm]{QED_OneLoop_h.eps}\\ \vspace{2mm} \includegraphics[width=3cm]{QED_OneLoop_i.eps} \hspace{2mm} \includegraphics[width=3cm]{QED_OneLoop_j.eps} \hspace{2mm} \includegraphics[width=3cm]{QED_OneLoop_k.eps} \caption{One-loop contributions to $\langle e|\mathbf{J}|e\gamma\rangle$ in QED. Here $\otimes$ can be either $\mathbf{L}_q$, $\mathbf{S}_q$, or $\mathbf{J}_\gamma$: (a)-(i) for $\mathbf{L}_q$; (b)(d)(g)(i) for $\mathbf{S}_q$; (j)(k) for $\mathbf{J}_\gamma$. Wave function renormalization diagrams, mass counterterms and the mirror diagrams are not shown explicitly. } \label{QED_OneLoop} \end{center} \end{figure} The loop diagrams are considerably more complicated than the tree-level case. This is because all loops can be Taylor expanded in $q^0$ and contribute to the local matrix elements in a manner similar to Eq.~(\ref{eg}). Again, only the regular part of the matrix elements in the limit $q^0\rightarrow 0$ is taken into account. After lengthy calculations we find, for the vector external potential $\mathbf{A}$ case, that the {\it local} parts of the matrix elements read \begin{eqnarray} \label{S_spin} \langle \mathbf{S}_q \rangle^{(1)}_{\rm local} & = & \frac{\alpha_{\rm _{EM}} }{2\pi} \frac{e}{4m^2} u_h^\dag\left[7i\mathbf{q}\times\mathbf{A} + (\mathbf{P}\cdot\mathbf{A})\boldgreek{\sigma} + (\boldgreek{\sigma}\cdot\mathbf{A})\mathbf{P} - 2(\mathbf{P}\cdot\boldgreek{\sigma})\mathbf{A}\right]u_h \ , \end{eqnarray} \begin{eqnarray} \label{L_spin} \langle \mathbf{L}_q \rangle_{\rm local}^{(1)} & = & \frac{\alpha_{\rm _{EM}}}{2\pi}\frac{e}{4m^2}u_h^\dag\left[ \left(\frac{8}{\epsilon_{\rm IR}} + 8 \ln\frac{\mu^2}{m^2} + \frac{50}{3}\right)i\mathbf{q}\times\mathbf{A} + \frac{2}{3}(\mathbf{P}\cdot\mathbf{A})\boldgreek{\sigma} - 2(\boldgreek{\sigma}\cdot\mathbf{A})\mathbf{P} + \frac{8}{3}(\mathbf{P}\cdot\boldgreek{\sigma})\mathbf{A}\right]u_h \ , \nonumber \\ \end{eqnarray} \begin{eqnarray} \label{J_spin} \langle \mathbf{J}_\gamma \rangle_{\rm local}^{(1)} & = & \frac{\alpha_{\rm _{EM}}}{2\pi}\frac{e}{4m^2}u_h^\dag\left[ \left(-\frac{8}{\epsilon_{\rm IR}} - 8 \ln\frac{\mu^2}{m^2} - \frac{71}{3}\right)i\mathbf{q}\times\mathbf{A} - \frac{5}{3}(\mathbf{P}\cdot\mathbf{A})\boldgreek{\sigma} + (\boldgreek{\sigma}\cdot\mathbf{A})\mathbf{P} - \frac{2}{3}(\mathbf{P}\cdot\boldgreek{\sigma})\mathbf{A}\right]u_h \ , \nonumber \\ \end{eqnarray} and the {\it non-local} part \begin{eqnarray} \label{L_orbital} \mathbf{f}(p',p,p'-p)^{(1)}_{L_q, {\rm non-local}} & = & \frac{\alpha_{\rm _{EM}}}{2\pi}\frac{e}{4m^2}u_h^\dag\left[ \left(\frac{4}{3\epsilon_{\rm UV}} + \frac{4}{3}\ln\frac{\mu^2}{m^2} + \frac{17}{9}\right)(4m^2)\mathbf{A} - \frac{2}{3}i (\mathbf{P}\cdot\mathbf{A}) (\boldgreek{\sigma} \times \mathbf{q})\right.\nonumber\\ & & - \frac{5}{3}i [\boldgreek{\sigma}\cdot(\mathbf{P}\times\mathbf{A})] \mathbf{q} + \left(\frac{2}{3\epsilon_{\rm UV}} + \frac{2}{3}\ln\frac{\mu^2}{m^2} - \frac{31}{18}\right)i [\boldgreek{\sigma}\cdot(\mathbf{q}\times\mathbf{P})] \mathbf{A} \nonumber\\ & & \left.
- \frac{7}{9}(\mathbf{q}\cdot\mathbf{A})\mathbf{q} + \left( -\frac{2}{3\epsilon_{\rm_{UV}}} - \frac{4}{3\epsilon_{\rm_{IR}}} -2 \ln\frac{\mu^2}{m^2} - \frac{1}{6} \right)|\mathbf{q}|^2\mathbf{A} \right]u_h \ , \end{eqnarray} \begin{eqnarray} \label{g_orbital} \mathbf{f}(p',p,p'-p)^{(1)}_{J_\gamma, {\rm non-local}} & = & \frac{\alpha_{\rm _{EM}}}{2\pi}\frac{e}{4m^2} u_h^\dag\left[ \left(-\frac{4}{3\epsilon_{\rm UV}}-\frac{4}{3}\ln\frac{\mu^2}{m^2} - \frac{17}{9}\right)(4m^2)\mathbf{A} + \frac{2}{3}i (\mathbf{P}\cdot\mathbf{A}) (\boldgreek{\sigma} \times \mathbf{q}) \right.\nonumber\\ & & + \frac{5}{3}i [\boldgreek{\sigma}\cdot(\mathbf{P}\times\mathbf{A})] \mathbf{q} + \left(-\frac{2}{3\epsilon_{\rm UV}} - \frac{2}{3}\ln\frac{\mu^2}{m^2} + \frac{13}{18}\right)i [\boldgreek{\sigma}\cdot(\mathbf{q}\times\mathbf{P})] \mathbf{A} \nonumber\\ & & \left. + \frac{7}{9}(\mathbf{q}\cdot\mathbf{A})\mathbf{q} + \left( \frac{2}{3\epsilon_{\rm UV}} + \frac{2}{3}\ln\frac{\mu^2}{m^2} + \frac{1}{6} \right)|\mathbf{q}|^2\mathbf{A} \right]u_h \ . \end{eqnarray} For the scalar potential $A^0$ case, the {\it local} parts of the matrix elements are \begin{eqnarray} \langle \mathbf{S}_q \rangle_{\rm local}^{(1)} & = & \mathcal{O}(m^{-3}) \ ,\nonumber\\ \langle \mathbf{L}_q \rangle_{\rm local}^{(1)} & = & \frac{\alpha_{\rm _{EM}}}{2\pi}\frac{e}{4m^2} u_h^\dag\left[\frac{4m}{3}A^0 \boldgreek{\sigma}\right] u_h + \mathcal{O}(m^{-3}) \ ,\nonumber\\ \langle \mathbf{J}_\gamma \rangle_{\rm local}^{(1)} & = & \frac{\alpha_{\rm _{EM}}}{2\pi}\frac{e}{4m^2}u_h^\dag\left[ -\frac{4m}{3}A^0\boldgreek{\sigma} \right]u_h + \mathcal{O}(m^{-3}) \ , \end{eqnarray} and the {\it non-local} parts \begin{eqnarray} \mathbf{f}(p',p,p'-p)^{(1)}_{L_q, {\rm non-local}} & = & \frac{\alpha_{\rm _{EM}}}{2\pi}\frac{e}{4m^2} u_h^\dag\left[ -\frac{2m}{3}A^0i(\boldgreek{\sigma}\times\mathbf{q}) \right] u_h \ ,\nonumber\\ \mathbf{f}(p',p,p'-p)^{(1)}_{J_\gamma, {\rm non-local}} & = & \frac{\alpha_{\rm _{EM}}}{2\pi}\frac{e}{4m^2} u_h^\dag\left[ \frac{2m}{3}A^0i(\boldgreek{\sigma}\times\mathbf{q}) \right] u_h \ . \end{eqnarray} Again, we sum up all {\it local} and {\it non-local} matrix elements of the angular momentum and get \begin{eqnarray} && \langle e| \mathbf{S}_q + \mathbf{L}_q + \mathbf{J}_\gamma| e \mathbf{A}\rangle_{\rm local}^{(1)} = \langle e| \mathbf{S}_q + \mathbf{L}_q + \mathbf{J}_\gamma| e A^0\rangle_{\rm local}^{(1)} = 0,\nonumber\\ && \langle e| \mathbf{S}_q + \mathbf{L}_q + \mathbf{J}_\gamma| e \mathbf{A}\rangle_{\rm non-local}^{(1)} = \frac{\alpha_{\rm _{EM}}}{2\pi}\frac{e}{4m^2} \mathbf{x} \times u_h^\dag\left[\rule{0mm}{5.5mm}-2i [\boldgreek{\sigma}\cdot(\mathbf{q}\times\mathbf{P})] \mathbf{A} -\frac{4}{3} \left(\frac{1}{\epsilon_{\rm IR}} +\ln\frac{\mu^2}{m^2}\right)|\mathbf{q}|^2\mathbf{A} \right]u_h \ ,\nonumber\\ && \langle e| \mathbf{S}_q + \mathbf{L}_q + \mathbf{J}_\gamma| e A^0\rangle_{\rm non-local}^{(1)} = 0 \ . \end{eqnarray} The null results are again due to the conservation of the total angular momentum. The non-zero first-order result for $\langle e| \mathbf{S}_q + \mathbf{L}_q + \mathbf{J}_\gamma| e \mathbf{A}\rangle_{\rm non-local}^{(1)}$ is due to the non-vanishing one-loop contributions of the electromagnetic form factors $F_1(q^2)$ and $F_2(q^2)$. Once we include the diagrams with the operator $\mathbf{J}_\gamma$ placed on the external photon line, the total non-local contribution to the matrix element $\langle e| \mathbf{J}_\gamma| e \mathbf{A}\rangle_{\rm non-local}^{(1)}$ would also vanish.
For the matrix elements in NRQED, we only need to calculate the tree-level diagrams for the effective operators in dimensional regularization. By applying the matching conditions according to Eq.~(\ref{MatchingCondition}), together with the results in Table~I, all the Wilson coefficients for the effective operators bilinear in the electron fields can be determined in the $\overline{\rm MS}$ scheme, \begin{eqnarray} \label{WilsonBilinear} &&a_\sigma = 1 + \frac{\alpha_{\rm _{EM}}}{2\pi},\;\; a_\pi = 1 + \frac{\alpha_{\rm _{EM}}}{2\pi}, \;\; a_B = 1 + \frac{7\alpha_{\rm _{EM}}}{2\pi} \ ,\nonumber\\ &&d_R = 1 + \frac{\alpha_{\rm _{EM}}}{2\pi} \left(-\frac{4}{3}\ln\frac{\mu^2}{m^2} - \frac{17}{9}\right),\;\; d_\sigma = \frac{\alpha_{\rm _{EM}}}{2\pi}\left(-\frac{4}{3}\ln\frac{\mu^2}{m^2} - \frac{20}{9}\right),\;\; d_\pi = -1 - \frac{5\alpha_{\rm _{EM}}}{6\pi},\nonumber\\ &&d_D = 1 + \frac{\alpha_{\rm _{EM}}}{2\pi}\left(-4\ln\frac{\mu^2}{m^2} - \frac{1}{3}\right),\;\; d_S = 1 + \frac{\alpha_{\rm _{EM}}}{2\pi}\left(-\frac{4}{3}\ln\frac{\mu^2}{m^2} + \frac{31}{9}\right)\ ,\nonumber\\ &&d'_S = \frac{2\alpha_{\rm _{EM}}}{3\pi}, \;\; d_E = \frac{\alpha_{\rm _{EM}}}{3\pi}, \;\; d_B = -1 + \frac{\alpha_{\rm _{EM}}}{2\pi} \left(8\ln\frac{\mu^2}{m^2} + \frac{143}{9}\right) \ ,\nonumber\\ &&f_R = \frac{\alpha_{\rm _{EM}}}{2\pi} \left(\frac{4}{3}\ln\frac{\mu^2}{m^2} + \frac{17}{9}\right),\;\; f_\sigma = \frac{\alpha_{\rm _{EM}}}{2\pi} \left(\frac{4}{3}\ln\frac{\mu^2}{m^2} + \frac{11}{9}\right),\;\;f_\pi = \frac{\alpha_{\rm _{EM}}}{3\pi},\nonumber\\ &&f_D = \frac{\alpha_{\rm _{EM}}}{2\pi}\left(\frac{4}{3}\ln\frac{\mu^2}{m^2} + \frac{1}{3}\right),\;\; f_S = - \frac{\alpha_{\rm _{EM}}}{2\pi}\left(-\frac{4}{3}\ln\frac{\mu^2}{m^2} + \frac{13}{9}\right),\nonumber\\ &&f'_S = -\frac{2\alpha_{\rm _{EM}}}{3\pi},\;\; f_E = -\frac{\alpha_{\rm _{EM}}}{3\pi}, \;\; f_B = \frac{\alpha_{\rm _{EM}}}{2\pi}\left(-8\ln\frac{\mu^2}{m^2}-\frac{206}{9}\right)\ . \end{eqnarray} As can be easily verified, the coefficients satisfy the sum rules in Eq.~(\ref{sum_rule}). These results reproduce Eq.~(8) of Ref.~\cite{Chen:2009rw}, except for the coefficients $d_D$, $d_B$ and $f_B$. We will properly separate the UV and IR divergences in these three coefficients when discussing the renormalization group evolution in Sec.\,VI. \section{Photon Sector} In this section, we calculate the forward matrix elements $\langle\gamma|\hat{\mathcal{O}}|\gamma\rangle$ up to order $\mathcal{O}(\alpha_{\rm _{EM}})$ with off-shell photons carrying a momentum $q$. The tree-level matching is straightforward, and we have \begin{equation} f_\gamma^{(0)} = 1,\;\; d_\gamma^{(0)} = 0. \end{equation} \begin{figure}[hb] \begin{center} \includegraphics[width=3cm]{QED_Photon_a.eps} \hspace{.5cm} \includegraphics[width=3cm]{QED_Photon_b.eps} \hspace{.5cm} \includegraphics[width=3cm]{QED_Photon_c.eps} \caption{One-loop contributions to $\langle \gamma|\mathbf{J}|\gamma\rangle$ in QED. Mirror diagrams are not shown. (a) for $\mathbf{S}_q$; (a)(b) for $\mathbf{L}_q$; (c) for $\mathbf{J}_\gamma$.} \label{QED_photon} \end{center} \end{figure} The relevant QED Feynman diagrams at one loop are shown in Fig.~\ref{QED_photon}. While the operators $\mathbf{S}_q$ and $\mathbf{L}_q$ can be inserted into the fermion loop of the vacuum polarization diagram as in Fig.~\ref{QED_photon}(a) and (b), the one-loop contribution from $\mathbf{J}_\gamma$ only arises in Fig.~\ref{QED_photon}(c).
We need no new effective operator to match the matrix element of $\mathbf{J}_\gamma$, simply because the vacuum polarization is already reflected by the $d_1$- and $d_2$-terms in the effective Lagrangian. Thus \begin{equation} f_\gamma^{(1)} = 0 \ , \end{equation} and there are no further corrections to $\mathbf{J}_\gamma$ at $\mathcal{O}(\alpha_{\rm _{EM}}^0 m^{-2})$ order as already indicated in Eq.~(\ref{gamma}). The calculation of the one-loop matrix elements for $\mathbf{S}_q$ and $\mathbf{L}_q$ in QED yields \begin{eqnarray} \langle\gamma |\mathbf{S}_q| \gamma\rangle_{\rm local}^{(1)} & = & \frac{\alpha_{\rm _{EM}}}{2\pi}\frac{i}{m^2} \frac{2}{3} \left[ (\mathbf{A}\times\mathbf{A}^*)q^0q^2 - (\mathbf{A}\times\mathbf{q}) q^2A^{*0} - (\mathbf{q}\times\mathbf{A}^*)q^2A^0\right] \ ,\\ \langle\gamma |\mathbf{L}_q| \gamma\rangle_{\rm local}^{(1)} & = & \frac{\alpha_{\rm _{EM}}}{2\pi}\left(\frac{\mu^2}{m^2}\right)^\epsilon \frac{2i}{3\epsilon_{\rm UV}}\left[(\mathbf{A} \times \mathbf{A}^*)q^0 + (\mathbf{A}\times\mathbf{q})A^{*0} + 2 (\mathbf{q}\times\mathbf{A}^*)A^0 \right] \nonumber\\ &+& \frac{\alpha_{\rm _{EM}}}{2\pi}\frac{i}{m^2}\frac{1}{15}\left[(\mathbf{A}\times\mathbf{A}^*)q^0q^2 + 3(\mathbf{A}\times\mathbf{q}) q^2A^{*0} + 13(\mathbf{q}\times\mathbf{A}^*)q^2A^0 \right.\nonumber\\ &+& \left. 4(\mathbf{A}\times\mathbf{q})(q\cdot A^*)q^0 +4(\mathbf{q}\times\mathbf{A}^*)(q\cdot A)q^0 \right] \ , \\ \mathbf{f}(q,q)^{(1)}_{L_q, {\rm non-local}} & = & \frac{\alpha_{\rm _{EM}}}{2\pi}\left(\frac{\mu^2}{m^2}\right)^\epsilon \frac{2}{3\epsilon_{\rm UV}} \left[-2(A \cdot A^*)q^0\mathbf{q} + (q\cdot A)q^0 \mathbf{A}^* + (q\cdot A^*)q^0 \mathbf{A} \right] \nonumber\\ &+& \frac{\alpha_{\rm _{EM}}}{2\pi}\frac{1}{m^2} \frac{2}{15} \left[-4(A\cdot A^*)q^0q^2 \mathbf{q} + (q\cdot A)q^0q^2\mathbf{A}^* + (q\cdot A^*)q^0q^2\mathbf{A}\right. \nonumber\\ &+& (q\cdot A)A^{*0}q^ 2\mathbf{q} + (q\cdot A^*)A^0q^2\mathbf{q} - (q^2)^2(A^0\mathbf{A}^* + A^{*0}\mathbf{A})\nonumber\\ &+& \left. 2(q\cdot A)(q\cdot A^*)q^0 \mathbf{q} \;\rule{0mm}{4.5mm}\right] \ , \end{eqnarray} where $A$ and $A^*$ represent the incoming and outgoing photon fields, respectively. \begin{table}[!htb] \begin{center} \begin{tabular} {|c|c|c|}\hline \label{NRQED_photon_Table} {\rm operator} & {\it local} & {\it non-local} \\ \hline\hline $\mathbf{x} \times ( \mathbf{E} \times \mathbf{B} )$ & $i\left[(\mathbf{A} \times \mathbf{A}^*)q^0 + (\mathbf{A}\times\mathbf{q})A^{*0} \right.$ & $-2(A \cdot A^*)q^0\mathbf{q}$\\ & $ \left.+ 2 (\mathbf{q}\times\mathbf{A}^*)A^0 \right]$ & $ + (q\cdot A)q^0 \mathbf{A}^* + (q\cdot A^*)q^0 \mathbf{A}$ \\ \hline $ (\partial^0 F^{\rho i}) F_\rho^{\;j} $ & $ -2i\left[ q^0 q^2\mathbf{A}\times\mathbf{A}^* \right.$ & $-$ \\ & $\left. -(q\cdot A^*) q^0 \mathbf{A}\times\mathbf{q} - (q\cdot A)q^0 \mathbf{q}\times\mathbf{A}^* \right]$ & \\ \hline $ (\partial^i F^{\rho0}) F_\rho^{\;j} $ & $-i\left[(q^2 A^0 -q\cdot A q^0) \mathbf{q} \times \mathbf{A}^*\right.$ & $-$ \\ & $\left. + (q^2 A^{*0} -q\cdot A^* q^0) \mathbf{A} \times \mathbf{q}\right]$ & \\ \hline $ (\square F^{0\rho}) F_\rho^{\;j} x^i $ & $ -i\left[ 4 q^2 A^0 \mathbf{q}\times \mathbf{A}^* - q^2 A^{*0} \mathbf{A}\times\mathbf{q}\right. $ & $ (q^2)^2 (A^{*0}\mathbf{A} + A^0\mathbf{A}^*) $\\ & $\left. 
+q^0 q^2 \mathbf{A}\times\mathbf{A}^* - 2 q\cdot A q^0 \mathbf{q}\times \mathbf{A}^*\right]$ & $ + 2q^0 q^2 (A\cdot A^*)\mathbf{q} -q^2(q\cdot A)(q^0 \mathbf{A}^* +A^{*0}\mathbf{q}) $ \\ & & $-q^2(q\cdot A^*)(q^0 \mathbf{A} +A^{0}\mathbf{q})$\\ \hline $ (\partial^0 F^{\rho\sigma}) (\partial^j F_{\rho\sigma}) x^i $ & $4i q^0(q\cdot A)\mathbf{q}\times\mathbf{A}^* $ & $4 q^0 q^2(A\cdot A^*)\mathbf{q} - 4q^0(q\cdot A)(q\cdot A^*)\mathbf{q} $ \\ \hline \end{tabular} \caption{{\it Local} and {\it non-local} contributions to the off-shell matrix elements $\langle\gamma|\mathcal{O}^{\rm eff}|\gamma\rangle_{\rm Tree}$.} \end{center} \end{table} In NRQED, since the energy fluctuations cannot excite virtual electron-positron pairs, there are no loop corrections to the matrix elements of the effective operators. All the loops in QED will be encoded in the corresponding Wilson coefficients. All the tree-level matrix elements of the effective operators in Eqs.~(\ref{spin}), (\ref{orbit}) and (\ref{gamma}) are summarized in Table~II. Matching the matrix elements between QED and NRQED, it is easy to find the following combinations of effective operators \begin{eqnarray} ( \mathbf{S}_q^3)^{(1)} &\to& \frac{\alpha_{\rm _{EM}}}{2\pi} \frac{1}{4m^2} \epsilon^{ij3} \left( - \frac{2}{3} \right) \left[ (\partial^0 F^{\rho i}) F_\rho^{\;j} - 2 (\partial^j F^{\rho0}) F_{\rho}^{\;j} \right] \ , \nonumber \\ ( \mathbf{L}_q^3)^{(1)} &\to& \frac{\alpha_{\rm _{EM}}}{2\pi} \frac{1}{4m^2} \epsilon^{ij3} \left[ - \frac{8}{15} (\square F^{0\rho}) F_\rho^{\; j} x^i - \frac{4}{15} (\partial^0 F^{\rho\sigma}) (\partial^j F_{\rho\sigma}) x^i + \frac{2}{15} (\partial^0 F^{\rho i}) F_\rho^{\;j} - \frac{4}{3} (\partial^iF^{\rho0}) F_\rho^{\;j} \right] \ . \nonumber \\ \end{eqnarray} A little algebra yields the operator relations \begin{eqnarray} (\partial^0 F^{\rho i})F_\rho^{\;j} &\to& \dot{\mathbf{E}}\times\mathbf{E} - \dot{\mathbf{B}}\times\mathbf{B} \ , \nonumber \\ (\partial^iF^{k0}) F_k^{\;j} &\to& (\boldgreek{\nabla}\cdot\mathbf{E}) \mathbf{B} - (\mathbf{B}\cdot\boldgreek{\nabla})\mathbf{E} \ , \nonumber \\ \square F^{0\rho}F_\rho^{\;j} x^i &\to& - \mathbf{x} \times ( \square \mathbf{E} \times \mathbf{B} ) \ , \nonumber \\ \partial^0 F^{\rho\sigma}\partial^j F_{\rho\sigma} x^i &\to& 2 \left[ \dot{\mathbf{E}}^a (\mathbf{x}\times \boldgreek{\nabla}) \mathbf{E}^a - \dot{\mathbf{B}}^a(\mathbf{x}\times \boldgreek{\nabla}) \mathbf{B}^a \right] \ . \end{eqnarray} Together with Eqs.~(\ref{spin}) and (\ref{orbit}), we extract the Wilson coefficients in the photon sector in the $\overline{\rm MS}$ scheme \begin{eqnarray} \label{WilsonGauge} &&a_{\gamma_1}=-\frac{\alpha_{\rm _{EM}}}{12\pi}, \ \ \ a_{\gamma_2}=\frac{\alpha_{\rm _{EM}}}{6\pi} \ ,\nonumber\\ && d_\gamma=\frac{\alpha_{\rm _{EM}}}{3\pi}\ln\frac{\mu^2}{m^2}, \ \ \ d_{\gamma_1} = \frac{\alpha_{\rm _{EM}}}{60\pi}, \ \ \ d_{\gamma_2}= -\frac{\alpha_{\rm _{EM}}}{6\pi}, \ \ \ d_{\gamma_3}= - \frac{\alpha_{\rm _{EM}}}{15\pi}, \ \ \ d_{\gamma_4}= - \frac{\alpha_{\rm _{EM}}}{15\pi} \ , \nonumber\\ && f_{\gamma}=1 \ . \end{eqnarray} \section{Distinguishing QED and NRQED UV Cutoffs} In the previous sections, we have obtained the non-relativistic reduction of the angular momentum operators, with the main results in Eqs.~(\ref{spin})-(\ref{gamma}) and Eqs.~(\ref{WilsonBilinear}), (\ref{WilsonGauge}). We have used dimensional regularization for both the UV and IR divergences in QED and NRQED, and the Wilson coefficients depend on a single energy scale $\mu$.
However, there is no requirement that NRQED use the same UV regularization. In fact, in many bound state calculations, it is better to use a three-momentum cutoff for the UV divergences and the photon mass for the IR divergences. In this section, we redo the calculations in this new regularization scheme. Before that, let us consider a problem that arises from using DR for all divergences. We expect the UV behavior of the QED operators to be completely captured by the Wilson coefficients in NRQED. In other words, the matrix elements in NRQED must satisfy the same renormalization group (RG) evolution equation \begin{equation} \label{RG_Equation} \frac{d}{d t}\begin{pmatrix}\mathbf{J}_q^{\rm eff} \\ \mathbf{J}_\gamma^{\rm eff}\end{pmatrix} = \frac{\alpha_{\rm _{EM}}}{2\pi} \begin{pmatrix} -\frac{4}{3} & \frac{1}{3} + \frac{1}{3} \\ \frac{4}{3} & -\frac{1}{3} + \frac{1}{3} \end{pmatrix} \begin{pmatrix}\mathbf{J}_q^{\rm eff} \\ \mathbf{J}_\gamma^{\rm eff}\end{pmatrix} \ , \end{equation} where $t = \ln \mu^2/m^2$ and $\mathbf{J}_q^{\rm eff} \equiv \mathbf{S}_q^{\rm eff} + \mathbf{L}_q^{\rm eff}$ represents the total angular momentum carried by the electron. The additional $\frac{1}{3}$ in the evolution equation stems from the scale dependence of the redefined photon fields in the effective theory, i.e., the $d_1$ term. Our effective operators in Eqs.~(\ref{spin})-(\ref{gamma}), with the Wilson coefficients listed in Eqs.~(\ref{WilsonBilinear}) and (\ref{WilsonGauge}), actually fail to satisfy the desired evolution. The reason, as already discussed in Sec.~III, lies in the use of DR. In the QED calculations, we have not distinguished the $\mu$ dependence coming from the UV divergences from that coming from the IR divergences. The IR divergences should be captured by the effective theory and have nothing to do with the RG flow. On the other hand, when matching to NRQED, we also have not distinguished the UV cutoff dependence from that in QED. By introducing the three-momentum cutoff $\Lambda$ in the NRQED calculations, we can restore the correct RG evolution. In addition, we let the photon have a small mass $\lambda$ in both QED and NRQED to regulate the infrared. According to Ref.~\cite{Kinoshita:1995mt}, the Wilson coefficients in the NRQED Lagrangian with the three-momentum cutoff $\Lambda$ as the UV regulator are \begin{eqnarray} \label{cutoff_Lag} &&c_2 = c_4 = 1,\;\; c_F = 1 + \frac{\alpha_{\rm _{EM}}}{2\pi},\;\; c_D = 1 - \frac{8\alpha_{\rm _{EM}}}{3\pi} \left(\ln\frac{2\Lambda}{m} - \frac{5}{6}\right),\;\; c_S = 1 + \frac{\alpha_{\rm _{EM}}}{\pi} \ ,\nonumber\\ &&d_1 = 1 + \frac{\alpha_{\rm _{EM}}}{3\pi}\ln\frac{\mu^2}{m^2},\;\; d_2 = \frac{\alpha_{\rm _{EM}}}{60\pi} \ . \end{eqnarray} The separation of the two scales $\mu$ and $\Lambda$ is obvious. The full theory amplitude with the photon mass regulator can be obtained by making the replacement in Eqs.~(\ref{S_spin})-(\ref{g_orbital}) \begin{equation} \left(\frac{1}{\epsilon_{\rm IR}} + \ln\frac{\mu^2}{m^2}\right) \rightarrow 2\ln\frac{\lambda}{m} \ . \end{equation} In the effective theory, the loop diagrams are no longer vanishing if we use $\Lambda$ and $\lambda$ to regulate the UV and IR divergences. We choose to work in the Coulomb gauge, since the full result is gauge-invariant. All the diagrams under consideration in NRQED are listed in Fig.~\ref{NR_OneLoop}. The non-relativistic electron propagator picks up only the pole at $p^0=\sqrt{\mathbf{p}^2+m^2}$.
\begin{equation} S_F^{\rm eff}(p+k) = \frac{1}{E_p+k^0-\frac{(\mathbf{p}+\mathbf{k})^2}{2m}+ i\epsilon} = \frac{1}{k^0+i\epsilon} + \frac{2\mathbf{p}\cdot\mathbf{k}+\mathbf{k}^2}{2m(k^0+i\epsilon)^2} + \cdots . \end{equation} This can be understood as a multipole expansion in inverse powers of $m$. \begin{figure} \begin{center} \includegraphics[width=3cm]{NR_OneLoop_a.eps} \hspace{2mm} \includegraphics[width=3cm]{NR_OneLoop_b.eps} \hspace{2mm} \includegraphics[width=3cm]{NR_OneLoop_c.eps} \hspace{2mm} \includegraphics[width=3cm]{NR_OneLoop_d.eps}\\ \vspace{2mm} \includegraphics[width=3cm]{NR_OneLoop_e.eps} \hspace{2mm} \includegraphics[width=3cm]{NR_OneLoop_f.eps} \hspace{2mm} \includegraphics[width=3cm]{NR_Jg_OneLoop_a.eps} \hspace{2mm} \includegraphics[width=3cm]{NR_Jg_OneLoop_b.eps}\\ \vspace{2mm} \includegraphics[width=3cm]{NR_Jg_OneLoop_c.eps} \hspace{2mm} \includegraphics[width=3cm]{NR_Jg_OneLoop_d.eps} \hspace{3mm} \includegraphics[width=3cm]{NR_Jg_OneLoop_e.eps} \caption{One-loop corrections to $\langle e|\mathbf{L}_q^{\rm eff}|e\mathbf{A}\rangle$, (a)-(f), and $\langle e|\mathbf{J}_\gamma^{\rm eff}|e\mathbf{A}\rangle$, (g)-(k), in NRQED. In the diagrams, the filled box represents the vertex interaction from the term $-\frac{\mathbf{D}^2}{2m}$ in the NRQED Lagrangian, and the dot represents the $D^0$ vertex. The dashed line is for the Coulomb photon propagator, while the wavy line is for the transverse photon propagator. Wave function renormalization diagrams, mass counterterms and the mirror diagrams are not shown explicitly. } \label{NR_OneLoop} \end{center} \end{figure} The one-loop effective theory matrix elements now read: \begin{eqnarray} \langle \mathbf{L}_q^{\rm eff} \rangle^{(1)}_{\rm local} & = & \frac{\alpha_{\rm _{EM}}}{2\pi} \frac{-4ie}{m^2} d_R^{(0)} \left(\ln\frac{2\Lambda}{\lambda} - \frac{5}{6} \right) u_h^\dag\mathbf{q}\times\mathbf{A}u_h \ , \nonumber\\ \langle \mathbf{J}_\gamma^{\rm eff} \rangle^{(1)}_{\rm local} & = & \frac{\alpha_{\rm _{EM}}}{2\pi} \frac{4ie}{m^2} f_\gamma^{(0)} \left(\ln\frac{2\Lambda}{\lambda} - \frac{5}{6} \right) u_h^\dag\mathbf{q}\times\mathbf{A}u_h \ , \nonumber\\ \langle \mathbf{L}_q^{\rm eff} \rangle^{(1)}_{\rm non-local} & = & \frac{\alpha_{\rm _{EM}}}{2\pi} \frac{2}{3m^2} d_R^{(0)} \left(\ln\frac{2\Lambda}{\lambda} - \frac{5}{6} \right)\mathbf{x}\times u_h^\dag |\mathbf{q}|^2\mathbf{A} u_h \ . \end{eqnarray} The one-loop amplitudes $\langle \mathbf{S}_q^{\rm eff} \rangle_{\rm local}$ and $\langle \mathbf{J}_\gamma^{\rm eff} \rangle_{\rm non-local}$, as well as the diagrams involving the scalar potential $A^0$, vanish at order $\mathcal{O}(m^{-2})$ in NRQED. Now the matching condition Eq.~(\ref{MatchingCondition}) should be rewritten as \begin{equation} \label{MatchingConditionNew} \langle\mathbf{J}(\mu)\rangle_{\rm QED} = \langle\mathbf{J}^{\rm eff}(\mu,\Lambda)\rangle_{\rm NRQED} \ . \end{equation} The sum rules of the Wilson coefficients in Eq.~(\ref{sum_rule}) do not change. However, all the Wilson coefficients in the effective Lagrangian are now defined in the presence of the new regulators, similar to Eq.~(\ref{cutoff_Lag}). With the new matching conditions, we find yet another set of Wilson coefficients, depending on both the three-momentum cutoff $\Lambda$ in NRQED and the QED scale $\mu$. We only list the ones different from those in Eq.~(\ref{WilsonBilinear}), and denote them with an asterisk.
\begin{eqnarray} &&d_D^* = 1 + \frac{\alpha_{\rm _{EM}}}{2\pi} \left(-\frac{4}{3} \ln\frac{\mu^2}{m^2} + \frac{16}{3} \ln\frac{m}{2\Lambda} + \frac{37}{9}\right) \ ,\nonumber\\ &&d_B^* = -1 + \frac{\alpha_{\rm _{EM}}}{2\pi} \left( -16\ln\frac{m}{2\Lambda} + \frac{23}{9}\right)\ ,\nonumber \\ &&f_B^* = \frac{\alpha_{\rm _{EM}}}{2\pi} \left( 16\ln\frac{m}{2\Lambda} - \frac{86}{9}\right) \ . \end{eqnarray} With the new coefficients, both $\mathbf{J}_q^{\rm eff}$ and $\mathbf{J}_\gamma^{\rm eff}$ now satisfy the RG equation, Eq.~(\ref{RG_Equation}). They now agree with those in Eq.~(8) of Ref.~\cite{Chen:2009rw}. \section{Conclusions} In this paper, we have systematically established the relativistic angular momentum components in the framework of NRQED up to order $\alpha_{\rm _{EM}}/m^2$. Such effective operators can be used in the computation of higher-order contributions to the spin/orbital angular momentum in non-relativistic bound states, such as the hydrogen atom. Further extensions to the effective operators in NRQCD can be readily obtained following the same procedure, and will be useful for studying strong-interaction bound states such as the heavy quarkonium systems. We would like to thank Peng Sun and Yang Xu for discussions and comments on the manuscript. This work was partially supported by the U.~S. Department of Energy via grant DE-FG02-93ER-40762. YZ acknowledges the support from the TQHN group at University of Maryland and the Center for High-Energy Physics at Peking University where part of the work was done.
\section{Supplemental Material} \label{sec:appendix} In this supplemental material, we first present the learning process of SHT\ with pseudocode summarized in Algorithm~\ref{alg:learn_alg}. Then, we investigate the influence of different hyperparameter settings, focusing on three key hyperparameters. Finally, we describe the details of our vector visualization algorithm used in the case study experiments. \subsection{Learning process of SHT} \vspace{-0.1in} \begin{algorithm}[h] \caption{Learning Process of SHT\ Framework} \label{alg:learn_alg} \LinesNumbered \KwIn{user-item interaction graph $\mathcal{G}$, number of graph layers $L$, number of edges to sample $R, R'$, maximum epoch number $E$, learning rate $\eta$} \KwOut{trained parameters in $\mathbf{\Theta}$} Initialize all parameters in $\mathbf{\Theta}$\\ \For{$e=1$ to $E$}{ Draw a mini-batch $\textbf{U}$ from all users $\{1,2,...,I\}$\\ Calculate the graph topology-aware embeddings $\bar{\textbf{E}}$\\ Generate input embeddings $\tilde{\textbf{E}}_0$ for the hypergraph transformer\\ \For{$l=1$ to $L$} { Conduct node-to-hyperedge propagation to obtain $\tilde{\textbf{Z}}^{(u)}, \tilde{\textbf{Z}}^{(v)}$ for both users and items\\ Conduct hierarchical hyperedge feature transformation for $\hat{\textbf{Z}}^{(u)}, \hat{\textbf{Z}}^{(v)}$\\ Propagate information from hyperedges back to user/item nodes to obtain $\tilde{\textbf{E}}^{(u)}_l,\tilde{\textbf{E}}^{(v)}_l$\\ } Aggregate the iteratively propagated embeddings to get $\hat{\textbf{E}}$\\ Sample $R$ edge pairs for self-augmented learning\\ Acquire the user/item transformation functions $\phi^{(u)}$ and $\phi^{(v)}$ with the meta network\\ Conduct user/item embedding transformations using $\phi(\cdot)$ to get $\mathbf{\Gamma}^{(u)}, \mathbf{\Gamma}^{(v)}$\\ Calculate the solidity scores $s$ for the $R$ edge pairs\\ Calculate the solidity predictions $\hat{s}$ for the $R$ edge pairs\\ Compute the loss $\mathcal{L}_\text{sa}$ for self-augmented learning according to Eq.~\ref{eq:sa_loss}\\ Sample $R'$ edge pairs for the main task\\ Calculate the pair-wise marginal loss $\mathcal{L}$ according to Eq.~\ref{eq:loss}\\ \For{each parameter $\theta \in\mathbf{\Theta}$}{ $\theta=\theta-\eta\cdot\partial\mathcal{L}/\partial\theta$ } } \Return all parameters $\mathbf{\Theta}$ \end{algorithm} \subsection{Hyperparameter Investigation} We study the effect of three important hyperparameters, \textit{i}.\textit{e}., the hidden dimensionality $d$, the number of latent hyperedges $K$, and the number of graph iterations $L$. To present more results, we calculate the relative performance decrease in terms of the evaluation metrics, compared to the best performance under the default settings. The results are shown in Fig.~\ref{fig:hyperparam}, and our observations are as follows: \begin{itemize}[leftmargin=*] \item The latent embedding dimension size largely determines the representation ability of the proposed SHT\ model. A small $d$ greatly limits the efficacy of SHT, causing a 15\%-35\% performance decrease. However, a greater $d$ does not always yield obvious improvements. As shown by the results for $d=68$ on the Yelp data, the performance increases only marginally due to the over-fitting effect. \item The curve of performance w.r.t. the hyperedge number $K$ typically follows the under- to over-fitting pattern. However, it is interesting that $K$ has a significantly smaller influence than $d$ (at most $-6\%$ and $-35\%$, respectively).
This is because the hyperedge-node connections in SHT\ are calculated in a $d$-dimensional space, which reduces the number of independent parameters related to $K$ to $O(K\times d)$. Hence $K$ has a much smaller impact on model capacity than $d$, which relates to $O((I+J)\times d)$ parameters. \item For the number of graph iterations $L$, a smaller $L$ hinders nodes from aggregating high-order neighboring information. When $L=0$, the graph neural network degrades significantly. Meanwhile, stacking more graph layers may cause the over-smoothing issue, which yields indistinguishable node embeddings. \end{itemize} \begin{figure}[h] \centering \begin{adjustbox}{max width=1.0\linewidth} \input{./fig/hyper1.tex} \end{adjustbox} \begin{adjustbox}{max width=1.0\linewidth} \input{./fig/hyper2.tex} \end{adjustbox} \vspace{-0.2in} \caption{Hyperparameter study of the SHT.} \vspace{-0.2in} \label{fig:hyperparam} \end{figure} \subsection{Vector Visualization Algorithm} In our case study experiments, each 32-dimensional item embedding is visualized as a color. This visualization process should preserve the learned item information in the embedding vectors. Meanwhile, to make the visualization results easy to understand, it is preferable to pre-select several colors to use. Considering the above two requirements, we design a neural-network-based dimension reduction algorithm. Specifically, we train a multi-layer perceptron to map the 32-dimensional item embeddings to 3-dimensional RGB values. The network is trained using two objective functions, corresponding to the foregoing two requirements. Firstly, the compressed 3-d vectors (colors) are fed into a classifier to predict the original item ids. Through this self-discrimination task, the network is trained to preserve the original embedding information in the RGB vectors. Secondly, the network is trained with a regularizer that calculates the distance between each color vector and the preferred colors. Using the two objectives, we can map embeddings into the preferred colors while preserving the embedding information. \section{Conclusion} \label{sec:conclusion} In this work, we explore self-supervised recommender systems with an effective hypergraph transformer network. We propose a new recommendation framework, SHT, which seeks better user-item interaction modeling with self-augmented supervision signals. Our SHT\ model improves the robustness of graph-based recommender systems against noise perturbation. In our experiments, we achieve better recommendation results on real-world datasets. In future work, we plan to extend SHT\ to explore disentangled user intents with diverse user-item relations for encoding multi-dimensional user preferences. \section{Evaluation} \label{sec:eval} To evaluate the effectiveness of our SHT, our experiments are designed to answer the following research questions: \begin{itemize}[leftmargin=*] \item \textbf{RQ1}: How does our SHT\ perform compared to strong baseline methods of different categories under different settings? \item \textbf{RQ2}: How do the key components of SHT\ (\eg, the hypergraph modeling, the transformer-like information propagation) contribute to the overall performance of SHT\ on different datasets? \item \textbf{RQ3}: How well can our SHT\ handle noisy and sparse data, as compared to baseline methods? \item \textbf{RQ4}: In real cases, can the designed self-supervised learning mechanism in SHT\ provide useful interpretations?
\end{itemize} \begin{table}[] \centering \small \caption{Statistical information of the experimental datasets.} \vspace{-0.15in} \begin{tabular}{lcccc} \toprule Stat. & Yelp & Gowalla & Tmall\\ \midrule \# Users & 29601 & 50821 & 47939\\ \# Items & 24734& 24734 & 41390\\ \# Interactions & 1517326 & 1069128 & 2357450\\ Density & $2.1\times 10^{-3}$ & $4.0\times 10^{-4}$ & $1.2\times 10^{-3}$\\ \bottomrule \end{tabular} \vspace{-0.1in} \label{tab:data} \end{table} \subsection{Experimental Settings} \subsubsection{\bf Experimental Datasets} The experiments are conducted on three datasets collected from real-world applications, \textit{i}.\textit{e}., Yelp, Gowalla and Tmall. Their statistics are shown in Table~\ref{tab:data}. \begin{itemize}[leftmargin=*] \item \textbf{Yelp}: This commonly-used dataset contains user ratings on business venues collected from Yelp. Following other papers on implicit feedback~\cite{huang2021knowledge}, we treat users' rated venues as interacted items and unrated venues as non-interacted items. \item \textbf{Gowalla}: It contains users' check-in records on geographical locations obtained from Gowalla. This evaluation dataset is generated from the period between 2016 and 2019. \item \textbf{Tmall}: This e-commerce dataset is released by Tmall, containing users' behaviors for online shopping. We collect the page-view interactions during December 2017. \end{itemize} \subsubsection{\bf Evaluation Protocols} Following the recent collaborative filtering models~\cite{he2020lightgcn, wu2021self}, we split the datasets by 7:2:1 into training, validation and testing sets. We adopt the all-rank evaluation protocol: when testing a user, the positive items in the test set and all the non-interacted items are ranked together. We employ the commonly-used \textit{Recall@N} and \textit{Normalized Discounted Cumulative Gain (NDCG)@N} as evaluation metrics for recommendation performance~\cite{wang2019neural,ren2020sequential}. \textit{N} is set as 20 by default. \subsubsection{\bf Compared Baseline Methods} We evaluate our SHT\ by comparing it with 15 baselines from different research lines for a comprehensive evaluation. \noindent \textbf{Traditional Factorization-based Technique.} \begin{itemize}[leftmargin=*] \item \textbf{BiasMF}~\cite{koren2009matrix}: This method augments matrix factorization with user and item bias vectors to enhance user-specific preferences. \end{itemize} \noindent \textbf{Neural Factorization Method.}\vspace{-0.05in} \begin{itemize}[leftmargin=*] \item \textbf{NCF}~\cite{he2017neural}: This method replaces the dot-product in conventional matrix factorization with multi-layer neural networks. Here, we adopt the NeuMF variant for comparison. \end{itemize} \noindent \textbf{Autoencoder-based Collaborative Filtering Approach.} \begin{itemize}[leftmargin=*] \item \textbf{AutoR}~\cite{sedhain2015autorec}: It improves user/item representations with a three-layer autoencoder trained under a behavior reconstruction task. \end{itemize} \noindent \textbf{Graph Neural Networks for Recommendation.} \begin{itemize}[leftmargin=*] \item \textbf{GCMC}~\cite{berg2017graph}: This is one of the pioneering works applying graph convolutional networks (GCNs) to the matrix completion task. \item \textbf{PinSage}~\cite{ying2018graph}: It applies random sampling in the graph convolutional framework to study the collaborative filtering task.
\item \textbf{NGCF}~\cite{wang2019neural}: This graph convolution-based approach additionally takes source-target representation interaction learning into consideration when designing its graph encoder. \item \textbf{STGCN}~\cite{zhang2019star}: The model combines conventional graph convolutional encoders with graph autoencoders to improve robustness against sparse and cold-start samples. \item \textbf{LightGCN}~\cite{he2020lightgcn}: This work conducts an in-depth analysis of the effectiveness of the modules in standard GCNs for collaborative data, and proposes a simplified GCN model for recommendation. \item \textbf{GCCF}~\cite{chen2020revisiting}: This is another method which simplifies GCNs by removing the non-linear transformation. In GCCF, the effectiveness of residual connections across graph iterations is validated. \end{itemize} \noindent \textbf{Disentangled Graph Model for Recommendation.} \begin{itemize}[leftmargin=*] \item \textbf{DGCF}~\cite{wang2020disentangled}: It disentangles user interactions into multiple latent intentions to model user preference in a fine-grained way. \end{itemize} \noindent \textbf{Hypergraph-based Neural Collaborative Filtering.} \begin{itemize}[leftmargin=*] \item \textbf{HyRec}~\cite{wang2020next}: This is a sequential collaborative model that learns item-wise high-order relations with hypergraphs. \item \textbf{DHCF}~\cite{ji2020dual}: This model adopts dual-channel hypergraph neural networks for both users and items in collaborative filtering. \end{itemize} \noindent \textbf{Recommenders Enhanced by Self-Supervised Learning.} \begin{itemize}[leftmargin=*] \item \textbf{MHCN}~\cite{yu2021self}: This model maximizes the mutual information between node embeddings and global readout representations, to regularize the representation learning for the interaction graph. \item \textbf{SLRec}~\cite{yao2021self}: This approach employs contrastive learning between node features as regularization terms to enhance existing recommender systems. \item \textbf{SGL}~\cite{wu2021self}: This model conducts data augmentation through random walk and feature dropout to generate multiple views. It enhances LightGCN with self-supervised contrastive learning. \end{itemize} \subsubsection{\bf Implementation Details} We implement our SHT\ using TensorFlow and use Adam as the optimizer for model training, with a learning rate of $1e^{-3}$ and an epoch decay ratio of $0.96$. The models are configured with an embedding dimension of $32$, and the number of graph neural layers is searched from \{1,2,3\}. The weights $\lambda_1, \lambda_2$ for the regularization terms are selected from $\{a\times 10 ^ {-x}: a\in\{1, 3\}, x\in\{2,3,4,5\}\}$. The batch size is selected from $\{32, 64, 128, 256, 512\}$. The dropout rate is tuned from $\{0.25, 0.5, 0.75\}$. For our model, the number of hyperedges is set as $128$ by default. Detailed hyperparameter settings can be found in our released source code.
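To make the evaluation protocol above concrete, the following is a minimal sketch of \textit{Recall@N} and \textit{NDCG@N} under all-rank evaluation; the function and variable names are ours, not taken from the released code. \begin{verbatim}
import numpy as np

def recall_ndcg_at_n(rank_lists, test_items, n=20):
    """rank_lists: per-user item ids ranked by predicted score
    (test positives and all non-interacted items together);
    test_items: per-user set of held-out positive items."""
    recalls, ndcgs = [], []
    for ranked, positives in zip(rank_lists, test_items):
        if not positives:
            continue
        hits = [1.0 if item in positives else 0.0 for item in ranked[:n]]
        recalls.append(sum(hits) / len(positives))
        # DCG of the top-N list, normalized by the ideal DCG
        dcg = sum(h / np.log2(i + 2) for i, h in enumerate(hits))
        idcg = sum(1.0 / np.log2(i + 2)
                   for i in range(min(len(positives), n)))
        ndcgs.append(dcg / idcg)
    return float(np.mean(recalls)), float(np.mean(ndcgs))
\end{verbatim}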
\subsection{Overall Performance Comparison (RQ1)} \begin{table*}[t] \vspace{-0.1in} \caption{Performance comparison on the Yelp, Gowalla and Tmall datasets in terms of \textit{Recall} and \textit{NDCG}.} \vspace{-0.15in} \centering \footnotesize \setlength{\tabcolsep}{1mm} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline Data & Metric & BiasMF & NCF & AutoR & GCMC & PinSage & NGCF & STGCN & LightGCN & GCCF & DGCF & HyRec & DHCF & MHCN & SLRec & SGL & \emph{SHT} & p-val.\\ \hline \multirow{4}{*}{Yelp} &Recall@20 & 0.0190 & 0.0252 & 0.0259 & 0.0266 & 0.0345 & 0.0294 & 0.0309 & 0.0482 & 0.0462 & 0.0466 & 0.0472 & 0.0449 & 0.0503 & 0.0476 & 0.0526 & \textbf{0.0651} & $9.3e^{-7}$\\ &NDCG@20 & 0.0161 & 0.0202 & 0.0210 & 0.0251 & 0.0288 & 0.0243 & 0.0262 & 0.0409 & 0.0398 & 0.0395 & 0.0395 & 0.0381 & 0.0424 & 0.0398 & 0.0444 & \textbf{0.0546} & $9.1e^{-8}$ \\ \cline{2-19} &Recall@40 & 0.0371 & 0.0487 & 0.0504 & 0.0585 & 0.0599 & 0.0522 & 0.0504 & 0.0803 & 0.0760 & 0.0774 & 0.0791 & 0.0751 & 0.0826 & 0.0821 & 0.0869 & \textbf{0.1091} & $4.1e^{-7}$\\ &NDCG@40 & 0.0227 & 0.0289 & 0.0301 & 0.0373 & 0.0385 & 0.0330 & 0.0332 & 0.0527 & 0.0508 & 0.0511 & 0.0522 & 0.0493 & 0.0544 & 0.0541 & 0.0571 & \textbf{0.0709} & $2.2e^{-7}$ \\ \hline \multirow{4}{*}{Gowalla} &Recall@20 & 0.0196 & 0.0171 & 0.0239 & 0.0301 & 0.0576 & 0.0552 & 0.0369 & 0.0985 & 0.0951 & 0.0944 & 0.0901 & 0.0931 & 0.0955 & 0.0925 & 0.1030 & \textbf{0.1232} & $5.3e^{-7}$\\ &NDCG@20 & 0.0105 & 0.0106 & 0.0132 & 0.0181 & 0.0373 & 0.0298 & 0.0217 & 0.0593 & 0.0535 & 0.0522 & 0.0498 & 0.0505 & 0.0574 & 0.0581 & 0.0623 & \textbf{0.0731} & $6.3e^{-7}$\\ \cline{2-19} &Recall@40 & 0.0346 & 0.0216 & 0.0343 & 0.0427 & 0.0892 & 0.0810 & 0.0542 & 0.1431 & 0.1392 & 0.1401 & 0.1306 & 0.1356 & 0.1393 & 0.1305 & 0.1500 & \textbf{0.1804} & $1.5e^{-7}$\\ &NDCG@40 & 0.0145 & 0.0118 & 0.0160 & 0.0212 & 0.0417 & 0.0367 & 0.0262 & 0.0710 & 0.0684 & 0.0671 & 0.0669 & 0.0660 & 0.0689 & 0.0680 & 0.0746 & \textbf{0.0881} & $3.2e^{-7}$\\ \hline \multirow{4}{*}{Tmall} &Recall@20 & 0.0103 & 0.0082 & 0.0103 & 0.0103 & 0.0202 & 0.0180 & 0.0146 & 0.0225 & 0.0209 & 0.0235 & 0.0233 & 0.0156 & 0.0203 & 0.0191 & 0.0268 & \textbf{0.0387} & $4.3e^{-9}$\\ &NDCG@20 & 0.0072 & 0.0059 & 0.0072 & 0.0072 & 0.0136 & 0.0123 & 0.0105 & 0.0154 & 0.0141 & 0.0163 & 0.0160 & 0.0108 & 0.0139 & 0.0133 & 0.0183 & \textbf{0.0262} & $4.9e^{-9}$\\ \cline{2-19} &Recall@40 & 0.0170 & 0.0140 & 0.0174 & 0.0159 & 0.0345 & 0.0310 & 0.0245 & 0.0378 & 0.0356 & 0.0394 & 0.0350 & 0.0261 & 0.0340 & 0.0301 & 0.0446 & \textbf{0.0645} & $4.0e^{-9}$\\ &NDCG@40 & 0.0095 & 0.0079 & 0.0097 & 0.0086 & 0.0186 & 0.0168 & 0.0140 & 0.0208 & 0.0196 & 0.0218 & 0.0199 & 0.0145 & 0.0188 & 0.0171 & 0.0246 & \textbf{0.0352} & $3.5e^{-9}$\\ \hline \end{tabular} \label{tab:overall_performance} \end{table*} In this section, we validate the effectiveness of our SHT\ framework by conducting the overall performance evaluation on the three datasets and comparing SHT\ with various baselines. We also re-train SHT\ and the best-performing baseline (\textit{i}.\textit{e}.~SGL) 10 times to compute p-values. The results are presented in Table~\ref{tab:overall_performance}. \begin{itemize}[leftmargin=*] \item \textbf{Performance Superiority of SHT}. As shown in the results, SHT\ achieves the best performance compared to the baselines under both top-\textit{20} and top-\textit{40} settings. The t-tests also validate the significance of the performance improvements.
We attribute the superiority to: i) Based on the hypergraph transformer, SHT\ not only realizes global message passing among semantically-relevant users/items, but also refines the hypergraph structure using the multi-head attention. ii) The global-to-local self-augmented learning distills knowledge from the high-level hypergraph transformer to regularize the topology-aware embedding learning, and thus alleviates the data noise issue. \item \textbf{Effectiveness of Hypergraph Architecture}. Among the state-of-the-art baselines, the approaches based on hypergraph neural networks (HGNN), \textit{i}.\textit{e}., HyRec and DHCF, outperform most of the GNN-based baselines (\eg, GCMC, PinSage, NGCF, STGCN). This sheds light on the insufficiency of conventional GNNs in capturing high-order and global graph connectivity. Meanwhile, our SHT\ is configured with transformer-like hypergraph structure learning, which further excavates the potential of HGNNs in global relation learning. In addition, most existing hypergraph-based models utilize user or item nodes as hyperedges, while our SHT\ adopts latent hyperedges, which not only enable automatic graph dependency modeling but also avoid pre-calculating the large-scale high-order relation matrix. \item \textbf{Effectiveness of Self-Augmented Learning}. From the evaluation results, we can observe that self-supervised learning obviously improves existing CF frameworks (\eg, MHCN, SLRec, SGL). The improvements can be attributed to incorporating the augmented learning task, which provides beneficial regularization of the parameter learning based on the input data itself. Specifically, MHCN regularizes the node embeddings according to the read-out global information of the holistic graph. This approach may be too strict for large graphs containing many local sub-graphs with their own characteristics. Meanwhile, SLRec and SGL adopt stochastic data augmentation to construct multiple data views, and conduct contrastive learning to capture the invariant features from the corrupted views. In comparison to the above methods, the self-augmentation in our SHT\ has two main merits: i) SHT\ adopts meta networks to generate global-structure-aware mapping functions for domain adaptation, which adaptively alleviates the gap between local and global feature spaces. ii) Our self-supervised approach does not depend on random masking, which may drop important information and hinder representation learning. Instead, SHT\ self-augments the model training by transferring knowledge from the high-level hypergraph embeddings to the low-level topology-aware embeddings. The superior performance of SHT\ compared to the baseline self-supervised approaches validates the effectiveness of our new self-supervised learning paradigm. \end{itemize} \vspace{-0.1in} \subsection{Model Ablation Test (RQ2)} To validate the effectiveness of the proposed modules, we individually remove the applied techniques in the three major parts of SHT\ (\textit{i}.\textit{e}., the local graph structure capturing, the global relation learning, and the local-global self-augmented learning). The variants are re-trained and tested on the three datasets. Both prominent components (\eg, the entire hypergraph transformer) and small modules (\eg, the deep hyperedge feature extraction) of SHT\ are ablated. The results can be seen in Table~\ref{tab:module_ablation}.
We have the following major conclusions: \begin{itemize}[leftmargin=*] \item Removing either the graph topology-aware embedding module or the hypergraph transformer (\textit{i}.\textit{e}., \textit{-Pos} and \textit{-Hyper}) severely damages the performance of SHT\ in all cases. This result suggests the necessity of local and global relation learning, and validates the effectiveness of our GCN-based topology-aware embedding and hypergraph transformer networks. \item The variant without self-augmented learning (\textit{i}.\textit{e}.~\textit{-SAL}) yields obvious performance degradation in all cases, which validates the positive effect of our augmented global-to-local knowledge transferring. The effect of our meta-network-based domain adaptation can also be observed in the variant \textit{-Meta}. \item We also ablate the components in our hypergraph neural network. Specifically, we substitute the hypergraph transformer with independent node-hypergraph matrices (\textit{-Trans}), and we remove the deep hyperedge feature extraction to keep only one layer of hyperedges (\textit{-DeepH}). Additionally, we remove the high-order hypergraph iterations (\textit{-HighH}). From the results we can conclude that: i) Though using far fewer parameters, the transformer-like hypergraph attention works much better than learning independent node-hypergraph dependency matrices. ii) The deep hyperedge layers indeed contribute to the global relation learning through non-linear feature transformation. iii) Though our hypergraph transformer could connect any users/items using learnable hyperedges, high-order iterations still improve the model performance through the iterative hypergraph propagation. \end{itemize} \begin{table}[t] \caption{Ablation study on key components of SHT.} \vspace{-0.15in} \centering \footnotesize \setlength{\tabcolsep}{1.2mm} \begin{tabular}{c|c|cc|cc|cc} \hline \multirow{2}{*}{Category} & Data & \multicolumn{2}{c|}{Yelp} & \multicolumn{2}{c|}{Gowalla} & \multicolumn{2}{c}{Tmall}\\ \cline{2-8} & Variants & Recall & NDCG & Recall & NDCG & Recall & NDCG\\ \hline \hline Local & -Pos & 0.0423 & 0.0352 & 0.0816 & 0.0487 & 0.0218 & 0.0247\\ \hline \multirow{4}{*}{Global} & -Trans & 0.0603 & 0.0504 & 0.0999 & 0.0608 & 0.0321 & 0.0206\\%No transformer &-DeepH & 0.0645 & 0.0540 & 0.1089 & 0.0634 & 0.0347 & 0.0234\\%No deep hyper layers &-HighH & 0.0598 & 0.0497 & 0.1091 & 0.0646 & 0.0336 & 0.0227\\%No high-order hyper &-Hyper & 0.0401 & 0.0346 & 0.0879 & 0.0531 & 0.0209 & 0.0144\\%No hypergraph \hline \multirow{2}{*}{SAL} & -Meta & 0.0615 & 0.0526 & 0.1108 & 0.0717 & 0.0375 & 0.0255\\%No meta network &-SAL & 0.0602 & 0.0519 & 0.1099 & 0.0699 & 0.0363 & 0.0251\\%No SSL regularization \hline \multicolumn{2}{c|}{\emph{SHT}} & 0.0651 & 0.0546 & 0.1232 & 0.0731 & 0.0387 & 0.0262\\ \hline \end{tabular} \vspace{-0.15in} \label{tab:module_ablation} \end{table} \subsection{Model Robustness Test (RQ3)} \subsubsection{\bf Performance \textit{w.r.t.} Data Noise Degree} In this section, we first investigate the robustness of SHT\ against data noise. To evaluate the influence of noise degrees on model performance, we randomly substitute different percentages of real edges with randomly-generated fake edges, and re-train the model using the corrupted graphs as input. Concretely, 5\%, 10\%, 15\%, 20\%, 25\% of the edges are replaced with noisy signals in our experiments. We compare SHT\ with MHCN and LightGCN, which are recent recommenders based on HGNN and GNN, respectively.
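The edge-corruption protocol above can be sketched as follows; this is a minimal version assuming an edge-list representation of the interaction graph, and the names are ours. \begin{verbatim}
import numpy as np

def corrupt_graph(edges, num_users, num_items, noise_ratio, seed=0):
    """Replace a fraction of observed (user, item) edges
    with randomly-generated fake edges."""
    rng = np.random.default_rng(seed)
    edges = np.asarray(edges)
    n_noise = int(noise_ratio * len(edges))
    victims = rng.choice(len(edges), size=n_noise, replace=False)
    fake_users = rng.integers(0, num_users, size=n_noise)
    fake_items = rng.integers(0, num_items, size=n_noise)
    corrupted = edges.copy()
    corrupted[victims] = np.stack([fake_users, fake_items], axis=1)
    return corrupted

# e.g. replace 10% of the training edges before re-training:
# train_edges = corrupt_graph(train_edges, I, J, noise_ratio=0.10)
\end{verbatim}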
To better study the effect of noise on performance degradation, we evaluate the relative performance compared to the performance on the original data. The results are shown in Fig~\ref{fig:noise}. We can observe that our method presents smaller performance degradation in most cases compared to the baselines. We ascribe this observation to two reasons: i) The global relation learning and information propagation by our hypergraph transformer alleviate the noise effect caused by the raw observed user-item interactions. ii) The self-augmented learning task distills knowledge from the refined hypergraph embeddings to recalibrate the graph-based embeddings. In addition, we can observe that the relative performance degradation on the Gowalla data is more obvious than on the other two datasets. This is because the noisy data has a larger influence on performance for the sparsest dataset, Gowalla. \begin{figure}[t] \centering \subfigure[Yelp data]{ \includegraphics[width=0.47\columnwidth]{fig/noise_yelp_Recall.pdf}\ \includegraphics[width=0.47\columnwidth]{fig/noise_yelp_NDCG.pdf} \vspace{-0.15in} } \subfigure[Gowalla data]{ \includegraphics[width=0.47\columnwidth]{fig/noise_gowalla_Recall.pdf}\ \includegraphics[width=0.47\columnwidth]{fig/noise_gowalla_NDCG.pdf} } \subfigure[Tmall data]{ \includegraphics[width=0.47\columnwidth]{fig/noise_tmall_Recall.pdf}\ \includegraphics[width=0.47\columnwidth]{fig/noise_tmall_NDCG.pdf} } \vspace{-0.15in} \caption{Relative performance degradation \wrt\ noise ratio.} \vspace{-0.2in} \label{fig:noise} \end{figure} \subsubsection{\bf Performance \textit{w.r.t.} Data Sparsity} We further study the influence of data sparsity on model performance from both the user and item sides. We compare our SHT\ with two representative baselines, LightGCN and SGL. Multiple user and item groups are constructed in terms of their number of interactions in the training set. For example, the first group in the user-side experiments contains users interacting with 15-20 items, and the first group in the item-side experiments contains items interacting with 0-8 users. In Fig~\ref{fig:sparse}, we present both the recommendation accuracy and the performance difference between our SHT\ and the compared methods. From the results, we have the following observations: i) The superior performance of SHT\ is consistent on datasets with different sparsity degrees, which validates the robustness of SHT\ in handling sparse data for both users and items. ii) The sparsity of item interaction vectors has an obviously larger influence on model performance for all the methods. This indicates that the collaborative patterns of items are more difficult to model than those of users, such that more neighbors usually result in better representations. iii) In the item-side experiments, the performance gap on the middle sub-datasets is larger compared to the gap on the densest sub-dataset. This suggests the better anti-sparsity capability of SHT\ in effectively transferring knowledge between dense and sparse samples with our proposed hypergraph transformer.
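The user/item grouping above can be reproduced with a simple degree-bucketing step, e.g.\ as sketched below; the bucket boundaries are illustrative, not the exact splits used in our experiments. \begin{verbatim}
from collections import Counter, defaultdict

def group_items_by_degree(train_edges, boundaries=(8, 16, 32, 64)):
    """Bucket items by their number of interactions in the training set."""
    degree = Counter(item for _, item in train_edges)
    groups = defaultdict(list)
    for item, deg in degree.items():
        bucket = sum(deg > b for b in boundaries)  # index of the sparsity bucket
        groups[bucket].append(item)
    return groups  # evaluate Recall/NDCG separately on each bucket
\end{verbatim}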
\begin{figure}[t] \centering \subfigure[Performance \textit{w.r.t.} item interaction numbers]{ \includegraphics[width=0.47\columnwidth]{fig/sparsity_gowalla_recall.pdf}\ \includegraphics[width=0.47\columnwidth]{fig/sparsity_gowalla_ndcg.pdf} \vspace{-0.15in} } \subfigure[Performance \textit{w.r.t.} user interaction numbers]{ \includegraphics[width=0.47\columnwidth]{fig/sparsity_gowalla_recall_user.pdf}\ \includegraphics[width=0.47\columnwidth]{fig/sparsity_gowalla_ndcg_user.pdf} } \vspace{-0.15in} \caption{Performance \textit{w.r.t.} different data sparsity degrees on the Gowalla data. Lines present Recall@40 and NDCG@40 values, and bars show the performance differences between the baselines and our SHT\ in corresponding colors.} \vspace{-0.15in} \label{fig:sparse} \end{figure} \subsection{Case Study (RQ4)} In this section, we analyze concrete data instances to investigate the effect of our hypergraph transformer with self-augmentation from two aspects: i) Is the hypergraph-based dependency modeling in SHT\ capable of learning useful node-wise relations, especially implicit relations unknown to the training process? ii) Is the self-augmented learning with meta networks in SHT\ able to differentiate noisy edges in the training data? To this end, we select three users with a fair number of interactions from the Tmall dataset. The interacted items are visualized as colored circles representing their trained embeddings (refer to the supplementary material for details about the visualization algorithm). The results are shown in Fig~\ref{fig:case_study}. For the above questions, we have the following observations: \begin{itemize}[leftmargin=*] \item \textbf{Implicit relation learning}. Even if the items are interacted with by the same user, their learned embeddings are usually divided into multiple groups with different colors. This may relate to users' multiple interests. To study the differences between the item groups, we present additional item-wise relations that are not utilized in the training process. Specifically, we connect items belonging to the same categories, and items co-interacted with by the same users. Note that only view data is used in model training, so interactions in other behaviors are unknown to the trained model. It is clear that there exist dense implicit correlations among same-colored items (\eg, the green items of user (a), the purple items of user (b), and the orange items of user (c)). Meanwhile, there are much fewer implicit relations between items of different colors. This result shows the capability of SHT\ in identifying useful implicit relations, which we ascribe to the global structure learning of our hypergraph transformer. \item \textbf{Noise discrimination}. Furthermore, we show the solidity scores $s$ estimated by our self-augmented learning for the user-item relations in Fig~\ref{fig:case_study}. We also show the normalized values of some notable edges in the corresponding circles (\eg, the edges of items 10202 and 6508 are labeled with 2.3 and 1.9). The red values are anomalously low, which may indicate noise. The black values are the lowest and highest solidity scores for the edges except the anomalous ones. By analyzing user (a), we can regard the yellow and green items as two interests of user (a), as they are correlated in terms of their learned embeddings. In contrast, items 6508 and 10202 have few relations to the other interacted items of user (a), which may not reflect the real interactive patterns of this user. Thus, the model may consider these two edges as noisy interactions.
Similar cases can be found for user (b), where item 2042 has few connections to the other items and differs in embedding color. It is labeled with a low $s$ score and considered as noise by SHT. The results show the effective noise discrimination ability of the self-augmented learning in SHT, which recalibrates the topology-aware embeddings using the global information encoded by the hypergraph transformer. \end{itemize} \begin{figure} \centering \includegraphics[width=0.95\columnwidth]{fig/case_study.pdf} \caption{Case study on inferring implicit item-wise relations and discriminating potential noise edges. Circles denote items interacted with by the central users, and their learned embeddings are visualized with colors. Implicit item-wise relations not utilized during model training are presented by green and blue lines. The types of co-interactions are also labeled (\eg, \textit{view-cart} denotes viewed and added-to-cart by the same users). Also, the inferred solidity scores $s$ are shown on the circles, where red values are anomalously low scores indicating noisy edges.} \label{fig:case_study} \vspace{-0.1in} \end{figure} \section{Introduction} \label{sec:intro} Recommender systems have become increasingly important for alleviating information overload for users in a variety of web applications, such as e-commerce systems~\cite{guo2020debiasing}, streaming video sites~\cite{liu2021concept} and location-based lifestyle apps~\cite{chen2021curriculum}. To accurately infer user preferences, encoding informative user and item representations based on the observed user-item interactions is the core of effective collaborative filtering (CF) paradigms~\cite{he2017neural,rendle2020neural,huang2021recent}. Earlier CF models project interaction data into latent user and item embeddings using matrix factorization (MF)~\cite{koren2009matrix}. Due to the strong representation ability of deep learning, various neural network CF models have been developed to project users and items into latent low-dimensional representations, based on, for example, autoencoders~\cite{liang2018variational} and attention mechanisms~\cite{chen2017attentive}. Recent years have witnessed the development of graph neural networks (GNNs) for modeling graph-structural data~\cite{wang2019heterogeneous,wu2019simplifying}. One promising direction is to perform information propagation along the user-item interactions to refine user embeddings based on a recursive aggregation schema. For example, upon the graph convolutional network, PinSage~\cite{ying2018graph} and NGCF~\cite{wang2019neural} attempt to aggregate neighboring information by capturing the graph-based CF signals for recommendation. To simplify the graph-based message passing, LightGCN~\cite{he2020lightgcn} omits the burdensome non-linear transformation during the embedding propagation and improves the recommendation performance. To further enhance the graph-based user-item interaction modeling, some follow-up studies propose to learn intent-aware representations with disentangled graph neural frameworks (\eg, DisenHAN~\cite{wang2020disenhan}), and to differentiate behavior-aware embeddings of users with multi-relational graph neural models (\eg, MB-GMN~\cite{xia2021graph}). Despite the effectiveness of the above graph-based CF models in providing state-of-the-art recommendation performance, several key challenges have not been well addressed in existing methods. \emph{First}, data noise is ubiquitous in many recommendation scenarios due to a variety of factors.
For example, users may click products they are not interested in due to the over-recommendation of popular items~\cite{zhang2021causal}. In such cases, the user-item interaction graph may contain ``interest-irrelevant'' connections. Directly aggregating information from all interaction edges will impair the accuracy of user representations. Worse still, the embedding propagation among multi-hop adjacent vertices (users or items) will amplify the noise effects, which misleads the encoding of the underlying user interests in GNN-based recommender systems. \emph{Second}, the data sparsity and skewed distribution issues still stand in the way of effective user-item interaction modeling, leading most existing graph-based CF models to be biased towards popular items~\cite{zhang2021model,krishnan2018adversarial}. Hence, the recommendation performance of current approaches severely drops under the user data scarcity problem, as high-quality training signals can be scarce. While there exist a handful of recently developed recommendation methods (SGL~\cite{wu2021self} and SLRec~\cite{yao2021self}) leveraging self-supervised learning to improve user representations, these methods mainly generate the additional supervision information with probability-based random masking operations, which might keep some noisy interactions and drop some important training signals during the data augmentation process. \\\vspace{-0.12in} \noindent \textbf{Contribution}. In light of the aforementioned challenges, this work proposes a \underline{S}elf-Supervised \underline{H}ypergraph \underline{T}ransformer (SHT) to enhance the robustness and generalization performance of graph-based CF paradigms for recommendation. Specifically, we integrate the hypergraph neural network with the topology-aware Transformer, to empower our SHT\ to maintain global cross-user collaborative relations. Upon the local graph convolutional network, we first encode the topology-aware user embeddings and inject them into the Transformer architecture for hypergraph-guided message passing within the entire user/item representation space. \\\vspace{-0.12in} In addition, we unify the modeling of the local collaborative relation encoder with the global hypergraph dependency learning under a generative self-supervised learning framework. Our proposed self-supervised recommender system distills auxiliary supervision signals for data augmentation through a graph topological denoising scheme. A graph-based meta transformation layer is introduced to project hypergraph-based global-level representations into the graph-based local-level interaction modeling for the user and item dimensions. Our newly proposed SHT\ is a model-agnostic method and serves as a plug-in learning component in existing graph-based recommender systems. Specifically, SHT\ enables the cooperation of local-level and global-level collaborative relations, to facilitate graph-based CF models in learning high-quality user embeddings from noisy and sparse user interaction data.
The key contributions of this work are summarized as follows: \begin{itemize}[leftmargin=*] \item In this work, we propose a new self-supervised recommendation model--SHT\ to enhance the robustness of graph collaborative filtering paradigms, by integrating the hypergraph neural network with the topology-aware Transformer.\\\vspace{-0.1in} \item In the proposed SHT\ method, the designed hypergraph learning component encodes the global collaborative effects within the entire user representation space, via a learnable multi-channel hyperedge-guided message passing schema. Furthermore, the local and global learning views for collaborative relations are integrated with cooperative supervision for interaction graph topological denoising and auxiliary knowledge distillation. \\\vspace{-0.1in} \item Extensive experiments demonstrate that our proposed SHT\ framework achieves significant performance improvements over 15 different types of recommendation baselines. Additionally, we conduct empirical analysis to show the rationality of our model design with ablation studies. \end{itemize} \section*{Acknowledgments} This research work is supported by the research grants from the Department of Computer Science \& Musketeers Foundation Institute of Data Science at the University of Hong Kong. \bibliographystyle{ACM-Reference-Format} \balance \section{Preliminaries and Related Work} \label{sec:relate} \noindent \textbf{Recap of the Graph Collaborative Filtering Paradigm}. To enhance collaborative filtering with multi-order connectivity information, one prominent line of recommender systems generates graph structures for user-item interactions. Suppose our recommendation scenario involves $I$ users and $J$ items, with the user set $\mathcal{U}=\{u_1,...u_I\}$ and item set $\mathcal{V}=\{v_1,...v_J\}$. Edges in the user-item interaction graph $\mathcal{G}$ are constructed if user $u_i$ has adopted item $v_j$. Upon the constructed interaction graph structures, the core component of the graph-based CF paradigm lies in the information aggregation function--gathering the feature embeddings of neighboring users/items via different aggregators, \eg, mean or sum.\\\vspace{-0.12in} \noindent \textbf{Recommendation with Graph Neural Networks}. Recent studies have attempted to design various graph neural architectures to model the user-item interaction graphs through embedding propagation. For example, PinSage~\cite{ying2018graph} and NGCF~\cite{wang2019neural} are built upon the graph convolutional network over the spectral domain. Later on, LightGCN~\cite{he2020lightgcn} proposes to simplify the heavy non-linear transformation and utilizes sum-based pooling over neighboring representations. Upon the GCN-based message passing schema, each user and item is encoded into transformed embeddings preserving multi-hop connections. To further improve the user representation, some recent studies attempt to design disentangled graph neural architectures for user-item interaction modeling, such as DGCF~\cite{wang2020disentangled} and DisenHAN~\cite{wang2020disenhan}. Several multi-relational GNNs are proposed to enhance recommender systems with multi-behavior modeling, including KHGT~\cite{xia2021knowledge} and HMG-CR~\cite{yang2021hyper}.
However, most existing graph neural CF models are intrinsically designed to rely merely on the observed interaction labels for model training, which makes them incapable of effectively modeling interaction graphs with sparse and noisy supervision signals. To overcome these challenges, this work proposes a self-supervised hypergraph transformer architecture to generate informative knowledge through the effective interaction between local and global collaborative views. \\\vspace{-0.12in} \noindent \textbf{Hypergraph-based Recommender Systems}. There exist some recently developed models constructing hypergraph connections to improve the relation learning for recommendation~\cite{wang2020next,ji2020dual,yu2021self}. For example, HyRec~\cite{wang2020next} regards users as hyperedges to aggregate information from the interacted items. MHCN~\cite{yu2021self} constructs multi-channel hypergraphs to model high-order relationships among users. Furthermore, DHCF~\cite{ji2020dual} is a hypergraph collaborative filtering model that learns hybrid high-order correlations. Different from these works, which generate hypergraph structures with manual designs, this work automates the hypergraph structure learning process by modeling global collaborative relations. \\\vspace{-0.12in} \noindent \textbf{Self-Supervised Graph Learning}. To improve the embedding quality of supervised learning, self-supervised learning (SSL) has become a promising solution with auxiliary training signals~\cite{liu2021self}, such as augmented image data~\cite{kang2020contragan}, pretext sequence tasks for language data~\cite{vulic2021lexfit}, and knowledge graph augmentation~\cite{yang2022knowledge}. Recently, self-supervised learning has also attracted much attention in graph representation learning~\cite{hwang2020self}. For example, DGI~\cite{velickovic2019deep} and GMI~\cite{peng2020graph} perform generative self-supervised learning over the GNN framework with auxiliary tasks. Inspired by graph self-supervised learning, SGL~\cite{wu2021self} produces state-of-the-art performance by generating contrastive views with random node and edge dropout operations. Following this research line, HCCF~\cite{xia2022hypergraph} leverages the hypergraph to generate contrastive signals to improve graph-based recommender systems. Different from them, this work enhances the graph-based collaborative filtering paradigm with a generative self-supervised learning framework. \section{Methodology} \label{sec:solution} \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{fig/framework.pdf} \vspace{-0.2in} \caption{Overall framework of the proposed SHT\ model.} \vspace{-0.1in} \label{fig:framework} \end{figure*} In this section, we present the proposed SHT\ framework and show the overall model architecture in Figure~\ref{fig:framework}. SHT\ embeds local structure information into latent node representations, and conducts global relation learning with the local-aware hypergraph transformer. To train the proposed model, we augment the regular parameter learning with the local-global cross-view self-augmentation. \subsection{Local Graph Structure Learning} To begin with, we embed users and items into a $d$-dimensional latent space to encode their interaction patterns. For user $u_i$ and item $v_j$, embedding vectors $\textbf{e}_i, \textbf{e}_j \in \mathbb{R}^d$ are generated, respectively.
Also, we aggregate all the user and item embeddings to compose the embedding matrices $\textbf{E}^{(u)} \in \mathbb{R}^{I\times d}, \textbf{E}^{(v)} \in \mathbb{R}^{J\times d}$, respectively. We may omit the superscripts $(u)$ and $(v)$ for notational simplification when it is not important to differentiate the user and item indices. Inspired by the recent success of graph convolutional networks~\cite{wu2019simplifying, he2020lightgcn} in capturing local graph structures, we propose to encode the neighboring sub-graph structure of each node into a graph topology-aware embedding, to inject the topological positional information into our graph transformer. Specifically, SHT\ employs a two-layer light-weight graph convolutional network as follows: \begin{align} \bar{\textbf{E}}^{(u)} = \text{GCN}^2(\textbf{E}^{(u)}, \textbf{E}^{(v)}, \mathcal{G})= \bar{\mathcal{A}} \cdot \bar{\mathcal{A}}^\top \textbf{E}^{(u)} + \bar{\mathcal{A}} \cdot \textbf{E}^{(v)} \end{align} where $\bar{\textbf{E}}^{(u)}\in\mathbb{R}^{I\times d}$ denotes the topology-aware embeddings for users. $\text{GCN}^2(\cdot)$ denotes two layers of message passing. $\bar{\mathcal{A}}\in\mathbb{R}^{I\times J}$ refers to the normalized adjacency matrix of graph $\mathcal{G}$, which is calculated by $\bar{\mathcal{A}}_{i,j}=\mathcal{A}_{i,j} / (\textbf{D}_i^{(u)1/2} \textbf{D}_j^{(v)1/2})$, where $\mathcal{A}$ is the original binary adjacency matrix and $\textbf{D}_i^{(u)}, \textbf{D}_j^{(v)}$ refer to the degrees of $u_i$ and $v_j$ in graph $\mathcal{G}$, respectively. Note that SHT\ considers neighboring nodes at different distances through residual connections. The topology-aware embeddings for items can be calculated analogously. \subsection{Hypergraph Transformer for Global Relation Learning} Though existing graph-based neural networks have shown their strength in learning interaction data~\cite{wang2019neural,he2020lightgcn,chen2020revisiting}, the inherent noise and skewed data distribution in recommendation scenarios limit the performance of graph representations for user embeddings. To address this limitation, SHT\ adopts a hypergraph transformer framework, which i) alleviates the noise issue by enhancing the user collaborative relation modeling with adaptive hypergraph relation learning; and ii) transfers knowledge from dense user/item nodes to sparse ones. Concretely, SHT\ is configured with a Transformer-like attention mechanism for structure learning. The encoded graph topology-aware embeddings are injected into the node representations to preserve the graph locality and topological positions. Meanwhile, the multi-channel attention~\cite{sun2019bert4rec} further benefits the structure learning in SHT. In particular, SHT\ generates the input embedding vectors for $u_i$ and $v_j$ by combining the id-corresponding embeddings ($\textbf{e}_i, \textbf{e}_j$) with the topology-aware embeddings (vectors $\bar{\textbf{e}}_i, \bar{\textbf{e}}_j$ from the embedding tables $\bar{\textbf{E}}^{(u)}$ and $\bar{\textbf{E}}^{(v)}$) as follows: \begin{align} \tilde{\textbf{e}}_i = \textbf{e}_i + \bar{\textbf{e}}_i;~~~~~~ \tilde{\textbf{e}}_j = \textbf{e}_j + \bar{\textbf{e}}_j \end{align} Then, SHT\ conducts hypergraph-based information propagation as well as hypergraph structure learning using $\tilde{\textbf{e}}_i, \tilde{\textbf{e}}_j$ as input. We utilize $K$ hyperedges to distill the collaborative relations from the global perspective.
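As a concrete illustration of the topology-aware embedding and input fusion above, consider the following minimal numpy sketch; dense matrices are used for readability, and all names are ours rather than from the released implementation. \begin{verbatim}
import numpy as np

def topology_aware_inputs(A, E_u, E_v):
    """Two-layer light-weight GCN with residual terms (equation above),
    followed by the fusion of id and topology-aware embeddings."""
    d_u = A.sum(axis=1, keepdims=True)   # user degrees, shape (I, 1)
    d_v = A.sum(axis=0, keepdims=True)   # item degrees, shape (1, J)
    A_norm = A / (np.sqrt(d_u) * np.sqrt(d_v) + 1e-8)
    E_bar_u = A_norm @ A_norm.T @ E_u + A_norm @ E_v     # user side
    E_bar_v = A_norm.T @ A_norm @ E_v + A_norm.T @ E_u   # item side, analogous
    return E_u + E_bar_u, E_v + E_bar_v  # tilde-e = e + bar-e

# A: I x J binary interaction matrix; E_u: I x d; E_v: J x d
\end{verbatim}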
Node embeddings are propagated to each other using hyperedges as intermediate hubs, where the connections between nodes and hyperedges are optimized to reflect the implicit dependencies among nodes. \subsubsection{\bf Node-to-Hyperedge Propagation} Without loss of generality, we mainly discuss the information propagation between user nodes and user-side hyperedges for simplicity. The same process is applied to item nodes analogously. The propagation from user nodes to user-side hyperedges can be formally presented as follows: \begin{align} \tilde{\textbf{z}}_k = \mathop{\Bigm|\Bigm|}\limits_{h=1}^H \bar{\textbf{z}}_{k,h};~~~~ \bar{\textbf{z}}_{k,h} = \sum_{i=1}^I \textbf{v}_{i,h} \textbf{k}_{i,h}^\top \textbf{q}_{k,h} \end{align} where $\tilde{\textbf{z}}_k\in\mathbb{R}^d$ denotes the embedding for the $k$-th hyperedge. It is calculated by concatenating the $H$ head-specific hyperedge embeddings $\bar{\textbf{z}}_{k,h} \in \mathbb{R}^{d/H}$. $\textbf{q}_{k,h}, \textbf{k}_{i,h}, \textbf{v}_{i,h}\in\mathbb{R}^{d/H}$ are the query, key and value vectors in the attention mechanism, which will be elaborated later. Here, we calculate the edge weight between hyperedge $k$ and user $u_i$ through a linear dot-product $\textbf{k}_{i,h}^\top \textbf{q}_{k,h}$. This reduces the complexity from $O(K\times I\times d / H)$ to $O((I+K)\times d^2/ {H^2})$ by computing the key-value dot-product first (\textit{i}.\textit{e}.~$\sum_{i=1}^I \textbf{v}_{i,h} \textbf{k}_{i,h}^\top$) instead of directly calculating the node-hyperedge connections (\textit{i}.\textit{e}.~$\textbf{k}_{i,h}^\top \textbf{q}_{k,h}$). In detail, the multi-head query, key and value vectors are calculated through linear transformations and slicing. The $h$-head-specific embeddings are calculated by: \begin{align} \label{eq:qkv} \textbf{q}_{k,h}=\textbf{Z}_{k, p_{h-1}: p_h};~~~ \textbf{k}_{i,h}=\textbf{K}_{p_{h-1}: p_h, :} \tilde{\textbf{e}}_i;~~~ \textbf{v}_{i,h}=\textbf{V}_{p_{h-1}: p_h, :} \tilde{\textbf{e}}_i \end{align} where $\textbf{q}_{k,h}\in\mathbb{R}^{d/H}$ denotes the $h$-head-specific query embedding for the $k$-th hyperedge, and $\textbf{k}_{i,h}, \textbf{v}_{i,h} \in\mathbb{R}^{d/H}$ denote the $h$-head-specific key and value embeddings for user $u_i$. $\textbf{Z}\in\mathbb{R}^{K\times d}$ represents the embedding matrix for the $K$ hyperedges. $\textbf{K}, \textbf{V} \in\mathbb{R}^{d\times d}$ represent the key and value transformations for all the $H$ heads, respectively. $p_{h-1} = \frac{(h-1)d}{H}$ and $p_h = \frac{hd}{H}$ denote the start and end indices of the $h$-th slice. To further excavate the complex non-linear feature interactions among the hyperedges, SHT\ is augmented with two-layer hierarchical hypergraph neural networks for both the user side and the item side. Specifically, the final hyperedge embeddings are calculated by: \begin{align} \hat{\textbf{Z}} = \text{HHGN}^2(\tilde{\textbf{Z}});~~~~ \text{HHGN}(\textbf{X}) = \sigma(\mathcal{H}\cdot \textbf{X} + \textbf{X}) \end{align} where $\hat{\textbf{Z}}, \tilde{\textbf{Z}} \in\mathbb{R}^{K\times d}$ represent the embedding tables for the final and the original hyperedge embeddings, consisting of hyperedge-specific embedding vectors $\hat{\textbf{z}}, \tilde{\textbf{z}}\in\mathbb{R}^d$, respectively. $\text{HHGN}^2(\cdot)$ denotes applying the hierarchical hypergraph network (HHGN) twice. HHGN is configured with a learnable parametric matrix $\mathcal{H}\in\mathbb{R}^{K \times K}$, which characterizes the hyperedge-wise relations.
An activation function $\sigma(\cdot)$ is introduced for non-linear relation modeling. Additionally, we utilize a residual connection to facilitate gradient propagation in our hypergraph neural structures. \subsubsection{\bf Hyperedge-to-Node Propagation} With the final hyperedge embeddings $\hat{\textbf{Z}}$, we propagate the information from hyperedges to user/item nodes through a similar but reverse process: \begin{align} \tilde{\textbf{e}}_i' = \mathop{\Bigm|\Bigm|}\limits_{h=1}^H \bar{\textbf{e}}_{i,h}';~~~~ \bar{\textbf{e}}_{i,h}' = \sum_{k=1}^K \textbf{v}_{k,h}' {\textbf{k}'}_{k,h}^{\top} \textbf{q}_{i,h}' \end{align} where $\tilde{\textbf{e}}_i'\in\mathbb{R}^d$ denotes the new embedding for user $u_i$ refined by the hypergraph neural network. $\bar{\textbf{e}}_{i,h}'\in\mathbb{R}^{d/H}$ denotes the node embedding calculated by the $h$-th attention head for $u_i$. $\textbf{q}_{i,h}', \textbf{k}_{k,h}', \textbf{v}_{k,h}' \in\mathbb{R}^{d/H}$ represent the query, key and value vectors for user $u_i$ and hyperedge $k$. The attention calculation in this hyperedge-to-node propagation process shares most parameters with the aforementioned node-to-hyperedge propagation: the former query serves as the key, and the former key serves as the query here. The value calculation applies the same value transformation to the hyperedge embeddings. The calculation process can be formally stated as: \begin{align} \textbf{q}_{i,h}' = \textbf{k}_{i,h};~~~~ \textbf{k}_{k,h}' = \textbf{q}_{k,h};~~~~ \textbf{v}_{k,h}' = \textbf{V}_{p_{h-1}:p_h,:} \hat{\textbf{z}}_k \end{align} \subsubsection{\bf Iterative Hypergraph Propagation} Based on the prominent node-wise relations captured by the learned hypergraph structures, we propose to further propagate the encoded global collaborative relations by stacking multiple hypergraph transformer layers. In this way, the long-range user/item dependencies can be characterized by our SHT\ framework through the iterative hypergraph propagation. Formally, taking the embedding table $\tilde{\textbf{E}}_{l-1}$ of the $(l-1)$-th iteration as input, SHT\ recursively applies the hypergraph encoding (denoted by $\text{HyperTrans}(\cdot)$) and obtains the final node embeddings $\hat{\textbf{E}}\in\mathbb{R}^{I\times d}$ or $\mathbb{R}^{J\times d}$ as follows: \begin{align} \tilde{\textbf{E}}_l = \text{HyperTrans}(\tilde{\textbf{E}}_{l-1});~~~~ \hat{\textbf{E}} = \sum_{l=1}^L \tilde{\textbf{E}}_l \end{align} \noindent where the layer-specific embeddings are combined through element-wise summation. The iterative hypergraph propagation is identical for the user nodes and item nodes. Finally, SHT\ makes predictions through the dot product $p_{i,j} = \hat{\textbf{e}}_i^{(u)\top}\hat{\textbf{e}}_j^{(v)}$, where $p_{i,j}$ is the forecasting score denoting the probability of $u_i$ interacting with $v_j$. \subsection{Local-Global Self-Augmented Learning} The foregoing hypergraph transformer addresses the data sparsity problem through adaptive hypergraph message passing. However, the graph topology-aware embedding for local collaborative relation modeling may still be affected by the interaction data noise. To tackle this challenge, we propose to enhance the model training with self-augmented learning between the local topology-aware embedding and the global hypergraph learning. To be specific, the topology-aware embedding for local information extraction is augmented with an additional task to differentiate the solidity of sampled edges in the observed user-item interaction graph.
Here, solidity refers to the probability of an edge not being noisy, and its label in the augmented task is calculated based on the learned hypergraph dependencies and representations. In this way, SHT\ transfers knowledge from the high-level and denoised features in the hypergraph transformer to the low-level and noisy topology-aware embeddings, which is expected to recalibrate the local graph structure and improve the model robustness. The workflow of our self-augmented module is illustrated in Fig~\ref{fig:framework_sal}. \subsubsection{\bf Solidity Labeling with Meta Networks} In our SHT\ model, the learned hypergraph dependency representations can serve as useful knowledge to denoise the observed user-item interactions by associating each edge with a learned solidity score. Specifically, we reuse the key embeddings $\textbf{k}_{i,h}, \textbf{k}_{j,h}$ in Eq~\ref{eq:qkv} to represent user $u_i$ and item $v_j$ when estimating the solidity score for the edge $(u_i, v_j)$. This is because the key vectors are generated for relation modeling and can be considered a helpful information source for interaction solidity estimation. Furthermore, we propose to also take the hyperedge embeddings $\textbf{Z}\in\mathbb{R}^{K\times d}$ in Eq~\ref{eq:qkv} into consideration, to introduce global characteristics into the solidity labeling. Concretely, we first concatenate the multi-head key vectors and apply a simple perceptron to eliminate the gap between user/item-hyperedge relation learning and user-item relation learning. Formally, the updated user/item embeddings are calculated by: \begin{align} \mathbf{\Gamma}_i=\phi^{(u)}\left(\mathop{\Bigm|\Bigm|}\limits_{h=1}^H \textbf{k}_{i,h}\right); ~~~~~~ \mathbf{\Gamma}_j=\phi^{(v)}\left(\mathop{\Bigm|\Bigm|}\limits_{h=1}^H \textbf{k}_{j,h}\right) \end{align} \noindent where $\phi^{(u)}(\cdot), \phi^{(v)}(\cdot)$ are the user- and item-specific perceptrons for feature vector transformation, respectively. This projection is conducted with a meta network, using the user-side and the item-side hyperedge embeddings as input individually: \begin{align} \phi(\textbf{x}; \textbf{Z}) = \sigma(\textbf{W} \textbf{x} + \textbf{b});~~ \textbf{W} = \textbf{V}_1 \bar{\textbf{z}} + \textbf{W}_0;~~ \textbf{b} = \textbf{V}_2 \bar{\textbf{z}} + \textbf{b}_0 \end{align} \noindent where $\textbf{x}\in\mathbb{R}^d$ denotes the input user/item key embedding (\eg~the concatenated key vectors behind $\mathbf{\Gamma}_i, \mathbf{\Gamma}_j$). Whether $\phi(\cdot)$ is user-specific or item-specific depends on whether $\textbf{Z}$ is the user-side or the item-side hyperedge embedding table. $\textbf{W}\in\mathbb{R}^{d\times d}$ and $\textbf{b}\in\mathbb{R}^d$ are the parameters generated by the meta network according to the input $\textbf{Z}$. In this way, the parameters are generated based on the learned hyperedge embeddings, which encode the global features of the user- or item-specific hypergraphs. $\bar{\textbf{z}}\in\mathbb{R}^{d}$ denotes the mean pooling of the hyperedge embeddings (\textit{i}.\textit{e}.~$\bar{\textbf{z}} = \sum_{k=1}^K \textbf{z}_k / K$). $\textbf{V}_1\in\mathbb{R}^{d\times d\times d}, \textbf{W}_0\in\mathbb{R}^{d\times d}, \textbf{V}_2\in\mathbb{R}^{d\times d}, \textbf{b}_0\in\mathbb{R}^{d}$ are the parameters of the meta network.
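A minimal sketch of this meta network is given below; shapes follow the text, while the contraction axis for $\textbf{V}_1 \bar{\textbf{z}}$ and the choice of ReLU as $\sigma(\cdot)$ are our assumptions, since neither is fixed by the equations above. \begin{verbatim}
import numpy as np

def meta_transform(x, Z, V1, W0, V2, b0):
    """Generate phi's parameters from hyperedge embeddings Z,
    then apply phi to the key embedding x."""
    z_bar = Z.mean(axis=0)                                # (d,) mean pooling over K
    # (d, d, d) contracted with (d,) -> (d, d); axis choice is assumed
    W = np.tensordot(V1, z_bar, axes=([2], [0])) + W0
    b = V2 @ z_bar + b0                                   # (d,)
    return np.maximum(W @ x + b, 0.0)                     # sigma assumed to be ReLU

# x: concatenated multi-head key vector of a user/item, shape (d,)
# Z: user- or item-side hyperedge embedding table, shape (K, d)
\end{verbatim}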
With the updated user/item embeddings $\mathbf{\Gamma}_i, \mathbf{\Gamma}_j$, SHT\ then calculates the solidity labels for edge $(u_i, v_j)$ through a two-layer neural network as follows: \begin{align} s_{i,j} = \text{sigm}(\textbf{d}^\top \cdot \sigma(\textbf{T} \cdot [\mathbf{\Gamma}_i; \mathbf{\Gamma}_j] + \mathbf{\Gamma}_i + \mathbf{\Gamma}_j + \textbf{c})) \end{align} \noindent where $s_{i,j}\in\mathbb{R}$ denotes the solidity score given by the hypergraph transformer. $\text{sigm}(\cdot)$ denotes the sigmoid function, which limits the value range of $s_{i,j}$. $\textbf{d}\in\mathbb{R}^d, \textbf{T}\in\mathbb{R}^{d\times 2d}, \textbf{c}\in\mathbb{R}^d$ are the parametric matrices or vectors. $[\cdot;\cdot]$ denotes vector concatenation. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{fig/framework_sal.pdf} \vspace{-0.1in} \caption{Workflow of the self-augmented learning.} \vspace{-0.25in} \label{fig:framework_sal} \end{figure} \subsubsection{\bf Pair-wise Solidity Ranking} To enhance the optimization of the topological embeddings, SHT\ employs an additional objective function to better estimate the edge solidity, using the above $s_{i,j}$ as training labels. In particular, $R$ pairs of edges $\{(e_{1, 1}, e_{1,2})$,...,$(e_{R,1}, e_{R,2})\}$ are sampled from the observed edges in $\mathcal{G}$, and SHT\ predicts their solidity using the topology-aware embeddings. These predictions are optimized with the loss below: \begin{align} \label{eq:sa_loss} \mathcal{L}_{sa} = &\sum_{r=1}^R \text{max}(0, 1 - (\hat{s}_{u_{r,1}, v_{r,1}} - \hat{s}_{u_{r,2}, v_{r,2}}) (s_{u_{r,1}, v_{r,1}} - s_{u_{r, 2}, v_{r, 2}}));\nonumber\\ &\hat{s}_{u_{r,1}, v_{r,1}} = \textbf{e}_{u_{{r,1}}}^\top \textbf{e}_{v_{{r, 1}}}; ~~~~~~ \hat{s}_{u_{r,2}, v_{r,2}} = \textbf{e}_{u_{{r,2}}}^\top \textbf{e}_{v_{{r, 2}}} \end{align} \noindent where $\mathcal{L}_{sa}$ denotes the loss function for our self-augmented learning. $\hat{s}_{u_{r,1}, v_{r,1}}, \hat{s}_{u_{r,2}, v_{r,2}}$ denote the solidity scores predicted by the topology-aware embedding, while ${s}_{u_{r,1}, v_{r,1}}, {s}_{u_{r,2}, v_{r,2}}$ denote the edge solidity labels given by the hypergraph transformer. Here $u_{r,1}$ and $v_{r,1}$ represent the user and the item node of edge $e_{r,1}$, respectively. In the above loss function, the label term $(s_{u_{r,1}, v_{r,1}} - s_{u_{r,2}, v_{r,2}})$ not only indicates the sign of the difference (\textit{i}.\textit{e}.~which of $e_{r,1}$ and $e_{r,2}$ is bigger), but also how big the difference is. Consequently, if the solidity labels for a pair of edges given by the hypergraph transformer are close to each other, the gradients on the predicted solidity scores given by the topology-aware embedding become smaller. In this way, SHT\ is self-augmented with an adaptive ranking task, to further refine the low-level topology-aware embeddings using the high-level embeddings encoded by the hypergraph transformer. \subsection{Model Learning} We train our SHT\ by optimizing the main task on implicit feedback together with the self-augmented ranking task. Specifically, $R'$ positive edges (observed in $\mathcal{G}$) and $R'$ negative edges (not observed in $\mathcal{G}$) are sampled as $\{(e_{1, 1}, e_{1, 2}), (e_{2,1}, e_{2, 2})..., (e_{R',1}, e_{R',2})\}$, where $e_{r,1}$ and $e_{r,2}$ are the individual positive and negative samples, respectively.
The following pair-wise margin objective function is applied: \begin{align} \label{eq:loss} \mathcal{L} = \sum_{r=1}^{R'} \text{max}(0, 1-(p_{u_{r,1}, v_{r,1}} - p_{u_{r,2}, v_{r,2}})) + \lambda_1 \mathcal{L}_{\text{sa}} + \lambda_2 \|\mathbf{\Theta}\|_\text{F}^2 \end{align} \noindent where $p_{u_{r,1}, v_{r,1}}$ and $p_{u_{r,2}, v_{r,2}}$ are the prediction scores for edges $e_{r,1}$ and $e_{r,2}$, respectively. $\lambda_1$ and $\lambda_2$ are weights for the different loss terms. $\|\mathbf{\Theta}\|_\text{F}^2$ denotes the $l_2$ regularization term for weight decay. \subsubsection{\bf Complexity Analysis} We compare our SHT\ framework with several state-of-the-art approaches for collaborative filtering, including graph neural architectures (\eg~NGCF~\cite{wang2019neural}, LightGCN~\cite{he2020lightgcn}) and hypergraph neural networks (\eg~DHCF~\cite{ji2020dual}). As discussed before, our hypergraph transformer reduces the complexity from $O(K\times (I + J) \times d)$ to $O((I+J+K) \times d^2)$. As the typical number of hyperedges $K$ is much smaller than the numbers of nodes $I$ and $J$, but larger than the embedding size $d$, the latter term is smaller and close to $O((I+J)\times d^2)$. In comparison, the complexity of a typical graph neural architecture is $O(M\times d + (I+J)\times d^2)$. Hence our hypergraph transformer network achieves efficiency comparable to GNNs, such as graph convolutional networks, in model inference. Existing hypergraph-based methods commonly pre-process high-order node relations to construct hypergraphs, which usually makes them more complex than graph neural networks. In our SHT, the self-augmented task with the loss $\mathcal{L}_\text{sa}$ has the same complexity as the original main task.
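For concreteness, the self-augmented ranking loss of Eq.~(\ref{eq:sa_loss}) can be sketched as follows. This is a minimal sketch, not the exact implementation; the tensor layout of the sampled pairs is an assumption.

\begin{verbatim}
import torch

def solidity_ranking_loss(e_u, e_v, pairs, s_labels):
    """Pair-wise solidity ranking loss, cf. Eq. (sa_loss).
    e_u, e_v: topology-aware user/item embeddings.
    pairs: LongTensor (R, 4) with columns (u1, v1, u2, v2).
    s_labels: (R, 2) solidity labels from the hypergraph transformer."""
    s_hat_1 = (e_u[pairs[:, 0]] * e_v[pairs[:, 1]]).sum(-1)  # dot products
    s_hat_2 = (e_u[pairs[:, 2]] * e_v[pairs[:, 3]]).sum(-1)
    diff_label = s_labels[:, 0] - s_labels[:, 1]  # scales the margin term
    return torch.clamp(1.0 - (s_hat_1 - s_hat_2) * diff_label, min=0.0).sum()

# example usage with random tensors
I_users, J_items, d, R = 100, 200, 32, 50
e_u, e_v = torch.randn(I_users, d), torch.randn(J_items, d)
pairs = torch.stack([torch.randint(I_users, (R,)), torch.randint(J_items, (R,)),
                     torch.randint(I_users, (R,)), torch.randint(J_items, (R,))],
                    dim=1)
labels = torch.rand(R, 2)
loss = solidity_ranking_loss(e_u, e_v, pairs, labels)
# joint objective: main margin loss + lambda1 * loss + lambda2 * l2_reg
\end{verbatim}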
\section{Introduction} Our Universe is governed by four fundamental forces. Three of these forces have been consistently described on the quantum level and combined into the Standard Model of particle physics. Only quantum gravity remains elusive and has not been fully described in terms of quantum theory. This is not only because gravity is power-counting non-renormalizable but also because the direct quantum gravity regime cannot be accessed experimentally (for example, an accelerator measuring quantum gravity effects would have to be as big as our Solar System). \\ In recent years an alternative strategy has been put forward: one formulates a fundamental quantum gravity theory and then tests which of the low energy effective theories can be UV completed by this quantum gravity model. In string theory this goes under the name of swampland conjectures \cite{Vafa:2005ui,Ooguri:2006in}. A recently widely discussed example is the so-called de Sitter conjecture \cite{Obied:2018sgi,Agrawal:2018own}, which states that string theory cannot have de Sitter vacua and is in tension with single field inflation \cite{Achucarro:2018vey,Kinney:2018nny}. There also seems to be a tension between the standard S-matrix formulation of quantum gravity and the existence of stable de Sitter space \cite{Dvali:2013eja,Dvali:2014gua,Dvali:2017eba,Dvali:2020etd}. However, it is not established whether asymptotic safety admits a standard S-matrix formulation \cite{Kwapisz:2020jux}, due to the fractal spacetime structure in the deep quantum regime \cite{Lauscher:2005qz,Lauscher:2005xz}. In line with these swampland criteria, the no eternal inflation principle has been put forward \cite{Rudelius:2019cfh}; see also the further discussions on the subject of eternal inflation \cite{Guth:2007ng,Johnson:2011aa,Lehners:2012wz,Leon:2017sru,Matsui:2018bsy,Wang:2019eym,Blanco-Pillado:2019tdf,Lin:2019fdk,Hohm:2019ccp,Hohm:2019jgu,Banks:2019oiz,Seo:2020ger}. \\ On the other hand, the theory of inflation is a well-established model providing an answer to problems in classical cosmology, such as the flatness problem, large-scale structure formation, and the homogeneity and isotropy of the universe. A handful of models are in agreement with the CMB observations. In the inflationary models, quantum fluctuations play a crucial role in primordial cosmology, providing a seed for the large-scale structure formation after inflation and giving a possibility for the eternally inflating multiverse. Initial fluctuations in the early universe may cause an exponential expansion in points scattered throughout space. Such regions rapidly grow and dominate the volume of the universe, creating ever-inflating, disconnected pockets. Since so far there is no way to verify the existence of the other pockets, we treat them as potentially autonomous universes, being part of the multiverse.\\ In the light of this tension between string theory and the inflationary paradigm \cite{Rudelius:2019cfh}, one can ask how robust the swampland criteria are for various quantum gravity models. In accordance with the theory of inflation, we anticipate that the dynamics of the universe are determined by the quantum corrections to general relativity stemming from the concrete UV model. The effective treatment led Starobinsky to create a simple inflationary model taking into account the anomaly contributions to the energy-momentum tensor.
\\ As pointed out by Donoghue \cite{Donoghue:1994dn}, below the Planck scale one can safely take the effective field theory perspective on quantum gravity. Quantum gravity effects can nevertheless be important below the Planck scale through the inclusion of higher dimensional operators. The gravitational constant $G_N$ has a vanishing anomalous dimension below the Planck scale, and various logarithmic corrections to the $R^2$ term have been considered, capturing the main quantum effects \cite{Codello:2014sua,Ben-Dayan:2014isa,Bamba:2014mua,Liu:2018hno}. Yet in order to get the correct 60 e-fold duration of the inflationary period one has to push the scalar field value in the Einstein frame beyond the Planck mass \cite{Rudelius:2019cfh}. Furthermore, most of these models do not possess a flat potential limit (they either diverge or have runaway solutions), suggesting that eternal inflation can be investigated only if one takes into account the full quantum corrections to the Starobinsky inflation. \\ In the effective field theory scheme, the predictive power of the theory is limited, as the description of gravity at transplanckian scales requires fixing infinitely many coupling constants from experiments. The idea of asymptotic safety \cite{Weinberg:1980gg} was introduced by Steven Weinberg in the late 1970s as a UV completion of the quantum theory of gravity. The behavior of an asymptotically safe theory is characterized by scale invariance in the high-momentum regime. Scale invariance requires the existence of a non-trivial Renormalization Group fixed point for dimensionless couplings. There are many possible realizations of such a non-trivial fixed point scenario, such as canonical vs anomalous scaling (the gravitational fixed point \cite{Reuter:1996cp,Souma:1999at,Lauscher:2001ya,Reuter:2001ag}), one-loop vs two-loop contributions, or gauge vs Yukawa contributions; see \cite{eichhorn2019asymptotically} for further details and \cite{Dupuis:2020fhh} for the current status of asymptotically safe gravity.\\ The existence of an interacting fixed point, and hence the flatness of the potential in the Einstein frame, led Weinberg to discuss \cite{Weinberg_2010} cosmological inflation as a consequence of Asymptotically Safe Gravity; see also \cite{Reuter:2005kb,Reuter:2012id} for a discussion of AS cosmology. Following this suggestion, we study two types of models.\\ The first type relies on the RG-improvement of the gravitational actions and is based on the asymptotic safety hypothesis that gravity admits a non-trivial UV fixed point. Since asymptotically safe gravity flattens the scalar field potentials \cite{Eichhorn:2017als}, one can expect that it will result in eternal inflation for large enough initial field values. On the other hand, RG-improved actions can serve as a UV completion of the Starobinsky model.
One should also note that the asymptotically safe swampland has been studied extensively \cite{Shaposhnikov:2009pv,Zanusso:2009bs,Daum:2009dn,Folkerts:2011jz,Christiansen:2012rx,Wang:2015sxe,Eichhorn:2016esv,Grabowski:2018fjj,Kwapisz:2019wrl,Eichhorn:2017ylw,Eichhorn:2018whv,Eichhorn:2017muy,Eichhorn:2016vvy,Christiansen:2017cxa,Eichhorn:2018nda,Christiansen:2017gtg,Eichhorn:2019dhg,Eichhorn:2019yzm,Alkofer:2020vtb,Daas:2020dyo,Held:2020kze,Hamada:2020vnf,Reichert:2019car,Eichhorn:2020kca,Eichhorn:2020sbo,Hamada:2020mug,deBrito:2020dta}.\\ The other model relies on a non-trivial fixed point in the pure matter sector, governed by the Yang-Mills dynamics in the Veneziano limit \cite{Litim_2014,Litim:2015iea}, see also \cite{Mann:2017wzh,Antipin:2018zdg,Molinaro:2018kjz,Wang:2018yer,Wang:2018yer}. In this model, we have uncovered a new type of eternal inflation scenario relying on tunneling to a false vacuum - in the opposite direction to the one considered in the old inflation proposal \cite{Guth:1980zm}.\\ In contradistinction to string theory, the couplings in the asymptotic safety paradigm are predicted from the RG-flow of the theory and their fixed point values, rather than as vacuum expectation values (vev's) of certain scalar fields. Hence, the asymptotically safe eternally inflating multiverse landscape is much less vast than the one stemming from string theory, making these models much less schismatic \cite{Ijjas:2014nta}. Finally, let us note that asymptotic safety can argue for homogeneous and isotropic initial conditions on its own using the finite action principle \cite{Lehners:2019ibe}.\\ Our work is organized as follows. In Chapter~\ref{sec:EI} we introduce the idea of eternal inflation and the multiverse, and discuss necessary conditions for eternal inflation to occur based on the Fokker-Planck equation. In Chapter~\ref{sec:EM} we show how the developed tools work in practice for two popular inflationary models. Chapter~\ref{sec:EIinASM} is devoted to the presence of eternal inflation in Asymptotically Safe models. In Chapter~\ref{Sec:Discussion} the results are discussed and conclusions are drawn. \section{How does inflation become eternal?} \label{sec:EI} In this section we discuss under what circumstances inflation becomes eternal. Our discussion follows closely \cite{Rudelius:2019cfh}. \subsection{Fokker-Planck equation} Consider a scalar field in the FLRW metric, \begin{align} S= \int d^4x \sqrt{-g} \left(\frac{1}{2}M_{Pl}^2 R+ \frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu} \phi - V(\phi)\right), \end{align} with $\phi(t,\vec{x})=\phi(t)$. One obtains the following equations of motion: \begin{align} \ddot{\phi} + 3H\dot{\phi} + \frac{\partial V}{\partial \phi} = 0, \quad H^2 M_{Pl}^2 = \frac{1}{3} \left(\frac{1}{2} \dot{\phi}^2 + V(\phi)\right), \end{align} which in the slow-roll approximation become \cite{Kwapisz:2019cxq}: \begin{align} \label{eq:slowroll} 3H\dot{\phi}+\frac{\partial V}{\partial\phi} \approx0, \quad & H^2M_{Pl}^2 \approx \frac{1}{3}V\left(\phi\right ). \end{align} Inflation ends once one of the so-called slow-roll parameters becomes of order one, \begin{align} \label{Inflconditions} \epsilon \simeq \frac{M_{Pl}^2}{2} \left(\frac{V_{,\phi}}{V}\right)^2, \quad \eta \simeq M_{Pl}^2 \frac{V_{,\phi\phi}}{V}, \end{align} and the field enters the oscillatory reheating phase.
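For concreteness, the slow-roll parameters of Eq.~(\ref{Inflconditions}) can be evaluated with a few lines of Python. This is an illustrative sketch; the quadratic potential below is a stand-in example, not one of the models studied later.

\begin{verbatim}
import numpy as np

M_PL = 1.0   # reduced Planck mass, working in Planck units

def slow_roll(V, dV, d2V, phi):
    """Potential slow-roll parameters of Eq. (Inflconditions); inflation
    is taken to end once either parameter reaches order one."""
    eps = 0.5 * M_PL**2 * (dV(phi) / V(phi))**2
    eta = M_PL**2 * d2V(phi) / V(phi)
    return eps, eta

# illustrative stand-in potential: V = m^2 phi^2 / 2
m = 1e-5
eps, eta = slow_roll(lambda p: 0.5 * m**2 * p**2,
                     lambda p: m**2 * p,
                     lambda p: m**2 * np.ones_like(p),
                     np.array([1.0, 5.0, 15.0]))
print(eps, eta)
\end{verbatim}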
The standard treatment of eternal inflation relies on the stochastic inflation approach \cite{Linde:1991sk}. One splits the field into a classical background and a short-wavelength quantum field, \begin{align} \phi\left(t,\Vec{x}\right )=\phi_{cl}\left(t,\Vec{x}\right )+\delta\phi\left(t,\Vec{x}\right ). \end{align} Since the action is quadratic in the fluctuations, their spatial average over the Hubble volume is normally distributed. Hence, from now on we shall assume that both the background and the fluctuations are homogeneous, which is the standard treatment of eternal inflation (if not otherwise specified). In the large e-fold limit, the equation of motion for the full field takes the form of the slow-roll equation with an additional classical noise term \cite{Rudelius:2019cfh,Kiefer:1998qe,Kiefer:2008ku}, known as the Langevin equation: \begin{align} \label{Langevin} 3H\dot{\phi}+\frac{\partial V}{\partial\phi} =N\left(t\right ), \end{align} where $N\left(t\right )$ is a Gaussian noise term with zero mean, whose accumulated effect after time $t$ has variance $\sigma^2=\frac{H^3t}{4\pi^2}$ \cite{Linde_1992}. The probability density of the inflaton field is then given by the Fokker-Planck equation \cite{Rudelius:2019cfh}: \begin{align} \label{Planck-Fokker} \dot{P}[\phi,t]=\frac{1}{2}\left(\frac{H^3}{4\pi^2}\right) \frac{\partial^2 P[\phi,t]}{\partial \phi^2}+\frac{1}{3H}\frac{\partial}{\partial\phi}\left(\frac{\partial V\left(\phi\right)}{\partial\phi} P[\phi,t]\right), \end{align} where $\dot{P}[\phi,t]:=\frac{\partial}{\partial t} P[\phi,t]$. \subsection{Analytic solutions} To understand the Fokker-Planck equation better, let us now briefly discuss its analytical solutions.\\ \paragraph{Case 1. Constant potential} For \begin{align} V\left(\phi\right ) = V_0, \end{align} the Fokker-Planck equation reduces to \begin{align} \dot{P}[\phi,t]=\frac{1}{2}\left(\frac{H^3}{4\pi^2}\right) \frac{\partial^2 P[\phi,t]}{\partial \phi^2}, \end{align} and furthermore $H^2 =\textrm{const}$ by the Friedmann equations. The Fokker-Planck equation then reduces to the standard heat equation, whose solution is a Gaussian distribution: \begin{align} \label{gauss} P[\phi,t] = \frac{1}{\sigma\left(t\right )\sqrt{2\pi}}\exp\left[-\frac{\left(\phi - \mu\left(t\right )\right )^2}{2\sigma\left(t\right )^2}\right], \end{align} with \begin{align} \mu\left(t\right ) = 0, \quad & \sigma^2\left(t\right ) = \frac{H^3}{4\pi^2}t. \end{align} A delta-function distribution initially centered at $\phi = 0$ will remain centered at $\phi = 0$ for all time. It will, however, spread out by the amount $\sigma \left ( t = H^{-1}\right ) = H/2\pi$ after a Hubble time. This represents the standard ``Hubble-sized'' quantum fluctuations that are well-known in the context of inflation, famously imprinted in the CMB and ultimately seeding the observed large-scale structure. \\ \paragraph{Case 2. Linear potential} For the linear hilltop model the potential is given by \begin{align} \label{eq:linearhilltop} V\left(\phi\right ) = V_0 - \alpha \phi. \end{align} The Fokker-Planck equation is analogously solved by the Gaussian distribution (\ref{gauss}) with: \begin{align} \label{linearsolution} \mu\left(t\right ) = \frac{\alpha}{3H}t, \quad & \sigma^2\left(t\right ) = \frac{H^3}{4\pi^2}t. \end{align} The time-dependence of $\mu\left(t\right )$ is due to the classical rolling of the field in the linear potential. The time-dependence of $\sigma^2\left(t\right )$ is purely due to Hubble-sized quantum fluctuations, and it precisely matches the result in the constant case. In general, for a linear or quadratic potential the equation simplifies to the heat equation, hence the solutions are Gaussian.
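The heat-equation solution above can be cross-checked by a direct Monte Carlo evolution of the noise term. The following is a minimal sketch, in Planck units, with illustrative values for the Hubble rate and the step size:

\begin{verbatim}
import numpy as np

# Monte Carlo cross-check of the heat-equation solution: for a constant
# potential the Langevin evolution should reproduce sigma^2 = H^3 t/(4 pi^2).
rng = np.random.default_rng(0)
H, dt, n_steps, n_traj = 1e-3, 1.0, 2000, 5000   # illustrative values
phi = np.zeros(n_traj)
for _ in range(n_steps):
    phi += rng.normal(0.0, np.sqrt(H**3 * dt / (4 * np.pi**2)), n_traj)
t = n_steps * dt
print(phi.var(), H**3 * t / (4 * np.pi**2))      # the two should agree
\end{verbatim}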
Furthermore, if the potential is asymptotically flat, a finite limit at infinity exists. One may employ a series expansion around this point at infinity, approximating the potential up to the linear term. It is then expected that the probability density is approximately Gaussian, cf.~(\ref{linearsolution}). In the next section, we describe in detail how the Gaussian distribution causes the probability of ongoing inflation to decay exponentially. \subsection{Eternal inflation conditions} Given an arbitrary field value $\phi_c$, one can ask for the probability that the quantum field $\phi=\phi(t)$ is above this value: \begin{align} \label{phi_c} \mathrm{Pr}[\phi>\phi_c,t]=\int^{\infty}_{\phi_c}d{\phi}P[\phi,t]. \end{align} Since the distribution is Gaussian, for $\phi_c$ far enough from the mean the probability of remaining in the inflating region can be approximated by an exponential decay: \begin{align} \label{eq:gamma} \mathrm{Pr}[\phi > \phi_c,t] \approx C(t)\exp(-\Gamma t), \end{align} where $C(t)$ is polynomial in $t$ and all of the dependence on $\phi_c$ is contained in $C(t)$. It then seems that inflation cannot last forever, since \begin{align} \lim_{t\to \infty}\mathrm{Pr}[\phi>\phi_c,t] =0. \end{align} However, there is an additional effect to be included: the expansion of the universe during inflation. The size of the universe depends on time according to \begin{align} U\left(t\right )=U_0 e^{3Ht}, \end{align} where $U_0$ is the initial volume of the pre-inflationary universe. One can interpret the probability $\mathrm{Pr}[\phi>\phi_c,t]$ as the fraction of the volume $U_{inf}\left(t\right )$ still inflating, that is: \begin{align} \label{eq:eternal} U_{inf}\left(t\right )=U_0e^{3Ht} \mathrm{Pr}[\phi>\phi_c,t]. \end{align} In order for the Universe to inflate eternally, the positive exponential factor $3H$ in Eq.~(\ref{eq:eternal}) and the negative exponential factor $-\Gamma$ in (\ref{eq:gamma}) must satisfy: \begin{align} 3H>\Gamma. \end{align} We shall illustrate this general property with the example of the linear potential. Evaluating the still-inflating fraction (for the hilltop this is the probability that the field has not yet rolled past $\phi_c$) gives in the linear case: \begin{align} \mathrm{Pr}[\phi<\phi_c,t]=\frac{1}{2} \textrm{erfc}\left({\frac{\frac{\alpha}{3H}t-\phi_c}{\frac{H}{2\pi}\sqrt{2Ht}}}\right ). \end{align} The error function may be approximated by an exponential: \begin{align} \mathrm{Pr}[\phi<\phi_c,t]=C\left(t\right )\textrm{exp}\left( -\frac{4\pi^2\alpha^2}{18H^5}t \right ), \end{align} where $C\left(t\right )$ is power-law in $t$ and $\phi_c$ has dropped out of the exponent, which is a generic feature. By comparing the exponents we can check whether $U_{inf}$ will grow or tend to zero. The condition for eternal inflation to occur becomes: \begin{align} 3H>\frac{4\pi^2\alpha^2}{18H^5}. \end{align} For the linear potential $\alpha=|V'\left(\phi\right )|$, and using the slow-roll equations (\ref{eq:slowroll}) the above condition can be rewritten as: \begin{align} \label{eq:EternalCondition1} \frac{|V'|}{V^{\frac{3}{2}}}<\frac{\sqrt{2}}{2\pi} \frac{1}{M^3_{Pl}}. \end{align} This can be interpreted as quantum fluctuations dominating over the classical rolling of the field. For asymptotically flat potentials, this condition is satisfied at large $\phi$. Similarly, a second condition for eternal inflation may be derived from the quadratic hilltop potential: \begin{equation} \label{eq:EternalCondition2} -\frac{V''}{V}<\frac{3}{M^2_{Pl}}.
\end{equation} Further necessary conditions on the $p$-th derivative with $p>2$ have been derived in \cite{Rudelius:2019cfh} and read: \begin{align} [-\textrm{sgn}\left(\partial^p V\right)]^{p+1}\frac{|\partial^p V|}{V^{(4-p)/2}}< \mathcal{N}_p M_{Pl}^{p-4}, \end{align} where $\mathcal{N}_p\gg 1$ is a numerically determined coefficient. Eternal inflation can be understood as a random walk of the field, a diffusion process on top of the classical motion \cite{Vilenkin,Guth:2000ka,Guth:2007ng}. In order to cross-check the formulas (\ref{eq:EternalCondition1}, \ref{eq:EternalCondition2}), a numerical simulation has been developed. To reconstruct the probability distribution one simulates the discretized version of equation (\ref{Langevin}): \begin{align} \label{DiscretLangevin} \phi_n=\phi_{n-1}-\frac{1}{3H}V'\left(\phi_{n-1}\right )\delta t +\delta \phi_q \left(\delta t\right ), \end{align} with $\delta \phi_q \left(\delta t\right )$ being a random number drawn from a Gaussian distribution with zero mean and variance $\frac{H^3}{4\pi^2}\delta t$. We further assume the Hubble parameter to be constant and to respect the slow-roll relation $H=\frac{1}{M_{Pl}}\sqrt{\frac{V(\phi_0)}{3}}$, where $V(\phi_0)$ is the value of the potential at the start of the simulation. We have verified that the change of $H$ caused by the field fluctuations does not affect the conclusions for eternal inflation. The simulation starts at a user-given value $\phi_0$ and follows the discretized Langevin equation (\ref{DiscretLangevin}).\\ If inflation is ongoing, the corresponding timestep $t_n$ is added to a list. This happens while the slow-roll conditions are satisfied, meaning $\epsilon(t_n)$ and $\eta (t_n)$ are smaller than one. Violation of one of these conditions resets the simulation; however, the list containing the information about the times $t_n$ of ongoing inflation is stored in memory, and this large time list is appended in the same way in each run. Its size may be estimated by $N\frac{T_{c}}{\delta t}$, where $N$ is the total number of simulations and $T_{c}$ is the time of the classical slow-roll inflation starting at $\phi_0$.\\ It is important to stress that the duration of a particular run may be too long to compute in any practical time. We therefore employ a large timeout ending the evolution, which is a good approximation for our purposes. Finally, the list containing the information about every timestep at which inflation was ongoing in the $N$ independent simulations is sorted in ascending order. A normalized histogram with 1000 equal-width bins is created from the list. The number of counts is related to the probability of ongoing inflation, while the bins correspond to inflationary time. The field's evolution supports the Fokker-Planck result (\ref{eq:gamma}): in the slow-roll regime, the probability of ongoing inflation decays exponentially with a decay parameter $\Gamma$. This is true for every numerically investigated potential in this work. In order to recognize eternally inflating models we search for an initial field value $\phi_0$ such that $\Gamma<3H$. In the numerical analysis we eliminate $\phi_c$ from equation (\ref{phi_c}) and instead check for slow-roll condition violation at each step.
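The procedure just described can be condensed into the following sketch. It is illustrative rather than production code: the plateau potential, the step size, the timeout and the number of trajectories are example choices.

\begin{verbatim}
import numpy as np

M_PL = 1.0
rng = np.random.default_rng(1)

# illustrative Starobinsky-like plateau potential, in Planck units
V0 = 8.1e-11
a = np.sqrt(2.0 / 3.0)
V   = lambda p: V0 * (1 - np.exp(-a * p))**2
dV  = lambda p: 2 * V0 * a * np.exp(-a * p) * (1 - np.exp(-a * p))
d2V = lambda p: 2 * V0 * a**2 * np.exp(-a * p) * (2 * np.exp(-a * p) - 1)

def survival_times(phi0, n_traj=500, dt=200.0, t_max=3e7):
    """Evolve n_traj independent copies of Eq. (DiscretLangevin); record for
    each the time at which a slow-roll condition is violated (t_max = timeout).
    H is kept fixed at its initial slow-roll value, as described above."""
    H = np.sqrt(V(phi0) / 3.0) / M_PL
    phi = np.full(n_traj, float(phi0))
    t_end = np.full(n_traj, t_max)
    alive = np.ones(n_traj, bool)
    t = 0.0
    while t < t_max and alive.any():
        eps = 0.5 * M_PL**2 * (dV(phi) / V(phi))**2
        eta = M_PL**2 * d2V(phi) / V(phi)
        died = alive & ((eps >= 1.0) | (np.abs(eta) >= 1.0))
        t_end[died] = t
        alive &= ~died
        kick = rng.normal(0.0, np.sqrt(H**3 * dt / (4 * np.pi**2)), n_traj)
        phi[alive] += -dV(phi[alive]) / (3.0 * H) * dt + kick[alive]
        t += dt
    return t_end, 3.0 * H

times, threshold = survival_times(5.5)
# a histogram of 'times' and an exponential fit of the surviving fraction
# yield Gamma, to be compared with the eternal-inflation threshold 3H
\end{verbatim}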
\subsection{Tunneling and eternal inflation} \label{SecTunneling} Most inflationary potentials are of the single-minimum type, such as Starobinsky inflation and the alpha-attractors. There are, however, potentials of the type depicted in figure \ref{Fig:1}, which possess several minima. In such models, inflation can become eternal due to tunnelling to false vacua. When the vacua are degenerate enough, the tunneling dominates over quantum uphill rolling. The tunnelling goes in the direction opposite to the old inflation scenario \cite{Guth:1980zm}, as shown in figure \ref{fig:Vacua2}. As it will turn out, this is the dominant effect for the model discussed in section \ref{Sannino}. The eternal inflation mechanism discussed in the previous sections relies on the local shape of the potential and cannot provide an accurate description in that case. In order to quantitatively derive predictions for this new effect, we shall instead rely on the first passage formalism \cite{Vennin_2015,Noorbala_2018} and apply it to the eternal inflation considerations. \begin{figure}[t!] \hfill \subfigure[]{ \label{fig:Vacua1} \includegraphics[width=0.45\textwidth ]{Vacua1.jpg}} \hfill \subfigure[]{ \label{fig:Vacua2} \includegraphics[width= 0.45\textwidth]{Vacua2.jpg}} \hfill \caption{Left: a field initially placed at the maximum of the potential may decay towards one of the two vacua: at $\phi_{-}$ with probability $p_{-}$ and at $\phi_{+}$ with probability $p_{+}$.\newline Right: a field initially placed at $\phi_0<\phi_{max}$ may tunnel through the barrier towards $\phi_{+}$ with probability $p_{+}(\phi_0)$. Analogous tunneling from the ``plus'' to the ``minus'' side is also possible.} \label{Fig:1} \end{figure} Given the initial value of the field $\phi_0$ between $\phi_{-}$ and $\phi_{+}$, the probabilities that it reaches $\phi_{+}$ before $\phi_{-}$, or $\phi_{-}$ before $\phi_{+}$, denoted respectively $p_{+}(\phi_0)$ and $p_{-}(\phi_0)$, obey the following equation: \begin{align} vp''_{\pm}(\phi)-\frac{v'}{v}p'_{\pm}(\phi)=0, \end{align} with boundary conditions $p_{\pm}(\phi_\pm)=1$, $p_\pm(\phi_\mp)=0$, where $v=v(\phi)$ is the dimensionless potential: \begin{align} v(\phi):=\frac{V(\phi)}{24 \pi^2 M_{Pl}^4}. \end{align} The analytical solution is: \begin{align} \label{AnalP} p_\pm(\phi_0)=\pm \frac{\int^{\phi_0}_{\phi_\mp} e^{-\frac{1}{v(\phi)}} d\phi}{\int^{\phi_{+}}_{\phi_{-}}e^{-\frac{1}{v(\phi)}}d\phi}. \end{align} One may also define the probability ratio $R$: \begin{align} \label{ProbabilityR} R(\phi_0):=\frac{p_{+}(\phi_0)}{p_{-}(\phi_0)} =\frac{\int^{\phi_0}_{\phi_{-}} e^{-\frac{1}{v(\phi)}}d\phi}{\int^{\phi_{+}}_{\phi_0}e^{-\frac{1}{v(\phi)}}d\phi}. \end{align} The above integrals may be evaluated numerically. However, if the amplitude of $v(\phi)$ is much smaller than 1, the factor $e^{-1/v(\phi)}$ will be extremely small, possibly falling below machine precision in the computation. \\ Yet one can use the steepest descent approximation. Consider a potential with the field located initially at a local maximum $\phi_{max}$ with a minimum on each side, as depicted in figure \ref{fig:Vacua1}; $p_{+}$ and $p_{-}$ give the probabilities of the evolutions realised by the red and the green ball, respectively. The probability ratio $R$ may then be evaluated approximately, with the leading contributions to (\ref{ProbabilityR}) coming from field values in the neighborhood of $\phi_{max}$. We get:\footnote{For the details of the calculation consult \cite{Noorbala_2018}.} \begin{align} \label{Rmaximum} R(\phi_{max})\approx 1-\frac{2}{3}\frac{\sqrt{2}}{\pi} \frac{v(\phi_{max}) v'''(\phi_{max})}{|v''(\phi_{max})|^{3/2}}. \end{align} In this regime, the probability of descending into either of the minima $\phi_{-}$ and $\phi_{+}$ is similar, giving $|1-R|\ll 1$.
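When $v$ is not exponentially small, the integrals in (\ref{AnalP}) can be evaluated by direct quadrature. A minimal sketch, with a toy dimensionless potential whose numbers are purely illustrative, reads:

\begin{verbatim}
import numpy as np

def p_plus(phi0, v, phi_minus, phi_plus, n=200001):
    """First-passage probability p_+(phi0) of Eq. (AnalP), by direct
    quadrature of exp(-1/v) on a uniform grid; usable only when v is
    not too small, otherwise exp(-1/v) underflows."""
    grid = np.linspace(phi_minus, phi_plus, n)
    w = np.exp(-1.0 / v(grid))
    return w[grid <= phi0].sum() / w.sum()

# toy dimensionless potential with a barrier at phi = 0 (illustrative)
v = lambda p: 0.05 * (1.2 - 0.5 * (p**2 - 1.0)**2)
print(p_plus(0.1, v, phi_minus=-1.0, phi_plus=1.0))  # slightly above 1/2
\end{verbatim}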
It is possible to start inflation in a subset of $[\phi_{-},\phi_{+}]$ that would classically lead to a violation of the slow-roll conditions, and yet tunnel through the potential barrier to the sector dominated by eternal inflation, as schematically shown in figure \ref{fig:Vacua2}. We further analyze this possibility in Sec.~\ref{tunneling section} for a particular effective potential with two vacua, stemming from an asymptotically safe theory. We use equation~(\ref{Rmaximum}) to find the dependence of $R$ on the parameters of the theory and verify the result with a direct numerical simulation of the Langevin equation (\ref{Langevin}) for a given set of parameters. \section{Exemplary models} \label{sec:EM} In this section we show the basic application of the conditions (\ref{eq:EternalCondition1}, \ref{eq:EternalCondition2}) to simple effective potentials stemming from the $\alpha$-attractor models and the Starobinsky inflation. \subsection{Alpha-attractor models} \label{AlfaAttractorSection} We start our investigation with the $\alpha$-attractor models \cite{Carrasco_2015}, a general class of inflationary models originally introduced in the context of supergravity. They are consistent with the CMB data, and their preheating phase has been studied on a lattice in \cite{Krajewski_2019}. The phenomenological features of these models are described by the Lagrangian: \begin{align} \frac{1}{\sqrt{-g}}\mathcal{L}_T=\frac{1}{2} R- \frac{1}{2}\frac{\left(\partial \phi\right)^2}{\left( 1-\frac{\phi^2}{6\alpha}\right)^2}-V(\phi). \end{align} Here, $\phi$ is the inflaton and $\alpha$ can take any real, positive value. In the limit $\alpha\xrightarrow{}\infty$ the scalar field becomes canonically normalized, and the theory coincides with chaotic inflation. The canonical and non-canonical fields are related by the transformation: \begin{align} \phi=\sqrt{6\alpha}\tanh{\frac{\varphi}{\sqrt{6\alpha}}}. \end{align} We further consider T-models, in which the potential of the canonically normalised field is given by: \begin{align} \label{AlfaV} V(\varphi)=\alpha \mu^2 \tanh^{2n}{\frac{\varphi}{\sqrt{6\alpha}}}, \end{align} where the parameter $\mu$ is of order $10^{-5}$. The shape of the potential for $n=1$ is plotted in figure \ref{fig:AlfaAttractorPotential}. At large $\varphi$ the potential (\ref{AlfaV}) is asymptotically flat, which creates the possibility for eternal inflation to occur. Using the first condition (\ref{eq:EternalCondition1}) we have verified that the space is generically eternally inflating for all initial field values above a certain $\phi_{EI}$. The second eternal inflation condition (\ref{eq:EternalCondition2}), as well as the higher order conditions, are satisfied for almost all values of $\phi_0$ above $0$, providing no new information. This is a generic feature for all of the models we investigate. \\For every $\alpha$, the value of $\phi_0$ necessary to produce 60 e-folds is safely below $\phi_{EI}$. We have found $\phi_0$ by solving the slow-roll equation numerically; it is shown in figure \ref{fig:AlfaAttractorEternalInflation}. A duration of inflation much longer than about 60 e-folds is not indicated by the Planck Collaboration data \cite{refId0}. The values of $\phi_{EI}$ change only slightly with $n$. We may therefore conclude that the $\alpha$-attractor models are consistent with the beginning of ``our'' pocket universe. However, it is not inconceivable that the field fluctuations in other parts of the early universe had values $\phi_0>\phi_{EI}$, driving eternal inflation.
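The scan over initial field values can be carried out directly from condition (\ref{eq:EternalCondition1}). The following sketch does this for the $n=1$ T-model; the parameter values ($\alpha=1$, $\mu=10^{-5}$ in Planck units) are illustrative.

\begin{verbatim}
import numpy as np

M_PL = 1.0

def eternal_threshold(V, dV, phi_grid):
    """Scan the first eternal-inflation condition (EternalCondition1):
    |V'| / V^(3/2) < sqrt(2)/(2 pi M_Pl^3); returns the smallest grid
    value above which the condition first holds."""
    lhs = np.abs(dV(phi_grid)) / V(phi_grid)**1.5
    rhs = np.sqrt(2.0) / (2.0 * np.pi * M_PL**3)
    ok = lhs < rhs
    return phi_grid[np.argmax(ok)] if ok.any() else None

# T-model with n = 1 and illustrative parameter values
alpha, mu = 1.0, 1e-5
a = np.sqrt(6.0 * alpha)
V  = lambda p: alpha * mu**2 * np.tanh(p / a)**2
dV = lambda p: 2.0 * alpha * mu**2 * np.tanh(p / a) / (a * np.cosh(p / a)**2)
print(eternal_threshold(V, dV, np.linspace(0.5, 60.0, 4000)))  # phi_EI
\end{verbatim}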
\begin{figure}[t!] \hfill \subfigure[]{ \label{fig:AlfaAttractorPotential} \includegraphics[width=0.48\textwidth]{AlfaAttractorsEffectivePotential.jpg}} \hfill \subfigure[]{ \label{fig:AlfaAttractorEternalInflation} \includegraphics[width=0.48\textwidth]{AlfaAttractorsEI.jpg}} \hfill \caption{Left: the T-model potential with $n=1$ is depicted for various $\alpha$.\newline Right: the initial value $\phi_0$ necessary for 60 e-folds as a function of $\alpha$ (blue), as well as the lowest initial value $\phi_{EI}$ of the field at which eternal inflation ``kicks in'' (yellow).} \end{figure} \FloatBarrier \subsection{Starobinsky model} Solutions stemming from the Einstein-Hilbert action predict an initial singularity. In 1980 Starobinsky proposed a model \cite{1980PhLB...91...99S} in which a purely gravitational modified action can cause a non-singular evolution of the universe, namely: \begin{align} \label{StaryLagrangian} S = \frac{1}{2}\int \sqrt{|g|}d^4x\left(M^2_{Pl}R + \frac{1}{6M^2}R^2\right ). \end{align} This can be rewritten in the effective potential form, with: \begin{align} \label{StaryPotential} V\left(\phi\right )=V_0\left(1-\exp\left( -\sqrt{\frac{2}{3}}\frac{\phi}{M_{Pl}}\right )\right )^2. \end{align} The inflation begins on a plateau at large $\phi$. The field rolls towards the minimum at $\phi=0$, where the oscillatory reheating phase occurs. It has been estimated from the CMB data that during inflation the universe expanded by approximately 60 e-folds. This corresponds to the initial condition $\phi_0=5.5$ $M_{Pl}$, without taking into account quantum gravity effects. It is possible to perturbatively recover information about the shape of the potential from the CMB; for details see \cite{Lidsey_1997}. The amplitude of the scalar power spectrum, $A_s=2\times10^{-9}$, fixes the value $V_0=8.12221\times10^{-11}$ $M^4_{Pl}$ via the relation: \begin{align} \label{cmb} V_0= 24 \pi^2 \epsilon(\phi_0)A_s. \end{align} Applying the analytical eternal inflation conditions (\ref{eq:EternalCondition1}, \ref{eq:EternalCondition2}) to the Starobinsky potential, the initial value of the field above which eternal inflation occurs has been estimated to be $\phi_0=16.7$ $M_{Pl}$. It has been found that the decay rate $\Gamma$ decreases approximately exponentially with $\phi_0$. Our numerical simulation confirms the analytical prediction discussed in \cite{Rudelius:2019cfh} within a sample of 10000 simulations, performed as described in the previous section. An exemplary numerical evolution of the Langevin equation (\ref{Langevin}) is shown in figure \ref{starysample}. The linear fit at early times shows the exponential decay of the inflating fraction in the slow-roll regime.\\ Nevertheless, for the eternal inflation scenario and realistic phenomenology, this model requires transplanckian field values in order to reproduce the correct tensor-to-scalar ratio $r$, amplitudes and spectral tilt $n_s$. Hence the Starobinsky inflation will be affected, and possibly invalidated, by quantum gravity fluctuations. The leading-log corrections have been studied in \cite{Liu:2018hno}, and we study them as well in the context of eternal inflation. Yet, due to the large field values required for eternal inflation to occur, one should seek a theory predictive up to an arbitrarily large energy scale, which we discuss in the next section.
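The quoted initial condition can be cross-checked against the slow-roll e-fold integral $N=\frac{1}{M_{Pl}^2}\int_{\phi_{end}}^{\phi_0} \frac{V}{V'}\,d\phi$. Below is a minimal sketch; $\phi_{end}\approx 0.94\,M_{Pl}$ is where $\epsilon=1$ for this potential.

\begin{verbatim}
import numpy as np

M_PL = 1.0
V0 = 8.12221e-11          # CMB-normalized plateau value quoted above
a = np.sqrt(2.0 / 3.0)
V  = lambda p: V0 * (1.0 - np.exp(-a * p))**2
dV = lambda p: 2.0 * V0 * a * np.exp(-a * p) * (1.0 - np.exp(-a * p))

def n_efolds(phi0, phi_end=0.94, n=200000):
    """Slow-roll e-fold number N = (1/M_Pl^2) int_{phi_end}^{phi0} V/V' dphi."""
    grid = np.linspace(phi_end, phi0, n)
    return np.sum(V(grid) / dV(grid)) * (grid[1] - grid[0]) / M_PL**2

print(n_efolds(5.5))   # approximately 60, matching phi_0 = 5.5 M_Pl above
\end{verbatim}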
\begin{figure}[t!] \label{starysample} \hfill \subfigure{\includegraphics[width=0.49\textwidth]{Samplef03.png}} \hfill \subfigure{\includegraphics[width=0.49\textwidth]{ProbabilityStaryf0=3vareps.png}} \hfill \caption{Left: an exemplary field evolution for the Starobinsky model. The green curve shows the solution to the classical slow-roll equation, and the black curve is the Langevin solution. Inflation ends when the slow-roll parameter reaches 1. Values of the field are given in $M_{Pl}$.\newline Right: the time dependence of the probability that inflation is still ongoing. In the slow-roll regime, the probability decays exponentially. The slope of the linear fit gives a decay rate of around $\Gamma=0.15$. The red, dashed line denotes the eternal inflation threshold with slope $3H$ of order $10^{-5}$. Since $3H<\Gamma$, the initial condition $\phi_0=3$ $M_{Pl}$ is an example of a non-eternally inflating universe.} \end{figure} \section{Eternal inflation in asymptotically safe models} \label{sec:EIinASM} In this chapter, as a warm-up, we study effective corrections to the Starobinsky inflation, providing a different behavior at large field values. Later we show that the RG-improvement of the Starobinsky model proposed in \cite{Bonanno:2015fga} (see also \cite{Bonanno:2017pkg, Platania:2020lqb} for a review), closely related to the renormalizable $R+R^2$ Fradkin-Tseytlin gravity \cite{Fradkin:1981hx}, produces a branch of the inflationary potential entirely dominated by eternal inflation. For the remaining branch, we find the initial values of the inflaton for which inflation becomes eternal, as a function of the theory parameters. Finally we show that the possibility of tunneling through the potential barrier present in \cite{Nielsen:2015una} constitutes a new mechanism for eternal inflation. In all of the asymptotically safe inflationary theories, eternal inflation is present as a consequence of the asymptotic flatness of the effective potential. \subsection{Quantum corrections to the Starobinsky model} Below the Planck scale the gravitational constant $G_N$ has a vanishing anomalous dimension and the $R^2$ term has a coefficient that runs logarithmically \cite{Demmel:2015oqa} (this comes from the fact that the $R^2$ coupling is dimensionless in $4$ dimensions). Hence, one can motivate various quantum corrected inflationary models, such as \cite{Codello:2014sua,Ben-Dayan:2014isa,Bamba:2014mua,Liu:2018hno}. In particular, the leading-log corrections to the Starobinsky model are given by \cite{Liu:2018hno}: \begin{align} \label{L_RS} \mathcal{L}_{eff} = \frac{M_{Pl}^{2} R}{2} + \frac{\frac{a}{2}R^2}{1+b \ln(\frac{R}{\mu^2})}+\mathcal{O}(R^3). \end{align} In order to find the Einstein frame potential for this model a few steps need to be taken. First, we perform a conformal transformation \cite{Kaiser:2010ps}. Then, transforming the Ricci scalar and the metric determinant accordingly, we arrive at the Einstein frame action: \begin{align} \label{EffActionFinal_RS} S = \int d^4 x \sqrt{-g_E} \left[\frac{M_{Pl} ^2}{2} R_E - \frac{1}{2} g_E ^{\mu \nu} \left(\partial_\mu \phi_E\right) \left(\partial_\nu \phi_E\right) - V_E (\phi_E) \right], \end{align} which contains the sought potential $V_E$, obtained as follows:\footnote{Here, the effective action differs from \cite{Liu:2018hno} due to the introduction of the auxiliary numerical parameter\\ $e\approx 2.81$.
Nevertheless, the dynamics stemming from each of the potentials are equivalent.} \begin{align} \label{Veff_RS} V_E (\Phi) =\frac{M_{Pl}^4}{2} \frac{a \Phi^2 \left(1+b \ln \left(\frac{\Phi}{\mu^2}\right)\right)^2 \left(1+b \ln \left(\frac{\Phi}{e\mu^2}\right)\right)}{\left\{M_{Pl}^{2} \left(1+b \ln \left(\frac{\Phi}{\mu^2}\right)\right)^2 + 2 a \Phi \left(1+b \ln \left(\frac{\Phi}{\sqrt{e}\mu^2}\right)\right)\right\}^2}, \end{align} with $\phi_E$ given by \begin{equation} F(\phi_E)=M_{Pl}^2\exp\left(\sqrt{\frac{2}{3}}\frac{\phi_E}{M_{Pl}}\right)=M_{Pl}^2 + \frac{a\Phi[2-b+2b\ln (\Phi/\mu^2)]}{[1+2b\ln (\Phi/\mu^2)]^2}; \end{equation} the transformation between $\Phi$ and $\phi_E$ is, however, not analytically invertible. By taking into account the COBE normalization, we can treat $b$ as a free parameter and fix $a(b)$. For $b=0$ one obtains the usual Starobinsky model, and for $b \ll 1 $ one gets the model $R^2(1+\beta \ln R)$ discussed in \cite{Ben-Dayan:2014isa}, with the potential given by the Lambert W function (the same as for the model discussed in section \ref{Sannino}) and approximated in the limit $\beta \ll 1$ as \begin{equation} \label{smallb} V \approx \frac{V_s}{1+b/(2\alpha)+ \beta/\alpha\ln[(e^{\tilde{\chi}}-1)/2\alpha]}, \end{equation} where $V_s$ is the Starobinsky potential, $\tilde{\chi}$ is the Einstein frame field and $\alpha(\beta)$ is a $\beta$-dependent constant; we have kept the original notation. \\ From the plots~\ref{fig:RStarobinskya}, \ref{fig:RStarobinskyb}, one can see that both of the models should give inflationary observables similar to those of the Starobinsky inflation. On the other hand, eternal inflation in these models is quite different. For $\beta<0$ and $b>0$ the potentials are non-flat at large field values, while for $\beta>0$ the potential depicted in Figure \ref{fig:RStarobinskyb} shows runaway behaviour; this different asymptotic behaviour is discussed in Section \ref{Sannino}. This makes those potentials qualitatively different from the Starobinsky model in the context of eternal inflation and suggests that eternal inflation cannot take place in those models. To be concrete, we have checked that eternal inflation for the model described by (\ref{Veff_RS}) would take place only at $\Phi \approx 1000\,M_{Pl}$, which is far beyond the applicability of the model. We now turn to the inflationary models stemming from asymptotic safety. \FloatBarrier \begin{figure}[t!] \hfill \subfigure[]{ \label{fig:RStarobinskya} \includegraphics[width=0.48\textwidth]{4a.jpeg}} \hfill \subfigure[]{ \label{fig:RStarobinskyb} \includegraphics[width=0.48\textwidth]{4b.jpeg}} \hfill \caption{Left: plots of the potential (\ref{Veff_RS}) for various $b$ parameters.\newline Right: the potential (\ref{smallb}) for various values of the $\beta$ parameter. The runaway behaviour for $\beta > 0$ is visible.} \end{figure} \noindent \FloatBarrier \subsection{RG-improved Starobinsky inflation} Renormalization Group improvement is the procedure of identifying the RG scale $k^2$ with a physical scale, thereby incorporating the leading-order quantum effects into the dynamics of a classical system. In the case of gravity, the running of the coupling constants in the Einstein-Hilbert action results in an additional contribution to the field equations from the gravitational energy-momentum tensor \cite{Platania:2020lqb}. In the de Sitter-type setting $k^2\sim R$ is the unique identification of the physical scale dictated by the Bianchi identities \cite{Platania:2020lqb}.
Such a replacement in the scale-dependent Einstein-Hilbert action generates an effective $f(R)$ action, whose analytical expression is determined by the running of the gravitational couplings. RG-improvement could solve the classical black hole singularity problem \cite{Bonanno:1998ye,Bonanno:2006eu}, gives a finite entanglement entropy \cite{Pagani:2018mke}, and generates an inflationary regime in quantum gravity \cite{eichhorn2019asymptotically}.\\ In this section, we study the asymptotically safe inflation based on the RG-improved quadratic gravity Lagrangian considered in \cite{Bonanno:2015fga,Platania:2020lqb}: \begin{equation} \mathcal{L}_{k}=\frac{1}{16\pi g_{k}}\left(R-2\lambda_{k} k^2 \right)-\beta_{k} R^2, \end{equation} with the dimensionless couplings $g_{k}$, $\lambda_{k}$, $\beta_{k}$ corresponding to the three relevant directions of the theory, and with the running given by \cite{Bonanno:2004sy}: \begin{equation} g_k= \frac{6\pi c_1 k^2}{6 \pi \mu^2 + 23 c_1(k^2-\mu^2)}, \quad \quad \beta_k = \beta_{\ast} + b_0 \left(\frac{k^2}{\mu^2}\right)^{-\theta_3/2}, \end{equation} where $\mu$ is the infrared renormalization point such that $c_1=g_k(k=\mu)$, and $c_1$ and $b_0$ are integration constants. We introduce a parameter $\alpha$ as \begin{equation} \alpha=-2\mu^{\theta_3}b_0/M_{Pl}^2, \end{equation} which measures the departure from the non-Gaussian fixed point (NGFP). One may find the behavior of the couplings near the NGFP and substitute the appropriate expressions into the Lagrangian, using the RG-improvement with the scale identification $k^2= \xi R$, where $\xi$ is an arbitrary parameter of order one. \begin{figure}[t!] \hfill \subfigure[]{ \label{fig:PlataniaFit1} \includegraphics[width=0.48\textwidth]{PlataniaPlot.jpg}} \hfill \subfigure[]{ \label{fig:PlataniaFit2} \includegraphics[width=0.48\textwidth]{PlataniaEternalalpha.jpg}} \hfill \caption{Left: $V_{+}(\phi)$ for various $\alpha$ and fixed $\Lambda=1$. \newline Right: the logarithmic dependence on the parameter $\alpha$ of the initial field value above which eternal inflation occurs. The blue points were evaluated via (\ref{eq:EternalCondition1}).} \end{figure} Following \cite{Bonanno:2015fga} we shall assume $\theta_3 =1$; the transformation from the Jordan to the Einstein frame then yields the effective potential \cite{Bonanno:2015fga,Platania:2020lqb}: \begin{align} \label{Vpm} \begin{split} V_{\pm} =& \frac{m^2 e^{-2 \sqrt{\frac{2}{3}}\kappa\phi}}{256\kappa}\Bigg\{\vphantom{6 \alpha ^3 \sqrt{\alpha^2 + 16 e^{\sqrt{\frac{2}{3}}\kappa\phi}-16}} 192(e^{\sqrt{\frac{2}{3}}\kappa\phi}-1)^2 - 3\alpha ^4 + 128 \Lambda \\ &- \sqrt{32} \alpha \left[ (\alpha ^2 + 8 e^{\sqrt{\frac{2}{3}}\kappa\phi} - 8) \pm \alpha \sqrt{\alpha ^2 + 16 e^{\sqrt{\frac{2}{3}}\kappa\phi} -16}\right]^{\frac{3}{2}}\\ &- 3\alpha ^2 (\alpha ^2 + 16 e^{\sqrt{\frac{2}{3}}\kappa\phi} - 16) \mp 6 \alpha ^3 \sqrt{\alpha^2 + 16 e^{\sqrt{\frac{2}{3}}\kappa\phi}-16} \Bigg\} \end{split}, \end{align} where, after the CMB normalization we perform below, the only free parameters are the cosmological constant $\Lambda$ and $\alpha$. The $V_{+}$ branch predicts a reheating phase; figure \ref{fig:PlataniaFit1} shows its plot for various $\alpha$.\\ Similarly to the case of Starobinsky inflation, we denote by $V_0$ the constant part of the potential at infinity, $V(\phi\xrightarrow{}\infty)=V_0=\frac{3m^2}{4 \kappa^2}$, and fix it with the CMB data via the relation (\ref{cmb}).
For example, given $\alpha=2.8$, $\Lambda=1$, the plateau value is equal to $V_0=1.99\times10^{-10}$ $M^4_{Pl}$; hence one may fix the mass parameter $m=2\times10^{14}$ GeV.\\ Now we investigate the eternal inflation conditions given by (\ref{eq:EternalCondition1}, \ref{eq:EternalCondition2}). These conditions restrict the initial value of the field. We search for the $\phi_0$ above which eternal inflation occurs, as a function of the theory parameters. We have found that the initial value above which eternal inflation occurs does not depend on the cosmological constant. This is due to the fact that $\Lambda$ only shifts the minimum of the potential and does not affect the large-field behavior of the system. The analytical conditions for eternal inflation have been checked for a set of $\alpha$ values and are depicted in figure \ref{fig:PlataniaFit2}; the initial value of the field depends logarithmically on $\alpha$. The reason for this behaviour is the following. In the large field expansion, \begin{align} V_{\pm}(\phi)=V_{plateau}-128 V_0 \alpha e^{-\frac{1}{2}\sqrt\frac{3}{2}\phi}, \end{align} and by the substitution $\tilde{\phi} = e^{-\frac{1}{2}\sqrt\frac{3}{2}\phi}$ the potential reduces to the linear hilltop model, which justifies the usage of the formulae (\ref{eq:EternalCondition1}, \ref{eq:EternalCondition2}) and explains the functional form of $\phi_0(\alpha)$; see the worked estimate below. The results were also confirmed by the numerical simulations. For example, given $\Lambda=1, \, \alpha=1.6$, the analytical considerations predict $\phi_{EI}=22.6 \, M_{Pl}$. The direct numerical simulation for this set of parameters yields $\Gamma=0.0001$ $M_{Pl}$ and $3 H=0.0003$ $M_{Pl}$, meaning that eternal inflation begins slightly below the expected value $\phi_{EI}$. The plateau of (\ref{Vpm}) at large field values is a characteristic feature of effective inflationary potentials stemming from the asymptotically safe theories. It is dominated by eternal inflation and may suggest a deeper relation between the asymptotic safety of quantum gravity and the multiverse.
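The logarithmic dependence can be made explicit by inserting the large-field expansion into the first condition (\ref{eq:EternalCondition1}). The following is a sketch of the estimate, approximating $V\approx V_{plateau}$ wherever the potential appears undifferentiated: \begin{align} \frac{|V_{\pm}'|}{V_{\pm}^{3/2}}\approx \frac{32\sqrt{6}\, V_0\, \alpha\, e^{-\frac{1}{2}\sqrt{\frac{3}{2}}\phi}}{V_{plateau}^{3/2}} <\frac{\sqrt{2}}{2\pi}\frac{1}{M_{Pl}^3} \quad\Longrightarrow\quad \phi_{EI}\simeq 2\sqrt{\frac{2}{3}}\, \ln\!\left(\frac{64\sqrt{3}\,\pi\, V_0\,\alpha\, M_{Pl}^3}{V_{plateau}^{3/2}}\right), \end{align} so that $\phi_{EI}$ grows linearly in $\ln\alpha$, in agreement with figure \ref{fig:PlataniaFit2}.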
\subsection{Large N-dynamics and (eternal) inflation} \label{Sannino} In this section we investigate a model in which inflation is driven by an ultraviolet-safe and interacting scalar sector stemming from a new class of non-supersymmetric gauge field theories. We consider an $\mathrm{SU}(N_C)$ gauge theory with $N_F$ Dirac fermions, interacting with an $N_F$ $\times$ $N_F$ complex scalar matrix $H_{ij}$ that self-interacts, as described in \cite{Nielsen:2015una}. The Veneziano limit ($N_F \to +\infty$, $N_C\to +\infty$, $N_F/N_C=\mathrm{const}$) is taken such that the ratio $N_F/N_C$ becomes a continuous parameter \cite{Litim_2014}. The action in the Jordan frame has the following form: \begin{equation} S_J=\int d^4x \sqrt{-g} \left\{ - \frac{M^2+\xi\phi^2}{2}R + \frac{g^{\mu \nu}}{2}\partial_{\mu}\phi \partial_{\nu}\phi - V_{\mathrm{iUVFP}} \right\}, \end{equation} where the leading-log resummed potential $V_{\mathrm{iUVFP}}$ is given by: \begin{equation} V_{\mathrm{iUVFP}}(\phi)=\frac{\lambda_* \phi^4}{4 N_f^2 \left(1+W(\phi)\right)}\left(\frac{W(\phi)}{W(\mu_0)}\right)^{\frac{18}{13 \delta}}, \end{equation} where $ \lambda_* = \delta \frac{16 \pi^2}{19}(\sqrt{20+6\sqrt{23}}-\sqrt{23}-1)$ is the positive quartic coupling at the fixed point, $\phi$ is the real scalar field along the diagonal of $H_{ij} = \phi \delta_{ij}/\sqrt{2N_f}$, $\delta = N_F/N_C - 11/2$ is the positive control parameter, and $W(\phi)$ is the Lambert function solving the transcendental equation \begin{equation} z = W \exp W, \end{equation} with \begin{equation} z(\mu) = \left(\frac{\mu_0}{\mu}\right)^{\frac{4}{3}\delta \alpha*}\left(\frac{\alpha*}{\alpha_0}- 1 \right) \exp \left[\frac{\alpha*}{\alpha_0} - 1 \right] . \end{equation} The parameter $\alpha* = \frac{26}{57}\delta + O(\delta^2)$ is the gauge coupling at its UV fixed point value and $\alpha_0 = \alpha(\mu_0)$ is the same coupling at a reference scale $\mu_0$. \\ A conformal transformation allows one to rewrite the action from the Jordan to the Einstein frame. Assuming single-field slow-roll inflation, we examine the inflationary predictions of the potential and compute the slow-roll parameters: \begin{equation} \epsilon = \frac{M_{Pl}^2}{2}\left(\frac{dU/d\chi}{U}\right)^2, \quad \quad \eta = M_{Pl}^2\frac{d^2U/d\chi^2}{U}, \end{equation} where $U = V_{\mathrm{iUVFP}}/ \Omega^4$, with $\Omega^2 = (M^2 + \xi \phi^2)/M_{Pl}^2$ being the conformal factor of the metric transformation, and $\chi$ is the canonically normalized field in the Einstein frame. We assume that $M = M_{Pl}$. Inflation ends when the slow-roll conditions are violated, that is, when $\epsilon(\phi_{end})=1$ or $|\eta(\phi_{end})| = 1$. We analyze the non-minimal case, where the coupling $\xi$ is non-vanishing. The potential $U$ is given by: \begin{equation} \label{NonMinimalPotential} U=\frac{V_{\mathrm{iUVFP}}}{\Omega^4} \approx \frac{\lambda_* \phi^4}{4 N_F^2 \left( 1+ \frac{\xi \phi^2}{M_{Pl}^2} \right)^2} \left(\frac{\phi}{\mu_0} \right)^{-\frac{16}{19}\delta} \mathrm{.}\end{equation}
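The potential can be evaluated directly with the Lambert $W$ function available in SciPy. In the sketch below the reference coupling $\alpha_0$ at the scale $\mu_0$ is a hypothetical choice (it is not fixed in the text), and the remaining numbers match the parameter set used later in this section.

\begin{verbatim}
import numpy as np
from scipy.special import lambertw

M_PL, N_F, XI, DELTA, MU0 = 1.0, 10, 1.0/6.0, 0.1, 1e-3
ALPHA_STAR = 26.0/57.0 * DELTA
ALPHA_0 = 0.9 * ALPHA_STAR     # hypothetical reference value alpha(mu_0)
LAM = DELTA * 16*np.pi**2/19 * (np.sqrt(20 + 6*np.sqrt(23)) - np.sqrt(23) - 1)

def W(phi):
    """Lambert W solving z = W exp(W), with z evaluated at mu = phi."""
    z = (MU0 / phi)**(4.0/3.0 * DELTA * ALPHA_STAR) \
        * (ALPHA_STAR/ALPHA_0 - 1.0) * np.exp(ALPHA_STAR/ALPHA_0 - 1.0)
    return lambertw(z).real

def U(phi):
    """Einstein-frame potential V_iUVFP / Omega^4 with M = M_Pl."""
    V = LAM * phi**4 / (4.0 * N_F**2 * (1.0 + W(phi))) \
        * (W(phi) / W(MU0))**(18.0 / (13.0 * DELTA))
    return V / (1.0 + XI * phi**2 / M_PL**2)**2

phis = np.linspace(1.0, 40.0, 2000)
print(phis[np.argmax(U(phis))])   # local maximum, close to 16.7 M_Pl
\end{verbatim}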
\begin{figure}[t!] \hfill \subfigure[]{ \label{fig:Sanninoplot} \includegraphics[width=0.48\textwidth]{SanninoPotential.jpg}} \hfill \subfigure[]{ \label{fig:SanninoCondition1} \includegraphics[width=0.48\textwidth]{SanninoEternalCOndition.jpg}} \hfill \caption{Left: the non-minimally coupled potential as a function of $\phi$ for $\delta = 0.1$, $\xi = 1/6$ and $\mu_{0}= 10^{-3}M_{Pl}$. There is a maximum at $\phi_{max}=16.7\, M_{Pl}$.\newline Right: for the same set of parameters, we plot the first eternal inflation condition as a function of $\phi$ (blue curve) and the eternal inflation bound (yellow curve). Inflation becomes eternal if the blue curve is below the yellow one. At $\phi_{max}$ the first derivative of the potential vanishes and (\ref{eq:EternalCondition1}) predicts a narrow window for eternal inflation.} \end{figure} In the large field limit $\phi$ $\gg$ $M_{Pl}/\sqrt{\xi}$ the $\phi^4$ term in the numerator cancels against the corresponding term in the denominator. In this limit, the quantum corrections dictate the behaviour of the potential, which is found to decrease as: \begin{equation}\label{eqn:maxima} \frac{\lambda_* M_{Pl}^4}{4 N_F^2 \xi^2}\left(\frac{\phi}{\mu_0}\right)^{-\frac{16}{19}\delta} \mathrm{.}\end{equation} The non-minimally coupled potential has one local maximum and two minima. The region to the left of the maximum is where inflation can be brought to an end and reheating takes place \cite{Svendsen:2016kvn}. To the right of the maximum, inflation becomes classically eternal: for large values of $\phi$ the potential flattens out and the slow-roll conditions are not violated. Numerical solutions of the Fokker-Planck equation show that there is no possibility of eternal inflation sustained at the maximum itself, since it is unstable and any quantum fluctuation will displace the field from that position. Furthermore, due to the steepness of the potential around this maximum, there is no possibility for the field to remain in that region. Let us now investigate the analytical eternal inflation conditions. Similarly as in the Starobinsky model, the second condition (\ref{eq:EternalCondition2}) is always satisfied. The first condition (\ref{eq:EternalCondition1}) is illustrated in figure \ref{fig:SanninoCondition1}. There is a peak at $\phi$ = $\phi_{max}$ = 16.7 $M_{Pl}$ due to the vanishing derivative, and if we ``zoom in'', the analytical condition allows for eternal inflation in the close neighbourhood of $\phi_{max}$. We have verified numerically that this is not a sustainable attractor of eternal inflation: a field that starts its evolution at $\phi_{max}$ will leave this region, as it cannot climb further uphill. Nevertheless, eternal inflation may still occur due to quantum tunnelling through the potential barrier. \paragraph{Tunneling through the potential barrier} \label{tunneling section} As described in section \ref{SecTunneling}, if the potential has multiple vacua, quantum tunneling through the potential barrier is expected. The non-minimally coupled potential (\ref{NonMinimalPotential}) belongs to this class. The question is whether tunnelling from the non-eternal inflation region $\phi<\phi_{max}$ to the region of classical eternal inflation $\phi>\phi_{max}$ is possible.\\ We start by investigating the fate of the field initially placed at the peak $\phi_0=\phi_{max}$ of the potential depicted in figure \ref{fig:Sanninoplot}. By virtue of the steepest descent approximation at the maximum, equation (\ref{Rmaximum}) may be employed. The resulting ratio of the probabilities of the right-side descent to the left-side descent, $R(\delta,\xi)=\frac{p_{+}}{p_{-}}$, as a function of the control parameter $\delta$ and the non-minimal coupling constant $\xi$, was calculated directly from the formula (\ref{Rmaximum}), without the need of a numerical simulation of the Langevin equation; a sketch of this evaluation is given below. It is presented in figure \ref{fig:RContour}. Due to the complexity of the potential (\ref{NonMinimalPotential}), its maximum was found numerically and then employed in (\ref{Rmaximum}). Figure \ref{fig:PhimaxContour} shows how the maximum changes with the parameters. As expected, the ratio $R(\delta,\xi)$ is close to 1 and favors the right side (left side) of the potential for large (small) values of the parameters. The largest ratio emerges at large values of the parameters $\xi$ and $\delta$, since there the potential is ``step-like'' and highly asymmetric. It is monotonically decreasing with $\phi_{max}$, the field value at which the potential has its maximum, see figure \ref{fig:Rmax}.
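Equation (\ref{Rmaximum}) is straightforward to evaluate once the maximum is located. Below is a sketch with finite-difference derivatives and a toy dimensionless potential whose numbers are purely illustrative:

\begin{verbatim}
import numpy as np

def r_ratio(v, phi_max, h=1e-4):
    """Probability ratio R of Eq. (Rmaximum) at a local maximum phi_max,
    with the derivatives of v evaluated by finite differences."""
    v0 = v(phi_max)
    v2 = (v(phi_max + h) - 2.0*v0 + v(phi_max - h)) / h**2
    v3 = (v(phi_max + 2*h) - 2.0*v(phi_max + h)
          + 2.0*v(phi_max - h) - v(phi_max - 2*h)) / (2.0 * h**3)
    return 1.0 - (2.0/3.0) * (np.sqrt(2.0)/np.pi) * v0 * v3 / abs(v2)**1.5

# toy dimensionless potential with a maximum at phi = 0 (illustrative)
v = lambda p: 1e-9 * (1.0 - 0.5*p**2 + 0.1*p**3)
print(r_ratio(v, 0.0))   # close to 1, i.e. |1 - R| << 1
\end{verbatim}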
\FloatBarrier \begin{figure}[t!] \hfill \subfigure[]{ \label{fig:RContour} \includegraphics[width=0.48\textwidth]{Rcontour.jpg}} \hfill \subfigure[]{ \label{fig:PhimaxContour} \includegraphics[width=0.48\textwidth]{PhiMaxContour.jpg}} \hfill \caption{Left: the probability ratio $R=\frac{p_{+}}{p_{-}}$ of descending from the maximum towards the right minimum ($p_{+}$) and the left minimum ($p_{-}$), as a function of the theory parameters $\xi$ and $\delta$. For small values of the parameters it is more probable to fall from the maximum towards $\phi_{-}=0$ with non-eternal inflation, while for large values of the parameters the minimum at $\phi_{+}=\infty$ is favored, resulting in an eternally inflating universe.\newline Right: the value of the field at which the potential is maximal. The two figures are qualitatively similar because for small values of $\phi_{max}$ the effective potential is highly asymmetric (``step-like''). This breaks the symmetry between the right and left descent probabilities.} \end{figure} \FloatBarrier In order to verify the accuracy of the relation (\ref{Rmaximum}), we have performed a numerical simulation of the discretized Langevin equation (\ref{DiscretLangevin}) with the initial condition $\phi_0=\phi_{max}$. For example, it was found that for the set of parameters $N_F=10$, $\mu=10^{-3} M_{Pl}$, $\xi=\frac{1}{6}$, $\delta=0.1$ the steepest descent approximation yields $R=0.92$, while the numerical analysis results in $R=0.97$, which demonstrates the good accuracy of the analytical formula.\\ One may wonder how the probability $p_{\pm}(\phi_0)$ depends on the departure from the maximum, $\phi_0 \neq \phi_{max}$. The analytical answer is given by (\ref{AnalP}). As we have checked numerically, inflation becomes eternal when the tunneling probability is non-zero, as depicted in figure \ref{fig:TunnelingProbabilities}.\\ To bypass the numerical calculation of the integral (\ref{AnalP}) we employ a direct numerical simulation of the Langevin equation; a sketch is given below. This time, however, we do not track the time evolution of the inflaton. Rather than creating histograms of the counts of inflationary events at a given timestep, we simply track the probabilities $p_{+}$ and $p_{-}$. We say that the particle tunnelled through the potential barrier, contributing to $p_{+}$, if the evolution starts at $\phi_0<\phi_{max}$ and proceeds to arbitrarily large field values after a long time. For each point in figure \ref{fig:TunnelingProbabilities} the probability has been calculated on a sample of 10000 simulations. As expected, choosing values of $\phi_0$ smaller than $\phi_{max}$ lowers the probability of tunneling to the right side of the barrier. Moreover, the probability of tunneling decreases linearly with the distance to the maximum. The result of the simulation for the set of parameters \begin{equation} \label{eq:Sanninopar} N_F=10,\quad \mu=10^{-3} M_{Pl},\quad \xi=\frac{1}{6},\quad \delta=0.1 \end{equation} is shown in figure \ref{fig:TunnelingProbabilities}. The green line corresponds to the green ball in figure \ref{fig:Vacua1} and shows the probability of tunneling through the barrier (as in figure \ref{fig:Vacua2}) as a function of the distance from the maximum, $\phi_0 \neq \phi_{max}$. The red line corresponds to the red ball in figure \ref{fig:Vacua1} and shows the probability of rolling towards the minimum at infinity.
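The first-passage tracking described above can be sketched as follows. The absorbing boundaries and the toy barrier potential are illustrative assumptions replacing the ``arbitrarily large field values'' of the text:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def p_plus_mc(dU, phi0, left, right, H, dt=1.0, n_traj=2000, n_steps=40000):
    """Monte Carlo estimate of p_+(phi0): the fraction of Langevin runs
    absorbed at the right boundary before the left one (an absorbing
    wall at 'right' stands in for arbitrarily large field values)."""
    phi = np.full(n_traj, float(phi0))
    alive = np.ones(n_traj, bool)
    n_right = 0
    for _ in range(n_steps):
        if not alive.any():
            break
        kick = rng.normal(0.0, np.sqrt(H**3 * dt / (4*np.pi**2)), n_traj)
        phi[alive] += -dU(phi[alive]) / (3.0*H) * dt + kick[alive]
        hit_right = alive & (phi >= right)
        n_right += hit_right.sum()
        alive &= (phi > left) & (phi < right)
    return n_right / n_traj

# toy barrier, U = 1e-6 (phi^2 - 1)^2, with purely illustrative parameters
dU = lambda p: 4e-6 * p * (p**2 - 1.0)
print(p_plus_mc(dU, phi0=-0.2, left=-2.0, right=2.0, H=0.2))
\end{verbatim}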
\\ The rolling is also a stochastic process, as tunneling in the opposite direction is possible. The probability distribution of tunneling in either direction is not symmetric. Notice that the initial condition for which $p_{+}=\frac{1}{2}$ is shifted to the right of $\phi_{max}$. This means that, starting from the maximum, it is slightly more probable to land in $\phi_{-}$. There is a point below which the green ball cannot tunnel, $p_{+}=0$ (for the set of parameters given by (\ref{eq:Sanninopar}), at $9.2\, M_{Pl}$), and an opposite limiting case (at $24.8$ $M_{Pl}$) above which the red ball cannot tunnel, $p_{-}=0$. Hence, for every initial value of the field above $9.2$ Planck masses, there is a non-zero probability of eternal inflation. On the other hand, for $\phi_0=9.2 \,M_{Pl}$ and parameters given by (\ref{eq:Sanninopar}), the inflation classically produces roughly 54 e-folds, depending on the reheating time \cite{Svendsen:2016kvn}, and is in agreement with the CMB data. This shows that the model is on the verge of being eternally inflating, which may point to interesting phenomenology.\\ To sum up, the critical point of our analysis is that the analytical conditions (\ref{eq:EternalCondition1}, \ref{eq:EternalCondition2}) did not allow for eternal inflation, even though the tunneling process may evolve any initial point above 9.2 Planck masses to $\phi_{+}=\infty$ without violating the slow-roll conditions. This shows that the conditions (\ref{eq:EternalCondition1},\ref{eq:EternalCondition2}) are not well suited for models with multiple minima and do not capture the full information about the global influence of quantum fluctuations in the early universe. \begin{figure}[t!] \hfill \subfigure[]{ \label{fig:TunnelingProbabilities} \includegraphics[width=0.48\textwidth]{TunnelingProbability.jpg}} \hfill \subfigure[]{ \label{fig:Rmax} \includegraphics[width=0.48\textwidth]{Rphimax.jpg}} \hfill \caption{Left: the probability of tunneling (green side) and rolling (red side) towards the minimum at infinity, as a function of the initial field value; the dependence is approximately linear. The data points have been obtained by direct simulation.\newline Right: the probability ratio $R$, evaluated with (\ref{Rmaximum}) in the steepest descent approximation, is monotonically decreasing with the position of the maximum of the potential.} \end{figure} \FloatBarrier \section{Conclusions} \label{Sec:Discussion} Eternal inflation remains a conceptual issue of the inflationary paradigm. The creation of scattered, causally disconnected regions of spacetime, the multiverse, is not confirmed observationally and raises questions about the inflationary predictions \cite{Ijjas:2014nta}. Hence, one may impose the no eternal inflation principle \cite{Rudelius:2019cfh} to restrict the free parameters and the initial conditions.\\ We have investigated popular inflationary models and have found that, in principle, eternal inflation is present for every asymptotically flat effective potential at large field values, assuming the ergodicity of the system. The finite inflationary time of our pocket universe serves as a consistency condition on the multiverse predictions. In section \ref{AlfaAttractorSection}, we verified that the $\alpha$-attractor T-models are consistent from this point of view.\\ If the initial value of the scalar field driving inflation is above the Planck scale, UV-completeness of the given model is necessary. Starobinsky inflation stemming from $R^2$ gravity gives around 60 e-folds for $\phi_0$=5.5 $M_{Pl}$.
We have considered effective quantum corrections to Starobinsky inflation based on the qualitative behavior of the running coupling constants. Next, the RG-improvement of the $R+R^2$ Lagrangian was studied, and we have found that the field values required for eternal inflation are typically higher in this case than for the Starobinsky model. The flatness of the potential and the possibility of eternal inflation seem to be a signature of asymptotically safe UV completions, in contradistinction to the effective theory corrections. We have checked that in the model of \cite{Liu:2018hno} one needs $\Phi \sim 1000\, M_{Pl}$ in order to get eternal inflation, which is far beyond the applicability of the model.\\ Furthermore, we have found that for potentials with multiple vacua, tunneling through potential barriers provides a new mechanism for eternal inflation, so in order to understand the inflationary dynamics one cannot simply cut the potential at the maximum. The $\mathrm{SU}(N)$ gauge theory with Dirac fermions provides an example of such behavior. The probability of tunneling to the side dominated by eternal inflation becomes negligible a few Planck masses away from the peak of the potential. Yet the fixed-point values of the couplings, and possibly the shape of the potential, can be obscured by quantum gravity effects; this shall be investigated elsewhere.\\ Our analysis reveals that there is no obstruction to the multiverse scenario in asymptotically safe models. Yet its occurrence depends on the initial conditions for the inflationary phase and on the matching to the observational data, tying these three profound issues together. On the other hand, in AS models these questions may find intriguing answers through the finite action principle \cite{Lehners:2019ibe}. \acknowledgments We thank J. Reszke and J. Łukasik for participating in the early stages of this project. We thank G. Dvali, A. Eichhorn, M. Pauli, A. Platania, T. Rudelius, S. Vagnozzi and Z.W. Wang for fruitful discussions and extensive comments on the manuscript. The work of J.H.K. was supported by the Polish National Science Center (NCN) grant 2018/29/N/ST2/01743. J.H.K. would like to acknowledge the CP3-Origins hospitality during this work. The computational part of this research has been partially supported by the PL-Grid Infrastructure.
\section{Introduction} It is estimated by the Indian Cancer Society that 55,100 new cases of melanoma are diagnosed each year in India \cite{india}. Worldwide, melanoma caused more than 60,000 deaths out of the 350,000 cases reported in 2015, and it causes one death every 54 minutes in the US~\cite{report}. Even though it accounts for as little as 1\% of all skin-related diseases, melanoma has become the major cause of death among them. The annual cost of healthcare for melanoma exceeds \$8 billion~\cite{A_Cost}. With early detection of melanoma, the 5-year survival rate can be increased up to 99\%, whereas delaying the diagnosis can drastically reduce the survival rate to 23\%~\cite{death} once the cancer spreads to other parts of the body. Hence, it is of fundamental importance to identify cancerous skin lesions at the earliest stage to increase the survival rate. A huge amount of time and effort has been dedicated by researchers worldwide to increasing the accuracy and scale of diagnostic methods. The International Skin Imaging Collaboration (ISIC) provides a publicly available dataset of more than 25,000 dermoscopy images, and it has been hosting a benchmark competition on skin lesion analysis every year since 2016. The previous year's challenge comprised three tasks on lesion analysis: Lesion Boundary Segmentation, Lesion Attribute Detection, and Lesion Diagnosis. The extraction of crucial information for accurate diagnosis and other clinical features depends heavily on the lesion segmented from the given dermoscopic image; therefore, segmentation of the lesion has been designated as a decisive prerequisite step in the diagnosis~\cite{rethink,survey}. Our work focuses primarily on the segmentation of skin lesions, which in itself is a challenging task. Skin lesions exhibit a huge variance in shape, size, and texture. While melanomas have very fuzzy lesion boundaries, further artifacts are introduced by hair, contrast, light reflection, and medical gauze, which makes it more difficult for CNN-based approaches to segment the lesions. \subsection{Related Work} In recent years, many pixel-level techniques for skin lesion segmentation have been developed. Initial work explored visual properties of skin lesions such as color and texture and applied classical techniques. Li et al.~\cite{Depth} proposed the use of the above-mentioned features with classical edge detection for a contour-based methodology. Garnavi et al. \cite{garnavi2009skin} combined histogram thresholding with the CIE-XYZ color space to segment the lesion. While classical approaches do not generalize well to unseen lesion images, deep convolutional neural network (DCNN) based approaches have proven to be a great success in generalizing over such tasks with improved accuracy and precision~\cite{unet,v-net}. Mishra et al. \cite{deep} presented an efficient implementation of UNet \cite{unet} and compared the improvement in performance against other classical methods. Many authors have used the same CNN-based method for segmentation as well as classification. In~\cite{twofrcn}, two fully convolutional residual networks (FCRN) were used to segment and classify skin lesions at the same time. Instead of processing only local features, much work has also focused on processing global information with CNNs. In~\cite{multires}, the authors made use of pyramid pooling to incorporate global context along with spatial information to produce location-precise masks.
Some work has also introduced the concept of processing features selectively. In~\cite{focusnet}, the authors made use of the Squeeze-and-Excitation~\cite{squeeze} network to incorporate attention for focusing only on the important parts of the feature maps.~\cite{gan2} and~\cite{denseres} made use of modified GANs (Generative Adversarial Networks), which involve training a generator and a discriminator network in an adversarial fashion, to generate accurate segmentation maps. Recently, two-stage CNN pipelines have also been explored. In~\cite{grabcut}, the authors used YOLO \cite{yolo}, an object detection network, for localizing the skin lesion and then employed a classical image segmentation algorithm for segmenting the lesion. Similarly,~\cite{shreyas} first employed Faster RCNN~\cite{frcnn} for detecting the lesion in the images and then employed a dilation-based autoencoder for segmenting the lesion. \section{Methods} We took inspiration from~\cite{grabcut} and~\cite{shreyas} to apply the pre-processing step of extracting the region of interest (ROI). We implemented the faster region-based convolutional neural network (Faster RCNN)~\cite{frcnn} along the lines of \cite{shreyas} as Stage 1 of our pipeline. The ROIs extracted by our detection network were given as input to our segmentation network to obtain the lesion mask, which served as Stage 2 of our pipeline. We named Stage 1 the Detector and Stage 2 the SegMentor. Our segmentation network is inspired by the UNet \cite{unet} and the Hourglass \cite{hg} networks. As previously experimented in \cite{grabcut} and \cite{shreyas}, the lesion areas localized and cropped by the detector from the given dermoscopic images were used to train the network along with the correspondingly cropped segmentation masks. Doing so increased the overall performance of their segmentation algorithms. The reason is that the prior removal of irrelevant features and nearby background pixels from the input images presents only relevant features to the segmentation stage, which helps the segmentation network achieve good and fast results. We used the Dice Similarity Coefficient (DSC), which is similar to the Jaccard Index~\cite{jaccard}, as the evaluation metric to tackle the issue of the imbalanced lesion-to-background ratio. We discuss the architectural uniqueness and the non-conventional training strategy followed for our model, in which components of the model were trained sequentially to attain optimal parameter values. We have also shown a comparison between model performances with and without employing Stage 1, i.e., melanoma localization, in table \ref{tab:results2}. The overall pipeline is summarized in figure \ref{fig:Overall}. We named the combined pipeline of localization and segmentation networks Detector-SegMentor. \subsection{Dataset} The ISIC challenge 2018~\cite{d1,d2} dataset was used for training. The ISIC 2018 challenge provided datasets for the skin lesion segmentation task along with attribute detection and lesion diagnosis tasks. The data was collected from various institutions and clinics around the world. The ISIC archive is the largest dermoscopic image library available publicly. The challenge provided a total of 2594 images with their corresponding ground truth masks. Image dimensions varied from 1022x767 to 6688x4439. Out of the available dataset, we used 2000 images solely for training purposes.
To extend our training data, we performed conventional augmentations such as horizontal and vertical flips, rotation, shear and stretch, central cropping, and contrast shifts. The final training dataset consisted of 30,000 images with their corresponding ground truth masks. Moreover, to showcase the generalizability of our model, we also used the ISBI 2017 \cite{d17} and PH\textsuperscript{2} \cite{ph2} datasets for validation. \begin{figure*} \centering \centering\includegraphics[height=1.6cm,width=12.2cm]{Overall2.jpg} \caption{An overview of the proposed pipeline.} \label{fig:Overall} \end{figure*} \subsection{Proposed Detector-SegMentor} The Faster-RCNN network in Stage 1 gives us the localized lesion: the Detector returns a set of coordinates in the input image that confine a lesion with a certain probability. The lesion area is cropped from the original image using the coordinates obtained from the Detector. The cropped image is then either resized (if larger than 512x512) or padded with zeros (if smaller than 512x512) to obtain an image of size 512x512x3, which is fed into the SegMentor (segmentation network) to generate the segmentation map; a sketch of this step is given below.
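As a minimal illustration of the crop-and-standardize step just described, the sketch below uses NumPy arrays; the function name, the top-left padding convention, and the nearest-neighbour rescaling are our own simplifying choices, not the authors' implementation:
\begin{verbatim}
import numpy as np

TARGET = 512  # the SegMentor expects 512x512x3 inputs

def crop_and_standardize(image: np.ndarray, box) -> np.ndarray:
    """Crop the detected lesion box from `image` (H x W x 3) and bring it
    to TARGET x TARGET x 3 by zero-padding small crops or rescaling
    large ones, as described in the pipeline."""
    x1, y1, x2, y2 = box
    crop = image[y1:y2, x1:x2]
    h, w = crop.shape[:2]

    if h <= TARGET and w <= TARGET:
        # Zero-pad, keeping the crop in the top-left corner.
        out = np.zeros((TARGET, TARGET, 3), dtype=crop.dtype)
        out[:h, :w] = crop
        return out

    # Otherwise rescale to TARGET x TARGET (nearest-neighbour for
    # brevity; a real pipeline would use bilinear interpolation).
    ys = (np.arange(TARGET) * h // TARGET).clip(0, h - 1)
    xs = (np.arange(TARGET) * w // TARGET).clip(0, w - 1)
    return crop[ys][:, xs]
\end{verbatim}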
\subsubsection{Stage 1: Detector} In Stage 1 of the proposed method, the skin lesion is localized in the input image and then passed on to Stage 2. The Detector can be divided into three major components, namely the base network, the region proposal network (RPN), and the RCNN. The base network generates a feature map of the input image to be used by the RPN. We used the ResNet50 network \cite{resnet} with its weights pre-trained on ImageNet. The RPN then acts on the feature map from the base network and outputs region proposals in the form of a set of anchor boxes, which have a high chance of containing the lesion. Thereafter, each proposed region is classified into lesion/non-lesion, and the bounding box coordinates of the proposal are trimmed by the RCNN to fit the lesion entirely. As in \cite{shreyas}, time-distributed convolution layers were used in the RCNN to avoid repeated classification and to accommodate the differing number of regions proposed by the RPN per image. Finally, non-maximum suppression with a threshold of 0.5 was applied to remove redundant boxes, and the coordinates were scaled back to the lesion in the original image. Figure \ref{fig:frcnn} depicts our Detector with its three major components. \subsubsection{Stage 2: SegMentor} After the lesion has been detected in Stage 1, the detected section is cropped from the original image or padded appropriately and given as input to the SegMentor. The segmentation network is designed in a network-in-network fashion. The base network is derived from the well-known UNet~\cite{unet}, which was a major breakthrough in biomedical image segmentation. We propose the use of a sequence of hourglass modules, which are smaller but effectively dense networks, at the bottleneck of the autoencoder. These modules enable us to further compress and better represent the bottleneck features so that they can be easily decoded by the decoder. The encoder and decoder of the hourglass module are connected with processed skip connections instead of the simple skip connections used in UNet \cite{unet}. We demonstrate the effect of using multiple hourglass modules at the bottleneck in table \ref{tab:results2}. The results show that the segmentation accuracy reaches a maximum for an optimum number of modules; increasing the number of modules beyond this point results in overfitting of the network. The following paragraphs give an insight into the encoder, the decoder, the hourglass modules, and the training strategy followed for the SegMentor network. The SegMentor is depicted in figure \ref{fig:seg}. \begin{figure*} \centering \includegraphics[height=4.7cm,width=12.3cm]{FasterRCNN_2.jpg} \caption{ Detector : Faster RCNN for skin lesion localization.} \label{fig:frcnn} \end{figure*} \begin{figure*} \centering \includegraphics[height=3.9cm,width=12.1cm]{Full_Network.jpeg} \caption { SegMentor : Encoder, Hourglass bottleneck and Decoder of the Segmentation network.} \label{fig:seg} \end{figure*} \textit{Encoder - Decoder : } The combination of an encoder and a decoder, termed an autoencoder, has been widely used across the literature for image-to-image translation tasks. Here, we exploit the same to obtain the mask of the skin lesion. Our encoder comprises convolution blocks where each block consists of 2 convolutional layers with filter size 3x3, each followed by batch normalization. We use max-pooling with stride 2x2 to downsample the features. Table 1 in the supplementary material and figure \ref{fig:seg} give the detailed filter sizes and parameters of the encoder. At the bottleneck of the encoder, a feature map of size 64x64x128 is fed into the hourglass modules to obtain a compressed feature map of size 64x64x256, which is then fed into the decoder of the model. The decoder again comprises convolutional blocks, each having 2 convolutional layers with filter size 3x3; the numbers of filters of the different blocks are 16, 64, and 128, the same as in the encoder. Detailed filter sizes and parameters are given in Table 4 in the supplementary material. The encoder and decoder are connected by long skip connections. This facilitates better gradient flow through them and hence tackles the issue of vanishing gradients in deep convolutional networks such as ours. The long skip connections between the encoder and decoder allow for the transfer of global features, whereas the hourglass modules provide local features in the form of compressed and better-extracted feature maps to the decoder. This ultimately generates sharp and location-precise masks. The hourglass modules help in tackling the variation in shape, size, and obstructions observed in the input images and make the model more robust to such variations, as explained in the next paragraph. \begin{figure*} \centering \centering\includegraphics[height=5.2cm,width=13.7cm]{Hourglass_3.jpg} \caption{Hourglass Module and Residual Block.} \label{fig:hg} \end{figure*} \textit{Hourglass Module : } Hourglass modules are dense autoencoders placed at the bottleneck of the main encoder-decoder model. They have long as well as short skip connections, allowing for a better flow of information across the network, which leads to better extraction of the required feature map at the output. The short skip connections make the network residual and hence avoid vanishing gradients while also transferring information at every step. Each residual block has 3 convolutional layers with batch normalization and a skip connection from the input layer to the output layer of the block, as given in table 2 in the supplementary material; a sketch of such a block follows.
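The following is a minimal \texttt{tf.keras} sketch of such a residual block, assuming 3x3 convolutions with batch normalization and ReLU activations; the activation placement and the 1x1 projection used to match channel depths are our illustrative guesses, since the exact configuration is specified only in the supplementary tables:
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters: int):
    """Residual block as described in the text: three 3x3 convolutions,
    each followed by batch normalization, plus a skip connection from
    the block input to its output."""
    shortcut = x
    for _ in range(3):
        x = layers.Conv2D(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
    if shortcut.shape[-1] != filters:
        # 1x1 projection so the addition is well-defined (illustrative).
        shortcut = layers.Conv2D(filters, 1, padding="same")(shortcut)
    return layers.Add()([shortcut, x])

# Example: one residual block applied to the 64x64x128 bottleneck map,
# producing a 64x64x256 feature map as in the text.
inp = tf.keras.Input(shape=(64, 64, 128))
model = tf.keras.Model(inp, residual_block(inp, filters=256))
\end{verbatim}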
Long skip connections between the hourglass-encoder and hourglass-decoder contain intermediate residual blocks that process the skipped information before it is concatenated into the hourglass-decoder. This makes the network heavily dense, allowing for better feature extraction and representation of the bottleneck features. Table 3 in the supplementary material and figure \ref{fig:hg} show the complete architecture of the hourglass module. \subsection{Training Strategy} The Detector was trained in a step-wise manner: first the RPN was trained, followed by the regressor, as described in~\cite{frcnn} and~\cite{shreyas}. Cross-entropy and categorical cross-entropy were used as the classification losses in the RPN and the RCNN respectively, while the mean squared error (MSE) was used as the regression loss for both. The masks provided by the ISIC 2018 challenge were used to create synthetic ground truths for bounding box regression. The training strategy mentioned in~\cite{shreyas}, which is a standard approach, was followed for the Faster-RCNN. We used a step-wise training strategy for the SegMentor as well: first, the autoencoder was trained for a few epochs to learn a representation of the data. Afterwards, a single hourglass module was introduced at the bottleneck of the encoder-decoder pair and trained while keeping the weights of the encoder-decoder pair frozen. A second hourglass module was then introduced after the first one and trained for a few epochs with the weights of all other components frozen; finally, the entire model was trained freely. Further modules can be introduced with a similar strategy. Table \ref{tab:results2} shows that as we increased the number of hourglass modules at the bottleneck of the main network, the performance of the framework increased up to a certain point, after which it declined rapidly due to overfitting on the training dataset. Hence, we stopped at 2 hourglass modules. We used the dice coefficient loss for our segmentation model, which is insensitive to class imbalance (poor foreground-to-background ratio); a sketch of this loss is given below. \begin{figure*} \centering \includegraphics[height=2.2cm,width=5.2cm]{data.jpg} \caption {Outputs at various stages of the pipeline. a) Lesion localized by the Detector, b) cropped image of the lesion, c) segmentation map from the SegMentor, d) final padded segmentation map.} \label{fig:data} \end{figure*} \section{Experimental Setup and Results} We evaluated our framework on the Dice Similarity Coefficient, Jaccard Index, Accuracy, Sensitivity, and Specificity. We used the Adam optimizer with learning rates of 0.00001 and 0.0002 for the Detector and the SegMentor respectively. It took roughly 6 hours to train the Faster RCNN and 13 hours to train the SegMentor alone for 90 epochs of the final end-to-end training with a single hourglass module, both on an NVIDIA GTX 1080 Ti with 12 GB of memory. For evaluation, we randomly set aside 594 images from the ISIC 2018 challenge (training) dataset \cite{d1,d2} purely for testing. Apart from ISIC 2018 \cite{d1,d2}, we evaluated our framework on the PH\textsuperscript{2} \cite{ph2} and ISBI 2017 \cite{d17} datasets. The SegMentor was trained in multiple steps: initially, only the encoder-decoder pair was trained for 20 epochs with a slightly smaller learning rate. Next, a single hourglass module was introduced at the bottleneck and trained alone for another 20 epochs with a slightly higher learning rate. Finally, the complete model was trained in an end-to-end manner for 90 epochs, with the loss converging after 50 epochs.
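For reference, here is a minimal NumPy sketch of the Dice coefficient used both as the evaluation metric and, in its one-minus form, as the training loss; the smoothing constant is an illustrative choice:
\begin{verbatim}
import numpy as np

def dice_coefficient(y_true, y_pred, smooth=1.0):
    """Dice similarity between a binary ground-truth mask and a
    predicted probability mask; `smooth` guards against empty masks."""
    y_true = np.asarray(y_true, dtype=np.float64).ravel()
    y_pred = np.asarray(y_pred, dtype=np.float64).ravel()
    intersection = np.sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (
        np.sum(y_true) + np.sum(y_pred) + smooth)

def dice_loss(y_true, y_pred):
    # Minimizing 1 - Dice maximizes overlap regardless of the
    # foreground-to-background ratio, unlike pixel-wise cross-entropy.
    return 1.0 - dice_coefficient(y_true, y_pred)
\end{verbatim}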
Tables~\ref{tab:results2} and~\ref{tab:results} show the comparison of results on ISIC 2018 \cite{d1,d2}, as well as our results on ISBI 2017 \cite{d17} and PH\textsuperscript{2} \cite{ph2}. The supplementary material includes the qualitative (both success and failure) and quantitative (for ISBI 2017) results. Also, to further check the robustness of our proposed method, we performed 5-fold cross-validation on the ISIC 2018 dataset; the values obtained were 0.94 (Accuracy), 0.903 (Dice), and 0.783 (Jaccard). From table \ref{tab:results2}, it can be seen that our method outperformed the well-known MaskRCNN \cite{mask}. Though MaskRCNN also contains a detector and a segmentation network, its training, unlike ours, is done in an end-to-end fashion; in our method, the Detector and SegMentor are trained separately but come together at the end. Also, MaskRCNN uses only simple 3x3 convolution layers to segment the lesion from the ROI, whereas we use a novel segmentation network altogether for this task. \begin{table*}[] \centering\caption{Comparison of results on ISBI 2017 validation set with UNet+2HG.} \begin{tabular}{|c|c|c|c|c|c|} \hline \textbf{Method} & \textbf{Accuracy} & \textbf{Dice} & \textbf{Jaccard} & \textbf{Sensitivity} & \textbf{Specificity} \\ \hline UNet \cite{unet} & 0.920 & 0.768 & 0.651 & 0.853 & 0.957 \\ \hline FocusNet \cite{focusnet} & 0.921 & 0.831 & 0.756 & 0.767 & \textbf{0.989} \\ \hline Tu et al. \cite{denseres} & 0.945 & 0.862 & 0.768 & 0.901 & 0.974 \\ \hline Mishra et al. \cite{deep} & 0.928 & 0.868 & 0.842 & 0.930 & 0.954 \\ \hline Proposed Method & \textbf{0.971} & \textbf{0.947} & \textbf{0.844} & \textbf{0.972} & 0.981 \\ \hline \end{tabular} \label{tab:results11} \end{table*} \begin{table*}[] \centering\caption{Results on ISIC 2018 validation set. In the table, UNet+nHG describes the network architecture used, where 'n' represents the number of hourglass modules. The methods used for comparison are taken from the report \cite{d1} of the organisers of ISIC 2018.} \scalebox{0.85}{% \begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{Method} & \textbf{Cropping Status} & \textbf{Accuracy} & \textbf{Dice} & \textbf{Jaccard} & \textbf{Sensitivity} & \textbf{Specificity} \\ \hline C. Qian (MaskRCNN) & - & 0.942 & 0.898 & 0.802 & 0.906 & 0.963 \\ \hline Y. Seok (Ensemble + C.R.F.) & - & 0.945 & 0.904 & 0.801 & 0.934 & 0.952 \\ \hline Y. Ji (Feature Aggregation CNN) & - & 0.943 & 0.900 & 0.799 & 0.964 & 0.918 \\ \hline
Y. Xue (SegAN) & - & 0.945 & 0.903 & 0.798 & 0.940 & 0.942 \\ \hline \multicolumn{7}{c}{} \\ \hline \multirow{2}{*}{UNet} & W/o FRCNN & 0.906 & 0.819 & 0.712 & 0.754 & 0.842 \\ \cline{2-7} & With FRCNN & 0.917 & 0.85 & 0.746 & 0.842 & 0.891 \\ \hline \multirow{2}{*}{UNet + HG} & W/o FRCNN & 0.928 & 0.841 & 0.746 & 0.821 & 0.924 \\ \cline{2-7} & With FRCNN & 0.943 & 0.874 & 0.761 & 0.906 & 0.946 \\ \hline \multirow{2}{*}{UNet + 2HG} & W/o FRCNN & 0.937 & 0.887 & 0.773 & 0.912 & 0.944 \\ \cline{2-7} & With FRCNN & \textbf{0.959} & \textbf{0.915} & \textbf{0.809} & \textbf{0.968} & \textbf{0.973} \\ \hline \multirow{2}{*}{UNet + 3HG} & W/o FRCNN & 0.921 & 0.866 & 0.756 & 0.893 & 0.931 \\ \cline{2-7} & With FRCNN & 0.939 & 0.878 & 0.779 & 0.923 & 0.958 \\ \hline \end{tabular}} \label{tab:results2} \vspace{6mm} \centering\caption{Results on the PH\textsuperscript{2}, ISBI 2017, and ISIC 2018 datasets with UNet+2HG.} \scalebox{0.85}{% \begin{tabular}{|c|c|c|c|c|c|} \hline \textbf{Dataset} & \textbf{Accuracy} & \textbf{Dice} & \textbf{Jaccard} & \textbf{Sensitivity} & \textbf{Specificity} \\ \hline PH\textsuperscript{2} & 0.979 & 0.952 & 0.891 & 0.975 & 0.988 \\ \hline ISBI 2017 & 0.971 & 0.947 & 0.849 & 0.972 & 0.981 \\ \hline ISIC 2018 & 0.959 & 0.915 & 0.809 & 0.968 & 0.973 \\ \hline \end{tabular} } \label{tab:results} \end{table*} \begin{figure*} \centering \includegraphics[height=6cm,width=7cm]{plot.jpg} \caption{Plot depicting the variation of accuracy with the number of hourglass modules in the bottleneck, on the ISIC 2018 validation set.} \label{fig:plot} \end{figure*} We present the most widely accepted comparative results on ISIC 2018 \cite{d1,d2} and report performance on the ISBI 2017 \cite{d17} and PH\textsuperscript{2} \cite{ph2} datasets. Most of the literature states comparative results on ISBI 2017 \cite{d17}, reports performance on PH\textsuperscript{2} \cite{ph2}, and only in very few cases uses the ISIC 2018 \cite{d1,d2} dataset. Since ISIC 2018 \cite{d1,d2} is essentially an extension of ISBI 2017 \cite{d17} and our work revolves mainly around the 2018 dataset, we give the comparative result on ISIC 2018 \cite{d1,d2} in the main paper. To better evaluate and analyze our method, we also give a comparative result on the ISBI 2017 \cite{d17} dataset in table \ref{tab:results11} and compare our method with existing state-of-the-art approaches. Figure \ref{fig:examples} shows the qualitative results of our method. \begin{figure*} \centering \includegraphics[height=9.5cm,width=6cm]{examples.jpg} \caption{Qualitative results. a) Original images, b) ground truth lesion masks and c) predicted lesion masks.} \label{fig:examples} \end{figure*} \section{Conclusions} The proposed methodology with multiple networks achieved state-of-the-art results on publicly available datasets, namely ISIC 2018. The results show confident lesion mask boundaries obtained from our network. However, the results were below the present state-of-the-art in terms of specificity, mainly in cases where the contrast of the lesion matched that of the nearby normal skin, so that some background was segmented as foreground. In the future, we will try to improve the performance in terms of specificity while simultaneously aiming for a single end-to-end network architecture that performs the detection and segmentation tasks together. We also plan to extend the generalization of the network to enable segmentation of skin lesions in images taken with ordinary mobile cameras. \bibliographystyle{splncs04}
\section{Analysis Methodology} \label{analysis} \noindent \textbf{Processing of EEG Bands and Metrics:} \label{process neural data} { We collected continuous data of raw EEG signals, the frequency band information of the signals (Alpha, Beta, Gamma, etc.), and Neurosky's post-processed, aggregated EEG metrics (Attention and Meditation). Three types of trials were considered in our recorded EEG data: (1) fake news, (2) real news, and (3) the resting state, intended for the participants' relaxation between the other trials. We allotted 2 seconds for the resting state, 30 seconds for each real/fake news trial, and 5 seconds for the response, and chopped the neural data per trial and per resting state using the onset and offset times of each. Thus, we have 40 data samples corresponding to the trials and resting states for each session. For the fake vs. real news trial analysis, we calculated the average of the frequency bands and metrics over each trial, which contains multiple rows of data. The same process was followed for the news trials vs. resting state analysis.} \noindent \textbf{Processing of Raw Data:} \label{process raw data} { \textit{ThinkGear Socket Protocol} \cite{thinkgearsocket} captures the raw EEG signal in the time domain. To analyze the differences between real and fake news trials, and between trials and the resting state, we converted the signals from the time domain to the frequency domain. To this end, we used the Signal Analyzer App (SAA) \cite{signalanalyzerapp} in MATLAB \cite{matlab}, a tool to visualize and analyze data by comparing signals in both domains. For each case, a high-pass filter (HPF) and a low-pass filter (LPF) were applied separately, with the passband frequency set to 0.5 for both. A low-pass filter passes frequencies below a cutoff and blocks higher ones, while a high-pass filter passes frequencies above a cutoff and blocks lower ones. } \noindent \textbf{Statistical Testing Methodology:} { For the statistical analysis, we performed two types of comparisons using IBM's SPSS statistical software \cite{IBMSPSS}: one for fake vs. real news trials, and another for news trials (fake and real) vs. the resting state. All statistical results in this paper are reported at a significance level $(\alpha)$ of 0.05. The Friedman test was used to test for the existence of differences within the above groups, and the Wilcoxon Signed-Rank Test (WSRT) was used to measure the pairwise differences between the different EEG bands and metrics underlying our analysis. The effect size of the WSRT was calculated using the formula $r=Z/\sqrt{N}$, where $Z$ is the value of the z-statistic and $N$ is the number of observations on which $Z$ is based. We used Cohen's criteria \cite{cohen1977statistical}, which classify an effect size $>.1$ as small, $>.3$ as medium, and $>.5$ as large. The statistically significant pairwise comparisons are reported with Bonferroni corrections. }
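Although our filtering and statistical analysis were performed in MATLAB and SPSS respectively, an equivalent SciPy sketch of the two steps is shown below; the sampling rate \texttt{FS}, the 4th-order Butterworth design, and the per-participant arrays are our own illustrative assumptions:
\begin{verbatim}
import numpy as np
from scipy import signal, stats

FS = 512  # assumed raw sampling rate of the headset, in Hz

def filter_raw(x, cutoff=0.5, kind="highpass"):
    """Butterworth high- or low-pass filter applied forwards and
    backwards (zero phase), mirroring the separate HPF/LPF passes."""
    b, a = signal.butter(4, cutoff, btype=kind, fs=FS)
    return signal.filtfilt(b, a, x)

def wilcoxon_with_effect_size(a, b):
    """Wilcoxon signed-rank test plus effect size r = Z / sqrt(N),
    with |Z| recovered from the two-sided p-value."""
    res = stats.wilcoxon(a, b)
    z = stats.norm.isf(res.pvalue / 2.0)
    return res.pvalue, z / np.sqrt(len(a))

# Hypothetical per-participant trial averages of one EEG band (N = 19).
rng = np.random.default_rng(1)
fake, real, rest = (rng.normal(m, 2.0, size=19) for m in (10.0, 10.1, 8.0))

# Friedman test for any difference across the three conditions ...
_, p_friedman = stats.friedmanchisquare(fake, real, rest)

# ... followed by pairwise Wilcoxon tests with Bonferroni correction.
n_comparisons = 3
p_raw, r = wilcoxon_with_effect_size(fake, real)
p_corr = min(1.0, p_raw * n_comparisons)
print(f"Friedman p={p_friedman:.3f}; "
      f"fake vs real: corrected p={p_corr:.3f}, r={r:.2f}")
\end{verbatim}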
\section{Background \& Prior Work} \label{sec:back} \subsection{EEG Overview} \label{eegoverview} { Electroencephalography (EEG) is a technique for measuring the electrical activity generated by the brain. EEG's fine temporal resolution makes it well suited for measuring the time progression of brain activation among the different regions of the brain. For adults, the amplitude of the signal is between 1uV and 100uV, and approximately 10mV to 20mV when measured with subdural electrodes \cite{neuroskybrainwave}. Today, EEG is used in several applications, such as the diagnosis of sleep disorders, anesthesia, and tumors. Neuroscience, cognitive science, and many other academic fields routinely benefit from brain signal measurements using EEG. } \subsection{EEG Device and Neural Metrics Used} { In our study, we used a commercial EEG headset called MindWave Mobile 1, manufactured by Neurosky \cite{neuroskymindwave}, to collect brain wave signals as people are subjected to fake vs. real news. This headset has an adjustable headband, a sensor tip/arm, and an ear clip, and it connects to a computer via Bluetooth. The device reads the raw brain wave signals and reports the frequency bands listed in Table \ref{tab:frequencybands}. In general, an EEG signal is characterized by its frequency band; examples of the EEG frequency bands defined by Neurosky are Delta, Theta, Alpha, Beta, and Gamma. Table \ref{tab:frequencybands}, drawn from \cite{neuroskybrainwave}, outlines these bands and the associated frequency ranges and mental states they represent. As can be seen, each of the frequency bands helps to characterize different behavioral activities such as movement, thinking, and decision making. While Delta is related to deep sleep, Theta is associated with imagination. Alpha and Beta waves, on the other hand, imply relaxation and awareness, respectively. Finally, Gamma is implicated in motor functions and higher mental activity. To read the values of the Attention and Meditation metrics, we used \textit{ThinkGear} \cite{thinkgearsocket}, which provides the ability to connect to the device. Here, Attention captures the user's focus, attentiveness, or engagement level (related to visual processing and decision making), and Meditation represents the user's calmness or relaxation level \cite{neuroskymindwave}. Both range from 0 to 100: values of 0 to 20 indicate a strongly lowered level, 20 to 40 a reduced level, 40 to 60 a neutral level, 60 to 80 a slightly elevated level, and 80 to 100 a strongly indicative level \cite{neuroskymindwave}.
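As a small illustration of the metric ranges just described, the sketch below maps a reported Attention or Meditation value to its interpretation band (the function name and the half-open boundary convention are our own choices):
\begin{verbatim}
def metric_level(value: int) -> str:
    """Map a Neurosky Attention/Meditation value (0-100) to the
    interpretation ranges described in the text."""
    if not 0 <= value <= 100:
        raise ValueError("metric values lie in [0, 100]")
    if value < 20:
        return "strongly lowered"
    if value < 40:
        return "reduced"
    if value < 60:
        return "neutral"
    if value < 80:
        return "slightly elevated"
    return "strongly indicative"

assert metric_level(75) == "slightly elevated"
\end{verbatim}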
\begin{table}[t] \centering \scriptsize \caption{EEG frequency bands and related brain states \cite{neuroskybrainwave}} \begin{tabular}{|l|l|p{4.5cm}|} \hline \textbf{\begin{tabular}[c]{@{}l@{}}Frequency\\ Band\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Frequency\\ range\end{tabular}} & \textbf{Mental states and conditions} \\ \hline \hline \textbf{Delta} & 0.1Hz to 3Hz & \begin{tabular}[c]{@{}l@{}}Deep, dreamless, sleep, non-\\ REM sleep, unconscious\end{tabular} \\ \hline \textbf{Theta} & 4Hz to 7Hz & Intuitive, creative, recall, fantasy, imaginary, dream \\ \hline \textbf{Alpha} & 8Hz to 12Hz & Relaxed but not drowsy, tranquil, conscious \\ \hline \textbf{Low Beta} & 12Hz to 15Hz & \begin{tabular}[c]{@{}l@{}}Formerly SMR, relaxed yet\\ focused, integrated\end{tabular} \\ \hline \textbf{\begin{tabular}[c]{@{}l@{}}Midrange\\ Beta\end{tabular}} & 16Hz to 20Hz & \begin{tabular}[c]{@{}l@{}}Thinking, aware of self \&\\ surroundings\end{tabular} \\ \hline \textbf{High Beta} & 21Hz to 30Hz & Alertness, agitation \\ \hline \textbf{Gamma} & 21Hz to 30Hz & \begin{tabular}[c]{@{}l@{}}Motor Functions, higher\\ mental activity\end{tabular} \\ \hline \end{tabular} \vspace{-2mm} \label{tab:frequencybands} \vspace{-1mm} \end{table} } \subsection{Related Work} \label{Related-Work} { Our study centers on user-centered fake vs real news detection, i.e., the users’ ability to determine whether a given news item is real or fake. Researchers have conducted different studies in this area. Most closely relevant is the study reported by Chan et al. \cite{manpuisally} which is a meta-analysis of the factors underlying effective messages to counter attitudes and beliefs based on disinformation. Barthel et al. \cite{michaelbarthel} conducted a study with the findings that most Americans suspect that made-up news is having an impact and has left them confused about basic facts. The study reported almost a quarter of participants stated that they have posted a made-up news story, intentionally or not. Moreover, Americans see fake news as causing a great deal of confusion in general. However, most of them are confident in their ability to identify when news is false. Vosoughi et al. \cite{Vosoughi1146} conducted a study on the spread of true and false news online. They found that falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information. These effects were more pronounced for political news than for fake news about terrorism, natural disasters, science, urban legends, or financial information. It was found that fake news was more novel than real news, which suggests that people were more likely to share novel information. Using the cognitive science approach, Pennycook et al. \cite{pennycook2019} investigated whether cognitive factors motivate belief in or rejection of fake news. They conducted two studies in the paper, utilizing the Cognitive Reflection Test (CRT) as a measure of the proclivity to engage in analytical reasoning. Their results suggest that analytic thinking plays an important role in people’s self-inoculation against political disinformation. The reason people fall for fake news is they do not think. Hoang et al. \cite{hoang2012} investigated user vulnerability as a behavioral component in predicting viral diffusion of general information rather than fake news. Wagner et al. 
\cite{wagner2012} explored user susceptibility to fake news generated by bot activities, limiting their definition of `susceptible users' to those who interacted at least once with a social bot. Shen et al. \cite{shen2019} focused on several degrees of susceptibility to fake news by applying machine learning methods. A related problem is fake vs. real website detection under phishing attacks, and several studies have been conducted in this problem space. Dhamija et al. \cite{Dhamija:2006} reported a study in which participants were asked to identify real and fake websites in the context of phishing attacks; the participants chose wrong answers almost 40\% of the time. In line with this, Neupane et al. \cite{neupaneEEG,neupanefNIRS,neupane2014neural} conducted fMRI, EEG, and fNIRS studies of users' mental processing of real and fake websites. Their results show a significant increase in activation in several areas of the brain when people view fake vs. real websites, even though users' accuracy in identifying the legitimacy of the sites was close to 50\%. The study of \cite{voice-ndss} investigated users' processing of real vs. fake voices in the context of voice impersonation attacks, with findings similar to those of the above studies. \begin{table}[ht!] \centering \footnotesize \caption{An Outlook of Real-Fake Studies in Related Work~vs. Our Work} \begin{tabular}{|p{3cm}|p{2cm}|p{2.2cm}|} \hline {\bf Artifact Type} & {\bf Neural Activity Diff.} & {\bf Behavioral Response Diff.} \\ \hline \hline Websites under phishing & Present & Nearly absent \\ \hline Paintings & Present & Nearly absent \\ \hline News (Our Work) & Absent & Absent \\ \hline \end{tabular} \label{my-label} \end{table} The Stanford History Education Group conducted a study \cite{sheg} in which participants (school/college students) were asked to tell apart different kinds of media content (e.g., news and ads), identify real vs. fake Facebook accounts, and assess the reliability of Facebook posts (i.e., whether a website is trusted). Our study, in contrast, investigates real vs. fake news detection in both the behavioral and neural domains. Different from many of the above studies, our work aims at eliciting a deeper understanding of users' susceptibility to fake news by focusing not only on people's behavioral responses as to whether the news is real or fake, but also on their neurological responses captured via EEG during the decision-making process. Moreover, following the results of prior studies \cite{neupaneEEG,neupanefNIRS,neupane2014neural}, we wanted to test whether people's brain activities differ when they are subject to fake vs. real news, even if they may not be able to identify the difference behaviorally. To this end, we developed an EEG headset configuration for studying brain activity during fake news detection. Table \ref{my-label} summarizes our work against other real-fake detection studies. } \section{Concluding Remarks} \label{CFW} In this paper, we investigated the disinformation (fake news) attack susceptibility of human users via experimental neuro-cognitive science, conducting an EEG study of human-centered fake news detection. We identified the neural underpinnings that control people's responses to real and fake news articles as they attempt to identify the legitimacy of these articles by reading through them. We showed that there are significant differences in neural activation when users are evaluating real or fake news articles vs.
resting, and when they are evaluating different types of fake news articles. However, we did not observe statistically significant, or machine-learnable, differences in neural activity when users were subject to real vs. fake news articles, irrespective of their behavioral responses. We believe that this neuro-cognitive insight from our work helps explain users' susceptibility to fake news attacks, as also demonstrated by our behavioral task performance results. \section{Study Design \& Data Collection} \label{Design of Experiments} \subsection{Ethical and Privacy Considerations} {Our study was approved by the Institutional Review Board (IRB) at our university. Participation in the study was completely voluntary and followed informed consent. Participants were free to leave the study at any point if they did not feel comfortable. We utilized best practices to ensure that the participants' private data (such as the survey results and the neural data obtained during the experiment) was protected and duly anonymized.} \subsection{Design of the News Credibility Detection Task} {We designed an experimental task in which the participants were asked to read different (real and fake) news articles while wearing the MindWave Mobile 1 headset, which recorded their brain activity during the task. Each participant was asked to read 40 articles (20 fake and 20 real). The articles were drawn from the dataset used in the study by Zannettou et al. \cite{ZannettouCCKLSS17}. The dataset covers 99 news sites gathered from millions of comments, threads, and posts on Twitter, Reddit, and 4chan. These websites\footnote{The list of the 99 sites is available at \url{https://drive.google.com/file/d/0ByP5a__khV0dM1ZSY3YxQWF2N2c/view?usp=sharing\&resourcekey=0-nyqLTloH-4PFfOscFB0G1Q}} include 45 mainstream sites (characterized as real news) from the Alexa top 100 news sites and 54 alternative websites (characterized as fake news) from Wikipedia\footnote{\url{https://en.wikipedia.org/wiki/List_of_fake_news_websites}} and FakeNewsWatch \cite{fakenewswatch}. We used this dataset as follows. Because of the time elapsed since its compilation, we first tested whether the alternative domains were still working and found that 24 domains had already expired. From the remaining 75 domains, we randomly chose 200 news items (100 fake and 100 real), taking into account the length of the news (readable in 30 seconds), to create our small dataset. Then, we again randomly chose 20 fake and 20 real news articles from this small dataset, taking the same number of articles from the different categories, such as politics, daily news, and other (sports, science, business, entertainment, etc.) news. Since our goal was to assess the credibility of the news articles themselves, we showed the participants only the textual content of these articles, rather than side information such as URLs and advertisements. We believe this aspect of our study design is crucial and, in fact, a strength of our work because, as demonstrated in most phishing attack studies \cite{Dhamija:2006, neupaneEEG}, people usually do not understand or heed URLs. Further, fake news is often hosted on legitimate websites whose URLs users already trust, and fake news frequently propagates via users' social media accounts with textual postings of the news content.
In the study, the articles were pre-downloaded for offline use and hosted on a local web server, to be displayed in the Firefox browser. We modified the shown pages to a uniform style for all articles: black text on a white background. We also removed the images from the original news articles (if any). The URLs were hidden during the experiment. After reading each article, the participants were asked to identify whether it was real or fake. The exact flow of the experiment was as follows. First, an instruction page with the directions for the experiment was shown for 20 seconds, and a start page informed the participants of the beginning of the experiment. Then, a resting page, meant for participant relaxation and for eliciting baseline brain activity, was shown for 2 seconds. Right after this, a trial (fake or real) news article was displayed for 30 seconds, a sufficient amount of time for the participants to read and analyze the article, as the articles were of average to short length. The order of presentation of the news articles was randomized per participant. After each news trial, we asked the participants, ``Do you think the shown news is real?'' with ``Yes'' and ``No'' answer choices. Finally, a goodbye page was displayed for 5 seconds to end the experiment. Figure \ref{FlowChart} visualizes the experimental flow. } \begin{figure}[h!] \centering \vspace{-3mm} \includegraphics[scale=.46]{flowdiagram.png} \caption{Flow diagram representing the design of the experiment.} \vspace{-1mm} \label{FlowChart} \end{figure} \vspace{-2mm} \subsection{Experimental Set-Up} {In our experiment, participants were asked to wear the EEG headset on their head. While they performed the experimental task, we recorded their brain activity in terms of EEG frequency bands, metrics, and raw signals (defined in section \ref{sec:back}). For the experimental setup, we developed Python scripts to read the EEG data over a socket. We used a PC monitor with a screen resolution of 1920x1080. Along with the EEG data, we recorded timestamps of when the news articles were shown and when the participants were resting. } \subsection{Study Protocol} \label{(study-protocol)} {\noindent \textbf{Participant Recruitment and Preparation Phase:} { Twenty-three healthy people (mainly students and workers at our university) participated in our study. After we eliminated polluted data samples containing noisy brain signals (due to participants moving excessively), the final sample came to nineteen participants. Each participant completed the study in about 30 minutes. After giving consent, the participants were asked to provide their demographic information (such as gender, education level, and current job). Most of our participants were in the range of 18-30 years (58\%) and male (58\%). Most of them had at least a bachelor's degree (84\%), and 32\% currently had a job. Table \ref{particpantdemographics} provides a summary of the demographic information of the participants. Previous power analysis studies have found 15 to be an optimal number of participants for such studies; for example, a statistical power analysis of ER-design fMRI studies demonstrated that 80\% of clusters of activation proved reproducible with a sample size of 15 subjects \cite{murphy2004}. Our participant demographics are also well aligned with prior neuroimaging security studies \cite{andersen-fmri,neupaneEEG,neupanefNIRS,neupane2014neural}.} \begin{table}[h!]
\scriptsize \centering \caption{Summary of Participant Demographics} \begin{tabular}{|l|l|r|} \hline \textbf{Demographics (N=19)} & & \textbf{\%} \\ \hline \hline \textbf{Gender} & \begin{tabular}[c]{@{}l@{}}Male\\ Female\end{tabular} & \begin{tabular}[c]{@{}r@{}}57.9\\ 42.1\end{tabular} \\ \hline \textbf{Age} & \begin{tabular}[c]{@{}l@{}}18-24\\ 25-30\\ 31-40\\ \textgreater{}40\end{tabular} & \begin{tabular}[c]{@{}r@{}}26.3\\ 31.6\\ 21.1\\ 21.0\end{tabular} \\ \hline \textbf{Education} & \begin{tabular}[c]{@{}l@{}}Bachelor's degree\\ Master's degree\\ Doctoral degree\\ Others\end{tabular} & \begin{tabular}[c]{@{}r@{}}52.6\\ 26.3\\ 5.3\\ 15.8\end{tabular} \\ \hline \textbf{Employment status} & \begin{tabular}[c]{@{}l@{}}Employed\\ Non-Employed\end{tabular} & \begin{tabular}[c]{@{}r@{}}31.6\\ 68.4\end{tabular} \\ \hline \textbf{Frequency of Reading news} & \begin{tabular}[c]{@{}l@{}}Daily\\ Several times a week\\ Once a week\\ Others\end{tabular} & \begin{tabular}[c]{@{}r@{}}21.1\\ 47.4\\ 21.1\\ 10.4\end{tabular} \\ \hline \textbf{Credibility of the News} & \begin{tabular}[c]{@{}l@{}}Care\\ Do not care\end{tabular} & \begin{tabular}[c]{@{}r@{}}73.7\\ 26.3\end{tabular} \\ \hline \end{tabular} \label{particpantdemographics} \vspace{-1mm} \end{table} \noindent \textbf{Task Execution Phase:} {To recall, NeuroSky's MindWave Mobile 1 headset was used to collect the EEG data for the experiment. We placed the headset on the head of each participant, then started the experiment and recorded the brainwaves simultaneously. The ThinkGear software (Section \ref{eegoverview}) was used as an interface. This software includes real-time artifact removal for muscle movement and environmental/electrical interference such as spikes, saturation, and head movement.} \noindent \textbf{Post-Experiment Phase:} {Once the experiment was completed, a post-experiment survey page, designed to probe participants' interest in news articles and their strategies for assessing credibility, was loaded into the browser. 68\% of the participants said they read news from news websites, and 89\% read news at least once a week. The most common answer to the question of caring about news credibility was ``Yes'' (73\%). When asked what their strategy was for identifying whether news is real or fake, one participant said that when the news is about an opinion or a reporter's thinking, he does not believe the story. Some claimed that when news sounds unreal, they prefer to verify it from several sources. } \section{Discussion and Future Directions} \label{Discussion} \subsection{Summary and Insights} The participants in our study showed significantly increased activation when reading news articles compared to the resting state. The EEG band Delta and the EEG metrics Attention and Meditation, which are related to decision making, recall, alertness, and consciousness, indicated higher neural activation when participants were engaged with news articles than in the resting state. However, the results also show that participants' neural activations did not differ between fake and real news articles, and that participants could not do a good job of identifying real vs. fake news articles behaviorally. We also performed a categorical analysis to determine whether the brain reacts differently to different categories of news articles, including politics, daily news, and other news.
We noticed higher brain activation when participants were reading articles than in the resting state in each of the categories, but still no differences between real and fake news articles within each category. Further, the between-category comparisons of neural activation revealed a significant difference in the Attention metric between daily fake news articles and other fake news articles. Overall, these results suggest that users were certainly putting considerable effort into reading the news articles of different categories and making real vs. fake decisions, but they could not make sound decisions in identifying real and fake news articles, as reflected by their brain activity and behavioral responses. Although this lack of statistical differences does not necessarily mean that differences between real news trials and fake news trials definitively do not exist, our results cast serious doubt on the presence of such differences. Perhaps the poor neural and behavioral performance in real vs. fake news identification was an outcome of participants' unawareness of what precisely constitutes fake news. Another reason could be that the article selection was very good for both the real and fake types, so it may have been hard to differentiate between the real and the fake articles. Also, we had hidden the actual URLs in our experiment, and therefore there was no way to identify real or fake news without reading the content of the article. Nevertheless, these findings are unique, especially given that subconscious neural activity differences have been found in real vs. fake decision making for other artifacts, such as websites in phishing attacks and paintings, while users were still failing at these tasks behaviorally, as reported in prior studies \cite{huang2011human,neupaneEEG,neupanefNIRS,neupane2014neural}}. A consequence of the above finding is that human users may be truly susceptible to fake news attacks because their brains themselves may not be capable of spotting the differences between fake and real news articles. In this light, secure online social media and news media should not rely upon user inputs or their brain signals to detect fake news; rather, automated technical approaches should be emphasized that can alert users to the possible presence of such attacks. However, our results have a significant real-world implication for automated detection methods as well: since AI techniques for fake news detection often have to rely crucially upon human decision making (e.g., for labeling to train and re-train classification models) (e.g., \cite{oshikawa2018survey,8966734,wang2017liar,HadeerA2017}), our result suggests that these techniques themselves may not be as robust as one would expect in classifying real vs. fake news. As a case in point, in the study of \cite{MIT}, a manually labeled dataset was cross-checked, and it was found that some of the true (real) articles were not true while some false (fake) articles were not false. This study supports our argument that human-annotated news may give rise to biased training models in automated approaches. As a result, we believe that, in light of our results, even machine learning and AI-based fake news detection methods may not be fully reliable. As our other key result, we found significant differences in the Attention metric for the daily-other pair of fake news trials.
This implies that our participants were more engaged in reading other fake news articles than daily fake news articles. This is perhaps because they did not find the daily fake news articles closely similar to real-life instances, whereas the other fake news articles might have been more interesting or relevant to them, so they may have read those with full awareness or attentiveness. We believe it is an interesting finding that the human brain may react differently to different types of fake news articles, which may make it possible to learn what category of fake news is more interesting to users. This could allow fake news designers and other malicious actors to perform targeted attacks against users based on a specific category of fake news that users might find more engaging and might be more likely to believe. As for improving users' behavioral performance and neural activation in the real vs. fake news detection task, one possibility is the use of specialized training programs (e.g., \cite{Interland, Badnews}). Such training may help users look for specific cues that help distinguish fake news from real news. The impact of such training programs on users' ability to detect fake news could be explored in future research. We conducted a posthoc power analysis and calculated the observed power values. The observed power of the metric pairs that yielded statistically significant differences was 99\%, 98.7\%, 100\%, and 97.4\%, while the others were below 70\%. We believe that our study has sufficient power since the significant pairs have power above 90\%. \subsection{Study Strengths and Limitations} We believe that our study has several strengths. The neurophysiological sensor chosen for our study is a lightweight, easy-to-wear, wireless EEG headset that allowed us to collect brain waves almost transparently \cite{neuroskyconf}. Another strength is that we focused on the content of the news articles rather than side information such as URLs and advertisements. This is important because, as frequently seen in phishing attacks, people usually fail to assess URLs. Also, fake news can often be hosted on trustworthy news sites (not just alternative or malicious sites) whose URLs users already trust. Similar to any other study involving human subjects, ours also had certain limitations. The study was conducted in a lab environment, which may have impacted the performance of the participants since they might not have felt real security risk. Our participant sample comprised a majority of young students, which represents a common constraint of university lab studies. However, our sample exhibited some diversity with respect to educational background. Moreover, our sample, especially in terms of age, was close to the group of users who use the Internet frequently and who are supposedly more vulnerable to disinformation. Future studies might be needed to further validate our results with broader participant samples and demographic groups. One limitation of our study pertains to the number of trials presented to the participants. Although multiple trials are the norm in EEG (and brain-imaging) experimental design to achieve a good signal-to-noise ratio, participants would hardly face as many security-related events (news trials in our case) in a short span of time in real life. Our behavioral results in the fake news experiment are still well aligned with prior work \cite{Vosoughi1146,michaelbarthel}.
Another limitation relates to our machine learning-based detection: deep learning techniques may be able to identify differences that our standard machine learning techniques could not. This would need to be explored in future work and would require larger datasets. Finally, our participants were explicitly asked to identify a news article as real or fake, whereas in a real-world attack the victims are driven to a fake news article from some primary task (e.g., their personal social media account page), and the decision about the legitimacy of the article needs to be made implicitly. Nevertheless, the users ultimately have to decide on the legitimacy of the article. Our results show that, despite being asked explicitly, users (and their brains) are not able to detect the legitimacy of the article accurately, and the result may be even worse in a real-world attack where the decisions are made implicitly. \section{Introduction} \label{sec:intro} The spread of social disinformation on the Internet (informally referred to as ``fake news'') is an emerging class of immensely powerful threats, especially given the widespread deployment of social media; it is seen in a variety of contexts and can severely harm everyday Internet users and society in many different ways at a global scale. Fake news attacks are used, for instance, to earn money from advertisements \cite{news-advertisement}, to defame an agency, entity, or person, to impact people's behavior, or even to influence the results of elections, as seen in the last mainstream elections in the US, India, and Brazil \cite{election1,election2,election3}. Susceptibility to such fake news attacks depends on whether users consider a fake news article/snippet to be legitimate or real. Unfortunately, behavioral studies demonstrate that the general user population may often believe fake news to be real news and indeed be fallible to fake news attacks \cite{Vosoughi1146,michaelbarthel}. Due to the prevalence and rapid emergence of the fake news threat in the wild, it is paramount to understand users' intrinsic psychological behavior that controls their processing of (fake and real) news articles and their potential susceptibility to fake news attacks. In this paper, we employ the brain-imaging methodology adopted in a recently introduced line of research (e.g., \cite{andersen-fmri,neupaneEEG,neupane2014neural}) to closely assess user behavior in the specific context of disinformation susceptibility. Specifically, we study users' neural processes (in addition to their behavioral performance) to understand and leverage the {neural} mechanics at play when users are subjected to fake news attacks, using a state-of-the-art, well-established brain-imaging technique called the \textit{electroencephalogram (EEG)}. Our primary goal is to study the {neural underpinnings} of fake news detection and to analyze differences (if any) in neural activities when users process (read and decide between) fake and real news. We examine how the information present in the neural signals may be used to explain users' susceptibility to fake news attacks. \textbf{We focus on \textit{text-centric} news articles, where only the most fundamental information inherent to the article, i.e., the title and textual content, is shown to the user (and no auxiliary information such as images, the URL, the hosting website, or the advertisements embedded within the site)}. Such text-centric fake news is often spread via social media posts or text messaging platforms.
Some of the prior studies {\cite{huang2011human,neupaneEEG,neupanefNIRS,neupane2014neural}} have shown that {neural differences are present when users are subject to different types of real vs. fake artifacts, even though users themselves may not be able to tell the two apart behaviorally}. Neupane et al. \cite{neupaneEEG,neupane2014neural,neupanefNIRS} noted differences in neural activities when users were processing real and fake websites in the context of phishing attacks. Similarly, Huang et al. \cite{huang2011human} reported differences {in neural activation} when users were asked to assess real and fake Rembrandt paintings. In light of these prior results, we launched our study to also test the {hypothesis that the human brain might be activated differently when users are subject to fake and real news}. The implications of the neural activity differences, if present, when processing real vs. fake news, can be far-reaching, as these differences could be automatically mined and analyzed in real-time, and the user under the fake news attack could be informed as to the presence/absence of the attack, even though the user may himself have failed to detect the attack (behaviorally). Neupane et al. \cite{neupanefNIRS} suggested such a scheme for detecting phishing attacks merely based on neural activities. {On the other hand, it is crucial to note that if the differences do not exist, this may underline a fundamental vulnerability of human biology which may prevent users from telling the fake artifact from the real artifact.} Our study is designed to investigate the same aspect, but in the independent application domain of fake news attacks and disinformation susceptibility. In this domain, the user is supposed to be detecting fake news based on the \textit{title/content of the news}, not based on side information such as the URL hosting the news, since fake news can often appear on legitimate websites. This contrasts with the domain of phishing attacks where URLs are the most important indicators of the presence of the attack. \noindent \textbf{{Our Contributions:}} \label{Our Contributions and Results} We design and conduct an EEG study to pursue a thorough investigation of users' perception and mental processing of fake and real news. Our study asks the participants to identify real vs. fake news articles, drawn from the study of Zannettou et al. \cite{ZannettouCCKLSS17}, presented in a randomized order solely based on the title/content of the articles. We provide a comprehensive analysis of the collected EEG data set, including the EEG frequency bands (such as alpha, beta, and gamma), the aggregated EEG metrics (attention and meditation) and the raw EEG signals, as well as the behavioral task performance data set, across different categories of news articles (political, daily, and miscellaneous news). Unlike the prior studies on real-fake websites and paintings quoted above, we do not observe differences in the way the brains process fake news vs. real news when subject to fake news attacks, although marked differences are seen between neural activity corresponding to (real or fake) news vs. the resting state (participants not doing any mental activity) and between some of the different categories of news articles (such as daily fake news vs. other miscellaneous fake news). That is, fake news seems nearly indistinguishable from real news from the neuro-cognitive perspective.
This rather negative/null result may serve well to explain users' susceptibility to such attacks, as also reflected in our task performance results (similar to those reported in \cite{Vosoughi1146,michaelbarthel}). We further tested several machine learning algorithms based on statistical features, and even they failed at distinguishing between fake and real news with a probability significantly better than 50\% (equivalent to random guessing). Since this very likely indistinguishability of real vs. fake news lies at the core of human biology, we conjecture that the problem is very critical, as the human detection of fake news may not improve over time even with human evolution, especially because malicious actors may continue to come up with more advanced and surreptitious ways to design fake news. Also, our study participants are mostly young and educated individuals with no reported visual disabilities, while older/less educated population samples and/or those having visual disabilities may be more vulnerable to fake news attacks \cite{dubno1984effects}. We cautiously \textit{do not} claim that the acceptance of the null hypothesis in our work necessarily means that the neural differences between real and fake news are definitively absent, i.e., further studies might need to be conducted using other brain-imaging techniques and wider samples of users. However, our work does cast serious doubt regarding the presence of such differences, which also aligns well with our behavioral results, thereby explaining the high potential of user-centered disinformation susceptibility. The presence of neural activity differences between some types of fake news seen in our study could also have interesting implications. For instance, it could be used by malicious players to design targeted fake news attacks. \smallskip \noindent \textbf{Broader Significance, and Negative Implications to Automated Detection Techniques:} We believe that our work helps to advance the science of human-centered disinformation susceptibility in many unique ways. It also serves to reveal the fundamental neural basis underlying fake news attacks, and highlights users' susceptibility to these attacks from both the neural as well as the behavioral domain. In light of our results, perhaps the best way to protect users from such attacks would be by making them more aware of the threat, and possibly by developing technical solutions to assist the users. The security and disinformation community has certainly been working towards urgently developing automated fake news detection techniques which could aid the end users against fake news-based scams, whenever possible. However, there is a critical cyclic dependency here. Since these automated techniques themselves rely upon human-based or crowd-sourced labeling of fake vs. real news to train (and re-train) the underlying classification models (e.g., \cite{oshikawa2018survey,8966734,wang2017liar, HadeerA2017, Granik, FIGUEIRA2017817}), our work raises a question on the robustness of such models when used in the real world. \smallskip \noindent \textbf{Why is this a Computer Security Study?} Although our work is informed by neuroscience, it is deeply rooted in computer security and provides valuable implications for the security community.
First and foremost, disinformation and fake news is a security threat that leads to a large variety of financial and political scams, and social engineering trickeries against casual Internet users\footnote{As a colloquial example of security relevance, disinformation was a topical area of a keynote speech at the 2019 NSF SaTC PI meeting.}. Said differently, detecting fake news is a user-centered security task, similar to detecting phishing or other similar attacks. {Second, we conduct a neuroimaging-based user study and show why malicious actors might be successful at fake news attacks.} Many similar security studies focusing on human neurobiology have been published as an ongoing line of research in mainstream security/HCI venues, {e.g.,~\cite{andersen-fmri,neupaneEEG,neupanefNIRS,neupane2014neural}}. How users perform at crucial security tasks from a neurological standpoint is therefore of great interest to the security community. This research line, followed through in our work, provides novel security insights and lessons that are \textit{not feasible} to elicit via behavioral studies {alone}. For example, prior studies {\cite{neupaneEEG,neupanefNIRS,neupane2014neural}} showed that security (phishing) attacks can be detected based on neural cues, although users may themselves not be able to detect these attacks. Along this dimension, our work conducted an EEG study to dissect users' behavior under fake news attacks, a rather understudied attack vector. Our results show that even brain responses cannot be used to detect such attacks, which serves to explain why users are so susceptible to these attacks. \section{Brain Signal Analytics: \\Fake vs. Real News Classification} \label{auto} Statistically, we did not observe differences in users' neural activities in the fake vs. real news identification task. In this section, we classify the real vs. fake news neural activity data using machine learning algorithms for further validation of the statistical result. \subsection{Features and Classification Algorithms} \label{feature-extraction} We normalized the data obtained from all 19 participants using the average of each of the EEG bands and EEG metrics for each participant. Then, we calculated various statistical features, i.e., the maximum, minimum, mean, standard deviation, variance, skewness, and kurtosis values, for each participant from this normalized data. With these feature values, we created a new data set to be used as input to machine-learning algorithms. In this dataset, the EEG band (Delta, Theta, Low Alpha, High Alpha, Low Beta, High Beta, Low Gamma, High Gamma) and EEG metric (Attention and Meditation) values were defined separately for fake and real news trials. We utilized the off-the-shelf algorithms provided by Weka \cite{Witten:2011:DMP:1972514} to build the classification models and tested them using 10-fold cross-validation. The algorithms we tested are as follows: J48, Random Forest (RF), and Random Tree (RT) under the \textit{Trees} category of models; Multilayer Perceptron (MP) neural networks, Support Vector Machines (SMO), and Logistic regression (L) under the \textit{Functions} category; and Naive Bayes (NB) under the \textit{Bayesian Networks} category. \subsection{Classification Results} \label{defensive-mechanism} For the classification task, the positive class was defined as real news and the negative class as fake news.
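For concreteness, the following is a minimal sketch of the feature-extraction and cross-validation pipeline described above, written in Python (the study itself used Weka's implementations; the data layout, variable names, and the use of a random forest via scikit-learn are illustrative assumptions, not the actual tooling):

\begin{verbatim}
# Sketch of the statistical-feature pipeline (assumed data layout:
# each trial is a (samples x 10) array of the 8 EEG bands + 2 metrics).
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def trial_features(trial):
    # Seven summary statistics per band/metric column.
    return np.concatenate([
        trial.max(axis=0), trial.min(axis=0), trial.mean(axis=0),
        trial.std(axis=0), trial.var(axis=0),
        skew(trial, axis=0), kurtosis(trial, axis=0),
    ])

def build_dataset(trials, labels, participant_means):
    # Normalize each trial by its participant's average band/metric
    # values, then reduce it to the statistical feature vector.
    X = np.array([trial_features(t / m)
                  for t, m in zip(trials, participant_means)])
    y = np.array(labels)  # 1 = real news, 0 = fake news
    return X, y

# X, y = build_dataset(trials, labels, participant_means)
# clf = RandomForestClassifier(n_estimators=100, random_state=0)
# print(cross_val_score(clf, X, y, cv=10).mean())  # 10-fold CV accuracy
\end{verbatim}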
In terms of classification performance metrics, we computed Precision (\textit{Prec}), Recall (\textit{Rec}), F-measure (\textit{FM}), True Positive (TP) rate, and False Positive (FP) rate values, and report the averages of these values. The \textit{Prec} value shows the accuracy of the system in rejecting negative classes and evaluates the security of the tested approach (i.e., rejecting fake news as invalid news). On the other hand, the \textit{Rec} value evaluates the usability of the approach in accepting positive classes (i.e., accepting real news as valid news). \textit{FM} shows the balance between precision and recall. The TP rate represents the fraction of positive-class instances (real news) identified correctly, while the FP rate represents the fraction of negative-class instances incorrectly identified as positive. \begin{table}[ht!] \centering \scriptsize \caption{Fake-Real News Classification} \begin{tabular}{|p{3cm}|l|l|l|l|l|} \hline \textbf{} & \textbf{TP Rate} & \textbf{FP Rate} & \textbf{Prec} & \textbf{Rec} & \textbf{FM} \\ \hline \textbf{All Features} & \multicolumn{5}{l|} {} \\ \hline RandomTree & 0.47 & 0.53 & 0.47 & 0.47 & 0.47 \\ \hline Logistic & 0.31 & 0.69 & 0.31 & 0.31 & 0.31 \\ \hline J48 & 0.43 & 0.57 & 0.42 & 0.43 & 0.40 \\ \hline NaiveBayes & 0.38 & 0.62 & 0.38 & 0.38 & 0.38 \\ \hline MultilayerPerceptron & 0.40 & 0.60 & 0.39 & 0.40 & 0.38 \\ \hline SMO & 0.43 & 0.57 & 0.42 & 0.43 & 0.42 \\ \hline RandomForest & 0.45 & 0.55 & 0.45 & 0.45 & 0.45 \\ \hline \hline \textbf{Correlation Based Feature Selection} & \multicolumn{5}{l|}{} \\ \hline RandomTree & 0.54 & 0.46 & 0.54 & 0.54 & 0.54 \\ \hline RandomForest & 0.49 & 0.51 & 0.49 & 0.49 & 0.49 \\ \hline \hline \textbf{Wrapper Subset Eval Feature Selection} & \multicolumn{5}{l|}{} \\ \hline Random Tree & 0.61 & 0.39 & 0.61 & 0.61 & 0.61 \\ \hline Random Forest & 0.59 & 0.41 & 0.59 & 0.59 & 0.59 \\ \hline \end{tabular} \label{tbl_detection} \vspace{-3mm} \end{table} Our classification results are presented in Table \ref{tbl_detection}. Among the tested algorithms with all of the features chosen, the best \textit{FM} value was found to be 47\% for the RT algorithm. For this algorithm, the correctly predicted positive rate was 47\%, while the incorrectly predicted positive rate was 53\%. Further, to improve the results, we applied some feature selection algorithms. First, using the correlation-based feature selection algorithm, we built the classification model with the best subset of features {\cite{hall1999correlation}}. The result indicated the best \textit{FM} as 54\%, again for the RT algorithm. Then, we performed the Wrapper Subset Selection algorithm and obtained the highest \textit{FM} value of 61\% for the RT algorithm and an \textit{FM} of 59\% for the RF algorithm. Consequently, these classification results show that identifying the difference between fake and real news from neural activation is quite difficult even for machine-learning algorithms. Therefore, our task performance results and the accuracy of these classifiers are similar (close to that of random guessing), and most standard machine learning techniques do not seem to be able to detect the differences between the brain signals of fake and real news trials. \section{Neural Results} \label{Results and Analysis} In this section, we investigate the neural activation when users were reading and deciding between the real and fake articles.
In particular, we compare the neural activity between news (fake and real) trials and the baseline condition (resting state), and between the real news trials and fake news trials. \subsection{Trial vs.~ Resting State} \label{trial-rest} {To evaluate the brain activity levels when participants were reading different articles and assessing their credibility, we first contrasted the brain activation during real and fake news trials with the resting state as a baseline. \begin{table}[h!] \centering \scriptsize \caption{Statistically significant results across all news articles.} \label{tab:stattab} \vspace{-3mm} \smallskip \begin{tabular}{|l|l|r|r|} \hline \textbf{Comparison} & \textbf{Metrics} & \textbf{p-value} & \textbf{Effect Size} \\ \hline \hline \multirow{2}{*}{\textbf{Real \textgreater Rest}} & Attention & 0.001 & 0.62 (large) \\ \cline{2-4} & Meditation & 0.001 & 0.52 (large) \\ \hline \hline \textbf{Real \textless Rest} & Delta & 0.001 & 0.53 (large) \\ \hline \hline \textbf{Fake \textgreater Rest} & Attention & 0.001 & 0.56 (large) \\ \hline \end{tabular} \vspace{-2mm} \end{table} As described in Section \ref{process neural data}, we extracted and calculated the frequency band information and the metrics collected from the EEG data. From Table \ref{tab:meanstddtab}, we can see the differences between the means (and standard deviations) of the news trials and the resting state. We observe that the means of Delta, Theta, and Low Alpha of the news trials are lower than the corresponding means of the resting state. The means of High Beta, Low Gamma, High Gamma, Attention, and Meditation are lower in the resting state than in the news trials, while the means of High Alpha and Low Beta in the resting state are close to the corresponding means of the real and fake news trials. To confirm whether these differences were statistically significant, we ran the Friedman test on two groups, one for the EEG bands (Delta, Theta, Low Alpha, High Alpha, Low Beta, High Beta, Low Gamma, High Gamma) and another for the EEG metrics (Attention and Meditation), which indeed revealed a statistically significant difference (${\chi}^2$ = 331.269, $p < 0.001$) between news trials and the resting state. Subsequently, we ran WSRT with Bonferroni correction to evaluate the pairwise differences in the mean of each of the EEG frequency bands and the metrics between the news trials and the resting state. We noticed that only Attention is statistically significant between the fake news trials and the resting state, while Attention, Meditation, and Delta are statistically significant between the real news trials and the resting state. The statistically significant results are depicted in Table \ref{tab:stattab}. From Table \ref{tab:meanstddtab}, we can see that the raw mean value of Attention (51.96) in the fake news trials is greater than the mean of the resting state (46.08), which means participants were more attentive when reading fake news articles than they were in the resting state. Comparing the real news trials with the resting state, we can see that the mean value of Attention (52.44) in the real news trials is higher than in the resting state (46.08), and the mean of Meditation (56.16) in the real trials is higher than in the resting state (53.12). Finally, the mean of the Delta band in the resting state (502423.87) is higher than the mean of Delta in the real trials (400910.12). \begin{table*}[ht!]
\small \caption{Overall Neural Activation Results for Real News Trials, Fake News Trials and Resting State (Baseline)} \label{tab:meanstddtab} \scriptsize \begin{tabular}{|l|l|r|r|r|r|r|r|r|r||r|r|} \hline \multicolumn{2}{|l}{\textbf{Overall}} & \multicolumn{8}{|c||}{\textbf{EEG Bands}} & \multicolumn{2}{|c|}{\textbf{EEG Metrics}} \\ \hline \hline \textbf{Type}&\textbf{Value} & \textbf{Delta} & \textbf{Theta} & \textbf{LowAlpha} & \textbf{HighAlpha} & \textbf{LowBeta} & \textbf{HighBeta} & \textbf{LowGamma} & \textbf{HighGamma} & \textbf{Attention} & \textbf{Meditation} \\ \hline \hline \textbf{Real} & $\mu$ & 400910.12& 88544.69& 29227.06& 26886.87& 24506.83& 18795.82& 9675.33& 5889.36& 52.44& 56.16 \\ \hline & $\sigma$ &147104.46& 24040.17& 7511.16& 5739.33& 6190.91& 6069.21& 3832.33& 1972.76& 4.41& 5.52\\ \hline \hline \textbf{Fake} & $\mu$ & 414988.46 &95187.77&29177.07&27430.93&24723.57 &18949.93&9382.27&5883.22&51.96 & 55.78 \\ \hline & $\sigma$ &150213.95& 26768.06& 8255.18& 5754.46& 6405.26& 5207.92& 3396.58& 2204.92& 4.23& 5.11 \\ \hline \hline \textbf{Rest} & $\mu$ & 502423.87& 110839.71& 31494.11& 27029.51& 24643.74& 17197.73& 9094.13&5653.64&46.08&53.12 \\ \hline & $\sigma$ &150741.06& 36842.97& 11718.16& 9015.74& 9719.48& 7541.49& 3821.34& 2429.54& 4.84& 7.06\\ \hline \end{tabular} \vspace{-2mm} \end{table*} \begin{table*}[ht!] \centering \scriptsize \caption{Categorical Neural Activation Results for Real News Trials, Fake News Trials and Resting State (Baseline)} \begin{subtable} \centering \begin{tabular}{|l|l|r|r|r|r|r|r|r|r||r|r|} \hline \multicolumn{2}{|l}{\textbf{Politics}} & \multicolumn{8}{|c||}{\textbf{EEG Bands}} & \multicolumn{2}{|c|}{\textbf{EEG Metrics}} \\ \hline \hline \textbf{Type} &\textbf{Value} & \textbf{Delta} & \textbf{Theta} & \textbf{LowAlpha} & \textbf{HighAlpha} & \textbf{LowBeta} & \textbf{HighBeta} & \textbf{LowGamma} & \textbf{HighGamma} & \textbf{Attention} & \textbf{Meditation} \\ \hline \hline \textbf{Real} & $\mu$ & 413320.05& 96793.34& 29878.77& 26694.87& 25268.51& 19153.27& 9901.83& 5979.33& 52.20& 54.24 \\ \hline & $\sigma$ &153757.50& 34383.43& 8605.00& 6055.53& 7050.92& 7295.16& 3803.14& 2140.09& 5.43& 6.77\\ \hline \hline \textbf{Fake} & $\mu$ & 424955.60& 95244.30& 28988.54& 27138.52& 24732.95& 19065.21& 9528.1400& 5829.22& 51.98& 55.53 \\ \hline & $\sigma$ &140836.10& 23006.76& 6495.04& 5142.80& 5670.08& 4809.90& 3129.81& 1895.49& 4.10& 4.20 \\ \hline\hline \textbf{Rest} & $\mu$ & 491203.76& 103688.32& 30913.01& 24676.13& 22601.05& 15977.85& 8534.58& 5346.13& 46.31& 53.06 \\ \hline & $\sigma$ &184290.10& 26949.64& 11145.64& 7143.12& 6854.26& 4167.79& 2996.65& 1721.91& 4.76& 7.64\\ \hline \end{tabular} \end{subtable} \newline \vspace*{-0.3 mm} \newline \begin{subtable} \centering \begin{tabular}{|l|l|r|r|r|r|r|r|r|r||r|r|} \hline \multicolumn{2}{|l}{\textbf{Daily}} & \multicolumn{8}{|c||}{\textbf{EEG Bands}} & \multicolumn{2}{|c|}{\textbf{EEG Metrics}} \\ \hline \hline \textbf{Type} &\textbf{Value} & \textbf{Delta} & \textbf{Theta} & \textbf{LowAlpha} & \textbf{HighAlpha} & \textbf{LowBeta} & \textbf{HighBeta} & \textbf{LowGamma} & \textbf{HighGamma} & \textbf{Attention} & \textbf{Meditation} \\ \hline \hline \textbf{Real} & $\mu$ & 406622.23& 87742.71& 28911.04& 27968.43& 24659.73& 18860.05& 9924.38& 5888.73& 51.79& 56.28 \\ \hline & $\sigma$ &168881.12& 24040.13& 7871.16& 7434.02& 8012.35& 7785.45& 5320.73& 2385.62& 4.86& 6.73\\ \hline\hline \textbf{Fake} & $\mu$ & 415203.58& 103724.81& 31690.43& 27770.25& 24805.17& 19355.21& 9983.93& 
6466.82& 47.86& 54.68 \\ \hline & $\sigma$ &177150.56& 40763.64& 16106.40& 8980.59& 9594.85& 10168.33& 5387.70& 3238.25& 8.07& 8.04 \\ \hline\hline \textbf{Rest} & $\mu$ & 490922.25& 105614.17& 28735.76& 24745.74& 22367.27& 15297.24& 8118.90& 5262.25& 45.74& 51.17 \\ \hline & $\sigma$ &220816.90& 44285.52& 12952.37& 7534.54& 8780.23& 5165.82& 3537.03& 2485.51& 6.06& 7.46\\ \hline \end{tabular} \end{subtable} \newline \vspace*{-0.3 mm} \newline \begin{subtable} \centering \begin{tabular}{|l|l|r|r|r|r|r|r|r|r||r|r|} \hline \multicolumn{2}{|l}{\textbf{Other}} & \multicolumn{8}{|c||}{\textbf{EEG Bands}} & \multicolumn{2}{|c|}{\textbf{EEG Metrics}} \\ \hline \hline \textbf{Type} &\textbf{Value} & \textbf{Delta} & \textbf{Theta} & \textbf{LowAlpha} & \textbf{HighAlpha} & \textbf{LowBeta} & \textbf{HighBeta} & \textbf{LowGamma} & \textbf{HighGamma} & \textbf{Attention} & \textbf{Meditation} \\ \hline \hline \textbf{Real} & $\mu$ & 388485.37& 83570& 29070.39& 26495.37& 24318.91& 18781.26& 9333.63& 5835.77& 53.12& 57.61 \\ \hline & $\sigma$ &143754.05& 21048.06& 8485.08& 6372.84& 7233.89& 6719.034& 4064.91& 2492.71& 6.43& 6.08\\ \hline\hline \textbf{Fake} & $\mu$ & 408582.25& 93892.00& 29007.76& 27672.92& 24758.34& 18913& 9231.12& 5877.95& 52.76& 56.18 \\ \hline & $\sigma$ &167840.62& 31452.57& 9296.78& 6819.27& 7206.02& 5680.82& 3876.19& 2814.76& 5.29& 6.45 \\ \hline\hline \textbf{Rest} & $\mu$ & 515457.83& 118797.9& 33269.67& 30050.28& 27418.84& 19130.14& 10013.07& 6081.46& 46.13& 54.28 \\ \hline & $\sigma$ &148661.33& 57555.89& 15874.06& 14312.91& 15000.57& 9926.36& 5587.18& 3681.14& 4.99& 8.06\\ \hline \end{tabular} \end{subtable} \vspace{-0.8mm} \newline \label{tab:catmeantab} \end{table*} From Table \ref{tab:frequencybands}, we learned that Attention represents the concentration of awareness on some phenomenon and Meditation represents mental calmness \cite{meditation}. In the fake-rest pair, Attention is higher in the fake news trials; thus we may conclude that participants were more concentrated when reading fake news articles than in the resting state. Also, in the real-rest pair, Attention and Meditation are higher in the real trials than in the resting state. This suggests that participants were reading/assessing the real news articles with full awareness but were relaxed while reading such articles. This may mean participants found the real news articles related to real-life events, and thus their brain signals indicated calmness. In the real-rest pair, Delta is also higher in the resting state, which seems to suggest that participants were in a deeply relaxed, low-awareness state while resting. As Delta represents deep dreamless sleep and unconsciousness \cite{hirnwellen,brainmapping}, this result could mean that the participants were as relaxed as in dreamless sleep (in slow-wave sleep dreaming may also occur, but dreamless sleep is like deep meditation). This analysis, therefore, suggests that participants were more engaged and less distracted when reading and analyzing the news articles than during the resting state.
This also serves to confirm that the experiment worked as desired, as one would expect the participants to be significantly more active during the news trial presentation than during resting.} \subsection{Real News Trial vs.~ Fake News Trial} \label{realvsfake} { Having established that both real and fake news trials evoked significantly higher brain activity compared to the resting state, we now set out to analyze potential differences in the brain activation between the real news trials and the fake news trials. These comparisons of brain activities could help delineate the neural processing involved in reading and identifying real and fake articles. To do this analysis, we compared the mean and standard deviation corresponding to the different EEG bands and metrics for fake news trials and real news trials. Our results are presented in Table \ref{tab:meanstddtab}. Just looking at the raw mean values from this table, the means of Delta, HighAlpha, HighBeta, and HighGamma are slightly higher for fake articles, but they do not seem to be substantially higher. Also, the mean value of LowBeta for fake articles and real articles is approximately the same. On the other hand, the mean of the Theta band seems noticeably higher for fake articles. The Friedman test revealed a statistically significant difference in the means of the group of different EEG bands (Delta, Theta, lowAlpha, highAlpha, lowBeta, highBeta, lowGamma, highGamma) and in the mean group of EEG metrics (Attention and Meditation) (${\chi}^2$ = 664.269, $p < 0.001$) corresponding to real vs. fake news trials. Upon applying WSRT (\textit{without} Bonferroni correction) to find the pairwise differences in the means of the different EEG bands and metrics corresponding to real and fake news trials, the p-value for Theta turned out to be .049, which represents a statistically significant difference (the rest of the bands and metrics did not show any statistical difference). However, crucially, \textit{after applying} the Bonferroni correction, this lone significant difference between the Theta band of real and fake news trials disappears. Therefore, our overall analysis shows that there is \textit{no statistically significant difference} in any of the EEG bands and EEG metrics corresponding to real and fake news trials. The analysis suggests that our participants were likely equally engaged and relaxed in reading and identifying real and fake news articles, i.e., their brains did not seem to react differently when processing real and fake news articles.} \subsection{Categorical Analysis} \label{Categorical Analysis} We now analyze the neural activation based on the different categories of news articles. First, we compare the neural activation between real or fake trials and the resting state; then we compare the neural activation between real trials and fake trials; and finally, we compare the neural activation for news trials between categories. \noindent \textbf{Political News:} For political news, the raw mean values depicted in Table \ref{tab:catmeantab} for the metrics and bands are approximately the same in real and fake news trials. Thus, we do not see any significant differences in the mean values for real trials and fake trials. However, the resting state shows a different result for the Delta, Theta, HighAlpha, LowBeta, LowGamma, HighGamma, and Attention metrics. Meditation has a slightly different value but is not significantly different.
To confirm these differences, we ran the Friedman test, and it revealed statistically significant differences in both the group of EEG bands and the group of EEG metrics corresponding to this category. Upon performing pairwise comparisons using WSRT (with Bonferroni correction), as depicted in Table \ref{tab:category2}, we find that Attention and HighBeta are statistically significant in the fake-rest pair and Attention is statistically significant in the real-rest pair. However, we did not find any statistically significant differences between real news trials and fake news trials in this category. From Table \ref{tab:catmeantab}, we noticed that the raw mean values of Attention and HighBeta are lower in the resting state than in the fake news trials, and the mean of Attention is slightly lower in the resting state than in the real news trials. This analysis shows that the participants were reading fake political news articles with full awareness, and the result suggests nervous excitement, as Attention and HighBeta are higher in the fake news trials than in the resting state. Also, the higher Attention level in the real news trials indicates that the participants were reading real political articles with full awareness. However, nervous excitement or a state of anxiety appears only while reading fake political articles. While news trials differ significantly from the resting state, no differences emerged between real news trials and fake news trials. \noindent \textbf{Daily News:} For daily news articles, we noticed from Table \ref{tab:catmeantab} that the raw mean values of Theta, LowAlpha, HighGamma, Attention, and Meditation are slightly different between the fake news trials and the real news trials. The resting state also shows different results for Delta, Theta, LowBeta, HighBeta, LowGamma, HighGamma, Attention, and Meditation. Following the same pattern of statistical analysis as in the politics category, however, we did not see any statistical difference in the real-fake pair and the fake-rest pair, but in the real-rest pair Attention and Meditation are statistically significant. From the mean values in Table \ref{tab:catmeantab}, we do see that Attention and Meditation are higher in the real trials than in the resting state. This analysis, therefore, reveals that there are no differences between the real and fake metrics for daily news articles, but Attention and Meditation are significantly different in the real-rest pair. As Attention represents awareness and Meditation represents mental calmness, we can infer that the participants were reading the real daily news articles with full awareness while their brain activation also indicated relaxation. \noindent \textbf{Other News:} We followed the same pattern of analysis for the ``Other'' category of news and noticed, by inspecting the mean values, that the real and fake news trials have approximately similar outputs for all bands and metrics, whereas the resting state has a different result compared to the fake and real news trials. Indeed, the statistical analysis supported this observation. We found that Attention is statistically significant in the fake-rest pair and Delta is statistically significant in the real-rest pair. According to the mean values from Table \ref{tab:catmeantab}, we do see that the mean of Attention in fake trials (52.76) is higher than the mean of Attention in the resting state (46.13). Thus, participants were more attentive when reading other news articles than in the resting state.
The mean of Delta is higher in the resting state than in the real trials. The higher value of Delta in the resting state of the real-rest pair represents relaxation, unconsciousness, and a state of dreamless sleep (to recall, in slow-wave sleep dreaming may also occur, but dreamless sleep is like deep meditation) \cite{hirnwellen,brainmapping}. We can conclude that the participants were more relaxed in the resting state than in the trials for both pairwise comparisons of the other news category (real-rest and fake-rest). Also, the brain activation did not show any differences between real and fake news. Thus the participants may not have been able to distinguish the real and fake other news articles, but their brains did react differently between assessing news articles and taking a rest in the resting state. \begin{table}[h!] \scriptsize \centering \caption{Statistically significant results per category of news articles (no statistically significant differences were observed between real and fake news trials in any category)} \label{tab:category2} \vspace{-1mm} \begin{tabular}{|l|l|l|r|r|} \hline \textbf{Categories} & \textbf{Comparison} & \textbf{Metrics} & \textbf{p-value} & \textbf{Effect Size} \\ \hline \hline \multirow{3}{*}{\textbf{Politics}} & \multirow{2}{*}{\textbf{Fake \textgreater Rest}} & High Beta & 0.003 & 0.49 (medium) \\ \cline{3-5} & & Attention & 0.001 & 0.57 (large) \\ \cline{2-5} & \textbf{Real \textgreater Rest} & Attention & 0.002 & 0.51 (large) \\ \hline \hline \multirow{2}{*}{\textbf{Daily}} & \multirow{2}{*}{\textbf{Real \textgreater Rest}} & Attention & 0.001 & 0.54 (large) \\ \cline{3-5} & & Meditation & 0.001 & 0.55 (large) \\ \hline \hline \multirow{2}{*}{\textbf{Other}} & \textbf{Fake \textgreater Rest} & Attention & 0.001 & 0.60 (large) \\ \cline{2-5} & \textbf{Real \textless Rest} & Delta & 0.003 & 0.48 (medium) \\ \hline \end{tabular} \vspace{-2mm} \end{table} \noindent \textbf{Between-Category Comparison:} We performed a pairwise comparison of each trial/resting state between all categories (example pairs: real-daily vs. real-politics, fake-daily vs. fake-politics, rest-daily vs. rest-politics, etc.). We followed the same pattern of analysis mentioned above to compare these pairs between every two categories. We found that the Attention metric is statistically significantly different in the other-daily pair corresponding to fake news trials (p-value of 0.018). From Table \ref{tab:catmeantab}, we can indeed see that the mean of Attention (52.76) is higher for the Other category than the mean of Attention (47.86) for daily news in fake trials. As Attention represents awareness, this means the participants were more attentive when reading other fake news articles than the daily fake news articles. This may suggest that the participants were not reading daily fake news articles with full awareness, as those articles may have looked less interesting, or the participants may have identified those articles as fake and so did not read them seriously. \subsection{Raw Data Analysis} \label{Raw Data Analysis} Our analysis of the EEG bands and EEG metrics showed that there was no statistically significant difference between real and fake news trials. Since the bands and metrics are obtained by post-processing the raw EEG signals, this information may have missed some characteristics of the brain activity that might be different in the real and fake news trials.
To explore any such potential differences, we performed additional analysis directly over the raw EEG signals corresponding to real vs. fake news trials. \begin{figure}[h!] \centering \begin{subfigure}{} \centering \includegraphics[width=0.45\columnwidth]{raw1.png} \label{fig:highpass} \end{subfigure}% \begin{subfigure}{} \centering \includegraphics[width=0.45\columnwidth]{raw2.png} \label{fig:lowpass} \end{subfigure} \vspace{-2mm} \caption{Average frequency content of all participants' raw EEG signals:\\ high-pass filtered (left), low-pass filtered (right)} \vspace{-1mm} \label{rawfig} \end{figure} We followed the methodology presented in Section \ref{process raw data} to analyze the raw EEG signals. Looking first at the average frequency content for both the high-pass filtered and low-pass filtered results in Figure \ref{rawfig}, we find clear differences between the news trials (fake or real) and the resting state. Specifically, the power spectrum values were between -80 dB and -120 dB for the resting state, while the real/fake news trials were between -40 dB and -100 dB in the normalized frequency range of 0 to 0.45 after applying the high-pass filter (Figure \ref{rawfig}, left). Similarly, in the frequency range of 0.65 to 1.00 for the low-pass filtered signals (Figure \ref{rawfig}, right), the resting-state values ranged from -120 dB to -100 dB, whereas the fake and real news trials were in the range of -30 dB to 20 dB. These results suggest that, especially after applying a high-pass filter to the signals, there were no observable significant differences between the fake and the real news trials. Likewise, the low-pass filtered signals slope similarly for fake and real news trials. On the other hand, the difference between the real/fake news trials and the resting state is significant for both the high-passed and low-passed signals. This means that the participants were actively engaged in reading and deciding between the real and fake news articles, but their brains were not processing real and fake news articles differently, even when looking at the raw neural signals. \section{Task Performance Results} \label{behavioral-results} { In the first phase of our analysis, we calculated the response time and the fraction of correctly identified articles out of the total number of responses given by the participants (termed ``accuracy'') for the real and fake news trials. The goal of this analysis was to determine how accurately the participants were able to identify the two types of trials. Table \ref{tab:tasktab} summarizes the results of our task performance analysis. \begin{table}[h] \center \scriptsize \caption{Task Performance in Identifying Fake and Real News Articles} \begin{tabular}{|l|l|l|l|} \hline {\textbf{Trial}} & {\textbf{Accuracy (\%)}} & {\textbf{Wrong Response (\%)}} & {\textbf{Response Time (ms) $\mu$($\sigma$)}} \\ \hline \hline {\textbf{Real}} & 54.41 & 45.59 & 3069.4 (9.33) \\ \hline \hline {\textbf{Fake}} & 51.76 & 48.24 & 2966.7 (9.34) \\ \hline \end{tabular} \label{tab:tasktab} \vspace{-1mm} \end{table} We calculated the accuracy of the detection task for two types of responses: one for correct responses (real as real and fake as fake), and another for wrong responses (real as fake, fake as real). So, in total there are four types of comparisons for decision making: real as real, real as fake, fake as fake, and fake as real.
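As a minimal illustration of this bookkeeping (the trial-log layout and key names below are hypothetical, not the study's actual data format), the four response types and the per-type accuracies reported in Table \ref{tab:tasktab} can be tallied as follows:

\begin{verbatim}
# Sketch: tally the four decision types and derive the accuracies.
def tally(responses):
    # responses: iterable of dicts with keys 'truth' and 'answer',
    # each taking the value 'real' or 'fake'.
    counts = {('real', 'real'): 0, ('real', 'fake'): 0,
              ('fake', 'fake'): 0, ('fake', 'real'): 0}
    for r in responses:
        counts[(r['truth'], r['answer'])] += 1
    n_real = counts[('real', 'real')] + counts[('real', 'fake')]
    n_fake = counts[('fake', 'fake')] + counts[('fake', 'real')]
    return {
        'real_accuracy': counts[('real', 'real')] / n_real,
        'fake_accuracy': counts[('fake', 'fake')] / n_fake,
        'overall_accuracy': (counts[('real', 'real')]
                             + counts[('fake', 'fake')])
                            / (n_real + n_fake),
    }
\end{verbatim}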
From Table \ref{tab:tasktab}, we can see that the overall accuracy of our participants in the experiment is 53.09\%. Here, we can also see that the accuracy for real news articles is 54.41\% and for fake news articles is 51.76\%. This means that the accuracy of identifying real news articles as real news is almost the same as the accuracy of identifying fake news articles as fake news. Also, if we look at the wrong responses, the percentage of identification of real news articles as fake news is 45.59\% and of fake news articles as real news is 48.24\%, and the total percentage of wrong answers is 46.91\%. Indeed, upon using WSRT, we did not notice any statistically significant differences in the mean of the overall accuracy of real and fake articles. The average response times for real and fake trials are also very close and not statistically significantly different. These results suggest that participants did not do well at all in telling the difference between real and fake news articles, and their accuracy in identifying the two types of articles is very close to random guessing accuracy (50\%). }
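For reference, a minimal sketch of the statistical testing pipeline used throughout this paper (a Friedman omnibus test followed by pairwise Wilcoxon signed-rank tests with Bonferroni correction) is given below; the data are placeholders standing in for the per-participant measurements, and SciPy is an assumed tool choice:

\begin{verbatim}
# Sketch: Friedman omnibus test, then pairwise WSRT with Bonferroni.
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(0)
n = 19  # number of participants

# Placeholder per-participant Attention values for three conditions.
real = rng.normal(52.4, 4.4, size=n)
fake = rng.normal(52.0, 4.2, size=n)
rest = rng.normal(46.1, 4.8, size=n)

chi2, p = friedmanchisquare(real, fake, rest)
print(f"Friedman: chi2 = {chi2:.2f}, p = {p:.4f}")

m = 3  # number of pairwise follow-up comparisons
for name, (a, b) in {"real-fake": (real, fake),
                     "real-rest": (real, rest),
                     "fake-rest": (fake, rest)}.items():
    w, p = wilcoxon(a, b)
    print(f"{name}: p = {p:.4f}, "
          f"Bonferroni-significant: {p < 0.05 / m}")
\end{verbatim}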
\section{INTRODUCTION} Identification of the precise location of the multicritical point is an important theoretical challenge in the physics of spin glasses, not only because of its mathematical interest but also for the practical purpose of reliable analyses of numerical data. The method of duality is a standard tool to derive the exact location of a critical point in pure ferromagnetic systems in two dimensions. However, the existence of randomness in spin glasses hampers a direct application of the duality. We have nevertheless developed a theory to achieve the goal by using the combination of the replica method, the duality applied to the replicated system, the gauge symmetry, and the renormalization group \cite{NN,MNN,TSN,Nstat,ONB,Ohzeki}. The result shows excellent agreement with numerical estimates. The analysis on hierarchical lattices plays a crucial role in the development of the theory, in particular in the introduction of the renormalization group, by which systematic improvements can be achieved. \section{MULTICRITICAL POINT} Let us consider the $\pm J$ Ising model defined by the Hamiltonian, \begin{equation} H = - \sum_{\langle ij \rangle} J_{ij} \sigma_{i}\sigma_{j}, \end{equation} where $\sigma_i$ is the Ising spin and $J_{ij}$ denotes the quenched random coupling. The sign of $J_{ij}$, i.e. $J_{ij}/J=\tau_{ij}$, follows the distribution \begin{eqnarray} P(\tau_{ij}) &=& p \delta(1-\tau_{ij}) + (1-p) \delta(1+\tau_{ij}) \nonumber\\ &=& \frac{\exp(K_p \tau_{ij})}{2\cosh K_p} \left\{\delta(1-\tau_{ij}) + \delta(1+\tau_{ij})\right\}, \end{eqnarray} where $\exp(-2K_p) = (1-p)/p$. The multicritical point is believed to lie on the Nishimori line (NL) defined by $K_p=\beta J$, where $\beta$ is the inverse temperature. See Fig. \ref{fig1}. \begin{figure}[tbp] \begin{center} \includegraphics[width=50mm]{fig1.eps} \end{center} \caption{{\protect\small A typical phase diagram of the $\pm J$ Ising model in two dimensions. The multicritical point (MCP) is described by a black dot and the Nishimori line is drawn dashed.}} \label{fig1} \end{figure} The restriction to the NL simplifies the problem due to the gauge symmetry \cite{HN81,HNbook}. According to the initial theory that uses the replica method, duality and gauge symmetry \cite{NN,MNN,TSN,Nstat}, the value of $p_c$ for the multicritical point satisfies \begin{equation} H(p_c) = \frac{1}{2}, \label{conjecture} \end{equation} where $H(p)$ is the binary entropy, $-p\log_2 p-(1-p)\log_2(1-p)$, for self-dual lattices. Equation (\ref{conjecture}) is solved to give $p_c=0.8900$, which is in reasonable agreement with numerical estimates. The theory has also been extended to a pair of mutually dual lattices with $p_{c1}$ and $p_{c2}$ for the respective multicritical points. The result is \begin{equation} H(p_{c1})+H(p_{c2})=1. \label{Hpc} \end{equation} Hinczewski and Berker, however, found $H(p_{c1})+H(p_{c2})=1.0172, 0.9829, 0.9911$ for three pairs of mutually dual hierarchical lattices \cite{HB}. Their values are correct to the digits shown above, as one can carry out numerically exact renormalization group calculations on hierarchical lattices. Thus Eq. (\ref{Hpc}) is a good approximation but not quite exact, at least for hierarchical lattices. \section{REPLICA AND DUALITY} Let us give a very brief summary of the theory that leads to Eqs. (\ref{conjecture}) and (\ref{Hpc}). We generalize the usual duality argument to the $n$-replicated $\pm J$ Ising model.
We define the edge Boltzmann factor $x_k ~(k = 0, 1, \cdots, n)$, which represents the configuration-averaged Boltzmann factor for interacting spins with $k$ antiparallel spin pairs among the $n$ nearest-neighbour pairs on a bond (edge). The duality gives the following relationship between the partition functions on the original and dual lattices with different values of the edge Boltzmann factors \begin{equation} Z_n(x_0,x_1,\cdots,x_n) = Z_n(x^*_0,x^*_1,\cdots,x^*_n), \label{PF2} \end{equation} where we have assumed self-duality of the lattice in that both sides share the same function $Z_n$. The dual edge Boltzmann factors $x_k^*$ are defined by the discrete multiple Fourier transforms of the original edge Boltzmann factors, which are simple combinations, with plus and minus signs, of the original Boltzmann factors in the case of Ising spins. It turns out useful to focus our attention on the principal Boltzmann factors $x_0$ and $x^*_0$, which are the most important elements of the theory. Their explicit forms are \begin{equation} x_0(K,K_p) = \frac{\cosh \left(nK + K_p \right)}{\cosh K_p},\quad x^*_0(K,K_p) = \left( \sqrt{2} \cosh K \right)^n, \end{equation} where $K=\beta J$. We extract these principal Boltzmann factors from the partition functions in Eq. (\ref{PF2}), which amounts to measuring the energy from the all-parallel spin configuration. Then, using the normalized edge Boltzmann factors $u_j = x_j/x_0$ and $u^*_j = x^*_j/x^*_0$, we have \begin{equation} {x_0(K,K_p)}^{N_B}z_n(u_1,u_2,\cdots,u_n) = {x^*_0(K,K_p)}^{N_B}z_n(u^*_1,u^*_2,\cdots,u^*_n), \label{PF1} \end{equation} where $z_n(u_1,\cdots)$ and $z_n(u^*_1,\cdots)$ are defined as $Z_n/x^{N_B}_0$ and $Z_n/(x^{*}_0)^{N_B}$, and $N_B$ is the number of bonds. We now restrict ourselves to the NL, $K=K_p$. Figure \ref{Trajectory} shows the relationship between the curves $(u_1(K),u_2(K),\cdots,u_n(K))$ (the thin curve) and $(u^*_1(K),u^*_2(K),\cdots,u^*_n(K))$ (the dashed curve). The arrows emanating from both curves represent the renormalization flows toward the fixed point C. \begin{figure} \begin{center} \includegraphics[width=60mm]{fig2.eps} \end{center} \caption{ A schematic picture of the renormalization flow and the duality for the replicated $\pm J$ Ising model.} \label{Trajectory} \end{figure} The ordinary duality argument identifies the critical point under the assumption of a unique phase transition. We can obtain the critical point as the fixed point of the duality transformation using the fact that the partition function is a single-variable function. In other words, the thin curve would overlap with the dashed curve in such a case. In the present random case, on the other hand, since $z_n$ is a multivariable function, there is no fixed point of the duality in the strict sense which satisfies the $n$ conditions simultaneously, $u_1(K)=u^*_1(K),u_2(K)=u^*_2(K),\cdots,u_n(K)=u^*_n(K)$. This is in sharp contrast to the non-random Ising model. We nevertheless assume that $x_0(K,K)=x_0^*(K,K)$ may give the precise location of the multicritical point because, when the number of variables of $z_n$ in Eq. (\ref{PF1}) is unity ($n=1$), the fixed point condition $u_1=u^*_1$ implies $x_0=x^{*}_0$. This relation, in the limit $n\to 0$ in the spirit of the replica method, leads to Eq. (\ref{conjecture}). A straightforward generalization to mutually dual cases gives Eq. (\ref{Hpc}).
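To make these relations concrete, the following short numerical sketch (assuming NumPy/SciPy as tools) solves Eq. (\ref{conjecture}) for $p_c$ and also solves the condition $x_0(K,K)=x^*_0(K,K)$ on the NL at finite replica number $n$; as $n$ is decreased toward the replica limit $n\to 0$, the corresponding $p$ approaches $p_c=0.8900$:

\begin{verbatim}
# Sketch: solve H(p_c) = 1/2 and the duality condition
# x_0(K,K) = x_0*(K,K) at finite replica number n.
import numpy as np
from scipy.optimize import brentq

def H(p):  # binary entropy in bits
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

p_c = brentq(lambda p: H(p) - 0.5, 0.5 + 1e-9, 1 - 1e-9)
print(f"H(p_c) = 1/2  =>  p_c = {p_c:.4f}")   # 0.8900

def duality_K(n):
    # On the NL, x_0 = cosh((n+1)K)/cosh(K), x_0* = (sqrt(2) cosh K)^n;
    # solve log x_0 - log x_0* = 0 for K.
    f = lambda K: (np.log(np.cosh((n + 1) * K)) - np.log(np.cosh(K))
                   - n * (0.5 * np.log(2) + np.log(np.cosh(K))))
    return brentq(f, 0.1, 5.0)

for n in (1.0, 0.5, 0.1, 1e-3):
    K = duality_K(n)
    p = 1.0 / (1.0 + np.exp(-2 * K))   # from exp(-2 K_p) = (1-p)/p
    print(f"n = {n:6.3f}:  p = {p:.4f}")
\end{verbatim}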
\section{RENORMALIZATION GROUP ON HIERARCHICAL LATTICES} The renormalization group provides us with an additional point of view, especially on hierarchical lattices. Let us recall the following features of the renormalization group: (i) The critical point is attracted toward the unstable fixed point. (ii) The partition function does not change its functional form under renormalization on hierarchical lattices; only the values of its arguments change. Therefore the renormalized system also has a representative point in the same space $(u_1(K),u_2(K),\cdots,u_n(K))$ as in Fig. \ref{Trajectory}. The renormalization flow from the critical point $p_c$ reaches the fixed point C, $(u^{(\infty)}_1,u^{(\infty)}_2,\cdots, u^{(\infty)}_n)$. Here the superscript denotes the number of renormalization steps. There is a point $d_c$ related to $p_c$ by the duality, which is expected to reach the same fixed point C as well, since $p_c$ and $d_c$ represent the same critical point due to Eq. (\ref{PF2}). Considering the above property of the renormalization flow as well as the duality, we find that the duality relates two trajectories of the renormalization flow, one from $p_c$ and one from $d_c$. The same applies to the whole of both curves, thin and dashed. In other words, after a sufficient number of renormalization steps, the thin curve representing the original system and the dashed curve for the dual system both approach the common renormalized system depicted as the bold curve in Fig. \ref{Trajectory}, which goes through the fixed point C. The partition function is then expected to become a single-variable function along the bold curve. This fact enables us to improve the method so that the exact location of the multicritical point is obtained asymptotically, as given by $x^{(s \to \infty)}_0(K) = {x_0^*}^{(s \to \infty)}(K)$. If we regard $x_0(K)={x_0^*}(K)$ as the zeroth approximation for the location of the multicritical point, it is expected that $x^{(1)}_0(K)={x_0^*}^{(1)} (K)$ is the first approximation and can lead to more precise results than $x_0(K)={x_0^*}(K)$ does. Our method of duality analysis in conjunction with the renormalization group has indeed given results in excellent agreement, within numerical errors, with the exact estimates on several self-dual hierarchical lattices, as summarized in Table \ref{Con}. \begin{table}[htbp] \begin{center} \begin{tabular}{ccc} \hline $p_c$ (without RG) & $p_c$ (with RG) & $p_c$ (numerical) \\ \hline $0.8900$ & $0.8920$ & $0.8915(6)$ \\ $0.8900$ & $0.8903$ & $0.8903(2)$ \\ $0.8900$ & $0.8892$ & $0.8892(6)$ \\ $0.8900$ & $0.8895$ & $0.8895(6)$ \\ $0.8900$ & $0.8891$ & $0.8890(6)$ \\ \hline \end{tabular} \end{center} \caption{Comparison of the methods with and without RG and numerical estimations for several self-dual hierarchical lattices \cite{ONB}.} \label{Con} \end{table} \section{FURTHER DEVELOPMENTS} The above method has also been generalized to be applicable to Bravais lattices \cite{Ohzeki}. Let us take the example of the square lattice. Instead of the iterative renormalization, we consider summing over a part of the spins, called a cluster, on the square lattice as shown in Fig. \ref{fig3}, to incorporate many-body effects such as the frustration inherent in spin glasses. \begin{figure} \begin{center} \includegraphics[width=90mm]{fig3.eps} \end{center} \caption{The basic clusters used on the square lattice.
The spins marked black on the original lattice are traced out instead of performing the iterative renormalization.} \label{fig3} \end{figure} To this end, we define the principal Boltzmann factor $x_{0}^{(s)}$ and its dual $x_{0}^{\ast (s)}$ as those with all spins surrounding the cluster in the up state. We assume that a single equation gives the accurate location of the multicritical point, $x_{0}^{(s)}(K)=x_{0}^{\ast (s)}(K)$, where the superscript $s$ stands for the type of the cluster. Recent numerical investigations on the square lattice have given $p_c = 0.89081(7)$ \cite{Hasen}, $p_c = 0.89083(3)$ \cite{Toldin} and $p_c = 0.89061(6)$ \cite{Queiroz}, while the present method has estimated $p_c = 0.890725$ from cluster 1 of Fig. \ref{fig3}, and $p_c = 0.890822$ from cluster 2 \cite{Ohzeki}. If we deal with clusters of larger sizes, the new method is expected to show systematic improvements toward the exact answer from the point of view of renormalization. The method of the renormalization group is applicable also away from the NL. For example, the slope of the phase boundary at the pure ferromagnetic limit has been estimated to be $1/T_c\times dT/dp \approx 3.2091\cdots$ on the square lattice by perturbation \cite{Domany}. This result was thought to apply also to any self-dual hierarchical lattice. The present method, with the renormalization group taken into account, shows that this is not the case: the result depends on the type of lattice, e.g. $3.2786\cdots$ and $3.4390\cdots$ \cite{OH}. \section{CONCLUSION} Hierarchical lattices provide a very effective platform to test new ideas, as has been exemplified in the present study. Investigations of spin glasses on finite-dimensional systems are notoriously hard both analytically and numerically. On hierarchical lattices, on the other hand, numerically exact calculations can be carried out, and, in addition, hierarchical lattices share many features with finite-dimensional systems, in contrast to mean-field systems. Analytical methods can also be implemented with relative ease on hierarchical lattices, which has led to significant improvements in the prediction of the location of the multicritical point. Hierarchical lattices will continue to play key roles in the studies of spin glasses and other complex systems. We acknowledge financial support from CREST, JST.
\section{Introduction} It is well known that symmetry plays an important role in physics. Sometimes we need not solve a problem explicitly; we can obtain much important and interesting information about a physical system, or simplify our calculation, by analyzing the symmetry of the system. In particular, the degeneracy of energy levels is often related to some dynamical symmetries of a system [1,2]. In general, for a system with Hamiltonian $H$, let $\hat{F}$ and $\hat{G}$ denote two operators corresponding to physical quantities of the system; if they do not commute with each other, i.e., $[\hat{F},\hat{G}]\ne 0$, and both are conserved quantities, then the energy levels of the system must be degenerate, except possibly for a few special levels. For example, for a spinless particle moving in a plane, the system has the Euclidean group symmetry whose generators are the momentum operators $\hat{p}_{i} (i=$x$,$y$)$ and the $z$-component of the angular momentum $\hat{L}_{z}$. Since $\hat{p}_{i} (i=$x$,$y$)$ and $\hat{L}_{z}$ do not commute with each other, and both of them are conserved quantities of the system, the energy levels of this system exhibit infinite-fold degeneracy, which comes from the infinite dimension of the irreducible representations of the Euclidean group [2]. On the other hand, in the past ten years, the so-called quantum group symmetry (QGS) and its representation theory have attracted the attention of physicists and mathematicians [8,12-14]. Needless to say, the mathematical structure of the QGS is very beautiful; to physicists, however, its applications in physics are of greater interest. In this respect, P. B. Wiegmann et al. [16] and Y. Hatsugai et al. [17] have made original explorations. Certainly, it is still very interesting to look for more applications of the QGS in physics. In this letter, we will show that the QGS can be found even in one of the simplest systems of quantum mechanics, and that it is the origin of the degeneracy of energy levels in our problem. In more detail, with the help of the representation theory of quantum groups, we determine the degree of degeneracy of the Landau energy levels for a system in which a spinless particle moves in a plane and experiences a uniform external magnetic field $\vec{B}$. We consider a spinless particle which moves in a plane and experiences a uniform external magnetic field along the $z$-direction, $\vec{B}=B\hat{e}_{z}$. The Hamiltonian of the system can be written as \begin{equation} H=\frac{1}{2m}(\vec{p}+e\vec{A})^2 \end{equation} where $m$ and $e$ are the mass and charge of the particle, respectively, and $\vec{A}$ is the vector potential, which satisfies \begin{equation} \bigtriangledown \times \vec{A}=B\hat{e}_{z} \end{equation} The above problem can be easily solved in a proper gauge [1,3]. In the present paper, we study the gauge-independent case with a periodic boundary condition (PBC). It is well known that this system does not possess ordinary translational invariance; however, it exhibits magnetic translation invariance, which is generated by the magnetic translation operator [4] defined by \begin{equation} t(\vec{a})=\exp[\frac{i}{\hbar}\vec{a}\cdot(\vec{p}+ e\vec{A}+e\vec{r}\times\vec{B})] \end{equation} where $\vec{a}=a_{x}\hat{e}_{x}+a_{y}\hat{e}_{y}$ is an arbitrary two-dimensional vector.
The magnetic translation operator $t(\vec{a})$ satisfies the following group property [5,6]: \begin{equation} t(\vec{a})t(\vec{b})=\exp[-i\frac{\hat{e}_{z}\cdot(\vec{a}\times\vec{b})} {a_{0}^2}] t(\vec{b})t(\vec{a}) \end{equation} where $a_{0}\equiv\sqrt{\frac{\hbar}{eB}}$ is the magnetic length. Let \begin{equation} \vec{\kappa}=\vec{p}+e\vec{A}+e\vec{r}\times\vec{B} \end{equation} It is easy to prove that \begin{equation} [t(\vec{a}),H]=0,\hspace{2 cm} [\vec{\kappa},H]=0 \end{equation} which means that the system under consideration is invariant under the magnetic translation transformation Eq.(3), and $\vec{\kappa}$ is a conserved quantity. With the help of the magnetic translation operator, one can construct the following operators [7]: \begin{eqnarray} J_{+}&=&\frac{1}{q-q^{-1}}[t(\vec{a})+t(\vec{b})],\nonumber \\ J_{-}&=&\frac{-1}{q-q^{-1}}[t(-\vec{a})+t(-\vec{b})],\nonumber \\ q^{2J_{3}}&=&t(\vec{b}-\vec{a}),\nonumber \\ q^{-2J_{3}}&=&t(\vec{a}-\vec{b}) \end{eqnarray} with \begin{equation} q=\exp(i2\pi\frac{\Phi}{\Phi_{0}}) \end{equation} where $\Phi=\frac{1}{2}\vec{B}\cdot(\vec{a}\times\vec{b})$ is the magnetic flux through the triangle enclosed by the vectors $\vec{a}$ and $\vec{b}$, and $\Phi_{0}=\frac{h}{e}$ is the magnetic flux quantum. A straightforward calculation shows that these operators $J_{+}$, $J_{-}$ and $J_{3}$ satisfy the algebraic relations of the quantum group $sl_{q}(2)$ [8] as follows: \begin{eqnarray} [J_{+},J_{-}]&=&[2J_{3}]_{q} \nonumber \\ q^{J_{3}}J_{\pm}q^{-J_{3}}&=&q^{\pm 1}J_{\pm} \end{eqnarray} where we have used the following notation: \begin{equation} [x]_{q}=\frac{q^{x}-q^{-x}}{q-q^{-1}} \end{equation} {}From Eqs.(6) and (7) it follows that \begin{equation} [J_{\pm},H]=0,\hspace{2 cm} [q^{\pm J_{3}},H]=0 \end{equation} which indicates that $J_{\pm}$ and $J_{3}$ are conserved quantities of the system. Therefore, the quantum group $sl_{q}(2)$ is present in the Landau problem under our consideration. Let $\Psi$ be the wave function of the system in the Schr\"{o}dinger picture. In order to calculate explicitly the degree of degeneracy of the Landau levels, we impose the following PBC on the wave function [9]: \begin{equation} t(\vec{L}_{1})\Psi =\Psi,\hspace{2 cm} t(\vec{L}_{2})\Psi=\Psi \end{equation} where $\vec{L}_{1}=L_{1}\hat{e}_{x}$, and $\vec{L}_{2}=L_{2}\hat{e}_{y}$. This boundary condition means that the particle is confined to a rectangular area of size $L_{1}\times L_{2}$. {}From Eq.(12) it follows that the operators $t(\vec{L}_{1})$ and $t(\vec{L}_{2})$ commute with each other. That is, \begin{equation} t(\vec{L}_{1})t(\vec{L}_{2})=t(\vec{L}_{2})t(\vec{L}_{1}) \end{equation} however, from Eq.(4) we have \begin{equation} t(\vec{L}_{1})t(\vec{L}_{2})=\exp[-i\frac{\hat{e}_{z}\cdot(\vec{L}_{1} \times\vec{L}_{2})}{a_{0}^{2}}]t(\vec{L}_{2})t(\vec{L}_{1}) \end{equation} Combining Eq.(13) with Eq.(14) yields \begin{equation} \exp(i2\pi\frac{\Phi}{\Phi_{0}})=1 \end{equation} where $\Phi=\frac{1}{2}BL_{1}L_{2}$ is the magnetic flux through the triangle enclosed by $\vec{L}_{1}$ and $\vec{L}_{2}$. Eq.(15) implies that \begin{equation} \Phi=N_{s}\Phi_{0} \end{equation} where $N_{s}$ is a positive integer. Therefore, the periodic boundary condition Eq.(12) is equivalent to magnetic flux quantization. Notice that not all translation operators $t(\vec{a})$ keep the boundary condition Eq.(12) invariant.
In other words, \begin{equation} t(\vec{L}_{i})t(\vec{a})\Psi=t(\vec{a})\Psi, \hspace{1cm} (i=1,2) \end{equation} cannot be satisfied by an arbitrary magnetic translation $t(\vec{a})$. However, if we define two primitive magnetic translation operators in the following way [9]: \begin{equation} T_{x}\equiv t(\frac{\vec{L}_{1}}{N_{s}}),\hspace{2 cm} T_{y}\equiv t(\frac{\vec{L}_{2}}{N_{s}}) \end{equation} then one finds that only $T_{x}$, $T_{y}$ and their integer powers make Eq.(17) hold. By a straightforward calculation, it can be checked that the following relations hold: \begin{eqnarray} T_{y}T_{x}&=&\exp(i\frac{2\pi}{N_{s}})T_{x}T_{y},\nonumber \\ T_{y}T_{-x}&=&\exp(-i\frac{2\pi}{N_{s}})T_{-x}T_{y} \end{eqnarray} \begin{eqnarray} T_{-y}T_{x}&=&\exp(-i\frac{2\pi}{N_{s}})T_{x}T_{-y},\nonumber \\ T_{-y}T_{-x}&=&\exp(i\frac{2\pi}{N_{s}})T_{-x}T_{-y} \end{eqnarray} \begin{equation} T_{-x}T_{x}=T_{-y}T_{y}=1 \end{equation} Making use of the operators $T_{\pm x}, T_{\pm y}$ and the above commutation relations, we can construct a basic quantum group with the following generators: \begin{equation} J_{+}=\frac{-i}{q-q^{-1}}(T_{-x}+T_{-y}),\hspace{2 cm}J_{-}=\frac{-i}{q-q^{-1}}(T_{x}+T_{y}) \end{equation} \begin{equation} K^{+2}=qT_{-y}T_{x},\hspace{2 cm} K^{-2}=q^{-1}T_{-x}T_{y} \end{equation} where the deformation parameter is given by \begin{equation} q=\exp(i\frac{\pi}{N_{s}}) \end{equation} It is easy to check that these generators obey the standard commutation relations of the quantum group $sl_{q}(2)$ [8]: \begin{equation} [J_{+},J_{-}]=\frac{K^{+2}-K^{-2}}{q-q^{-1}},\hspace{2 cm} K^{+}J_{\pm}K^{-}=q^{\pm 1}J_{\pm} \end{equation} We also find that the generators $J_{\pm}$ and $K^{\pm}$ are conserved quantities of the system under consideration, namely \begin{equation} [J_{\pm},H]=0,\hspace{2 cm} [K^{\pm },H]=0 \end{equation} The above analysis indicates that $sl_{q}(2)$ is the basic symmetry of our system. Furthermore, according to the fundamental principles of quantum mechanics, Eq.(25) and Eq.(26) imply that the Landau levels of the system are degenerate. In what follows we discuss the relation between the degeneracy of the Landau levels and the cyclic representation of $sl_{q}(2)$. Since $N_{s}$ is an integer, from Eq.(24) we see that \begin{equation} q^{2N_{s}}=1 \end{equation} which means that $q$ is a root of unity. In this case, the representation theory of the quantum group has many exotic properties [10,11]. In particular, it possesses cyclic representations, which have neither a highest weight nor a lowest weight [10,11]; the dimension of the irreducible representation is $2N_{s}$ in the case under consideration. Furthermore, without loss of generality, according to Eq.(26) we can simultaneously diagonalize $H$ and $K^{\pm}$. In other words, one can choose a set of basis vectors $\left | n,k\right\rangle=\left |n\right\rangle\otimes\left |k\right\rangle $ to be the simultaneous eigenvectors of the operators $H$ and $K^{\pm}$. That is, we can take \begin{equation} H\left |n,k\right\rangle=E_{n}\left|n,k\right\rangle \end{equation} and \begin{equation} K^{\pm}\left |n,k\right\rangle=q^{\pm (\lambda-2k-2\mu)}\left |n,k\right\rangle \end{equation} where $n=0,1,\ldots,\infty$ labels the energy levels, and $k=0,1,\ldots,2N_{s}-1$ is the new quantum number which distinguishes the different quantum states within the same degenerate energy level.
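The algebra of Eqs.(22)--(25) can also be checked concretely: the commutation relations Eqs.(19)--(21) are realized by the standard clock and shift matrices, with $T_{x}$ acting as the cyclic shift and $T_{y}$ as the clock matrix. The following small numerical sketch (our own illustration, assuming Python with numpy; it is not part of the original derivation) verifies Eq.(25) in this realization: \begin{verbatim}
import numpy as np

Ns = 5                                  # number of flux quanta (example value)
w = np.exp(2j * np.pi / Ns)             # T_y T_x = w T_x T_y, cf. Eq.(19)
q = np.exp(1j * np.pi / Ns)             # deformation parameter, Eq.(24)

# Clock-and-shift realization of the primitive magnetic translations:
# X|j> = |j+1 mod Ns>,  Z|j> = w^j |j>,  so that  Z X = w X Z.
X = np.roll(np.eye(Ns), 1, axis=0)
Z = np.diag(w ** np.arange(Ns))
Xd, Zd = X.conj().T, Z.conj().T         # T_{-x} and T_{-y}

Jp = -1j * (Xd + Zd) / (q - 1/q)        # J_+,    Eq.(22)
Jm = -1j * (X + Z) / (q - 1/q)          # J_-,    Eq.(22)
K2 = q * Zd @ X                         # K^{+2}, Eq.(23)
Km2 = (1/q) * Xd @ Z                    # K^{-2}, Eq.(23)

# The sl_q(2) relations of Eq.(25), in squared form:
assert np.allclose(Jp @ Jm - Jm @ Jp, (K2 - Km2) / (q - 1/q))
assert np.allclose(K2 @ Jp @ np.linalg.inv(K2), q**2 * Jp)
assert np.allclose(K2 @ Jm @ np.linalg.inv(K2), q**(-2) * Jm)
print("sl_q(2) relations hold for Ns =", Ns)
\end{verbatim} This finite realization only verifies the algebra; the cyclic representation relevant for counting the degeneracy, to which we now turn, is $2N_{s}$-dimensional.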
According to the representation theory of quantum groups at a root of unity [12,13,14], the actions of the $sl_{q}(2)$ generators on these basis vectors are given by \begin{equation} J_{+}\left |n,k\right\rangle=[\lambda-\mu-k+1]\left | n,k-1 \right\rangle, \hspace{1.2cm} (1\leq k\leq 2N_{s}-1) \nonumber \end{equation} \begin{equation} J_{+}\left |n,0\right\rangle=\xi^{-1}[\lambda-\mu+1]\left | n,2N_{s}-1\right\rangle \nonumber \end{equation} \begin{equation} J_{-}\left |n,k\right\rangle=\left | n,k+1 \right\rangle,\hspace{1.2cm}(0\leq k \leq 2N_{s}-2) \nonumber \end{equation} \begin{equation} J_{-}\left |n,2N_{s}-1\right\rangle=\xi\left | n,0 \right\rangle \end{equation} where $\lambda,\xi,\mu $ are constants determined by the cyclic properties of the representation of the quantum group, and the notation $[x]=\frac{q^{x}-q^{-x}}{q-q^{-1}}$ has been used. Since the dimension of the irreducible representation space ${\left |n,k\right\rangle }$ is $2N_{s}$, from Eqs.(28), (29) and (30) we see that the degree of degeneracy of the Landau levels is just $2N_{s}$ [15]. This is one of the main conclusions of this paper. In particular, when the boundary of the system approaches infinity (i.e., $L_{1}\rightarrow \infty,L_{2}\rightarrow \infty$), we see that $2N_{s}\rightarrow \infty$. In this case, the system exhibits a continuous degeneracy, which is a well-known result. In summary, we have shown that there is a quantum group symmetry in the Landau problem, that the existence of this symmetry is independent of the choice of gauge, and that the degeneracy of the Landau levels in the system under consideration originates from the quantum group symmetry $sl_{q}(2)$. We have found that under the PBC the degree of degeneracy of the Landau levels is finite, and it is just the dimension of the irreducible cyclic representation of the quantum group $sl_{q}(2)$. When the boundary approaches infinity, the usual result on the degeneracy of Landau levels is recovered. It is worth mentioning that for a particle with spin, for instance an electron moving in a plane, each energy level splits into two due to the additional Zeeman energy; nevertheless, the degree of degeneracy of the energy levels remains $2N_{s}$, except for the ground state, for which the degree of degeneracy is $N_{s}$. The reason is that the upper and lower members of neighbouring split levels overlap, except for the lowest level, which is just the ground state. \vspace{0.5cm} \begin{flushleft} \Large \bf Acknowledgments \end{flushleft} The authors acknowledge Dr. D. F. Wang for valuable discussions and useful suggestions. This research is partly supported by the National Natural Science Foundation of China.
\section{Introduction} Dynamic programming (DP) algorithms have been widely used to solve various NP-hard problems in exponential time. Bellman, Held and Karp showed how DP can be used to solve the \textsc{Travelling Salesman Problem} in $\widetilde{O}(2^n)$\footnote{$f(n)=\widetilde{O}(g(n))$ if $f(n) = O(\log^c(g(n)) g(n))$ for some constant $c$.} time \cite{Bel62,HK62}, and this still remains the most efficient classical algorithm for this problem. Their technique can be used to solve a plethora of different problems \cite{FK10,Bodlaender2012}. The DP approach of Bellman, Held and Karp solves the subproblems corresponding to the subsets of an $n$-element set sequentially, in increasing order of subset size. This typically results in an $\widetilde{\Theta}(2^n)$ time algorithm, as there are $2^n$ distinct subsets. What kind of speedups can we obtain for such algorithms using quantum computers? It is natural to consider applying Grover's search, which is known to speed up some algorithms for NP-complete problems. For example, we can use it to search through the $2^n$ possible assignments of a SAT instance on $n$ variables in $\widetilde{O}(\sqrt{2^n})$ time. However, it is not immediately clear how to apply it to the DP algorithm described above. Recently, Ambainis et al.~presented a quantum algorithm, combining classical precalculation with recursive applications of Grover's search, that solves such DP problems in $\widetilde{O}(1.817^n)$ time, assuming the QRAM model of computation \cite{ABIKPV19}. In their work, they examined the transition graph of such a DP algorithm, which can be seen as a directed $n$-dimensional Boolean hypercube, with edges directed from smaller-weight vertices to larger-weight vertices. A natural question arises: for which other graphs do there exist quantum algorithms that achieve a speedup over the classical DP? In this work, we examine a generalization of the hypercube graph, the $n$-dimensional lattice graph with vertices in $\{0,1,\ldots,D\}^n$. While the classical DP for this graph has running time $\widetilde{\Theta}((D+1)^n)$, as it examines all vertices, we prove that there exists a quantum algorithm (in the QRAM model) that solves this problem in time $\poly(n)^{\log n} T_D^n$ for $T_D < D+1$ (Theorems \ref{thm:main}, \ref{thm:time}). Our algorithm is essentially a generalization of the algorithm of Ambainis et al. We show the following running times for small values of $D$: \begin{table}[H] \label{tbl:intro} \begin{center} \begin{tabular}{c||c|c|c|c|c|c} $D$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ \\ \hline\hline $T_D$ & $1.81692$ & $2.65908$ & $3.52836$ & $4.42064$ & $5.33149$ & $6.25720$ \end{tabular} \end{center} \caption{The complexity of the quantum algorithm.} \end{table} A detailed summary of our numerical results is given in Section \ref{sec:smalld}. Note that the case $D=1$ corresponds to the hypercube, where we obtain the same complexity as Ambainis et al. In our proofs, we extensively use the saddle point method from analytic combinatorics to estimate the asymptotic value of the combinatorial expressions arising from the complexity analysis. We also prove a lower bound on the query complexity of the algorithm for general $D$. Our motivation is to check whether our algorithm could, for example, achieve complexity $\widetilde O((D+1)^{cn})$ for large $D$ for some $c < 1$.
We prove that this is not the case: more specifically, for any $D$, the algorithm performs at least $\widetilde\Omega\left(\left(\frac{D+1}{\e}\right)^n\right)$ queries (Theorem \ref{thm:lb}). As an example application, we apply our algorithm to the \textsc{Set Multicover} problem (SMC), which is a generalization of the \textsc{Set Cover} problem. In this problem, the input consists of $m$ subsets of an $n$-element set, and the task is to calculate the smallest number of these subsets that together cover each element at least $D$ times, possibly with overlap and repetition. While the best known classical algorithm has running time $O(m(D+1)^n)$ \cite{Nederlof08, HWYL10}, our quantum algorithm has running time $\poly(m,n)^{\log n} T_D^n$, improving the exponential complexity (Theorem \ref{thm:smc}). The paper is organized as follows. In Section \ref{sec:prelim}, we formally introduce the $n$-dimensional lattice graph and some of the notation used in the paper. In Section \ref{sec:problem}, we define the generic query problem that models the examined DP. In Section \ref{sec:algo}, we describe our quantum algorithm. In Section \ref{sec:query}, we establish the query complexity of this algorithm and prove the aforementioned lower bound. In Section \ref{sec:time}, we discuss the implementation of this algorithm and establish its time complexity. Finally, in Section \ref{sec:app}, we show how to apply our algorithm to SMC, and discuss other related problems. \section{Preliminaries} \label{sec:prelim} The $n$-dimensional lattice graph is defined as follows. The vertex set is given by $\{0,1,\ldots,D\}^n$, and the edge set consists of directed pairs of vertices $u$ and $v$ such that $v_i = u_i+1$ for exactly one $i$, and $u_j = v_j$ for $j \neq i$. We denote this graph by $Q(D,n)$. Alternatively, this graph can be seen as the Cartesian product of $n$ paths on $D+1$ vertices. The case $D=1$ is known as the Boolean hypercube and is usually denoted by $Q_n$. We define the \emph{weight} of a vertex $x \in V$ as the sum of its coordinates $|x| \coloneqq \sum_{i=1}^n x_i$. Denote $x \leq y$ iff for all $i \in [n]$, $x_i \leq y_i$ holds. If additionally $x \neq y$, denote this relation by $x < y$. Throughout the paper we use the standard notation $[n] \coloneqq \{1,\ldots,n\}$. In Section \ref{sec:smc}, we use the notation $2^{[n]} \coloneqq \{S \mid S \subseteq [n]\}$ for the power set, and the characteristic vector $\chi(S) \in \{0,1\}^n$ of a set $S \subseteq [n]$, defined by $\chi(S)_i = 1$ iff $i \in S$, and $0$ otherwise. We write $f(n) = \poly(n)$ to denote that $f(n) = O(n^c)$ for some constant $c$. We also write $f(n,m) = \poly(n,m)$ to denote that $f(n,m) = O(n^c m^d)$ for some constants $c$ and $d$. For a multivariable polynomial $p(x_1,\ldots,x_m)$, we denote by $[x_1^{c_1}\cdots x_m^{c_m}] p(x_1,\ldots,x_m)$ its coefficient at the multinomial $x_1^{c_1}\cdots x_m^{c_m}$. \section{Path in the hyperlattice} \label{sec:problem} We formulate our generic problem as follows. The input to the problem is a subgraph $G$ of $Q(D,n)$. The problem is to determine whether there is a path from $0^n$ to $D^n$ in $G$. We examine this as a query problem: a single query determines whether an edge $(u,v)$ is present in $G$ or not. Classically, we can solve this problem using a dynamic programming algorithm that computes the value $\text{dp}(v)$ recursively for all $v$, which is defined as $1$ if there is a path from $0^n$ to $v$, and $0$ otherwise. It is calculated by the Bellman, Held and Karp style recurrence \cite{Bel62, HK62}: $$\text{dp}(v) = \bigvee_{(u,v) \in E}\{ \text{dp}(u) \land ((u,v) \in G)\}, \hspace{1cm} \text{dp}(0^n) = 1.$$ The query complexity of this algorithm is $O(n (D+1)^n)$; from this moment on, we refer to it as the \emph{classical dynamic programming algorithm}.
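For concreteness, a direct implementation of this baseline might look as follows (a sketch, our own illustration, not from the original text; \texttt{edge\_present} stands for a single edge query): \begin{verbatim}
from itertools import product

def classical_dp_path(D, n, edge_present):
    """Bellman-Held-Karp style DP over the lattice {0,...,D}^n.
    edge_present(u, v) -> bool answers a single edge query to G."""
    dp = {(0,) * n: True}
    # process the vertices in non-decreasing order of weight |v|
    for v in sorted(product(range(D + 1), repeat=n), key=sum):
        if sum(v) == 0:
            continue
        dp[v] = any(dp[v[:i] + (v[i] - 1,) + v[i + 1:]]
                    and edge_present(v[:i] + (v[i] - 1,) + v[i + 1:], v)
                    for i in range(n) if v[i] > 0)
    return dp[(D,) * n]

# Example: if every edge is present, the path from 0^n to D^n exists.
print(classical_dp_path(2, 3, lambda u, v: True))   # True
\end{verbatim} The sketch makes at most $n$ edge queries per vertex, matching the $O(n(D+1)^n)$ bound above.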
The query complexity is also lower bounded by $\widetilde{\Omega}((D+1)^n)$. Consider the sets of edges $E_W$ connecting the vertices with weights $W$ and $W+1$, $$E_W \coloneqq \{(u,v) \mid (u,v) \in Q(D,n), |u| = W, |v| = W+1\}.$$ Since the total number of edges is equal to $(D+1)^{n-1} Dn$, there is a $W$ such that $|E_W| \geq (D+1)^{n-1} Dn/Dn = (D+1)^{n-1}$ (in fact, one can prove that the largest size is achieved for $W=\lfloor nD/2 \rfloor$ \cite{dBvETK51}, but this is not necessary for the argument). Any such $E_W$ is a cut of $Q(D,n)$, hence any path from $0^n$ to $D^n$ passes through $E_W$. Examine all $G$ that contain exactly one edge from $E_W$, and all other edges. Also examine the graph that contains no edges from $E_W$, and all other edges. In the first case, any such graph contains a desired path, and in the second case there is no such path. To distinguish these cases, one must solve the OR problem on $|E_W|$ variables. Classically, $\Omega(|E_W|)$ queries are needed (see, for example, \cite{BdW02}). Hence, the classical (deterministic and randomized) query complexity of this problem is $\widetilde{\Theta}((D+1)^n)$. This also implies a $\widetilde{\Omega}(\sqrt{(D+1)^n})$ quantum lower bound for this problem \cite{BBBV97}. \section{The quantum algorithm} \label{sec:algo} Our algorithm closely follows the ideas of \cite{ABIKPV19}. We will use the well-known generalization of Grover's search: \begin{theorem}[Variable time quantum search, Theorem 3 in \cite{Amb10}] \label{thm:vts} Let $\mathcal A_1$, $\ldots$, $\mathcal A_N$ be quantum algorithms that compute a function $f : [N] \to \{0,1\}$ and have query complexities $t_1$, $\ldots$, $t_N$, respectively, which are known beforehand. Suppose that for each $\mathcal A_i$, if $f(i) = 0$, then $\mathcal A_i$ outputs $0$ with certainty, and if $f(i) = 1$, then $\mathcal A_i$ outputs $1$ with constant success probability. Then there exists a quantum algorithm with constant success probability that checks whether $f(i) = 1$ for at least one $i$ and has query complexity $O\left(\sqrt{t_1^2+\ldots+t_N^2}\right).$ Moreover, if $f(i) = 0$ for all $i\in[N]$, then the algorithm outputs $0$ with certainty. \end{theorem} Even though Ambainis formulates the main theorem for zero-error inputs, the statement above follows from the construction of the algorithm. Now we describe our algorithm. We solve a more general problem: suppose $s, t \in \{0,1,\ldots,D\}^n$ are such that $s < t$ and we are given a subgraph of the $n$-dimensional lattice with vertices in $$\bigtimes_{i=1}^n \{s_i,\ldots,t_i\},$$ and the task is to determine whether there is a path from $s$ to $t$. We need this generalized problem because our algorithm is recursive and is called on sublattices. Define $d_i \coloneqq t_i-s_i$. Let $n_d$ be the number of indices $i \in [n]$ such that $d_i = d$. Note that the minimum and maximum weights of the vertices of this lattice are $|s|$ and $|t|$, respectively. We call a set of vertices with fixed total weight a \emph{layer}. The algorithm will operate with $K$ layers (numbered $1$ to $K$), with the $k$-th having weight $|s|+W_k$, where $W_{k} \coloneqq \left\lfloor \sum_{d=1}^D \alpha_{k,d} d n_d\right\rfloor$.
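In code, these weights are computed directly from the parameters $\alpha_{k,d}$ and the counts $n_d$; a minimal sketch (our own illustration, in Python): \begin{verbatim}
import math

def layer_weights(alpha, n_d):
    """alpha[k][d] = alpha_{k+1,d} (0-indexed k); n_d[d] = number of
    coordinates i with t_i - s_i = d.  Returns W_1, ..., W_K;
    W_{K+1} is fixed separately as floor((|t| - |s|) / 2)."""
    D = len(n_d) - 1
    return [math.floor(sum(a[d] * d * n_d[d] for d in range(1, D + 1)))
            for a in alpha]

# Example: D = 2, n_1 = 4, n_2 = 6, K = 2 layers, hypothetical alphas
print(layer_weights([[0, 0.20, 0.25], [0, 0.30, 0.35]], [0, 4, 6]))  # [3, 5]
\end{verbatim}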
Denote the set of vertices in this layer by $$\mathcal L_k \coloneqq \left\{v \mid |v| = |s|+W_{k}\right\}.$$ Here, $\alpha_{k,d} \in (0,1/2)$ are constant parameters that have to be determined before we run the algorithm. The choice of $\alpha_{k,d}$ does not depend on the input to the algorithm, just as in \cite{ABIKPV19}. For each $k \in [K]$ and $d \in [D]$, we require that $\alpha_{k,d} < \alpha_{k+1,d}$. In addition to the $K$ layers defined in this way, we also consider the $(K+1)$-th layer $\mathcal L_{K+1}$, which is the set of vertices with weight $|s|+W_{K+1}$, where $W_{K+1}\coloneqq \left\lfloor \frac{|t|-|s|}{2} \right\rfloor$. We can see that the weights $W_1, \ldots, W_{K+1}$ defined in this way are non-decreasing. \bigskip \begin{breakablealgorithm} \label{alg:main} \caption{The quantum algorithm for detecting a path in the hyperlattice.} \textsc{Path($s$, $t$):} \begin{enumerate} \item \label{itm:sc0} Calculate $n_1$, $\ldots$, $n_D$, and $W_1$, $\ldots$, $W_{K+1}$. If $W_{k}=W_{k+1}$ for some $k$, determine whether there exists a path from $s$ to $t$ using classical dynamic programming and return. \item \label{itm:sc1} Otherwise, first perform the precalculation step. Let $\text{dp}(v)$ be $1$ iff there is a path from $s$ to $v$. Calculate $\text{dp}(v)$ for all vertices $v$ such that $|v| \leq |s|+W_1$ using classical dynamic programming. Store the values of $\text{dp}(v)$ for all vertices with $|v| = |s|+W_1$. Let $\text{dp}'(v)$ be $1$ iff there is a path from $v$ to $t$. Symmetrically, we also calculate $\text{dp}'(v)$ for all vertices with $|v| = |t| - W_1$. \item \label{itm:sc2} Define the function $\textsc{LayerPath}(k,v)$ to be $1$ iff there is a path from $s$ to $v$, where $v \in \mathcal L_k$. Implement this function recursively as follows. \begin{itemize} \item $\textsc{LayerPath}(1,v)$ is read out from the stored values. \item For $k > 1$, run VTS over the vertices $u \in \mathcal L_{k-1}$ such that $u < v$. The required value is equal to $$\textsc{LayerPath}(k,v) = \bigvee_u \left\{ \textsc{LayerPath}(k-1,u) \land \textsc{Path}(u,v) \right\}.$$ \end{itemize} \item \label{itm:sc3} Similarly define and implement the function $\textsc{LayerPath}'(k,v)$, which denotes the existence of a path from $v$ to $t$, where $v \in \mathcal L_k'$ ($\mathcal L_k'$ being the layer with weight $|t|-W_k$). To find the final answer, run VTS over the vertices in the middle layer $v \in \mathcal L_{K+1}$ and calculate $$\bigvee_v\left\{ \textsc{LayerPath}(K+1,v) \land \textsc{LayerPath}'(K+1,v)\right\}.$$ \end{enumerate} \end{breakablealgorithm} \section{Query complexity} \label{sec:query} For simplicity, let us examine the lattice $$\bigtimes_{i=1}^n \{0,\ldots,t_i-s_i\},$$ as the analysis is identical. Let the number of positions with maximum coordinate value $d$ be $n_d$. We make the ansatz that the exponential complexity can be expressed as $$T(n_1,\ldots,n_D) \coloneqq T_1^{n_1} T_2^{n_2} \cdot \ldots \cdot T_D^{n_D}$$ for some values $T_1, T_2, \ldots, T_D > 1$ (we could also include $n_0$ and $T_0$; however, $T_0 = 1$ always and does not affect the complexity). We prove it by constructing generating polynomials for the precalculation and quantum search steps, and then approximating the required coefficients asymptotically. We use the saddle point method that is frequently used for such estimations, specifically the theorems developed in \cite{BM04}.
\subsection{Generating polynomials} \label{sec:poly} First we estimate the number of edges of the hyperlattice queried in the precalculation step. The algorithm queries edges incoming to the vertices of weight at most $W_1$, and each vertex can have at most $n$ incoming edges. The size of any layer with weight less than $W_1$ is at most the size of the layer with weight exactly $W_1$, as the sizes of the layers are non-decreasing up to weight $W_{K+1}$ \cite{dBvETK51}. Therefore, the number of queries during the precalculation is at most $n \cdot W_1 \cdot |\mathcal L_1| \leq n^2 D |\mathcal L_1|$, as $W_1 \leq nD$. Since we are interested in the exponential complexity, we can omit the factors polynomial in $n$ and $D$; thus the exponential query complexity of the precalculation is given by $|\mathcal L_1|$. Now let $P_d(x) \coloneqq \sum_{i=0}^d x^i$. The number of vertices of weight $W_1$ can be written as the coefficient at $x^{W_1}$ of the generating polynomial $$P(x) \coloneqq \prod_{d=0}^D P_d(x)^{n_d}.$$ Indeed, each $P_d(x)$ in the product corresponds to a single position $i \in [n]$ with maximum value $d$, and the power of $x$ in that factor represents the coordinate of the vertex in this position. Therefore, the total power that $x$ is raised to is equal to the total weight of the vertex, and the coefficient at $x^{W_1}$ is equal to the number of vertices with weight $W_1$. Since the total query complexity of the algorithm is lower bounded by this coefficient, we have \begin{equation} \label{eq:prec} T(n_1,\ldots,n_D) \geq \left[x^{W_1}\right]P(x). \end{equation} Similarly, we construct polynomials for the \textsc{LayerPath} calls. Consider the total complexity of calling \textsc{LayerPath} recursively down to some level $1 \leq k \leq K$ and then calling \textsc{Path} for a sublattice between levels $\mathcal L_k$ and $\mathcal L_{k+1}$. Denote the vertices chosen by the algorithm at level $i$ (where $k \leq i \leq K+1$) by $v^{(i)}$. The \textsc{Path} call is performed on a sublattice between the vertices $v^{(k)}$ and $v^{(k+1)}$, see Fig.~\ref{fig:rhombus}. \begin{figure}[h] \centering \includegraphics{rhombus.pdf} \caption{The choice of the vertices $v^{(i)}$ and the application of \textsc{Path} on the sublattice.} \label{fig:rhombus} \end{figure} Define $$S_{k,d}(x_{k,k},\ldots,x_{k,K+1}) \coloneqq \sum_{i=0}^d T_i^2 \cdot \sum_{\substack{p_k,\ldots,p_{K+1}\in[0,d]\\ p_{k+1} \leq \ldots \leq p_{K+1} \\ p_{k+1}-p_k=i}}{}\prod_{j=k}^{K+1} x_{k,j}^{p_j}.$$ Again, this corresponds to a single coordinate. The variable $x_{k,j}$ corresponds to the vertex $v^{(j)}$, and the power $p_j$ corresponds to the value of $v^{(j)}$ in that coordinate. Examine the following multivariate polynomial: $$S_k(x_{k,k},\ldots,x_{k,K+1}) \coloneqq \prod_{d=0}^D S_{k,d}^{n_d}(x_{k,k},\ldots,x_{k,K+1}).$$ We claim that the coefficient $$\left[x_{k,k}^{W_k} \cdots x_{k,K+1}^{W_{K+1}}\right]S_k(x_{k,k},\ldots,x_{k,K+1})$$ is the required total complexity squared. First of all, note that the value of this coefficient is the sum of $t^2$, where $t$ is the variable for the running time of \textsc{Path} between $v^{(k)}$ and $v^{(k+1)}$, over all choices of the vertices $v^{(k)}$, $v^{(k+1)}$, $\ldots$, $v^{(K+1)}$. Indeed, the powers $p_j$ encode the values of the coordinates of $v^{(j)}$, and a factor of $T_i^2$ is present for each multinomial that has $p_{k+1}-p_k=i$ (that is, $v^{(k+1)}_l-v^{(k)}_l=i$ for the corresponding position $l$). Then, we need to show that the sum of $t^2$ equals the examined running time squared.
Note that the choice of each vertex $v^{(j)}$ is performed using VTS. In general, if we perform VTS on algorithms with running times $s_1$, $\ldots$, $s_N$, then the total squared running time is equal to $s_1^2+\ldots+s_N^2$ by Theorem \ref{thm:vts}. By repeating this argument inductively at the choice of each vertex $v^{(j)}$, we obtain that the final squared running time is indeed the sum of all $t^2$. Therefore, the square of the total running time of the algorithm is lower bounded by \begin{equation} \label{eq:src} T(n_1,\ldots,n_D)^2 \geq \left[x_{k,k}^{W_k} \cdots x_{k,K+1}^{W_{K+1}}\right]S_k(x_{k,k},\ldots,x_{k,K+1}). \end{equation} Together, the inequalities (\ref{eq:prec}) and (\ref{eq:src}) allow us to estimate $T$. The total time complexity of the quantum algorithm is twice the sum of the coefficients given in Eq.~(\ref{eq:prec}) and (\ref{eq:src}) for all $k \in [K]$ (twice because of the calls to $\textsc{LayerPath}$ and its symmetric counterpart $\textsc{LayerPath}'$). This is upper bounded by $2K$ times the maximum of these coefficients. Since $2K$ is a constant, and there are $O(\log n)$ levels of recursion (see the next section), in total this contributes only a $(2K)^{O(\log n)} = \poly(n)$ factor to the total complexity of the quantum algorithm. \subsubsection{Depth of recursion} Note that the algorithm stops the recursive calls if for at least one $k$ we have $W_k = W_{k+1}$, in which case it runs the classical dynamic programming on the whole sublattice at step \ref{itm:sc0}. That happens when $$\left\lfloor \sum_{d=1}^D \alpha_{k,d} d n_d\right\rfloor = \left\lfloor \sum_{d=1}^D \alpha_{k+1,d} d n_d\right\rfloor.$$ If this is true, then we also have $$\sum_{d=1}^D \alpha_{k+1,d} d n_d - \sum_{d=1}^D \alpha_{k,d} d n_d = c$$ for some constant $c < 1$. By regrouping the terms, we get $$\sum_{d=1}^D (\alpha_{k+1,d} - \alpha_{k,d}) d n_d = c.$$ Denote $h \coloneqq \min_{d \in [D]} \{ \alpha_{k+1,d} - \alpha_{k,d} \}$. Then $$\sum_{d=1}^D dn_d \leq \frac{c}{h}.$$ Note that the left hand side is the maximum total weight of a vertex. However, at each recursive call the difference between the minimum and maximum total weights of the vertices is halved, since the VTS call at step \ref{itm:sc3} runs over the vertices with weight equal to half of the current difference. Since $c$ and $h$ are constants, after $O(\log(nD)) = O(\log n)$ recursive calls the recursion stops. Moreover, the classical dynamic programming then runs on a sublattice of constant size, hence it adds only a factor of $O(1)$ to the overall complexity. Lastly, we can address the contribution of the constant factor of VTS from Theorem \ref{thm:vts} to the complexity of our algorithm. At one level of recursion there are $K+1$ nested applications of VTS, and there are $O(\log n)$ levels of recursion. Therefore, the total overhead incurred is $O(1)^{O(K \log n)} = \poly(n)$, since $K$ is a constant. \subsection{Saddle point approximation} In this section, we show how to describe the tight asymptotic complexity of $T(n_1,\ldots,n_D)$ using the saddle point method (a detailed review can be found in \cite{FS09}, Chapter VIII). Our main technical tool will be the following theorem. \begin{theorem} \label{thm:asy} Let $p_1(x_1,\ldots,x_m)$, $\ldots$, $p_D(x_1,\ldots,x_m)$ be polynomials with non-negative coefficients. Let $n$ be a positive integer and $b_1, \ldots, b_D$ be non-negative rational numbers such that $b_1 + \ldots + b_D = 1$ and $b_d n$ is an integer for all $d \in [D]$.
Let $a_{i,d}$ be rational numbers (for $i \in [m]$, $d \in [D]$) and $\alpha_i \coloneqq a_{i,1}b_1+\ldots+a_{i,D}b_D$. Suppose that $\alpha_i n$ are integers for all $i \in [m]$. Then \begin{enumerate}[(1)] \item \label{itm:upp} $\left[ x_1^{\alpha_1 n}\cdots x_m^{\alpha_m n}\right]\prod_{d=1}^D p_d(x_1,\ldots,x_m)^{b_d n} \leq \left(\inf_{x_1,\ldots,x_m>0} \prod_{d=1}^D \left(\frac{p_d(x_1,\ldots,x_m)}{x_1^{a_{1,d}}\cdots x_m^{a_{m,d}}}\right)^{b_d}\right)^n$ \item \label{itm:low} $\left[ x_1^{\alpha_1 n}\cdots x_m^{\alpha_m n}\right]\prod_{d=1}^D p_d(x_1,\ldots,x_m)^{b_d n} = \Omega\left(\left(\inf_{x_1,\ldots,x_m>0} \prod_{d=1}^D \left(\frac{p_d(x_1,\ldots,x_m)}{x_1^{a_{1,d}}\cdots x_m^{a_{m,d}}}\right)^{b_d}\right)^n \right)$, where the $\Omega$ is with respect to the variable $n$. \end{enumerate} \end{theorem} \begin{proof} To prove this, we use the following saddle point approximation.\footnote{Setting $\gamma = 1$ in the statement of the original theorem.} \begin{theorem}[Saddle point method, Theorem 2 in \cite{BM04}] \label{thm:saddle} Let $p(x_1,\ldots,x_m)$ be a polynomial with non-negative coefficients. Let $\alpha_1, \ldots, \alpha_m$ be rational numbers, and let $n_i$ be the increasing sequence of all integers $j$ such that all $\alpha_k j$ are integers and $\left[x_1^{\alpha_1 j}\cdots x_m^{\alpha_m j}\right] p(x_1,\ldots,x_m)^j \neq 0$. Then $$\lim_{i \to \infty} \frac{1}{n_i} \log \left( \left[x_1^{\alpha_1 n_i}\cdot \ldots\cdot x_m^{\alpha_m n_i}\right] p(x_1,\ldots,x_m)^{n_i} \right)= \inf_{x_1,\ldots,x_m > 0} \log \left( \frac{p(x_1,\ldots,x_m)}{x_1^{\alpha_1} \cdot \ldots \cdot x_m^{\alpha_m}} \right).$$ \end{theorem} Let $p(x_1,\ldots,x_m) \coloneqq \prod_{d=1}^D p_d(x_1,\ldots,x_m)^{b_d}$, then \begin{align*} \frac{p(x_1,\ldots,x_m)}{x_1^{\alpha_1} \cdots x_m^{\alpha_m}} = \frac{\prod_{d=1}^D p_d(x_1,\ldots,x_m)^{b_d}}{x_1^{\alpha_1} \cdots x_m^{\alpha_m}} = \prod_{d=1}^D \frac{p_d(x_1,\ldots,x_m)^{b_d}}{x_1^{a_{1,d}b_d}\cdots x_m^{a_{m,d}b_d}} = \prod_{d=1}^D \left( \frac{p_d(x_1,\ldots,x_m)}{x_1^{a_{1,d}}\cdots x_m^{a_{m,d}}} \right)^{b_d}. \end{align*} For the first part, as $p(x_1,\ldots,x_m)^n$ has non-negative coefficients, the coefficient at the multinomial $x_1^{\alpha_1 n} \cdots x_m^{\alpha_m n}$ is upper bounded by $\inf_{x_1,\ldots,x_m>0} \frac{p(x_1,\ldots,x_m)^n}{x_1^{\alpha_1 n} \cdots x_m^{\alpha_m n}} = \left(\inf_{x_1,\ldots,x_m>0} \frac{p(x_1,\ldots,x_m)}{x_1^{\alpha_1} \cdots x_m^{\alpha_m}}\right)^n$. The second part follows directly from Theorem \ref{thm:saddle}. \end{proof}
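As a quick numerical illustration of part (1) in the simplest setting $D = m = 1$ with $p_1(x) = 1+x$, the bound states that $\binom{n}{\alpha n} \leq \left(\inf_{x>0}(1+x)/x^\alpha\right)^n$; the following sketch (our own illustration, assuming Python with numpy; not part of the analysis) confirms it: \begin{verbatim}
import math
import numpy as np

n, alpha = 60, 0.3                        # alpha * n must be an integer
coeff = math.comb(n, round(alpha * n))    # [x^{alpha n}] (1 + x)^n
xs = np.linspace(0.01, 2.0, 100000)
bound = np.min((1 + xs) / xs**alpha) ** n
print(coeff, bound, coeff <= bound)       # the bound holds, and by part (2)
                                          # it is tight up to poly(n) factors
\end{verbatim}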
\subsubsection{Optimization program} To determine the complexity of the algorithm, we construct the following optimization problem. Recall that Algorithm \ref{alg:main} is specified by the number of layers $K$ and the constants $\alpha_{k,d}$ that determine the weights of the layers, so assume they are fixed known numbers. Assume that the $\alpha_{k,d}$ are all rational numbers between $0$ and $1/2$ for $k \in [K]$; indeed, we can approximate any real number with arbitrary precision by a rational number. Also let $T_0 = 1$ and $\alpha_{K+1,d} = 1/2$ for all $d \in [D]$ for convenience. Examine the following program $\text{OPT}(D,K,\{\alpha_{k,d}\})$: \begin{align*} \text{minimize } T_D \hspace{1cm}\text{s.t.}\hspace{1cm} & T_d \geq \frac{P_d(x)}{x^{\alpha_{1,d}d}} & \forall d \in [D]\\ & T_d^2 \geq \frac{S_{k,d}(x_{k,k},\ldots,x_{k,K+1})}{x_{k,k}^{\alpha_{k,d}d}\cdots x_{k,K+1}^{\alpha_{K+1,d}d}} & \forall d \in [D], \forall k\in [K]\\ & T_d \geq 1 & \forall d \in [D]\\ & x > 0 \\ & x_{k,j} > 0 & \forall k \in [K], \forall j \in \{k,\ldots,K+1\} \end{align*} Let $n\coloneqq n_1+\ldots+n_D$ and $\alpha_k \coloneqq \frac{\sum_{d=1}^D \alpha_{k,d} d n_d}{n}$. Suppose that $T_1, \ldots, T_D$ form a feasible point of the program. Then by Theorem \ref{thm:asy} (\ref{itm:upp}) (setting $b_i \coloneqq n_i/n$ and $a_{i,d} \coloneqq \alpha_{i,d} d$) we have $$[x^{\alpha_1 n}] P(x) \leq \inf_{x>0} \prod_{d=1}^D \left(\frac{P_d(x)}{x^{\alpha_{1,d}d}}\right)^{n_d} \leq T_1^{n_1}\cdots T_D^{n_D}.$$ Similarly, \begin{align*} [x_{k,k}^{\alpha_k n}\cdots x_{k,K+1}^{\alpha_{K+1} n}] S_k(x_{k,k},\ldots,x_{k,K+1}) &\leq \inf_{x_{k,k},\ldots,x_{k,K+1}>0} \prod_{d=1}^D \left(\frac{S_{k,d}(x_{k,k},\ldots,x_{k,K+1})}{x_{k,k}^{\alpha_{k,d}d}\cdots x_{k,K+1}^{\alpha_{K+1,d}d}}\right)^{n_d} \\ &\leq (T_1^{n_1}\cdots T_D^{n_D})^2. \end{align*} Therefore, the program provides an upper bound on the complexity. There are two subtleties that we need to address for correctness. \begin{itemize} \item The numbers $\alpha_k n$ might not be integers; in Algorithm \ref{alg:main}, the weights of the layers are defined by $W_k = \lfloor \alpha_k n\rfloor$. This is a problem, since the inequalities in the program use precisely the numbers $\alpha_{k,d}$. Examine the coefficient $[x_1^{\lfloor\alpha_1 n\rfloor}\cdots x_m^{\lfloor\alpha_m n\rfloor}]p(x_1,\ldots,x_m)^n$ in this general case (when we need to round the powers). Let $\delta_k \coloneqq \alpha_k n - \lfloor \alpha_k n \rfloor$, where $0 \leq \delta_k < 1$. Then, by Theorem \ref{thm:asy} (\ref{itm:upp}), \begin{align*} \left[x_1^{\lfloor\alpha_1 n\rfloor}\cdots x_m^{\lfloor\alpha_m n\rfloor}\right]p(x_1,\ldots,x_m)^n &\leq \inf_{x_1,\ldots,x_m \geq 0} \frac{p(x_1,\ldots,x_m)^n}{x_1^{\alpha_1 n - \delta_1}\cdots x_m^{\alpha_m n - \delta_m}} = (*) \end{align*} Now let $\hat x_1,\ldots,\hat x_m$ be the arguments that achieve $\inf_{x_1,\ldots,x_m \geq 0} \frac{p(x_1,\ldots,x_m)}{x_1^{\alpha_1}\cdots x_m^{\alpha_m}}$. Since $0 \leq \delta_k < 1$, we have $\hat x_k^{\delta_k} \leq \max\{\hat x_k,1\}$. Hence \begin{align*} (*) &\leq (\hat x_1^{\delta_1}\cdots \hat x_m^{\delta_m})\cdot\frac{p(\hat x_1,\ldots,\hat x_m)^n}{\hat x_1^{\alpha_1 n}\cdots \hat x_m^{\alpha_m n}} \leq \left(\prod_{k=1}^m \max\{\hat x_k,1\}\right) \cdot \left(\inf_{x_1,\ldots,x_m \geq 0} \frac{p(x_1,\ldots,x_m)}{x_1^{\alpha_1}\cdots x_m^{\alpha_m}}\right)^n. \end{align*} As the additional factor is a constant, we can ignore it in the complexity. \item The second issue arises when $W_k=W_{k+1}$ for some $k$. Then, according to Algorithm \ref{alg:main}, we run the classical algorithm with complexity $\widetilde{\Theta}((D+1)^n)$. However, in that case $n$ is constant (see Section \ref{sec:poly}, Depth of recursion), which contributes only a constant factor to the complexity. \end{itemize} \subsubsection{Optimality of the program} At the start of the analysis, we made the assumption that the exponential complexity $T(n_1,\ldots,n_D)$ can be expressed as $T_1^{n_1}\cdots T_D^{n_D}$.
Here we show that the optimization program (which gives an upper bound on the complexity) can indeed achieve such a value and gives the best possible solution. \begin{itemize} \item First, we prove that $\text{OPT}(D,K,\{\alpha_{k,d}\})$ has a feasible solution. For that, we need to show that all polynomials in the program can be upper bounded by a constant for some fixed values of the variables. First of all, $\frac{P_d(x)}{x^{\alpha_{1,d}d}}$ is upper bounded by $d+1$ (setting $x=1$). Now fix $k$ and examine the values $\frac{S_{k,d}(x_{k,k},\ldots,x_{k,K+1})}{x_{k,k}^{\alpha_{k,d}d}\cdots x_{k,K+1}^{\alpha_{K+1,d}d}}$. Examine only assignments of the variables $x_{k,j}$ such that $x_{k,k}x_{k,k+1}=1$ and $x_{k,j}=1$ for all other $j > k+1$. Now we write the polynomial as a univariate polynomial $S_{k,d}(y) \coloneqq S_{k,d}(1/y,y,1,1,\ldots,1)$. Note that for any summand of $S_{k,d}(y)$, if it contains some $T_i^2$ as a factor, then it is of the form $x_{k,k}^{p_k}x_{k,k+1}^{p_k+i}\cdot T_i^2 = y^i T_i^2$. Hence the polynomial can be written as $S_{k,d}(y) = \sum_{i=0}^d c_i y^i T_i^2$ for some constants $c_0, \ldots, c_d$. From this we can rewrite the corresponding program inequality and express $T_d^2$: \begin{align} T_d^2 &\geq \frac{\sum_{i=0}^d c_i y^i T_i^2}{y^{(\alpha_{k+1,d}-\alpha_{k,d})d}} \label{eq:qs}\\ T_d^2 &\geq \frac{\sum_{i=0}^{d-1} c_i y^i T_i^2}{y^{(\alpha_{k+1,d}-\alpha_{k,d})d}} + y^{(1-\alpha_{k+1,d}+\alpha_{k,d})d}c_d T_d^2 \nonumber\\ T_d^2 &\geq \frac{1}{1-y^{(1-\alpha_{k+1,d}+\alpha_{k,d})d}c_d} \cdot \frac{\sum_{i=0}^{d-1} c_i y^i T_i^2}{y^{(\alpha_{k+1,d}-\alpha_{k,d})d}}. \nonumber \end{align} Note that the $c_i$ are constants that do not depend on the $T_i$. If the right hand side is negative, then it follows that the original inequality Eq.~(\ref{eq:qs}) does not hold. Thus we need to pick $y$ such that the right hand side is positive for all $d$. Hence we require that $$y < \left(\frac{1}{c_d}\right)^{\frac{1}{(1-\alpha_{k+1,d}+\alpha_{k,d})d}}.$$ Since the right hand side is a constant that does not depend on the $T_i$, we can pick a $y$ that satisfies this inequality for all $d$. It then follows that all $T_i$ are also upper bounded by constants (by induction on $i$). \item Now the question remains whether the optimal solution to $\text{OPT}(D,K,\{\alpha_{k,d}\})$ gives the optimal complexity. That is, is the complexity $T_1^{n_1}\cdots T_D^{n_D}$ given by the optimal solution of the optimization program such that $T_D$ is the smallest possible? Suppose that indeed the complexity of the algorithm is upper bounded by $T_1^{n_1}\cdots T_D^{n_D}$ for some $T_1$, $\ldots$, $T_D$. We will derive a corresponding feasible point for the optimization program. Examine the complexity of the algorithm for $n_1 = b_1 n, \ldots, n_D = b_D n$ for some fixed rationals $b_i$ such that $b_1+\ldots+b_D=1$. The coefficients of the polynomials $P$ and $S_k$ give the complexity of the corresponding parts of the algorithm (the precalculation, and the quantum search up to the $k$-th level, respectively). Such coefficients are of the form $\left[ x_1^{\alpha_1 n}\cdots x_m^{\alpha_m n}\right]\prod_{d=1}^D p_d(x_1,\ldots,x_m)^{n_d}$. Let $A_d \coloneqq T_d$ if $p = P$, and $A_d \coloneqq T_d^2$ if $p = S_k$.
Then we have $$A_1^{n_1}\cdots A_D^{n_D} \geq \left[ x_1^{\alpha_1 n}\cdots x_m^{\alpha_m n}\right]\prod_{d=1}^D p_d(x_1,\ldots,x_m)^{n_d} = (*)$$ On the other hand, $$(*) = \Omega\left(\left(\inf_{x_1,\ldots,x_m>0} \prod_{d=1}^D \left(\frac{p_d(x_1,\ldots,x_m)}{x_1^{a_{1,d}}\cdots x_m^{a_{m,d}}}\right)^{b_d}\right)^n \right)$$ when $n$ grows large, by Theorem \ref{thm:asy} (\ref{itm:low}) (setting $a_{i,d} \coloneqq \alpha_{i,d} d$). Then, in the limit $n \to \infty$, we have \begin{equation} \label{eq:lb} A_1^{b_1}\cdots A_D^{b_D} \geq \inf_{x_1,\ldots,x_m>0} \prod_{d=1}^D \left(\frac{p_d(x_1,\ldots,x_m)}{x_1^{a_{1,d}}\cdots x_m^{a_{m,d}}}\right)^{b_d}. \end{equation} Now let $\Delta_{D-1}$ be the standard $(D-1)$-simplex defined by $\{ b \in \mathbb R^D \mid b_1+\ldots+b_D = 1, b_d \geq 0\}$. Define $F_d(x) \coloneqq \frac{p_d(x_1,\ldots,x_m)}{x_1^{a_{1,d}}\cdots x_m^{a_{m,d}}}$, and $F(b,x) \coloneqq \prod_{d=1}^D F_d(x)^{b_d}$ for $b \in \Delta_{D-1}$ and $x \in \mathbb R_{> 0}^m$. First, we prove that for a fixed $b$, the function $F(b,x)$ is strictly convex. Examine the polynomial $p_d(x_1,\ldots,x_m)$, which is either $P_d(x)$ or $S_{k,d}(x_{k,k},\ldots,x_{k,K+1})$. It was shown in \cite{Good57}, Theorem 6.3, that if the coefficients of $p_d(x_1,\ldots,x_m)$ are non-negative, and the points $(c_1,\ldots,c_m)$ at which $$\left[x_1^{c_1}\cdots x_m^{c_m}\right]p_d(x_1,\ldots,x_m) > 0$$ linearly span an $m$-dimensional space, then $\log(F_d(x))$ is a strictly convex function. If $p_d = P_d$, then this property follows immediately, because there is just one variable $x$ and the polynomial is non-constant. For $p_d = S_{k,d}$, the polynomial consists of summands of the form $T_{c_{k+1}-c_{k}}^2 x_{k,k}^{c_k}x_{k,k+1}^{c_{k+1}}\cdots x_{k,K+1}^{c_{K+1}}$, for $c_k \leq c_{k+1} \leq \ldots \leq c_{K+1}$. Note that the coefficient $T_{c_{k+1}-c_{k}}^2$ is positive. Thus the points $(c_k,\ldots,c_{K+1}) = (0,\ldots,0,1,\ldots,1)$ indeed linearly span a $(K-k+2)$-dimensional space. Therefore, $\log(F_d(x))$ is strictly convex. Then the function $\sum_{d=1}^D b_d \log(F_d(x)) = \log(F(b,x))$ is also strictly convex (for fixed $b$), as a sum of strictly convex functions is strictly convex. Therefore, $F(b,x)$ is strictly convex as well, and the argument $\hat x(b)$ achieving $\inf_{x \in \mathbb R_{> 0}^m} F(b,x)$ is unique. Let $\hat F_d(b) \coloneqq F_d(\hat x(b))$ and define $D$ subsets of the simplex $C_d := \{b \in \Delta_{D-1} \mid \hat F_d(b) \leq A_d \}$. We will apply the following result to these sets: \begin{theorem}[Knaster-Kuratowski-Mazurkiewicz lemma \cite{KKM29}] Let the vertices of $\Delta_{D-1}$ be labeled by integers from $1$ to $D$. Let $C_1$, $\ldots$, $C_D$ be a family of closed sets such that for any $I \subseteq [D]$, the convex hull of the vertices labeled by $I$ is covered by $\cup_{d \in I} C_d$. Then $\cap_{d \in [D]} C_d \neq \varnothing$. \end{theorem} We check that the conditions of the lemma apply to our sets. First, note that $F(b,x)$ is continuous and strictly convex for a fixed $b$; hence $\hat x(b)$ is continuous, and thus $\hat F_d(b)$ is continuous as well. Therefore, the ``threshold'' sets $C_d$ are closed. Secondly, let $I \subseteq [D]$ and examine a point $b$ in the convex hull of the simplex vertices labeled by $I$. For such a point, we have $b_d = 0$ for all $d \not\in I$. For at least one of the indices $d \in I$ we must have $\hat F_d(b) \leq A_d$, as otherwise the inequality in Eq.~(\ref{eq:lb}) would be contradicted.
Note that it was stated only for rational $b$, but since the $\hat F_d(b)$ are continuous and any real number can be approximated by a rational number to arbitrary precision, the inequality also holds for real $b$. Thus any such $b$ is indeed covered by $\cup_{d \in I} C_d$. Therefore, we can apply the lemma, and it follows that there exists a point $b \in \Delta_{D-1}$ such that $A_d \geq \hat F_d(b)$ for all $d \in [D]$. The corresponding point $\hat x(b)$ is a feasible point for the examined set of inequalities in the optimization program. \end{itemize} \subsubsection{Total complexity} Finally, we argue that there exists a choice of $\{\alpha_{k,d}\}$ such that $$\OPT(D,K,\{\alpha_{k,d}\}) < D+1.$$ Examine the algorithm with only $K=1$; the optimal complexity for any $K > 1$ cannot be larger, as we can simulate $K$ levels with $K+1$ levels by setting $\alpha_{2,d}=\alpha_{1,d}+\epsilon$ for $\epsilon \to 0$ for all $d \in [D]$. For simplicity, denote $\alpha_d := \alpha_{1,d}$. \begin{itemize} \item First, examine the precalculation inequalities in $\OPT(D,1,\{\alpha_{1,d}\})$. For any values of $\alpha_{1,d}$, if we set $x=1$, we have $\frac{P_d(x)}{x^{\alpha_d d}} = \frac{\sum_{i=0}^d x^i} {x^{\alpha_d d}} = d+1$. The derivative is equal to $$\left( \frac{\sum_{i=0}^d x^i} {x^{\alpha_d d}} \right)' = \frac{x^{\alpha_d d} \cdot \sum_{i=1}^d i x^{i-1} - \alpha_d d x^{\alpha_d d - 1} \cdot \sum_{i=0}^d x^i}{x^{2\alpha_d d}} = \frac{d(d+1)}{2}-\alpha_d d(d+1)$$ at the point $x = 1$. Thus, when $\alpha_d < \frac{1}{2}$, the derivative is positive. This means that for arbitrary $\alpha_d < \frac{1}{2}$, there exists some $x(d)$ such that $\frac{P_d(x)}{x^{\alpha_d d}} < d+1$, and $\frac{P_d(x)}{x^{\alpha_d d}}$ grows monotonically on $x \in [x(d),1]$. Thus, for an arbitrary setting of $\{\alpha_d\}$ such that $\alpha_d < \frac{1}{2}$ for all $d \in [D]$, we can take $\hat x \coloneqq \max_{d \in [D]} \{ x(d) \}$ as the common parameter, in which case $\frac{P_d(\hat x)}{\hat x^{\alpha_d d}} < d+1$ for all $d$. \item Now examine the set of quantum search inequalities. Let $y \coloneqq x_{1,1}$ and $z \coloneqq x_{1,2}$ for simplicity. Then these inequalities are given by $$T_d^2 \geq S_{1,d}(y,z) = \frac{\sum_{i=0}^d T_i^2 \sum_{p=0}^{d-i} y^p z^{p+i}}{y^{\alpha_d d} z^{d/2}}.$$ Now restrict the variables to the condition $yz = 1$. In that case, the expression above simplifies to $$S_{1,d}(z) \coloneqq \frac{\sum_{i=0}^d T_i^2 \sum_{p=0}^{d-i} z^i}{y^{\frac d 2 + d\left(\alpha_d-\frac{1}{2}\right)} z^{d/2}} = \left(\sum_{i=0}^d T_i^2 (d-i+1) z^i\right)\cdot z^{d\left(\alpha_d-\frac{1}{2}\right)}.$$ We now find values of $z$ and $\alpha_1, \ldots, \alpha_D$ such that $S_{1,d}(z) < (d+1)^2$ for all $d \in [D]$, where $T_1, \ldots, T_D$ are any values with $T_d \leq d+1$ for all $d \in [D]$. Denote by $\hat S_{1,d}(z)$ the value of $S_{1,d}(z)$ with $T_i = i+1$ for all $i$; it suffices to ensure $\hat S_{1,d}(z) < (d+1)^2$. Now let $T_d$ be the maximum of $\frac{P_d(\hat x)}{\hat x^{\alpha_d d}}$ from the previous bullet and $\sqrt{\hat S_{1,d}(z)}$. Then $T_d < d+1$, and we have both $T_d \geq \frac{P_d(\hat x)}{\hat x^{\alpha_d d}}$ and $T_d^2 \geq \hat S_{1,d}(z) \geq S_{1,d}(z)$, since $S_{1,d}(z)$ cannot become larger when the $T_i$ decrease. Now we show how to find such $z$ and $\alpha_1, \ldots, \alpha_D$. Examine the sum in the polynomial part of $\hat S_{1,d}(z)$: $$\sum_{i=0}^d (i+1)^2 (d-i+1) z^i = (d+1) + \sum_{i=1}^d (i+1)^2 (d-i+1) z^i.$$ Examine the second part of the sum.
We can find a sufficiently small value of $z \in (0,1)$ such that this part is smaller than any fixed $\epsilon > 0$ for all $d \in [D]$. Now let $\alpha_d = \frac{1}{2}-\frac{c}{d}$ for some constant $c > 0$. Then $$z^{d\left(\alpha_d-\frac{1}{2}\right)} = z^{-c}$$ for all $d \in [D]$. Thus, the total value of the sum is now at most $(d+1+\epsilon)z^{-c}$. As $z^{-1} > 1$, take a sufficiently small value of $c$ so that this value is at most $(d+1)^2$. \end{itemize} Therefore, putting everything together, we have the main result: \begin{theorem} \label{thm:main} There exists a bounded-error quantum algorithm that solves the path in the $n$-dimensional lattice problem using $\widetilde O(T_D^n)$ queries, where $T_D < D+1$. The optimal value of $T_D$ can be found by optimizing $\OPT(D,K,\{\alpha_{k,d}\})$ over $K$ and $\{\alpha_{k,d}\}$. \end{theorem} \subsection{Complexity for small \texorpdfstring{$\boldsymbol{D}$}{D}} \label{sec:smalld} To estimate the complexity for small values of $D$ and $K$, we have optimized the value of $\OPT(D,K,\{\alpha_{k,d}\})$ using Mathematica (minimizing over the values of $\alpha_{k,d}$). Table \ref{tbl:numerical} compiles the results obtained by the optimization. In the case $D=1$, we recovered the complexity of the quantum algorithm from \cite{ABIKPV19} for the path in the hypercube problem, which is a special case of our algorithm. \begin{table}[ht] \begin{center} \begin{tabular}{c||c|c|c|c|c|c} & $D=1$ & $D=2$ & $D=3$ & $D=4$ & $D=5$ & $D=6$ \\ \hline\hline $K=1$ & $1.86793$ & $2.76625$ & $3.68995$ & $4.63206$ & $5.58735$ & $6.55223$ \\ \hline $K=2$ & $1.82562$ & $2.67843$ & $3.55933$ & $4.46334$ & $5.38554$ & $6.32193$ \\ \hline $K=3$ & $1.81819$ & $2.66198$ & $3.53322$ & $4.42759$ & $5.34059$ & $6.26840$ \\ \hline $K=4$ & $1.81707$ & $2.65939$ & $3.52893$ & $4.42148$ & $5.33263$ & $6.25862$ \\ \hline $K=5$ & $1.81692$ & $2.65908$ & $3.52836$ & $4.42064$ & $5.33149$ & $6.25720$ \end{tabular} \end{center} \caption{The complexity of the quantum algorithm for small values of $D$ and $K$.} \label{tbl:numerical} \end{table} For $K=1$, we were able to estimate the complexity for up to $D=18$. Figure \ref{fig:k1} shows the values of the difference between $D+1$ and $T_D$ in this range. \begin{figure}[ht] \centering \begin{tikzpicture}[scale=0.85] \begin{axis}[ width=0.7\textwidth, height=\axisdefaultheight, axis lines = left, xlabel={$D$}, ylabel={$D+1-T_D$}, xmin=0, xmax=18, ymin=0, ymax=0.6, xtick={5,10,15}, ytick={0.1,0.2,0.3,0.4,0.5,0.6}, ymajorgrids=true, grid style=dashed, ] \addplot[ only marks, color=teal, mark size=2pt, ] coordinates { (1, 2-1.8679291102114184) (2, 3-2.7662504942190176) (3, 4-3.6899390889963155) (4, 5-4.632054318702819) (5, 6-5.5873596338697835) (6, 7-6.55222537443972) (7, 8-7.524152471011434) (8, 9-8.501400896330647) (9, 10-9.482736621759516) (10, 11-10.467266858165571) (11, 12-11.454333012714129) (12, 13-12.443440344113888) (13, 14-13.43421095829736) (14, 15-14.426351815325729) (15, 16-15.419632547557956) (16, 17-16.41386980382098) (17, 18-17.408916012469916) (18, 19-18.404651188768383) }; \end{axis} \end{tikzpicture} \caption{The advantage of the quantum algorithm over the classical for $K=1$.} \label{fig:k1} \end{figure} Our Mathematica code used for determining the values of $T_D$ can be accessed at \url{https://doi.org/10.5281/zenodo.4603689}. In Appendix \ref{app:numerical}, we list the parameters for the case $K=1$.
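The same optimization can also be set up outside Mathematica. For instance, for $D=1$ and $K=1$ the program $\OPT(1,1,\{\alpha_{1,1}\})$ has only the variables $T_1$, $x$, $y \coloneqq x_{1,1}$, $z \coloneqq x_{1,2}$ and the parameter $\alpha_{1,1}$; the following sketch (our own illustration, assuming Python with scipy, and optimizing over $\alpha_{1,1}$ jointly) should recover a value close to the $1.86793$ entry of Table \ref{tbl:numerical}, up to the quality of the local optimum found: \begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# Variables v = (T, x, y, z, alpha);  D = 1, K = 1, T_0 = 1.
# Precalculation:  T   >= P_1(x) / x^alpha = (1 + x) / x^alpha
# Quantum search:  T^2 >= S_{1,1}(y, z) / (y^alpha z^{1/2})
#                      = (1 + y*z + T^2 * z) / (y^alpha * sqrt(z))
cons = [
    {'type': 'ineq',
     'fun': lambda v: v[0] - (1 + v[1]) / v[1]**v[4]},
    {'type': 'ineq',
     'fun': lambda v: v[0]**2 - (1 + v[2] * v[3] + v[0]**2 * v[3])
                      / (v[2]**v[4] * np.sqrt(v[3]))},
]
bounds = [(1.0, 2.0), (1e-6, 10), (1e-6, 10), (1e-6, 10), (1e-6, 0.5)]
res = minimize(lambda v: v[0], x0=[1.9, 0.4, 0.4, 0.4, 0.3],
               bounds=bounds, constraints=cons, method='SLSQP')
print(res.x[0])   # approximately 1.868 at a good local optimum
\end{verbatim}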
\subsection{Lower bound for general \texorpdfstring{$\boldsymbol{D}$}{D}} \label{sec:lb} Even though Theorem \ref{thm:main} establishes the quantum advantage of the algorithm, it is interesting to ask how large the speedup can get for large $D$. In this section, we prove that the speedup cannot be substantial; more specifically: \begin{theorem} \label{thm:lb} For any fixed integers $D \geq 1$ and $K \geq 1$, Algorithm \ref{alg:main} performs $\widetilde\Omega\left(\left(\frac{D+1}{\e}\right)^n\right)$ queries on the lattice $Q(D,n)$. \end{theorem} \begin{proof} The structure of the proof is as follows. First, we prove that if $\alpha_{1,D} > \frac{1}{4}$, then the number of queries used by the algorithm during the precalculation step \ref{itm:sc1} is at least $\widetilde\Omega((0.664554(D+1))^n)$ (Lemma \ref{thm:plb} in Appendix \ref{app:lba}). Then, we prove that if $\alpha_{1,D} \leq \frac{1}{4}$, then the quantum search part in steps \ref{itm:sc2} and \ref{itm:sc3} performs at least $\widetilde\Omega\left(\left(\frac{D+1}{\e}\right)^n\right)$ queries (Lemma \ref{thm:slb} in Appendix \ref{app:lba}). Therefore, depending on whether $\alpha_{1,D} > \frac{1}{4}$, one of the precalculation or the quantum search performs $\widetilde\Omega((c(D+1))^n)$ queries for a constant $c$, and the claim follows, since $\frac{1}{\e} < 0.664554$. \end{proof} \section{Time complexity} \label{sec:time} In this section we examine a possible high-level implementation of the described algorithm and argue that there exists a quantum algorithm with the same exponential time complexity as the query complexity. Firstly, we assume the commonly used QRAM model of computation, which allows accessing $N$ memory cells in superposition in time $O(\log N)$ \cite{GLM08}. This is needed when the algorithm accesses the precalculated values of $\text{dp}$. Since in our case $N$ is always at most $(D+1)^n$, this introduces only an additional $O(\log ((D+1)^n)) = O(n)$ factor to the time complexity. The main problem that arises is the efficient implementation of VTS. During the VTS execution, multiple quantum algorithms should be performed in superposition. More formally, to apply VTS to algorithms $\mathcal A_1$, $\ldots$, $\mathcal A_N$, we should specify the \emph{algorithm oracle} that, given the index of the algorithm $i$ and the time step $t$, applies the $t$-th step of $\mathcal A_i$ (see Section 2.2 of \cite{CJOP20} for a formal definition of such an oracle and a related discussion). If the algorithms $\mathcal A_i$ are unstructured, the implementation of such an oracle may take as much as $O(N)$ time (if, for example, all of the algorithms perform a different gate on different qubits at the $t$-th step). We circumvent this issue by showing that it is possible to implement the algorithm using only Grover's search, retaining the same exponential complexity (however, the sub-exponential factor in the complexity increases). Nonetheless, the use of VTS in the query algorithm not only achieves a smaller query complexity, but also allowed us to prove the estimate on the exponential complexity, which would not be as amenable for the algorithm that uses Grover's search. \subsection{Implementation} The main idea of the implementation is to fix a ``class'' of vertices for each of the $2K+1$ layers examined by the algorithm, and to do this for all $r = O(\log n)$ levels of recursion. We will essentially define these classes by the number of coordinates of a vertex in such a layer that are equal to $0$, $1$, $\ldots$, $D$.
Then, we can first fix a class for each layer for all levels of recursion classically. We will show that there are at most $n^{D^2}$ different classes to consider at each layer. Since there are $2K+1$ layers at one level of recursion, and $O(\log n)$ levels of recursion, this classical precalculation takes time $n^{O(D^2 K \log n)}$. For each such choice of classes, we run a quantum algorithm that checks for a path in the hyperlattice constrained to these classes of vertices that the path can go through. The advantage of the quantum algorithm comes from checking the permutations of the coordinates using Grover's search. The time complexity of the quantum part will be $n^{O(K \log n)} T_D^n$ ($T_D^n$ as in the query algorithm, and $n^{O(K \log n)}$ from the logarithmic factors in Grover's search); therefore the total time complexity will be $n^{O(D^2 K \log n)}\cdot n^{O(K \log n)} T_D^n=n^{O(D^2 K \log n)} T_D^n$, and thus the exponential complexity stays the same. \subsubsection{Layer classes} In all applications of VTS in the algorithm, we use it in the following scenario: given a vertex $x$, examine all vertices $y$ with fixed weight $|y| = W$ such that $y < x$ (note that VTS over the middle layer $\mathcal L_{K+1}$ can be viewed in this way by taking $x$ to be the final vertex in the lattice, and VTS over the vertices in the layers symmetric to $\mathcal L_{K+1}$ can be analyzed similarly). We define a \emph{class} of $y$'s (with respect to $x$) in the following way. Let $n_{a,b}$ be the number of $i \in [n]$ such that $y_i = a$ and $x_i = b$, where $a \leq b$. All $y$ in the same class have the same values of $n_{a,b}$ for all $a$, $b$. Also define a \emph{representative} of a class as a single particular $y$ from that class; we define it as the lexicographically smallest such $y$. As mentioned in the informal description above, we can fix the classes for all layers examined by the quantum algorithm and generate the corresponding representatives classically. Note that in our quantum algorithm, recursive calls work with the sublattice constrained to the vertices $s \leq y \leq t$ for some $s < t$, so for each position $y_i$ we should also have $y_i \geq s_i$; however, we can reduce this to the lattice $0^n \leq y' \leq x$, where $x_i \coloneqq t_i-s_i$ for all $i$. To get the real value of $y$, we generate a representative $y'$ and set $y_i \coloneqq y_i'+s_i$. Consider an example for $D=2$. The following figure illustrates the representative $y$ (note that the order of the positions of $x$ here is lexicographical for simplicity, but it may be arbitrary). \begin{figure}[H] \centering \begin{align*} x&=00\dotline[0.58cm]0\hspace{0.06cm}11\dotline[1.8cm]1\hspace{0.07cm}22\dotline[3cm]2\\ y&=\underbrace{00\ldots0}_{n_{0,0}}\underbrace{00\ldots0}_{n_{0,1}}\underbrace{11\ldots1}_{n_{1,1}}\underbrace{00\ldots0}_{n_{0,2}}\underbrace{11\ldots1}_{n_{1,2}}\underbrace{22\ldots2}_{n_{2,2}} \end{align*} \caption{The (lexicographically smallest) representative for $y$ for $D=2$.} \end{figure} Note that each $n_{a,b}$ can be at most $n$. Therefore, there are at most $n^{D^2}$ choices of classes at each layer. Thus the total number of different sets of choices for all layers is $n^{O(D^2 K \log n)}$. For each such set of choices, we then run a quantum algorithm that checks for a path in the sublattice constrained to these classes.
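Generating the representative of a class from the counts $n_{a,b}$ is straightforward; a small sketch (our own illustration, in Python) for the grouping shown in the figure above: \begin{verbatim}
def representative(counts):
    """counts[b][a] = n_{a,b} for 0 <= a <= b <= D; the positions of x are
    assumed grouped by value b in increasing order, as in the figure.
    Returns the lexicographically smallest y in the class."""
    y = []
    for b, row in enumerate(counts):   # block of positions with x_i = b
        for a in range(b + 1):
            y += [a] * row[a]          # n_{a,b} coordinates equal to a
    return y

# Example for D = 2 with n_{0,0} = n_{0,1} = n_{0,2} = n_{1,2} = n_{2,2} = 1
# and n_{1,1} = 2:
print(representative([[1], [1, 2], [1, 1, 1]]))   # [0, 0, 1, 1, 0, 1, 2]
\end{verbatim}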
\subsubsection{Quantum algorithm} The algorithm basically implements Algorithm \ref{alg:main}, with VTS replaced by Grover's search; thus we only describe how we run Grover's search. We will also use the analysis of Grover's search with multiple marked elements. \begin{theorem}[Grover's search] Let $f : S \to \{0,1\}$, where $|S| = N$. Suppose we can generate a uniform superposition $\frac{1}{\sqrt N} \sum_{x \in S} \ket{x}$ in $O(\poly(\log N))$ time, and there is a bounded-error quantum algorithm $\mathcal A$ that computes $f(x)$ with time complexity $T$. Suppose also that there is a promise that either there are at least $k$ solutions to $f(x) = 1$, or there are none. Then there exists a bounded-error quantum algorithm that runs in time $O(T \log N \sqrt{N/k})$ and detects whether there exists an $x$ such that $f(x) = 1$. \end{theorem} \begin{proof} First, it is well known that in the case of $k$ marked elements, Grover's algorithm \cite{Grover96} needs $O(\sqrt{N/k})$ iterations. Second, the gate complexity of one iteration of Grover's search is known to be $O(\log N)$. Finally, even though $\mathcal A$ has a constant probability of error, there is a result that implements Grover's search with a bounded-error oracle without introducing another logarithmic factor \cite{HMDw03}. \end{proof} Now, for a class $\mathcal C$ of $y$'s (for a fixed $x$) we need to generate the superposition $\frac{1}{\sqrt{|\mathcal C|}} \sum_{y \in \mathcal C} \ket{y}$ efficiently to apply Grover's algorithm. We will generate a slightly different superposition serving the same purpose. Let $I_0, \ldots, I_D$ be the sets $I_d \coloneqq \{i \in [n] \mid x_i = d\}$ and let $n_d \coloneqq |I_d|$. Let $y_{\mathcal C}$ be the representative of $\mathcal C$. We will generate the superposition \begin{equation} \bigotimes_{d=0}^D \frac{1}{\sqrt{n_d!}} \sum_{\pi \in S_{n_d}} \ket{{\pi(y_{\mathcal C}}_{I_d})}\ket{\pi}, \label{eq:superposition} \end{equation} where ${y_{\mathcal C}}_{I_d}$ denotes the restriction of $y_{\mathcal C}$ to the positions in $I_d$. We need a couple of procedures to generate such a state. First, there exists a procedure to generate the uniform superposition of permutations $\frac{1}{\sqrt{n!}}\sum_{\pi \in S_n} \ket{\pi_1,\ldots,\pi_n}$ that requires $O(n^2 \log n)$ elementary gates \cite{AL97,CdLYMW19}. Then, we can build a circuit with $O(\poly(n))$ gates that takes as input $\pi \in S_n$ and $s \in \{0,1,\ldots,D\}^n$ and returns $\pi(s)$. Such a circuit could essentially work as follows: let $t \coloneqq 0^n$; then, for each pair $i, j \in [n]$, check whether $\pi(i)=j$; if so, let $t_j \leftarrow t_j + s_i$; in the end, return $t$. Using these two subroutines, we can generate the required superposition using $O(\poly(n))$ gates (we assume $D$ is a constant). However, we do not necessarily know the sets $I_d$, because the positions of $x$ have been permuted by previous applications of permutations. To mitigate this, note that we can access this permutation in its own register from the previous computation. That is, suppose that $x$ belongs to a class $\mathcal C'$ and $x = \sigma(x_{\mathcal C '})$, where $x_{\mathcal C '}$ is the representative of $\mathcal C'$ generated by the classical algorithm from the previous subsection. Then we have the state $\ket{\sigma(x_{\mathcal C '})}\ket{\sigma}$. We can then apply $\sigma$ to both $\pi(y_{\mathcal C})$ and $\pi$. That is, we implement the transformation $$\ket{\pi(y_{\mathcal C})}\ket{\pi} \to \ket{\sigma(\pi(y_{\mathcal C}))}\ket{\sigma\pi}.$$ Such a transformation can also be implemented with $O(\poly(n))$ gates.
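On classical basis states, the maps just described amount to the following (a sketch, our own illustration; the quantum circuits apply these maps reversibly to the registers): \begin{verbatim}
def apply_perm(pi, s):
    """Returns pi(s): coordinate i of s is moved to position pi[i],
    mirroring the circuit above (t_j <- t_j + s_i whenever pi(i) = j)."""
    t = [0] * len(s)
    for i in range(len(s)):
        t[pi[i]] = s[i]
    return t

def compose(sigma, pi):
    """The permutation sigma composed with pi (apply pi first, then sigma)."""
    return [sigma[pi[i]] for i in range(len(pi))]

# |pi(y)>|pi>  ->  |sigma(pi(y))>|sigma pi>  on a classical basis state:
y, pi, sigma = [0, 1, 2], [1, 2, 0], [2, 0, 1]
print(apply_perm(sigma, apply_perm(pi, y)), compose(sigma, pi))
\end{verbatim}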
Note that now we store the permutation $\sigma\pi$ in a separate register, which we use in a similar way recursively. Finally, examine the number of positive solutions among $\pi(y_{\mathcal C})$. That is, for how many $\pi$ does there exist a path from $\pi(y_{\mathcal C})$ to $x$? Suppose that there is a path from $y$ to $x$ for some $y \in \mathcal C$. Examine the indices $I_d$; for $n_{a,d}$ of these indices $i$ we have $y_i = a$. There are exactly $n_{a,d}!$ permutations that permute these indices and do not change $y$. Hence, there are $\prod_{a = 0}^d n_{a,d}!$ distinct permutations $\pi \in S_{n_d}$ such that $\pi(y) = y$. Therefore, there are $k\coloneqq\prod_{d=0}^D \prod_{a = 0}^d n_{a,d}!$ distinct permutations $\pi$ among those considered such that $\pi(y) = y$. The total number of considered permutations is $N\coloneqq\prod_{d=0}^D n_d!$. Among these permutations, either there are no positive solutions, or at least $k$ of the solutions are positive. Grover's search then works in time $O(T \log N \sqrt{N/k})$. In this case, $N/k$ is exactly the size of the class $\mathcal C$, because $\frac{n_d!}{n_{0,d}!\cdots n_{d,d}!}$ is the number of unique permutations of ${y_{\mathcal C}}_{I_d}$, the multinomial coefficient $\binom{n_d}{n_{0,d},\ldots,n_{d,d}}$. Hence the state Eq.~(\ref{eq:superposition}) effectively replaces the need for the state $\frac{1}{\sqrt{|\mathcal C|}} \sum_{y \in \mathcal C} \ket{y}$. \subsubsection{Total complexity} Finally, we discuss the total time complexity of this algorithm. The exponential time complexity of the described quantum algorithm is at most the exponential query complexity, because Grover's search examines a single class $\mathcal C$, while VTS in the query algorithm examines all possible classes. Since Grover's search has a logarithmic factor overhead, the total time complexity of the quantum part of the algorithm is the complexity described in Section~\ref{sec:query} multiplied by $n^{O(K \log n)}$, resulting in $n^{O(K \log n)} T_1^{n_1} \cdots T_D^{n_D}$. Since there are $n^{O(D^2 K \log n)}$ sets of choices for the classes of the layers, the final total time complexity of the algorithm is $n^{O(D^2 K \log n)} T_1^{n_1} \cdots T_D^{n_D}$. Therefore, we have the following result. \begin{theorem} \label{thm:time} Assuming the QRAM model of computation, there exists a quantum algorithm that solves the path in the $n$-dimensional lattice problem and has time complexity $\poly(n)^{D^2 \log n} \cdot T_D^n$. \end{theorem} \section{Applications} \label{sec:app} \subsection{Set multicover} \label{sec:smc} As an example application of our algorithm, we apply it to the \textsc{Set Multicover} problem (SMC). This is a generalization of the \textsc{Minimum Set Cover} problem. The SMC problem is formulated as follows:\vspace{2mm} \textbf{Input:} A set of subsets $\mathcal S \subseteq 2^{[n]}$, and a positive integer $D$.\vspace{2mm} \textbf{Output:} The size $k$ of the smallest tuple $(S_1,\ldots,S_k) \in \mathcal S^k$, such that for all $i\in [n]$, we have $|\{j \mid i \in S_j\}| \geq D$, that is, each element is covered at least $D$ times (note that each set $S \in \mathcal S$ can be used more than once).\vspace{2mm} Denote this problem by $\SMC_D$, and let $m \coloneqq |\mathcal S|$. This problem has been studied classically, and there exists an exact deterministic algorithm based on the inclusion-exclusion principle that solves this problem in time $\widetilde O(m(D+1)^n)$ and polynomial space \cite{Nederlof08, HWYL10}.
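To make the problem statement concrete, here is a tiny brute-force reference implementation (our illustration only; it enumerates all $k$-tuples and is therefore far slower than any of the algorithms discussed):

\begin{verbatim}
from itertools import product

def smc_bruteforce(n, sets, D):
    # Smallest k such that some k-tuple of sets (repetition allowed)
    # covers every element of {0, ..., n-1} at least D times.
    # If feasible, k <= D*n, since each element lies in some set.
    for k in range(D * n + 1):
        for tup in product(sets, repeat=k):
            cover = [0] * n
            for S in tup:
                for e in S:
                    cover[e] += 1
            if all(c >= D for c in cover):
                return k
    return None  # infeasible: some element occurs in no set

# With S = {{0,1}, {1,2}, {0,2}} and D = 2, any two sets leave some
# element covered at most once, so the answer is 3.
assert smc_bruteforce(3, [{0, 1}, {1, 2}, {0, 2}], 2) == 3
\end{verbatim}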
While there are various approximation algorithms for this problem, we are not aware of a more efficient classical exact algorithm. There is a different simple classical dynamic programming algorithm for this problem with the same time complexity (although it uses exponential space), which we can speed up using our quantum algorithm. For a vector $x \in \{0,1,\ldots,D\}^n$, define $\text{dp}(x)$ to be the size $k$ of the smallest tuple $(\mathcal C_1,\ldots,\mathcal C_k) \in \mathcal S^k$ such that for each $i$, we have $|\{j \in [k] \mid i \in \mathcal C_j\}| \geq x_i$. It can be calculated using the recurrence \begin{align*} \text{dp}(0^n) = 0, \hspace{1cm} \text{dp}(x) = 1 + \min_{S \in \mathcal S} \{ \text{dp}(x') \}, \end{align*} where $x'$ is given by $x_i' = \max\{0,x_i-\chi(S)_i\}$ for all $i$. Consequently, the answer to the problem is equal to $\text{dp}(D^n)$. The number of distinct $x$ is $(D+1)^n$, and $\text{dp}(x)$ for a single $x$ can be calculated in time $O(nm)$, if $\text{dp}(y)$ has been calculated for all $y < x$. Thus the time complexity is $O(nm(D+1)^n)$ and the space complexity is $O((D+1)^n)$. Note that even though the state space of the dynamic programming here is $\{0,1,\ldots,D\}^n$, the underlying transition graph is not the same as the hyperlattice examined in the quantum algorithm. A set $S \in \mathcal S$ can connect vertices that are at distance $|S|$ from each other, unlike distance $1$ in the hyperlattice. We can essentially reduce this to a hyperlattice-like transition graph by breaking such a transition into $|S|$ distinct transitions. More formally, examine pairs $(x,S)$, where $x \in \{0,1,\ldots,D\}^n$, $S \in \mathcal S$. Let $e(x,S) \coloneqq \min\{i \in S \mid x_i > 0\}$; if there is no such $i$, let $e(x,S)$ be $0$. Define a new function \begin{align*} \text{dp}(x,S) &= \begin{cases} 0, &\text{if $x=0^n$,}\\ \text{dp}(x-\chi(\{e(x,S)\}),S), &\text{if $e(x,S) > 0$,}\\ 1+\min_{T \in \mathcal S,\, e(x,T) > 0} \{ \text{dp}(x-\chi(\{e(x,T)\}),T) \}, &\text{if $e(x,S) = 0$.} \end{cases} \end{align*} The new recursion also solves $\SMC_D$, and the answer is equal to $\min_{S \in \mathcal S}\{ \text{dp}(D^n,S)\}$. Examine the underlying transition graph between pairs $(x,S)$. We can see that there is a transition between two pairs $(x,S)$ and $(y,T)$ only if $y_i = x_i+1$ for exactly one $i$, and $y_i = x_i$ for all other $i$. This is the $n$-dimensional lattice graph $Q(D,n)$. Thus we can apply our quantum algorithm with a few modifications: \begin{itemize} \item We now run Grover's search over $(x,S)$ with fixed $|x|$ for all $S \in \mathcal S$. This adds a $\poly(m,n)$ factor to each run of Grover's search. \item Since we are searching for the minimum value of $\text{dp}$, we actually need a quantum algorithm for finding the minimum instead of Grover's search. We can use the well-known quantum minimum finding algorithm that retains the same query complexity as Grover's search \cite{DH96}\footnote{Note that this algorithm assumes queries with zero error, but we apply it to bounded-error queries. However, it consists of multiple runs of Grover's search, so we can still use the result of \cite{HMDw03} to avoid the additional logarithmic factor.}. It introduces only an additional $O(\log n)$ factor for the queries of minimum finding to encode the values of $\text{dp}$, since $\text{dp}(x,S)$ can be as large as $Dn$.
\item A single query for a transition between pairs $(x,S)$ and $(y,T)$ in this case returns the value added to the dp at the transition, which is either $0$ or $1$. If these pairs are not connected in the transition graph, the query can return $\infty$. Note that such a query can be implemented in $\poly(m,n)$ time. \end{itemize} Since the total number of runs of Grover's search is $O(K \log n)$, the additional factor incurred is $\poly(m,n)^{O(K \log n)}$. This provides a quantum algorithm for this problem with total time complexity $$\poly(m,n)^{O(K \log n)}\cdot n^{O(D^2 K \log n)} T_D^n = m^{O(K \log n)} n^{O(D^2 K \log n)} T_D^n.$$ Therefore, we have the following theorem. \begin{theorem} \label{thm:smc} Assuming the QRAM model of computation, there exists a quantum algorithm that solves $\SMC_D$ in time $\poly(m,n)^{\log n} T_D^n$, where $T_D < D+1$. \end{theorem} \subsection{Related problems} We are aware of some other works that implement dynamic programming on the $n$-dimensional lattice $\{0,1,\ldots,D\}^n$. Psaraftis examined the job scheduling problem \cite{Psaraftis80}, with application to aircraft landing scheduling. The problem requires ordering $n$ groups of jobs with $D$ identical jobs in each group. A cost transition function is given: the cost of processing a job belonging to group $j$ after processing a job belonging to group $i$ is given by $f(i,j,d_1,\ldots,d_n)$, where $d_i$ is the number of jobs left to process. The task is to find an ordering of the $nD$ jobs that minimizes the total cost. This is almost exactly the setting for our quantum algorithm, hence we get a $\poly(n)^{\log n} T_D^n$-time quantum algorithm. Psaraftis proposed a classical $O(n^2(D+1)^n)$-time dynamic programming algorithm. Note that if the $f(i,j,d_1,\ldots,d_n)$ are unstructured (can be arbitrary values), then there does not exist a faster classical algorithm by the lower bound of Section \ref{sec:problem}. However, if the $f(i,j,d_1,\ldots,d_n)$ are structured or can be computed efficiently by an oracle, there exist more efficient classical algorithms for these kinds of problems. For instance, the many-visits travelling salesman problem (MV-TSP) asks for the shortest route in a weighted $n$-vertex graph that visits vertex $i$ exactly $D_i$ times. In this case, $f(i,j,d_1,\ldots,d_n) = w(i,j)$, where $w(i,j)$ is the weight of the edge between $i$ and $j$. The state-of-the-art classical algorithm by Kowalik et al.~solves this problem in $\widetilde O(4^n)$ time and space \cite{KLNSW20}. Thus, our quantum algorithm does not provide an advantage. It would be quite interesting to see if there exists a quantum speedup for this MV-TSP algorithm. Lastly, Gromicho et al.~proposed an exact algorithm for the job-shop scheduling problem \cite{GvHST12,vHNOG17}. In this problem, there are $n$ jobs to be processed on $D$ machines. Each job consists of $D$ tasks, with each task to be performed on a separate machine. The tasks for each job need to be processed in a specific order. The time to process job $i$ on machine $j$ is given by $p_{ij}$. Each machine can perform at most one task at any moment, but machines can perform the tasks in parallel. The problem is to schedule the starting times for all tasks so as to minimize the last ending time of the tasks. Gromicho et al.~give a dynamic programming algorithm that solves the problem in time $O((p_{\max})^{2n}(D+1)^n)$, where $p_{\max} = \max_{i,j} \{ p_{ij} \}$.
The states of their dynamic programming are also vectors in $\{0,1,\ldots,D\}^n$: a state $x$ represents a partial completion of tasks, where $x_i$ tasks of job $i$ have already been completed. Their dynamic programming calculates the set of task schedulings for $x$ that can potentially be extended to an optimal scheduling for all tasks. However, it is not clear how to apply Grover's search to calculate a whole set of schedulings. Therefore, even though the state space is the same as in our algorithm, we do not know whether our algorithm can be applied in this case. \section{Acknowledgements} We would like to thank Krišjānis Prūsis for helpful discussions and comments. A.G. has been supported in part by National Science Center under grant agreements 2019/32/T/ST6/00158 and 2019/33/B/ST6/02011. M.K. has been supported by ``QuantERA ERA-NET Cofund in Quantum Technologies implemented within the European Union's Horizon 2020 Programme'' (QuantAlgo project). R.M. was supported in part by JST PRESTO Grant Number JPMJPR1867 and JSPS KAKENHI Grant Numbers JP17K17711, JP18H04090, JP20H04138, and JP20H05966. J.V. has been supported in part by the project ``Quantum algorithms: from complexity theory to experiment'' funded under ERDF programme 1.1.1.5. \printbibliography \newpage
\section{Experiment: attraction versus repulsion} Here, the inverted Cheerios effect is observed with sub-millimeter drops of ethylene glycol on a PDMS gel. The gel is a reticulated polymer formed by polymerizing small multifunctional prepolymers --~contrary to hydrogels, there is no liquid phase trapped inside. The low shear modulus of the PDMS gel gives an elasto-capillary length $\ell =\gamma/G =0.17\ \mathrm{mm}$ that is sufficiently large to be measurable in the optical domain. The interaction between two neighboring liquid drops is quantified by tracking their positions while they are sliding under the effect of gravity along a soft layer held vertically. The interaction can be either attractive (Fig.~\ref{fig1}~A) or repulsive (Fig.~\ref{fig1}~B): drops on relatively thick gel layers attract each other, while drops on relatively thin layers experience a repulsion. The drop-drop interaction induces a lateral motion that can be quantified by the horizontal component of the relative droplet velocity, $\Delta v_x$, with the convention that $\Delta v_x>0$ implies repulsion. In Fig.~\ref{fig1}~C, we report $\Delta v_x$ as a function of the separation $d$, defined as the shortest distance between the surfaces of the drops. The drops ($R\simeq 0.5-0.8\;\mathrm{mm}$) exhibit attraction when sliding down a thick layer ($h_0= 8\;\mathrm{mm}$, black curve), while they are repelled on a thin layer ($h_0= 0.04\;\mathrm{mm}$, red curve). $\Delta v_x$ is larger at close proximity, signaling an increase in the interaction force. Spontaneous merging occurs when drops come into direct contact. Importantly, these interactions provide a new mechanism for droplet coarsening (or ordering) by coalescence (or its suppression) that has no counterpart on rigid surfaces. The interaction force $F$ can be inferred from the relative velocities between the drops, based on the effective ``drag" due to sliding on a gel. We first calibrate this drag by considering drops that are sufficiently separated, so that they do not experience any mutual interaction. The motion is purely downward and driven only by the gravitational force $F_g = M g$, and inertia is negligible. Figure~\ref{fig1}~D shows that the droplet velocity $v_y$ depends nonlinearly on the driving force, approximately scaling as $F_g^2$. This force-velocity calibration curve is in good agreement with viscoelastic dissipation in the gel, based on which one expects the scaling law~\cite{karpitschka2015}: \begin{equation}\label{eq:velocity} v \sim \frac{\ell}{\tau} \left( \frac{F}{2\pi R\gamma}\right)^{1/n}. \end{equation} Here $n$ is the rheological exponent that emerges from the scale invariance of the gel network \cite{ChambonWinter,LAL96,deGennes1996}, while $\tau$ is a characteristic timescale. The parameter values $n \simeq 0.61$ and $\tau\simeq 0.68\;\mathrm{s}$ are calibrated in a rheometer (see Methods). Equation \eqref{eq:velocity} is valid for $v$ below the characteristic rheological speed, $\ell/\tau$ -- this is justified here since $\ell/\tau \sim 0.25\ \mathrm{mm/s}$ for the silicone gel, while the reported speeds reach up to $\sim 100\ \mathrm{nm/s}$. Viscoelastic dissipation in the gel exceeds the dissipation within the drop by orders of magnitude, and explains the extremely slow drop velocities observed experimentally~\cite{CGS96,karpitschka2015}. The force-distance relation for the inverted Cheerios effect can now be measured directly using the independently calibrated force-velocity relation [Fig.~\ref{fig1}~D and \eqref{eq:velocity}].
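As a worked numerical illustration (our own estimate, treating the scaling law \eqref{eq:velocity} as an equality with a prefactor of order unity), the calibrated parameters translate a measured speed directly into a force:

\begin{verbatim}
import math

n_exp = 0.61        # rheological exponent (calibrated)
tau   = 0.68        # s, rheological timescale (calibrated)
ell   = 0.17e-3     # m, elasto-capillary length gamma/G
G     = 280.0       # Pa, static shear modulus of the gel
gamma = ell * G     # N/m, surface tension scale (~48 mN/m)
R     = 0.7e-3      # m, typical drop radius

def force_from_velocity(v):
    # invert v = (ell/tau) * (F / (2 pi R gamma))**(1/n_exp)
    return 2 * math.pi * R * gamma * (v * tau / ell) ** n_exp

print(force_from_velocity(100e-9))  # ~1.8e-6 N for v = 100 nm/s
\end{verbatim}

A lateral speed of $100\ \mathrm{nm/s}$ thus corresponds to a force of order a micronewton, consistent with the magnitude of the interaction forces reported below.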
By monitoring how the trajectories are deflected with respect to the downward motion of the drops, we obtain $F$. The key result is shown in Fig.~\ref{fig2}, where we report the interaction force $F$ as a function of the distance $d$. Panel A shows experimental data for the attractive force ($F<0$) between drops on thick layers (black dots), together with the theoretical prediction outlined below. Supplementary Video 1 shows an example of attractive drop-drop interaction. The attractive force is of the order of $\mu$N, which is comparable to both the capillary force-scale $\gamma R$ and the elastic force-scale $GR^2$. The force decreases with distance and remains measurable up to $d \sim R$. \begin{figure}[tb] \centerline{\includegraphics[width=\colwidth]{Figure2.pdf}} \caption{Measured interaction force $F$ (symbols) as a function of the drop separation $d$, compared to the three-dimensional theory (red lines, no adjustable parameters). A) Attraction on a thick elastic layer ($h_0 \approx 8\;\mathrm{mm} \gg R \gg \ell$). B) Repulsion and attraction on a thin layer ($R \gg \ell \gtrsim h_0 \approx 40\;\mathrm{\mu m}$). Each data point represents an average over $\sim 10$ realisations, with error bars giving the standard deviation. Measurements are based on pairs of ethylene glycol drops whose radii are in the range $R \sim 0.7 \pm 0.1\;\mathrm{mm}$. The elastic substrate has a static shear modulus of $0.28\;\mathrm{kPa}$.} \label{fig2} \end{figure} Figure~\ref{fig2}~B shows the interaction force between drops on thin layers. The dominant interaction is now repulsive ($d\gtrsim h_0$), see Supplementary Video 2. Intriguingly, we find that the interaction is not purely repulsive, but also displays an attractive range at very small distance. It is possible to access this range experimentally in case the motions of the individual drops are sufficiently closely aligned (Supplementary Video 3). The ``neutral" distance where the interaction force changes sign appears when the separation is comparable to the substrate thickness $h_0$, suggesting that the key parameter governing whether the drops attract or repel is the thickness of the gel. \section{Mechanism of interaction: rotation of elastic meniscus} We explain the attraction versus repulsion of neighboring drops by computing the total free energy of drops on gel layers of different thicknesses. In contrast to the normal Cheerios effect, which involves two rigid particles, in the current experiment an additional element of complexity is present: both the droplet and the elastic substrate are deformable, and their shapes will change upon varying the distance $d$. Hence, the interaction force involves both the elastic and the surface tension contributions to the free energy that emerge from self-consistently computed shapes of the drops and elastic deformations. \begin{figure}[tb] \centerline{\includegraphics[width=\colwidth]{Figure3.pdf}} \caption{Mechanism of interaction between two liquid drops on a soft solid. A) Deformation $h(x)$ induced by a single droplet on a thick substrate. The zoom near the contact line illustrates that the contact angles satisfy the Neumann condition. B) A second drop placed on a thick substrate experiences a background profile due to the deformation already induced by the drop on the right. This background profile is shown in red. As a consequence, the solid angles near the elastic meniscus rotate by an angle $\varphi$ (see zoom). This rotation perturbs the Neumann balance, yielding an attractive force $\vec{f}$.
C) The single-drop profile on a thin substrate yields a non-monotonic elastic deformation. The zoom illustrates a rotation $\varphi$ in the opposite direction, leading to a repulsive interaction.} \label{fig:mechanism} \end{figure} To reveal the mechanism of interaction, we first consider two-dimensional drops, for which the free energy can be written as \begin{eqnarray} \mathcal{E}[h]&=&\mathcal{E}_{e\ell}[h]+\int\limits_{\rm dry}\id x \, \gamma_{SV}\sqrt{1+{h'}^2}\nonumber\\ &+&\int\limits_{\rm wet}\id x \left[ \gamma \sqrt{1+\mathcal{H}'^2}+ \gamma_{SL} \sqrt{1+{h'}^2}\right]. \label{eq:tot_ener1} \end{eqnarray} The geometry is sketched in Fig.~\ref{fig:mechanism}~BC, and further details are given in the Supplementary Information. The elastic energy $\mathcal{E}_{e\ell}$ in the entire layer is a functional of the profile $h(x)$ describing the shape of the elastic solid: the functional explicitly depends on the layer thickness, and is ultimately responsible for the change from attraction to repulsion. The function $\mathcal{H}(x)$ represents the shape of the liquid-vapor interface. The integrals in \eqref{eq:tot_ener1} represent the interfacial energies; they depend on the surface tensions $\gamma$, $\gamma_{SL}$, $\gamma_{SV}$ associated with the liquid-vapor, solid-liquid and solid-vapor interfaces, respectively. Variation with respect to the contact line positions provides the relevant boundary conditions of the problem \cite{LUUKJFM14}. In order to determine the force per unit length $f$ characterizing the lateral interaction between the two-dimensional drops, the free energy \eqref{eq:tot_ener1} is minimized under the constraint that the nearest contact lines are separated by a distance $d$. This constraint is imposed by adding the term $f\;(x_i-d/2)$ to the free energy, where $x_i$ is the inner contact line position (see Supplementary Information). With this convention attractive forces correspond to $f<0$. Equivalently, $-f$ is the external force necessary to prevent the drops from moving towards each other, yielding an external work $-f \delta d$ when varying the spacing between the drops. The energy minimization reveals the mechanism of drop-drop interaction: the interaction force $f$ appears in the boundary condition for the contact angles, \begin{equation}\label{eq:neumannF} f = \gamma \cos \theta + \gamma_{SL} \cos \theta_{SL} - \gamma_{SV} \cos \theta_{SV}, \end{equation} where the angles are defined in Fig.~\ref{fig:mechanism}. Equation \eqref{eq:neumannF} can be thought of as an ``imbalance" of the usual Neumann boundary condition. For a single droplet, the contact angles satisfy Neumann's law, which is \eqref{eq:neumannF} with $f=0$ (Fig.~\ref{fig:mechanism}~A). On a thick elastic layer, the overall shape of the wetting ridge is of the form \cite{Style12,LUUKJFM14} \begin{equation}\label{eq:scaling} h(x) \sim \frac{\gamma}{G} \, \Psi \left( \frac{x}{\gamma_s/G} \right), \end{equation} where the horizontal scale is set by the elasto-capillary length based on the typical solid surface tension $\gamma_s$. The origin of $f$ can be understood from the principle of superposition. Due to the substrate deformation of a single drop, a second drop approaching the first one will see a surface that is locally rotated by an angle $\varphi \sim h' \sim \gamma/\gamma_s$. The elastic meniscus near the inner contact line of this approaching drop gets rotated by an angle $\varphi$ (Fig.~\ref{fig:mechanism}~B).
Importantly, changes in the liquid angle $\theta$ exhibit a weaker dependence $\sim h/R \sim \gamma/(GR)$, which for large drops can be ignored. As a consequence, this meniscus rotation induces an imbalance of the surface tension forces according to \eqref{eq:neumannF}, which for small rotations yields $f \simeq \gamma \varphi$, where $\varphi$ follows from the single drop deformation \eqref{eq:scaling}. There is no resultant interaction force from the stress below the drop, which, due to the deformability of the drop, amounts to a uniform pressure. The inverted Cheerios effect is therefore substantially different from the Cheerios effect between two particles floating at the surface of a liquid. Apart from the drop being deformable, we note that the energy driving the interaction is different for the two cases: while the liquid interface shape is determined by the balance between gravity and surface tension in the Cheerios effect, the solid shape is determined by elasto-capillarity in the inverted Cheerios effect. Another difference is the mechanism by which the interaction is mediated. The Cheerios effect is primarily driven by a change in gravitational potential energy which implies a vertical displacement of particles: a heavy particle slides downwards, like a bead on a string, along the deformation created by a neighboring particle \cite{Vella2005}. A similar interaction was recently discussed for rigid cylinders that deform an elastic surface due to gravity \cite{Maha2015}. In contrast, the inverted Cheerios effect discussed here does not involve gravity and can be entirely ascribed to elasto-capillary tilting of the solid interfaces -- as in Fig.~\ref{fig:mechanism} -- manifesting the interaction as a force near the contact line. The rotation of contact angles indeed explains why the drop-drop interaction can be either attractive or repulsive. On a thick substrate, the second drop experiences solid contact angles that are rotated counter-clockwise, inducing an attractive force (Fig.~\ref{fig:mechanism}~B). By contrast, on a thin substrate the elastic deformation induced by a single drop has a non-monotonic profile $h(x)$. This is due to volume conservation: the lifting of the gel near the contact line creates a depression at larger distances (Fig.~\ref{fig:mechanism}~C). At large distance, the rotation of the contact angles thus changes sign, and, accordingly, the interaction force changes from attractive to repulsive. Naturally, the relevant length scale for this phenomenon is set by the layer thickness $h_0$. \section{Three-dimensional theory} The extension of the theory to three dimensions is straightforward and allows for a quantitative comparison with the experiments. For the three-dimensional case we compute the shape of the solid numerically, by first solving for the deformation field induced by a single drop using an axisymmetric elastic Green function \cite{Style12}. Adding a second drop on this deformed surface gives an intricate deformation that is shown in Fig.~\ref{fig3}. The imbalance of the Neumann law applies everywhere around the contact line: the background deformation induces a rotation of the solid contact angles around the drop. According to \eqref{eq:neumannF}, these rotations result in a distribution of force per unit length of contact line $\vec{f} = f(\beta) \vec{e}_r$, where $\vec{e}_r$ is the radial direction associated with the interacting drop and $\beta$ the azimuthal angle along the contact line (Fig.~\ref{fig3}).
The resultant interaction force $\vec{F}$ is obtained by integration along the contact line, as $\vec{F}=R\int d\beta \vec f(\beta)$ (see Supplementary Information). By symmetry, this force is oriented along the line connecting the two drops. \begin{figure}[tb] \centerline{\includegraphics[width=\colwidth]{Figure4.pdf}} \caption{Three-dimensional calculation of interface deformation for a pair of axisymmetric drops. The elasto-capillary meniscus between the two drops is clearly visible, giving a rotation of the contact angle around the drop. The total interaction force $\vec{F}$ is obtained by integration of the horizontal force $\vec{f}$ (indicated in red) related to the imbalance of the Neumann law around the contact line. Parameter values are set to $\ell/R = 0.1$, $\gamma/\gamma_s = 1$.} \label{fig3} \end{figure} The interaction force obtained by the three-dimensional theory is shown as the red curves in Fig.~\ref{fig2}~AB. The theory gives an excellent description of the experimental data without adjustable parameters. The quantitative agreement indicates that the interaction mechanism is indeed caused by the rotation of the elastic meniscus. \section{Discussion} In summary, we have shown that liquid drops can exhibit a mutual interaction when deposited on soft surfaces. The interaction is mediated by substrate deformations, and its direction (repulsive versus attractive) can be tuned by the thickness of the layer. The measured force/distance relation is in quantitative agreement with the proposed elasto-capillary theory. The current study reveals that multiple ``pinchings'' of an elastic layer by localized tractions $\gamma$ lead to an interaction having a range comparable to $\gamma/G$. The key insight is that interaction emerges from the rotation of the elastic surface, providing a generic mechanism that should be applicable to a wide range of objects interacting on soft media. In biological settings, elasto-capillary interactions may play a role in cell-cell interactions, which are known to be sensitive to substrate stiffness \cite{Guo:BPJ2006}. In addition, the elastic interaction could also play a role in cell-extracellular matrix interactions, as a purely passive force promoting aggregation between anchor points on the surface of adhered biological cells. For example, it has been demonstrated that a characteristic distance of about $70$~nm between topographical features enables the clustering of integrins. These transmembrane proteins are responsible for cell adhesion to the surrounding matrix, mediating the formation of strong anchor points when cells adhere to substrates \cite{huang2009,dalby2014}. Assuming that the topographical features ``pinch'' the cell with a force likely comparable to the cell's cortical tension, which takes values in the range $0.1-1\;\mathrm{mN/m}$ \cite{krieg2008,tinevez2009, fischer2014,sliogeryte2014}, and an elastic modulus of $10^3-10^4\;\mathrm{Pa}$ in the physiological range of biological tissues \cite{swift2013}, one predicts a range of interaction consistent with observations. More generally, substrate-mediated interactions could be dynamically programmed using the responsiveness of many gels to external stimuli (pH, temperature, electric fields). Possible applications range from fog harvesting and cooling to self-cleaning or anti-fouling surfaces, which rely on controlling drop migration and coalescence. 
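As a final quantitative note, the arithmetic behind the biological estimate above is simple enough to spell out (our illustration; the parameter ranges are those quoted in the text):

\begin{verbatim}
# Interaction range set by the elasto-capillary length l = gamma/G.
for gamma in (0.1e-3, 1.0e-3):    # N/m, cortical tension range
    for G in (1e3, 1e4):          # Pa, physiological stiffness range
        print(f"gamma = {gamma:g} N/m, G = {G:g} Pa "
              f"-> l = {gamma / G:.0e} m")
# l spans ~1e-8 m (10 nm) to ~1e-6 m (1 micron), bracketing the
# ~70 nm feature spacing that enables integrin clustering.
\end{verbatim}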
The physical mechanisms revealed here, in combination with the fully quantitative elasto-capillary theory, pave the way for new design strategies for smart soft surfaces. \section{Material and Methods} The Supplementary Information provides further technical information, the variational derivation underlying Equation (3), and the numerical scheme that leads to the calculations of Figs. 2 and 4. Supplementary Videos 1, 2 and 3 show typical experiments of drop-drop interactions. {\bf Substrate preparation~}The two prepolymer components (Dow Corning CY52-276~A and~B) were mixed in a ratio of 1.3:1 (A:B). Thick elastic layers ($\sim 8\,\mathrm{mm}$) were prepared in petri dishes (diameter $\sim 90\,\mathrm{mm}$). Thin layers ($\sim 40\,\mathrm{\mu m}$) were prepared by spin-coating the gel onto silicon wafers. Thicknesses were determined by color interferometry. See the supplement for details on substrate curing \& rheology. {\bf Determining the interaction between drops~}Droplets of ethylene glycol ($V \sim 0.3 - 0.8\,\mathrm{\mu l}$) were pipetted onto a small region near the center of the cured substrate. The sample was then mounted vertically so that gravity acts along the surface ($-y$ direction, compare Fig.~\ref{fig1}~A,B). The droplets were observed in transmission (thick layers) or reflection (thin layers) with collimated illumination, using a telecentric lens (JenMetar 1x) and a digital camera (pco 1200). Images were taken every 10 s. The contours of the droplets were determined by a standard correlation technique. At large separation, droplets move downward due to gravity. The gravitational force on each droplet is proportional to its volume. The relation between force and velocity follows the same power law as the rheology, as explained recently~\cite{karpitschka2015}. Due to their different volumes/velocities, distances between droplets change. Whenever two droplets come close, their trajectories change due to their interaction. Drops on thick substrates (Figure~\ref{fig1}~C, black) attract and finally merge. On a rigid surface, these droplets would have passed by each other. The opposite holds for droplets on thin layers (red): the droplets repel each other, which prevents coalescence. To determine the interaction forces, we first evaluate the velocity vector of each individual droplet. The droplet motion is quasi-stationary, and the total force vector acting on each droplet is aligned with its velocity vector. The magnitude of the total force is derived from the calibration shown in Fig.~\ref{fig1}~D. The interaction force is obtained by subtracting the gravitational force vector from the total force vector. Figure~\ref{fig2}~A shows data from nine individual droplet pairs, at different times and different locations on the substrate. Panel B shows data from 18 different droplet pairs. The raw data have been averaged over distance bins, taking the standard deviation as error bar. \begin{acknowledgments} SK acknowledges financial support from NWO through VIDI Grant No. 11304. AP and JS acknowledge financial support from ERC (the European Research Council) Consolidator Grant No. 616918. \end{acknowledgments}
\section{Introduction} Knowledge graphs (KGs) aim at semantically representing the world's truth in the form of machine-readable graphs composed of triple facts. Knowledge graph embedding encodes each element (entities and relations) of a knowledge graph into a continuous low-dimensional vector space. The learned representations make the knowledge graph essentially computable and have been proved to be helpful for knowledge graph completion and information extraction \cite{TransE,TransH,TransR,TransD,TranSparse}. \begin{figure}[ht] \centering \setlength{\abovecaptionskip}{2pt} \setlength{\belowcaptionskip}{0pt} \includegraphics[width=75.0mm]{pic/1.pdf} \caption{An example of concepts, instances, and isA transitivity.} \label{fig_intro} \end{figure} In recent years, various knowledge graph embedding methods have been proposed, among which the translation-based models are simple and effective with good performance. Inspired by word2vec \cite{Word2Vec}, given a triple (h, r, t), TransE learns vector embeddings $\mathbf{h}$, $\mathbf{r}$ and $\mathbf{t}$ which satisfy $\mathbf{r} \approx \mathbf{t} - \mathbf{h}$. Afterwards, TransH \cite{TransH}, TransR/CTransR \cite{TransR}, TransD \cite{TransD}, and others were proposed to address the problems of TransE when modeling 1-to-N, N-to-1, and N-to-N relations. As extensions of RESCAL \cite{RESCAL}, which is a bilinear model, HolE \cite{HolE}, DistMult \cite{DistMult} and ComplEx \cite{complEx} achieve state-of-the-art performance. Meanwhile, there are also methods using a variety of external information such as entity types \cite{DKRL}, textual descriptions \cite{TEKE}, as well as logical rules to strengthen representations of knowledge graphs \cite{Wang2015Knowledge,Shu2016Jointly,Rockt2015Injecting}. However, all these methods fail to distinguish between concepts and instances, and simply regard both as entities. Actually, concepts and instances are organized differently in many real-world datasets like YAGO \cite{YAGO}, Freebase \cite{Freebase}, and WordNet \cite{Wordnet}. Hierarchical concepts in these knowledge bases provide a natural way to categorize and locate instances. Therefore, the common simplification in previous work leads to the following two drawbacks: \textbf{Insufficient concept representation:} Concepts are essential information in a knowledge graph. A concept is a fundamental category of existence \cite{rosch:natural} and can be reified by all of its actual or potential instances. Figure \ref{fig_intro} presents an example of concepts and instances about university staff. Most knowledge embedding methods encode both concepts and instances as vectors and thus cannot explicitly represent the difference between concepts and instances. \textbf{Lack of transitivity of both isA relations:} \texttt{instanceOf} and \texttt{subClassOf} (generally known as isA) are two special relations in a knowledge graph. Different from most other relations, isA relations exhibit transitivity; e.g., the dotted lines in Figure \ref{fig_intro} represent the facts inferred by isA transitivity. The indiscriminate vector representation for all relations in previous work cannot preserve this property well (see Section \ref{triple classification} for details). To address these issues, we propose a novel translation embedding model named TransC in this paper. As suggested by \cite{mind}, concepts in people's minds are organized hierarchically, and instances should be close to the concepts that they belong to.
Hence in TransC, each concept is encoded as a sphere and each instance as a vector in the same semantic space, and relative positions are employed to model the relations between concepts and instances. More specifically, the \texttt{instanceOf} relation is naturally represented by checking whether an instance vector is inside a concept sphere. For the \texttt{subClassOf} relation, we enumerate and quantify four possible relative positions between two concept spheres. We also define loss functions to measure the relative positions and optimize knowledge graph embeddings. Finally, we incorporate them into translation-based models to jointly learn the knowledge representations of concepts, instances and relations. Experiments on real-world datasets extracted from YAGO show that TransC outperforms previous work like TransE, TransD, HolE, DistMult and ComplEx in most cases. The contributions of this paper can be summarized as follows: \begin{enumerate} \item To the best of our knowledge, we are the first to propose and formalize the problem of knowledge graph embedding which differentiates between concepts and instances. \item We propose a novel knowledge embedding method named TransC, which distinguishes between concepts and instances and deals with the transitivity of isA relations. \item We construct a new dataset based on YAGO for evaluation. Experiments on link prediction and triple classification demonstrate that TransC successfully addresses the above problems and outperforms state-of-the-art methods. \end{enumerate} \section{Related Work} There are a variety of models for knowledge graph embedding. We divide them into three kinds and introduce them respectively. \subsection{Translation-based Models} \textbf{TransE \cite{TransE}} regards a relation $\mathbf{r}$ as a translation from $\mathbf{h}$ to $\mathbf{t}$ for a triple $(h, r, t)$ in the training set. The vector embeddings of this triple should satisfy $\mathbf{h} + \mathbf{r} \approx \mathbf{t}$. Hence, $\mathbf{t}$ should be the nearest neighbor of $\mathbf{h} + \mathbf{r}$, and the loss function is \begin{equation} f_r(h, t) = ||\mathbf{h} + \mathbf{r} - \mathbf{t}||_2^2. \end{equation} TransE is suitable for 1-to-1 relations, but it has problems when handling 1-to-N, N-to-1, and N-to-N relations. \noindent\textbf{TransH \cite{TransH}} attempts to alleviate the above problems of TransE. It regards a relation vector $\mathbf{r}$ as a translation on a hyperplane with $\mathbf{w}_r$ as the normal vector. The vector embeddings are first projected onto the hyperplane of relation $r$, giving $\mathbf{h}_{\perp} = \mathbf{h} - \mathbf{w}_r^{\top}\mathbf{h}\mathbf{w}_r$ and $\mathbf{t}_{\perp} = \mathbf{t} - \mathbf{w}_r^{\top}\mathbf{t}\mathbf{w}_r$. The loss function of TransH is \begin{equation} f_r(h, t) = ||\mathbf{h}_{\perp} + \mathbf{r} - \mathbf{t}_{\perp}||_2^2. \end{equation} \noindent\textbf{TransR/CTransR \cite{TransR}} addresses the issue in TransE and TransH that some entities are similar in the entity space but rather different in other specific aspects. It sets a transfer matrix $\mathbf{M}_r$ for each relation $r$ to map entity embeddings to the relation vector space. Its loss function is \begin{equation} f_r(h, t) = ||\mathbf{M}_r\mathbf{h} + \mathbf{r} - \mathbf{M}_r\mathbf{t}||_2^2. \end{equation} \noindent\textbf{TransD \cite{TransD}} considers the different types of entities and relations at the same time.
Each relation-entity pair $(r, e)$ has a mapping matrix $\mathbf{M}_{re}$ to map entity embeddings into the relation vector space. The projected vectors are defined as $\mathbf{h}_\perp = \mathbf{M}_{rh}\mathbf{h}$ and $\mathbf{t}_\perp = \mathbf{M}_{rt}\mathbf{t}$. The loss function of TransD is \begin{equation} f_r(h, t) = ||\mathbf{h}_{\perp} + \mathbf{r} - \mathbf{t}_{\perp}||_2^2. \end{equation} There are many other translation-based models from recent years. For example, TranSparse \cite{TranSparse} simplifies TransR by enforcing sparseness on the projection matrix, PTransE \cite{PTransE} considers relation paths as translations between entities for representation learning, \cite{ManifoldE} proposes a manifold-based embedding principle (ManifoldE) for precise link prediction, TransF \cite{TransF} regards a relation as a translation between head entity vector and tail entity vector with flexible magnitude, \cite{TransG} proposes a new generative model TransG, and KG2E \cite{KG2E} uses Gaussian embeddings to model the data uncertainty. An overview of these models can be found in \cite{Survey}. \subsection{Bilinear Models} RESCAL \cite{RESCAL} is the first bilinear model. It associates each entity with a vector to capture its latent semantics. Each relation is represented as a matrix which models pairwise interactions between latent factors. Many extensions of RESCAL have been proposed in recent years by restricting the bilinear functions. For example, DistMult \cite{DistMult} simplifies RESCAL by restricting the matrices representing relations to diagonal matrices. HolE \cite{HolE} combines the expressive power of RESCAL with the efficiency and simplicity of DistMult. It represents both entities and relations as vectors in $\mathbb{R}^{d}$. ComplEx \cite{complEx} extends DistMult by introducing complex-valued embeddings so as to better model asymmetric relations. \subsection{External Information Learning Models} External information like textual information is significant for knowledge representation. TEKE \cite{TEKE} uses external context information in a text corpus to represent both entities and words in a joint vector space with alignment models. DKRL \cite{DKRL} directly learns entity representations from entity descriptions. \cite{Wang2015Knowledge,Shu2016Jointly,Rockt2015Injecting} use logical rules to strengthen representations of knowledge graphs. None of the models above differentiates between concepts and instances. To the best of our knowledge, our proposed TransC is the first attempt to represent concepts, instances, and relations differently in the same space. \begin{figure*}[ht] \centering \setlength{\abovecaptionskip}{2pt} \setlength{\belowcaptionskip}{0pt} \includegraphics[width=160.0mm]{pic/3.pdf}\\ \caption{Four relative positions between sphere $s_i$ and $s_j$.} \label{four_position} \end{figure*} \section{Problem Formulation} In this section, we formulate the problem of knowledge graph embedding with concepts and instances. Before that, we first introduce the input knowledge graph. \textit{\textbf{Knowledge Graph}} $\mathcal{KG}$ describes concepts, instances, and the relations between them. It can be formalized as $\mathcal{KG} = \{\mathcal{C}, \mathcal{I}, \mathcal{R}, \mathcal{S}\}$. $\mathcal{C}$ and $\mathcal{I}$ denote the sets of concepts and instances respectively.
Relation set $\mathcal{R}$ can be formalized as $\mathcal{R} = \{r_e, r_c\} \cup \mathcal{R}_l$, where $r_e$ is the \texttt{instanceOf} relation, $r_c$ is the \texttt{subClassOf} relation, and $\mathcal{R}_l$ is the instance relation set. Therefore, the triple set $\mathcal{S}$ can be divided into three disjoint subsets: \begin{enumerate} \item \texttt{InstanceOf} triple set $\mathcal{S}_e = \{(i, r_e, c)_k\}_{k=1}^{n_e}$, where $i \in \mathcal{I}$ is an instance, $c \in \mathcal{C}$ is a concept, and $n_e$ is the size of $\mathcal{S}_e$. \item \texttt{SubClassOf} triple set $\mathcal{S}_c = \{(c_{i}, r_c, c_{j})_k\}_{k=1}^{n_c}$, where $c_i, c_j \in \mathcal{C}$ are concepts, $c_i$ is a sub-concept of $c_j$, and $n_c$ is the size of $\mathcal{S}_c$. \item Relational triple set $\mathcal{S}_l = \{(h, r, t)_k\}_{k=1}^{n_l}$, where $h, t \in \mathcal{I}$ are the head and tail instances, $r \in \mathcal{R}_l$ is an instance relation, and $n_l$ is the size of $\mathcal{S}_l$. \end{enumerate} Given the knowledge graph $\mathcal{KG}$, \textbf{knowledge graph embedding with concepts and instances} aims at learning embeddings for instances, concepts, and relations in the same space $\mathbb{R}^k$. For each concept $c \in \mathcal{C}$, we learn a sphere $s(\mathbf{p}, m)$, with $\mathbf{p} \in \mathbb{R}^k$ and $m$ denoting the sphere center and radius. For each instance $i \in \mathcal{I}$ and instance relation $r \in \mathcal{R}_l$, we learn low-dimensional vectors $\mathbf{i} \in \mathbb{R}^k$ and $\mathbf{r} \in \mathbb{R}^k$ respectively. Specifically, the \texttt{instanceOf} and \texttt{subClassOf} representations are well-designed so that the transitivity of isA relations can be preserved, namely, the \texttt{instanceOf}-\texttt{subClassOf} transitivity shown in Equation \ref{formu1}: \begin{equation}\label{formu1} \resizebox{.88\hsize}{!}{$(i, r_e, c_1)\in S_e \wedge (c_1, r_c, c_2)\in S_c \rightarrow (i, r_e, c_2)\in S_e,$} \end{equation} and the \texttt{subClassOf}-\texttt{subClassOf} transitivity shown in Equation \ref{formu2}: \begin{equation}\label{formu2} \resizebox{.88\hsize}{!}{$(c_1, r_c, c_2)\in S_c \wedge (c_2, r_c, c_3)\in S_c \rightarrow (c_1, r_c, c_3)\in S_c.$} \end{equation} Based on this definition, how to model concepts and isA relations is critical to solving this problem. \section{Our Approach} To differentiate between concepts and instances for knowledge graph embedding, we propose a novel method named TransC. We define different loss functions to measure the relative positions in the embedding space, and then jointly learn the representations of concepts, instances, and relations based on translation-based models. \subsection{TransC} We have three kinds of triples in our triple set $\mathcal{S}$ and define a different loss function for each of them. \textbf{InstanceOf Triple Representation}. For a given \texttt{instanceOf} triple $(i, r_e, c)$, if it is a true triple, $\mathbf{i}$ should be inside the sphere $s$ to represent the \texttt{instanceOf} relation between them. If instead $\mathbf{i}$ lies outside the sphere $s$, the embeddings still need to be optimized, and the loss function is defined as \begin{equation} f_e(i, c) = ||\mathbf{i} - \mathbf{p}||_2 - m. \end{equation} \textbf{SubClassOf Triple Representation}. For a \texttt{subClassOf} triple $(c_i, r_c, c_j)$, just like before, the concepts $c_i, c_j$ are encoded as spheres $s_i(\mathbf{p}_i, m_i)$ and $s_j(\mathbf{p}_j, m_j)$.
We first denote the distance between the centers of the two spheres as \begin{equation} d = ||\mathbf{p}_i - \mathbf{p}_j||_2. \end{equation} If $(c_i, r_c, c_j)$ is a true triple, sphere $s_i$ should be inside sphere $s_j$ (Figure \ref{four_position}a) to represent the \texttt{subClassOf} relation between them. Actually, there are three other possible relative positions between spheres $s_i$ and $s_j$ (as shown in Figure \ref{four_position}), with a loss function for each: \begin{enumerate} \item $s_i$ is separate from $s_j$ (Figure \ref{four_position}b). The embeddings still need to be optimized; in this condition, the two spheres need to get closer during optimization. Therefore, the loss function is defined as \begin{equation} f_c(c_i, c_j) = ||\mathbf{p}_i - \mathbf{p}_j||_2 + m_i - m_j. \end{equation} \item $s_i$ intersects with $s_j$ (Figure \ref{four_position}c). This condition is similar to condition 1. The loss function is defined as \begin{equation} f_c(c_i, c_j) = ||\mathbf{p}_i - \mathbf{p}_j||_2 + m_i - m_j. \end{equation} \item $s_j$ is inside $s_i$ (Figure \ref{four_position}d). This is different from our target, and we should reduce $m_i$ and increase $m_j$. Hence, the loss function is \begin{equation} f_c(c_i, c_j) = m_i - m_j. \end{equation} \end{enumerate} \textbf{Relational Triple Representation}. For a relational triple $(h, r, t)$, TransC learns low-dimensional vectors $\mathbf{h}, \mathbf{t}, \mathbf{r} \in \mathbb{R}^k$ for instances and relations. Just like TransE \cite{TransE}, the loss function for this kind of triple is defined as \begin{equation} f_r(h, t) = ||\mathbf{h} + \mathbf{r} - \mathbf{t}||_2^2. \end{equation} With the embeddings above, TransC can easily deal with the transitivity of isA relations. If we have true triples $(i, r_e, c_i)$ and $(c_i, r_c, c_j)$, which means $\mathbf{i}$ is inside the sphere $s_i$ and $s_i$ is inside $s_j$, we can conclude that $\mathbf{i}$ is also inside the sphere $s_j$. Hence $(i, r_e, c_j)$ is a true triple, and TransC can handle \texttt{instanceOf}-\texttt{subClassOf} transitivity. Similarly, if we have true triples $(c_i, r_c, c_j)$ and $(c_j, r_c, c_k)$, we can conclude that sphere $s_i$ is inside sphere $s_k$. This means $(c_i, r_c, c_k)$ is a true triple, and TransC can deal with \texttt{subClassOf}-\texttt{subClassOf} transitivity. In experiments, we enforce the constraints $||\mathbf{h}||_2 \le 1$, $||\mathbf{r}||_2 \le 1$, $||\mathbf{t}||_2 \le 1$ and $||\mathbf{p}||_2 \le 1$. \subsection{Training Method} For \texttt{instanceOf} triples, we use $\xi$ and $\xi'$ to denote a positive triple and a negative triple, and $\mathcal{S}_e$ and $\mathcal{S}_e'$ to denote the positive and negative triple sets. Then we can define a margin-based ranking loss for \texttt{instanceOf} triples: \begin{equation} \mathcal{L}_e = \sum_{\xi \in \mathcal{S}_e} \sum_{\xi' \in \mathcal{S}_e'}[\gamma_e + f_e(\xi) - f_e(\xi')]_{+}, \end{equation} where $[x]_{+} \triangleq \max(0, x)$ and $\gamma_e$ is the margin separating positive and negative triples. Similarly, for \texttt{subClassOf} triples, we have the ranking loss \begin{equation} \mathcal{L}_c = \sum_{\xi \in \mathcal{S}_c} \sum_{\xi' \in \mathcal{S}_c'}[\gamma_c + f_c(\xi) - f_c(\xi')]_{+}, \end{equation} and for relational triples, the ranking loss \begin{equation} \mathcal{L}_l = \sum_{\xi \in \mathcal{S}_l} \sum_{\xi' \in \mathcal{S}_l'}[\gamma_l + f_r(\xi) - f_r(\xi')]_{+}.
\end{equation} Finally, we define the overall loss function as a linear combination of these three functions: \begin{equation} \mathcal{L} = \mathcal{L}_e + \mathcal{L}_c + \mathcal{L}_l. \end{equation} The goal of training TransC is to minimize the above function, iteratively updating the embeddings of concepts, instances, and relations. Every triple in our training set has a label to indicate whether the triple is positive or negative. However, existing knowledge graphs only contain positive triples, so we need to generate negative triples by corrupting positive ones. For a relational triple $(h, r, t)$, we replace $h$ or $t$ to generate a negative triple $(h', r, t)$ or $(h, r, t')$. For example, we get $h'$ by randomly picking from a set $\mathcal{M}_t = \mathcal{M}_1 \cup \mathcal{M}_2 \cup \dots \cup \mathcal{M}_n$, where $n$ is the number of concepts that $t$ belongs to and $\mathcal{M}_i = \{a \mid a \in \mathcal{I} \land (a, r_e, c_i) \in \mathcal{S}_e \land (t, r_e, c_i) \in \mathcal{S}_e \land t \ne a \}$. For the other two kinds of triples, we follow the same policy to generate negative triples. We also use the two sampling strategies ``unif'' and ``bern'' described in \cite{TransH} to replace instances or concepts. \section{Experiments and Analysis} We evaluate our method on two typical tasks commonly used in knowledge graph embedding: link prediction \cite{TransE} and triple classification \cite{NTN}. \subsection{Datasets} Most previous work used FB15K and WN18 \cite{TransE} for evaluation. However, these two datasets are not suitable for our model because FB15K mainly consists of instances and WN18 mainly contains concepts. Therefore, we use another popular knowledge graph, YAGO \cite{YAGO}, for evaluation, which contains a lot of concepts from WordNet and instances from Wikipedia.
We construct a subset of YAGO named YAGO39K for evaluation through the following steps: \begin{table} \small \centering \setlength\tabcolsep{2pt} \begin{tabular}{|c|r|r|} \hline {DataSets} & {YAGO39K} & {M-YAGO39K}\\\hline \#Instance & {39,374} & {39,374}\\ \#Concept & {46,110} & {46,110}\\ \#Relation & {39} & {39}\\ \#Relational Triple & {354,997} & {354,997}\\ \#\texttt{InstanceOf} Triple & {442,836} & {442,836}\\ \#\texttt{SubClassOf} Triple & {30,181} & {30,181}\\\hline \#Valid (Relational Triple) & {9,341} & {9,341}\\ \#Test (Relational Triple) & {9,364} & {9,364}\\ \#Valid (\texttt{InstanceOf} Triple) & {5,000} & {8,650}\\ \#Test (\texttt{InstanceOf} Triple) & {5,000} & {8,650}\\ \#Valid (\texttt{SubClassOf} Triple) & {1,000} & {1,187}\\ \#Test (\texttt{SubClassOf} Triple) & {1,000} & {1,187}\\ \hline \end{tabular} \caption{\label{table1}Statistics of YAGO39K and M-YAGO39K.} \end{table} \begin{table*}[!htb] \small \centering \setlength{\belowcaptionskip}{-1pt} \begin{tabular}{c|cc|ccc|cccc} \hline Experiments & \multicolumn{5}{c|}{Link Prediction} & \multicolumn{4}{c}{Triple Classification (\%)} \\ \hline \multirow{2}{*}{Metric} & \multicolumn{2}{c|}{MRR} & \multicolumn{3}{c|}{Hits@N(\%)} & \multirow{2}{*}{Accuracy} & \multirow{2}{*}{Precision} & \multirow{2}{*}{Recall} & \multirow{2}{*}{F1-Score} \\ & \texttt{Raw} & \texttt{Filter} & \texttt{1} & \texttt{3} & \texttt{10} & & & & \\ \hline TransE & 0.114 & 0.248 & 12.3 & 28.7 & 51.1 & 92.1 & 92.8 & 91.2 & 92.0 \\ TransH & 0.102 & 0.215 & 10.4 & 24.0 & 45.1 & 90.8 & 91.2 & 90.3 & 90.8 \\ TransR & 0.112 & 0.289 & 15.8 & 33.8 & 56.7 & 91.7 & 91.6 & 91.9 & 91.7 \\ TransD & 0.113 & 0.176 & 8.9 & 19.0 & 35.4 & 89.3 & 88.1 & 91.0 & 89.5 \\ HolE & 0.063 & 0.198 & 11.0 & 23.0 & 38.4 & 92.3 & 92.6 & 91.9 & 92.3 \\ DistMult & \textbf{0.156} & 0.362 & 22.1 & 43.6 & 66.0 & 93.5 & 93.9 & 93.0 & 93.5 \\ ComplEx & 0.058 & 0.362 & 29.2 & 40.7 & 48.1 & 92.8 & 92.6 & \textbf{93.1} & 92.9 \\ \hline TransC (unif) & 0.087 & \textbf{0.421} & 28.3 & 50.0 & 69.2 & 93.5 & 94.3 & 92.6 & 93.4 \\ TransC (bern) & 0.112 & 0.420 & \textbf{29.8} & \textbf{50.2} & \textbf{69.8} & \textbf{93.8} & \textbf{94.8} & 92.7 & \textbf{93.7} \\ \hline \end{tabular} \caption{\label{table2}Experimental results on link prediction and triple classification for relational triples. Hits@N uses the results of the ``Filter'' evaluation setting.} \end{table*} (1) We randomly select some relational triples like $(h, r, t)$ from the whole YAGO dataset as our relational triple set $\mathcal{S}_l$. (2) For every instance and instance relation appearing in our relational triples, we save it to construct the instance set $\mathcal{I}$ and the instance relation set $\mathcal{R}_l$ respectively. (3) For every \texttt{instanceOf} triple $(i, r_e, c)$ in YAGO, if $i \in \mathcal{I}$, we save this triple to construct the \texttt{instanceOf} triple set $\mathcal{S}_e$. (4) For every concept appearing in the \texttt{instanceOf} triple set $\mathcal{S}_e$, we save it to construct the concept set $\mathcal{C}$. (5) For every \texttt{subClassOf} triple $(c_i, r_c, c_j)$ in YAGO, if $c_i \in \mathcal{C} \land c_j \in \mathcal{C}$, we save this triple to construct the \texttt{subClassOf} triple set $\mathcal{S}_c$. (6) Finally, we obtain our triple set $\mathcal{S} = \mathcal{S}_e \cup \mathcal{S}_c \cup \mathcal{S}_l$ and our relation set $\mathcal{R} = \{r_e, r_c\} \cup \mathcal{R}_l$.
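The construction above is a straightforward filtering pass; schematically (our own sketch, with triples represented as Python tuples and the sampling of step (1) left abstract):

\begin{verbatim}
def build_yago39k(relational, instanceof, subclassof, sample):
    S_l = sample(relational)                            # step (1)
    I = {h for h, r, t in S_l} | {t for h, r, t in S_l}
    R_l = {r for h, r, t in S_l}                        # step (2)
    S_e = [(i, c) for i, c in instanceof if i in I]     # step (3)
    C = {c for i, c in S_e}                             # step (4)
    S_c = [(ci, cj) for ci, cj in subclassof
           if ci in C and cj in C]                      # step (5)
    return S_l, S_e, S_c, R_l, I, C                     # step (6)
\end{verbatim}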
To evaluate every model's performance in handling the transitivity of isA relations, we generate some new triples based on YAGO39K using the transitivity of isA relations. These new triples are added to the valid and test datasets of YAGO39K to create a new dataset named M-YAGO39K. The specific steps are as follows: (1) For every \texttt{instanceOf} triple $(i, r_e, c)$ in the valid and test datasets, if $(c, r_c, c_j)$ exists in the training dataset, we save a new \texttt{instanceOf} triple $(i, r_e, c_j)$. (2) For every \texttt{subClassOf} triple $(c_i, r_c, c_j)$ in the valid and test datasets, if $(c_j, r_c, c_k)$ exists in the training dataset, we save a new \texttt{subClassOf} triple $(c_i, r_c, c_k)$. (3) We add these new triples to the valid and test datasets of YAGO39K to get M-YAGO39K. The statistics of YAGO39K and M-YAGO39K are shown in Table \ref{table1}. \subsection{Link Prediction} Link prediction aims to predict the missing $h$ or $t$ for a relational triple $(h, r, t)$. In this task, we need to give a ranking list of candidate instances from the knowledge graph, instead of only giving one best result. For every test relational triple $(h, r, t)$, we remove the head or tail instance and replace it with every instance in the knowledge graph, and rank these instances in ascending order of the distances calculated by the loss function $f_r$. Just like \cite{TransE}, we use two evaluation metrics in this task: (1) the mean reciprocal rank of all correct instances (MRR) and (2) the proportion of correct instances ranked no larger than N (Hits@N). A good embedding model should achieve a high MRR and a high Hits@N. We note that a corrupted triple may also exist in the knowledge graph, in which case it should also be regarded as a correct prediction. However, the above evaluations do not handle this issue and may underestimate the results. Hence, we filter out every triple that appears in our knowledge graph before computing the ranking list. The first evaluation setting is called ``Raw'' and the second one is called ``Filter.'' We report the experimental results for both settings. In this task, we use the dataset YAGO39K for evaluation. We select the learning rate $\lambda$ for SGD among \{0.1, 0.01, 0.001\}, the three margins $\gamma_l$, $\gamma_e$ and $\gamma_c$ among \{0.1, 0.3, 0.5, 1, 2\}, and the dimension of instance and relation vectors $n$ among \{20, 50, 100\}. The best configurations are determined according to Hits@10 on the valid set. The optimal configurations are: $\gamma_l = 1$, $\gamma_e = 0.1$, $\gamma_c = 1$, $\lambda = 0.001$, $n = 100$, taking $L_2$ as dissimilarity. We train every model for 1000 rounds in this task. Evaluation results on YAGO39K are shown in Table \ref{table2}. From the table, we can conclude that: (1) TransC significantly outperforms other models in terms of Hits@N. This indicates that TransC can use the information of isA triples better than other models, which is helpful for instance representation learning. (2) TransC performs a little worse than DistMult in some settings. The reason may be that we determine the best configurations only according to Hits@10, which may lead to a low MRR. (3) The ``bern'' sampling trick works well for TransC.
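For reference, the two ranking metrics used above are easy to state in code (a minimal sketch of the standard definitions, assuming the filtered rank of the correct instance has already been computed for each test query):

\begin{verbatim}
def link_prediction_metrics(ranks, Ns=(1, 3, 10)):
    # ranks: rank of the correct instance for each test query
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hits = {N: sum(r <= N for r in ranks) / len(ranks) for N in Ns}
    return mrr, hits

mrr, hits = link_prediction_metrics([1, 4, 12])
print(mrr, hits)  # 0.444..., {1: 0.333..., 3: 0.333..., 10: 0.666...}
\end{verbatim}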
\begin{table*}[!htb] \centering \setlength{\belowcaptionskip}{-1pt} \small \begin{tabular}{c|cccc|cccc} \hline Datasets & \multicolumn{4}{c|}{YAGO39K} & \multicolumn{4}{c}{M-YAGO39K}\\ \hline \multirow{1}{*}{Metric} & Accuracy & Precision & Recall & F1-Score & Accuracy & Precision & Recall & F1-Score \\ \hline TransE & 82.6 & 83.6 & 81.0 & 82.3 & 71.0$\downarrow$ & 81.4$\downarrow$ & 54.4$\downarrow$ & 65.2$\downarrow$ \\ TransH & 82.9 & 83.7 & 81.7 & 82.7 & 70.1$\downarrow$ & 80.4$\downarrow$ & 53.2$\downarrow$ & 64.0$\downarrow$ \\ TransR & 80.6 & 79.4 & \textbf{82.5} & 80.9 & 70.9$\downarrow$ & 73.0$\downarrow$ & 66.3$\downarrow$ & 69.5$\downarrow$ \\ TransD & 83.2 & 84.4 & 81.5 & 82.9 & 72.5$\downarrow$ & 73.1$\downarrow$ & 71.4$\downarrow$ & 72.2$\downarrow$\\ HolE & 82.3 & 86.3 & 76.7 & 81.2 & 74.2$\downarrow$ & 81.4$\downarrow$ & 62.7$\downarrow$ & 70.9$\downarrow$ \\ DistMult & \textbf{83.9} & \textbf{86.8} & 80.1 & \textbf{83.3} & 70.5$\downarrow$ & 86.1$\downarrow$ & 49.0$\downarrow$ & 62.4$\downarrow$ \\ ComplEx & 83.3 & 84.8 & 81.1 & 82.9 & 70.2$\downarrow$ & 84.4$\downarrow$ & 49.5$\downarrow$ & 62.4$\downarrow$ \\ \hline TransC (unif) & 80.2 & 81.6 & 80.0 & 79.7 & \textbf{85.5}$\uparrow$ & \textbf{88.3}$\uparrow$ & 81.8$\uparrow$ & 85.0$\uparrow$ \\ TransC (bern) & 79.7 & 83.2 & 74.4 & 78.6 & 85.3$\uparrow$ & 86.1$\uparrow$ & \textbf{84.2}$\uparrow$ & \textbf{85.2}$\uparrow$ \\ \hline \end{tabular} \caption{\label{table3}Experimental results on \texttt{instanceOf} triple classification (\%).} \end{table*} \begin{table*}[!htb] \centering \setlength{\belowcaptionskip}{-1pt} \small \begin{tabular}{c|cccc|cccc} \hline Datasets & \multicolumn{4}{c|}{YAGO39K} & \multicolumn{4}{c}{M-YAGO39K}\\ \hline \multirow{1}{*}{Metric} & Accuracy & Precision & Recall & F1-Score & Accuracy & Precision & Recall & F1-Score \\ \hline TransE & 77.6 & 72.2 & 89.8 & 80.0 & 76.9$\downarrow$ & 72.3$\uparrow$ & 87.2$\downarrow$ & 79.0$\downarrow$ \\ TransH & 80.2 & 76.4 & 87.5 & 81.5 & 79.1$\downarrow$ & 72.8$\downarrow$ & 92.9$\uparrow$ & 81.6$\uparrow$ \\ TransR & 80.4 & 74.7 & 91.9 & 82.4 & 80.0$\downarrow$ & 73.9$\downarrow$ & 92.9$\uparrow$ & 82.3$\downarrow$ \\ TransD & 75.9 & 70.6 & 88.8 & 78.7 & 76.1$\uparrow$ & 70.7$\uparrow$ & 89.0$\uparrow$ & 78.8$\uparrow$\\ HolE & 70.5 & 73.9 & 63.3 & 68.2 & 66.6$\downarrow$ & 72.3$\downarrow$ & 53.7$\downarrow$ & 61.7$\downarrow$\\ DistMult & 61.9 & 68.7 & 43.7 & 53.4 & 60.7$\downarrow$ & 71.7$\uparrow$ & 35.5$\downarrow$ & 47.7$\downarrow$\\ ComplEx & 61.6 & 71.5 & 38.6 & 50.1 & 59.8$\downarrow$ & 65.6$\downarrow$ & 41.4$\uparrow$ & 50.7$\uparrow$\\ \hline TransC (unif) & 82.9 & 77.1 & 93.7 & 84.6 & 83.0$\uparrow$ & 77.5$\uparrow$ & \textbf{93.1}$\downarrow$ & 84.7$\uparrow$ \\ TransC (bern) & \textbf{83.7} & \textbf{78.1} & \textbf{93.9} & \textbf{85.2} & \textbf{84.4}$\uparrow$ & \textbf{80.7}$\uparrow$ & 90.4$\downarrow$ & \textbf{85.3}$\uparrow$ \\ \hline \end{tabular} \caption{\label{table4}Experimental results on \texttt{subClassOf} triple classification (\%).} \end{table*} \subsection{Triple Classification}\label{triple classification} Triple classification aims to judge whether a given triple is correct or not, which is a binary classification task. This triple can be a relational triple, an \texttt{instanceOf} triple, or a \texttt{subClassOf} triple. Negative triples are needed for the evaluation of binary classification. Hence, we construct negative triples following the same setting as in \cite{NTN}.
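The decision rule, detailed in the next paragraph, thresholds the loss value of a triple with a per-relation threshold $\delta_r$ tuned on the valid set. A minimal sketch of this threshold search (an illustration only) is:

\begin{verbatim}
def best_threshold(scores, labels):
    # scores: loss values f_r of valid triples of one relation r
    # labels: True for positive triples, False for negative ones
    # A triple is classified positive iff its score is below delta_r.
    best_acc, best_delta = -1.0, None
    for delta in sorted(set(scores)) + [max(scores) + 1.0]:
        acc = sum((s < delta) == y
                  for s, y in zip(scores, labels)) / len(labels)
        if acc > best_acc:
            best_acc, best_delta = acc, delta
    return best_delta
\end{verbatim}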
In both the valid and test sets, there are as many positive triples as negative ones. For triple classification, we set a threshold $\delta_r$ for every relation $r$. For a given test triple, if its loss value is smaller than $\delta_r$, it is classified as positive, and otherwise as negative. $\delta_r$ is obtained by maximizing the classification accuracy on the valid set. In this task, we use the datasets YAGO39K and M-YAGO39K for evaluation. Parameters are selected in the same way as in link prediction. The best configurations are determined by accuracy on the valid set. The optimal configurations for YAGO39K are: $\gamma_l = 1$, $\gamma_e = 0.1$, $\gamma_c = 0.1$, $\lambda = 0.001$, $n = 100$, and taking $L_2$ as dissimilarity. The optimal configurations for M-YAGO39K are: $\gamma_l = 1$, $\gamma_e = 0.1$, $\gamma_c = 0.3$, $\lambda = 0.001$, $n = 100$, and taking $L_2$ as dissimilarity. For both datasets, we traverse all the training triples for 1000 rounds. Our datasets contain three kinds of triples, so we run experiments on each kind separately. Experimental results for relational triples, \texttt{instanceOf} triples, and \texttt{subClassOf} triples are shown in Table \ref{table2}, Table \ref{table3}, and Table \ref{table4}, respectively. In Table \ref{table3} and Table \ref{table4}, an upward arrow means that the model's performance improves from YAGO39K to M-YAGO39K, and a downward arrow means that it drops. From Table \ref{table2}, we can learn that: (1) TransC outperforms all previous work in relational triple classification. (2) The ``bern" sampling trick works better than ``unif" in TransC. From Table \ref{table3} and Table \ref{table4}, we can conclude that: (1) On YAGO39K, some compared models perform better than TransC in \texttt{instanceOf} triple classification. This is because \texttt{instanceOf} accounts for the majority of triples (53.5\%) among all relations in YAGO39K. In the compared models, this relation is therefore trained far more often than the others and comes close to its best attainable performance, at the expense of the remaining triples. TransC strikes a balance between them, so that all kinds of triples achieve good performance. (2) On YAGO39K, TransC outperforms the other models in \texttt{subClassOf} triple classification. As shown in Table \ref{table1}, there are far fewer \texttt{subClassOf} triples than \texttt{instanceOf} triples. Hence, the other models cannot reach their best performance on them under the adverse influence of the \texttt{instanceOf} triples. (3) On M-YAGO39K, TransC outperforms previous work in both \texttt{instanceOf} triple classification and \texttt{subClassOf} triple classification, which indicates that TransC handles the transitivity of isA relations much better than the other models. (4) Comparing the experimental results on YAGO39K and M-YAGO39K, we find that the performance of most previous models suffers a large drop in \texttt{instanceOf} triple classification and a small drop in \texttt{subClassOf} triple classification. This shows that previous work cannot deal well with \texttt{instanceOf}-\texttt{subClassOf} transitivity. (5) For TransC, nearly all metrics improve significantly from YAGO39K to M-YAGO39K. TransC thus handles both \texttt{instanceOf}-\texttt{subClassOf} transitivity and \texttt{subClassOf}-\texttt{subClassOf} transitivity well. \subsection{Case Study} We have shown that TransC performs well on knowledge graph embedding and on handling the transitivity of isA relations. In this section, we present an example of finding new \texttt{instanceOf} and \texttt{subClassOf} triples using the results of TransC.
As shown in Figure \ref{fig3}, \textit{New York City} is an instance and the others are concepts. The solid lines represent triples from our datasets and the dotted lines represent facts inferred by our model. TransC finds two new \texttt{instanceOf} triples, (\textit{New York City}, \texttt{instanceOf}, \textit{City}) and (\textit{New York City}, \texttt{instanceOf}, \textit{Municipality}). It also finds a new \texttt{subClassOf} triple, (\textit{Port Cities}, \texttt{subClassOf}, \textit{City}). By the transitivity of isA relations, all three of these new triples are correct. Unfortunately, most previous work regards these three triples as wrong, which means it cannot handle the transitivity of isA relations well. \begin{figure}[ht] \centering \setlength{\abovecaptionskip}{2pt} \setlength{\belowcaptionskip}{0pt} \includegraphics[width=75.0mm]{pic/4.pdf}\\ \caption{An inference example of TransC.} \label{fig3} \end{figure} \section{Conclusion and Future Work} In this paper, we propose a new knowledge graph embedding model named TransC. TransC embeds instances, concepts, and relations in the same space to deal with the transitivity of isA relations. We create a new dataset YAGO39K for evaluation. Experimental results show that TransC outperforms previous translation-based models in most cases. Besides, it handles the transitivity of isA relations much better than the other models. In our future work, we will explore the following research directions: (1) A sphere is a simple way to represent a concept in semantic space, but it is also a restrictive one. We will try to find a more expressive representation than spheres for concepts. (2) A concept may have different meanings in different triples. We will try to use several typical instance vectors as a concept's centers to represent its different meanings, so that a concept can have different embeddings in different triples. \section*{Acknowledgments} The work is supported by NSFC key project (No. 61533018, U1736204, 61661146007), Ministry of Education and China Mobile Research Fund (No. 20181770250), and THUNUS NExT Co-Lab.
\section{Introduction} Camera pose estimation is the task of determining the position and orientation of a camera in 3D space, and it has many applications in computer vision, cartography, and related fields. Augmented reality, robot localization, navigation, and 3D reconstruction are just a few of them. To estimate the camera pose, correspondences between known real-world features and their counterparts in the image plane of the camera have to be established. The features can be \emph{e.g}\bmvaOneDot points, lines, or combinations of both~\cite{kuang2013pose}. The task was first solved using \emph{point correspondences}~\cite{lowe1987three,fischler1981random}. This is called the \emph{Perspective-n-Point} (PnP) problem and it still enjoys the attention of the scientific community~\cite{ferraz2014very}. Camera pose can also be estimated using \emph{line correspondences}, which is called the \emph{Perspective-n-Line} (PnL) problem. Remarkable progress in solving PnL has been achieved in the last years~\cite{mirzaei2011globally,zhang2013robust,bhat2014line}, particularly thanks to the work of Mirzaei and Roumeliotis~\cite{mirzaei2011globally} and more recently to the work of Zhang \emph{et al}\bmvaOneDot\cite{zhang2013robust}. Both of the methods are accurate, cope well with noisy data, and are more efficient than the previously known methods. Computational efficiency is a critical aspect for many applications, and we show that it can be pushed even further. We propose an efficient solution to the PnL problem which is substantially faster, yet comparably accurate and robust, with respect to the state-of-the-art \cite{mirzaei2011globally,zhang2013robust}. The idea is to parameterize the 3D lines using Pl\"{u}cker coordinates~\cite{bartoli2005structure}, which allows the projection matrix to be estimated by Linear Least Squares. The camera pose parameters are then extracted from the projection matrix by posterior constraint enforcement. The proposed method (\textbf{i}) is more than an order of magnitude faster than the state-of-the-art \cite{mirzaei2011globally,zhang2013robust}, (\textbf{ii}) yields only one solution of the PnL problem, and (\textbf{iii}) similarly to the state-of-the-art, copes well with image noise and is initialization-free. These advantages make the proposed method particularly suitable for scenarios with many lines. The method needs 9 lines in the minimal case, so it is not practical for a RANSAC-like framework, where the required number of correspondences would result in an increased number of iterations. To eliminate this limitation, we involve an alternative algebraic scheme to deal with mismatched line correspondences. The rest of this paper is organized as follows. We present a review of related work in Section~\ref{sec:related}. Then we state the basics of parameterizing 3D lines using Pl\"{u}cker coordinates in Section~\ref{sec:pose-estim}, show how the lines are projected onto the image plane, and how we exploit this to estimate the camera pose. We evaluate the performance of our method using simulations and real-world experiments in Section~\ref{sec:evaluation}, and conclude in Section~\ref{sec:conclusions}. \vspace{-1em} \section{Related work} \label{sec:related} \vspace{-0.5em} The task of camera pose estimation from line correspondences has been receiving attention for more than two decades. Some of the earliest works are those of Liu \emph{et al}\bmvaOneDot\cite{liu1990determination} and Dhome \emph{et al}\bmvaOneDot\cite{dhome1989determination}.
They introduce two different ways of dealing with the PnL problem which can be traced up to the present day -- algebraic and iterative approaches. The \emph{iterative approaches} consider pose estimation as a Nonlinear Least Squares problem, iteratively minimizing a specific cost function, which usually has a geometrical meaning. Earlier works \cite{liu1990determination} attempted to estimate the camera position and orientation separately, while later ones \cite{kumar1994robust, christy1999iterative, david2003simultaneous} favour simultaneous estimation. The problem is that the majority of iterative algorithms do not guarantee convergence to the global minimum; therefore, without an accurate initialization, the estimated pose is often far from the true camera pose. The \emph{algebraic approaches} estimate the camera pose by solving a system of (usually polynomial) equations, minimizing an algebraic error. Dhome \emph{et al}\bmvaOneDot\cite{dhome1989determination} and Chen~\cite{chen1990pose} solve the minimal problem of pose estimation from 3 line correspondences, whereas Ansar and Daniilidis~\cite{ansar2003linear} work with 4 or more lines. Their algorithm has computational complexity quadratic in the number of lines and it may fail if the polynomial system has more than 1 solution. A more crucial disadvantage of these methods is that they become unstable in the presence of image noise and must be plugged into a RANSAC or similar loop. Recently, two major improvements of algebraic approaches have been achieved. First, Mirzaei and Roumeliotis~\cite{mirzaei2011globally} proposed a method which is both efficient (computational complexity linear in the number of lines) and robust in the presence of image noise. Cases with 3 or more lines can be handled. A polynomial system with 27 candidate solutions is constructed and solved through the eigendecomposition of a multiplication matrix. Camera orientations having the least squared error are considered to be the optimal ones. Camera positions are obtained separately using Linear Least Squares. Nonetheless, the problem of this algorithm is that it often yields multiple solutions. The second recent improvement is the work of Zhang \emph{et al}\bmvaOneDot\cite{zhang2013robust}. Their method works with 4 or more lines and is more accurate and robust than the method of Mirzaei and Roumeliotis. An intermediate model coordinate system is used in the method of Zhang \emph{et al}\bmvaOneDot, which is aligned with the 3D line of longest projection. The lines are divided into triples, for each of which a P3L polynomial is formed. The optimal solution of the polynomial system is selected from the roots of its derivative in terms of a least squares residual. A drawback of this algorithm is that the computational time increases strongly for higher numbers of lines. In this paper, we propose an algebraic solution to the PnL problem which is an order of magnitude faster than the two described state-of-the-art methods, yet comparably accurate and robust in the presence of image noise. \vspace{-1em} \section{Pose estimation using Pl\"{u}cker coordinates} \label{sec:pose-estim} \vspace{-0.5em} Let us assume that we have (\textbf{i}) a calibrated pinhole camera and (\textbf{ii}) correspondences between 3D lines and their images obtained by the camera. The 3D lines are parameterized using Pl\"{u}cker coordinates (Section~\ref{subsec:plucker}), which allows linear projection of the lines into the image (Section~\ref{subsec:projection}).
A line projection matrix can thus be estimated using Linear Least Squares (Section~\ref{subsec:lineprojmat}). The camera pose parameters are extracted from the line projection matrix (Section~\ref{subsec:estimcampose}). An outlier rejection scheme must be employed in cases where line mismatches occur (Section~\ref{subsec:outliers}). For the pseudocode of our algorithm, please refer to Appendix~A in the supplementary material~\cite{pribyl2015supplementary}. An implementation of our algorithm in Matlab is also provided. Let us now define the coordinate systems: a world coordinate system $\{W\}$ and a camera coordinate system $\{C\}$, both right-handed. The camera $x$-axis goes right, the $y$-axis goes up and the $z$-axis goes behind the camera, so that points situated in front of the camera have negative $z$ coordinates in $\{C\}$. A homogeneous 3D point $\mathbf{A}^{\scriptscriptstyle W} = (a^{\scriptscriptstyle W}_x ~ a^{\scriptscriptstyle W}_y ~ a^{\scriptscriptstyle W}_z ~ a^{\scriptscriptstyle W}_w)^\top$ in $\{W\}$ is transformed into a point $\mathbf{A}^{\scriptscriptstyle C} = (a^{\scriptscriptstyle C}_x ~ a^{\scriptscriptstyle C}_y ~ a^{\scriptscriptstyle C}_z ~ a^{\scriptscriptstyle C}_w)^\top$ in $\{C\}$ as \begin{equation} \mathbf{A}^{\scriptscriptstyle C} ~=~ \left( \begin{array}{lc} \mathbf{R} & -\mathbf{R} \mathbf{t} \\ \mathbf{0}_{\scriptscriptstyle 1 \times 3} & 1 \end{array} \right) ~ \mathbf{A}^{\scriptscriptstyle W}~, \label{eq:transform} \end{equation} \noindent where $\mathbf{R}$ is a $3 \times 3$ rotation matrix describing the orientation of the camera in $\{W\}$ by means of three consecutive rotations along the three axes $z$, $y$, $x$ by respective angles $\gamma$, $\beta$, $\alpha$. $\mathbf{t} = (t_x ~ t_y ~ t_z)^\top$ is a $3 \times 1$ translation vector representing the position of the camera in $\{W\}$. Recall that we have a calibrated pinhole camera (i.e. we know its intrinsic parameters) which observes a set of 3D lines. Given $n \ge 9$ 3D lines $\mathbf{L}_i$ $(i = 1 \dots n)$ and their respective projections $\mathbf{l}_i$ onto the normalized image plane, we are able to estimate the camera pose. We parameterize the 3D lines using Pl\"{u}cker coordinates. \subsection{Pl\"{u}cker coordinates of 3D lines} \label{subsec:plucker} 3D lines can be represented using several parameterizations in the projective space \cite{bartoli2005structure}. The parameterization using Pl\"{u}cker coordinates is complete (i.e. every 3D line can be represented) but not minimal (a 3D line has 4 degrees of freedom, but a Pl\"{u}cker coordinate is a homogeneous 6-vector). The benefit of using Pl\"{u}cker coordinates is the convenient linear projection of 3D lines onto the image plane. Given two distinct 3D points $\mathbf{A} = (a_x ~ a_y ~ a_z ~ a_w)^\top$ and $\mathbf{B} = (b_x ~ b_y ~ b_z ~ b_w)^\top$ in homogeneous coordinates, a line joining them can be represented using Pl\"{u}cker coordinates as a homogeneous 6-vector $\mathbf{L} = (\mathbf{u}^\top ~ \mathbf{v}^\top)^\top = (L_1 ~ L_2 ~ L_3 ~ L_4 ~ L_5 ~ L_6)^\top$, where \vspace{-1em} \begin{eqnarray} \label{eq:plucker} \mathbf{u}^\top &=& (L_1 ~ L_2 ~ L_3) = (a_x ~ a_y ~ a_z) ~ \times ~ (b_x ~ b_y ~ b_z) \\ \nonumber \mathbf{v}^\top &=& (L_4 ~ L_5 ~ L_6) = a_w(b_x ~ b_y ~ b_z) ~ - ~ b_w(a_x ~ a_y ~ a_z) \enspace, \end{eqnarray} \noindent '$\times$' denotes the vector cross product. The $\mathbf{v}$ part encodes the direction of the line while the $\mathbf{u}$ part encodes the position of the line in space.
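As a quick illustration of Eq.~(\ref{eq:plucker}), the following snippet (an illustration only, not part of the authors' released Matlab implementation) builds the Pl\"{u}cker coordinates of a line from two homogeneous points and checks the orthogonality of $\mathbf{u}$ and $\mathbf{v}$ discussed next.

\begin{verbatim}
import numpy as np

def plucker_line(A, B):
    # Plucker coordinates L = (u, v) of the line joining homogeneous
    # 3D points A = (ax, ay, az, aw) and B = (bx, by, bz, bw).
    a, aw = A[:3], A[3]
    b, bw = B[:3], B[3]
    u = np.cross(a, b)       # normal of the interpretation plane
    v = aw * b - bw * a      # direction of the line
    return np.hstack([u, v])

L = plucker_line(np.array([1.0, 0.0, 0.0, 1.0]),
                 np.array([0.0, 1.0, 0.0, 1.0]))
assert abs(L[:3] @ L[3:]) < 1e-12   # bilinear constraint u.v = 0
\end{verbatim}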
In fact, $\mathbf{u}$ is a normal of the interpretation plane -- the plane passing through the line and the origin. As a consequence, $\mathbf{L}$ must satisfy the bilinear constraint $\mathbf{u}^\top \mathbf{v} = 0$. The existence of this constraint explains the discrepancy between the 4 degrees of freedom of a 3D line and its parameterization by a homogeneous 6-vector. More on Pl\"{u}cker coordinates can be found in~\cite{hartley2004multiple}. \vspace{-1em} \subsection{Projection of 3D lines} \label{subsec:projection} \vspace{-0.3em} 3D lines can be transformed from the world coordinate system $\{W\}$ into the camera coordinate system $\{C\}$ using the $6 \times 6$ line motion matrix $\mathbf{T}$~\cite{bartoli20043d} as \vspace{-0.5em} \begin{equation} {\mathbf{L}}^{\scriptscriptstyle C} = \mathbf{T} {\mathbf{L}}^{\scriptscriptstyle W} \enspace . \label{eq:linetransform} \end{equation} \vspace{-1.7em} \noindent The line motion matrix is defined as \vspace{-0.5em} \begin{equation} \mathbf{T} = \left( \begin{array}{lc} \mathbf{R} & \mathbf{R} [-\mathbf{t}]_{\times} \\ \mathbf{0}_{\scriptscriptstyle 3 \times 3} & \mathbf{R} \end{array} \right) \enspace , \label{eq:linemotionmatrix} \end{equation} \noindent where $\mathbf{R}$ is a $3 \times 3$ rotation matrix and $[\mathbf{t}]_{\times}$ is the $3 \times 3$ skew-symmetric matrix constructed from the translation vector $\mathbf{t}$\footnote{Please note that our line motion matrix differs slightly from the matrix of Bartoli and Sturm~\cite[Eq.~(6)]{bartoli20043d}, namely in the upper right term: We have $\mathbf{R} [-\mathbf{t}]_{\times}$ instead of $[\mathbf{t}]_{\times} \mathbf{R}$ due to a different choice of coordinate system.}. After the 3D lines are transformed into the camera coordinate system, their projections onto the image plane can be determined as the intersections of their interpretation planes with the image plane; see Figure~\ref{fig:projection} for an illustration. \begin{figure} \hspace{-2em} \floatbox[ \capbeside\thisfloatsetup{capbesideposition={right,center},capbesidewidth=0.55\linewidth} ]{figure}[0.85\FBwidth] { \caption{ 3D line projection. The 3D line $\mathbf{L}$ is parameterized by its direction vector $\mathbf{v}$ and a normal $\mathbf{u}$ of its interpretation plane, which passes through the origin of the camera coordinate system $\{C\}$. Since the projected 2D line $\mathbf{l}$ lies at the intersection of the interpretation plane and the image plane, it is fully defined by the normal $\mathbf{u}$. } \label{fig:projection} } { \includegraphics[width=\linewidth]{line_projection.eps} \hspace{-1.5em} } \vspace{-1.25em} \end{figure} Recall from Eq.~(\ref{eq:plucker}) that the coordinates of a 3D line consist of two 3-vectors: $\mathbf{u}$ (the normal of the interpretation plane) and $\mathbf{v}$ (the direction of the line). Since $\mathbf{v}$ is not needed to determine the projection of a line, only $\mathbf{u}$ needs to be computed. Thus, when transforming a 3D line according to Eq.~(\ref{eq:linetransform}) in order to calculate its projection, only the upper half of $\mathbf{T}$ is needed, yielding the $3 \times 6$ line projection matrix \vspace{-0.8em} \begin{equation} \mathbf{P} = \left( \begin{array}{ccc} \mathbf{R} & & \mathbf{R} [-\mathbf{t}]_{\times} \end{array} \right) \enspace .
\label{eq:lineprojectionmatrix} \end{equation} \vspace{-1.2em} \noindent A 3D line ${\mathbf{L}}^{\scriptscriptstyle W}$ is then projected using the line projection matrix $\mathbf{P}$ as \vspace{-0.5em} \begin{equation} {\mathbf{l}}^{\scriptscriptstyle C} \approx \mathbf{P} {\mathbf{L}}^{\scriptscriptstyle W} \enspace , \label{eq:lineprojection} \end{equation} \noindent where ${\mathbf{l}}^{\scriptscriptstyle C} = (l^{\scriptscriptstyle C}_x ~ l^{\scriptscriptstyle C}_y ~ l^{\scriptscriptstyle C}_w)^\top$ is a homogeneous 2D line in the normalized image plane and '$\approx$' denotes the equivalence of homogeneous coordinates, i.e. equality up to multiplication by a scale factor. \vspace{-1em} \subsection{Linear estimation of the line projection matrix} \label{subsec:lineprojmat} \vspace{-0.5em} As the projection of 3D lines is defined by Eq.~(\ref{eq:lineprojection}), the problem of camera pose estimation resides in estimating the line projection matrix $\mathbf{P}$, which encodes all six camera pose parameters $t_x$, $t_y$, $t_z$, $\alpha$, $\beta$, $\gamma$. We solve this problem using the Direct Linear Transformation (DLT) algorithm, similarly to Hartley~\cite{hartley1998minimizing}, who works with points. The system of linear equations (\ref{eq:lineprojection}) can be transformed into a homogeneous system \vspace{-1.1em} \begin{equation} \label{eq:system} \mathbf{M} \mathbf{p} = \mathbf{0} \end{equation} \noindent by rearranging each equation of (\ref{eq:lineprojection}) so that only a 0 remains on the right-hand side. This forms a $2n \times 18$ measurement matrix $\mathbf{M}$ which contains the coefficients of the equations generated by the correspondences between 3D lines and their projections $\mathbf{L}_i \leftrightarrow \mathbf{l}_i$ $(i = 1 \dots n, ~ n \ge 9)$. For details on the construction of $\mathbf{M}$, please refer to Appendix~B in the supplementary material~\cite{pribyl2015supplementary}. The DLT then solves (\ref{eq:system}) for $\mathbf{p}$, which is an 18-vector containing the entries of the line projection matrix $\mathbf{P}$. Eq.~(\ref{eq:system}), however, holds only in the noise-free case. If noise is present in the measurements, an inconsistent system is obtained: \vspace{-0.5em} \begin{equation} \label{eq:noisysystem} \mathbf{M} \mathbf{\hat{p}} = \boldsymbol{\epsilon} \end{equation} \noindent Only an approximate solution $\mathbf{\hat{p}}$ may then be found by minimizing the $2n$-vector of measurement residuals $\boldsymbol{\epsilon}$ on the right-hand side of Eq.~(\ref{eq:noisysystem}) in the least squares sense. Since the DLT algorithm is sensitive to the choice of the coordinate system, it is crucial to prenormalize the data to get a properly conditioned $\mathbf{M}$~\cite{hartley1997defense}. Thanks to the principle of duality~\cite{coxeter2003projective}, the coordinates of 2D lines can be treated as homogeneous coordinates of 2D points. The points should be translated and scaled so that their centroid is at the origin and their average distance from the origin is equal to $\sqrt{2}$. The Pl\"{u}cker coordinates of 3D lines cannot be treated as homogeneous 5D points because of the bilinear constraint (see Section~\ref{subsec:plucker}). However, the point closest to a set of 3D lines can be computed using the Weiszfeld algorithm~\cite{aftab1lqclosest}, and the lines can be translated so that this closest point is the origin.
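Putting the pieces together, the DLT estimation can be sketched compactly as below. The exact construction of $\mathbf{M}$ used in our method is given in Appendix~B of the supplementary material~\cite{pribyl2015supplementary}; the sketch instead uses the standard cross-product linearization of Eq.~(\ref{eq:lineprojection}), omits prenormalization, and assumes one particular stacking of the entries of $\mathbf{P}$ into $\mathbf{p}$, so it is an illustration of the idea rather than the reference implementation.

\begin{verbatim}
import numpy as np

def skew(x):
    return np.array([[    0, -x[2],  x[1]],
                     [ x[2],     0, -x[0]],
                     [-x[1],  x[0],     0]])

def estimate_P(lines3d, lines2d):
    # lines3d: n x 6 array of Plucker coordinates L_i
    # lines2d: n x 3 array of observed homogeneous 2D lines l_i
    # Each correspondence gives [l]_x (P L) = 0, linear in P; two of
    # the three equations are independent (the first two are used here
    # for simplicity; a robust version would pick the best pair).
    rows = []
    for L, l in zip(lines3d, lines2d):
        Sx = skew(l)
        for k in range(2):
            rows.append(np.kron(Sx[k], L))
    M = np.asarray(rows)           # the 2n x 18 measurement matrix
    _, _, Vt = np.linalg.svd(M)
    p = Vt[-1]                     # least-squares solution of M p = 0
    return p.reshape(3, 6)         # rows of P stacked in p
\end{verbatim}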
Once the system of linear equations given by (\ref{eq:noisysystem}) is solved in the least squares sense, \emph{e.g}\bmvaOneDot by Singular Value Decomposition (SVD) of $\mathbf{M}$, the estimate $\mathbf{\hat{P}}$ of the $3 \times 6$ line projection matrix $\mathbf{P}$ can be recovered from the 18-vector $\mathbf{\hat{p}}$. \subsection{Estimation of the camera pose} \label{subsec:estimcampose} The $3 \times 6$ estimate $\mathbf{\hat{P}}$ of the line projection matrix $\mathbf{P}$ obtained as a least squares solution of Eq.~(\ref{eq:noisysystem}) does not satisfy the constraints imposed on $\mathbf{P}$. In fact, $\mathbf{P}$ has only 6 degrees of freedom -- the 6 camera pose parameters $t_x$, $t_y$, $t_z$, $\alpha$, $\beta$, $\gamma$. It has, however, 18 entries, suggesting 12 independent constraints, see Eq.~(\ref{eq:lineprojectionmatrix}). The first six constraints are imposed by the rotation matrix $\mathbf{R}$, which must satisfy the orthonormality constraints (unit-norm and mutually orthogonal rows). The other six constraints are imposed by the skew-symmetric matrix $[\mathbf{t}]_{\times}$ (three zeros on the main diagonal and antisymmetric off-diagonal elements). We propose the following method to extract the camera pose parameters from the estimate $\mathbf{\hat{P}}$. First, the scale of $\mathbf{\hat{P}}$ has to be determined, since $\mathbf{\hat{p}}$ is usually of unit length, being a minimizer of $\boldsymbol{\epsilon}$ in Eq.~(\ref{eq:noisysystem}). The correct scale of $\mathbf{\hat{P}}$ can be determined from its left $3 \times 3$ submatrix $\mathbf{\hat{P}}_1$, which is an estimate of the rotation matrix $\mathbf{R}$. Since the determinant of a rotation matrix must be equal to 1, $\mathbf{\hat{P}}$ has to be scaled by a factor $s = {1}/{\sqrt[3]{\det \mathbf{\hat{P}}_1}}$ so that $\det (s\mathbf{\hat{P}}_1) = 1$. Second, the camera pose parameters can be extracted from $s\mathbf{\hat{P}}_2$, the scaled right $3 \times 3$ submatrix of $\mathbf{\hat{P}}$. The right submatrix is an estimate of the product of an orthonormal and a skew-symmetric matrix ($\mathbf{R}[-\mathbf{t}]_{\times}$), which has the same structure as the essential matrix~\cite{longuet1981computer} used in multi-view computer vision. Therefore, we use a method for the decomposition of an essential matrix into a rotation matrix and a skew-symmetric matrix (see~\cite[p. 258]{hartley2004multiple}) as follows: Let $s\mathbf{\hat{P}}_2$ = $\mathbf{U} \bm{\Sigma} \mathbf{V}^\top$ be the SVD of the scaled $3 \times 3$ submatrix $s\mathbf{\hat{P}}_2$, and let \begin{equation} \mathbf{Z} = \left( \begin{array}{rrr} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right) \enspace , \enspace \mathbf{W} = \left( \begin{array}{rrr} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{array} \right)\enspace .
\label{eq:ZWmatrices} \end{equation} \noindent Two possible solutions (A and B) exist for the estimate $\mathbf{\hat{R}}$ of the rotation matrix and the estimate $\hat{[\mathbf{t}]}_{\times}$ of the skew-symmetric matrix: \begin{equation} \label{eq:solutions} \begin{array}{lcl} \mathbf{\hat{R}}_\mathrm{A} = \mathbf{UW} \;\; \mathrm{diag}(1 \; 1 \> \pm 1) \mathbf{V}^\top, & \hat{[\mathbf{t}]}_{\times\mathrm{A}} = \sigma \mathbf{VZ} \;\; \mathbf{V}^\top\\ \mathbf{\hat{R}}_\mathrm{B} = \mathbf{UW}^\top \mathrm{diag}(1 \; 1 \> \pm 1) \mathbf{V}^\top, & \hat{[\mathbf{t}]}_{\times\mathrm{B}} = \sigma \mathbf{VZ}^\top \mathbf{V}^\top \end{array} \enspace , \end{equation} \noindent where $\sigma = (\Sigma_{1,1} + \Sigma_{2,2}) / 2$ is the average of the first two singular values of $s\mathbf{\hat{P}}_2$ (a properly constrained essential matrix has its first and second singular values equal to each other and the third one zero). The $\pm 1$ term in Eq.~(\ref{eq:solutions}) denotes either $1$ or $-1$, whichever has to be put on the diagonal so that $\det \mathbf{\hat{R}}_\mathrm{A} = \det \mathbf{\hat{R}}_\mathrm{B} = 1$. The correct solution, A or B, is chosen based on a simple check of whether the 3D lines are in front of the camera or not. Extraction of the components $t_x$, $t_y$, $t_z$ of the translation vector from the skew-symmetric matrix $[\mathbf{t}]_{\times}$, as well as extraction of the rotation angles $\alpha$, $\beta$, $\gamma$ from the rotation matrix $\mathbf{R}$, is straightforward. This completes the pose estimation procedure. Alternative ways of extracting the camera pose parameters from $s\mathbf{\hat{P}}$ also exist, \emph{e.g}\bmvaOneDot computing the rotation matrix $\mathbf{\hat{R}}$ closest to the scaled left $3 \times 3$ submatrix $s\mathbf{\hat{P}}_1$ and then computing $\hat{[\mathbf{t}]}_{\times} = - \mathbf{\hat{R}}^\top s\mathbf{\hat{P}}_2$. However, our experiments showed that the alternative ways are less robust to image noise. Therefore, we have chosen the solution described in this section. \vspace{0.5em} \subsection{Rejection of mismatched lines} \label{subsec:outliers} In practice, mismatches of lines (i.e. outlying correspondences) often occur, which degrades the performance of camera pose estimation. The RANSAC algorithm is commonly used to identify and remove outliers; however, as our method works with 9 or more line correspondences, it is unsuitable for use in a RANSAC-like framework because the required number of correspondences leads to an increased number of iterations. For this reason, we use an alternative scheme called Algebraic Outlier Rejection (AOR), recently proposed by Ferraz \emph{et al}\bmvaOneDot~\cite{ferraz2014very}. It is an iterative approach integrated directly into the pose estimation procedure (specifically, into solving Eq.~(\ref{eq:noisysystem}) in Section~\ref{subsec:lineprojmat}) in the form of Iteratively Reweighted Least Squares. Wrong correspondences are identified as outlying based on the residuals $\epsilon_i$ of the least squares solution of Eq.~(\ref{eq:noisysystem}). Correspondences with residuals above a predefined threshold $\epsilon_\mathrm{max}$ are assigned zero weights, which effectively removes them from processing in the next iteration, and the solution is recomputed. This is repeated until the error of the solution stops decreasing.
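A simplified, binary-weight sketch of this reweighting loop is given below. The threshold schedule for $\epsilon_\mathrm{max}$ is discussed next, so it is passed in as a callable here; for brevity the sketch weights individual rows of $\mathbf{M}$ rather than correspondences, so it illustrates the scheme of Ferraz \emph{et al}\bmvaOneDot rather than reproducing their reference implementation.

\begin{verbatim}
import numpy as np

def aor_estimate(M, eps_max, max_iter=20):
    # M:       2n x 18 measurement matrix (not prenormalized; inliers
    #          should be prenormalized just before the last iteration)
    # eps_max: callable (iteration, residuals) -> threshold
    w = np.ones(M.shape[0])
    prev_err = np.inf
    p = None
    for it in range(max_iter):
        _, _, Vt = np.linalg.svd(w[:, None] * M)
        p = Vt[-1]
        res = np.abs(M @ p)              # per-equation residuals
        err = np.sum(res[w > 0] ** 2)
        if err >= prev_err:
            break                        # error stopped decreasing
        prev_err = err
        w = (res <= eps_max(it, res)).astype(float)
    return p, w > 0                      # solution and inlier mask
\end{verbatim}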
The strategy for choosing $\epsilon_\mathrm{max}$ may be arbitrary, but our experiments showed that the strategy $\epsilon_\mathrm{max} = \mathrm{Q}_{j}(\epsilon_1, \ldots, \epsilon_n)$ offers a good tradeoff between robustness and the number of iterations. $\mathrm{Q}_j(\cdot)$ denotes the $j$th quantile, where $j$ decreases following the sequence (0.9, 0.8, $\ldots$ , 0.3) for the first 7 iterations and then remains constant at 0.25. This strategy usually leads to approximately 10 iterations. It is important \emph{not} to prenormalize the data in this case because it would impede the identification of outliers. Prenormalization of the inliers should be done just before the last iteration. \section{Experimental evaluation} \label{sec:evaluation} The accuracy, robustness, and efficiency of the proposed algorithm were evaluated and compared with the state-of-the-art methods. The following methods were compared: \newenvironment{tight_enumerate}{ \begin{enumerate} \setlength{\itemsep}{0em} \setlength{\parskip}{0em} }{\end{enumerate}} \begin{tight_enumerate} \item \textbf{Mirzaei}, the method by Mirzaei and Roumeliotis~\cite{mirzaei2011globally} (results shown in red {\textcolor[rgb]{0.75,0,0}{\rule{0.5em}{0.5em}}}\,), \item \textbf{Zhang}, the method by Zhang \emph{et al}\bmvaOneDot\cite{zhang2013robust} (results shown in blue {\textcolor[rgb]{0,0,0.75}{\rule{0.5em}{0.5em}}}\,), \item \textbf{ours}, the proposed method (results shown in green {\textcolor[rgb]{0,0.75,0}{\rule{0.5em}{0.5em}}}\,). \end{tight_enumerate} \noindent Both simulations using synthetic lines and experiments using real-world imagery are presented. \vspace{-.75em} \subsection{Synthetic lines} \label{subsec:synthetic-lines} \vspace{-.5em} Monte Carlo simulations with synthetic lines were performed under the following setup: at each trial, $n$ 3D line segments were generated by randomly placing segment endpoints inside a cube $10^3$\,m large which was centered at the origin of $\{W\}$. A virtual pinhole camera with an image size of $640 \times 480$\,pixels and a focal length of 800\,pixels was placed randomly at a distance of 25\,m from the origin. The camera was then oriented so that it looked directly at the origin, having all 3D line segments in its field of view. The 3D line segments were projected onto the image plane. The coordinates of the 2D endpoints were then perturbed with independent and identically distributed Gaussian noise with a standard deviation of $\sigma_\mathrm{p}$ pixels. 1000 trials were carried out for each combination of the $n$, $\sigma_\mathrm{p}$ parameters. The accuracy and robustness of each method were evaluated by comparing the estimated and true camera poses while varying $n$ and $\sigma_\mathrm{p}$, similarly to~\cite{mirzaei2011globally}. The position error $\Delta\tau = ||\mathbf{\hat{t}} - \mathbf{t}||$ is the distance from the estimated position $\mathbf{\hat{t}}$ to the true position $\mathbf{t}$. The orientation error $\Delta\Theta$ was calculated as follows. The difference between the true and estimated rotation matrices ($\mathbf{R}^\top \mathbf{\hat{R}}$) is converted to the axis-angle representation ($\mathbf{e}$, $\theta$), and the absolute value of the difference angle $|\theta|$ is taken as the orientation error. \begin{figure}[h] \centering \includegraphics[width=0.99\linewidth]{errors.eps} \caption{ The distribution of orientation errors ($\Delta\Theta$, \textbf{top}) and position errors ($\Delta\tau$, \textbf{bottom}) in the estimated camera pose as a function of the number of lines.
Two levels of Gaussian noise are depicted: with a standard deviation of $\sigma_\mathrm{p}=2$\,px (\textbf{left}) and with $\sigma_\mathrm{p}=10$\,px (\textbf{right}). Each box depicts the median (\emph{dash}), the interquartile range -- IQR (\emph{box body}), the minima and maxima in the interval of $10 \times$ IQR (\emph{whiskers}), and outliers (\emph{isolated dots}). } \label{fig:errors} \end{figure} As illustrated in Figure~\ref{fig:errors}, 25 lines are generally enough for our method to be on par with the state-of-the-art in terms of accuracy. 50 and more lines are usually exploited better by our method. As the number of lines grows, our method becomes even more accurate than the others. It should be noted that the orientation error decreases more rapidly than the position error with the number of lines. Our method is outperformed by the others in the minimal case of 9 lines. However, as soon as more lines are available, the results of our approach rapidly improve. This behavior is a consequence of the chosen parameterization: the Pl\"{u}cker coordinates of 9 lines are just enough to define all 18 entries of the line projection matrix $\mathbf{P}$ in Eq.~(\ref{eq:lineprojectionmatrix}). More lines bring redundancy into the system and compensate for noise in the measurements. However, even 9 lines are enough to produce an exact solution in the noise-free case. All three methods sometimes yield an improper estimate with exactly opposite orientation. This can be observed as isolated dots, particularly in Figure~\ref{fig:errors} (top, right). Furthermore, the method of Mirzaei sometimes produced an estimate where the camera is located in between the 3D lines and has a random orientation. This happened more frequently in the presence of stronger image noise, as is apparent from the increased red bars in Figure~\ref{fig:errors} (right). The robustness of Mirzaei's method is thus much lower compared to our method and Zhang's method. However, the method of Zhang sometimes produced a degenerate pose estimate very far from the correct camera position when the 3D lines projected onto a single image point (this phenomenon cannot be seen in Figure~\ref{fig:errors} as such estimates are out of the scale of the plots). The proposed method does not suffer from either of these two issues and is more robust in cases with 50 and more lines. \subsection{Real images} The three methods were also tested using real-world images from the VGG Multiview Data\-set\footnote{\url{http://www.robots.ox.ac.uk/\textasciitilde vgg/data/data-mview.html}}. It contains indoor and outdoor image sequences of buildings with extracted 2D line segments, their reconstructed positions in 3D, and camera projection matrices. Each method was run on the data, and the estimated camera poses were used to reproject the 3D lines onto the images to validate the results. The proposed algorithm performs similarly to or better than Zhang's method, while Mirzaei's method behaves noticeably worse, as can be seen in Figure~\ref{fig:VGG_dataset} and Table~\ref{table:results}. Detailed results with all images from the sequences are available as supplementary material~\cite{pribyl2015supplementary}. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{VGG_dataset.eps} % \caption{ (\textbf{top}) Example images from the VGG dataset overlaid with reprojections of 3D line segments using our estimated camera pose.
(\textbf{bottom}) Average camera orientation error $\Delta\Theta = |\theta|$ and average position error $\Delta\tau = ||\mathbf{\hat{t}} - \mathbf{t}||$ in the individual image sequences. } \label{fig:VGG_dataset} \vspace{-1em} \end{figure} \setlength{\tabcolsep}{3pt} \begin{table} \begin{center} {\fontsize{.95em}{1.2em}\selectfont \begin{tabular}{lrrccccccccc} \hline\noalign{\smallskip} & \#\hspace{0.5em} & \#\hspace{0.8em} & & \multicolumn{2}{c}{\textbf{Mirzaei}} & & \multicolumn{2}{c}{\textbf{Zhang}} & & \multicolumn{2}{c}{\textbf{ours}}\\ Sequence & lines & imgs. & & $\Delta\Theta$ & $\Delta\tau$ & & $\Delta\Theta$ & $\Delta\tau$ & & $\Delta\Theta$ & $\Delta\tau$\\ \noalign{\smallskip} \hline \noalign{\smallskip} Corridor & 69 & 11 & & 15.510\,$^{\circ}$ & 1.510\,m & & \textbf{0.029\,$^{\circ}$} & \textbf{0.008\,m} & & 0.034\,$^{\circ}$ & 0.013\,m\\ Merton College I & 295 & 3 & & \hspace{0.5em}1.610\,$^{\circ}$ & 0.511\,m & & 0.401\,$^{\circ}$ & \textbf{0.115\,m} & & \textbf{0.195\,$^{\circ}$} & 0.128\,m\\ Merton College II & 302 & 3 & & 22.477\,$^{\circ}$ & 5.234\,m & & 0.676\,$^{\circ}$ & 0.336\,m & & \textbf{0.218\,$^{\circ}$} & \textbf{0.151\,m}\\ Merton College III & 177 & 3 & & \hspace{0.5em}1.667\,$^{\circ}$ & 0.608\,m & & 0.859\,$^{\circ}$ & 0.436\,m & & \textbf{0.223\,$^{\circ}$} & \textbf{0.101\,m}\\ University Library & 253 & 3 & & \hspace{0.5em}0.837\,$^{\circ}$ & 0.423\,m & & 1.558\,$^{\circ}$ & 0.833\,m & & \textbf{0.189\,$^{\circ}$} & \textbf{0.138\,m}\\ Wadham College & 380 & 5 & & 21.778\,$^{\circ}$ & 3.907\,m & & 0.103\,$^{\circ}$ & \textbf{0.047\,m} & & \textbf{0.086\,$^{\circ}$} & 0.072\,m\\ \hline \end{tabular} } \vspace{-1.5em} \end{center} \caption{Results of the methods on the VGG dataset in terms of the average camera orientation error $\Delta\Theta = |\theta|$ and the average position error $\Delta\tau = ||\mathbf{\hat{t}} - \mathbf{t}||$. The best results are in bold.} \label{table:results} \end{table} \setlength{\tabcolsep}{1.4pt} \subsection{Efficiency} The efficiency of each method was evaluated by measuring the runtime on a desktop PC with a quad-core Intel i5 3.33\,GHz CPU. Matlab implementations downloaded from the websites of the respective authors were used. As can be seen in Table~\ref{table:time} and Figure~\ref{fig:time}, our method significantly outperforms the others in terms of speed. The computational complexity of all evaluated methods is linearly dependent on the number of lines. However, the absolute numbers differ substantially. Mirzaei's method is slower than Zhang's method for up to about 200 lines. This is due to the computation of a $120 \times 120$ Macaulay matrix in Mirzaei's method, which acts as a constant-time penalty. However, Zhang's method is slower than Mirzaei's for more than 200 lines. Our method is the fastest no matter how many lines are processed; it is approximately one order of magnitude faster than both competing methods. The computational complexity of our method is linear only due to the prenormalization of the input data and the subsequent SVD of the $2n \times 18$ measurement matrix $\mathbf{M}$; all the other computations are performed in constant time.
\begin{figure}[h] \begin{floatrow} \capbtabbox{ \setlength{\tabcolsep}{4pt} { \fontsize{0.95em}{1.2em}\selectfont \begin{tabular}{lrrrr} \hline\noalign{\smallskip} \#\,lines & 9 & 100 & 1000\\ \noalign{\smallskip} \hline \noalign{\smallskip} Mirzaei & 72.0 & 79.5 & 168.2\\ Zhang & 8.7 & 42.1 & 899.4\\ ours & \textbf{3.2} & \textbf{3.8} & \textbf{28.5}\\ \hline \end{tabular} } \setlength{\tabcolsep}{1.4pt} \vspace{2em} }{ \caption{ Runtimes in milliseconds for varying number of lines, averaged over 1000 runs. } \label{table:time} } \ffigbox[21em]{ \includegraphics[width=\linewidth]{time.eps} \vspace{-2em} }{ \caption{ The distribution of runtimes as a function of the number of lines. Logarithmic vertical axis. The meaning of the boxes is the same as in Figure~\ref{fig:errors}. } \label{fig:time} } \end{floatrow} \vspace{-.5em} \end{figure} \vspace{-1em} \subsection{Robustness to outliers} \vspace{-.5em} As a practical requirement, robustness to outlying correspondences was also tested. The experimental setup was the same as in Section~\ref{subsec:synthetic-lines}, using $n = 500$\,lines whose endpoints were perturbed with slight image noise with $\sigma_{\mathrm{p}} = 2$\,pixels. The image lines simulating outlying correspondences were perturbed with additional extreme noise with $\sigma_{\mathrm{p}} = 100$\,pixels. The methods of Mirzaei and Zhang were plugged into a MLESAC (an improved version of RANSAC)~\cite{torr2000mlesac} framework, generating camera pose hypotheses from 3 and 4 randomly selected line correspondences, respectively. The inlying correspondences were identified based on the line reprojection error~\cite{taylor1995structure}. No heuristic for early hypothesis rejection was utilized, as such a heuristic can also be incorporated into the Algebraic Outlier Rejection scheme, \emph{e.g}\bmvaOneDot by weighting the line correspondences. The proposed method with AOR was set up as described in Section~\ref{subsec:outliers}. While the RANSAC-based approaches can theoretically handle any percentage of outliers, the proposed method with AOR has a break-down point at about 30\,\% of outliers, as depicted in Figure~\ref{fig:outliers}. However, for lower percentages of outliers, our method is more accurate and 5-7$\times$ faster. \begin{figure}[h] \includegraphics[width=0.8\linewidth]{outliers.eps} \caption{ Camera pose errors (\textbf{left}, \textbf{center}) and runtime (\textbf{right}) depending on the percentage of outliers. $n = 500$\,lines, $\sigma_{\mathrm{p}} = 2$\,pixels, averaged over 1000 runs. } \label{fig:outliers} \vspace{-0.5em} \end{figure} The original AOR approach applied to the PnP problem~\cite{ferraz2014very} has a higher break-down point at 45\,\%. We think this might be because the authors need to estimate a null-space vector with only 12 entries, whereas we estimate the 18 entries of $\mathbf{\hat{p}}$ in Eq.~(\ref{eq:noisysystem}). The use of barycentric coordinates for the parameterization of 3D points in~\cite{ferraz2014very} may also play a role. \section{Conclusions} \label{sec:conclusions} \vspace{-0.3em} In this paper, a novel algebraic approach to the Perspective-n-Line problem is proposed. The approach is substantially faster, yet equally accurate and robust, compared to the state-of-the-art. The superior computational efficiency of the proposed method, achieving speed-ups of more than one order of magnitude for high numbers of lines, is demonstrated by simulations and experiments.
As an alternative to the commonly used RANSAC, Algebraic Outlier Rejection is used to deal with mismatched lines. The proposed method requires at least 9 lines, and it is particularly suitable for large-scale and noisy scenarios. For very small noisy scenarios ($\leq 25$ lines), the state-of-the-art performs better, and we recommend using Zhang's method. Future work involves examination of degenerate line configurations. The Matlab code of the proposed method and the appendices are publicly available in the supplementary material~\cite{pribyl2015supplementary}. \vspace{-0.5em} \paragraph{Acknowledgements} This work was supported by the Technology Agency of the Czech Republic by projects TA02030835 D-NOTAM and TE01020415 V3C. It was also supported by the SoMoPro II grant (financial contribution from the EU 7 FP People Programme Marie Curie Actions, REA 291782, and from the South Moravian Region). The content of this article does not reflect the official opinion of the European Union. Responsibility for the information and views expressed therein lies entirely with the authors.
\section{Introduction} The recent discovery of a number of remarkable tails of gas and stars stripped from galaxies in clusters has opened an exciting area of study in the field of cluster galaxy evolution, e.g. \citet{Gavazzi+95, Kenney+99, Gavazzi+01, Yoshida+02, Yoshida+04, Sun+05, Oosterloo+05, Cortese+06, Chung+07, Cortese+07, Sun+07, Kenney+08, Yoshida+08, Smith+10, Yagi+10, Abramson+11, Yoshida+12, Fossati+12, Yagi+13, Jachym+14, Kenney+14, Ebeling+14, Yagi+15, Poggianti+17, Yagi+17, Boselli+18}. Ram pressure stripping (RPS) is an important mechanism by which galaxies, especially those in clusters, evolve. It removes gas, quenches star formation, and drives evolution in environments with a sufficient density of intracluster gas \citep{Gunn+72}. In sufficiently high-mass clusters, such as Coma ($\sim$ 10$^{15}$ M$_{\odot}$), star-forming galaxies can eventually be stripped of even the most strongly gravitationally bound gas at the center, completely quenching star formation \citep{BravoAlfaro+00, Smith+10, Yagi+10}. While many recently observed ``jellyfish galaxies'' have tails seen in ionized gas \citep{Yoshida+12, Yagi+10, Zhang+13, Poggianti+17}, at least some of these tails are composed of a multiphase mixture of gas, some of it hot enough to be seen in the X-ray \citep{Sun+05, Sun+10, Zhang+13, Sanders+13, Sanders+14}, but also containing a significant fraction by mass of molecular gas \citep{Jachym+14, Jachym+17}. Simple physics as well as simulations strongly suggest that ram pressure stripping in clusters is too weak to strip the densest gas directly, except possibly in extreme cases; thus, most of this molecular gas likely cools \textit{in situ} \citep{Tonnesen+09, Tonnesen+10}. Gas cooling, heating, compression, and possibly other factors affect the locations and rates of star formation in tails, and the combination of factors is not well understood \citep{Tonnesen+12}. A number of gas-stripped tails with ongoing star formation within them have been discovered in massive clusters such as Coma; however, stripped tails with active star formation have also been found in lower mass clusters such as Virgo \citep{Hester+10, Fumagalli+11, Yagi+13, Kenney+14}. However, not all strongly stripped galaxies are `jellyfish', i.e. not all have substantial star formation in the tail (e.g. \citet{Boselli+16}). In-depth studies of RPS tails to constrain the quantity of stripped gas, as well as the mass of stars formed that eventually become part of the intra-cluster light (ICL) or fall back into the galaxy to form new thick disk or halo components \citep{Abramson+11}, are key to gaining a fuller picture of star formation in stripped tails. However, the rate and efficiency of star formation in these tails are still a matter of discussion; most findings so far point to a significantly lower efficiency of star formation in extraplanar regions than in the disk of the galaxy. In a sample of Virgo galaxies studied by \citet{Vollmer+12}, the star formation efficiency (SFE) was found to be $\sim$ 3 times lower in the stripped extraplanar gas than in the disk. A similar study using GALEX FUV and H I maps of eight RPS tails in the Virgo cluster found the overall SFE to be $\sim$ 10 times lower in the tails than in the bodies of the host galaxies \citep{Boissier+12}. In \citet{Jachym+14}, the star formation rate (SFR) surface density varied by a factor of $\sim$ 50 along three pointings of the ram pressure stripped tail of ESO 137-001 in Abell 3627, becoming less efficient with distance from the host galaxy.
\citet{Jachym+14} and \citet{Vollmer+12} both used H$\alpha$ emission as a direct proxy for star formation; \citet{Vollmer+12} did, however, also compare with the SFR derived from FUV observations. The connection between H$\alpha$ emission and recent star formation in the disks of galaxies is well known, but in the unique environment of ram pressure stripped tails, subject to heating from shocks and other mechanisms, we cannot assume the same relationship holds. MUSE studies, such as that conducted by \citet{Fossati+16}, showed that most of the H$\alpha$ in the tail of the galaxy ESO 137-001 was not associated with H II regions or star formation. Furthermore, in \citet{Boselli+16}, the authors concluded that the long H$\alpha$ tail of NGC 4569 in the Virgo cluster contained no H II regions or evidence of star formation. This implies that some other ionization mechanism must be at play. It should be noted that some tails have shown good concordance between the expected H$\alpha$ luminosity and the SFR measured by identifying H II regions with line ratios, such as in the jellyfish galaxy JO206 in the IIZW108 cluster \citep{Poggianti+17}. However, even if line ratios are measured and show that the H$\alpha$ is the product of star formation, the H$\alpha$ may yield an underestimate of the true SFR. H II regions in RPS tails may be stripped of some of the gas close to the ionizing stars, and as a result leak more Lyman continuum photons than typical disk H II regions (Kenney, J.D.P., et al. \textit{in prep}). In sum, it is risky to derive SFRs solely through H$\alpha$ luminosities and/or line ratios. The ideal way to study the ages and properties of stars in RPS tails is through direct observations of the stars themselves. With an accurate measurement of the ages and locations of young stars throughout the tail, along with an estimate of the age of the host RPS tail, one can investigate the conditions leading to star formation in the tail interstellar medium (ISM). For example, in \citet{Tonnesen+12}, the authors found that in a simulation of star formation in an RPS tail, it took $\sim$ 200 Myr for the tail gas to cool sufficiently for star formation to take place. However, many RPS tails show star formation occurring near the body of the galaxy, a region that contains recently stripped gas (see Figure 2 of \citet{Jachym+14} for an example), and no clear trend in the age of the stars formed with distance along the tail (e.g. \citet{Cortese+07}). The disparity between simulations and observations suggests that heating and cooling mechanisms in tails are still poorly constrained. Careful investigation of the location and history of star formation in tails is key to resolving this disparity. Studies based on direct observation of star clumps in tails have been conducted, and have found masses ranging from $\sim 10^3-10^6$ M$_{\odot}$ and ages between $\sim$ $1-100$ Myr \citep{Cortese+07, Yagi+13, Boselli+18}. Some studies have claimed older populations, although the amount of dust extinction in tails, uncertainty in the star formation history (burst vs. continuous models, such as in \citet{Yoshida+08}), and the contamination of candidate tail sources by background galaxies along the line of sight can complicate studies of star formation in tails. For example, \citet{Fumagalli+11} estimated the ages of stars in the tail of IC3418 in the Virgo cluster to be up to 1 Gyr.
However, it was shown in \citet{Kenney+14} that some of these measurements were likely overestimates due to contamination from background galaxies, and the true maximum age of stars may be closer to $\sim$ 300 Myr. Especially important for estimating the ages of stars in tails is a UV filter, which captures the young stars and helps differentiate them from background sources in the cluster. With our deep HST observations in multiple bands, including the F275W UV filter, we can estimate the ages and masses of young stars in the tails. We also have the resolving power and depth to differentiate background galaxies from tail sources, and to analyze dust extinction in the most recently stripped part of the tail. We also make use of the resolution of HST to measure the sizes of stellar associations, to estimate whether they are gravitationally bound. If they are not gravitationally bound, star clumps will dissociate over time, falling below detection limits and mixing with the ICL. This possibility has not been studied previously; the fantastic resolution of HST allows us to do so. \begin{figure*} \plotone{D100_subaru.pdf} \caption{On the left, an image of D100 and its companion galaxies, observed in the Subaru telescope $R$-band. On the right, an unsharp mask of the $R$-band image on the left, to highlight substructure.} \label{fig:Subaru} \end{figure*} \subsection{D100} D100, named in \citet{Dressler+80} (also identified as GMP 2910 \citep{Godwin+83}), is an SBab galaxy in the Coma cluster, with a luminosity of 0.3 L$_{\star}$ and an estimated stellar mass of 4 $\times \, 10^9$ M$_\odot$ from the WiscM11 spectral energy distribution (SED) model \citep{WiscM11}\footnote{The mass estimate in the catalogue had to be adjusted for the true distance to Coma, as opposed to the distance estimated from the redshift of D100 alone, which is affected by the intrinsic velocity of the galaxy within the cluster.}. An $R$-band image from the Subaru telescope \citep{Yagi+07} of the galaxy and its neighbors is shown in Figure \ref{fig:Subaru}. The galaxy is at a projected distance of $\sim$ 240 kpc from the center of the Coma cluster. We adopt a luminosity distance to Coma of 100 Mpc, corresponding to a size scale of 1''=0.464 kpc, and a distance modulus of 35.0, using standard cosmological parameters from the WMAP nine-year survey \citep{Hinshaw+13}. D100 has a remarkable tail of gas, 60 kpc long and an extremely narrow 1.5 kpc wide, that streams out from the center of the galaxy; it was first observed in H$\alpha$ with the Subaru Suprime-Cam \citep{Yagi+07} and is seen in Figure \ref{fig:HST_Ha}. This galaxy was also observed with the HST WFPC2 by \citet{Caldwell+99}, who noted strong ongoing star formation, observed with ground-based spectroscopy, in the central 2'' ($\sim$ 1 kpc) of the galaxy, but a $\sim$ 0.25 Gyr post-starburst spectrum at a radius of $\sim$ 3''. We note that \citet{Caldwell+99} did not have HST UV data, and their optical HST data were shallower and of poorer resolution than our new data. The authors pointed out the ``unusual morphology'' of the prominent dust on the northern side of D100, perhaps a hint of the tail that would be discovered in the later H$\alpha$ observations by \citet{Yagi+07}. The tail and body of the galaxy were observed in the UV with GALEX and CFHT MegaCam by \citet{Smith+10}. They find faint $u$-band emission along the length of the tail with some more concentrated knots, suggesting young stars.
However, the 3727\AA\ [OII] doublet falls in the CFHT $u$-band, making it difficult to establish a direct correlation between $u$-band emission and the presence of young stars, due to the possible contribution of warm gas to the spectrum. Deeper optical imaging of the tail is necessary to investigate the presence of stars. The components of the gaseous tail of D100, as well as some properties of the tail that resulted from ram pressure, were investigated in detail in \citet{Jachym+17}. Along with hot gas detected in the soft X-ray, large amounts of molecular gas were found along the length of the tail, identified via observations with the IRAM 30m telescope in CO(2-1) and CO(1-0) in four different pointings with a 30'' beam. The total mass of molecular gas (H$_2$) in the tail is estimated to be $\sim$ 1 $\times$ 10$^9$ M$_\odot$, assuming a standard value for the relation between CO emission and H$_2$ mass. The tail was undetected in HI to a 3$\sigma$ limit of $\sim 0.5 \times 10^8$ M$_\odot$, suggesting that the tail is dominated by cold molecular gas. \citet{Jachym+17} note that while they find abundant molecular gas, they are only able to state that star formation in the tail \textit{may} be present, due to the inability to distinguish between H$\alpha$ arising from photoionization by hot stars and that excited by some other mechanism. \subsection{Outline} In Section 2 of this paper, we present the details of our HST observations and the data reduction scheme employed. In Section 3 we describe the HST images of the three galaxies in the field, noting features and evidence for ram pressure stripping, as well as outside-in quenching. We analyze the morphology of the RPS tail of the galaxy through investigation of the dust in the tail. We also constrain the SFR and SFE of the tail, as well as the characteristics of the star clumps in the tail. Finally, in Section 4, we discuss the astrophysical effects contributing to the properties of the RPS tail catalogued in Section 3. In Section 5 we summarize our results. \section{Observations} D100 was observed as part of our HST program 14361 (PI: Sun), targeting the region near the center of the Coma cluster with the HST ACS instrument in F475W for $1440$ seconds and F814W for 674 seconds, as well as the WFC3 instrument in F275W for 2583 seconds, in May-July of 2016. The image was centered such that we were able to see the full extent of the H$\alpha$ tail identified in \citet{Yagi+07}. Part of the same field of view, including the body of D100 and its tail, was covered in parallel observations in late 2017 and early 2018 in HST program 14182 (PI: Puzia), adding 2396s of F814W time and 4454s of F475W time. While D99 and GMP 2913 were covered only by our original data, D100 and the tail were covered for a total of 5894s in F475W and 3070s in F814W. The ACS data in two high-throughput filters allow us to detect faint optical features, as well as make a detailed color map of the galaxy. The F275W filter allows us to trace the light from young stars in order to identify recent star formation in the body and tail of the galaxy. Combined, the three filters allow us to constrain the stellar ages and masses of star clumps found in the tail, as well as the star formation history of the disk. The H$\alpha$ data we used have been corrected for the velocity gradient of the tail, over-subtraction in the $R$-band, and the presence of [NII] and [SII] lines in the filter, with the assumption that [NII]/H$\alpha$=0.66 and [SII](6717\AA+6731\AA)/H$\alpha$=0.66. The correction method was the same as that described in \citet{Yagi+17}.
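To illustrate the scale of the line correction (a simplified sketch only, assuming the three species contribute to the continuum-subtracted narrowband flux $F_{\mathrm{NB}}$ with exactly these ratios and equal filter transmission; the actual procedure in \citet{Yagi+17} is more involved), the assumed ratios imply $$F_{\mathrm{H}\alpha} \approx \frac{F_{\mathrm{NB}}}{1 + 0.66 + 0.66} \approx \frac{F_{\mathrm{NB}}}{2.32}.$$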
Our observations reached a limiting surface brightness, estimated by taking the value of three times the background standard deviation, of 29.1 mags in F275W, 30.0 mags in F814W, and 30.9 mags in F475W. Magnitudes listed throughout this paper are given in AB mags. Milky Way extinction is minimal in the direction of Coma; using the \citet{Schlafly+11} estimates for the SDSS filters that roughly correspond to ours, the extinction is 0.035 mags in the $u$-band (F275W), 0.027 mags in the $g$-band (F475W), and 0.019 mags in the $r$-band (F814W). \subsection{Data Reduction} To combine the data from our original proposal with the new data from the parallel observations, we used both the HST pipeline tools with DrizzlePac and the Astromatic tool SWarp \citep{SWarp}. We first aligned the data from the different fields of view with DrizzlePac, and removed some cosmic rays from fields covered by multiple observations. After alignment, the images were drizzled, a process standard to HST imaging by which images are combined while preserving photometry and resolution, weighted by the statistical significance of each pixel, and corrected for geometric distortions. The data were also corrected for charge transfer effects using DrizzlePac, then combined using SWarp. Because the F275W data had a smaller pixel scale than the F475W and F814W observations, the F475W and F814W images had to be re-gridded to the pixel scale of the F275W so that the images could be compared on a pixel-by-pixel basis. This was done using the Lanczos-3 6x6-tap filter interpolation scheme in SWarp, the recommended method for flux-conserving interpolation. In areas not covered by multiple observations, such as the regions around D99 and GMP 2913, obvious cosmic rays were still visible in the image, even after the automatic HST pipeline was run. As only two pointings were taken in each filter, and each was slightly offset from the other, there is a detector gap in the WFC3 and ACS imagers that is only covered once. As such, the standard HST pipeline to remove cosmic rays could not be used here, and a custom scheme needed to be employed. Sources were identified with the Astromatic tool SExtractor \citep{Sextractor+96} in each filter, and then compared between the filters. If a source was found in only one filter at higher than 12$\sigma$ significance (a typical threshold for the peak of a cosmic ray in HST data), it was labeled a cosmic ray and cleaned from the image. If the source was detected in more than one filter, it was not removed. The resulting reduced images are shown in Figures \ref{fig:HST_Ha} and \ref{fig:sidebyside}. \begin{figure*} \plotone{HST_Ha.jpeg} \caption{An HST false color image with the F814W filter in red, the F475W filter in blue, and an average of the F475W and F814W in green. The F275W was not used for this image, as the signal-to-noise in the F275W was much lower than in the other filters. Overlaid in bright red is H$\alpha$ data from the ground based Subaru Suprime-Cam, first published in \citet{Yagi+07}. The image was provided to us by the STScI imaging team led by J. DePasquale.} \label{fig:HST_Ha} \end{figure*} \section{Analysis} \subsection{Main Body of D100} D100 appears to show a classical grand design spiral structure, with two prominent arms. Through isophotal analysis of the F814W data, shown in Figure \ref{fig:isophote}, we have found evidence for a bar-like structure in D100, stretching across a diameter of about 2.6'', or 1.2 kpc.
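As an illustration, the sketch below shows how such an isophotal analysis is typically done with the photutils package; the file name, image center, and initial geometry are placeholders, not the exact parameters of our analysis.

\begin{verbatim}
# Sketch: fit elliptical isophotes to an F814W image and look for the
# fixed-position-angle signature of a bar.
import numpy as np
from astropy.io import fits
from photutils.isophote import Ellipse, EllipseGeometry

data = fits.getdata("d100_f814w.fits")  # placeholder file name

# Initial guess: center (pixels), semi-major axis, ellipticity, PA (rad)
geometry = EllipseGeometry(x0=512.0, y0=512.0, sma=10.0, eps=0.2,
                           pa=30.0 * np.pi / 180.0)
isolist = Ellipse(data, geometry).fit_image()

# A bar shows up as a run of inner isophotes with nearly constant PA,
# before the PA starts twisting to follow the spiral arms.
for iso in isolist:
    print("sma=%6.1f px  PA=%6.1f deg  eps=%.2f"
          % (iso.sma, np.degrees(iso.pa), iso.eps))
\end{verbatim}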
The central region of the galaxy appears relatively undisturbed. It is unlikely that D100 is interacting with its neighbor to the east, D99, as the large line-of-sight velocity difference (4500 km s$^{-1}$ greater than that of D100) makes interaction improbable. However, it appears likely that D100 has had some interaction with the dwarf galaxy GMP 2913 to the south. This galaxy is roughly three magnitudes fainter than D100 and D99 in the F814W. At a distance of 100 Mpc, we measure its absolute magnitude as $-16.65$ in F814W, and it has a velocity difference with D100 of only 132 km s$^{-1}$, based on its redshift from \citet{Yagi+07}. The two galaxies show evidence of interaction, as the southwest outskirts of the disk of D100 show a clear disturbance in the deep Subaru $R$-band image (Figure \ref{fig:Subaru}) and the HST image (Figure \ref{fig:sidebyside}), as well as irregular isophotes in the HST F814W data (Figure \ref{fig:isophote}). The tidal disturbance appears to be a weak interaction; the dwarf morphology is a bit irregular, and the disruption in D100 appears limited to an elongation of the southern edge of the galaxy at $r \sim$ 6'' $\sim$ 3 kpc. Simulations from \citet{Toomre+72}, in which the early stages of tidal interaction are limited to a slight elongation of one side of the galaxy, suggest that the interaction between D100 and GMP 2913 may have begun only recently. Thus, we do not believe the tidal interaction should have had a direct impact on the ram pressure stripping of the galaxy. It is possible, however, that it altered the distribution of gas in D100 (if this gas had not already been stripped), driving some gas inward and making this concentrated gas harder to strip. \begin{figure*} \plotone{sidebyside.jpeg} \caption{The images, in the three HST bands, of D100, its neighboring anemic spiral galaxy to the left, D99, and the dwarf galaxy GMP 2913 to the south. At the bottom-right is a false color image made from combining F275W (blue), F475W (green), and F814W (red).} \label{fig:sidebyside} \end{figure*} Information on the star formation history of the galaxy can be gleaned from simply inspecting the high-resolution HST three color image, shown in Figure \ref{fig:colorstreams}. The outer disk appears to have been stripped of all its ISM by the ram pressure, as the only clear dust extinction in D100 is visible in the central $r \sim 1$''. This is also where the F275W emission appears most concentrated, in several pockets within $r$ $\sim$1'' of the nucleus, along the spiral arms. However, the outer parts of these arms have a redder color, indicating a stellar population older than the innermost parts of the spiral arms, indicative of outside-in quenching of star formation (see Section 3.4 for further discussion). Other than this central region, the only other noticeable star formation in the galaxy is visible in the region 3'' northeast of the nucleus, in the dust tail. This bright F275W source (see Figure \ref{fig:colorstreams}) near the outskirts of the galaxy appears to be a very blue young stellar complex within the dust tail. \begin{figure*} \plotone{bar_test.pdf} \caption{An isophotal contour plot of the F814W filter image of D100, shown on the left, and zoomed in on the central region on the right. The contour levels vary from 22.0 to 16.9 mag/arcsec$^2$ by increments of 0.37 mag/arcsec$^2$. The position angle of the isophotes appears to be relatively fixed in the inner part of the galaxy, bounded by the two orange marks, suggesting a bar-like structure.
Beyond this, the isophotes follow the structure of the spiral arms out a few contours, then eventually become more consistently elliptical.} \label{fig:isophote} \end{figure*} \subsection{Snaky stellar streams} An interesting feature in the outskirts of the galaxy, faintly visible in the color image (Figure \ref{fig:colorstreams}), is the set of deviations in the stellar morphology from the prevailing grand design spiral structure, located at $r$ $\sim$ 4.5'' from the nucleus in the north, and $r$ $\sim$ 6'' in the southeast. Thin (width of $\sim$ 0.25'') and long (length of $\sim$ 2.5'') obliquely curved distributions of stars are visible at the outskirts of the galaxy. They are more visible in the green than the red (F475W-F814W=0.5), suggesting they are relatively young distributions when compared with the surrounding disk (F475W-F814W=1.0). In an attempt to render these thin streams of stars more visible, we also generated an unsharp mask image of the galaxy (Figure \ref{fig:unsharp}). The origin of these streams is as yet unknown, as is whether their formation is related to ram pressure stripping or tidal effects. One possibility is that they may be an abundance of stars formed in the ISM compressed by the galaxy rotating into the ram pressure front, such as the features seen in NGC 4921 \citep{Kenney+15}. Another possibility is that these are re-accreted stellar clumps, originally formed in the inner tail, that did not reach the escape velocity and fell back onto the galaxy, forming stellar streams that occupy a thick disk or halo \citep{Abramson+11}. Such re-accretion of stars formed in ram pressure stripped tails has been found in simulations \citep{Tonnesen+12}. \begin{figure} \plotone{HST_D100_only.pdf} \caption{A false color HST image with the three filters from our study, F275W in dark blue, F475W in lighter blue, and F814W in red. The most obvious region of extraplanar star formation in the tail (Source 1) is 3'' northeast of the galaxy nucleus. Image provided by STScI imaging team led by J. DePasquale.} \label{fig:colorstreams} \end{figure} \begin{figure} \plotone{unsharp_475.pdf} \caption{An unsharp-mask HST image of D100, in the region shown in Figure \ref{fig:colorstreams}, with the F475W filter image. Marked in white are thin stellar streams which appear distinct from the normal spiral structure, and which could be the result of ram pressure.} \label{fig:unsharp} \end{figure} \subsection{Dust Extinction in Main Body \& Tail} The region of strongest extinction is the circumnuclear region of the galaxy, shown by \citet{Caldwell+99} to be a starburst region, based on optical spectroscopy. The strong extinction seems to have a well-defined extent, measuring $\sim$ 2'', or $\sim$ 0.93 kpc, in diameter through the nucleus, along the major axis. The visible dust tail extends outward toward the NE at PA = 178$^\circ$ with respect to the major axis, from the center of D100 to a distance of at least 6.5'' (Figure \ref{fig:dustpoints}). It is clear that this dust lies above the disk plane, and thus is an extraplanar feature. The spiral structure shows that the galaxy rotates clockwise, and the N side of the galaxy has been shown to be approaching \citep{Jachym+17}; thus the eastern side of the disk is the far side. Dust in the galaxy near the disk plane on the far side of the disk does not create strong extinction, so strong dust extinction viewed toward the far side of the disk must originate from extraplanar dust closer to us than the disk midplane.
The extinction in the tail has a well-defined outer envelope, with remarkably smooth and straight edges. In Figure \ref{fig:dustpoints}, we plot the width of the tail, measured by dust extinction, versus distance along the minor axis. The width of the tail ranges from 2'' to 3'' at a distance of 2.5'' along the minor axis. The width increases by $\sim$ 50\% within 1'' along the minor axis, then widens less, increasing only 20\% out as far as 3'' on the southern edge, and 4'' on the northern edge. We note, however, that our measure of the initial tail broadening around the center of the galaxy may be biased by the fact that dust remaining in the disk will have weaker extinction. Thus, while the dust extinction in the tail is seen much more strongly, as it is in front of the disk, dust in the disk of the galaxy near the circumnuclear region may still be present. Outside the circumnuclear region (the inner $r \sim$ 1'' of the galaxy), the dust remarkably tracks the outskirts of the H$\alpha$ tail, as seen in Figure \ref{fig:dustpoints}. However, since the resolution of the H$\alpha$ image is much poorer than that of the HST images (0.75'' vs 0.11''), there may be more dust at the edges of the tail, while H$\alpha$ emission concentrated more near the center of the tail is artificially smoothed outward to the edges. Interpreting the width of the H$\alpha$ tail in the circumnuclear region of the galaxy is also difficult, as separating the H$\alpha$ emission from continued star formation in the disk from the emission associated with gas excited in the tail is not possible. Furthermore, the H$\alpha$ image has imperfect continuum subtraction because of the dependence on the continuum color \citep{Yagi+10, Spector+12}. The effect is large where the continuum is bright\footnote{For example, there is some excess emission in the narrowband image around the center of D99. However, a spectrum from the eBOSS survey \citep{Dawson+14} as part of SDSS DR14 \citep{Abolfathi+17} shows that the H$\alpha$ line is not detected in this galaxy. This indicates the apparent emission seen in the narrowband image is not H$\alpha$, but residual broadband continuum.}. There is more extinction to the N of the minor axis than to the S, so the morphology is clearer to the N. The N side of the dust tail has a well-defined, fairly smooth and straight outer envelope, which extends relatively unbroken for at least 4.4''= 2.1 kpc. A band of strong extinction at the edge has a uniform width of 0.5''= 237 pc. In places it appears as two parallel lanes, i.e., doubled, with variable extinction in between and along the lanes. There is less extinction on the S edge of the dust tail, so its morphology is not as clear, but it seems to have a well-defined and straight outer edge. In between the N tail edge and the minor axis is a more irregular distribution of strong extinction. The smooth and straight tail edges, the linear substructure of some dust features, and the small amount of tail broadening are remarkable and unexpected from simple stripping models. We discuss the implications of this further in Section 4. \begin{figure*} \plottwo{dust_tail_visual_c.pdf}{dust_tail_c.pdf} \caption{On the left, F814W contours of D100 are shown in black; the contour levels vary from 18.9 to 20.7 mag/arcsec$^2$ by increments of 0.36 mag/arcsec$^2$. H$\alpha$ contours are shown in red; the contour levels vary from 12.5 to 800 $\times \, 10^{-18}$ erg s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$, increasing by a factor of two for each contour level.
It should be noted that the H$\alpha$ contours are unreliable in the central 1'' due to imperfect continuum subtraction. Outside of the central 1'' there is good agreement between the edge of the visible dust and the H$\alpha$. The line is drawn from the center of the galaxy in the direction of the H$\alpha$ tail, parallel to the N filament of dust. On the right, the distance from the tail line drawn on the left to the points at the edges of the dust tail, for both the top and bottom of the tail. The asterisks mark the southern visible edge of the tail in extinction, and the plus signs the northern visible edge. Past the furthermost points from the nucleus, the surface brightness of the galaxy is too low to visibly detect any dust extinction from the surroundings. It can be seen that the tail broadens rapidly near the circumnuclear region, but maintains a near uniform width further out.} \label{fig:dustpoints} \end{figure*} \subsection{Evidence for Ram Pressure Stripping in D100 and its neighbors D99 and GMP 2913} Here we discuss constraints on the star formation histories of D100 and its two nearest apparent neighbors, D99 and GMP 2913, and evidence on ram pressure stripping and tidal interactions for the galaxies. There is evidence from the HST data that all three galaxies have experienced ram pressure stripping. Outside-in quenching as a result of ram pressure stripping in clusters is a well-known prediction from basic physics, and from simulations \citep{Boselli+06, Kapferer+09, Tonnesen+10}, but high resolution studies of the timescale of the radial progression of quenching are a recent and active area of study (e.g. \citet{Pappalardo+10, Abramson+11, Merluzzi+16, Fossati+18}). \subsubsection{D100} Our high resolution HST data allow us to quantitatively measure the radial gradient of star formation quenching. We select various iso-color regions of the three galaxies at varying radii. The selected regions are shown in Figure \ref{fig:starburst_regions}. In order to analyze the observed colors of several regions of the three galaxies, we make use of the Starburst99 stellar population models \citep{Leitherer+99, Vasquez+05}. Our input parameters for this stellar population assume a standard Kroupa initial mass function (IMF) and utilize Padova isochrones \citep{Bressan+93, Fagotto+94a, Fagotto+94b, Girardi+00}, with added thermally pulsating AGB stars. Both D100 and D99 have stellar masses near $\sim$ 4 $\times \, 10^9$ M$_\odot$, and thus are expected to have metallicities of 12 + log(O/H) $\sim$ 8.7, about solar \citep{Sanchez+17}. The dwarf galaxy GMP 2913 has a much smaller mass (1.8 $\times 10^8$ M$_\odot$), and thus, based on the mass-metallicity relation \citep{Tremonti+04}, we use a 0.2 solar metallicity model for that galaxy. We use the output from Starburst99 to construct the SED of a stellar population with a truncated star formation history. This history assumes star formation started 12 Gyr ago, proceeded at a constant rate, then was abruptly truncated (Figure \ref{fig:burstmodel}, dotted line). Additionally, we construct an SED for a truncated star formation history in which star formation momentarily increased at the time of truncation, such that 2\% of the stellar mass formed at the time of truncation -- an increase in SFR by a factor of $\sim$ 25 for a timestep of 10 Myr (Figure \ref{fig:burstmodel}, dash-dotted line). Using the SED outputs from these models, we extracted broadband colors for the F275W, F475W, and F814W bands, which we compare with our selected regions.
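For concreteness, the factor of $\sim$ 25 follows directly from the assumed history: the constant-rate model forms the stellar mass $M_*$ over 12 Gyr, so forming $0.02\,M_*$ in a 10 Myr timestep implies $$\frac{\mathrm{SFR_{burst}}}{\mathrm{SFR_{const}}}=\frac{0.02\,M_*/10\,\mathrm{Myr}}{M_*/12\,\mathrm{Gyr}}=0.02\times\frac{12\,\mathrm{Gyr}}{10\,\mathrm{Myr}}=24\approx25.$$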
In D100, the radial extent of the dust extinction and the H$\alpha$ emission indicates that ongoing star formation is confined to the central $r \sim 0.8$''. Beyond this radius, there seems to be no ongoing SF and no dust, and here we use the HST colors to estimate quenching times. Our data show a clear radial color gradient from within $r \sim 0.8$'' out to between $3$''$ - \, 5$'', evidence of outside-in quenching. The quenching times we derive from this color gradient are shown in Figure \ref{fig:quenching_gradient}. From our color measurements alone we cannot break the degeneracy between burst strength and quenching time. Some of the colors could be fit with either a simple truncation model, or with a truncation plus burst model with an older quenching time. However, we can break this degeneracy using the spectroscopy results of \citet{Caldwell+99} for the region outside $r$ $\sim$ 3'', for which they found a post-starburst population with a quenching time of $\sim$ 250 Myr. We find that a model with a burst strength of 2\% results in a quenching time of $280 \pm 20$ Myr in this region, so we adopt this burst strength as our preferred model for all regions. However, we also calculate quenching times for different burst strengths (0.4\% and 5\%), to provide an estimate of the systematic uncertainty in quenching times. Within $r=0.8$'', the two regions tested are so blue they require a starburst, and cannot be explained with the simple truncation model. For $r=1.15$''$ - \, 3$'', we derive a quenching time of $\sim 145^{+35}_{-110}$ Myr based on our 2\% burst model. The radial quenching profile of star formation in D100 shows an interesting correspondence with the tail width and age, which is discussed further in Section 4. \begin{figure*} \plotone{regions2_labels.pdf} \caption{Regions for color analysis; the background is the F475W image. For the regions in D100, the sections of the galaxy with obvious dust extinction near the tail, and around the circumnuclear region, are excluded from the elliptical apertures. Inner regions selected for their especially strong F275W emission are labeled with numbers that correspond to their labeling in the model plot in Figure \ref{fig:burstmodel}.} \label{fig:starburst_regions} \end{figure*} \begin{figure*} \plotone{ssp_twocolor_burst.pdf} \caption{A color-color plot of iso-color regions of the three galaxies in this study. The lines show stellar population models generated with Starburst99. The black lines show models for solar metallicity; they also have corresponding age labels. The black, dot-dashed line shows a truncation model with a 2\% burst, while the black, dotted line shows a simple truncation model. The green line shows a 2\% burst model with a metallicity of 0.2 solar abundance, to match that expected for the dwarf galaxy, GMP 2913. There is only one annulus drawn for the dwarf galaxy because it has uniform colors throughout. While we show for reference an extinction vector (for an arbitrary $A_{F475W}=0.5$ mags), note that we do not believe any of these regions contain significant dust, as the obviously dusty portion of D100 was excluded, and D99 \& GMP 2913 have likely been stripped of all their ISM.} \label{fig:burstmodel} \end{figure*} \begin{figure*} \plotone{quenching_timescale.pdf} \caption{Quenching time versus radius for regions in D100 and D99 marked in Figure \ref{fig:burstmodel}. The cross-hatched boxes show the estimate of the quenching time based on our SB99 model with a 2\% burst.
The solid black boxes are the smaller regions in D100, marked as 1, 2 \& 3 in Figure \ref{fig:starburst_regions}, with the quenching time also estimated from the 2\% burst model. The shaded boxes show the upper and lower limits on the quenching time based on the 5\% burst and the simple truncation model (the simple truncation model does not extend to the bluest measured region, region 1, so here the lower bound is the 0.4\% burst model). In the lower plot, the vertical dashed line shows the estimated stripping radius of D100, within which star formation is still ongoing. Both galaxies show clear evidence for outside-in quenching.} \label{fig:quenching_gradient} \end{figure*} \subsubsection{D99} The nearby (on the sky) galaxy D99 (also called GMP 2897) is located 0.29 arcminutes SE of D100 (Figure \ref{fig:sidebyside}). D99 is nearly face-on, and similar in radial size to D100, with an absolute magnitude of $M_{F814W}=-18.97$, compared to the absolute magnitude of D100, $M_{F814W}=-19.65$. Its stellar mass is also quite similar, 4.7 $\times 10^9$ M$_\odot$ from the WiscM11 SED model \citep{WiscM11}. As noted previously, it is very unlikely to be physically associated with D100, as it differs in radial velocity by $\sim$ 4500 km s$^{-1}$. The velocity of D99 ($\sim$ 9900 km s$^{-1}$) at a projected distance of 240 kpc is on the extreme end of bound Coma galaxies, but theoretical predictions for bound cluster members presented in \citet{Kent+82} and a more recent empirical sample in \citet{Kadowaki+17} both support D99 being a bound cluster member. We present an isophotal analysis and unsharp mask image in Figure \ref{fig:D99_structure}, which highlights some similarities with D100. We hypothesize that D99 is a galaxy similar to D100, at a later evolutionary stage of the same ram pressure stripping process. HST light profiles and the unsharp mask image reveal a bar/lens in the center and two faint spiral arms, as well as a bulge + disk radial light profile. These morphological characteristics, along with the lack of H$\alpha$ emission in both the Subaru imaging data and SDSS spectroscopy, strongly suggest that D99 is a stripped spiral galaxy, not yet converted fully to a ``red and dead'' lenticular galaxy via RPS. Our stellar population models in Figure \ref{fig:burstmodel} cannot distinguish between a simple truncation and a truncation with a burst. However, spectroscopy by \citet{Caldwell+99} suggests a $\sim$ 1 Gyr old post-starburst population beyond $r=1$''; this age agrees with both our 2\% and 5\% burst models. We then find that the color gradient in D99 corresponds to a quenching time (using the 2\% model, with the lower bound from the simple truncation model and the upper bound from the 5\% burst model) of $\sim 280^{+100}_{-135}$ Myr at $r$ $<$ 0.2'', $\sim 500^{+200}_{-200}$ Myr at $r = 0.2$''$ - \, 1.0$'', $\sim 900^{+300}_{-450}$ Myr at $r=1.0$''$ - \, 4.0$'', and $\sim 1150^{+350}_{-550}$ Myr at $r = 4.0$''$ - \, 6.0$''. These data are plotted in Figure \ref{fig:quenching_gradient}, and support a trend of outside-in quenching consistent with RPS. Furthermore, the large difference in quenching time between the central region of D99 and the outskirts suggests that it may take several hundred Myr more to quench the central regions of D100 completely, if stripping proceeds at a similar rate.
Both the morphology and the outside-in radial quenching gradient strongly suggest that D99 is very much like D100, but caught at a later evolutionary stage, $\sim 280^{+100}_{-135}$ Myr after the last nuclear gas was stripped. \begin{figure*} \plotone{D99.pdf} \caption{On the left, an isophotal contour plot of D99 from our HST F814W data. The contour levels vary from 21.4 to 16.3 mag/arcsec$^2$ by increments of 0.36 mag/arcsec$^2$. On the right, an unsharp-mask image to highlight sub-structure, generated from the Subaru $R$-band image of D99. Both figures show a faint bar or lens, with the unsharp masked image revealing two spiral arms. This suggests that D99 is a fully stripped spiral.} \label{fig:D99_structure} \end{figure*} \subsubsection{GMP 2913} GMP 2913 is a dwarf irregular galaxy, with an absolute magnitude of $M_{F814W}=-16.65$, about 3 magnitudes fainter than D99 and D100. Assuming a stellar M/L ratio of 1, the stellar mass of this galaxy is 1.8 $\times 10^8$ M$_\odot$, only $\sim$ 4\% the mass of D100. This corresponds to an expected metallicity around 0.2 solar \citep{Sanchez+17}. It is located 0.33' = 9.37 kpc from D100 on the sky, and has a velocity within $\sim$ 200 km s$^{-1}$ of D100 \citep{Yagi+07}; thus it is possibly gravitationally bound to D100, although the velocity in the plane of the sky is unknown. There is evidence that GMP 2913 is interacting, or has recently interacted, gravitationally with D100. Its irregular structure, and the irregular southern side of the disk of D100, suggest a tidal interaction. It has quite uniform colors, with F475W-F814W $\sim$ 0.5 throughout, but very low level, diffuse F275W, and no visible dust extinction. In the spectrum obtained by FOCAS/Subaru \citep{Yagi+07}, there is no H$\alpha$ emission, and there is clear H$\beta$ and H$\alpha$ absorption. Thus, there is no ongoing star formation; however, since no shorter wavelength information is available to search for strong H$\delta$ absorption, it is unclear if GMP 2913 is also a post-starburst galaxy. With a 2\% burst model we estimate its quenching time is $\sim 300^{+100}_{-150}$ Myr, with a significant uncertainty, as it does not fall very close to the model colors in Figure \ref{fig:burstmodel}. If GMP 2913 and D100 are indeed bound to each other, they are likely orbiting together through the cluster, and have experienced RPS at about the same time. It is likely that all the gas was stripped out of GMP 2913 first and more rapidly, as it is a lower mass galaxy than D100, with a weaker gravitational potential. Rapid stripping of the entire galaxy would explain the uniform color of the dwarf galaxy. \subsection{Stellar sources in the tail} \subsubsection{Estimating ages and masses of stellar clumps} The ram pressure stripped tail of D100 has been shown to contain large amounts of molecular gas \citep{Jachym+17}, required for the formation of stars, accompanying the prominent H$\alpha$ emission. However, it is not known how much of this H$\alpha$ emission is due to young stars. In order to quantify the amount of star formation in the tail of D100, we made a catalogue of sources in the tail region. We searched for sources within a rectangular region, with the length set as the extent of the H$\alpha$ tail, $\sim$ 105'', and the width set as the length of the apparent optical disk of D100 ($\sim$ 20'').
We set the search area as the width of the optical disk of the galaxy so that we could find evidence of both stars in the current gas tail, and any stars that may have formed when the outskirts of the galaxy were stripped. We objectively identified surface brightness peaks that could be stellar sources in the tail of D100 using Source Extractor \citep{Sextractor+96}, operating on the F475W band, as it had the highest S/N. Our detection criteria were set to find sources with four contiguous pixels above 3$\sigma$ significance. Then, utilizing the dual-imaging mode of Source Extractor, the apertures from the F475W band were applied to the F814W and F275W bands. The list of sources was refined by removing obvious background galaxies, identified by their morphology (such as clear spiral arms or exponential disks). However, interlopers that were more poorly resolved may still remain intermixed with stellar sources in the tail. The locations of the resulting detections are shown in Figure \ref{fig:rectangle}; they comprise a total of 37 identified sources, with about half found in the H$\alpha$ tail, and the other half outside. All sources are detected in F814W; however, only ten sources are detected in the F275W filter, and all ten were found within the extent of the H$\alpha$ tail. Sources which did not have an F275W detection were assigned an upper limit value. This value was based on the dimmest visible object in the F275W image, with $m_{F275W} = 28.3$, about 2$\sigma$ above the background. \begin{figure*} \plotone{New_manual_annotate.pdf} \caption{Sources in the tail region of D100 over-plotted on a greyscale F475W image. In red are contours from a smoothed H$\alpha$ image; the contour levels vary from 4.8 to 160,000 $\times \, 10^{-18}$ erg s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$, increasing by a factor of two between contour levels. The green rectangle shows the search area. Sources are numbered, with those shown in blue having positive F275W detections.} \label{fig:rectangle} \end{figure*} Images of all F275W detected sources, and a selection of sources of varying morphologies with no F275W emission, are shown in Figures \ref{fig:numberedsources} \& \ref{fig:stamps}. Figure \ref{fig:numberedsources} shows the brightest source, source 1, the only one detected with enough sensitivity to show clear substructure in all bands. \begin{figure*} \plotone{source_37_only.pdf} \caption{Images of source 1, located at the base of the tail. From left to right: HST F275W, F475W, F814W, and F475W+F814W with H$\alpha$ in red from ground-based Subaru observations \citep{Yagi+10} overlaid. Distinct features of the star clump can be seen, such as two quasi-arms of stars, with a concentration seen in the F275W at the easternmost tip, the brightest point outside the center of the region. It should be noted that in the F814W image there is significant background emission from the disk of the galaxy, while in the F275W image there is no measurable background, as stars in the disk here are much older than the star forming complex.} \label{fig:numberedsources} \end{figure*} \begin{figure*} \plotone{stamps_abbreviated.pdf} \caption{On the left, 2 arcsecond cutouts of all F275W detected sources in, from left to right, F275W, F475W, F814W, and F475W+F814W with H$\alpha$. On the right are some examples of sources with no F275W detections that are likely background sources. All of these sources are also shown labeled in Figure \ref{fig:rectangle}.
The stamps are centered on the source that is numbered.} \label{fig:stamps} \end{figure*} We plot the sources in a color-color (F275W-F475W vs F475W-F814W) and a color-magnitude diagram (F475W vs F275W-F475W) in Figures \ref{fig:tracks_labeled} \& \ref{fig:colormag1}. All sources with a positive F275W detection are found inside the H$\alpha$ tail, and none are found outside. Given that the ratio of the area of the tail to the area of the rectangle is 0.23, the likelihood that all ten F275W sources would randomly fall in the tail is $0.23^{10} \approx 4\times10^{-7}$. Thus, we are confident that the F275W detections correspond to young stellar complexes associated with the tail of D100, and are not extraneous background sources. If they were background sources, they should be randomly distributed within the search area, and not preferentially located in the tail. While we earlier used Starburst99 to model an extended star formation history with an abrupt truncation, in Figure \ref{fig:tracks_labeled} we simply use the single stellar population (SSP) results from Starburst99 to estimate the ages and masses of each stellar clump, based on their color and magnitude. Given their small sizes and masses, and their relative isolation, it is likely that the stars in each clump formed in a single star forming event. Our models include the expected effects of emission from nebular lines. Strong nebular lines are only expected to fall in the F475W band, and have significant effects for sources with H II regions, i.e. those younger than 10 Myr. We used the output H$\beta$ flux from the Starburst99 models, and also included the expected contribution from H$\delta$, H$\gamma$, and [OIII], using predicted line ratios from the SVO Filter Profile Service\footnote{The SVO Filter Profile Service. Rodrigo, C., Solano, E., Bayo, A. \hfill \break http://ivoa.net/documents/Notes/SVOFPS/index.html}$^,$\footnote{The Filter Profile Service Access Protocol. Rodrigo, C., Solano, E. \hfill \break http://ivoa.net/documents/Notes/SVOFPSDAL/index.html}. The sources detected in all bands are shown in the color-color diagram (Figure \ref{fig:tracks_labeled}) with numerical labels. The tracks fit the observations fairly well, although there are a few sources redder in both colors than theoretically predicted. We expect there to be dust extinction in the tail, due to the prevalence of molecular gas throughout the tail, and the dust seen permeating the tail near the galaxy. For the source furthest from the tracks, source 11, to fall on the track following the slope of the extinction vector shown on the plot, 0.6 magnitudes of extinction in the F475W filter would be required, according to extinction curves for HST filters from \citet{Dong+14}. In \citet{Poggianti+19}, the authors found that the average extinction from the Balmer decrement in a sample of 16 ram pressure stripped tails was $\sim$ $A_V = 0.5$. Thus, our models are within a plausible amount of extinction, and we assume the offsets from the model tracks are due solely to the error bars and extinction; we derive age estimates accordingly. For sources not detected in the F275W filter and not coincident with the H$\alpha$ tail (also shown in Figure \ref{fig:tracks_labeled}), we expect the reason they do not fall on the tracks is that they are not well modeled by SSPs. These are background sources, likely galaxies behind Coma, and thus have complex star formation histories and/or significant redshifts.
Some sources undetected in F275W and coincident with the gas tail may indeed be tail stellar sources, as they fall near the model tracks. Limit sources in the tail (those with only F275W upper limits) tend to be dimmer in F475W than F275W detected tail sources, as indicated in Figure \ref{fig:colormag1}, and thus may be either too old, or too low mass, to be detected in the F275W filter. \begin{figure*} \plotone{SB99_annotate_nebula.pdf} \caption{Color-color diagram of sources detected in the F275W filter, all of which are found in the tail, and limit sources both inside and outside the tail. Also shown is our SSP model track generated with SB99 using Padova isochrones. The model is generated for a population with solar metallicity (Z=0.02); the labeled model ages are shown as open triangles. Sources are labeled with their identifier from Figures \ref{fig:numberedsources} \& \ref{fig:stamps}. The extinction vector (shown here for an $A_{F475W}=0.5$ mag, which results in a reddening value of $E(B-V) \sim 0.5$) is derived from the extinction curve generated by \citet{Dong+14}.} \label{fig:tracks_labeled} \end{figure*} \begin{figure*} \plotone{275_tracks.pdf} \caption{Color magnitude diagram of the sources in the tail region of D100 shown in Figures \ref{fig:numberedsources} \& \ref{fig:stamps}. The crosses are sources that have positive F275W detections, while the upper limit arrows correspond to sources with no detection in F275W. Symbols in green are inside an area of positive H$\alpha$ flux corresponding to the tail of D100, seen in Figure \ref{fig:rectangle}, while those in blue are outside the tail. The source that is brightest in F475W is source 1, the most visually obvious source of F275W flux, shown in Figure \ref{fig:numberedsources}. The lines correspond to SSPs of a fixed mass evolving in time, generated with SB99, while the labels give the age of the population at the marked open diamonds. The extinction vector is drawn for $A_{F475W} = 0.5$ mag.} \label{fig:colormag1} \end{figure*} For each source, an estimate of the age was made based on its position in the color-color diagram relative to the tracks, accounting for the error bars on the photometry of the source. If the source does not fall on the track within the errors, its path was traced back to the track along the extinction vector. Uncertainty in age was estimated using the error bars on color. For sources 2 and 16, an estimate was difficult, as neither the error bars nor the extinction vector can place these sources on the track. Large uncertainties in age are given for sources 2 and 16, and an estimate of their actual age was made based on their relative position with respect to the tracks. For source 16, we also compared with estimates of the age and mass based on the H$\alpha$ luminosity of the surrounding H II region (Figure \ref{fig:H_alpha}), discussed below. For each source an estimate of the mass was made based on the age and luminosity of the source, as well as the uncertainties in these parameters. The results of the calculations of mass and age based on the models are shown in Table 1. We find a range of ages between about $1-35$ Myr, and a range in masses of 10$^3-$10$^5$ M$_{\odot}$. For some of the star clumps, the masses are so small that the stochasticity of the IMF may begin to have an effect on the colors, affecting our mass and age estimates. In \citet{Calzetti+13}, the author found that a star clump mass of 1.7 $\times$ 10$^4$ M$_{\odot}$ was required for a fully sampled Kroupa IMF (such that at least one 30 M$_{\odot}$ star is formed).
For a star cluster of $\sim$ 1 $\times$ 10$^4$ M$_{\odot}$, the effects of stochastic sampling can result in a scatter as large as 20\% in the measured ionizing photon flux; for a star cluster of $\sim$ 1 $\times$ 10$^3$ M$_{\odot}$, the scatter can reach 70\% \citep{Calzetti+13, Cervino+02}. This may explain why some sources, such as source 16, with an estimated mass of 6.9$^{+1.6}_{-0.3}$ $\times$10$^{3}$ M$_{\odot}$, fall in regions not predicted by our models. \begin{table*} \centering \caption{Properties of each star clump detected in F275W in the tail of D100. From left to right: in column (1), the source number. In columns (2), (3), and (4), the magnitude and colors of the sources. In column (5), the $A_{F475W}$ value listed is the magnitude of extinction in the F475W band required for the source to fall on the SB99 tracks. In columns (6) and (7), the age and mass of the sources from SB99. In column (8), the observed half-light radius of the sources in the F475W (discussed in Section 3.5.5).} \label{my-label} \begin{tabular}{cccccccc} \hline \textbf{\#} & \textbf{F475W} & \textbf{F275W-F475W} & \textbf{F475W-F814W} & \textbf{A$_{F475W}$} & \textbf{Age (Myr)} & \textbf{Mass (M$_{\odot}$)} & \textbf{$R_e$ (pc)} \\ \hline \textbf{1} & 22.2 $\pm$ 0.01 & 0.35 $\pm$ 0.03 & 0.58 $\pm$ 0.01 & 0.3 & 8$^{+5}_{-5}$ & 2.1$^{+1.7}_{-1.4}$ $\times$10$^{5}$ & 104 $\pm$ 21 \\ \textbf{2} & 25.2 $\pm$ 0.04 & -0.29 $\pm$ 0.18 & 1.05 $\pm$ 0.05 & - & 10$^{+5}_{-5}$ & 2.1$^{+1.3}_{-1.2}$ $\times$10$^{4}$ & 75 $\pm$ 24 \\ \textbf{6} & 24.9 $\pm$ 0.02 & -0.009 $\pm$ 0.16 & -0.36 $\pm$ 0.06 & - & 6$^{+1}_{-1}$ & 1.0$^{+0.7}_{-0.1}$ $\times$10$^{4}$ & 62 $\pm$ 20 \\ \textbf{8} & 25.0 $\pm$ 0.03 & 1.04 $\pm$ 0.65 & 1.06 $\pm$ 0.04 & 0.6 & 10$^{+5}_{-5}$ & 2.5$^{+1.6}_{-1.5}$ $\times$10$^{4}$ & 68 $\pm$ 30 \\ \textbf{10} & 24.7 $\pm$ 0.03 & -0.01 $\pm$ 0.22 & 0.71 $\pm$ 0.05 & 0.3 & 10$^{+2}_{-2}$ & 3.3$^{+0.3}_{-1.2}$ $\times$10$^{4}$ & 74 $\pm$ 31 \\ \textbf{11} & 25.7 $\pm$ 0.04 & 0.21 $\pm$ 0.40 & 1.09 $\pm$ 0.06 & 0.6 & 10$^{+5}_{-5}$ & 1.3$^{+0.9}_{-0.8}$ $\times$10$^{4}$ & 47 $\pm$ 25 \\ \textbf{16} & 25.1 $\pm$ 0.04 & -0.51 $\pm$ 0.19 & -0.30 $\pm$ 0.15 & 0.1 & 1$^{+5}_{-0.9}$ & 6.9$^{+1.6}_{-0.3}$ $\times$10$^{3}$ & 51 $\pm$ 23 \\ \textbf{27} & 26.9 $\pm$ 0.10 & -0.22 $\pm$ 0.56 & 0.15 $\pm$ 0.24 & - & 9$^{+15}_{-2}$ & 4.1$^{+5.7}_{-2.4}$ $\times$10$^{3}$ & $\le$ 36 \\ \textbf{28} & 25.9 $\pm$ 0.06 & 0.43 $\pm$ 0.57 & 0.20 $\pm$ 0.14 & 0.1 & 35$^{+20}_{-20}$ & 3.1$^{+0.5}_{-1.2}$ $\times$10$^{4}$ & 63 $\pm$ 21 \\ \textbf{31} & 25.3 $\pm$ 0.04 & 1.05 $\pm$ 0.61 & 0.74 $\pm$ 0.05 & 0.5 & 9$^{+16}_{-2}$ & 1.8$^{+2.6}_{-1.0}$ $\times$10$^{4}$ & 51 $\pm$ 23 \end{tabular} \end{table*} The inclusion of Starburst99 models with nebular lines allows us to compare the H$\alpha$ luminosity measured from the ground-based Subaru telescope observations with that predicted for the star clumps with possible H II regions. These would be sources with ages younger than 10 Myr. Four sources from Table 1 fit these criteria, but source 31 has a large uncertainty in the age, and does not appear to have any nearby H$\alpha$ peak suggestive of an H II region (although at an age of 9 Myr, the H$\alpha$ flux of the H II region approaches the detection limit of the Subaru observations). Thus we compare just sources 16, 6, and 1 to our model; these are the youngest sources, with ages of 1$^{+5}_{-0.9}$, 6$^{+1}_{-1}$, and 8$^{+5}_{-5}$ Myr, respectively.
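The comparison itself amounts to a linear mass scaling of the model's nebular output, which is tabulated per 10$^4$ M$_{\odot}$; a minimal sketch follows (the numerical values below are placeholders for illustration, not our measurements):

\begin{verbatim}
# Sketch: compare a measured H-alpha luminosity with the SB99 nebular
# prediction, which is normalized to a 1e4 Msun population.
M_NORM = 1.0e4  # Msun, normalization mass of the model grid

def predicted_L_halpha(model_L_at_norm, clump_mass):
    """Scale the model H-alpha luminosity (erg/s per 1e4 Msun at the
    source age) linearly with the clump stellar mass."""
    return model_L_at_norm * clump_mass / M_NORM

# Placeholder numbers, purely for illustration:
model_L = 1.0e37      # erg/s for a 1e4 Msun population at the source age
clump_mass = 6.9e3    # Msun, e.g. the estimated mass of source 16
L_observed = 5.0e36   # erg/s measured in the Subaru aperture

ratio = L_observed / predicted_L_halpha(model_L, clump_mass)
print("observed/predicted = %.2f" % ratio)
\end{verbatim}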
\begin{figure*} \plotone{H_alpha_prediction.pdf} \caption{Comparison of the measured H$\alpha$ luminosity of three tail sources from ground based Subaru observations, and the predicted nebular H$\alpha$ emission from Starburst99 models. The H$\alpha$ luminosity of each source has been normalized to that produced by a mass of stars of 10$^4$ M$_{\odot}$, to match the model. Different aperture sizes are shown for each source, including the same size as that used to calculate the total F475W flux with HST, and double that aperture radius. The greater resolution of HST compared to Subaru, as well as the unknown extent to which the ionizing flux from the star clumps penetrates, introduces some uncertainty as to the best aperture size to use. However, the double aperture size encompasses the entire H$\alpha$ peak seen in the Subaru image for Source 6. For Source 16, the aperture size enclosing the entire nearby H$\alpha$ peak is labeled as ``largest aperture'' in the plot, and is about four times the area of the HST aperture.} \label{fig:H_alpha} \end{figure*} Sources 1 \& 6 show excellent agreement with the predicted H$\alpha$ flux, adding an independent confirmation of our estimates of the age and mass of these sources. For Source 16, there is a large uncertainty in the age and in the correct aperture size. The source agrees within the error bars, but shows a generally lower than expected H$\alpha$ flux. This may be a consequence of the stochasticity of the IMF for such a low mass source, leading to a different relation between stellar mass and H$\alpha$ flux than predicted from the IMF of our model. Dust extinction in the H II region surrounding this source is also unknown. However, the lower than expected H$\alpha$ luminosity may also be a sign of RPS stripping of the H II region itself. If enough gas is stripped, some Lyman continuum photons escape the H II region, and the source will produce less H$\alpha$ flux from recombination than predicted. This effect could cause the SFR based on the H$\alpha$ luminosity of RPS tails to be underestimated (Kenney, J.D.P., et al. \textit{in prep}). Along with the instantaneous burst model, we also experimented with a model in which sources formed in a burst of 10 Myr duration (with a boxcar shape). However, we found that about half the sources shown in Figure \ref{fig:tracks_labeled} (those with F475W-F814W $\gtrsim$ 0.8) no longer fell near the tracks, since the extended rightward jog in the track at ages $\sim 10-30$ Myr in the color-color diagram of Figure \ref{fig:tracks_labeled} largely disappears. This suggests that the instantaneous burst model is preferable to the 10 Myr burst model. For sources that fell nearer the 10 Myr burst track, the best fitting average age is older, and the best fitting mass is larger by about a factor of two; this is because, at fixed luminosity, an older stellar population must be more massive. However, our comparison of the predicted and observed H$\alpha$ luminosities of some sources, shown in Figure \ref{fig:H_alpha}, supports the age and mass estimates for the sources from our instantaneous models. Thus, we believe an instantaneous burst model is the best approximation for the star formation history of these sources.
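For reproducibility, we also give a minimal sketch of the de-reddening step used in the age assignment described above; the track values and reddening slopes below are placeholders, not our actual SB99 grid or the \citet{Dong+14} coefficients:

\begin{verbatim}
# Sketch: slide a source back along the extinction vector until it makes
# its closest approach to the SSP track in color-color space.
import numpy as np

# Placeholder SSP track: columns are age (Myr), F475W-F814W, F275W-F475W
TRACK = np.array([[1.0, -0.4, -0.6],
                  [6.0, -0.3, -0.1],
                  [10.0, 0.4, 0.2],
                  [35.0, 0.6, 0.7],
                  [100.0, 0.9, 1.3]])

# Reddening per magnitude of A_F475W in each color (placeholder slopes)
DX, DY = 0.5, 0.4

def deredden_to_track(x, y, max_av=1.0, step=0.01):
    """Return (A_F475W, age) at the closest approach to the track."""
    best_dist, best_av, best_age = np.inf, 0.0, np.nan
    for av in np.arange(0.0, max_av + step, step):
        xc, yc = x - av * DX, y - av * DY  # de-reddened colors
        dist = np.hypot(TRACK[:, 1] - xc, TRACK[:, 2] - yc)
        i = int(np.argmin(dist))
        if dist[i] < best_dist:
            best_dist, best_av, best_age = dist[i], av, TRACK[i, 0]
    return best_av, best_age

# e.g. a source with (F475W-F814W, F275W-F475W) = (1.09, 0.21)
print(deredden_to_track(1.09, 0.21))
\end{verbatim}

In practice one would interpolate along the track rather than snapping to grid points, but the logic is the same.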
\subsubsection{Comparison of star clump properties in D100 with other RPS tails} Our results, along with other studies of ram pressure stripped \citep{Cortese+07, Yagi+13} and tidal \citep{Boselli+18} tails, find ages of stars in tails to be $\lesssim 100$ Myr, with masses between $10^{3}$ and $10^{6}$ M$_{\odot}$; masses on the higher end of this range are rarer. While the highest masses are similar to those of globular clusters, we find in this study that the tail star clumps are probably not bound (see Section 3.5.5), so these sources are probably not single star clusters. The consistent ages of $\lesssim 100$ Myr are likely due to observational constraints; it is much easier to observe younger stars, as luminosity declines rapidly with age. One noteworthy case of older stars found in a ram pressure stripped tail is in the dwarf galaxy IC 3418 \citep{Fumagalli+11}, in which the ages of some stars in the tail, after removal of contaminating background sources \citep{Kenney+14}, were found to be as old as 300 Myr. It is challenging but important to detect older stars in tails, in order to piece together their evolutionary histories. \subsubsection{The SFR of the tail} We calculate the SFR of the tail from our estimates of age and mass from the SB99 models. The most dominant source in stellar mass is source 1 at the base of the tail ($\sim$ 3.5'' from the nucleus), with a mass about equal to that of all other sources combined. We choose to exclude this region because we wish to ultimately calculate the star formation efficiency of the main part of the tail, and this source is not resolved from the disk of D100 in the IRAM $\sim$ 30'' aperture observations of D100 from \citet{Jachym+17}. The total stellar mass of all the other sources detected by HST in the tail is 1.6$^{+0.8}_{-0.7}$ $\times \,10^5$ M$_{\odot}$. However, we must consider that the limiting magnitude of our F275W observations means we are not sensitive to young star clumps below a certain mass that may be present. We use the CFHT $u$-band data from \citet{Smith+10}, which are more sensitive to diffuse stellar emission, to estimate the mass of stars below our limit. One concern with these data is possible line contamination from the 3727\AA \, [OII] line within the $u$-band. However, we find that line contamination is unlikely to be a large effect. In Figure \ref{fig:UV-HST}, we show that the $u$-band emission aligns spatially with the HST observed clumps, while the H$\alpha$ is slightly offset, or in some cases unassociated with the $u$-band. Thus, it seems likely that the $u$-band flux is dominated by stellar continuum emission. This displacement between the local peak of the H$\alpha$ and the location of some of the stellar sources is also quite interesting in its own right. One would expect sources of 10 Myr or less in age to have associated H II regions. In all but source 31 (which has significant uncertainty in the age) this appears to be true. However, these peaks can be seen, to varying degrees, slightly offset downstream from the stellar sources, in both Figure \ref{fig:UV-HST} and the cutouts in Figure \ref{fig:stamps}. The downstream displacement in H$\alpha$ could be due to the original gas in the H II region in which the stars were formed being stripped further downstream by the RPS. This phenomenon has also been seen in other ram pressure stripped galaxies, such as RB 199 \citep{Yoshida+12}, and in the tail of NGC 4388 \citep{Yagi+13}.
\begin{figure*} \plotone{UV_HST_annotate.pdf} \caption{\textbf{Greyscale}: HST F475W, \textbf{Red}: Subaru H$\alpha$, \textbf{Blue}: $u$-band CFHT observations from \citet{Smith+10}. Three zoom-ins are chosen; both \textbf{A} and \textbf{C} have associated stars identified in both HST and $u$-band, and are coincident with nearby H$\alpha$ peaks, while \textbf{B} is an example of a region with a strong H$\alpha$ peak and no associated stars detected in either HST or $u$-band. $u$-band contours vary from 29.9 to 23.12 mag/arcsec$^2$ by increments of 0.9 mag/arcsec$^2$, and H$\alpha$ contours in the tail vary from 4.8 to 160,000 $\times \, 10^{-18}$ erg s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$, increasing by a factor of two for each contour level.} \label{fig:UV-HST} \end{figure*} Given that $\sim$ 72\% of the $u$-band flux is found in the clumps of stars we identify with HST, and the other 28\% is found in diffuse emission throughout the tail, we assume 28\% of the stars by mass in the tail are located outside our detected clumps, and below the detection limits of our HST observations. Thus, there is a total mass of young stars in the tail of 2.1 $^{+1.0}_{-0.9}$ $\times \,10^5$ M$_{\odot}$. This assumes that the ages and colors of the stars undetected by HST are, on average, similar to those we have detected. The estimate of the timescale of star formation to which we are sensitive is based on two factors. The first is the oldest star cluster we detect, source 28, with an age of $35 \pm \, 20$ Myr. The second factor is the detection limit in the F275W of our data; theoretically, we should be able to detect a 10$^4$ M$_{\odot}$ star clump up to 50 Myr in age, after accounting for extinction. We use $35 \pm \, 20$ Myr as the maximum age of star formation to which we are sensitive. With this stellar mass and timescale, we estimate the SFR of the tail as: $$\mathrm{SFR_{tail}}=\frac{2.1 \times 10^5 \, M_{\odot}}{3.5 \times 10^7 \, \mathrm{yr}}=6.0^{+3}_{-3} \times 10^{-3} \, M_{\odot} \, \mathrm{yr}^{-1}.$$ Our calculation of the SFR of the tail shows that H$\alpha$ flux should not be used as a proxy for star formation in ram pressure stripped tails. The total H$\alpha$ flux is calculated from the ground-based Subaru observations \citep{Yagi+10}, which, as described previously in Section 2, have been corrected for the [NII] and [SII] lines, as well as over-subtraction in the $R$-band. The total flux of the tail, excluding the area around source 1, is approximately 6.5$\times 10^{-15}$ erg s$^{-1}$ cm$^{-2}$. This is after applying a correction for the estimated dust extinction in the tail, based on the average extinction for the star clumps listed in Table 1, resulting in approximately 0.28 magnitudes of extinction at H$\alpha$. At the distance of D100, this corresponds to a total luminosity of $L_{H\alpha} = 7.78 \times 10^{39}$ erg s$^{-1}$, which corresponds to an SFR of 0.042 $M_{\odot}$ yr$^{-1}$, using the coefficients for translating H$\alpha$ flux to SFR from \citet{Kennicutt+09}, assuming a Kroupa IMF. However, our HST analysis has found a measured SFR of only $0.006^{+0.003}_{-0.003}$ $M_{\odot}$ yr$^{-1}$, a factor of seven less. This indicates that most of the H$\alpha$ emission in the tail must be the product of some mechanism other than star formation. It should be noted that for star clumps that do not fully stochastically sample the IMF, the H$\alpha$/UV ratio has been shown to be lower than in fully sampled clusters \citep{Fumagalli+11}.
Furthermore, short term variations in the star formation history, such as may occur in the extreme environment of a ram pressure stripped tail, have also been shown to lower this ratio when compared to normal star formation histories \citep{Boselli+09, Emami+18}. Thus there may be additional uncertainty to consider when comparing the expected SFR of the tail with a standard formula such as that of \citet{Kennicutt+09}. However, these factors would not be enough on their own to explain a factor of seven difference between the predicted and measured SFR. Furthermore, much of the H$\alpha$ emission in the tail is extended and smooth, and not patchy as would be expected from H$\alpha$ emission associated with star formation. Our finding of a measured SFR well below the prediction from H$\alpha$ is in contrast to other recent studies of ram pressure stripped tails. A study of a ram pressure stripped `jellyfish' galaxy in the GASP sample by \citet{George+18} finds good concordance between the measured SFR from the UV and that predicted from the H$\alpha$ emission in the tail. Other studies of GASP galaxies, such as \citet{Poggianti+19}, have also found that the dominant ionization mechanism for H$\alpha$ excitation in these tails is photoionization from young stars. This is quite different from our results for the tail of D100, in which we find levels of star formation seven times lower than those predicted from H$\alpha$, suggesting that a dominant mechanism, or mechanisms, other than photoionization from young stars is present in the tail. Thus, one would not want to make a general conclusion about RPS galaxies from only a subset of the population. \subsubsection{The star formation efficiency of the tail} To estimate the amount of molecular gas in the tail of D100, we use the number quoted in \citet{Jachym+17}, M$_{\mathrm{H_2}}$ $\sim$ 10$^9 \, M_{\odot}$. We also have an upper limit on the H I in the tail of 0.5 $\times \, 10^8 \, M_{\odot}$ \citep{Jachym+17}. The mass of H$_2$ and H I in the tail allows us to calculate the star formation efficiency (the inverse of the gas consumption timescale) of the tail as: $$\mathrm{SFE_{tail}}=\frac{6.0^{+3}_{-3} \times 10^{-3} \, M_{\odot} \mathrm{yr}^{-1}}{10^9 \, M_{\odot}} = 6.0^{+3}_{-3} \times10^{-12} \, \mathrm{yr}^{-1}.$$ The overall star formation efficiency of the tail is $\sim$ $6 \, \times$ 10$^{-12}$ yr$^{-1}$, quite low, corresponding to a gas depletion timescale of 1.7 $\times$ 10$^{11}$ years. This is nearly two orders of magnitude lower than in the inner disks of normal spirals \citep{Bigiel+08}, but comparable to the SFE of the outer disks of spirals and a couple of other RPS tails. However, if our estimate of the gas mass is accurate, this is the highest gas surface density at which such a low SFE has been measured. We compare this estimate of star formation efficiency to that in the disks of nearby spirals, as well as two other ram pressure stripped tails, in Figure \ref{fig:Pavel_14_tail_add}. We also calculate the surface density of gas in the tail. We assume that the molecular gas detected in the apertures is all concentrated in the region of the tail visible in H$\alpha$. Given a tail length of 60 kpc and a width of 1.5 kpc, the total area of the tail is 90 kpc$^2$. This gives an average gas surface density over the tail of $\Sigma_{gas}= 11 \, M_{\odot} \, \mathrm{pc}^{-2}$, and a SFR surface density of $\Sigma_{SFR} =6.6^{+3.3}_{-3.3} \times10^{-5} \, M_{\odot} \,\mathrm{yr}^{-1} \, \mathrm{kpc}^{-2}$.
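These tail-averaged quantities follow from simple arithmetic; a minimal sketch collecting the numbers quoted above:

\begin{verbatim}
# Sketch: tail-averaged star formation quantities from the text's numbers.
M_stars = 2.1e5    # Msun, young stellar mass in tail (excluding source 1)
t_sf = 3.5e7       # yr, star formation timescale we are sensitive to
M_gas = 1.0e9      # Msun, molecular gas mass (Jachym et al. 2017)
area = 60.0 * 1.5  # kpc^2, tail length x width

sfr = M_stars / t_sf              # ~6.0e-3 Msun/yr
sfe = sfr / M_gas                 # ~6.0e-12 1/yr
t_dep = 1.0 / sfe                 # ~1.7e11 yr, gas depletion timescale
sigma_gas = M_gas / (area * 1e6)  # ~11 Msun/pc^2 (1 kpc^2 = 1e6 pc^2)
sigma_sfr = sfr / area            # ~6.6e-5 Msun/yr/kpc^2

print(sfr, sfe, t_dep, sigma_gas, sigma_sfr)
\end{verbatim}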
In order to see whether there is a gradient in SFE based on radial distance from the body of the host galaxy, we divide the tail into two parts: the inner tail, extending along the first 30 kpc of the tail, and the outer tail, extending along the last 30 kpc. Based on the fraction of molecular gas per aperture along the tail, we estimate that 57\% of the total flux from molecular gas is detected in the inner tail, while 43\% is found in the outer tail. Given a total mass of molecular gas in the tail of $\sim10^9$ M$_{\odot}$, this results in a surface density of gas in the inner tail of 13 $M_{\odot} \, \mathrm{pc}^{-2}$, and in the outer tail of 10 $M_{\odot} \, \mathrm{pc}^{-2}$. Furthermore, only sources 27, 28, and 31 are found in the outer tail. The resulting surface densities of gas and star formation are plotted in Figure \ref{fig:Pavel_14_tail_add}. \begin{figure*} \plotone{Pavel_edit.pdf} \caption{SFR surface density versus gas surface density, from \citet{Jachym+14}, with our data from D100 added. The lower error bars show what the results would be with a factor of 10 difference in the CO$-$H$_2$ relation. The upper error bar on the gas mass is given from the limit on the mass of H I in the tail. The blue points show the body and three tail regions of ESO 137-001, with point A closest to the body of the galaxy and C furthest away. Filled circles show molecular gas, and the bars show upper limits on H I. Contours show SFRs and efficiencies sampled from the disks of seven nearby spirals from \citet{Bigiel+08}. The black dots show the average molecular gas depletion time measured in 30 nearby galaxies. For comparison, the tail of the Virgo cluster dwarf galaxy IC3418 is also included. The blue pentagons show data from \citet{Moretti+18} from four ram pressure stripped tails in the GASP sample. They represent lower limits on both the star formation and gas surface density (see \citet{Moretti+18} for details), thus the SFE in these tails could be even higher than shown here.} \label{fig:Pavel_14_tail_add} \end{figure*} Overall, D100 exhibits star formation efficiency comparable to, and possibly even lower than, that found in the tail of ESO 137-001. However, we must consider that the amount of molecular gas relies on an accurate H$_2$-CO conversion factor. \citet{Jachym+17} point out that the conversion factor may vary widely throughout the tail, and may differ in particular from empirical values measured in the disks of galaxies, due to the different conditions in this environment. To account for this, we show error bars in Figure \ref{fig:Pavel_14_tail_add} corresponding to a factor of 10 less molecular gas in the tail. Even accounting for such an extreme variance in the H$_2$-CO conversion factor, the SFE in the tail of D100 is much lower than that of the inner disks of spirals from the \citet{Bigiel+08} sample. The upper error bar on the gas mass comes from the limit on the mass of H I in the tail. Similar to what is seen in ESO 137-001, we see a trend of star formation efficiency falling with distance from the body of the galaxy, with the outer tail being about two times less efficient than the inner tail. This suggests that such a gradient may be common in ram pressure stripped galaxies. It could be explained by the idea that less dense gas will be accelerated by ram pressure more easily than the densest gas, meaning that denser gas, i.e., the sites of star formation, will be located preferentially in the inner tail \citep{Jachym+14, Jachym+17}.
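As a quick check of the inner/outer surface densities quoted at the start of this subsection, the following sketch (same assumptions as the previous one, with each half of the tail taken as 30 kpc $\times$ 1.5 kpc) recovers the quoted values:
\begin{verbatim}
M_H2 = 1e9                                 # total H2 mass [M_sun]
area_half_pc2 = (30.0 * 1.5) * 1e6         # 45 kpc^2 in pc^2
sigma_inner = 0.57 * M_H2 / area_half_pc2  # ~13 M_sun/pc^2
sigma_outer = 0.43 * M_H2 / area_half_pc2  # ~10 M_sun/pc^2
\end{verbatim}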
A key difference between D100 and ESO 137-001 is that the gas surface density in the tail of D100 is almost an order of magnitude greater than that of ESO 137-001. While ESO 137-001 has a similar SFR and gas surface density to the outer disks of nearby spirals, the tail of D100 has a gas surface density of $\sim$ 9--12 $M_{\odot} \, \mathrm{pc}^{-2}$, comparable to regions of normal spiral disks where the star formation surface density is almost two orders of magnitude greater than what we measure in the tail. We note that the beam size for the CO(1-0) observations is not the same in the three plotted galaxies. The ratio in beam area between ESO 137-001 and D100 is $\sim$ 2.5, so ESO 137-001 is more beam diluted than D100, which could result in it having a lower beam-averaged gas surface density (and SFR/area) by a factor of $\sim$ 2. This factor would not change our overall conclusions regarding the star formation efficiency comparison between the galaxies. \subsubsection{Are young tail stars in star clusters?} With the resolution of HST we get useful information on the sizes of the tail star clumps. In Figure \ref{fig:clump_sizes} we plot the effective (half-light) radii R$_{e}$ of the stellar sources versus their stellar masses, and compare to the effective radii and masses of known star clusters, super star clusters, and star cluster complexes from \citet{Bouwens+17}. The source sizes have been corrected for the PSF of the HST F475W filter ($\sim$ 0.08'' for our data). Only one source, source 27, the least massive star clump, is unresolved and thus is labeled as an upper limit in the plot. The sizes of the sources in D100 are much larger than those of single, gravitationally bound star clusters, and are consistent with these sources being large, unbound star cluster complexes. Being unbound, these complexes should disperse over time. This would make it difficult to detect the older stars formed in ram pressure stripped tails, and to determine the total contribution to intracluster light from ram pressure stripped tails. \begin{figure*} \plotone{star_complex_sizes.pdf} \caption{The effective radii of the star clusters found within the tail, calculated by taking the geometric mean of the effective semi-minor and semi-major axes, are plotted versus the stellar masses. The polygons of different colors show the general parameter space of star clusters, super star clusters, and star cluster complexes from an empirical sample compiled by \citet{Bouwens+17}. Star clusters and super star clusters are taken from a sample in the local universe (z $\sim$ 0), while star cluster complex sizes and masses come from a large sample of complexes in galaxies from z $=0-3$. The dashed black line indicates the sensitivity limits of the sample from \citet{Bouwens+17}. Our deep observations of the relatively nearby Coma cluster allow us to sample a fainter cluster population than \citet{Bouwens+17}. The blue line shows the PSF limit of the size of a resolved star clump for the HST observations, 36.5 pc, and the upper limit sign is for source 27.} \label{fig:clump_sizes} \end{figure*} \section{On the morphology of the tail} An especially remarkable feature of D100 is its simple, long, straight, and relatively unbroadening H$\alpha$ tail \citep{Yagi+07}.
We gain new insights into the physical processes that shape the tail from our HST imaging data, both from the dust morphology viewed in the base of the tail, and from the temporal progression of outside-in disk stripping that we have measured from HST colors, which relates directly to the broadening of the tail. The color HST image in Figure \ref{fig:colorstreams} shows coherent, filamentary dust structures, especially visible at both edges of the tail. The northernmost is a straight, continuous filament extending a total of 4'' (1.9 kpc), suggesting little turbulence in the flow of the gas. The long, straight, and relatively unbroadening H$\alpha$ tail (Figure \ref{fig:sidebyside}) also suggests a minimum of turbulence. Because the tail of D100 is relatively simple and well defined, it is easier to study than tails with many complex components from different radii. It is an excellent galaxy to compare to simulations, to investigate the factors that contribute to the overall width and structure of tails. Other observed ram pressure stripped tails, such as the well studied tail of ESO 137-001, tend to be messier, i.e. with discrete components at a range of distances from the tail center, and overall broader than the tail of D100. In Figure 2 of \citet{Jachym+14}, along with the main tail, one other filamentary gas tail at larger galactic radii can be seen in ESO 137-001. This would be due to lower density ISM in the inner radii of the galaxy being stripped before the outer disk completely loses its gas. This is likely the case for low inclination (closer to face-on) stripping, when diffuse gas is stripped from a region before denser ISM, leaving pockets in the gas disk where hydrodynamic instabilities can form that increase the stripping rate \citep{Quilis+00}. Other galaxies in Coma with ram pressure stripped tails, such as IC4040 \citep{Yoshida+12} and RB199 \citep{Yoshida+08}, also show these characteristic multi-component tails, likely due to stripping at multiple radii at once. D100 may have a simpler, single-component tail structure, with inhibited stripping at multiple radii, due to undergoing a highly inclined stripping event (see Figure 7 of \citet{Jachym+17} for a visualization). In highly inclined stripping cases there is both a larger path length for gas to travel through the disk, and a higher projected gas surface density. These factors make it difficult for ram pressure to punch holes through the disk. Another remarkable feature of the D100 tail is its narrow, relatively unchanging width over its length. Our observations have shown that the dust tail expands from an initial diameter of 0.95 kpc to a diameter of 1.4 kpc at the edge of the disk of the galaxy, an increase of $\sim$ 50\%. The H$\alpha$ tail extends 60 kpc from the center of the galaxy, and broadens slightly from a half-width of 0.95 kpc at the center to 1.7 kpc at the end of the tail. There are several plausible mechanisms by which tails could broaden, and the fact that so little broadening is seen means that their impact on the tail of D100 is minimal. We briefly discuss each possible mechanism: radial progression of outside-in stripping, angular momentum from disk rotation, gas pressure, and turbulence (including influence from magnetic fields). In the absence of any other broadening mechanism, the tail should broaden simply due to the radial progression of outside-in disk stripping.
From the HST colors, we have determined (in Section 3.4.1) the star formation quenching time as a function of radius, and found that the stripping radius has progressed from $r \sim 1.3-2.3$ kpc to $r \sim 0.25$ kpc in the last 280 Myr. This matches the measured $\sim$ 2 kpc half-width of the outer tail and the $\sim$ 250 Myr age of the outer tail estimated from gas kinematics by \citet{Jachym+17} very well. Thus, nearly all of the tail broadening is consistent with the radial progression of outside-in disk stripping, putting a strong limit on all the other factors which can cause broadening. Furthermore, this suggests that gas stripped from further out in the disk of D100 should be found even further downstream. This gas may be part of a broader, fainter (not currently detected), and older tail component. Another factor in the lack of tail broadening could be the near edge-on stripping of D100. The momentum of the gas in the disk carries some orbital velocity, related to the distance from the galaxy center and the mass of the host galaxy. As this gas is stripped, it will expand outward since it is no longer forced to rotate about the galaxy center. \citet{Roediger+05} showed in simulations that azimuthal asymmetries in the radius of origin of gas tails in RPS galaxies resulted for inclinations less than 30 degrees (like D100), due to the angular momentum of gas in the disk. In nearly face-on stripping, the angle of incidence of RPS is nearly perpendicular to the direction of rotation of gas in the disk, likely leading to neither inhibition nor promotion of broadening. In the nearly edge-on stripping case, however, during part of its orbit the gas will rotate at a near parallel or anti-parallel angle to the incidence of RPS. The momentum change from ram pressure would thus sometimes act opposite to the direction of rotation, reducing the angular momentum of the gas as it is stripped. This could result in less broadening, as opposed to other observed tails in galaxies that are stripped closer to face-on. It should be noted that D100 is a relatively low-mass spiral, with the rotation speed estimated to be only about 50 km s$^{-1}$ in the inner galaxy \citep{Caldwell+99}. Such low angular velocity in the innermost galaxy could also result in little to no broadening. Another explanation for the lack of tail broadening comes from simulations that focus on modeling gas cooling, such as that by \citet{Tonnesen+10}, which shows that narrower tails result from including gas cooling in the simulation. This is due to the reduced pressure of the gas in the tail with respect to the intracluster medium (ICM). If the ISM of the tail is overpressured with respect to the ICM, it will expand. The tail in their simulation that has cooled to $\sim$8,000 K, similar to the temperature of the gas seen in the H$\alpha$ image, resembles the H$\alpha$ tail of D100 in its lack of significant broadening. Finally, the turbulence of both gas in the tail, and in the surrounding ICM, is a factor that could influence broadening. Work by \citet{Roediger+08b} suggests that flaring of the gas tail is determined by the turbulence in the ICM flow past the galaxy. By varying the viscosity in their hydrodynamical simulation, \citet{Roediger+08a} found that an inviscid ICM leads to more turbulence and vortices in the ram pressure stripped tail of the simulated galaxy. Consequently, a high viscosity in the ICM in the area of D100 would lead to the suppression of hydrodynamical instabilities, and result in a straighter, unbroadening tail.
That said, studies of ram pressure stripping of elliptical galaxies in the Virgo cluster found no need to invoke ICM viscosity to explain the observed tails \citep{Roediger+15a, Roediger+15b, Kraft+17}. Coma, however, is significantly hotter than Virgo, so we would expect the influence of viscosity to be much higher, due to its strong temperature dependence. D100 would be an excellent place to further study ICM viscosity and thermal conductivity. The influence of magnetic fields could also reduce turbulence in the tail. \citet{Ruszkowski+14} found that, in a comparison between an MHD simulation (with fields only in the ICM, not the body of the galaxy) and a hydrodynamical simulation, the morphology of RPS tails changed significantly. In their MHD simulation, magnetic fields inhibit thermal conduction and result in the formation of long, filamentary structures in the gas tail. The tail that formed in the MHD simulation was also much narrower than in the corresponding, purely hydrodynamical case, in which a much clumpier, disparate tail resulted. Such structure has been noted in the H$\alpha$ tail of NGC 4569 in Virgo by \citet{Boselli+16}, who also cited it as evidence of the influence of magnetic fields in the tail. Our even higher resolution view of the dust in D100 with HST reveals kpc-scale filaments of dust, not previously seen in any other RPS tail. Magnetic fields permeating the stripped gas may play a role in inhibiting turbulence, leading to a narrower tail. The modest amount of broadening in the H$\alpha$ tail is consistent with being caused by the radial progression of outside-in stripping over the last $\sim$ 250 Myr that we have measured. The other factors that could cause broadening, specifically angular momentum from disk rotation, gas pressure, and turbulence, are apparently not significant in the tail of D100. More work is needed to understand the influence of magnetic fields in RPS galaxies; the importance of magnetic fields in the formation of ISM substructure has been noted in other RPS influenced galaxies \citep{Kenney+15}. \section{Summary} We have presented new HST F275W, F475W, and F814W observations of the galaxy D100 in the Coma cluster, known for its spectacular 60 $\times$ 1.5 kpc ram pressure stripped H$\alpha$ tail. This tail was previously found to host gas hot enough to emit in the soft X-ray, as well as to contain $\sim$ 10$^9$ M$_{\odot}$ of cold molecular gas. Given the amount of molecular gas, star formation in the tail was expected, but had not been measured quantitatively. Our new data have allowed us to characterize the amount and efficiency of star formation, as well as the ages, masses, and sizes of star complexes that formed in the tail. We have also analyzed the star formation histories of the main bodies of D100 and its two close companions, and shown that their quenching histories suggest evidence of ram pressure stripping in all three. \begin{description} \item[$\bullet$ Star formation in the tail] Through analysis of the colors and magnitudes from HST, we have constrained the ages and masses of the tail star clumps from comparison with the Starburst99 SSP model. We find a SFR in the tail of 6.0 $\times \,10^{-3}$ M$_{\odot}$ yr$^{-1}$ over the last 35 Myr, and a star formation efficiency of $ 6 \times \,10^{-12}$ yr$^{-1}$. Overall, the star formation efficiency is a factor of $\sim$ 2 times higher in the half of the tail closer to the galaxy.
Furthermore, the SFR is a factor of 7 times less than would be predicted by the H$\alpha$ flux of the tail, demonstrating that some other excitation mechanism is dominant in the H$\alpha$ tail. This is in contrast to some recently published results from the GASP group, which found good agreement between the H$\alpha$ flux and measured SFR \citep{George+18, Poggianti+19}. \item[$\bullet$ Star clump sizes] We have found from analysis of the stellar masses and sizes of the star complexes in the tail of D100, and comparison to a large sample of observed star clusters and complexes, that the tail star clumps are likely to be gravitationally unbound complexes. They have masses of $\sim$ $10^3-10^5$ M$_{\odot}$, and sizes (based on $R_e$) of $50-100$ pc, much larger than any known bound star clusters. Thus, they should disperse over time, rendering older stellar clumps formed in the tail more diffuse, and harder to detect. \item[$\bullet$ Outside-in quenching] From color analyses, we find radial age gradients due to outside-in quenching in D100 and its apparent neighbor D99. For a model star formation history with a 2\% burst at the time of quenching, D100 has a quenching time of $\sim$ 280 Myr at $r=2$ kpc and $\le$ 50 Myr at $r=0.5$ kpc, and an ongoing starburst inside $r=0.5$ kpc. D99 is completely quenched, with a quenching time of $\sim$1 Gyr at $r=3$ kpc, and $\sim$ 300 Myr in the center. \item[$\bullet$ Longevity of spiral arms] While the disk of D100 outside the central $r \sim 500$ pc has been completely stripped of gas and ceased star formation $100-400$ Myr ago, strong spiral structure is still visible. In D100's apparent neighbor D99, which is similar in mass and basic structure to D100 but has an outer disk quenching age of $\sim$ 1 Gyr, faint spiral structure is still visible. Unsharp masking reveals two main spiral arms with amplitudes of only $\sim$ 1.5\%. This suggests the time for a stripped spiral to evolve into an S0 type galaxy is on the order of $\sim$ 1 Gyr. \item[$\bullet$ Substructure in the tail and little broadening] HST images reveal parallel kiloparsec-length dust filaments not previously seen in any other RPS tail. Such filaments are seen in MHD simulations, but not purely hydrodynamical simulations, suggesting that magnetic fields are important in RPS tails. Magnetic fields permeating the stripped gas may play a role in inhibiting turbulence, leading to a narrower tail. The modest amount of broadening in the H$\alpha$ tail is consistent with being caused by the radial progression of outside-in stripping over the last $\sim$ 250 Myr that we have measured. The other factors that could cause broadening, specifically angular momentum from disk rotation, gas pressure, and turbulence, are apparently not significant in the tail of D100. \end{description} \acknowledgments We thank the referee for their helpful comments. W. Cramer, J. Kenney, \& M. Sun acknowledge the support of STScI Grant HST-GO-14361.003. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained June-July of 2016, at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program 14361 (PI:Sun), and additional archival data from program 14182 (PI:Puzia). M. Sun acknowledges support from NSF grant 1714764 and from Chandra Award GO6-17111X.
This work is based in part on data collected by the Subaru Telescope, which is operated by the National Astronomical Observatory of Japan. We also thank Russell Smith for facilitating, and Stephen Gwynn for providing, the reduced CFHT data from MegaPipe. P.J. acknowledges support from project LM2015067 of the Ministry of Education, Youth and Sports of the Czech Republic. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org. This research has made use of the SVO Filter Profile Service (http://svo2.cab.inta-csic.es/theory/fps/) supported by the Spanish MINECO through grant AyA2014-55216. Fantastic false color images of the reduced HST data were provided by the STScI imaging team led by J. DePasquale. \clearpage \bibliographystyle{apalike}
\section{Introduction} Anisotropic magnetic resonance (MR) images are those acquired with high in-plane resolution and low through-plane resolution. It is common practice to acquire anisotropic volumes in clinics as it reduces scan time and motion artifacts while preserving SNR.
To improve through-plane resolution, super-resolution~(SR) methods have been developed on MR volumes~\cite{zhao2020smore,oktay2016multi,chen2018efficient,du2020super}. The application of SR methods to estimate the underlying isotropic volume has been shown to lead to improved performance on downstream tasks~\cite{zhao2019applications}. For 2D multi-slice protocols, the through-plane point-spread function (PSF) is known as the slice profile. When the sampling step is an integer, the through-plane signals of an acquired MR image can be modeled as a strided 1D convolution between the slice profile and the object to be imaged~\cite{han2021mr,prince2006medical,sonderby2016amortised}. Commonly, the separation between slices is equivalent to the full-width-at-half-max~(FWHM) of the slice profile, but volumes can also be acquired where the slice separation is less than or greater than the slice profile FWHM, corresponding to ``slice overlap'' and ``slice gap'' respectively. Data-driven SR methods usually simulate low-resolution (LR) data from high-resolution (HR) data using an assumed slice profile~\cite{zhao2020smore,oktay2016multi,chen2018efficient,du2020super}, or the slice profile is estimated according to the image data or acquisition~\cite{han2021mr}. In either case, neural SR methods are formulated as a classical inverse problem: \begin{equation} y = Ax \label{eq:inverse} \end{equation} where $y$ is the LR observation, $A$ is the degradation matrix (equivalent to a strided convolution with the slice profile), and $x$ is the underlying HR image. Commonly, this is precisely how paired training data is created; HR data is degraded by $A$ to obtain the LR $y$, and weights $\theta$ of a neural network $\phi$ are learned such that $\phi_\theta(y) \approx x$. However, under this framework there is no specification of the information lost by application of $A$; the model is end-to-end and directed only by the dataset. In our work, we propose an entirely novel SR framework based on perfect reconstruction~(PR) filter banks. From filter bank theory, PR of a signal $x$ is possible through an $M$-channel filter bank with a correct design of an analysis bank $H$ and synthesis bank $F$~\cite{strang1996wavelets}. Under this formulation, we do not change Eq.~\ref{eq:inverse} but explicitly recognize our observation $y$ as the ``coarse approximation'' filter bank coefficients and the missing information necessary to recover $x$ as the ``detail'' coefficients; see Fig.~\ref{fig:obs_model}. For reference, in machine learning jargon the analysis bank is an encoder, the synthesis bank a decoder, and the coarse approximation and detail coefficients are analogous to a ``latent space''. \begin{figure}[!tb] \centering \includegraphics[width=\textwidth]{figs/filter_bank.png} \caption{The LR input $y$ occurs after a convolution of the unobserved HR $x$ with slice profile $H_0$ and downsampling factor $M$. Both $y$ and $H_0$ (green) are given and fixed. The downsampling step $M$ is also fixed. In Stage 1, $H_1, \ldots, H_{M-1}$ and $F_0, F_1, \ldots, F_{M-1}$ are learned; in Stage 2, a mapping from $y$ to $d_1, \ldots, d_{M-1}$ is learned.} \label{fig:obs_model} \end{figure} The primary contribution of this work is to reformulate SR to isotropy of 2D-acquired MR volumes as a filter bank regression framework. This approach has several benefits.
First, the observed low-frequency information is untouched in the reconstruction; thus our method explicitly synthesizes the missing high frequencies and does not need to learn to preserve acquired low-frequency information. Second, in our framework, the downsampling factor is $M$, specifying the number of channels in the $M$-channel filter bank; since $M$ includes any slice gap, this directly supports ``slice gap'' acquisition recovery. Third, the analysis filters of PR filter banks necessarily introduce aliasing which is canceled via the synthesis filters; therefore, we do not need to directly handle anti-aliasing of our observed image. Fourth, our Stage~2 architecture has dynamic capacity for lower-resolution images. This is intuitive: when fewer measurements are taken, more must be estimated in recovery, and a more powerful model is needed. Fifth, our method exploits the nature of anisotropic volumetric data; the in-plane slices are HR while the through-plane slices are LR. Thus, we do not rely on external training data, using the in-plane HR data to perform internal supervision. In the remainder of the paper, we describe this framework in detail, provide practical implementation, and evaluate against a state-of-the-art supervised SR technique. We demonstrate the feasibility of formulating SR as filter bank coefficient regression and believe it lays the foundation for future theoretical and experimental work in SR of MR images. \section{Methods} The analysis bank, $H$, and synthesis bank, $F$, each consist of $M$ 1D filters represented in the $z$-domain as $H_k$ and $F_k$, respectively, with corresponding spatial domain representations $h_k$ and $f_k$. As illustrated in Fig.~\ref{fig:obs_model}, input signal $x$ is delayed by $z^{-k}$, filtered by $H_k$, and decimated with $\downarrow M$ (keeping only every $M^\text{th}$ entry) to produce the corresponding coefficients. These coefficients exhibit aliasing and distortion which are corrected by the synthesis filters~\cite{strang1996wavelets}. Reconstruction from coefficients comes from zero-insertion upsampling with $\uparrow M$, passing through filters $F_k$, advancing by $z^k$, and summation across the $M$ channels. \begin{figure}[!tb] \centering \includegraphics[ width=\textwidth, page=2, trim=3cm 37cm 25cm 7cm, clip, ]{figs/multi_figs.pdf} \caption{Stage~2 network architecture for the generator and discriminator. All convolutional layers use a $3\times 3$ kernel. The generator and discriminator use $16$ and $2$ residual blocks, respectively. The number of features for the generator was $128 \times M$ while for the discriminator $64 \times M$. The final convolution outputs $M-1$ channels corresponding to the missing filter bank detail coefficients.} \label{fig:network} \end{figure} Traditional design of $M$-channel PR filter banks involves the deliberate choice of a prototype low-pass filter $H_0$ such that modulations and alternations of the prototype produce the remaining filters for both the analysis and synthesis filter banks~\cite{strang1996wavelets}. $M$ is also chosen based on the restrictions of the problem at hand. However, for anisotropic 2D-acquired MRI, the slice profile \textit{is} the low-pass filter and as such we have a fixed, given $H_0$. The separation between slices is also given as $M$, which is equal to the FWHM of $h_0$ plus any further gap between slices. We use $A \bigoplus B$ to denote a FWHM of $A$~mm and slice gap of $B$~mm; and note that $M = A + B$.
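To make the analysis and synthesis mechanics above concrete, the NumPy sketch below runs a signal through the filter/decimate/upsample pipeline of Fig.~\ref{fig:obs_model}. The two-channel Haar bank stands in for the learned filters (in our setting $H_0$ is the slice profile and the remaining filters are learned), and the per-channel delays and advances are folded into the filters, so the input is recovered up to a one-sample shift:
\begin{verbatim}
import numpy as np

def analysis(x, h, M):
    # filter, then decimate: keep every Mth sample of the full convolution
    return np.convolve(x, h)[::M]

def synthesis(coeffs, filts, M, n_out):
    y = np.zeros(n_out + 8)            # small margin for filter tails
    for c, f in zip(coeffs, filts):
        u = np.zeros(M * len(c))       # zero-insertion upsampling
        u[::M] = c
        v = np.convolve(u, f)
        y[:v.size] += v                # sum across the M channels
    return y

M = 2
s = np.sqrt(2.0)
h = [np.array([1, 1]) / s, np.array([1, -1]) / s]   # analysis bank
f = [np.array([1, 1]) / s, np.array([-1, 1]) / s]   # synthesis bank

x = np.random.randn(256)
c = [analysis(x, hk, M) for hk in h]   # c[0]: coarse y, c[1]: detail d_1
xhat = synthesis(c, f, M, len(x))
print(np.allclose(xhat[1:len(x) + 1], x))   # True: PR up to a 1-sample delay
\end{verbatim}
The coarse channel alone cannot reproduce $x$; it is the detail channel, aliasing included, that the synthesis bank needs for exact recovery, which is precisely the quantity Stage~2 regresses.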
For this preliminary work, we assume $A$, $B$, and $M$ are all integer and, without loss of generality, assume that the in-plane resolution is $1 \bigoplus 0$. Our goal is to estimate filters $H_1, \ldots, H_{M-1}$ and $F_0, \ldots, F_{M-1}$ and the detail coefficients $d_1, \ldots, d_{M-1}$ which lead to PR of $x$. We approach this problem in two stages for stability. In Stage~1, we approximate the missing analysis and synthesis filters, assuming there exists a set of filters to complete the $M$-channel PR filter bank given that $H_0$ and $M$ are fixed and known ahead of time. This must be learned first to establish the approximate PR filter bank conditions on the coefficient space. Then, in Stage~2, we perform a regression on the missing coefficients. Both of these stages are optimized in a data-driven end-to-end fashion with gradient descent. After both stages have trained, our method is applied by regressing $d_1, \ldots, d_{M-1}$ from $y$ and feeding all coefficients through the synthesis bank, producing $\hat{x}$, our estimate of the HR signal. The Stage~2 coefficient regression occurs in 2D, so we construct our estimate of the 3D volume by averaging stacked 2D predictions from the synthesis bank from both through-plane axes. \textbf{Stage 1: Filter Optimization}\qquad{}Previous works assumed the slice profile was Gaussian with FWHM equal to the slice thickness~\cite{zhao2020smore,oktay2016multi}; we estimate the slice profile, $H_0$, directly with~\cite{han2021mr}. We next want to estimate the filters $H_1, \ldots, H_{M-1}$ and $F_0, \ldots, F_{M-1}$. To do this, we learn the spatial representations $h_1, \ldots, h_{M-1}$ and $f_0, \ldots, f_{M-1}$ from 1D rows and columns drawn from the high-resolution in-plane slices of $y$, denoted $\mathcal{D}_1 = \{x_i\}_{i=1}^N$, $x_i \in \mathbb{R}^d$. We initialize $h_1, \ldots, h_{M-1}$ according to a cosine modulation~\cite{strang1996wavelets} of $h_0$, \begin{equation*} h_k[n] = h_0[n] \sqrt{\frac{2}{M}} \cos{\left( \left( k + \frac{1}{2} \right) \left( n + \frac{M + 1}{2} \right) \frac{\pi}{M}\right) }. \end{equation*} Accordingly, we initialize $f_k$ to $h_k$. We estimate $\hat{x}_{i}$ by passing $x_i$ through the analysis and synthesis banks, then (since the entire operation is differentiable) step $h_k$ and $f_k$ through gradient descent. The reconstruction error is measured with mean squared error loss, and the filters are updated with the AdamW optimizer~\cite{loshchilov2017decoupled} at a learning rate of $0.1$ and a one-cycle learning rate scheduler~\cite{smith2019super} over $100,000$ steps with a batch size of $32$. \textbf{Stage 2: Coefficients}\qquad{}From Stage~1, we have the analysis and synthesis banks and now estimate the missing detail coefficients given only the LR observation $y$. With the correct coefficients and synthesis filters, PR of $x$ is possible. For this stage, we chose 2D patches in spite of the 1D SR problem as a type of ``neighborhood regularization''. Let $\mathcal{D}_2 = \{x_i\}_{i=1}^N$, $x_i \in \mathbb{R}^{p \times pM}$; i.e., the training set for Stage~2 consists of 2D $p \times pM$ patches drawn from the in-plane slices of $y$. The second dimension will be decimated by $M$ after passing through the analysis banks, equating to $y, d_1, \ldots, d_{M-1} \in \mathbb{R}^{p \times p}$.
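To make these dimensions concrete, the following sketch applies an analysis bank along the second axis of one such patch; the filter values here are random placeholders (in practice they come from Stage~1), and the per-channel delays of Fig.~\ref{fig:obs_model} are omitted for brevity:
\begin{verbatim}
import numpy as np
from scipy.ndimage import convolve1d

p, M, L = 32, 4, 9
bank = [np.random.randn(L) for _ in range(M)]   # stand-ins for h_0..h_{M-1}

patch = np.random.randn(p, p * M)               # one element of D_2

# Filter along the second axis, then keep every Mth column
coeffs = [convolve1d(patch, hk, axis=1)[:, ::M] for hk in bank]
y, *details = coeffs
print(y.shape, len(details), details[0].shape)  # (32, 32) 3 (32, 32)
\end{verbatim}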
We use the analysis bank (learned in Stage~1) to create training pairs $\{(y_i, (d_1, d_2, \ldots, d_{M-1})_i)\}_{i=1}^N$ and fit a convolutional neural network (CNN) $G: \mathbb{R}^{p \times p} \rightarrow \mathbb{R}^{p \times p \times (M-1)}$ to map $y_i$ to $(d_1, \ldots, d_{M-1})_i$. Since this is an image-to-image translation task, we adopt the approach proposed in Pix2Pix~\cite{pix2pix2017} and include an adversarial patch discriminator. Empirically, we found more learnable parameters are needed with greater $M$. Thus, our generator $G$ is a CNN illustrated in Fig.~\ref{fig:network} with $16$ residual blocks and $128 \times M$ kernels of size $3\times 3$ per convolutional layer. The discriminator $D$ has the same architecture but with only $2$ residual blocks and $64 \times M$ kernels per convolutional layer. Our final loss function for Stage~2 is identical to the loss proposed in~\cite{pix2pix2017} and is calculated on the error in $d_k$. We use the AdamW optimizer~\cite{loshchilov2017decoupled} with a learning rate of $10^{-4}$ and the one-cycle learning rate scheduler~\cite{smith2019super} for $500,000$ steps at a batch size of $32$. \section{Experiments and Results} \begin{table}[!tb] \centering \caption{Mean $\pm$ std. dev. of volumetric PSNR values for Stage~1 reconstruction of the $10$ subjects. LR indicates a reconstruction of the input low-resolution volume and GT the ground truth volume. $\text{Ax}_i$ corresponds to in-plane while $\text{Sag}_i$ and Cor to through-plane reconstruction along direction $i$.} \label{tab:autoencoding} \begin{tabular}{c|c|c|c|c|c} \toprule \hspace*{8ex} & LR $\text{Ax}_0$ & LR $\text{Ax}_1$ & GT $\text{Sag}_0$ & GT $\text{Sag}_1$ & GT $\text{Cor}$\\ \cmidrule{1-6} $2\bigoplus0$ & $62.24\pm 0.97$ & $60.19\pm 3.74$ & $60.63\pm 0.56$ & $59.59\pm 2.54$ & $55.47\pm 4.69$\\ $2\bigoplus1$ & $63.01\pm 4.91$ & $62.25\pm 5.09$ & $64.32\pm 0.63$ & $59.49\pm 5.52$ & $53.81\pm 6.50$\\ $2\bigoplus2$ & $62.57\pm 1.59$ & $57.93\pm 5.32$ & $60.62\pm 1.34$ & $59.31\pm 3.65$ & $52.09\pm 4.34$\\ \cmidrule{1-6} $4\bigoplus0$ & $55.47\pm 3.81$ & $52.36\pm 5.32$ & $48.91\pm 4.65$ & $48.77\pm 4.68$ & $44.08\pm 4.78$\\ $4\bigoplus1$ & $53.03\pm 1.54$ & $50.31\pm 3.41$ & $44.19\pm 1.57$ & $45.65\pm 1.63$ & $44.28\pm 2.14$\\ $4\bigoplus2$ & $54.71\pm 2.61$ & $51.08\pm 4.51$ & $46.75\pm 2.83$ & $46.39\pm 3.27$ & $43.27\pm 2.80$\\ \cmidrule{1-6} $6\bigoplus0$ & $49.97\pm 1.07$ & $40.18\pm 4.77$ & $40.14\pm 1.35$ & $41.04\pm 1.40$ & $35.76\pm 3.19$\\ $6\bigoplus1$ & $52.35\pm 0.55$ & $45.69\pm 5.24$ & $42.11\pm 0.84$ & $42.74\pm 1.25$ & $39.76\pm 3.47$\\ $6\bigoplus2$ & $53.17\pm 3.17$ & $49.11\pm 3.41$ & $43.66\pm 4.12$ & $44.87\pm 3.99$ & $41.50\pm 2.29$\\ \bottomrule \end{tabular} \end{table} \textbf{Experiments}\qquad{}We evaluated our method on $30$ T1-weighted MR brain volumes from the OASIS-3 dataset. We simulated LR acquisition via convolution with a Gaussian kernel with FWHM $\in \{2, 4, 6\}$ and slice gap $\in \{0, 1, 2\}$; nine combinations of FWHM and slice gap in total. The HR plane was axial while the LR planes were sagittal and coronal. To validate Stage~1, which learns 1D analysis and synthesis filters, we measured reconstruction of in-plane slices by stacking 1D signals, then stacked these slices into a volume to calculate PSNR. We perform this for the simulated LR volume along both in-plane axes (to judge in-plane training efficacy) as well as for the HR volume along the four through-plane 1D directions (to judge generalization to through-plane features).
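The simulated acquisition just described can be sketched as follows; the helper names are ours, and the exact kernel truncation used in the experiments is an assumption:
\begin{verbatim}
import numpy as np

def gaussian_slice_profile(fwhm):
    # FWHM -> sigma; kernel sampled at the 1 mm in-plane spacing
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    n = 2 * int(4 * sigma) + 1           # truncation is our choice
    t = np.arange(n) - n // 2
    k = np.exp(-0.5 * (t / sigma) ** 2)
    return k / k.sum()

def simulate_lr(volume, fwhm, gap, axis=2):
    # strided 1D convolution along the through-plane axis; M = FWHM + gap
    h0 = gaussian_slice_profile(fwhm)
    blur = np.apply_along_axis(
        lambda s: np.convolve(s, h0, mode='same'), axis, volume)
    M = fwhm + gap
    idx = np.arange(0, volume.shape[axis], M)
    return np.take(blur, idx, axis=axis)

hr = np.random.rand(64, 64, 64)          # stand-in for an isotropic volume
lr = simulate_lr(hr, fwhm=4, gap=1)      # the 4 (+) 1 configuration
print(lr.shape)                          # (64, 64, 13)
\end{verbatim}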
To evaluate Stage~2, we compared our method to two approaches which also do not rely on external training data: cubic b-spline interpolation and SMORE~\cite{zhao2020smore}, a state-of-the-art self-super-resolution technique for anisotropic MR volumes. To help improve SMORE, it is trained not with a Gaussian slice profile assumption but with the same slice profile estimate that we use, based on the estimate from~\cite{han2021mr}. \textbf{Stage 1 Results}\qquad{} \begin{figure}[!tb] \centering \includegraphics[width=\textwidth]{figs/example_filters.png} \caption{Estimated PR filters from Stage~1 for a single subject at $4\bigoplus 0$ resolution in the frequency domain. Note the amplitude for analysis and synthesis banks are on different scales, DC is centered, and $h_0$ is estimated by~\cite{han2021mr}.} \label{fig:example_filters} \end{figure} An example of learned filters for one resolution, $4\bigoplus 0$, is illustrated in the frequency domain in Fig.~\ref{fig:example_filters}. Recall that the fixed filter $h_0$ is the slice selection profile. Note that the optimization process led to approximations of bandpass filters. Stage~1 reconstruction is executed in 1D and there are two ways to extract 1D signals from a 2D plane. In Table~\ref{tab:autoencoding}, we show the mean reconstruction PSNR $\pm$ standard deviation for each of these extraction directions at each resolution per plane, demonstrating the filter bank's reconstruction resilience to features across the axes. If we had attained PR filters, the PSNR would be $\infty$; our estimates fall short of this. We trained Stage~1 using both 1D directions from in-plane data, represented by $\text{Ax}_0$ and $\text{Ax}_1$; these columns indicate the efficacy of our optimized filters for reconstruction of the training data. For Stage~2, we fit a model to estimate the missing coefficients, but we also assumed that the correct coefficients would recover the HR volume. To this end, we evaluate reconstruction of the ground truth (GT) volume as a sort of ``upper bound'' on estimation. We ensembled the SR prediction by averaging stacked slices from both through-planes; thus, we represent both 1D directions of both through-planes as $\text{Sag}_0$, $\text{Sag}_1$, and $\text{Cor}$. Note that the redundant 1D direction is omitted: if a volume is represented by $x$, $y$, and $z$ axes, 1D signals along $z$ from the $x$--$z$ plane are the same as 1D signals along $z$ in the $y$--$z$ plane. \begin{figure}[!tb] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{figs/n=30psnr.pdf} \caption{} \label{fig:psnr} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{figs/n=30ssim.pdf} \caption{} \label{fig:ssim} \end{subfigure} \caption{Quantitative metrics PSNR in \textbf{(a)} and SSIM in \textbf{(b)}, computed over the $30$ image volumes. Significance tests performed with the Wilcoxon signed rank test; $\ast$ denotes $p$-values $ < 0.05$; ``ns'' stands for ``not significant'', $p$-values $\ge 0.05$. Note that $A\bigoplus B$ denotes a FWHM of $A$~mm and slice gap of $B$~mm.} \label{f:oasis30} \end{figure} \begin{figure}[!tb] \centering \includegraphics[ width=\textwidth, page=1, trim=4cm 0cm 7.5cm 0cm, clip, ]{figs/multi_figs.pdf} \caption{Mid-sagittal slice for a representative subject at different resolutions and gaps for each method. The low resolution column is digitally upsampled with k-space zero-filling.
\begin{figure}[!tb]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/n=30psnr.pdf}
\caption{}
\label{fig:psnr}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/n=30ssim.pdf}
\caption{}
\label{fig:ssim}
\end{subfigure}
\caption{Quantitative metrics PSNR in \textbf{(a)} and SSIM in \textbf{(b)}, computed over the $30$ image volumes. Significance tests were performed with the Wilcoxon signed-rank test; $\ast$ denotes $p$-values $< 0.05$ and ``ns'' stands for ``not significant'', i.e., $p$-values $\ge 0.05$. Note that $A\oplus B$ denotes a FWHM of $A$~mm and slice gap of $B$~mm.}
\label{f:oasis30}
\end{figure}

\begin{figure}[!tb]
\centering
\includegraphics[width=\textwidth, page=1, trim=4cm 0cm 7.5cm 0cm, clip]{figs/multi_figs.pdf}
\caption{Mid-sagittal slice for a representative subject at different resolutions and gaps for each method. The low-resolution column is digitally upsampled with k-space zero-filling. $A\oplus B$ signifies a slice thickness of $A$~mm and a gap of $B$~mm. Fourier magnitude is displayed in dB on every other row. The top two rows correspond to $2\oplus 0$ for the MR slice and Fourier space, the second two rows to $4\oplus 1$, and the bottom two rows to $6\oplus 2$.}
\label{fig:qualitative}
\end{figure}

\textbf{Stage 2 Results}\qquad{}PSNR and SSIM were calculated on entire volumes. Box plots of PSNR and SSIM are shown in Fig.~\ref{f:oasis30}. We also show a mid-sagittal slice in Fig.~\ref{fig:qualitative} of a representative subject at $2\oplus 0$, $4\oplus 1$, and $6\oplus 2$. This subject is near the median PSNR value across the $30$ subjects evaluated in our experiments; SMORE outperforms our method at $2\oplus 0$, is on par with our method at $4\oplus 1$, and is outperformed by our method at $6\oplus 2$. Also shown in Fig.~\ref{fig:qualitative} is the corresponding Fourier space. We see that our proposed method includes more high frequencies than the other methods.

\begin{figure}[!tb]
\centering
\includegraphics[width=\textwidth, page=3, trim=1cm 38cm 25cm 0cm, clip]{figs/multi_figs.pdf}
\caption{(First row) Coefficient transformation at $4\oplus 0$ using the learned analysis bank on the ground truth. (Second row) Observation $y$ and estimated spatial detail coefficients $G(y) = (d_1, d_2, d_3)$.}
\label{fig:example_coefs}
\end{figure}

In Fig.~\ref{fig:example_coefs}, we visualize the filter bank coefficients for a single subject at $4\oplus 0$, nearest-neighbor interpolated to isotropic digital resolution. The estimated detail coefficients are of similar contrast to the ground truth but exhibit less visible aliasing. Aliasing in the coefficient space is necessary, because the synthesis bank is designed to cancel it; the absence of aliasing in the estimated coefficients therefore diminishes the reconstruction quality. This also suggests that aliasing may be difficult for our model to learn.
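For reference, the evaluation protocol used in this section (volumetric PSNR and SSIM, and the paired Wilcoxon signed-rank test of Fig.~\ref{f:oasis30}) can be sketched as follows. The toy volumes and scores are placeholders, not our data, and we assume the scikit-image and SciPy implementations of these metrics.
\begin{verbatim}
import numpy as np
from scipy.stats import wilcoxon
from skimage.metrics import (peak_signal_noise_ratio,
                             structural_similarity)

def evaluate(gt, pred):
    # Volumetric PSNR/SSIM against the ground-truth volume.
    rng = gt.max() - gt.min()
    return (peak_signal_noise_ratio(gt, pred, data_range=rng),
            structural_similarity(gt, pred, data_range=rng))

gt = np.random.rand(32, 32, 32)            # toy ground-truth volume
pred = gt + 0.01 * np.random.randn(32, 32, 32)
psnr, ssim = evaluate(gt, pred)

# Paired per-subject PSNRs for two methods (placeholders, not our data).
ours = np.random.normal(30.0, 1.0, size=30)
smore = np.random.normal(29.5, 1.0, size=30)
stat, p = wilcoxon(ours, smore)            # paired signed-rank test
print("*" if p < 0.05 else "ns")
\end{verbatim}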
\section{Discussion and conclusions}
Under this framework, we make explicit what information is missing given an LR observation and aim to estimate it directly. However, we do not, in all situations, outperform end-to-end methods such as SMORE, which provide a deep network with the LR input and the HR output and ask it to learn the mapping between them. Importantly, however, our approach rests on an explicit theoretical foundation that such end-to-end methods lack. Moreover, with additional refinement, PR is potentially achievable with our method. As the resolution worsens and the slice gap increases, our proposed method handles the task better than SMORE.

We have presented a novel formulation of SR for 2D-acquired anisotropic MR volumes as the regression of missing detail coefficients in an $M$-channel PR filter bank. In theory, these coefficients exist and give exact recovery of the underlying HR signal. However, it is unknown whether a mapping $y \rightarrow (d_1, \ldots, d_{M-1})$ exists, and whether it is possible to find filters that complete the analysis and synthesis banks to guarantee PR. In practice, we estimate these in two stages: Stage~1 estimates the missing analysis and synthesis filters towards PR, and Stage~2 trains a CNN to regress the missing detail coefficients given the coarse approximation $y$.

Future work will include: 1)~a deeper investigation into the limits of the training set for learning the regression (is internal training sufficient?); 2)~an assessment of the degree to which the mapping $G$ is valid (is the image-to-image translation task possible?); 3)~further analysis of the frequency space of the results (does simple noise suffice to increase resolution in the Fourier domain, or are the high frequencies ``real''?); and 4)~exploring constraints during training to guide Stages~1 and~2 towards PR. Improved estimation accuracy is also a future goal: true PR filter banks would greatly improve the method, since Table~\ref{tab:autoencoding} serves as a type of ``upper bound''; regardless of the quality of the coefficient regression, even the ideal ground truth coefficients yield limited reconstruction accuracy. Additionally, further investigation into improved regression is needed---a model which can better capture the necessary aliasing in the coefficient domain is vital for PR.

\bibliographystyle{splncs04}
Under this formulation, we do not change Eq.~\ref{eq:inverse} but explicitly recognize our observation $y$ as the ``coarse approximation'' filter bank coefficients and the missing information necessary to recover $x$ as the ``detail'' coefficients; see Fig.~\ref{fig:obs_model}. For reference, in machine learning jargon, the analysis bank is an encoder, the synthesis bank a decoder, and the coarse approximation and detail coefficients analogous to a ``latent space''. \begin{figure}[!tb] \centering \includegraphics[width=\textwidth]{figs/filter_bank.png} \caption{The LR input $y$ occurs after a convolution of the unobserved HR $x$ with slice profile $H_0$ and the fixed downsampling factor $M$. Both $y$ and $H_0$ (green) are given and fixed. In Stage 1, filters $H_1, \ldots, H_{M-1}$ and $F_0, F_1, \ldots, F_{M-1}$ are learned; in Stage 2, a mapping from $y$ to $d_1, \ldots, d_{M-1}$ is learned.} \label{fig:obs_model} \end{figure} The primary contribution of this work is to reformulate SR to isotropy of 2D-acquired MR volumes as a filter bank regression framework. The proposed framework has several benefits. First, the observed low-frequency information is untouched in the reconstruction; thus our method explicitly synthesizes the missing high frequencies and does not need to learn to preserve acquired low frequency information. Second, in our framework, the downsampling factor is $M$, specifying the number of channels in the $M$-channel filter bank and allowing us to attribute more constrained parameters in the ``slice gap'' acquisition recovery. Third, the analysis filters of PR filter banks necessarily introduce aliasing which is canceled via the synthesis filters; therefore, we do not need to directly handle the anti-aliasing of the observed image. Fourth, our Stage~2 architecture has a dynamic capacity for lower-resolution images. The rationale behind the dynamic capacity is intuitive: when fewer measurements are taken, more estimates must be done in recovery and a more robust model is necessitated. Fifth, our method exploits the nature of anisotropic volumetric data; the in-plane slices are HR while the through-plane slices are LR. Thus, we do not rely on external training data, as we only need the in-plane HR data to perform internal supervision. In the remainder of the paper, we describe this framework in detail, provide practical implementation, and evaluate against a state-of-the-art internally supervised SR technique. We demonstrate the feasibility of formulating SR as filter bank coefficient regression and believe it lays the foundation for future theoretical and experimental work in SR of MR images. \section{Methods} The analysis bank, $H$, and synthesis bank, $F$, each consist of $M$ 1D filters represented in the $z$-domain as $H_k$ and $F_k$, respectively, with corresponding spatial domain representations $h_k$ and $f_k$. As illustrated in Fig.~\ref{fig:obs_model}, input signal $X(z) = \mathcal{Z}(x)$\footnote{\url{https://en.wikipedia.org/wiki/Z-transform}} is delayed by $z^{-k}$, filtered by $H_k$, and decimated with $\downarrow M$ (keeping only the $M^\text{th}$ entry) to produce the corresponding coefficients. These coefficients exhibit aliasing and distortion which are corrected by the synthesis filters~\cite{strang1996wavelets}. Reconstruction from coefficients comes from zero-insertion upsampling with $\uparrow M$, passing through filters $F_k$, advancing by $z^k$, and summing across the $M$ channels. 
\begin{figure}[!tb] \centering \includegraphics[ width=\textwidth, page=2, trim=3cm 37cm 25cm 7cm, clip, ]{figs/multi_figs.pdf} \caption{This Stage~2 network architecture has the same structure for both the generator and discriminator but with different hyperparameters. All convolutional layers use a $3\times 3$ kernel. The generator and discriminator use $16$ and $2$ residual blocks, respectively. The number of features for the generator was $128 \times M$ while for the discriminator $64 \times M$. The final convolution outputs $M-1$ channels corresponding to the missing filter bank detail coefficients.} \label{fig:network} \end{figure} Traditional design of $M$-channel PR filter banks involves deliberate choice of a prototype low-pass filter $H_0$ such that modulations and alternations of the prototype produce the remaining filters for both the analysis and synthesis filter banks~\cite{strang1996wavelets}. $M$ is also chosen based on the restrictions of the problem at hand. However, for anisotropic 2D-acquired MRI, the slice profile \textit{is} the low-pass filter and as such we have a fixed, given $H_0$. The separation between slices is also given as $M$, which is equal to the FWHM of $h_0$ plus any further gap between slices. We use $A \bigoplus B$ to denote a FWHM of $A$~mm and slice gap of $B$~mm and note that $M = A + B$. For this preliminary work, we assume $A, B$, and $M$ are all integer and, without loss of generality, assume that the in-plane resolution is $1 \bigoplus 1$. Our goal is to estimate filters $H_1, \ldots, H_{M-1}$ and $F_0, \ldots, F_{M-1}$ and the detail coefficients $d_1, \ldots, d_{M-1}$ which lead to PR of $x$. We approach this problem in two stages for stability. In Stage~1, we approximate the missing analysis and synthesis filters, assuming there exists a set of filters to complete the $M$-channel PR filter bank given that $H_0$ and $M$ are fixed and known ahead of time. These must be learned first to establish the approximate PR filter bank conditions on the coefficient space. Then, in Stage~2, we perform a regression on the missing coefficients. Both of these stages are optimized in a data-driven end-to-end fashion with gradient descent. After training, our method is applied by regressing $d_1, \ldots, d_{M-1}$ from $y$ and feeding all coefficients through the synthesis bank, producing $\hat{x}$, our estimate of the HR signal. The Stage~2 coefficient regression occurs in 2D, so we construct our estimate of the 3D volume by averaging stacked 2D predictions from the synthesis bank from both through-plane axes. \textbf{Stage 1: Filter Optimization}\qquad{}Previous works assumed the slice profile was Gaussian with FWHM equal to the slice separation~\cite{zhao2020smore,oktay2016multi}; we estimate the slice profile, $H_0$, directly with ESPRESO~\cite{han2021mr}. We next aim to estimate the filters $H_1, \ldots, H_{M-1}$ and $F_0, \ldots, F_{M-1}$. To achieve this, we learn the spatial representations $h_1, \ldots, h_{M-1}$ and $f_0, \ldots, f_{M-1}$ from 1D rows and columns drawn from the high resolution in-plane slices of y, denoted $\mathcal{D}_1 = \{x_i\}_{i=1}^N$, $x_i \in \mathbb{R}^d$. We initialize $h_1, \ldots, h_{M-1}$ according to a cosine modulation~\cite{strang1996wavelets} of $h_0$, which is defined as \begin{equation*} h_k[n] = h_0[n] \sqrt{\frac{2}{M}} \cos{\left[ \left( k + \frac{1}{2} \right) \left( n + \frac{M + 1}{2} \right) \frac{\pi}{M}\right] }. \end{equation*} Accordingly, we initialize $f_k$ to $h_k$. 
We estimate $\hat{x}_{i}$ by passing $x_i$ through the analysis and synthesis banks, then (since the entire operation is differentiable) step $h_k$ and $f_k$ through gradient descent. The reconstruction error is measured with mean squared error loss and the filters updated based on the AdamW~\cite{loshchilov2017decoupled} optimizer with a learning rate set of $0.1$ and with one-cycle learning rate scheduler~\cite{smith2019super} of $100,000$ steps with a batch size of $32$. \textbf{Stage 2: Coefficient Regression}\qquad{}From Stage~1, we have the analysis and synthesis banks and now estimate the missing detail coefficients given only the LR observation $y$. With the correct coefficients and synthesis filters, PR of $x$ is possible. For this stage, we chose 2D patches in spite of the 1D SR problem as a type of ``neighborhood regularization''. Let $\mathcal{D}_2 = \{x_i\}_{i=1}^N$, $x_i \in \mathbb{R}^{p \times pM}$; i.e.: the training set for Stage~2 consists of 2D $p \times pM$ patches drawn from the in-plane slices of $y$. The second dimension will be decimated by $M$ after passing through the analysis banks, resulting in $y, d_1, \ldots, d_{M-1} \in \mathbb{R}^{p \times p}$. We use the analysis bank (learned in Stage~1) to create training pairs $\{(y_i, (d_1, d_2, \ldots, d_{M-1})_i\}_{i=1}^N$ and fit a convolutional neural network (CNN) $G: \mathbb{R}^{p \times p} \rightarrow \mathbb{R}^{{p \times p}^{M-1}}$ to map $y_i$ to $(d_1, \ldots, d_{M-1})_i$. Since this is an image-to-image translation task, we adopt the widely used approach proposed in Pix2Pix~\cite{pix2pix2017} including the adversarial patch discriminator. Empirically, we found more learnable parameters are needed with greater $M$. Thus, our generator $G$ is a CNN illustrated in Fig.~\ref{fig:network} with $16$ residual blocks and $128 \times M$ kernels of size $3\times 3$ per convolutional layer. The discriminator $D$ has the same architecture but with only $2$ residual blocks and $64 \times M$ kernels per convolutional layer. Our final loss function for Stage~2 is identical to the loss proposed in~\cite{pix2pix2017} and is calculated on the error in $d_k$. We use the AdamW optimizer~\cite{loshchilov2017decoupled} with a learning rate of $10^{-4}$ and the one-cycle learning rate scheduler~\cite{smith2019super} for $500,000$ steps at a batch size of $32$. \section{Experiments and Results} \begin{table}[!tb] \centering \caption{Mean $\pm$ std. dev. of volumetric PSNR values for Stage~1 reconstruction of the $10$ subjects. LR indicates a reconstruction of the input low-resolution volume and GT the ground truth volume. 
$\text{Ax}_i$ corresponds to in-plane while $\text{Sag}_i$ and Cor to through-plane reconstruction along direction $i$.} \label{tab:autoencoding} \begin{tabular}{c|c|c|c|c|c} \toprule \hspace*{9ex} & LR $\text{Ax}_0$ & LR $\text{Ax}_1$ & GT $\text{Sag}_0$ & GT $\text{Sag}_1$ & GT $\text{Cor}$\\ \cmidrule{1-6} $2\bigoplus0$& ~$62.24\pm 0.97$~ & ~$60.19\pm 3.74$~ & ~$60.63\pm 0.56$~ & ~$59.59\pm 2.54$~ & ~$55.47\pm 4.69$\\ $2\bigoplus1$ & $63.01\pm 4.91$ & $62.25\pm 5.09$ & $64.32\pm 0.63$ & $59.49\pm 5.52$ & $53.81\pm 6.50$\\ $2\bigoplus2$ & $62.57\pm 1.59$ & $57.93\pm 5.32$ & $60.62\pm 1.34$ & $59.31\pm 3.65$ & $52.09\pm 4.34$\\ \cmidrule{1-6} $4\bigoplus0$ & $55.47\pm 3.81$ & $52.36\pm 5.32$ & $48.91\pm 4.65$ & $48.77\pm 4.68$ & $44.08\pm 4.78$\\ $4\bigoplus1$ & $53.03\pm 1.54$ & $50.31\pm 3.41$ & $44.19\pm 1.57$ & $45.65\pm 1.63$ & $44.28\pm 2.14$\\ $4\bigoplus2$ & $54.71\pm 2.61$ & $51.08\pm 4.51$ & $46.75\pm 2.83$ & $46.39\pm 3.27$ & $43.27\pm 2.80$\\ \cmidrule{1-6} $6\bigoplus0$ & $49.97\pm 1.07$ & $40.18\pm 4.77$ & $40.14\pm 1.35$ & $41.04\pm 1.40$ & $35.76\pm 3.19$\\ $6\bigoplus1$ & $52.35\pm 0.55$ & $45.69\pm 5.24$ & $42.11\pm 0.84$ & $42.74\pm 1.25$ & $39.76\pm 3.47$\\ $6\bigoplus2$ & $53.17\pm 3.17$ & $49.11\pm 3.41$ & $43.66\pm 4.12$ & $44.87\pm 3.99$ & $41.50\pm 2.29$\\ \bottomrule \end{tabular} \end{table} \begin{figure}[!tb] \centering \includegraphics[width=\textwidth]{figs/example_filters.png} \caption{Estimated PR filters from Stage~1 for a single subject at $4\bigoplus 0$ resolution in the frequency domain. Note the amplitudes for analysis and synthesis banks are on different scales, DC is centered, and $h_0$ is estimated by ESPRESO~\cite{han2021mr}.} \label{fig:example_filters} \end{figure} \begin{figure}[!tb] \centering % \begin{tabular}{c c c} \includegraphics[width=0.45\textwidth]{figs/n=30psnr.pdf} && \includegraphics[width=0.45\textwidth]{figs/n=30ssim.pdf} \\[-0.75em] % \textbf{(a)} && \textbf{(b)}\\[-.75em] % \end{tabular} \caption{Quantitative metrics PSNR in \textbf{(a)} and SSIM in \textbf{(b)}, computed over the $30$ image volumes. Significance tests performed with the Wilcoxon signed rank test; $\ast$ denotes $p$-values $ < 0.05$; ``ns'' stands for ``not significant''.} % \label{f:oasis30} \end{figure} \begin{figure}[!tbp] % \centering % \includegraphics[ width=0.9\textwidth, page=1, trim=4cm 0cm 7.5cm 0cm, clip, ]{figs/multi_figs.pdf} % \caption{Mid-sagittal slice for a representative subject at different resolutions and gaps for each method. The low resolution column is digitally upsampled with k-space zero-filling. $A\bigoplus B$ signifies a slice thickness of $A$~mm and a gap of $B$~mm. Fourier magnitude is displayed in dB on every other row. The top two rows correspond to $2\bigoplus 0$ for the MR slice and Fourier space, the second two rows are for $4\bigoplus 1$, and the bottom two rows are for $6\bigoplus 2$. } % \label{fig:qualitative} % \end{figure} \textbf{Experiments}\qquad{}We evaluated our method on $30$ T1-weighted MR brain volumes from the OASIS-3 dataset~\cite{LaMontagne2019_OASIS3}. We simulated LR acquisition via convolution with a Gaussian kernel with FWHM $\in \{2, 4, 6\}$ and slice gap $\in \{0, 1, 2\}$; nine combinations of FHWM and slice gap in total. The HR plane was axial while the LR planes were sagittal and coronal. To validate Stage~1, which learns 1D analysis and synthesis filters, we measured reconstruction of in-plane slices by stacking 1D signals, then stacked these slices into a volume to calculate PSNR. 
We performed this for the simulated LR volume along both in-plane axes (to judge in-plane training efficacy) as well as for the HR volume along four through-plane axes (to judge through-plane testing efficacy). To evaluate Stage~2, we compared our method to two approaches which also do not rely on external training data: cubic b-spline interpolation and SMORE~\cite{zhao2020smore}, a state-of-the-art self-super-resolution technique for anisotropic MR volumes. For a fair comparison and improving SMORE results, SMORE was trained with the same slice profile that we use instead of a Gaussian slice profile used in the original paper, based on the ESPRESO estimate~\cite{han2021mr}. \textbf{Stage 1 Results}\qquad{} An example of learned filters in the frequency domain for one resolution, $4\bigoplus 0$, is shown in Fig.~\ref{fig:example_filters}. Recall that the fixed filter $h_0$ is the slice selection profile. Note our optimization approximated bandpass filters. Stage~1 reconstruction is executed in 1D, but there are two ways to extract 1D signals from a 2D plane. In Table~\ref{tab:autoencoding}, we show the mean reconstruction PSNR $\pm$ std. dev. for both of these extraction directions at each resolution per plane, demonstrating the filter bank's reconstruction resilience to features across the axes. If we had attained PR filters, the PSNR would be $\infty$; our estimates fall short of this. We trained Stage~1 using both 1D directions from in-plane data, represented by $\text{Ax}_0$ and $\text{Ax}_1$; in Table~\ref{tab:autoencoding} these columns indicate the efficacy of our optimized filters for reconstruction of the training data. For Stage~2, we fit a model to estimate the missing coefficients, but we also assumed that the correct coefficients will recover the HR volume. To this end, we evaluate reconstruction of the ground truth (GT) volume as a sort of ``upper bound'' on estimation. We ensembled the SR prediction by averaging stacked slices from both through-planes; thus, we represent both 1D directions of both through-planes as $\text{Sag}_0$, $\text{Sag}_1$, $\text{Cor}$. A redundant 1D direction is omitted: 1D signals along $z$ in the $xz$-plane are the same as 1D signals along $z$ in the $yz$-plane. \textbf{Stage 2 Results}\qquad{} PSNR and SSIM were calculated on entire volumes. Box plots of PSNR and SSIM are shown in Fig.~\ref{f:oasis30}. We also show a mid-sagittal slice in Fig.~\ref{fig:qualitative} of a representative subject at $2\bigoplus 0$, $4\bigoplus 1$, and $6\bigoplus 2$. This subject is near the median PSNR value for that resolution across the $30$ subjects evaluated in our experiments and for which SMORE outperforms our method at $2\bigoplus 0$, is on par with our method at $4\bigoplus 1$, and is outperformed by our method at $6\bigoplus 2$. Also shown in Fig.~\ref{fig:qualitative} is the corresponding Fourier space. We see that our proposed method includes more high frequencies than the other methods. \section{Discussion and conclusions} Under our proposed deep filter bank framework, we make explicit what information is missing given a LR observation and directly aim to estimate it. However, we do not, in all situations, outperform the end-to-end method SMORE, which provides a deep network the LR input and the HR output and asks it to learn a mapping. We would emphasize that our approach establishes a new theoretic basis for super-resolution. Moreover, with additional refinement, PR is potentially achievable with our proposed method. 
As the resolution worsens and slice gap increases, our proposed method better handles the task than SMORE in our experiments, validating the usefulness of our method for super resolving anisotropic MR images with large slice gaps. In this paper, we have presented a novel formulation for SR of 2D-acquired anisotropic MR volumes as the regression of missing detail coefficients in an $M$-channel PR filter bank. In theory, these coefficients exist and give exact recovery of the underlying HR signal. However, it is unknown whether a mapping of $y \rightarrow (d_1, \ldots, d_{M-1})$ exists, and whether it is possible to find filters to complete the analysis and synthesis banks to guarantee PR. In practice, we estimate these in two stages: Stage~1 estimates the missing analysis and synthesis filters towards PR and Stage~2 trains a CNN to regress the missing detail coefficients given the coarse approximation $y$. Future work will include: 1)~deeper investigation into the limits of the training set in learning the regression (is internal training sufficient?); 2)~the degree to which the mapping $G$ is valid (is the image-to-image translation task possible?); 3)~more analysis of the frequency space in the results (does simple noise suffice to increase resolution in the Fourier domain, or are the high frequencies ``real''?); and 4)~develop methods to exactly achieve or better approximate PR. True PR filter banks would greatly improve the method, as Table~\ref{tab:autoencoding} serves as a type of ``upper bound'' for our method; regardless of the quality of coefficient regression, even given the ideal ground truth coefficients, reconstruction accuracy would be limited. Additionally, further investigation into improved regression is needed---a model which can better capture the necessary aliasing in the coefficient domain is vital for PR. \bibliographystyle{splncs04} \section{Introduction} Anisotropic magnetic resonance (MR) images are those acquired with high in-plane resolution and low through-plane resolution. It is common practice to acquire anisotropic volumes in clinics as it reduces scan time and motion artifacts while preserving signal-to-noise ratio. To improve through-plane resolution, data-driven super-resolution~(SR) methods have been developed on MR volumes~\cite{zhao2020smore,oktay2016multi,chen2018efficient,du2020super}. The application of SR methods to estimate the underlying isotropic volume has been shown to lead to improved performance on downstream tasks~\cite{zhao2019applications}. For 2D multi-slice protocols, the through-plane point-spread function (PSF) is known as the slice profile. When the sampling step is an integer, the through-plane signals of an acquired MR image can be modeled as a strided 1D convolution between the slice profile and the object to be imaged~\cite{han2021mr,prince2006medical,sonderby2016amortised}. Commonly, the separation between slices is equivalent to the full-width-at-half-max~(FWHM) of the slice profile, but volumes can also be acquired where the slice separation is less than or greater than the slice profile FWHM, corresponding to ``slice overlap'' and ``slice gap'' scenarios, respectively. Data-driven SR methods usually simulate low-resolution (LR) data from high-resolution (HR) data using an assumed slice profile~\cite{zhao2020smore,oktay2016multi,chen2018efficient,du2020super}, or an estimated the slice profile according to the image data or acquisition~\cite{han2021mr}. 
In either case, neural SR methods are formulated as a classical inverse problem: \begin{equation} y = Ax, \label{eq:inverse} \end{equation} where $y$ is the LR observation, $A$ is the degradation matrix, and $x$ is the underlying HR image. Commonly, this is precisely how paired training data is created; HR data is degraded by $A$ to obtain the LR $y$ and weights $\theta$ of a neural network $\phi$ are learned such that $\phi_\theta(y) \approx x$. However, under this framework there is no specification of information lost by application of $A$; the model is end-to-end and directed only by the dataset. In our work, we propose an entirely novel SR framework based on perfect reconstruction~(PR) filter banks. From filter bank theory, PR of a signal $x$ is possible through an $M$-channel filter bank with a correct design of an analysis bank $H$ and synthesis bank $F$~\cite{strang_nguyen_1997}. Under this formulation, we do not change Eq.~\ref{eq:inverse} but explicitly recognize our observation $y$ as the ``coarse approximation'' filter bank coefficients and the missing information necessary to recover $x$ as the ``detail'' coefficients (see Fig.~\ref{fig:obs_model}). For reference, in machine learning jargon, the analysis bank is an encoder, the synthesis bank is a decoder, and the coarse approximation and detail coefficients are analogous to a ``latent space''. \begin{figure}[!tb] \centering \includegraphics[width=\textwidth]{figs/filter_bank.png} \caption{The filter bank observation model. Both of $y$ and $H_0$ (green) are given and fixed. In Stage 1, filters $H_1, \ldots, H_{M-1}$ and $F_0, F_1, \ldots, F_{M-1}$ are learned; in Stage 2, a mapping from $y$ to $d_1, \ldots, d_{M-1}$ is learned.} \label{fig:obs_model} \end{figure} The primary contribution of this work is to reformulate SR to isotropy of 2D-acquired MR volumes as a filter bank regression framework. The proposed framework has several benefits. First, the observed low-frequency information is untouched in the reconstruction; thus, our method explicitly synthesizes the missing high frequencies and does not need to learn to preserve acquired low frequency information. Second, in our framework, the downsampling factor is $M$, specifying the number of channels in the $M$-channel filter bank and allowing us to attribute more constrained parameters in the ``slice gap'' acquisition recovery. Third, the analysis filters of PR filter banks necessarily introduce aliasing which is canceled via the synthesis filters; therefore, we do not need to directly handle the anti-aliasing of the observed image. Fourth, our architecture has a dynamic capacity for lower-resolution images. The rationale behind the dynamic capacity is intuitive: when fewer measurements are taken, more estimates must be done in recovery and a more robust model is required. Fifth, our method exploits the nature of anisotropic volumetric data; the in-plane slices are HR while the through-plane slices are LR. Thus, we do not rely on external training data, as we only need the in-plane HR data to perform internal supervision. In the remainder of the paper, we describe this framework in detail, provide practical implementation, and evaluate against a state-of-the-art internally supervised SR technique. We demonstrate the feasibility of formulating SR as filter bank coefficient regression and believe it lays the foundation for future theoretical and experimental work in SR of MR images. 
\section{Methods} The analysis bank, $H$, and synthesis bank, $F$, each consist of $M$ 1D filters represented in the $z$-domain as $H_k$ and $F_k$, respectively, with corresponding spatial domain representations $h_k$ and $f_k$. As illustrated in Fig.~\ref{fig:obs_model}, input signal $X(z) = \mathcal{Z}(x)$\footnote{\url{https://en.wikipedia.org/wiki/Z-transform}} is delayed by $z^{-k}$, filtered by $H_k$, and decimated with $\downarrow M$ (keeping only the $M^\text{th}$ entry) to produce the corresponding coefficients. These coefficients exhibit aliasing and distortion which are corrected by the synthesis filters~\cite{strang_nguyen_1997}. Reconstruction from coefficients comes from zero-insertion upsampling with $\uparrow M$, passing through filters $F_k$, advancing by $z^k$, and summing across the $M$ channels. \begin{figure}[!tb] \centering \includegraphics[ width=0.9\textwidth, page=2, trim=3cm 37cm 25cm 7cm, clip, ]{figs/multi_figs.pdf} \caption{This network architecture, used in the second stage of our algorithm, has the same structure for both the generator and discriminator but with different hyperparameters. All convolutional layers use a $3\times 3$ kernel. The generator and discriminator use $16$ and $2$ residual blocks, respectively. The number of features for the generator was $128 \times M$ while for the discriminator $64 \times M$. The final convolution outputs $M-1$ channels corresponding to the missing filter bank detail coefficients.} \label{fig:network} \end{figure} Traditional design of $M$-channel PR filter banks involves deliberate choice of a prototype low-pass filter $H_0$ such that modulations and alternations of the prototype produce the remaining filters for both the analysis and synthesis filter banks~\cite{strang_nguyen_1997}. $M$ is also chosen based on the restrictions of the problem at hand. However, for anisotropic 2D-acquired MRI, the slice profile \textit{is} the low-pass filter and as such we have a fixed, given $H_0$. The separation between slices is also given as $M$, which is equal to the FWHM of $h_0$ plus any further gap between slices. We use $A \oplus B$ to denote a FWHM of $A$~mm and slice gap of $B$~mm and note that $M = A + B$. For this preliminary work, we assume $A, B$, and $M$ are all integer and, without loss of generality, assume that the in-plane resolution is $1 \oplus 1$. Our goal is to estimate filters $H_1, \ldots, H_{M-1}$ and $F_0, \ldots, F_{M-1}$ and the detail coefficients $d_1, \ldots, d_{M-1}$ which lead to PR of $x$. We approach this problem in two stages for stability. In Stage~1, we approximate the missing analysis and synthesis filters, assuming there exists a set of filters to complete the $M$-channel PR filter bank given that $H_0$ and $M$ are fixed and known ahead of time. These must be learned first to establish the approximate PR filter bank conditions on the coefficient space. Then, in Stage~2, we perform a regression on the missing coefficients. Both of these stages are optimized in a data-driven end-to-end fashion with gradient descent. After training, our method is applied by regressing $d_1, \ldots, d_{M-1}$ from $y$ and feeding all coefficients through the synthesis bank, producing $\hat{x}$, our estimate of the HR signal. The Stage~2 coefficient regression occurs in 2D, so we construct our estimate of the 3D volume by averaging stacked 2D predictions from the synthesis bank from both cardinal planes containing the through-plane axis. 
\textbf{Stage 1: Filter Optimization}\qquad{}Previous works assumed the slice profile is Gaussian with FWHM equal to the slice separation~\cite{zhao2020smore,oktay2016multi}; instead, we estimate the slice profile, $H_0$, directly with ESPRESO\footnote{\url{https://github.com/shuohan/espreso}}~\cite{han2021mr}. We next aim to estimate the filters $H_1, \ldots, H_{M-1}$ and $F_0, \ldots, F_{M-1}$. To achieve this, we learn the spatial representations $h_1, \ldots, h_{M-1}$ and $f_0, \ldots, f_{M-1}$ from 1D rows and columns drawn from the high resolution in-plane slices of y, denoted $\mathcal{D}_1 = \{x_i\}_{i=1}^N$, $x_i \in \mathbb{R}^d$. We initialize $h_1, \ldots, h_{M-1}$ according to a cosine modulation~\cite{strang_nguyen_1997} of $h_0$, which is defined as \begin{equation*} h_k[n] = h_0[n] \sqrt{\frac{2}{M}} \cos{\left[ \left( k + \frac{1}{2} \right) \left( n + \frac{M + 1}{2} \right) \frac{\pi}{M}\right] }. \end{equation*} Accordingly, we initialize $f_k$ to $h_k$. We estimate $\hat{x}_{i}$ by passing $x_i$ through the analysis and synthesis banks, then (since the entire operation is differentiable) step $h_k$ and $f_k$ through gradient descent. The reconstruction error is measured with mean squared error loss and the filters updated based on the AdamW~\cite{loshchilov2017decoupled} optimizer with a learning rate set of $0.1$ and with one-cycle learning rate scheduler~\cite{smith2019super} of $100,000$ steps with a batch size of $32$. \textbf{Stage 2: Coefficient Regression}\qquad{}From Stage~1, we have the analysis and synthesis banks and now estimate the missing detail coefficients given only the LR observation $y$. With the correct coefficients and synthesis filters, PR of $x$ is possible. For this stage, we use 2D patches in spite of the 1D SR problem as a type of ``neighborhood regularization''. Let $\mathcal{D}_2 = \{x_i\}_{i=1}^N$, $x_i \in \mathbb{R}^{p \times pM}$; i.e., the training set for Stage~2 consists of 2D $p \times pM$ patches drawn from the in-plane slices of $y$. The second dimension will be decimated by $M$ after passing through the analysis banks, resulting in $y, d_1, \ldots, d_{M-1} \in \mathbb{R}^{p \times p}$. We use the analysis bank (learned in Stage~1) to create training pairs $\{(y_i, (d_1, d_2, \ldots, d_{M-1})_i\}_{i=1}^N$ and fit a convolutional neural network (CNN) $G: \mathbb{R}^{p \times p} \rightarrow \mathbb{R}^{{p \times p}^{M-1}}$ to map $y_i$ to $(d_1, \ldots, d_{M-1})_i$. Since this is an image-to-image translation task, we adopt the widely used approach proposed in Pix2Pix~\cite{pix2pix2017} including the adversarial patch discriminator. Empirically, we found more learnable parameters are needed with greater $M$. Thus, our generator $G$ is a CNN illustrated in Fig.~\ref{fig:network} with $16$ residual blocks and $128 \times M$ kernels of size $3\times 3$ per convolutional layer. The discriminator $D$ has the same architecture but with only $2$ residual blocks and $64 \times M$ kernels per convolutional layer. Our final loss function for Stage~2 is identical to the loss proposed in~\cite{pix2pix2017} and is calculated on the error in $d_k$. We use the AdamW optimizer~\cite{loshchilov2017decoupled} with a learning rate of $10^{-4}$ and the one-cycle learning rate scheduler~\cite{smith2019super} for $500,000$ steps at a batch size of $32$. \section{Experiments and Results} \begin{table}[!tb] \centering \caption{Mean $\pm$ std. dev. of volumetric PSNR values for Stage~1 reconstruction of the $10$ subjects. 
``Self'' indicates a reconstruction of the input low-resolution volume on which the filter bank was optimized, while ``GT'' indicates reconstruction of the isotropic ground truth volume. (L-R) is the left-to-right direction, (A-P) is the anterior-to-posterior direction, and (S-I) is the superior-to-inferior direction.} \label{tab:autoencoding} \begin{tabular}{c|c|c|c|c|c} \toprule \hspace*{9ex} & Self (L-R) & Self (A-P) & GT (L-R) & GT (A-P) & GT (S-I)\\ \cmidrule{1-6} $2\oplus0$& ~$62.24\pm 0.97$~ & ~$60.19\pm 3.74$~ & ~$60.63\pm 0.56$~ & ~$59.59\pm 2.54$~ & ~$55.47\pm 4.69$\\ $2\oplus1$ & $63.01\pm 4.91$ & $62.25\pm 5.09$ & $64.32\pm 0.63$ & $59.49\pm 5.52$ & $53.81\pm 6.50$\\ $2\oplus2$ & $62.57\pm 1.59$ & $57.93\pm 5.32$ & $60.62\pm 1.34$ & $59.31\pm 3.65$ & $52.09\pm 4.34$\\ \cmidrule{1-6} $4\oplus0$ & $55.47\pm 3.81$ & $52.36\pm 5.32$ & $48.91\pm 4.65$ & $48.77\pm 4.68$ & $44.08\pm 4.78$\\ $4\oplus1$ & $53.03\pm 1.54$ & $50.31\pm 3.41$ & $44.19\pm 1.57$ & $45.65\pm 1.63$ & $44.28\pm 2.14$\\ $4\oplus2$ & $54.71\pm 2.61$ & $51.08\pm 4.51$ & $46.75\pm 2.83$ & $46.39\pm 3.27$ & $43.27\pm 2.80$\\ \cmidrule{1-6} $6\oplus0$ & $49.97\pm 1.07$ & $40.18\pm 4.77$ & $40.14\pm 1.35$ & $41.04\pm 1.40$ & $35.76\pm 3.19$\\ $6\oplus1$ & $52.35\pm 0.55$ & $45.69\pm 5.24$ & $42.11\pm 0.84$ & $42.74\pm 1.25$ & $39.76\pm 3.47$\\ $6\oplus2$ & $53.17\pm 3.17$ & $49.11\pm 3.41$ & $43.66\pm 4.12$ & $44.87\pm 3.99$ & $41.50\pm 2.29$\\ \bottomrule \end{tabular} \end{table} \begin{figure}[!tb] \centering \includegraphics[width=\textwidth]{figs/example_filters.png} \caption{Estimated PR filters from Stage~1 for a single subject at $4\oplus 0$ resolution in the frequency domain. Note the amplitudes for analysis and synthesis banks are on different scales, DC is centered, and $h_0$ is estimated by ESPRESO~\cite{han2021mr}.} \label{fig:example_filters} \end{figure} \begin{figure}[!tbp] % \centering % \includegraphics[ width=\textwidth, page=1, trim=4cm 0cm 7.5cm 0cm, clip, ]{figs/multi_figs.pdf} % \caption{Mid-sagittal slice for a representative subject at different resolutions and gaps for each method. The low resolution column is digitally upsampled with k-space zero-filling. $A\oplus B$ signifies a slice thickness of $A$~mm and a gap of $B$~mm. Fourier magnitude is displayed in dB on every other row. The top two rows correspond to $2\oplus 0$ for the MR slice and Fourier space, the second two rows are for $4\oplus 1$, and the bottom two rows are for $6\oplus 2$. } % \label{fig:qualitative} % \end{figure} \begin{figure}[!tb] \centering % \begin{tabular}{c c c} \includegraphics[width=0.5\textwidth]{figs/n=30psnr.pdf} && \includegraphics[width=0.5\textwidth]{figs/n=30ssim.pdf} \\[-0.75em] % \textbf{(a)} && \textbf{(b)}\\[-.75em] % \end{tabular} \caption{Quantitative metrics PSNR in \textbf{(a)} and SSIM in \textbf{(b)}, computed over the $30$ image volumes. Significance tests performed with the Wilcoxon signed rank test; $\ast$ denotes $p$-values $ < 0.05$; ``ns'' stands for ``not significant''.} % \label{f:oasis30} \end{figure} \textbf{Experiments}\qquad{}We performed two experiments to evaluate the efficacy of each stage in our approach. We randomly selected $30$ T1-weighted MR brain volumes from the OASIS-3 dataset~\cite{LaMontagne2019_OASIS3} to validate both stages and simulated LR acquisition via convolution with a Gaussian kernel with FWHM $\in \{2, 4, 6\}$ and slice gap $\in \{0, 1, 2\}$; nine combinations of FHWM and slice gap in total. 
For these experiments, the HR plane was axial while the cardinal LR planes were sagittal and coronal. \textbf{Stage 1 Results}\qquad{}We trained Stage~1 using both cardinal 1D directions from in-plane data; that is, left-to-right (L-R) and anterior-to-posterior (A-P), but in Stage~2 we will be applying the synthesis bank to three cardinal directions: L-R, A-P, and superior-to-inferior (S-I). Since a PR filter bank states that exact recovery of a signal is guaranteed given the correct coefficients and synthesis filters, we evaluated our PR approximation by reconstructing the ground truth (GT) volume along each of these cardinal directions. Indeed, the coefficients generated by our learned analysis bank are what we will regress in Stage~2, so a reconstruction of the GT serves as a sort of ``upper bound'' on our super-resolution estimate. We performed 1D reconstruction along both cardinal directions and collated all reconstructions into 3D volumes. The mean volumetric reconstruction PSNR $\pm$ std. dev. across the $30$ subjects is shown in Table~\ref{tab:autoencoding}. The filter bank is learned from LR volumes (table rows) and applied to itself (left two table columns) and to the corresponding GT volume (right three table columns). In other words, we want to evaluate self-auto-encoding as well as answer the question of how well internal training generalizes to reconstruction of an isotropic volume. If we had attained PR filters, the PSNR would be $\infty$; our estimates fall short of this. Notably, reconstruction performance drops in the (S-I) direction, features of which are not directly present in the training data. Additionally, an example of learned filters in the frequency domain for one resolution, $4\oplus 0$, is shown in Fig.~\ref{fig:example_filters}. Recall that the fixed filter $h_0$ is the slice selection profile. Note our optimization approximated bandpass filters. \textbf{Stage 2 Results}\qquad{}To evaluate Stage~2, we compared our method to two approaches which also do not rely on external training data: cubic b-spline interpolation and SMORE~\cite{zhao2020smore}, a state-of-the-art self-super-resolution technique for anisotropic MR volumes. For a fair comparison and improving SMORE results, SMORE was trained with the same slice profile that we use instead of a Gaussian slice profile used in the original paper (the ESPRESO estimate~\cite{han2021mr}). Qualitative results are displayed in Fig.~\ref{fig:qualitative} of a mid-sagittal slice for a representative subject at $2\oplus 0$, $4\oplus 1$, and $6\oplus 2$. This subject is near the median PSNR value for that resolution across the $30$ subjects evaluated in our experiments and for which SMORE outperforms our method at $2\oplus 0$, is on par with our method at $4\oplus 1$, and is outperformed by our method at $6\oplus 2$. Also shown in Fig.~\ref{fig:qualitative} is the corresponding Fourier space, and we see that our proposed method includes more high frequencies than the other methods. For quantitative results, PSNR and SSIM were calculated on entire volumes; illustrated as box plots in Fig.~\ref{f:oasis30}. \section{Discussion and conclusions} In this paper, we have presented a novel filter bank formulation for SR of 2D-acquired anisotropic MR volumes as the regression of filter-specified missing detail coefficients in an $M$-channel PR filter bank. We would emphasize that our approach establishes a new theoretic basis for super-resolution. 
In theory, these coefficients exist and give exact recovery of the underlying HR signal. However, it is unknown whether a mapping of $y \rightarrow (d_1, \ldots, d_{M-1})$ exists, and whether it is possible to find filters to complete the analysis and synthesis banks to guarantee PR. In practice, we estimate these in two stages: Stage~1 estimates the missing analysis and synthesis filters towards PR and Stage~2 trains a CNN to regress the missing detail coefficients given the coarse approximation $y$. Although we do not, in all situations, outperform a competitive end-to-end method, in our experiments as the resolution worsens and slice gap increases our proposed method better handles the task, validating the usefulness of our method for super resolving anisotropic MR images with large slice gaps. Future work will include: 1)~deeper investigation into the limits of the training set in learning the regression; 2)~the degree to which the mapping $G$ is valid; 3)~more analysis of the super-resolved frequency space; and 4)~develop methods to exactly achieve or better approximate PR. True PR filter banks should greatly improve the method, as Table~\ref{tab:autoencoding} serves as a type of ``upper bound'' for our method; regardless of the quality of coefficient regression, even given the ideal ground truth coefficients, reconstruction accuracy would be limited. Additionally, further investigation into improved regression is needed---a model which can better capture the necessary aliasing in the coefficient domain is vital for PR. \bibliographystyle{splncs04} \section{Introduction} Anisotropic magnetic resonance (MR) images are those acquired with high in-plane resolution and low through-plane resolution. It is common practice to acquire anisotropic volumes in clinics as it reduces scan time and motion artifacts while preserving signal-to-noise ratio. To improve through-plane resolution, data-driven super-resolution~(SR) methods have been developed on MR volumes~\cite{zhao2020smore,oktay2016multi,chen2018efficient,du2020super}. The application of SR methods to estimate the underlying isotropic volume has been shown to lead to improved performance on downstream tasks~\cite{zhao2019applications}. For 2D multi-slice protocols, the through-plane point-spread function (PSF) is known as the slice profile. When the sampling step is an integer, the through-plane signals of an acquired MR image can be modeled as a strided 1D convolution between the slice profile and the object to be imaged~\cite{han2021mr,prince2006medical,sonderby2016amortised}. Commonly, the separation between slices is equivalent to the full-width-at-half-max~(FWHM) of the slice profile, but volumes can also be acquired where the slice separation is less than or greater than the slice profile FWHM, corresponding to ``slice overlap'' and ``slice gap'' scenarios, respectively. Data-driven SR methods usually simulate low-resolution (LR) data from high-resolution (HR) data using an assumed slice profile~\cite{zhao2020smore,oktay2016multi,chen2018efficient,du2020super}, or an estimated the slice profile according to the image data or acquisition~\cite{han2021mr}. In either case, neural SR methods are formulated as a classical inverse problem: \begin{equation} y = Ax, \label{eq:inverse} \end{equation} where $y$ is the LR observation, $A$ is the degradation matrix, and $x$ is the underlying HR image. 
Commonly, this is precisely how paired training data is created; HR data is degraded by $A$ to obtain the LR $y$ and weights $\theta$ of a neural network $\phi$ are learned such that $\phi_\theta(y) \approx x$. However, under this framework there is no specification of information lost by application of $A$; the model is end-to-end and directed only by the dataset. In our work, we propose an entirely novel SR framework based on perfect reconstruction~(PR) filter banks. From filter bank theory, PR of a signal $x$ is possible through an $M$-channel filter bank with a correct design of an analysis bank $H$ and synthesis bank $F$~\cite{strang_nguyen_1997}. Under this formulation, we do not change Eq.~\ref{eq:inverse} but explicitly recognize our observation $y$ as the ``coarse approximation'' filter bank coefficients and the missing information necessary to recover $x$ as the ``detail'' coefficients (see Fig.~\ref{fig:obs_model}). For reference, in machine learning jargon, the analysis bank is an encoder, the synthesis bank is a decoder, and the coarse approximation and detail coefficients are analogous to a ``latent space''. \begin{figure}[!tb] \centering \includegraphics[width=\textwidth]{figs/filter_bank.png} \caption{The filter bank observation model. Both of $y$ and $H_0$ (green) are given and fixed. In Stage 1, filters $H_1, \ldots, H_{M-1}$ and $F_0, F_1, \ldots, F_{M-1}$ are learned; in Stage 2, a mapping from $y$ to $d_1, \ldots, d_{M-1}$ is learned.} \label{fig:obs_model} \end{figure} The primary contribution of this work is to reformulate SR to isotropy of 2D-acquired MR volumes as a filter bank regression framework. The proposed framework has several benefits. First, the observed low-frequency information is untouched in the reconstruction; thus, our method explicitly synthesizes the missing high frequencies and does not need to learn to preserve acquired low frequency information. Second, in our framework, the downsampling factor is $M$, specifying the number of channels in the $M$-channel filter bank and allowing us to attribute more constrained parameters in the ``slice gap'' acquisition recovery. Third, the analysis filters of PR filter banks necessarily introduce aliasing which is canceled via the synthesis filters; therefore, we do not need to directly handle the anti-aliasing of the observed image. Fourth, our architecture has a dynamic capacity for lower-resolution images. The rationale behind the dynamic capacity is intuitive: when fewer measurements are taken, more estimates must be done in recovery and a more robust model is required. Fifth, our method exploits the nature of anisotropic volumetric data; the in-plane slices are HR while the through-plane slices are LR. Thus, we do not rely on external training data, as we only need the in-plane HR data to perform internal supervision. In the remainder of the paper, we describe this framework in detail, provide practical implementation, and evaluate against a state-of-the-art internally supervised SR technique. We demonstrate the feasibility of formulating SR as filter bank coefficient regression and believe it lays the foundation for future theoretical and experimental work in SR of MR images. \section{Methods} The analysis bank, $H$, and synthesis bank, $F$, each consist of $M$ 1D filters represented in the $z$-domain as $H_k$ and $F_k$, respectively, with corresponding spatial domain representations $h_k$ and $f_k$. 
As illustrated in Fig.~\ref{fig:obs_model}, input signal $X(z) = \mathcal{Z}(x)$\footnote{\url{https://en.wikipedia.org/wiki/Z-transform}} is delayed by $z^{-k}$, filtered by $H_k$, and decimated with $\downarrow M$ (keeping every the $M^\text{th}$ entry) to produce the corresponding coefficients. These coefficients exhibit aliasing and distortion which are corrected by the synthesis filters~\cite{strang_nguyen_1997}. Reconstruction from coefficients comes from zero-insertion upsampling with $\uparrow M$, passing through filters $F_k$, advancing by $z^k$, and summing across the $M$ channels. \begin{figure}[!tb] \centering \includegraphics[ width=0.9\textwidth, page=2, trim=3cm 37cm 25cm 7cm, clip, ]{figs/multi_figs.pdf} \caption{This network architecture, used in the second stage of our algorithm, has the same structure for both the generator and discriminator but with different hyperparameters. All convolutional layers use a $3\times 3$ kernel. The generator and discriminator use $16$ and $2$ residual blocks, respectively. The number of features for the generator was $128 \times M$ while for the discriminator $64 \times M$. The final convolution outputs $M-1$ channels corresponding to the missing filter bank detail coefficients.} \label{fig:network} \end{figure} Traditional design of $M$-channel PR filter banks involves deliberate choice of a prototype low-pass filter $H_0$ such that modulations and alternations of the prototype produce the remaining filters for both the analysis and synthesis filter banks~\cite{strang_nguyen_1997}. $M$ is also chosen based on the restrictions of the problem at hand. However, for anisotropic 2D-acquired MRI, the slice profile \textit{is} the low-pass filter and as such we have a fixed, given $H_0$. The separation between slices is also given as $M$, which is equal to the FWHM of $h_0$ plus any further gap between slices. We use $A \oplus B$ to denote a FWHM of $A$~mm and slice gap of $B$~mm and note that $M = A + B$. For this preliminary work, we assume $A, B$, and $M$ are all integer and, without loss of generality, assume that the in-plane resolution is $1 \oplus 0$. Our goal is to estimate filters $H_1, \ldots, H_{M-1}$ and $F_0, \ldots, F_{M-1}$ and the detail coefficients $d_1, \ldots, d_{M-1}$ which lead to PR of $x$. We approach this problem in two stages for stability. In Stage~1, we approximate the missing analysis and synthesis filters, assuming there exists a set of filters to complete the $M$-channel PR filter bank given that $H_0$ and $M$ are fixed and known ahead of time. These must be learned first to establish the approximate PR filter bank conditions on the coefficient space. Then, in Stage~2, we perform a regression on the missing coefficients. Both of these stages are optimized in a data-driven end-to-end fashion with gradient descent. After training, our method is applied by regressing $d_1, \ldots, d_{M-1}$ from $y$ and feeding all coefficients through the synthesis bank, producing $\hat{x}$, our estimate of the HR signal. The Stage~2 coefficient regression occurs in 2D, so we construct our estimate of the 3D volume by averaging stacked 2D predictions from the synthesis bank from both cardinal planes containing the through-plane axis. 
\textbf{Stage 1: Filter Optimization}\qquad{}Previous works assumed the slice profile is Gaussian with FWHM equal to the slice separation~\cite{zhao2020smore,oktay2016multi}; instead, we estimate the slice profile, $H_0$, directly with ESPRESO\footnote{\url{https://github.com/shuohan/espreso2}}~\cite{han2021mr}. We next aim to estimate the filters $H_1, \ldots, H_{M-1}$ and $F_0, \ldots, F_{M-1}$. To achieve this, we learn the spatial representations $h_1, \ldots, h_{M-1}$ and $f_0, \ldots, f_{M-1}$ from 1D rows and columns drawn from the high resolution in-plane slices of y, denoted $\mathcal{D}_1 = \{x_i\}_{i=1}^N$. We initialize these filters according to a cosine modulation~\cite{strang_nguyen_1997} of $h_0$, which is defined as \begin{equation*} f_k[n] = h_k[n] = h_0[n] \sqrt{\frac{2}{M}} \cos{\left[ \left( k + \frac{1}{2} \right) \left( n + \frac{M + 1}{2} \right) \frac{\pi}{M}\right] }, \end{equation*} for $ k \in \{1, 2, \ldots, M-1\}$. Accordingly, we initialize $f_0$ to $h_0$. We estimate $\hat{x}_{i}$ by passing $x_i$ through the analysis and synthesis banks, then (since the entire operation is differentiable) step $h_k$ and $f_k$ through gradient descent. The reconstruction error is measured with mean squared error loss and the filters updated based on the AdamW~\cite{loshchilov2017decoupled} optimizer with a learning rate set of $0.1$ and with one-cycle learning rate scheduler~\cite{smith2019super} of $100,000$ steps with a batch size of $32$. \textbf{Stage 2: Coefficient Regression}\qquad{}From Stage~1, we have the analysis and synthesis banks and now estimate the missing detail coefficients given only the LR observation $y$. With the correct coefficients and synthesis filters, PR of $x$ is possible. For this stage, we use 2D patches, in spite of the 1D SR problem, as a type of ``neighborhood regularization''. Let $\mathcal{D}_2 = \{x_i\}_{i=1}^N$, $x_i \in \mathbb{R}^{p \times pM}$; i.e., the training set for Stage~2 consists of 2D $p \times pM$ patches drawn from the in-plane slices of $y$. The second dimension will be decimated by $M$ after passing through the analysis banks, resulting in $y, d_1, \ldots, d_{M-1} \in \mathbb{R}^{p \times p}$. We use the analysis bank (learned in Stage~1) to create training pairs $\{(y_i, (d_1, d_2, \ldots, d_{M-1})_i\}_{i=1}^N$ and fit a convolutional neural network (CNN) $G: \mathbb{R}^{p \times p} \rightarrow \mathbb{R}^{{p \times p}^{M-1}}$ to map $y_i$ to $(d_1, \ldots, d_{M-1})_i$. Since this is an image-to-image translation task, we adopt the widely used approach proposed in Pix2Pix~\cite{pix2pix2017} including the adversarial patch discriminator. Empirically, we found more learnable parameters are needed with greater $M$. Thus, our generator $G$ is a CNN illustrated in Fig.~\ref{fig:network} with $16$ residual blocks and $128 \times M$ kernels of size $3\times 3$ per convolutional layer. The discriminator $D$ has the same architecture but with only $2$ residual blocks and $64 \times M$ kernels per convolutional layer. Our final loss function for Stage~2 is identical to the loss proposed in~\cite{pix2pix2017} and is calculated on the error in $d_k$. We use the AdamW optimizer~\cite{loshchilov2017decoupled} with a learning rate of $10^{-4}$ and the one-cycle learning rate scheduler~\cite{smith2019super} for $500,000$ steps at a batch size of $32$. \section{Experiments and Results} \begin{table}[!tb] \centering \caption{Mean $\pm$ std. dev. of volumetric PSNR values for Stage~1 reconstruction of the $10$ subjects. 
\section{Experiments and Results} \begin{table}[!tb] \centering \caption{Mean $\pm$ std. dev. of volumetric PSNR values for Stage~1 reconstruction of the $30$ subjects. ``Self'' indicates a reconstruction of the input low-resolution volume on which the filter bank was optimized, while ``GT'' indicates reconstruction of the isotropic ground truth volume. (L-R), (A-P), and (S-I) are the left-to-right, anterior-to-posterior, and superior-to-inferior directions, respectively.} \label{tab:autoencoding} \begin{tabular}{c|c|c|c|c|c} \toprule \hspace*{9ex} & Self (L-R) & Self (A-P) & GT (L-R) & GT (A-P) & GT (S-I)\\ \cmidrule{1-6} $2\oplus0$ & ~$62.24\pm 0.97$~ & ~$60.19\pm 3.74$~ & ~$60.63\pm 0.56$~ & ~$59.59\pm 2.54$~ & ~$55.47\pm 4.69$\\ $2\oplus1$ & $63.01\pm 4.91$ & $62.25\pm 5.09$ & $64.32\pm 0.63$ & $59.49\pm 5.52$ & $53.81\pm 6.50$\\ $2\oplus2$ & $62.57\pm 1.59$ & $57.93\pm 5.32$ & $60.62\pm 1.34$ & $59.31\pm 3.65$ & $52.09\pm 4.34$\\ \cmidrule{1-6} $4\oplus0$ & $55.47\pm 3.81$ & $52.36\pm 5.32$ & $48.91\pm 4.65$ & $48.77\pm 4.68$ & $44.08\pm 4.78$\\ $4\oplus1$ & $53.03\pm 1.54$ & $50.31\pm 3.41$ & $44.19\pm 1.57$ & $45.65\pm 1.63$ & $44.28\pm 2.14$\\ $4\oplus2$ & $54.71\pm 2.61$ & $51.08\pm 4.51$ & $46.75\pm 2.83$ & $46.39\pm 3.27$ & $43.27\pm 2.80$\\ \cmidrule{1-6} $6\oplus0$ & $49.97\pm 1.07$ & $40.18\pm 4.77$ & $40.14\pm 1.35$ & $41.04\pm 1.40$ & $35.76\pm 3.19$\\ $6\oplus1$ & $52.35\pm 0.55$ & $45.69\pm 5.24$ & $42.11\pm 0.84$ & $42.74\pm 1.25$ & $39.76\pm 3.47$\\ $6\oplus2$ & $53.17\pm 3.17$ & $49.11\pm 3.41$ & $43.66\pm 4.12$ & $44.87\pm 3.99$ & $41.50\pm 2.29$\\ \bottomrule \end{tabular} \end{table} \begin{figure}[!tb] \centering \includegraphics[width=\textwidth]{figs/example_filters.png} \caption{Estimated PR filters from Stage~1 for a single subject at $4\oplus 0$ resolution in the frequency domain. Note the amplitudes for the analysis and synthesis banks are on different scales, DC is centered, and $h_0$ is estimated by ESPRESO~\cite{han2021mr}.} \label{fig:example_filters} \end{figure} \begin{figure}[!tbp] \centering \includegraphics[ width=\textwidth, page=1, trim=4cm 0cm 7.5cm 0cm, clip, ]{figs/multi_figs.pdf} \caption{Mid-sagittal slice for a representative subject at different resolutions and gaps for each method. The low-resolution column is digitally upsampled with k-space zero-filling. $A\oplus B$ signifies a slice thickness of $A$~mm and a gap of $B$~mm. Fourier magnitude is displayed in dB on every other row. The top two rows correspond to $2\oplus 0$ for the MR slice and Fourier space, the second two rows are for $4\oplus 1$, and the bottom two rows are for $6\oplus 2$.} \label{fig:qualitative} \end{figure} \begin{figure}[!tb] \centering \begin{tabular}{c c c} \includegraphics[width=0.5\textwidth]{figs/n=30psnr.pdf} && \includegraphics[width=0.5\textwidth]{figs/n=30ssim.pdf} \\[-0.75em] \textbf{(a)} && \textbf{(b)}\\[-.75em] \end{tabular} \caption{Quantitative metrics: PSNR in \textbf{(a)} and SSIM in \textbf{(b)}, computed over the $30$ image volumes. Significance tests are performed between SMORE and our proposed method with the Wilcoxon signed rank test; $\ast$ denotes $p$-values $< 0.05$; ``ns'' stands for ``not significant''.} \label{f:oasis30} \end{figure} \textbf{Experiments}\qquad{}We performed two experiments to evaluate the efficacy of each stage in our approach. We randomly selected $30$ T1-weighted MR brain volumes from the OASIS-3 dataset~\cite{LaMontagne2019_OASIS3} to validate both stages and simulated LR acquisition via convolution with a Gaussian kernel with FWHM $\in \{2, 4, 6\}$ and slice gap $\in \{0, 1, 2\}$, yielding nine combinations of FWHM and slice gap in total; a sketch of this simulation is given below. Since $M = A + B$ for a scan of resolution $A \oplus B$, this corresponds to $M \in \{2, 3, 4, 5, 6, 7, 8\}$. For these experiments, the HR plane was axial while the cardinal LR planes were sagittal and coronal. We note that both Stage~1 and Stage~2 are trained for each LR volume separately, as our proposed method does not use external training data but instead relies on the inherent anisotropy in the multi-slice volume (i.e., HR in-plane and LR through-plane data).
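The LR simulation referenced above can be sketched as follows (our own illustration; the experiments' exact padding and normalization choices are not specified here):
\begin{verbatim}
import numpy as np

def simulate_lr(x, fwhm, gap):
    # Blur with a Gaussian slice profile of the given FWHM, then keep
    # every M-th sample with M = fwhm + gap (an A-plus-B acquisition).
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    half = 3 * int(np.ceil(sigma))
    t = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (t / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(x, kernel, mode='same')[::fwhm + gap]

x = np.random.rand(256)            # an HR through-plane profile
y = simulate_lr(x, fwhm=4, gap=1)  # a "4 plus 1" acquisition, M = 5
\end{verbatim}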
\textbf{Stage 1 Results}\qquad{}We trained Stage~1 using both cardinal 1D directions from in-plane data; that is, the left-to-right (L-R) and anterior-to-posterior (A-P) directions. We then performed 1D reconstruction along these cardinal directions and collated all reconstructions into 3D volumes. In other words, this is an evaluation of self-auto-encoding. The mean volumetric reconstruction PSNR $\pm$ std. dev. across the $30$ subjects is shown in Table~\ref{tab:autoencoding}. In addition to applying the learned filters to the LR image itself, we also test the extent of signal recovery for the HR ground truth (GT) counterpart of the LR volume. Indeed, the coefficients generated by our learned analysis bank are what we will regress in Stage~2, so a reconstruction of the GT is also shown in the right three columns of Table~\ref{tab:autoencoding}. This serves as a sort of ``upper bound'' on our super-resolution estimate and also answers the question of how well internal training generalizes to reconstruction of an isotropic volume. We note that if we had attained PR filters, the PSNR would be $\infty$; our estimates fall short of this. Notably, reconstruction performance drops in the (S-I) direction; this is likely because signals along this direction were not included in the training data. Additionally, an example of the learned filters in the frequency domain for one resolution, $4\oplus 0$, is shown in Fig.~\ref{fig:example_filters}. Recall that the fixed filter $h_0$ is the slice selection profile. We observe that our optimization approximated bandpass filters. \textbf{Stage 2 Results}\qquad{}To evaluate Stage~2, we compared our method to two approaches which also do not rely on external training data: cubic b-spline interpolation and SMORE~\cite{zhao2020smore}, a state-of-the-art self-super-resolution technique for anisotropic MR volumes. For a fair comparison, and to improve SMORE's results, SMORE was trained with the same slice profile we use (the ESPRESO estimate~\cite{han2021mr}) instead of the Gaussian slice profile used in the original paper. Qualitative results for a mid-sagittal slice of a representative subject at $2\oplus 0$, $4\oplus 1$, and $6\oplus 2$ are displayed in Fig.~\ref{fig:qualitative}. This subject is near the median PSNR value across the $30$ subjects evaluated in our experiments; SMORE outperforms our method at $2\oplus 0$, is on par with our method at $4\oplus 1$, and is outperformed by our method at $6\oplus 2$. Also shown in Fig.~\ref{fig:qualitative} is the corresponding Fourier space, and we see that our proposed method includes more high frequencies than the other methods. For quantitative results, PSNR and SSIM were calculated on entire volumes, as illustrated in box plots in Fig.~\ref{f:oasis30}; the volumetric PSNR convention we assume is sketched below.
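The volumetric PSNR sketched here is a common convention (the peak is taken as the ground-truth maximum; this choice is our assumption, not stated above). It reproduces the property that exact PR would give an infinite PSNR:
\begin{verbatim}
import numpy as np

def volumetric_psnr(gt, est):
    # PSNR computed over an entire volume.
    mse = np.mean((gt.astype(np.float64) - est.astype(np.float64)) ** 2)
    if mse == 0.0:         # exact reconstruction, e.g. true PR filters
        return np.inf
    return 20.0 * np.log10(gt.max()) - 10.0 * np.log10(mse)
\end{verbatim}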
\section{Discussion and conclusions} In this paper, we have presented a novel filter bank formulation for SR of 2D-acquired anisotropic MR volumes as the regression of filter-specified missing detail coefficients in an $M$-channel PR filter bank, a formulation that does not change the low-frequency sub-bands of the acquired image. We emphasize that our approach establishes a new theoretical basis for super-resolution. In theory, these coefficients exist and give exact recovery of the underlying HR signal. However, it is unknown whether a mapping $y \rightarrow (d_1, \ldots, d_{M-1})$ exists, and whether it is possible to find filters to complete the analysis and synthesis banks to guarantee PR. In practice, we estimate these in two stages: Stage~1 estimates the missing analysis and synthesis filters towards PR, and Stage~2 trains a CNN to regress the missing detail coefficients given the coarse approximation $y$. Although we do not outperform a competitive end-to-end method in all situations, in our experiments our proposed method handles the task better as the resolution worsens and the slice gap increases, validating its usefulness for super-resolving anisotropic MR images with large slice gaps. Future work will include: 1)~a deeper investigation into the limits of the training set in learning the regression; 2)~an assessment of the degree to which the mapping $G$ is valid; 3)~further analysis of the super-resolved frequency space; and 4)~the development of methods to exactly achieve or better approximate PR. True PR filter banks should greatly improve the method, as Table~\ref{tab:autoencoding} serves as a type of ``upper bound'' for our approach; regardless of the quality of the coefficient regression, even given ideal ground truth coefficients, reconstruction accuracy would be limited. Furthermore, our work suffers two major shortcomings. First, we currently assume integer slice thickness and slice separation, which does not always hold in practice. To address this, the use of fractional sampling rates with filter banks~\cite{strang_nguyen_1997} may be a promising research direction. Second, our model in Stage~2 scales the number of convolution kernels per layer by $M$, which induces longer training and testing times for lower-resolution images. For reference, SMORE produced the SR volume in about 86 minutes on a single NVIDIA V100 regardless of the input resolution, while our proposed method produced the SR volume in 27 minutes for $2\oplus 0$, 85 minutes for $4\oplus 1$, and 127 minutes for $6\oplus 2$. Additionally, further investigation into improved regression is needed---a model which can better capture the necessary aliasing in the coefficient domain is vital for PR. \section{Acknowledgements} This material is supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1746891. Theoretical development is partially supported by NIH ORIP grant R21~OD030163 and the Congressionally Directed Medical Research Programs (CDMRP) grant MS190131. This work also received support from National Multiple Sclerosis Society RG-1907-34570, CDMRP W81XWH2010912, and the Department of Defense in the Center for Neuroscience and Regenerative Medicine. \bibliographystyle{splncs04}
\section{INTRODUCTION} With the discovery of more than 300 extrasolar planets, considerable interest is now focused on finding and characterizing terrestrial-mass planets in habitable zones around their host stars. Such planets are extremely difficult to detect around F, G and K stars, requiring either extremely high radial velocity precision ($< 1$ m s$^{-1}$) or space-based photometry to detect a transit. Radial velocity and transit efforts are now beginning to focus on M dwarfs, the most numerous stars, where the lower luminosity shifts the habitable zone much closer to the star. Terrestrial-mass planets in such regions are detectable with the current radial velocity precision obtained with high-resolution echelle spectrographs \citep{Butler96, Pepe04}. A number of the brighter M dwarfs are being surveyed in the optical with existing high-precision radial velocity instruments. Neptune and super-Earth mass planets have been discovered around a number of such objects. Examples are GJ436 (Butler et al. 2004), GL581 (Udry et al. 2007), and GL176 (Forveille et al. 2008), and they suggest that such planets may be rather common around M stars. The habitability of terrestrial planets around M stars has been explored by \citet{Tarter07} and \citet{Scalo07}. However, most of these stars are intrinsically faint in the optical, emitting most of their flux in the 1-1.8 $\mu m$ wavelength region, the near infrared (NIR) J (1.1-1.4 $\mu m$) and H (1.45-1.8 $\mu m$) bands. Stellar activity, coupled with the relative faintness, can make detections in the optical difficult. For example, Endl et al. (2008) claim the detection of a 24 Earth-mass planet around Gl 176 using 28 radial velocities obtained with the High Resolution Spectrograph (HRS) on the Hobby-Eberly Telescope, while Forveille et al. (2008), with 58 higher-precision measurements obtained with the HARPS (High Accuracy Radial velocity Planet Searcher) instrument, find evidence for stellar activity as well as a different orbital period and a much smaller minimum mass. Similar observations in the NIR may require less telescope time and are less prone to systematics because the activity-induced radial velocity jitter is expected to be smaller in the NIR than in the optical. As an example, Setiawan et al. (2008) announced the discovery of a planet around the young active star TW Hya, but Huelamo et al. (2008) see no such variability in precision radial velocity data obtained in the infrared with the CRIRES (Cryogenic Infrared Echelle Spectrograph) instrument (Seifhart \& Kaufl 2008), and they attribute the velocity variability in the optical to star spots. Rucinski et al. (2008) also observe photometric periodicity at the proposed planet's period in one season of observations with the MOST (Microvariability $\&$ Oscillations of Stars) satellite, and the variations are absent in another season, suggesting that the radial velocity signal is activity-induced. Prato et al. (2008) have discussed radial velocity observations of young stars in the optical and NIR and conclude that observations in the near-infrared are essential to discriminate between activity and the presence of planets. NIR spectroscopy can explore new regimes of planet formation as well as complement existing observational programs in the optical that target young or active stars.
In this article, we explore the calibration challenges faced by a high-resolution spectrograph operating in the NIR and suggest that easily available absorption gas cells can be used for wavelength calibration in such instruments, several of which are currently being designed. Our discussion focuses on using a simultaneous wavelength calibrator along a separate optical fiber \citep{Baranne96} to track and calibrate out instrument drift, similar to the Th-Ar calibration technique in the optical. We briefly highlight important aspects of such instruments in \S2. In \S3 and \S4 we discuss currently used emission wavelength sources and their relative merits and disadvantages. In \S5 we explore the use of commercially available absorption cells for precision wavelength calibration. \section{Fiber-Fed High Resolution Spectrographs in the NIR} The use of optical fibers allows a spectrograph to be placed in a stable, environmentally controlled enclosure. The intrinsic radial scrambling properties of an optical fiber can, with the use of a double scrambler \citep{HR92}, lead to a very stable illumination profile on the spectrograph slit. Commercially available optical fibers provide high transmission in the NIR J and H bands. Fiber feeds allow a second calibration fiber to be used to simultaneously track the instrument drift during an object exposure. If the science and calibration fiber images are relatively close on the focal plane, then the measured drift of the calibration spectrum is a very good estimate of the drift of the science spectrum. The major requirements on the calibration source are that it be stable enough to achieve the required precision, have a number of lines with known wavelengths, and be bright enough that sufficient S/N is obtained without compromising the achievable radial velocity accuracy on bright sources. Design studies for such instruments in the NIR have been conducted, including one for a proposed Precision Radial Velocity Spectrograph \citep{Rayner07} for the Gemini telescope. On the fiber-fed bench-top PRVS Pathfinder instrument, \citet{Ramsey08} have demonstrated a short-term radial velocity precision of 7-10 m s$^{-1}$ in the NIR using integrated sunlight with Thorium-Argon as a wavelength reference. The use of silicon immersion gratings is also being explored (e.g., Ge et al. 2006, Jaffe et al. 2006) to facilitate compact spectrographs capable of providing spectral resolutions of R=50-100k in the NIR. \section{Thorium-Argon Emission Lamps} \subsection{History \& Advantages} \citet{Kerber07} provide a summary of the development and use of Thorium-Argon (Th-Ar) hollow cathode emission lamps. The only naturally occurring Thorium (Th) isotope, \isotope[232]{Th}, has zero nuclear spin, leading to sharp symmetric emission lines, even at very high resolutions. The monatomic Argon gas also has a number of bright lines. Th-Ar emission lines span the UV-NIR regions, making such a lamp a very useful and convenient source of wavelength calibration. Most wavelength calibration applications using this lamp are tied to the \citet{PE83} (PE83 hereafter) measurements of the Thorium lines in the 280-1100 nm region. Their quoted measurement precision, $\sim$0.001-0.005 cm$^{-1}$, corresponds to an intrinsic velocity uncertainty of 15-80 m s$^{-1}$ per emission line (at 550 nm). For Argon (Ar) lines, the wavelength measurements typically used are adopted from \citet{Norlen73}. More recent measurements of Ar lines have been made by Whaling et al. (1995, 2002), who extend measurements to the mid-IR.
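The conversion behind the quoted 15-80 m s$^{-1}$ figure follows from $dv = c\,d\sigma/\sigma$ with $\sigma = 1/\lambda$; a short numerical check (the script is ours, for illustration only):
\begin{verbatim}
C = 2.99792458e8                       # speed of light [m/s]

def wavenumber_error_to_velocity(dsigma, wavelength_nm):
    # dv = c * dsigma / sigma, with sigma = 1e7 / lambda[nm] in cm^-1.
    sigma = 1.0e7 / wavelength_nm      # wavenumber [cm^-1]
    return C * dsigma / sigma

for dsigma in (0.001, 0.005):          # PE83 quoted precision [cm^-1]
    print(dsigma, wavenumber_error_to_velocity(dsigma, 550.0))
# -> about 16 and 82 m/s per line at 550 nm
\end{verbatim}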
Argon is not a heavy element like Thorium, and the central wavelengths of the Argon emission lines are susceptible to pressure shifts, making them sensitive to environmental conditions in the lamp. The lines may be unstable by many tens of m s$^{-1}$ \citep{LP07}, and for the highest possible precision ($\sim 1-3$ m s$^{-1}$), Ar lines should generally be avoided. At intermediate spectral resolutions, a further problem is line blends, because the intensity ratio of Th to Ar lines is a function of the current supplied \citep{Kerber07}. Very stable spectrographs offer the opportunity of decreasing the internal errors associated with the original Thorium measurements, made by PE83 with the Fourier Transform Spectrograph (FTS) at Kitt Peak National Observatory (KPNO). Stable instruments allow one to acquire many Th-Ar spectra and co-add them, thereby increasing the S/N and reducing the photon noise uncertainties on the line centers of known lines, as well as enabling wavelength estimation of lines not present in the original PE83 atlas. Such a technique has been applied by \citet{LP07} with the vacuum-enclosed HARPS high-resolution echelle spectrograph, resulting in a line list with improved wavelength precision (though inheriting the zero-point of the original KPNO atlas). This improved line list has certainly helped HARPS achieve radial velocity precisions of $<$1 m s$^{-1}$ and discover super-Earths \citep{Mayor08}. \subsection{Calibration in the NIR} In the NIR J and H bands Th lines are few and relatively faint, making it difficult to derive accurate dispersion solutions without substantial integration time. The \citet{Hinkle01} atlas contains FTS and grating spectrograph wavelength measurements of $\sim$500 lines in the 1-2.5 $\mu$m range, and Engleman, Hinkle \& Wallace (2003) derive FTS wavelengths for a much larger sample using a cooled Th-Ar lamp operated at a high current. \citet{Kerber08} also measured wavelengths of Th-Ar lines for multiple lamps with an FTS for the calibration of the CRIRES instrument. Although these atlases help to identify lines when they are detected, the Th lines in commercially available lamps are still very low in intensity. The high contrast between the Th and Ar lines leads to the bright Ar lines dominating the spectra, and these can be a source of significant scattered light. For example, the SOPHIE instrument (Perruchot et al. 2008) requires the use of a $\sim$700 nm shortpass filter to block scattered light from bright Ar lines in the infrared. For high precision radial velocity studies, the situation is further exacerbated by the need to acquire a high enough signal level on sufficient lines to achieve a dispersion solution accurate enough to measure radial velocities at high precision. Another minor concern is that simultaneous calibration with Th-Ar also makes scattered light subtraction of echelle frames difficult. The need to have an acceptable signal on most lines leads to the strongest lines saturating and bleeding into the adjacent stellar spectrum, making reduction more complex. The relative lack of sufficient Th lines in the NIR has also been noted by \citet{Ramsey08} in their laboratory tests with PRVS Pathfinder, and these authors propose combining light from multiple lamps (most notably Uranium-Argon) with Th-Ar to increase the line density available for wavelength calibration.
\section{Broadband Frequency Combs} The calibration advantages of using a laser frequency comb with high resolution spectrographs have been discussed by \citet{Murphy07} and \citet{Braje08}. Though these combs have the desired frequency coverage, the intrinsically small frequency mode spacing (250 MHz--1 GHz) makes their direct use unsuitable for astronomical spectrographs, since the individual lines would blur together. The use of a Fabry-Perot filter to select only well-spaced emission lines has been demonstrated by \citet{Li08} for optical wavelengths and in parts of the NIR H band (1530-1600 nm) by Steinmetz et al. (2008), who achieved high wavelength precision when using the comb for solar observations. Although there is little doubt that such femtosecond frequency combs offer the highest possible precision, they are not yet easily available and are relatively expensive (though their cost is expected to come down in the future). While facility-class instruments on large telescopes will likely be able to invest in such calibrators, they are not as easy to obtain, maintain and operate as Th-Ar or other emission lamps. Many high resolution NIR spectrographs may also not need this level of sub-1 m s$^{-1}$ wavelength calibration, since their performance may be dominated by other systematics. The atmospheric absorption and OH emission lines may also set the limit to the precision that can realistically be achieved in the NIR. Recent progress in generating THz-spacing frequency combs (Del'Haye et al. 2007) using a micro-toroidal oscillator and a laser may offer some promise in making an inexpensive, readily available device for future NIR spectrographs. \section{Absorption Gas Cells} Absorption gas cells have a long history of usage for calibration. Molecular absorption caused by rotational and vibrational energy levels is well understood, stable, and repeatable. The \citet{CW79} radial velocity survey used the sharp isolated absorption lines of \isotope[]{H}\isotope[]{F} gas in the red to calibrate out instrument drifts. The use of molecular Iodine (I$_2$) gas as a simultaneous absorption calibrator by \citet{Butler96} has led to the ability to measure very precise velocities, with long term precision approaching 1 m s$^{-1}$, and the stability of such cells is now widely accepted. Absorption cells are not quite as popular as Th-Ar in the UV-optical region because the lines tend to be localized in smaller wavelength regimes (e.g., 500-620 nm for I$_2$). In the case of I$_2$, the lines are all blended even at spectral resolutions of 100k, requiring FTS spectra of that particular cell and PSF modeling to determine a wavelength scale when used with a spectrograph. Absorption cells are in use in some astronomical instruments in the infrared for wavelength calibration. The CRIRES instrument on the VLT uses a \isotope[]{N}$_2$\isotope[]{O} gas cell as a calibrator for wavelengths longer than 2.2 $\mu$m \citep{Kerber08}. Gas mixtures for an I$_2$-like absorption gas cell for the planned NAHUAL spectrograph have been discussed by \citet{Martin05}, and for the planned GIANO spectrograph by D'Amato et al. (2008), who have developed a combined HCl-HBr-HI cell that has $\sim$200 lines spanning the J, H, \& K bands. \citet{Ramsey08} also mention plans to use \isotope[]{H}\isotope[]{F} and water vapour cells in their J-band pathfinder instrument to monitor velocity drifts. HF has deep lines, but they are few, and the gas is corrosive and difficult to work with.
Such a cell also needs to be heated to more than $70\,^{\circ}\mathrm{C}$ to prevent polymerization of HF \citep{CW79}, resulting in line broadening. Water vapour also has many lines, but it is also present in the atmosphere, making identification of isolated individual lines for wavelength calibration quite challenging. In the NIR H band ($\sim$1.45-1.8 $\mu$m) a number of absorption calibrators with well-spaced lines have been well characterized, primarily owing to the telecommunication industry's needs for ever higher wavelength division multiplexing. Although no single cell covers the entire spectral region, we show that a significant fraction of the H band can be covered using combinations of four commercially available cells. Our proposed use is not to pass the starlight through the cells, but to use the isolated absorption lines as wavelength markers to determine the dispersion solution and track instrument drifts, thereby enabling the measurement of precise velocities and accelerations. \subsection{NIST SRMs} The United States National Institute of Standards \& Technology (NIST) has designated four standard reference materials (SRMs hereafter) for use in wavelength calibration. Together these four materials span the C \& L telecom wavelength bands and have $\sim 200$ isolated sharp lines in the 1510-1630 nm wavelength region of the H band. Table 1 lists these NIST SRM gas cells, their effective wavelength coverage and the number of lines with NIST-certified wavelengths. These gas cells are commercially available and have an effective absorption path length of 5-80 cm. The low pressure H\isotope[13]{C}\isotope[14]{N} \citep{SG00} and \isotope[12]{C}$_2$H$_2$ \citep{SG05} cells have been primarily designed for high resolution applications and have narrow linewidths of 7-15 pm, essentially unresolved at spectral resolutions as high as R=50-70k at 1550 nm. The high pressure CO gas cells were originally designed for use with an instrumental resolution of 0.05 nm and have linewidths of 50 pm \citep{SG02}. Decreasing the cell pressure can easily make the lines sharper, if necessary, and increasing the cell length can increase absorption for unsaturated lines. These gas cells are available coupled to single-mode fibers with standardized FC/PC connectors for light input and output, allowing the CO gas cells to be used in a multi-pass configuration, achieving 80 cm of absorption length with four passes through a compact 20 cm cell. The use of single-mode fibers for input and output makes it easy to pass continuum light through multiple cells to obtain an imprint of the absorption lines over larger wavelength regimes. The use of the fibers is not necessary, but is a convenience. \subsection{Astronomical Echelle Spectrographs} Modern high resolution spectrographs generally use coarsely ruled echelle gratings that have high blaze angles (R2-R4, i.e., $\tan{\theta_B}=2$-$4$, is fairly typical). To estimate the achievable accuracy of wavelength calibration, we consider an R2 echelle spectrograph capable of a resolution of R=50k at 1600 nm. We assume 4-pixel sampling of the FWHM of the Gaussian resolution element. We assume a grating with a groove density such that a wavelength of 1550 nm corresponds to an order number of $\sim 100$. While such low groove densities are not available in commercially ruled echelles, they can be created on silicon immersion gratings \citep{Ge06}, enabling high dispersion as well as the ability to pack all orders from the H band into a single 1k by 1k detector.
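The assumed grating geometry can be sanity-checked with the Littrow grating equation, $m\lambda = 2 n d \sin\theta_B$; the sketch below (our own illustrative parameter choices, with $n\sim3.4$ assumed for silicon) recovers the free spectral range and groove spacings implied by the numbers above.
\begin{verbatim}
import math

# Littrow grating equation: m * lam = 2 * n * d * sin(theta_B);
# an R2 echelle has tan(theta_B) = 2.
def groove_spacing_um(m, lam_nm, R=2.0, n=1.0):
    theta_b = math.atan(R)
    return m * lam_nm * 1e-3 / (2.0 * n * math.sin(theta_b))

m, lam = 100, 1550.0
print(f"free spectral range at order {m}: {lam / m:.1f} nm")  # ~15.5 nm
print(f"groove spacing in air: {groove_spacing_um(m, lam):.1f} um")
print(f"groove spacing in Si:  {groove_spacing_um(m, lam, n=3.4):.1f} um")
\end{verbatim}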
Figure 1 shows the expected absorption spectra from these gas cells with such an instrument. For HCN and C$_2$H$_2$ we have used the high resolution spectral scans from \citet{SG00,SG05}\footnote{Also available at www.nist.gov/srm} and convolved them with a Gaussian instrument profile (R=50k at 1600 nm). For CO the full spectral scans are not available and we have simulated the data using line centers and broadening coefficients from \citet{SG02}. We have not convolved the simulated CO data with the Gaussian instrument profile since the measurements were obtained at a lower resolution than R=50k; we would expect these lines to be slightly deeper when observed at the resolution we assume for the spectrograph. Figure \ref{fig:gascells} shows a schematic arrangement of the gas cells, coupled with optical fibers, that can be used to generate the absorption spectra seen in Figure 1. The effective free spectral range of each order, $\sim \lambda/m$, corresponds to a $\sim$15 nm bandwidth at 1550 nm. As can be seen from Figure 1, the HCN cell alone provides 20 sharp reference lines within $\pm8$ nm of the order center, a region where the blaze efficiency is expected to be $> 40$\% of peak \citep{Schroeder}. In reality the echelle order spans a larger wavelength coverage than $\sim \lambda/m$ on the CCD, making more lines accessible for wavelength calibration. The design of echelles also leads to some wavelength overlap between adjacent orders, allowing the same line to be used to calibrate more than one order. Gratings operating at lower order numbers will have even higher line densities per order if the orders fit on the detector array. Typical echelle spectra span 7-12 pixels in the slit direction, and we assume 7 pixels here. Assuming that the continuum light source is bright enough (discussed in detail later), a typical continuum S/N of 200 per pixel is a reasonable assumption, assuring also that one is operating in the linear regime of NIR detectors (which typically have full-well capacities of $\sim$100k electrons). The data reduction process will optimally combine all 7 pixels in the slit direction into a single pixel with a S/N of $200\sqrt{7}$, or S/N $\sim 500$ per pixel, in the extracted one-dimensional spectrum. Although the actual shape of the absorption line is a Voigt profile, we can approximate the line as a Gaussian for the purpose of estimating the velocity precision. Following the procedure outlined in \citet{Butler96} for calculating the photon-noise limited velocity precision yields an error of $\sim$24 m s$^{-1}$ for a line with a depth (after convolution with the spectrograph instrument profile) of 20\%. So the use of $\sim$20 lines reasonably well spaced across the echelle order can, in principle, enable wavelength calibration accuracy at the level of $\sim 5$ m s$^{-1}$ per echelle order. This uncertainty is smaller than the photon noise error expected per order for most stars observed, and is therefore sufficient. Many HCN and C$_2$H$_2$ lines are deeper than 30\%, while the CO lines have lower depths of $\sim 10-18$\% and line widths broader than the spectrograph resolution, leading to typical errors of 25-50 m s$^{-1}$ per line. However, these calculations are specifically for the NIST SRM cells, and custom-made longer pathlength CO cells (or more passes through the same cell) can create deeper absorption lines if they are required.
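The per-line and per-order numbers above can be reproduced with a short numerical sketch of the \citet{Butler96} gradient-weighting estimate, $\sigma_v=[\sum_i (dF_i/dv)^2/\epsilon_i^2]^{-1/2}$, applied to a single Gaussian line; the grid and noise parameters below are the assumptions stated in the text.
\begin{verbatim}
import numpy as np

# Photon-limited velocity error for one Gaussian absorption line:
# sigma_v = [ sum_i (dF_i/dv)^2 / eps_i^2 ]^(-1/2)  (Butler et al. 1996).
c = 2.99792458e8
R, snr, depth = 50_000, 500.0, 0.20  # resolution, extracted S/N, line depth
fwhm = c / R                         # resolution element in velocity [m/s]
pix = fwhm / 4.0                     # 4-pixel sampling of the FWHM
s = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))

v = np.arange(-5.0 * fwhm, 5.0 * fwhm, pix)        # pixel grid [m/s]
flux = 1.0 - depth * np.exp(-0.5 * (v / s) ** 2)   # normalized profile
dfdv = np.gradient(flux, pix)                      # flux derivative
sigma_v = 1.0 / np.sqrt(np.sum((dfdv * snr) ** 2)) # eps_i = 1 / snr
print(f"one line: {sigma_v:.0f} m/s; "
      f"20 lines: {sigma_v / np.sqrt(20):.1f} m/s")
# ~20-25 m/s per line and ~5 m/s per order, as quoted above
\end{verbatim}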
We speculate that significantly larger path lengths than provided by conventional cells may be achievable in the future by directly using long lengths of gas-filled hollow-core photonic crystal fibers now under development \citep{Benabid05, Tuominen}. These gas cells can also be used at higher (or lower) spectral resolutions than the R=50-70k we have considered here. To demonstrate this, we follow the prescription of Bouchy, Pepe \& Queloz (2001) to calculate a Quality factor (Q) that is an estimate of the intrinsic radial velocity information contained in the spectrum. The Q factor depends on factors like the spectral richness of the wavelength region being considered, the sharpness of the absorption lines, as well as the instrument profile. For a given wavelength region, the limiting radial velocity precision ($\sigma_{RMS}$) is given by \begin{equation} \sigma_{RMS} = \frac{c}{Q\sqrt{N}}, \end{equation} where $c$ is the velocity of light, $Q$ the Quality factor, and $N$ the total number of photons collected. As expected, the radial velocity precision improves as $\sqrt{N}$. We calculate the Q factor for the HCN SRM for different spectral resolutions using the wavelength range 1524-1565 nm. Figure \ref{fig:qfactor} shows the Quality factor as a function of the spectral resolution. At low spectral resolutions all the individual lines blur into each other, leading to very low velocity information content. At intermediate spectral resolutions (R=50-100k) the information content is a strong function of spectral resolution, as the absorption lines, which have line widths of 7-12 pm, are still unresolved. When the spectral resolution exceeds 200,000, the individual lines themselves begin to be resolved and the Q factor begins to plateau, asymptotically approaching its finite value at very high resolutions. Beyond a resolution of R=300k the lines are fully resolved, and the Q factor increase is minimal. Currently planned NIR high-resolution instruments all operate in the R=20-100k regime, and the HCN \& C$_2$H$_2$ SRM cells are well-suited to these. The commercially available CO cells, however, have larger line-widths, making them less desirable for the higher spectral resolutions; custom cells can easily be manufactured. Beyond high-resolution spectrographs, the absorption cells can also be used in other NIR instruments. Planned H band spectrographs like the fiber-fed APOGEE instrument for SDSS-III (Allende Prieto et al. 2008), or dispersed fixed-delay interferometer instruments (Guo et al. 2006) in the NIR, can use such cells for wavelength calibration. It is important to recognize that the absorption line spacings in these cells are sparse when compared to absorption line densities in typical M dwarf spectra. The high continuum S/N is absolutely necessary to ensure that the error in the wavelength calibration is smaller than the achievable precision on M dwarfs. This is one of the reasons we have discussed this approach only in the context of a fiber-fed instrument. If such cells were to be used for simultaneous calibration by passing starlight through them, then the achievable precision may be severely limited by the reference rather than the star. Such an approach should be considered only when the need to calibrate out instrument drifts is more important than achieving close to photon noise limited precision on the stellar target.
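For completeness, the Quality factor used above can be computed from any model spectrum following Bouchy, Pepe \& Queloz (2001), with $W_i=\lambda_i^2 (dA/d\lambda)_i^2/A_i$ and $Q=\sqrt{\sum_i W_i/\sum_i A_i}$; the sketch below evaluates it on a toy one-line spectrum whose parameters are purely illustrative.
\begin{verbatim}
import numpy as np

# Quality factor of Bouchy, Pepe & Queloz (2001):
#   W_i = lam_i^2 (dA/dlam)_i^2 / A_i,  Q = sqrt(sum W_i / sum A_i),
# and sigma_RMS = c / (Q * sqrt(N)) for N detected photoelectrons.
def quality_factor(lam, flux):
    dAdl = np.gradient(flux, lam)
    W = lam ** 2 * dAdl ** 2 / flux
    return np.sqrt(W.sum() / flux.sum())

# toy spectrum: a single unresolved line on a flat continuum
lam = np.linspace(1549.0, 1551.0, 2000)  # nm
flux = 1.0 - 0.3 * np.exp(-0.5 * ((lam - 1550.0) / 0.01) ** 2)
Q, N = quality_factor(lam, flux), 1e8    # N is an assumed photon count
print(f"Q = {Q:.0f}, sigma_RMS = {2.998e8 / (Q * np.sqrt(N)):.1f} m/s")
\end{verbatim}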
\subsection{Wavelength Accuracy \& Stability} The vacuum wavelengths of the line centers for the species in the SRMs can be, and have been, measured very accurately at low temperatures and pressures. For example, the measured vacuum wavelengths at low pressure for the $3\nu$ lines of \isotope[12]{C}\isotope[16]{O} \citep{PG97} agree with the theoretically calculated values from the HITRAN database \citep{Rothman05} to 0.02 pm (or $< 4$ m s$^{-1}$ at 1.6 $\mu$m). For higher cell pressures, the lines begin to broaden and also shift due to the interaction of molecules during elastic collisions. Accounting correctly for this pressure shift is the dominant source of uncertainty in determining the line centers of the absorption lines. The shift is accounted for by explicitly measuring it at various pressures and fitting a linear relationship. The resulting line centers for the higher pressure CO gas SRMs are certified by NIST to 0.4-0.7 pm (all $2\sigma$ errors), HCN from 0.04-0.24 pm and C$_2$H$_2$ from 0.1-0.6 pm \citep{SG00,SG02,SG05}. So the line centers themselves are known to an accuracy of 4-60 m s$^{-1}$ ($1 \sigma$ errors), quite comparable with the Th-Ar emission line errors quoted by PE83. A significant fraction of this error is actually the uncertainty in the pressure of the cells themselves, which it may be possible to constrain better with custom-made cells. If a higher accuracy is really necessary, then high-resolution FTS spectra can be used to independently calibrate the cells. Instruments known to possess high stability over a few hours can also take the approach of \citet{LP07} by acquiring many Th-Ar exposures to build up high S/N to determine a wavelength solution, and using that solution to determine the line centers of the absorption cells. Of perhaps more concern is the inherent stability of the absorption lines themselves. A change in operating temperature affects the pressure of the cell, which, due to the pressure shift, leads to a shift of the line center. The wavelength shifts $\Delta \lambda$ due to pressure at temperatures $T$ and $T_m$ are related by \citep{SG00} \begin{equation} \Delta \lambda (T) = \Delta \lambda (T_m) \sqrt{(T/T_m)} . \end{equation} A one degree change in temperature at an operating temperature of 296 K leads to a 0.17\% change in the pressure-induced wavelength shift. For the high pressure CO cells, the pressure-induced shift is at most 3 pm, and that one degree change leads to a line shift of $\sim$0.005 pm, which is a sub-m s$^{-1}$ effect. Given the fairly loose temperature requirements, the entire enclosure containing the cell assembly (Figure \ref{fig:gascells}) could be temperature stabilized to $\pm1\,^{\circ}$C using heating elements running in a proportional-integral-derivative (PID) loop with temperature sensors. If necessary, temperature control to $\pm0.1\,^{\circ}$C is easily achieved if each cell is stabilized independently with a heating element wrapped around it, and can make temperature effects negligible. The absorption cells are relatively immune to any other environmental effect since they are contained in sealed glass tubes. Unlike Th-Ar lamps, absorption cells do not have a warm-up period (typically 15-30 minutes) during which the lines are not stable enough for precision applications, nor a limit on their operating lifetime. They are also passive, requiring only a light source for operation.
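The magnitude of the temperature effect described above follows from differentiating the $\sqrt{T/T_m}$ scaling; a minimal numerical check (the values are those quoted above):
\begin{verbatim}
# Temperature sensitivity of the pressure-induced line shift:
# d(shift)/shift = 0.5 * dT/T from the sqrt(T/T_m) scaling above.
c = 2.99792458e8
T, dT = 296.0, 1.0     # operating temperature and excursion [K]
shift_pm = 3.0         # worst-case pressure shift, CO cells [pm]
lam_nm = 1600.0

d_shift_pm = shift_pm * 0.5 * dT / T
dv = c * (d_shift_pm * 1e-12) / (lam_nm * 1e-9)
print(f"line-center change: {d_shift_pm:.4f} pm -> {dv:.2f} m/s")
# ~0.005 pm, i.e. a sub-m/s effect for a 1 K excursion
\end{verbatim}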
\subsection{Bright White Light Source} The single-mode fiber-coupled gas cells make for a compact and easily configurable set of wavelength references, but the narrow core and spatial filtering properties of such fibers make them inherently difficult to couple to traditional illumination sources like quartz lamps. A number of white light sources coupled to single-mode fibers are now commercially available for the telecom S, C and L bands (1400-1630 nm), which are well matched to the spectral regime covered by these reference cells. Alternatively, tunable diode lasers are also available that can scan through the regions of interest many times per second. Such sources may easily be adaptable for spectroscopic applications in astronomy. The science fiber coupled to the spectrograph needs to be multi-mode to couple effectively to the telescope. To ensure that the science and calibration fibers illuminate the spectrograph optics in the same way, the calibration fiber should be identical to the science fiber. Expanding the light from a single-mode fiber to the correct focal ratio and coupling into the multi-mode science and calibration fibers poses no major challenge if this is deemed necessary. As mentioned before, the use of single-mode fibers is not mandatory if multi-pass configurations are not required. In such a case, a multi-mode fiber is adequate, still enabling the light to be passed through multiple cells. The penalty of this approach is the unwieldy 80 cm length of the \isotope[]{C}\isotope[]{O} cells, and the advantage is that a conventional quartz-lamp light source is quite sufficient. \subsection{Isotopologues} The NIST standards are deliberately designed to span as large a range as possible in the telecom bands. For such applications, line densities are not as critical as for the broadband wavelength calibration application in astronomy. Available line densities and wavelength coverage can be increased by using isotopologues of these molecules, which exhibit similar levels of stability but have absorption lines at different wavelengths. For CO the NIST cells are specifically chosen to be \isotope[12]{C}\isotope[16]{O} and \isotope[13]{C}\isotope[16]{O}. The HITRAN database contains line transition parameters for 4 other CO isotopologues: \isotope[12]{C}\isotope[17]{O}, \isotope[12]{C}\isotope[18]{O}, \isotope[13]{C}\isotope[17]{O}, and \isotope[13]{C}\isotope[18]{O}. Figure \ref{fig:CO} shows the absorption lines for all 6 CO isotopologues listed above. Together, they substantially increase the line densities available for calibration in this region of the H band, and extend coverage to longer wavelengths. The filled circles in the figure correspond to zero-pressure line centers from the HITRAN database. Transmission values are all scaled to that of \isotope[12]{C}\isotope[16]{O} because the purpose of the figure is to illustrate the increase in density and coverage. Actual transmission is best derived from experimental data for such cells, and the length required for gas cells of the four additional isotopologues may be different from those of \isotope[12]{C}\isotope[16]{O} and \isotope[13]{C}\isotope[16]{O}. Even higher line densities and coverage than shown here may be possible using \isotope[14]{C}\isotope[]{O}, even though \isotope[14]{C} is unstable and undergoes $\beta$ decay to \isotope[14]{N} with a half-life of $\sim 5730$ years.
Similarly, the many known isotopologues of \isotope[12]{C}$_2$H$_2$ and HCN can also be used to generate additional absorption lines. While the use of isotopologues is attractive for increasing line densities, the pressure-shifted line centers of many of these species need to be determined before they can be used. Bootstrapping a wavelength solution off the known NIST-calibrated lines is possible for very stable instruments, as described earlier. In general, however, the isotopologue gas cells should also be calibrated independently, like the NIST SRMs. \subsection{Additional Molecular References in the NIR J \& H bands} Thus far we have primarily discussed the use of the NIST absorption gas cells and their isotopologues in the NIR H band. As discussed earlier, gas cell calibration standards in this wavelength regime exist primarily because of the needs of the telecommunication industry. Although such well-characterized cells are not commercially available in the J band, it is worth briefly discussing the molecules that are promising possibilities in this wavelength regime as well. Hydrogen Fluoride has a series of sharp absorption lines in the 867-909 nm and 1257-1340 nm regions. Methane and water have lines in the J \& H bands, but both are also atmospheric species. D'Amato et al. (2008) have demonstrated that HCl, HBr and HI exhibit absorption lines in the 1.2-1.4 $\mu$m region, though getting deep absorption lines requires path lengths approaching 1 meter. Fiber-fed gas cells similar to the NIST SRMs can easily be used to provide the path lengths necessary for moderately deep absorption lines in such cases. HCl has a series of lines in the 1185-1240 nm and 1720-1870 nm regions, which complements well the molecules we have discussed for the H band, and spans parts of the J band. \subsection{Safety Considerations in Dealing with Gas Cells} For practicality, the gas cells must not be toxic or lethal, since a leak or a breakage would significantly impact operations. Acetylene is not known to be toxic even if inhaled in large concentrations. Hydrogen Cyanide is lethal in large amounts since it inhibits enzymes in the electron transport chain of cells, thereby making normal cell functioning impossible. The trace amounts of HCN found in the NIST SRM cells are quite safe: even if the HCN contents of the {\it entire} SRM gas cell were to be respired, the increase in the cyanide content of the blood would be minimal. Inhaled Carbon Monoxide is lethal in large concentrations since it binds to haemoglobin, preventing oxygen transport. However, CO is also harmless in the small trace amounts found in the gas cells. Methane and water are commonly occurring atmospheric species and are not toxic. Hydrogen Fluoride is very corrosive and attacks glass. HF gas cells have to be specially designed with materials that do not react with the gas, and the cells have to be handled carefully; such HF cells have been in use for calibration for a number of years now. The gas cells we have explored here are all safe to use in a laboratory or a confined environment, and breakages or leaks, if they happen, are not a cause for major concern. \section{Discussion} We have explored the relative advantages of different wavelength calibration techniques for high-resolution fiber-fed NIR echelle spectrographs. We conclude that a series of commercially available absorption cell standards can be used to wavelength calibrate echelle data over a significant fraction of the H band, covering over 120 nm with four gas cells.
Some of these cells require long path lengths, but the use of single-mode fibers enables compact multi-pass configurations with small diameter cells that can easily be integrated into a calibration unit. Although the absorption lines are very stable, their absolute line centers are not known a priori to better than 4-60 m s$^{-1}$. In principle, this uncertainty can be remedied by acquiring a high-resolution FTS spectrum or by measuring the lines with tunable diode lasers. In practice, even this solution may not be necessary for most applications. We hope to demonstrate the achievable precision of these cells with FIRST, a high-resolution silicon immersion grating based instrument in the preliminary design stage \citep{Ge06}, and to further explore their potential use as calibrators. We have also considered only the four commercially available NIST cells. By using additional gases, isotopologues and cell lengths, one may be able to extend this technique to span larger regions of the H band with more and deeper absorption lines. Many NIR instruments in the design or construction stage may benefit from using the calibration approach we have outlined here. Such an approach may, in concert with a Th-Ar lamp used simultaneously for regions without an absorption reference, be able to provide adequate wavelength calibration for most high precision applications until frequency-comb technology is mature, commonly available, and less expensive. We are very grateful to Dimitri Veras, Curtis DeWitt and Fred Hearty for useful discussions and a careful reading of this manuscript. We thank Steve Blazo, Wavelength References, for useful discussions about multi-gas cells and light sources. We acknowledge support from the NSF through grant AST-0705139, NASA through grants NNX07AP14G and NNG05GR41G, the UCF-UF SRI program, and the University of Florida.
\section{Introduction} Over the past ten years, Artificial Neural Networks (ANNs) have become the model of choice for machine learning tasks in many modern applications. Although not completely understood today, the reasons for their success are believed to be mathematical, statistical and computational. From the point of view of approximation theory, ANNs approximate smooth functions well. For instance, a single hidden layer neural net with a diverging number of neurons is dense in the class of compactly supported continuous functions \citep{citeulike:3561150}, and the first error rate derived \citep{256500} motivates shallow learning (few layers) \citep{7069264,Kostadinov2018:EUVIP}. Some results show that deep learning is superior to shallow learning in the sense that fewer parameters are needed to achieve the same level of accuracy for a smoothness and compositional class of functions, in which case deep learning avoids the curse of dimensionality; see \citet{Poggio2017} for a review. \citet{DBLP:journals/corr/abs-1901-02220} prove that deep neural networks provide information-theoretically optimal approximation of a very wide range of functions used in signal processing. \citet{Chen:1995:UAN:2325866.2328543} and related papers extend the results to wider classes of functions. Approximation bounds for sparse neural networks, that is, networks with bounded connectivity, have been studied for instance by \citet{DBLP:journals/corr/BolcskeiGKP17}, who show a link between the degree of connectivity and the complexity of a function class. In machine learning, the success of ANNs is huge and, in part, can be attributed to their expressiveness or capacity (ability to fit a wide variety of functions). The very large number of parameters and the layer structure of ANNs make them impossible to interpret. ANNs are overparametrized, with multiple distinct settings of the parameters leading to the same prediction, so traditional measures of model complexity based on the number of parameters do not apply. This makes understanding and interpreting the predictions challenging. Yet in scientific applications, one often seeks to do just that. In keeping with Occam's razor, among all the models with similar predictive capability, the one with the smallest number of features should be selected. Statistically, models with fewer features not only are easier to interpret but can produce predictors with good statistical properties, because such models disregard useless features that contribute only to higher variance. Operationally, the model selection paradigm often uses a validation set or cross-validation (in which the data are randomly split, models are built on a training set and predictions are evaluated on a testing set). While conceptually elegant, (cross-)validation sets are of limited use if feature selection is of interest (they tend to select many irrelevant features), if fitting a single model is computationally expensive, or if the sample size is small (in which case splitting the data leaves few observations). ANNs, and in particular deep ANNs, are computationally expensive to fit, so cross-validation is an expensive way of selecting model complexity. Aiming at good predictive performance on a test set, also known as \emph{generalization}, cross-validation is a poor feature selector as it tends to select too many features.
In addition, the quadratic prediction error from cross-validation exhibits an unexpected behavior for models of increasing complexity: as expected, the training error always decreases with an increasing number of input features, but while the quadratic prediction error on the test set is at first U-shaped (initially decreasing thanks to decreasing bias, and then increasing due to an excess of variance), it then unexpectedly decreases a second time. This phenomenon, known as \emph{double descent}, has been empirically observed \citep{AdvaniSaxe2017,ClementHongler2019}. For least squares estimation regularized by an $\ell_2$ ridge penalty \citep{ridgeHK}, double descent has been mathematically described for two-layer ANNs with random first-layer weights by \citet{MeiMontanari2019} and \citet{HastieMRT2019}. They show that for high signal-to-noise ratio (SNR) and large sample size, high complexity is optimal for the ridgeless limit estimator of the weights, leading to a smooth and more expressive interpolating learner. In other words, interpolation is good and leads to double descent, which, after careful thinking, should not be a surprise since the interpolating ANN becomes smoother with an increasing number of layers, and therefore interpolates better between training data. Indeed, with high SNR, the signal is almost noiseless, so a smooth interpolating function will perform well for future prediction. But data are not always noiseless, and in noisy regimes, that is, with low SNR and small sample size, \citet{MeiMontanari2019} observe that regularization is needed, as expected. In this paper, we present an alternative to the use of a validation set geared towards identifying important features. Specifically, we develop an automatic feature selection method for simultaneous feature extraction and generalization. For ease of exposition, we present our novel method in the context of regression and classification, noting that the ideas can be ported beyond. Our approach exploits ideas from statistical hypothesis testing that directly focus on identifying significant features, and this without explicitly minimizing the generalization error. Similar ideas percolate through the statistics literature; see for example \citet{JS04}, \citet{CDS99}, \citet{Tibs:regr:1996} with LASSO, and \citet{BuhlGeer11}, who propose methods for finding {\em needles in a haystack} in linear models. In this context, the optimized criterion is not the prediction error, but the ability to retrieve the needles (i.e., relevant features). Useful criteria include the stringent exact support recovery criterion, and softer criteria such as the false discovery rate (FDR) and true positive rate (TPR). Of course, some regularization methods have already been developed to enforce sparsity on the weights of ANNs. For example, {\em dropout} leaves out a certain number of neurons to prevent overfitting, which incidentally can be used to perform feature selection \citep{DBLP:journals/corr/abs-1207-0580,DBLP:journals/jmlr/SrivastavaHKSS14}. Sparse neuron architectures can be achieved by other means: \citet{pmlr-v70-mollaysa17a} enforce sparsity based on the Jacobian, and \citet{DeepFeatureSelection2016,10.5555/2976456.2976557,10.5555/2981562.2981711,MemoryBound2014,DBLP:journals/corr/abs-1901-01021} employ an $\ell_1$-based LASSO penalty to induce sparsity. \citet{TSNNaT21} prune their ANNs based on a metric for neuron importance. \citet{Evci2019TheDO} discuss the difficulty of training sparse ANNs.
{\tt spinn} (sparse input neural networks) \citep{feng2019sparseinput} has a sparsity-inducing penalty and is governed by two hyperparameters chosen on a validation set; its improved version {\tt spinn-dropout} (the former originally published in 2017) adds a dropout mechanism governed by an additional hyperparameter \citep{pmlr-v80-ye18b}. So {\tt spinn-dropout} is a mix between $\ell_1$ and $\ell_0$ (subset selection) sparsity-inducing methods, similar to the pruning idea \citep{8578988,ChaoWXC20}. None of these learners has been studied in terms of a phase transition in the probability of retrieving features. All of these sparsity-inducing methods suffer from two drawbacks: (1) the selection of the penalty parameter is rarely addressed, and when it is, the selection is based on a validation set or cross-validation, two approaches geared towards good generalization performance, not feature identification; (2) the ability to recover the ``right'' features has not been quantified through the prism of a phase transition in the probability of support recovery; only {\tt spinn} and {\tt spinn-dropout} consider criteria related to FDR and TPR. This paper is organized as follows. Section~\ref{sct:tf} presents the theoretical framework and defines our LASSO ANN learner. Section~\ref{subsct:functionestimation} defines the statistical model and notation. Section~\ref{subsct:SANN} reviews the LASSO sparsity paradigm for linear models and extends it to ANNs. Section~\ref{subsct:activation} discusses the choice of activation functions. Section~\ref{subsct:lambda} derives a selection rule for the penalty parameter, a generalization of the universal threshold \citep{Dono94b} to non-convex optimization due to the nonlinearity of ANN models. Section~\ref{subsct:opti} discusses optimization issues in solving the non-convex, high-dimensional and non-differentiable optimization problem. Section~\ref{sct:MCsimu} evaluates via simulations the ability of our method to exhibit a phase transition in the probability of exact support recovery for the regression task. Section~\ref{sct:appli} evaluates, on a large number of real data sets, the ability of our method to perform feature selection and generalization for the classification task. Section~\ref{sct:conclusion} summarizes the findings and points to future developments. \section{LASSO ANN} \label{sct:tf} \subsection{Function estimation model and notation} \label{subsct:functionestimation} Suppose $n$ pairs of output-input data $({\cal Y}, {\cal X})=\{({\bf y}_i,{\bf x}_i) \}_{i=1}^n$ are collected to learn about their association. For example, in some medical applications (see Section~\ref{subsct:classif}), ${\bf x}\in{\mathbb R}^{p_1}$ is an input vector of $p_1$ gene expressions and ${\bf y}$ is any of $m$ cancer types coded as a one-hot output vector of ${\mathbb R}^{m}$; classification aims at assigning the correct type of cancer given an input vector. In regression, $y$ is a scalar ($m=1$), for instance the riboflavin production rate in a bacterium (see Section~\ref{subsct:regression}). To model their stochastic nature, the data can be modeled as realizations from the pair of random vectors $({\bf Y},{\bf X})$. We assume the real-valued response ${\bf Y} \in {\mathbb R}^{m}$ is related to the real-valued feature vector ${\bf X}\in {\mathbb R}^{p_1}$ through the conditional expectation \begin{equation} \label{eq:condexpect} {\mathbb E}[{\bf Y}\mid {\bf X}={\bf x}] = \mu({\bf x}), \end{equation} for some unknown function $\mu: {\mathbb R}^{p_1} \rightarrow \Gamma\subseteq {\mathbb R}^m$.
In regression, $\Gamma={\mathbb R}$ and in classification, $\Gamma=\{{\bf p}\in ({\mathbb R}^+)^m: \sum_{k=1}^m p_k=1\}$. Many learners have been proposed to model the association $\mu$ between input and output. A recent approach that is attracting considerable attention models $\mu$ as a standard fully connected ANN with $l$ layers \begin{equation} \label{eq:muANN} \mu_{\boldsymbol \theta}({\bf x})= S_l \circ \ldots \circ S_1\left( {\bf x}\right), \end{equation} where ${\boldsymbol \theta}$ are the parameters (see \eqref{eq:theta12}) indexing the ANN, and, letting ${\bf u}={\bf x}$ at the first layer, the nonlinear functions $S_k({\bf u})=\sigma({\bf b}_k + W_k {\bf u})$ map the $p_k\times 1$ vector ${\bf u}$ into a $p_{k+1}\times 1$ latent vector obtained by applying an activation function $\sigma$ component-wise, for each layer $k < l$. The vectors ${\bf b}_k$ are commonly named ``biases.'' The matrix of weights $W_k$ is $p_{k+1} \times p_k$ and the operation $+$ is the broadcasting operation. The last layer $k=l$ has two requirements. First, we must have $p_{l+1}=m$ to match the output dimension, so the last function is $S_l({\bf u})=G({\bf c}+W_l {\bf u})$, where $W_l$ is $m \times p_l$ and the intercept vector ${\bf c}\in {\mathbb R}^{m}$. Second, the function $G: {\mathbb R}^{m}\rightarrow \Gamma$ is a link function that maps ${\mathbb R}^{m}$ into the parameter space $\Gamma$. Commonly used link functions for classification are \small \begin{eqnarray} G({\bf u})&=&\left(\frac{\exp\{u_1\}}{\sum_{k=1}^{m}\exp\{u_k\}}, \cdots, \frac{\exp\{u_m\}}{\sum_{k=1}^{m}\exp\{u_k\}}\right)^{\rm T} \label{eq:GSoft} \\ G({\bf u})&=&\left(\frac{\exp\{u_1\}}{\sum_{k=1}^{m-1}\exp\{u_k\} +1}, \cdots, \frac{\exp\{u_{m-1}\}}{\sum_{k=1}^{m-1}\exp\{u_k\}+1},\frac{1}{\sum_{k=1}^{m-1}\exp\{u_k\}+1}\right)^{\rm T}\label{eq:Glogit} \end{eqnarray} \normalsize respectively called Softmax and multiclass-Logit. For regression, $G(u)=u$. The parameters indexing the neural network are therefore \begin{equation} \label{eq:theta12} {\boldsymbol \theta}=(( W_1, {\bf b}_1, \ldots, {\bf b}_{l-1}), (W_2, \ldots, W_l,{\bf c}))=:({\boldsymbol \theta}_1, {\boldsymbol \theta}_2) \end{equation} for a total of $\gamma=\sum_{k=1}^l p_{k+1}(p_k+1)$ parameters. The following property is straightforward to prove, but is crucial for our methodology; it is the reason for splitting ${\boldsymbol \theta}$ into ${\boldsymbol \theta}_1$ and ${\boldsymbol \theta}_2$. \begin{property} \label{prop:propconstant} Assuming the activation function satisfies $\sigma(0)=0$, then setting ${\boldsymbol \theta}_1={\bf 0}$ implies $\mu_{\boldsymbol \theta}({\bf x})$ is the constant function $\mu({\bf x})={\bf c}$ for all~${\bf x} \in \mathbb{R}^{p_1}$. \end{property} Our estimation goal for ${\boldsymbol \theta}$ is two-fold. First, we want to generalize well, that is, given a new input vector, we want to predict the output with precision. Second, we believe that only a few features in the $p_1$-long input vector carry information to predict the output. So our second goal is to find needles in the haystack by selecting a small subset of the $p_1$ inputs. For many of the medical data sets treated in Section~\ref{sct:appli}, the input ${\bf x}$ is a vector of hundreds of gene expressions, and genetics aims to identify the ones having an effect on the output.
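To make the model \eqref{eq:muANN} concrete, the following minimal Python sketch implements the forward map for regression ($G$ the identity) with the centered {\tt softplus} activation, and checks Property~\ref{prop:propconstant} numerically; the dimensions are arbitrary, and the $\ell_2$-normalization of the deeper layers introduced in the next subsection is omitted for brevity.
\begin{verbatim}
import numpy as np

# Minimal sketch of the fully connected model mu_theta for regression
# (G = identity), with the centered softplus so that sigma(0) = 0.
def sigma(u):
    return np.log1p(np.exp(u)) - np.log(2.0)

def mu(x, weights, biases, c):
    """weights = [W_1, ..., W_l], biases = [b_1, ..., b_{l-1}]."""
    u = x
    for W, b in zip(weights[:-1], biases):
        u = sigma(b + W @ u)
    return c + weights[-1] @ u   # last layer: G(c + W_l u)

rng = np.random.default_rng(0)
p1, p2, m = 5, 3, 1
W1, W2 = rng.normal(size=(p2, p1)), rng.normal(size=(m, p2))
b1, c = rng.normal(size=p2), np.array([0.7])

x = rng.normal(size=p1)
print(mu(x, [W1, W2], [b1], c))
# Property 1: with theta_1 = (W_1, b_1) = 0 the output is the constant c
print(mu(x, [0.0 * W1, W2], [0.0 * b1], c))   # equals c for any x
\end{verbatim}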
Feature selection has been extensively studied for linear associations, showing a phase transition between regimes where the features can be retrieved with probability near one and regimes where the probability of retrieving the features is essentially zero. Our goal is to investigate such a phase transition with ANN learners to retrieve features in nonlinear associations. \subsection{Sparse estimation} \label{subsct:SANN} Finding needles amounts to setting to non-zero values the weights corresponding to features in ${\bf x}$ that carry predictive information. So we seek sparsity in the first layer, on the weights~$W_1$. For the other layers, large weights in a layer could compensate for small weights in the next layer, so we bound them by forcing unit $\ell_2$-norm; instead, \citet{feng2019sparseinput} and \citet{pmlr-v80-ye18b} take the approach of a ridge penalty controlled by an additional hyperparameter fixed to the arbitrary value of $0.0001$. More precisely, we slightly modify the nonlinear terms in~\eqref{eq:muANN} and define the $j^\text{th}$ nonlinear function $S_{k,j}$ in layer $k$ as \begin{equation} \label{eq:Skj} S_{k,j}({\bf u})= \left \{ \begin{array}{ll} \sigma\left({\bf b}_1^{(j)} + \langle {\bf w}_1^{(j)}, {\bf u} \rangle \right) & k=1\\ \sigma\left({\bf b}_k^{(j)} +\frac{ \langle {\bf w}_k^{(j)}, {\bf u} \rangle}{\left \Vert{\bf w}_k^{(j)} \right\Vert_2} \right) & 1< k <l \\ G\left ({\bf c}+ \frac{ \langle {\bf w}_k^{(j)},{\bf u} \rangle}{\left \Vert {\bf w}_k^{(j)}\right \Vert_2}\right ) & k=l \end{array} \right . , \quad j \in \{1,\ldots,p_{k+1}\}, \end{equation} where ${\bf w}_k^{(j)}$ is the $j^\text{th}$ row of $W_k$. At the last layer ($k=l$), ${\bf c}$ plays the role of an intercept. Sparsity in the first layer allows interpretability of the fitted model. To enforce sparsity and control overfitting, we take the conventional approach inspired by LASSO of minimizing a compromise between a measure~${\cal L}_n$ of closeness to the data and a measure of sparsity $P$. Owing to Property~\ref{prop:propconstant}, we estimate the parameters ${\boldsymbol \theta}=({\boldsymbol \theta}_1, {\boldsymbol \theta}_2)$ defined in~\eqref{eq:theta12} by aiming at the best local minimum \begin{equation} \label{eq:L1} \hat {\boldsymbol \theta}_\lambda = \arg \min_{ {\boldsymbol \theta}\in {\mathbb R}^\gamma} {\cal L}_n ({\cal Y} , {\mu}_{\boldsymbol \theta}( {\cal X})) + \lambda\ P({\boldsymbol \theta}_1) \end{equation} found by a numerical scheme, where $\lambda>0$ is the regularization parameter of the procedure and $P$ is a sparsity-inducing penalty \citep{10.1561/2200000015}. We stress that our method is driven by the selection of a single regularization parameter $\lambda$, as opposed to other methods that use two or three hyperparameters \citep{pmlr-v80-ye18b,feng2019sparseinput}. Common loss functions between the training responses ${\cal Y}$ and the predicted values ${\mu}_{\boldsymbol \theta}( {\cal X})$ include: for $m$-class classification the cross-entropy loss ${\cal L}_n ({\cal Y} , {\mu}_{\boldsymbol \theta}( {\cal X}))=-\sum_{i=1}^n {\bf y}_i^{\rm T}\log\mu_{\boldsymbol{\theta}}({\bf x}_i)$, where the $\log$ function is applied component-wise to the $m$-long vector $\mu_{\boldsymbol{\theta}}({\bf x}_i)$; for regression, the loss ${\cal L}_n ({\cal Y} , {\mu}_{\boldsymbol \theta}( {\cal X}))=\sum_{i=1}^n ( {y}_i- {\mu}_{\boldsymbol{\theta}}( {\bf x}_i))^2$.
A commonly used penalty is the $\ell_q$ sparsity-inducing penalty used by waveshrink~\citep{Dono94b} and LASSO~\citep{Tibs:regr:1996} for $q=1$ and group-LASSO~\citep{Yuan:Lin:mode:2006} for $q=2$ \begin{equation} \label{eq:Pq} P({\boldsymbol \theta}_1)= \sum_{j=1}^{p_1}\|{\bf w}_{1,j}\|_q + \sum_{k=1}^{l-1} \|{\bf b}_k \|_q , \end{equation} where ${\bf w}_{1,j}$ is the $j^\text{th}$ column of $W_1$. The choice $q=2$ forces the $j^\text{th}$ feature to be either on or off across all neurons, while $q=1$ is more flexible since a feature can be on in one neuron and off in another one; so, in the sequel, we use $q=1$. The reason for penalizing the biases as well is that the gradient of the loss function with respect to the biases at zero is zero and the Hessian is positive semi-definite (see Appendix~\ref{app:Hessian}), hence not guaranteeing a local minimum. ANNs are flexible in the sense that they can fit nonlinear associations. A more rigid and older class of models that has been extensively studied is the class of linear models \begin{equation} \label{eq:lm} \mu_{\boldsymbol \theta}^{\rm lin}({\bf x})= c+\sum_{j=1}^{p_1} \beta_j x_j, \end{equation} where here the set of parameters ${\boldsymbol \theta}=(\beta_1, \ldots, \beta_{p_1}, c)=:({\boldsymbol \theta}_1, c)$ is assumed $s$-sparse, that is, only $s$ entries of ${\boldsymbol \theta}_1$ are different from zero. Here again, like for $W_1$ in ANNs, a non-zero entry in ${\boldsymbol \theta}_1$ corresponds to an entry in the input vector ${\bf x}$ that is relevant to predict the response. For a properly chosen penalty parameter $\lambda$, LASSO has the remarkable property of retrieving the non-zero entries of ${\boldsymbol \theta}_1$ in certain regimes (that depend on $n$, $p_1$, SNR, training locations~${\cal X}$ and amount $s$ of sparsity); this has been well studied in the noiseless and noisy scenarios by \citet{CandesTao05,DonohoDL06,6034731,BuhlGeer11}, for instance. In particular, the value of $\lambda$ must bound the sup-norm of the gradient of the empirical loss at zero with high probability when ${\boldsymbol \theta}_1={\bf 0}$ for LASSO to satisfy oracle inequalities. For linear models in wavelet denoising theory \citep{Dono94b}, this approach leads to an asymptotic minimax property. Our contribution is to extend the linear methodology to the nonlinear one, and to investigate how well our extension leads to a phase transition for discovering underlying nonlinear lower-dimensional structures in the data. \subsection{Choice of activation functions} \label{subsct:activation} Since the weights from layer two onwards are bounded on the $\ell_2$-ball of unit radius \eqref{eq:Skj}, we require the activation function $\sigma\in {\cal C}^2({\mathbb R})$ to be unbounded. For reasons related to Property~\ref{prop:propconstant} and the choice of the hyperparameter $\lambda$, it must also be null and have a positive derivative at zero: \begin{equation} \label{sigma(0)=0} \sigma(0)=0 \quad {\rm and} \quad \sigma'(0)>0. \end{equation} The centered {\tt softplus} function $\sigma_{\rm softplus}(u)=\log(1+\exp(u))-\log(2)$ for example satisfies this requirement. The {\tt ReLU} (Rectified Linear Unit) function $\sigma_{\rm ReLU}(u)=\max(u,0)$ does not, because it is not differentiable at zero. A legitimate question for a statistician is whether ANNs can retrieve interactions between covariates. Projection pursuit models \citep{FS81} have this ability, which additive models do not have.
For ANNs, owing to their mathematical property of being dense in smooth function spaces, the answer is yes, but with a large number of neurons/parameters when conventional activation functions like {\tt softplus} and {\tt ReLU} are used. The following activation functions (which satisfy the requirements \eqref{sigma(0)=0}) allow interactions to be identified in a sparse way. \begin{definition} The smooth activation rescaled dictionary is the collection of activation functions defined by \begin{equation}\label{eq:sigmafamily} \sigma_{M,u_0,k}(u)=\frac{1}{k}(f(u)^k-f(0)^k) \quad {\rm with} \quad f(u)=\frac{1}{M}\log(1+\exp\{M (u+u_0)\}) \end{equation} indexed by $M>0, u_0>0, k >0$. For $u_0=1$ the dictionary is rescaled in the sense that $\lim_{M\rightarrow \infty}\sigma'_{M,u_0,k}(0)=1$. \end{definition} For finite $M$, $\sigma_{M,u_0,k}\in {\cal C}^\infty$. For $(u_0,k)=(0,1)$, the family includes two important activation functions: {\tt softplus} for $M=1$ and {\tt ReLU} as $M$ tends to infinity; with $M=20$, an excellent smooth approximation of ReLU is achieved. Supposing the association is a single second-order interaction, that is, $\mu(x)=x_i x_j$ for some pair $(i,j)$, then $\mu(x)=\mu_\theta^{\rm ANN}(x)$ with \begin{eqnarray*} \mu_\theta^{\rm ANN}(x)&=&-1+ \sigma_{\infty,1,2}(x_i+x_j-1)+ \sigma_{\infty,1,2}(-x_i-x_j-1)\\ &&- \sigma_{\infty,1,2}(x_i-1)- \sigma_{\infty,1,2}(-x_i-1)- \sigma_{\infty,1,2}(x_j-1)- \sigma_{\infty,1,2}(-x_j-1) \end{eqnarray*} since $x^2/2=1+ \sigma_{\infty,1,2}(x-1)+ \sigma_{\infty,1,2}(-x-1)$. When the ANN model employs both linear and quadratic ReLU, selecting neurons with $\sigma_{\infty,1,k}$ with $k=2$ may reveal interactions between the selected features. Moreover, since $\lim_{M\rightarrow \infty}\sigma'_{M,u_0,k}(0)=u_0^{k-1}$, choosing $u_0=1$ scales all the activation functions in the sense that their derivatives at zero are asymptotically (as $M\rightarrow \infty$) equal for all~$k$. Rescaling allows mixing activation functions with different $k$ in the same ANN; in particular, for our choice of hyperparameter $\lambda$, it allows factorizing by $\sigma'(0)$ in Theorem~\ref{th:lambda0} below. Moreover, since sparsity is of interest, zero is a region where the cost function ought to be smooth for optimization purposes; hence choosing $u_0=1$ also makes the wiggliness of the loss function bounded at zero, since $\sigma_{M,1,1}''(0)=M/\exp(M)$ while $\sigma_{M,0,1}''(0)=M/4$ (which reflects that ReLU is not differentiable at zero). \subsection{Selection of penalty $\lambda$} \label{subsct:lambda} The proposed choice of $\lambda$ is based on Property~\ref{prop:propconstant}. It shows that fitting a constant function is achieved by choosing $\lambda$ large enough to set the penalized parameters $ {\boldsymbol \theta}_1$ to zero when solving the penalized cost function~\eqref{eq:L1}. For convex loss functions and linear models, the quantile universal threshold \citep{Giacoetal17} achieves this goal with high probability under the null model that the underlying function is indeed constant. This specific value $\lambda_{\rm QUT}$ has good properties for model selection outside the null model as well \citep{Dono94b, Dono95asym}. The quantile universal threshold has so far been developed and employed for cost functions that are convex in the parameters, hence guaranteeing that any local minimum is also global.
The cost function in~\eqref{eq:L1} is not convex for ANN models, so we extend the quantile universal threshold by guaranteeing with high probability a local minimum at the sparse point of interest ${\boldsymbol \theta}_{1}={\bf 0}$. This can be achieved thanks to the penalty term $\lambda\ P({\boldsymbol \theta}_{1})$ that is part of the cost function in~\eqref{eq:L1}, provided $\lambda$ is large enough to create a local minimum with $\hat {\boldsymbol \theta}_1={\bf 0}$. The following theorem derives an expression for the zero-thresholding function $\lambda_0({\cal Y}, {\cal X})$, which gives the smallest $\lambda$ that guarantees a minimum with $\hat {\boldsymbol \theta}_1={\bf 0}$, for given output--input data $({\cal Y}, {\cal X})$. \begin{theorem} \label{thm:lambda_choice} Consider the optimization problem~\eqref{eq:L1} with $P({\boldsymbol \theta}_{1})$ defined in~\eqref{eq:Pq} with $q=1$, activation function $\sigma \in {\cal C}^2({\mathbb R})$ and loss function ${\cal L}_n\in {\cal C}^2(\Gamma^n)$ such that $\hat {\bf c}= \arg \min_{ {\bf c}\in {\mathbb R}^m} {\cal L}_n({\cal Y},{\bf c}^{\otimes n})$ exists. Let $ {\boldsymbol \theta}^0=({\bf 0}, W_2 , \ldots, W_l,\hat {\bf c})$ with arbitrary values $W_{k}$ for layers $2$ to $l$. Define ${ g}_0({\cal Y}, {\cal X}, {\boldsymbol \theta}^0)=\nabla_{{\boldsymbol \theta}_{1}} {\cal L}_n({\cal Y},{\boldsymbol \mu}_{{\boldsymbol \theta}^0}({\cal X}))$. If $\lambda > \lambda_0({\cal Y}, {\cal X})=\sup_{(W_2, \ldots, W_{l})}\| { g}_0({\cal Y}, {\cal X}, {\boldsymbol \theta}^0) \|_\infty$, then there is a local minimum to~\eqref{eq:L1} with $(\hat {\boldsymbol \theta}_{1,\lambda}, \hat {\bf c}_\lambda)=({\bf 0}, \hat {\bf c})$. \end{theorem} The proof of Theorem~\ref{thm:lambda_choice} is provided in the appendix; it could be made more general for $q\geq 1$ using H{\"o}lder's inequality. In regression for instance, if the loss function between ${\cal Y}\in {\mathbb R}^n$ and $c {\bf 1}$ with $c\in {\mathbb R}$ and ${\bf 1} \in {\mathbb R}^n$ is ${\cal L}_n({\cal Y},c{\bf 1}) = \|{\cal Y}-c {\bf 1}\|_2$, then $\hat c=\bar {\cal Y}$, the average of the responses. Based on~$\lambda_0({\cal Y}, {\cal X})$, the following theorem extends the universal threshold to non-convex cost functions. \begin{theorem} \label{th:QUT} Given training inputs ${\cal X}$, define the random set of outputs ${\cal Y}_0$ generated from~\eqref{eq:condexpect} with $\mu({\cal X})=\mu_{\boldsymbol \theta}({\cal X})$ defined in~\eqref{eq:muANN} for any activation function satisfying~\eqref{sigma(0)=0} under the null hypothesis $H_0: {\boldsymbol \theta}_{1}={\bf 0}$, that is, $H_0: \mu_{\boldsymbol \theta}={\bf c}$ is a constant function. Letting the random variable $\Lambda=\lambda_0({\cal Y}_0, {\cal X})$ and $F_\Lambda$ be the distribution function of $\Lambda$, the quantile universal threshold is $\lambda_{\rm QUT}=F^{-1}_\Lambda(1-\alpha)$ for a small value of $\alpha$. It satisfies \begin{equation} {\mathbb P}_{H_0}(\mbox{there exists a local minimum to~\eqref{eq:L1} such that } \mu_{\hat {\boldsymbol \theta}_{\lambda_{\rm QUT}}}\mbox { is constant})\geq 1-\alpha. \end{equation} \end{theorem} The law of $\Lambda$ is unknown but can be easily estimated by Monte Carlo simulation, provided there exists a closed-form expression for the zero-thresholding function $\lambda_0({\cal Y}, {\cal X})=\sup_{(W_2, \ldots, W_{l})}\| { g}_0({\cal Y}, {\cal X}, {\boldsymbol \theta}^0) \|_\infty$.
The following theorem states a simple expression for $\lambda_0({\cal Y}, {\cal X})$ in two important cases: classification and regression.
\begin{theorem} \label{th:lambda0}
Consider a fully connected $l$-layer ANN employing a differentiable activation function $\sigma$ and let $\pi_l=\sqrt{\prod_{j=3}^l p_{j}}$ for $l\geq 3$, $\pi_2=1$, ${\cal Y}_\bullet={\cal Y}-{\bf 1}_{n}\bar {\cal Y}$ and $\|A\|_\infty=\max_{j=1,\ldots,p} \sum_{i=1}^k |a_{ji}|$ for a $p\times k$ matrix $A$.
\begin{itemize}
\item In classification, using the cross-entropy ${\cal L}_n ({\cal Y} , {\mu}_{\boldsymbol \theta}( {\cal X}))=-\sum_{i=1}^n {\bf y}_i^{\rm T}\log\mu_{\boldsymbol \theta}({\bf x}_i)$ and for the Softmax link function $G$ in~\eqref{eq:GSoft}, we have
\begin{equation} \label{eq:lambda0C}
\lambda_0({\cal Y}, {\cal X}) = \pi_l \sigma'(0)^{l-1} \|{\cal X}^{\rm T} {\cal Y}_\bullet \|_\infty;
\end{equation}
\item In regression, for ${\cal L}_n=\| {\cal Y} - \mu_{ \boldsymbol\theta}({\cal X}) \|_2$, we have
\begin{equation} \label{eq:lambda0G}
\lambda_0({\cal Y}, {\cal X}) = \pi_l \sigma'(0)^{l-1} \frac{\|{\cal X}^{\rm T} {\cal Y}_\bullet \|_\infty}{\|{\cal Y}_\bullet\|_2} .
\end{equation}
\end{itemize}
\end{theorem}
Theorem~\ref{th:QUT} states that the choice of $\lambda$ is simply an upper quantile of the random variable $\Lambda=\lambda_0({\cal Y}_0, {\cal X})$, where ${\cal Y}_0$ is the response generated under the null hypothesis that ${\boldsymbol \theta}_1={\bf 0}$. The upper quantile of $\Lambda$ can be easily estimated by Monte-Carlo simulation. In regression, and assuming Gaussian errors, the null distribution is ${\cal Y}_0 \sim {\rm N}(c {\bf 1}, \xi^2 I_n)$. Both the constant $c$ and $\xi^2$ are unknown, however, and $\xi^2$ is difficult to estimate in high dimension. Fortunately, one observes first that \eqref{eq:lambda0G} involves only the mean-centered responses ${\cal Y}_\bullet$ and therefore does not depend on $c$; second, both numerator and denominator are proportional to $\xi$. Consequently, $\Lambda$ is a pivotal random variable in the Gaussian case, and knowledge of $c$ and $\xi^2$ is therefore not required to derive our choice of hyperparameter $\lambda_{\rm QUT}$. This well-known fact, inspired by square-root LASSO \citep{BCW11}, motivates the use of ${\cal L}_n=\| {\cal Y} - \mu_{ \boldsymbol\theta}({\cal X}) \|_2$ rather than ${\cal L}_n=\| {\cal Y} - \mu_{ \boldsymbol\theta}({\cal X}) \|_2^2$. In classification, the null distribution is ${\cal Y}_0 \sim {\rm Multinomial}(n, {\bf p}=G({\bf c}))$. The constant vector ${\bf c}$ is unknown and the random variable $\Lambda$ with $\lambda_0$ defined in~\eqref{eq:lambda0C} is not pivotal; moreover, \citet{Holland73} proved that no covariance stabilizing transformation exists for the trinomial distribution. The approach we take is therefore to assume that the training outputs ${\cal Y}$ reflect the proportion of classes in future samples seeking class prediction: if $\hat {\bf p}$ are the proportions of classes in the training set, then the null distribution is ${\cal Y}_0 \sim {\rm Multinomial}(n, {\bf p}=\hat {\bf p})$. The quantile universal threshold derived under this null hypothesis is appropriate if future data come from the same distribution, which is a reasonable assumption.
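In the regression case, the pivotality of $\Lambda$ makes the Monte Carlo estimation of $\lambda_{\rm QUT}$ particularly simple. The following {\tt Python} sketch is our own illustration (the function name is hypothetical); it draws standard Gaussian null responses and evaluates the closed form~\eqref{eq:lambda0G}, with {\tt p}$=[p_1,\ldots,p_l]$ the layer widths, so that for the networks used in the next section $\pi_3=\sqrt{10}$ and $\pi_4=\sqrt{50}$.
\begin{verbatim}
import numpy as np

def lambda_qut_regression(X, n_layers=2, p=None, n_mc=1000,
                          alpha=0.05, sigma_prime0=1.0, seed=0):
    # Monte Carlo estimate of lambda_QUT for the square-root l2 loss:
    # lambda_0 = pi_l * sigma'(0)^(l-1) * ||X^T Yc||_inf / ||Yc||_2,
    # with pi_l = sqrt(p_3 * ... * p_l) and pi_2 = 1.
    # Under H0 the statistic is pivotal, so Y0 ~ N(0, I_n) suffices.
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    pi_l = 1.0 if n_layers == 2 else np.sqrt(np.prod(p[2:n_layers]))
    lam = np.empty(n_mc)
    for b in range(n_mc):
        y0 = rng.standard_normal(n)
        yc = y0 - y0.mean()                      # mean-centered responses
        lam[b] = (pi_l * sigma_prime0 ** (n_layers - 1)
                  * np.abs(X.T @ yc).max() / np.linalg.norm(yc))
    return np.quantile(lam, 1.0 - alpha)         # upper alpha-quantile
\end{verbatim}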
\subsection{Optimization for LASSO ANN}
\label{subsct:opti}
For a given $\lambda$, we solve~\eqref{eq:L1} first by steepest descent with a small learning rate, and then employ a proximal method to refine the minimum by exactly setting to zero some entries of $\hat W_{1,\lambda_{\rm QUT}}$ \citep{FISTA09,10.1561/2200000015}. Solving \eqref{eq:L1} directly for the prescribed $\lambda=\lambda_{\rm QUT}$ risks getting trapped at some poor local minimum, however. Instead, inspired by simulated annealing and the warm start, we avoid thresholding too aggressively at first, and thereby possibly missing important features, by solving \eqref{eq:L1} for an increasing sequence of $\lambda$'s tending to $\lambda=\lambda_{\rm QUT}$, namely $\lambda_k=\exp(k)/(1+\exp(k))\lambda_{\rm QUT}$ for $k\in \{-1,0,\ldots,4\}$. Taking as initial parameter values the solution corresponding to the previous~$\lambda_k$ leads to a sequence of sparser approximating solutions until solving for $\lambda_{\rm QUT}$ at the last step. The computational cost is low: it requires solving~\eqref{eq:L1} approximately on the small grid of $\lambda$'s tending to $\lambda_{\rm QUT}$, using the warm start to finally solve~\eqref{eq:L1} precisely for $\lambda_{\rm QUT}$. Calculating $\lambda_{\rm QUT}$ is also cost efficient (and highly parallelizable) since it is based on an $M$-sample Monte Carlo that calculates $M$ gradients $\{{ g}_0({ y}_k, { X}, {\boldsymbol \theta}^0)\}_{k=1}^M$ using backpropagation \citep{Rumelhart:1986we} for $M$ Gaussian samples $\{y_k\}_{k=1}^M$ under $H_0$. Using $V$-fold cross-validation instead would require solving~\eqref{eq:L1} a total of $V\times L$ times, where $L$ is the number of $\lambda$'s visited until finding a (hopefully global) minimum of the cross-validation function. Using a validation set reduces complexity by a factor $V$, at the cost of using data to validate. Our quantile universal threshold approach, instead, does not require a validation set.
\section{Regression simulation study}\label{sct:simulation} \label{sct:MCsimu}
The regression problem is model~\eqref{eq:condexpect} for scalar output ($m=1$) and Gaussian additive noise with (unknown) standard deviation, here chosen $\xi=1$. To evaluate the ability to retrieve needles in a haystack, the true association $\mu$ is written as a sparse ANN that uses only $s$ of the $p_1$ entries of the inputs ${\bf x}$. We say an association $\mu$ is $s$-sparse when it uses only $s$ input entries, that is, $s=|S|$ with $S=\{j\mid x_j \mbox{ carries information in } \mu \}$. A sparse ANN learner estimates which inputs are relevant by estimating the support with
\begin{equation} \label{eq:Shat}
\hat S=\{j \mid \|\hat {\bf w}_{1,j}\| > \epsilon \},
\end{equation}
where $\hat {\bf w}_{1,j}$ is the $j^\text{th}$ column of the estimated weights $\hat W_1$ at the first layer. Likewise for the linear model~\eqref{eq:lm}, the support is estimated with $\hat S=\{j \mid \hat \beta_j \neq 0\}$. Since we employ a precise thresholding algorithm to solve~\eqref{eq:L1}, we use $\epsilon=0$ to determine $\hat S$ in \eqref{eq:Shat}; other methods aiming at model selection apply a hard thresholding step with a choice of a second hyperparameter $\epsilon$ to get rid of many small values. Our method could be improved by using $\epsilon$ as another hyperparameter, but our aim is to investigate a phase transition with LASSO ANN, so we consider a single hyperparameter $\lambda$, and show that choosing $\lambda=\lambda_{\rm QUT}$ leads to a phase transition.
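As a sketch of the procedure of Section~\ref{subsct:opti} and of the support estimate~\eqref{eq:Shat}, the following simplified {\tt Python} code implements the warm start over the $\lambda_k$'s with a plain proximal-gradient (soft-thresholding) step for the $\ell_1$ penalty; it assumes a user-supplied callback {\tt loss\_grad} returning the gradient of the unpenalized loss with respect to the first-layer weights, and it collapses the two-stage descent/proximal refinement described above into a single proximal loop.
\begin{verbatim}
import numpy as np

def soft_threshold(W, tau):
    # Proximal operator of tau*||W||_1 (entrywise soft-thresholding).
    return np.sign(W) * np.maximum(np.abs(W) - tau, 0.0)

def fit_lasso_ann(loss_grad, W1, lam_qut, lr=1e-3, n_steps=2000):
    # Warm start: lambda_k = exp(k)/(1+exp(k)) * lambda_QUT, k = -1,...,4,
    # followed by a final precise solve at lambda_QUT itself.
    lams = [np.exp(k) / (1 + np.exp(k)) * lam_qut for k in range(-1, 5)]
    for lam in lams + [lam_qut]:
        for _ in range(n_steps):
            W1 = soft_threshold(W1 - lr * loss_grad(W1), lr * lam)
    return W1

def support(W1_hat):
    # Estimated support (eq. Shat) with epsilon = 0: non-zero columns of W1.
    return np.flatnonzero(np.linalg.norm(W1_hat, axis=0) > 0)
\end{verbatim}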
To quantify the performance of the tested methods, we use four criteria: the probability of exact support recovery ${\rm PESR}={\mathbb P}(\hat S=S)$, the true positive rate ${\rm TPR}={\mathbb E}\left ( \frac{|S \bigcap \hat S|}{|S|} \right )$, the false discovery rate ${\rm FDR}={\mathbb E}\left ( \frac{| \bar S \bigcap \hat S|}{|\hat S| \lor 1} \right )$, and the generalization or predictive error ${\rm PE}^2={\mathbb E}(\mu(X)-\hat \mu(X))^2$. Although stringent, the PESR criterion reaches values near one in certain regimes. In fact, a phase transition has been observed for linear models: PESR is near one when the complexity parameter $s$ is small, and suddenly decreases to zero when $s$ becomes larger. One wonders whether this phenomenon is also present for nonlinear models, which we investigate below. A high TPR with good control of a low FDR is also of interest; these are less stringent criteria than a high PESR. Generalization remains of great concern: ideally, a learner should have a high TPR and a low FDR along with good generalization performance. We consider four learners: a standard ANN with {\tt keras} available in TensorFlow (with its {\tt optimizer=`sgd'} option) and no sparsity-inducing mechanism; {\tt spinn} (sparse input neural networks) \citep{feng2019sparseinput} with sparsity mechanisms governed by two hyperparameters chosen on a validation set; {\tt spinn-dropout} (whose {\tt Python} code was kindly provided to us by the first author) \citep{pmlr-v80-ye18b} with sparsity-inducing mechanisms (including dropout) governed by three hyperparameters chosen on a validation set; and our LASSO ANN with a sparsity-inducing penalty governed by a single hyperparameter chosen by QUT (i.e., no validation set required). For LASSO ANN we use two- to four-layer ANNs with $(p_2,p_3,p_4)=(20, 10, 5)$, the activation function $\sigma_{20,1,1}$ defined in~\eqref{eq:sigmafamily} and the $\ell_1$-LASSO penalty. {\tt spinn} and {\tt spinn-dropout} use ReLU. The ReLU activation function makes it possible to write sparsely a linear association (Section~\ref{subsct:lin}) and the nonlinear absolute value function (Section~\ref{subsct:nonlin}). With Monte-Carlo simulations estimating PESR, TPR, FDR and PE in two different settings, we compare the four learners as a function of the model complexity parameter $s$, for fixed sample size $n$ and a signal-to-noise ratio governed by $(\xi, \theta)$. The first simulation assumes a sparse linear association and compares LASSO ANN to the benchmark square-root LASSO for linear models. The second simulation assumes a sparse nonlinear association. These deliberately simple sparse associations reveal interesting phase transitions in the ability of LASSO ANN to retrieve needles in a haystack. Compared with the more complex (i.e., more than one hyperparameter) {\tt spinn} and {\tt spinn-dropout} learners, we observe more coherent phase transitions with LASSO ANN in terms of PESR, TPR and FDR.
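The support-recovery criteria can be estimated from Monte-Carlo repetitions as in the following minimal sketch (the helper name is hypothetical; PE, which requires evaluating $\hat \mu$ on fresh inputs, is omitted).
\begin{verbatim}
import numpy as np

def recovery_criteria(S_true, S_hat_list):
    # Estimate PESR, TPR and FDR from repeated support estimates;
    # S_true is the index set S, S_hat_list a list of estimated supports.
    S = set(S_true)
    pesr = np.mean([set(Sh) == S for Sh in S_hat_list])
    tpr  = np.mean([len(S & set(Sh)) / max(len(S), 1) for Sh in S_hat_list])
    fdr  = np.mean([len(set(Sh) - S) / max(len(Sh), 1) for Sh in S_hat_list])
    return pesr, tpr, fdr
\end{verbatim}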
\subsection{Linear associations}
\label{subsct:lin}
The linear model~\eqref{eq:lm} is the most commonly used and studied model, so we investigate in this section how LASSO ANN compares to a state-of-the-art method for linear models, here square-root LASSO \citep{BCW11} (using the {\tt slim} function in the {\tt flare} library in {\tt R}). This also allows us to investigate the impact of the loss of convexity for ANNs.

Assuming the linear association is $s$-sparse, this section compares the ability to retrieve the $s$ relevant input entries assuming either a linear model (the benchmark) or a nonlinear model using fully connected ANNs. The aim of the Monte Carlo simulation is to investigate:
\begin{enumerate}
\item whether LASSO ANN has a phase transition and, if so, how close it is to the phase transition of square-root LASSO which, assuming a linear model, should be difficult to improve upon. We consider two selection rules for $\lambda$ for square-root LASSO: QUT and using a validation set to minimize the predictive error.
\item how the quantile universal threshold $\lambda_{\rm QUT}$ based on~\eqref{eq:lambda0G} performs for LASSO ANN with two, three and four layers.
\item whether {\tt spinn} and {\tt spinn-dropout} have a phase transition. In an attempt to make them comparable to LASSO, we set their parameter controlling the trade--off between LASSO and group-LASSO to a small value so that their penalty is essentially LASSO's. Like LASSO, {\tt spinn} and {\tt spinn-dropout} use a validation set to tune their hyperparameters. Results with their default values are not as good and are not reported here.
\end{enumerate}
This experimental setting allows various interesting comparisons: linear versus nonlinear models to retrieve a linear model, and model selection- (QUT) versus validation set-based choice of the hyperparameter(s). We estimate the PESR criterion of the three methods with a Monte-Carlo simulation with $100$ repetitions. Each sample is generated from an $s$-sparse linear model with $s\in \{0,1,2,\ldots,16\}$; the sample size is $n=100$ and the dimension of the input variables is $p_1=2n$. \citet{preciseunder10} studied in the noiseless case the performance of $\ell_1$-regularization as a function of $\delta=n/p_1$ and $\rho=s/n$ (for us, $\delta=1/2$ and $\rho=s/100$) and found a PESR phase transition. To be close to their setting, we assume the input variables are i.i.d.~standard Gaussian with a moderate signal-to-noise ratio: the $s$ non-zero linear coefficients $\beta_j$ in~\eqref{eq:lm} are all equal to $3$ and the standard deviation of the Gaussian noise is $\xi=1$. ANN models with ReLU fit linear models sparsely. Indeed, a two-layer ANN with a single activated neuron with $s$ non-zero entries in the weights $W_1$ matches the linear function in the convex hull of the data, as stated in the following property.
\begin{property} \label{property:linearANN}
Using the ReLU activation function, an $s$-sparse linear function restricted to the convex hull of the $n$ data vectors $\{{\bf x}_i\}_{i=1,\ldots,n}$ can be written as a two-layer neural network with a single neuron whose row matrix $W_1$ has $s$ non-zero entries.
\end{property}
The proof of Property~\ref{property:linearANN} is provided in the appendix. The convex hull includes the $n$ observed covariates which enter the square-root $\ell_2$-loss in~\eqref{eq:L1}. So the sparsest two-layer ANN model that solves the optimization and that is a linear model in the convex hull of the data has a single neuron. The ANN fit is no longer linear outside the convex hull, however, which makes the prediction error PE poor there; we therefore do not report PE for the linear model, since the ANN model will perform poorly for test data outside the convex hull of the training data.
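Property~\ref{property:linearANN} can be illustrated numerically; the sketch below is our own construction following the idea behind the property (names are hypothetical): the bias is chosen so that the single ReLU neuron is active on all training points, where the linear map attains its minimum over their convex hull.
\begin{verbatim}
import numpy as np

def linear_as_one_neuron(beta, X):
    # Represent x -> beta @ x on the convex hull of the rows of X with a
    # single ReLU neuron; W1 = beta keeps the same s non-zero entries.
    b1 = -np.min(X @ beta)     # neuron active on all training points
    c = -b1                    # output bias cancels the shift
    def mu(x):                 # output weight w2 = 1
        return np.maximum(x @ beta + b1, 0.0) + c
    return mu
\end{verbatim}
On (and inside) the convex hull of the data, {\tt mu(x)} equals $\beta^{\rm T}x$ exactly.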
Figure~\ref{fig:linear} summarizes the results of the Monte-Carlo simulation. As in \citet{preciseunder10}, we observe a PESR phase transition. Surprisingly, little is lost with LASSO ANN (red curve) compared to the linear model based on QUT (black line), showing the good performance of both the choice of $\lambda_{\rm QUT}$ and the optimization employed for LASSO ANN. The linear model based on a validation set (black dashed line) shows poor performance in terms of PESR, as expected. In summary, LASSO ANN compares surprisingly well to the benchmark linear square-root LASSO with QUT by not losing much in terms of PESR. The other two ANN learners, {\tt spinn} and {\tt spinn-dropout}, cannot directly be compared to the others since they are governed by more than one hyperparameter, but, while we observe good PESR for $s$ large, their global behavior does not follow the conventional phase transition (that is, a high plateau near one for small $s$ that rapidly drops to zero for larger~$s$); the nonlinear simulation of the next section also reveals some non-conventional behaviors for these two ANN learners. Going back to LASSO ANN, we observe on the right plot of Figure~\ref{fig:linear} that using more layers slightly lowers the performance, as expected, but that the choice of $\lambda_{\rm QUT}$ for more layers still leads to a conventional phase transition.
\begin{figure}
\includegraphics[width=6.4in]{figlinear}
\caption{Monte-Carlo simulation results for the linear association plotting the estimated probability of exact support recovery (PESR). Left plot: the two black curves assume a linear model while the colored curves assume an ANN model; the two blue lines (light for {\tt spinn} and dark for {\tt spinn-dropout}) are governed by more than one hyperparameter while the red line (LASSO ANN) is governed by a single hyperparameter; the two continuous lines (black for square-root LASSO linear and red for LASSO ANN) select the hyperparameter with QUT while the dashed lines require a validation set. Right plot: LASSO ANN with 2 to 4 layers with its hyperparameter based on QUT.}
\label{fig:linear}
\end{figure}
\subsection{Nonlinear associations}
\label{subsct:nonlin}
To investigate a phase transition for nonlinear sparse associations, we consider $s$-sparse functions of the form $ \mu_{\boldsymbol{\theta}}({\bf x})= \sum_{i=1}^h 10\cdot |x_{2i}-x_{2i-1}| $ for $h \in \{0,1,\ldots,8 \}$, which corresponds to $s=2h$ needles in a nonlinear haystack with $s\in \{ 0, 2, \ldots, 16 \}$. Because this association is harder to retrieve than the linear one (due to the non-monotone nature of the absolute value function), the haystack is of size $p_1=50$ and the training set is of size $n=500$. This ratio $\delta=n/p_1=10$ seems to be the limit at which needles can be recovered with LASSO ANN.
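For concreteness, the simulated data of this subsection can be generated as follows (a sketch under the stated assumptions: i.i.d.~standard Gaussian inputs, as in the linear setting, and Gaussian noise with $\xi=1$).
\begin{verbatim}
import numpy as np

def mu_abs(X, h):
    # s = 2h needles: mu(x) = sum_{i=1}^h 10*|x_{2i} - x_{2i-1}|
    # (0-based columns: pairs (0,1), (2,3), ...).
    return 10.0 * np.abs(X[:, 1:2*h:2] - X[:, 0:2*h:2]).sum(axis=1)

rng = np.random.default_rng(0)
n, p1, h, xi = 500, 50, 4, 1.0        # here s = 2h = 8 needles
X = rng.standard_normal((n, p1))
y = mu_abs(X, h) + xi * rng.standard_normal(n)
\end{verbatim}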
The association $\mu_{\boldsymbol{\theta}}({\bf x})$ is well approximated by a sparse two-layer ANN employing the smooth activation function $\sigma_{20,1,1}$ and with $c=10s$, ${\bf w}_{2}=(10\cdot {\bf 1}_{h}^{T}, {\bf 0}_{p_2/2-h}^{T}, 10\cdot {\bf 1}_{h}^{T}, {\bf 0}_{p_2/2-h}^{T})$, ${\bf b}_{1}=-{\bf 1}_{p_2}$ and
\newcommand\coolrightbrace[2]{%
\left.\vphantom{\begin{matrix} #1 \end{matrix}}\right\}#2}
\begin{equation}
W_1=\left [ \begin{array}{c} W \\ -W \end{array} \right ] \ {\rm with} \ W =\left [ \begin{array}{cccccccccccccc} -1 & 1 & 0 & 0 & \ldots & & & & & &\ldots & 0\\ 0 & 0 & -1 & 1 & 0 & \ldots & & & & & \ldots& 0\\ \vdots & &&&& \vdots &\vdots&&& && & \vdots \\ 0 & \ldots & & & \ldots& 0 & -1 & 1 & 0 & \ldots & \ldots & 0 \\ 0 & \ldots & &&& \ldots &&& & &\ldots & 0 \\ \vdots & &&&&&&&&&& \vdots \\ 0 & \ldots & &&& \ldots &&&&&\ldots& 0\\ \end{array} \right ] \begin{matrix} \coolrightbrace{0 \\ 0 \\ \vdots \\ 0}{h\hspace{21.5pt}}\\ \coolrightbrace{0 \\ \vdots \\ 0 }{\frac{p_2}{2}-h}\\ \end{matrix}.
\end{equation}
The sign flip in the second block of $W_1$ produces, through the two one-sided parts, the absolute values $|x_{2i}-x_{2i-1}|$. The columns of $W_1$ being sparse, a LASSO penalty is more appropriate than a group-LASSO penalty. Figure~\ref{fig:nonlinear} reports the estimated PESR, TPR, FDR and PE criteria as a function of the sparsity level $s$. We observe that, as for linear models, LASSO ANN (red lines for two to four layers) has a PESR phase transition thanks to a good trade--off between high TPR and low FDR. Moreover, LASSO ANN has better generalization performance in this setting than the off-the-shelf ANN learner (green lines). The other two ANN learners, {\tt spinn} and {\tt spinn-dropout} (light and dark blue, respectively), perform somewhat better in terms of PESR thanks to more than one hyperparameter, though not in a monotone way for {\tt spinn-dropout}; moreover, the FDR of {\tt spinn} and {\tt spinn-dropout} is not well controlled along the sparsity range indexed by $s$. The good FDR control of LASSO ANN is striking, in particular at $s=0$ where its value is near $\alpha=0.05$, as expected, proving the effectiveness not only of QUT but also of the optimization algorithm. Finally, as far as generalization is concerned, the sparsity-inducing learners perform better than the conventional ANN learner since the underlying ANN model is indeed sparse. Because LASSO ANN not only selects a sparse model but also shrinks, its predictive performance is not as good as that of {\tt spinn} and {\tt spinn-dropout}, whose regularization parameters are selected to generalize well.
\begin{figure}
\includegraphics[width=6.4in]{fignonlinear}
\caption{Monte-Carlo simulation results for the nonlinear association plotting the estimated probability of exact support recovery (PESR -- top left), generalization performance (PE -- top right), true positive rate (TPR -- bottom left) and false discovery rate (FDR -- bottom right). The red curves are for LASSO ANN with its hyperparameter based on QUT with two (continuous) to four layers (dashed). The two blue lines (light for {\tt spinn} and dark for {\tt spinn-dropout}) are governed by more than one hyperparameter selected based on a validation set. The green curve is a standard ANN (without sparsity constraint).}
\label{fig:nonlinear}
\end{figure}
\subsection{Conclusions of the Monte Carlo simulations}
With a single hyperparameter, LASSO ANN has a phase transition for both linear and nonlinear associations, together with good FDR control. This reveals that the quantile universal threshold and the optimization scheme employed perform well.
With the linear simulation, we observe that the impact of the loss of convexity is mild with LASSO ANN, since we essentially obtain the same phase transition as with a linear model. The other ANN learners considered do not have a conventional phase transition and do not control their FDR well; yet, with the help of more hyperparameters, they are able to generalize well.
\section{Application to real data}
\label{sct:appli}
\subsection{Classification data}
\label{subsct:classif}
The characteristics of 26 classification data sets are listed in Table~\ref{tab:classification-data}, in particular the sample size $n$, the number of inputs $p_1$ and the number of classes $m$. Most inputs are gene expressions, but there are also FFT-preprocessed time series and other types of inputs. We randomly split the data into training (70\%) and test (30\%) sets, repeating the operation 100 times. Figure~\ref{fig:classif} reports the results for four data sets chosen for their ratios $n/p_1$ and their number of classes $m$ (marked with a $\dagger$ in Table~\ref{tab:classification-data}). The left boxplots of Figure~\ref{fig:classif} report classification accuracy, and the right boxplots report the number of selected needles $\hat s$. High accuracy along with a low $\hat s$ reflects good needle selection. The results of the remaining 22 sets are plotted in the scatter plot of Figure~\ref{fig:scattplot}. We train and test the following learners: LASSO GLM with $\lambda$ chosen to minimize the $10$-fold cross-validation error \citep{GLMnet} in {\tt R} with {\tt glmnet}, CART \citep{cart84} in {\tt R} with {\tt rpart}, random forest \citep{breiman2001random} in {\tt R} with {\tt randomForest}, SPINN in {\tt Python} for binary classification ({\tt https://github.com/jjfeng/spinn}; no code available for multiclass or for {\tt spinn-dropout}), a standard ANN learner in {\tt Python} with {\tt keras} and its {\tt optimizer=`adam'} option, and our LASSO ANN with two layers in {\tt Python}. For random forest, there is no clear way of counting the number of needles, but we choose to select as needles those inputs whose corresponding p-values (provided by {\tt randomForestExplainer}) are smaller than $\alpha=0.05$ after a Bonferroni adjustment. Random forest is an ensemble learner that combines CARTs, so the comparison between CART and random forest quantifies the improvement achieved by ensembling, while the comparison between CART and LASSO ANN is fairer since neither is an ensemble learner.
\tiny
\begin{table}[h!]
\caption{Some data characteristics (results for data with $\dagger$ are plotted in Figure~\ref{fig:classif}).}
\begin{center}
\resizebox{1\columnwidth}{!}{
\begin{tabular}{llrrrrl}
\toprule
{\bf Dataset} & {\bf Domain} & {\bf n} & ${\bf p}_1$ &{\bf n}/${\bf p}_1$& {\bf m}&{\bf Source} \\
\midrule
Climate &Climate model&540&18&30.0000&2&UCI-MLR\\
Breast$\dagger$ &Breast cancer &569&30&18.9667&2&python sklearn\\
Wine$\dagger$&Wine&178&13&13.6923&3&python sklearn\\
Connectionist&Connectionism&208&60&3.4667&2&UCI-MLR\\
Bearing&Engine noise&952&1024&0.9297&4&CWRU data center\\
Sorlie&Breast cancer&85&456&0.1864&5&R: datamicroarray\\
BCI$\_$2240$^\dagger$&Brain signal&378&2240&0.1688&2& BCI competition \\
Christensen&Medical&217&1413&0.1536&3&R: datamicroarray\\
Genes&Cancer RNA&801&12356&0.0648&5&UCI-MLR\\
Gravier&Breast cancer&168&2905&0.0578&2&R: datamicroarray\\
Alon&Colon cancer&62&2000&0.0310&2&R: datamicroarray\\
Khan&Blue cell tumors&63&2308&0.0273&4&R: datamicroarray\\
Yeoh$\dagger$&Leukemia&248&12625&0.0196&6&R: datamicroarray\\
Su&Medical&102&5565&0.0183&4&R: datamicroarray\\
Gordon&Lung cancer&181&12533&0.0144&2&R: datamicroarray\\
Tian&Myeloma&173&12625&0.0137&2&R: datamicroarray\\
Shipp&Lymphoma&77&7129&0.0108&2&R: datamicroarray\\
Golub&Leukemia&72&7129&0.0101&2&R: datamicroarray\\
Pomeroy&Nervous system&60&7128&0.0084&2&R: datamicroarray\\
Singh&Prostate cancer&102&12600&0.0081&2&R: datamicroarray\\
West&Breast cancer&49&7129&0.0069&2&R: datamicroarray\\
Burczynski&Crohn's disease&127&22283&0.0057&3&R: datamicroarray\\
Chin&Breast cancer&118&22215&0.0053&2&R: datamicroarray\\
Subramanian&Medical&50&10100&0.0050&2&R: datamicroarray\\
Chowdary&Breast cancer&104&22283&0.0047&2&R: datamicroarray\\
Borovecki&Medical&31&22283&0.0014&2&R: datamicroarray\\
\bottomrule
\end{tabular}
}
\end{center}
\label{tab:classification-data}
\end{table}
\normalsize
\begin{figure}
\includegraphics[width=6.5in, height=7.5in]{figclassification}
\caption{Monte-Carlo simulation results based on four representative data sets, namely Breast, Wine, BCI$\_$2240 and Yeoh of Table~\ref{tab:classification-data}. The left boxplots report the accuracy results and the right boxplots the number of selected needles. The horizontal red line is the accuracy obtained by always predicting the most frequent class (that is, without looking at the inputs).}
\label{fig:classif}
\end{figure}
Figure~\ref{fig:scattplot} visualizes the sparsity--accuracy trade--off by plotting accuracy versus $\log(a \hat s/p_1+1)$ with $a=\exp(1)-1$, so that both axes are on $[0,1]$. Learners with points near $(0,1)$ offer the best trade--off. The left plot is for binary and the right plot for multiclass classification. Among all ANN-based learners (represented with ``o''), LASSO ANN is clearly the best.
\begin{center}
\begin{figure}[h]
\includegraphics[width=6.5in, height=4.0in]{scattplot}
\caption{Summary of Monte-Carlo results for all data sets of Table~\ref{tab:classification-data}. The $x$-axis measures sparsity on a log-scale with $\log(a\hat s/p_1+1)$, $a=\exp(1)-1$, and the $y$-axis is accuracy.}
\label{fig:scattplot}
\end{figure}
\end{center}
The main lesson of this experiment on real data sets is that LASSO ANN offers a good compromise between high accuracy and a low number of selected needles. Yet, linear learners are difficult to beat when $n/p_1\ll 1$, which corroborates our findings in regression that the sample size must be large to identify nonlinear associations.
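The $x$-coordinate of Figure~\ref{fig:scattplot} maps $\hat s\in[0,p_1]$ to $[0,1]$; a one-line {\tt Python} sketch of this transform:
\begin{verbatim}
import numpy as np

def sparsity_coordinate(s_hat, p1):
    # log(a*s_hat/p1 + 1) with a = e-1: 0 at s_hat = 0, 1 at s_hat = p1.
    return np.log((np.e - 1.0) * s_hat / p1 + 1.0)
\end{verbatim}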
\subsection{Regression data}
\label{subsct:regression}
\citet{PeterBulbiology:14} reported genetic data measuring the expression levels of $p_1= 4088$ genes on $n= 71$ Bacillus subtilis bacteria. The inputs are logarithms of gene expression measurements, among which some genes are strongly correlated, which makes selection difficult. The output is the riboflavin production rate of the bacteria. This is a high-dimensional setting in the sense that the training set is small compared to the size of the haystack. Generalization is not the goal here, but rather finding the informative genes; the scientific questions are: which genes affect the riboflavin production rate? Is the association linear? The ground truth is not known here, but LASSO-zero, a conservative method with low false discovery rate \citep{DesclouxSardy2018}, selects genes $4003$ and $2564$. Standard LASSO (using {\tt cv.glmnet} in {\tt R}) selects 30 genes including $4003$ and $2564$. Using $p_2=20$ neurons, LASSO ANN finds a single active neuron containing 9 non-zero parameters, including genes $4003$ and $2564$. \citet{feng2019sparseinput} reports 45 important genes with {\tt spinn}, and running {\tt spinn-dropout} 100 times (randomly splitting into $70\%$ training and $30\%$ validation) we find an average of 6 genes (among which $4003$ and $2564$ are rarely present). So the answers to the scientific questions are that few genes seem responsible for riboflavin production and that a linear model seems sufficient (a single neuron is active).
\section{Conclusion}
\label{sct:conclusion}
For finding needles in a nonlinear haystack, LASSO ANN is an artificial neural network learner that, with a simple principle to select its single hyperparameter, achieves: (1) a phase transition in the probability of exact support recovery and good control of the false discovery rate; (2) a consistently good trade--off between generalization and a low number of selected needles, whether in regression, binary or multiclass classification, and across various $n/p_1$ ratios. This makes it a good candidate to discover important features without too many spurious ones. Our empirical findings call for more theory to mathematically predict the regimes indexed by $(n,{\bf p},s,\xi,\theta,\sigma )$ where feature recovery is highly probable. We also introduced a class of rescaled activation functions $\sigma_{M,u_0,k}$ that can be employed within the same ANN model, for instance to fit interactions in a sparse way.
ANN models are widely used state-of-the-art black boxes. There is keen interest, especially in scientific applications, in understanding why a model makes its predictions. Sparse encoding with automatic feature selection provides a path towards such an understanding, and our work brings sparse encoding with LASSO ANN closer to practical applications. Its coherent PESR behavior and FDR control make it reliable for finding needles in nonlinear haystacks, and it could also be used for other ANN tasks requiring sparsity, e.g., sparse auto-encoding or convolutional ANNs \citep{cnn2020}. Inspired by {\tt spinn-dropout}, the idea of pruning \citep{8578988,ChaoWXC20} and, more generally, the subset selection methods that preceded LASSO, we could further improve LASSO ANN with dropout.
\section{Reproducible research}
Our codes are available at \href{https://github.com/StatisticsL/ANN-LASSO}{https://github.com/StatisticsL/ANN-LASSO}.
\section{Acknowledgments}
The first author has been supported in Switzerland by the China Scholarship Council, Award Number 202006220228.
Yen Ting Lin and Nick Hengartner have been supported by the Joint Design of Advanced Computing Solutions for Cancer program established by the U.S.~Department of Energy and the National Cancer Institute of the National Institutes of Health under Contract DE-AC52-06NA25396 and the Laboratory Directed Research and Development program under project number 20210043DR (Uncertainty Quantification for Robust Machine Learning). We thank Professor Mao Ye for providing us with the {\tt spinn} and {\tt spinn-dropout} Python codes, and Dr.~Thomas Kerdreux and Mr.~Pablo Strasser for their help with Python.
\section{Introduction \label{sec1}} \input{section1} \section{Machine Concepts \label{sec2}} \input{section2} \section{Beam properties \label{sec3}} \input{section3} \section{Neutrino Oscillation Physics Reach of a Neutrino Factory and Beta Beam \label{sec4}} \input{section4} \section{Progress on Neutrino Factory and Beta Beam Facility Design\label{sec5}} \input{section5-subsec1} \input{section5-subsec1-1} \input{section5-subsec2} \input{section5-subsec2-2} \input{section5-subsec2-1} \section{Neutrino Factory and Beta Beam R\&D \label{sec6}} \input{section6} \section{Summary \label{sec7}} \input{Updated-Summary} \clearpage \section{Recommendations \label{sec8}} \input{recommendations} \begin{acknowledgments} This research was supported by the U.S. Department of Energy under Contracts {No. DE-AC02-98CH10886}, {No. DE-AC02-76CH03000}, and {No. DE-AC03-76SF00098}. \end{acknowledgments} \section{Appendix A \label{appendixa}} \input{cost-MZ} \subsection{Cost Reduction} Here we present the cost scaling we have done with respect to FS2 cost numbers~\cite{fs2}. Since there was neither time nor engineering effort available to perform a bottom-up cost estimate for the new systems we have developed during the present Study, we have based our costs on the FS2 numbers and scaled them appropriately to derive the estimated savings from our new technical approaches. For that reason, we quote the results as a percentage of the original FS2 estimates, to avoid giving the impression that this is anything more than a ``physicist's estimate'' at this point in time. The method we employed was as follows: \begin{itemize} \item Starting from the FS2 Work Breakdown Structure system costs, we derive useful element costs per unit length, per integral rf voltage, or per unit acceleration. \item We then applied these scaling rules to the new parameters derived from this Study (see Section~\ref{sec5}) to obtain a first approximation to the revised cost. Our results are reported as costs relative to FS2. We have ignored minor corrections, such as escalating the costs to FY2004 dollars, as these are small compared with the precision of our estimate. \item Because it is expected that a future Neutrino Factory would be built as an ``upgrade'' or follow-on to a Superbeam facility, we think it likely that the Proton Driver---and quite possibly the Target facility as well---will already exist at the time the Neutrino Factory construction commences. For this reason, our costs are given with and without including the Proton Driver or Target Station. \end{itemize} Based on the costing approach we used, we expect that the unloaded hardware cost of the updated Neutrino Factory design will be reduced by about one-third compared with the original FS2 cost estimate of \$1.8B (see Table~\ref{tab:FS2costs}). \begin{table}[bhtp!] \caption{Original (unloaded) costs from FS2.\label{tab:FS2costs}} \begin{ruledtabular} \begin{tabular*}{5in}[c]{lccc} & \textbf{All} & \textbf{No Driver} & \textbf{No Driver, No Target}\\ & (\$M) & (\$M) & (\$M)\\\hline TOTAL\footnote{No ``other'', no EDIA, no contingency.} & \textbf{1832} & \textbf{1641} & \textbf{1538}\\ \end{tabular*}% \end{ruledtabular} \end{table}% \subsubsection{Proton Driver} The cost basis for FS2 was an upgrade of the AGS to 1~MW beam power. The cost used here is taken without change from FS2. As noted earlier, we anticipate that this component would already be in place to support a prior Superbeam experiment. 
In that case, it would not be part of the cost to construct a Neutrino Factory. Since the Proton Driver tends to be the most site-specific component of a Neutrino Factory, we expect the remaining costs to be largely site independent. \subsubsection{Target and Capture} The updated Target and Capture system is almost the same as that in FS2, but differs in the details. In particular, the region over which the field tapers down is shorter by 5.5~m because it tapers only to 1.75~T rather than the 1.25~T used in FS2. The cost will thus be somewhat less. We estimated this savings by subtracting the cost of 5.5~m of a 1.25~T transport channel, whose cost per meter was taken from the drift region in FS2. This is a conservative estimate, because the section eliminated had fields varying from 1.75~T to 1.25~T, whereas the savings are estimated assuming lower field transport at 1.25~T. \subsubsection{Drift Region} The first 18~m of drift is more expensive than later beam transport sections because of the required radiation shielding. We therefore treat this first 18~m of drift separately from the subsequent transport. To evaluate the cost, we took the FS2 costs for the first 18~m, and then made a correction due to the higher solenoid field in the new channel compared with FS2 (1.75~T vs. 1.25~T). Specifically, the correction involved increasing the magnet, power supply, and cryogenic costs using the second scaling formula from Green \textit{et al.}~\cite{MAGref}, \begin{equation} \text{Cost (in \$M)}\propto (BR^2L)^{0.577}. \label{green2} \end{equation} The subsequent 82~m drift requires less shielding and will thus be less expensive. In FS2, there was no equivalent simple drift from which to scale this cost. Therefore, we estimated the costs based on the magnets, power supplies, and cryogenics included in the induction linac region of FS2. As these costs were for 1.25~T magnets, we corrected for the higher 1.75~T field using Eq.~\ref{green2}. This estimate is quite conservative, because the transport magnets in the induction linacs of FS2, which were introduced inside the induction linac cores, had to meet more difficult requirements, and had more complicated cryostats. \subsubsection{Buncher and Phase Rotation} As discussed in Section~\ref{sec5}, the Buncher and Phase Rotation section adopted here is quite different from the induction-linac-based system used in FS2. The focusing now consists of an essentially continuous solenoid at 1.75~T, as in the drift, but with a radius (65~cm) sufficient for it to be located outside of the rf cavities. To estimate the cost of this solenoid, we again use the FS2 induction linac transport magnets, scaled to the appropriate parameters via Eq.~\ref{green2} (now correcting for both the higher field and the larger radius). This estimate is again conservative, because it is scaled from the more difficult transport solenoids inside the induction linacs of FS2. As described in Section~\ref{sec5}, in place of induction linacs, the present study uses a sequence of rf cavities at frequencies in the range of 200--300~MHz. The cost of these cavities, and their required rf power supplies, are scaled from the FS2 costs of cooling channel rf cavities. These costs are scaled for the different average accelerating gradients as follows: cavity cost per GeV $\propto\frac{1}{V}$, power supply cost per GeV proportional to $V$. 
\subsubsection{Cooling Section}
The rf system for the cooling channel used in the present study is essentially identical to that in FS2, so the costs per GeV are taken to be the same. The focusing lattice, however, is quite different---a simple alternating solenoid array (FOFO) instead of the more complicated, and tapered, super-FOFO lattices in FS2. We estimate the new magnet system cost by scaling from Lattice-1 of the FS2 cooling channel, using the first scaling formula in Ref.~\cite{MAGref}, which depends on the total stored energy $U$ (cost $\propto U^{0.662}$). The stored energies per unit length in the present study and the earlier FS2 lattice are in the ratio 189:382, so the new cost per meter is taken to scale as $(\frac{189}{382})^{0.662}.$ The cryogenic system cost is also scaled with the magnet costs, but based on the cryogenic costs of the FS2 phase rotation section. (We do not use cryogenic costs from the FS2 cooling channel, as these are heavily biased by the cooling requirements of the LH$_{2}$ absorbers.) This is quite a conservative estimate, because the new lattice not only has a smaller stored energy, it is also simpler. In particular, the channel adopted in this study uses only a single type of solenoid and, when powered, there are no inter-coil forces. In contrast, the FS2 lattice employed two types of solenoid magnets and had very large inter-coil forces between the ``focus'' coil pair. No cost was included for the LiH absorbers in the present estimate, as we do not yet have a good basis for one. Our expectation is that these LiH-loaded cavity windows will cost no more than the Be windows they replace, but this is presently unverified.
\subsubsection{Match to Pre-Accelerator}
A section is required to match the (momentum-dependent) beta function in the cooling channel to that in the pre-accelerator linac. In the FS2 case, beta vs. momentum in the cooling lattice was highly non-linear, with low betas at the upper and lower momentum limits and a maximum beta in the center, whereas the beta functions in the pre-accelerator were approximately linear in momentum. As failure to match the two would have resulted in significant emittance growth and particle loss, we included a matching section using 18~m of a modified 1.65~m cell (Lattice-2) cooling lattice. The optics of the first two-thirds of the matching section was adjusted to adiabatically change the beta vs. momentum shape, and raise the central beta from 20~cm to about 60~cm. The final one-third of the matching section increased the beta function to about 3~m to match the pre-accelerator optics. In the present case, the match will be simpler and less expensive because \textit{a)} the beta functions both before and after the match have similar linear momentum dependence, \textit{b)} the match requires a smaller change in beta function than was needed in FS2, and \textit{c)} the lattice on which it will be based has a considerably lower stored energy per unit length (189/1039). As a new matching section has not yet been designed, we correct the cost only for \textit{c)}, scaling the cost by this factor using the first formula in Ref.~\cite{MAGref}. This too is a conservative approach, as it is expected that the length of the new matching section will be much less than the original one.
\subsubsection{Pre-Acceleration}
For the rf system and cryogenics, the Pre-Acceleration cost is scaled from that in FS2 by the energy gain from the rf cavities. For magnets and vacuum, we scaled with length.
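The two scaling rules used repeatedly in this Appendix are straightforward to apply. The short {\tt Python} sketch below is only an illustration (the function names are ours; only the exponents come from Ref.~\cite{MAGref}); it reproduces, for example, the cooling-lattice cost factor $(189/382)^{0.662}\approx 0.63$ and the $1.75$~T vs.\ $1.25$~T drift-solenoid correction.
\begin{verbatim}
def cost_scale_dimensions(B_new, B_old, R_new=1.0, R_old=1.0,
                          L_new=1.0, L_old=1.0):
    # Second formula of Green et al. (Eq. green2): cost ~ (B R^2 L)^0.577
    return ((B_new * R_new**2 * L_new)
            / (B_old * R_old**2 * L_old)) ** 0.577

def cost_scale_stored_energy(U_new, U_old):
    # First formula of Green et al.: cost ~ U^0.662
    return (U_new / U_old) ** 0.662

print(cost_scale_dimensions(1.75, 1.25))        # ~1.21 field correction
print(cost_scale_stored_energy(189.0, 382.0))   # ~0.63 cooling lattice
\end{verbatim}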
\subsubsection{RLA} The present study makes use of a dogbone RLA to accelerate from 1.5--5~GeV. The RLA cost is scaled from the 2.5--20~GeV RLA in FS2. The number of passes is 3.5, compared with 4 for FS2. Although we favor a dogbone geometry for ease of the switchyard design, as opposed to the FS2 racetrack layout, costs per unit length, or per unit energy gain, are expected to be very similar. We took these to be the same. Similarly, the arcs are assumed to have the same average bending field as the final FS2 arc, so the cost per unit length is taken to be the same. The lengths of the arcs were chosen to provide an absolute bend angle of $420^{\circ}$ at each end of the linac. Magnet costs for the special and transport magnets were scaled with the final RLA energies. \subsubsection{FFAG} FFAG costs for all technical and conventional systems are taken from a cost algorithm based on similar scaling arguments to those used above for the other beam line sections. The algorithm we used, when applied to the FS2 RLA as a ``reality check'', gave a higher cost than determined in FS2. It thus appears to be conservative in its cost estimation. Injection and extraction kickers are assumed to be driven by typical induction-linac pulsed power sources, and will contain similar amounts of magnetic materials. Our estimated costs were based on a length of the FS2 induction linac having the same pulsed energy as required for the kickers. Transfer line lengths are taken from Ref.~\cite{InjExtRef} and include lines for both $\mu^{+}$ and $\mu^{-}$. The cost per meter of these transport lines is based on RLA arcs (magnets, power supplies, and vacuum) from FS2. \subsubsection{Storage Ring} Storage ring costs are taken, without modification, from FS2. However, in that case, there was a site-dependent constraint that no part of the downward tilted ring should fall below the nearby water table. This constraint forced the design to assume construction of the ring in an artificial hill, and also to require unusually high (hence not cost optimized) bending fields to keep the ring small. The cost at another site, without this constraint, would likely be less. \subsubsection{Overall Relative Costs} The result of applying the scaling rules outlined in this Appendix is summarized in Table~\ref{tab:FS2Acosts}. As can be seen, the present design exercise, completed as part of the APS Neutrino Physics Study, has maintained the original performance of the Neutrino Factory designed in FS2 for either muon sign, yielding either neutrinos or antineutrinos. Unlike FS2, however, the present design will supply both $\mu^{+}$ and $\mu^{-}$ simultaneously (interleaved in the bunch train), thus effectively doubling the performance compared with FS2. This has been accomplished while reducing the cost of the facility by about 1/3. While the present scaling estimate is not a replacement for a detailed engineering cost estimate, we are confident that the majority of the cost reductions identified here will survive a more rigorous treatment. We note that the design progress made in this Study is a direct result of the funding made available to the \textit{Neutrino Factory and Muon Collider Collaboration} for Neutrino Factory accelerator R\&D. 
Optimizing and refining the design of state-of-the-art facilities such as this, as well as verifying that component specifications can be met and that component costs are realistic, is critical to allowing the high-energy physics community to make sound technical choices in the future.%
\begin{table}[htbp!]
\caption{Scaled (unloaded) costs from the present study, quoted as percentages of costs determined for FS2.\label{tab:FS2Acosts}}
\begin{ruledtabular}
\begin{tabular*}{5in}[c]{lccc}
& \textbf{All} & \textbf{No Driver} & \textbf{No Driver, No Target}\\ \hline
TOTAL\footnote{Percentages of the original FS2 costs summarized in Table~\ref{tab:FS2costs}.} (\%) & \textbf{67} & \textbf{63} & \textbf{60}\\
\end{tabular*}%
\end{ruledtabular}
\end{table}%
\subsubsection{Possible Further Savings}
\begin{itemize}
\item Earlier studies indicated small performance loss if the capture solenoid is reduced from 20 to 17--18~T. If we find that this is still the case, the field specified would be reduced and some savings made.
\item Reducing the cooling channel length to 50~m would lower the cost of the channel while reducing the performance by only about 15\%. With better optimization, some or all of this loss may be recoverable.
\item The expected shorter match from the Cooling Section to the Pre-Acceleration Section should reduce the cost somewhat.
\item Increasing the number of turns in the RLA, and lowering its injection energy, should reduce the costs of the early acceleration portion of the Neutrino Factory.
\item A lower field, larger storage ring should result in some savings.
\end{itemize}
The above list suggests that, while the collaboration's efforts have been effective in reducing the costs of the major items (see Table~\ref{tab:FS2Acosts}), options still exist to reduce the costs of the lesser items as well. Thus, we are hopeful that some further cost reductions are achievable.
\section*{Preface}
\begin{center}
\huge \textbf{Preface} \normalsize
\end{center}
In response to the remarkable recent discoveries in neutrino physics, the APS Divisions of Particles and Fields, and of Nuclear Physics, together with the APS Divisions of Astrophysics and the Physics of Beams, have organized a year-long \textit{Study on the Physics of Neutrinos}~\cite{aps-study} that began in the fall of 2003. Within the context of this study, the \textit{Neutrino Factory and Beta Beam Experiments and Development Working Group} was charged with reviewing, and if possible advancing, our understanding of the physics capabilities and design issues for these two new types of future neutrino facilities. To fulfill this charge, the working group conducted a Workshop at ANL March 3--4, 2004. The presentations and discussion at this \textit{Neutrino Factory and Beta Beams Workshop}, together with the Neutrino Factory Design work of the \textit{Neutrino Factory and Muon Collider Collaboration}~\cite{MC}, form the basis for this report. Over the last few years, there has been a series of workshops exploring the design and physics capabilities of Neutrino Factories. These meetings include the international NUFACT Workshop series~\cite{nufact99,nufact00,nufact01,nufact02,nufact03}, many smaller, more specialized, workshops focused on specific parts of Neutrino Factory design and technology, and two more detailed \textit{Feasibility Studies}~\cite{fs1,fs2}.
In addition, a large body of literature documents the physics motivation for Neutrino Factories and the progress that has been made towards realizing this new type of neutrino facility. The Neutrino Factory related goals for the working group were therefore (i) to review and summarize the results of the extensive work already done, and (ii) to update the picture utilizing the design study resources of the \textit{Neutrino Factory and Muon Collider Collaboration}, and the latest results from those making detailed studies of the physics capabilities of Neutrino Factories. The Beta Beam concept is several years younger than the Neutrino Factory concept, and the community's understanding of both the physics capabilities and the required design parameters (particularly the beam energy) is still evolving. Beta Beam R\&D is being pursued in Europe, but there is no significant Beta Beam R\&D activity in the U.S. Hence, the Beta Beam related goals of the working group were necessarily more modest than the equivalent Neutrino Factory related goals. We restricted our ambitions to reviewing the evolving understanding of the physics reach coming out of work from Beta Beam proponents in Europe, and the R\&D challenges that must be met before a Beta Beam facility could be built. Possibilities for Neutrino Factory and Beta Beam facilities seem to have caught the imagination of the community. We hope that this report goes some way towards documenting why, and what is required to make these new and very promising neutrino tools a reality.
\vspace{0.5in}
\large
\begin{flushright}
Steve Geer and Mike Zisman
\end{flushright}
\normalsize
\section{Introduction}
Neutrino Factory~\cite{geer98,status_report,blondel} and Beta Beam~\cite{zucchelli} facilities offer two exciting options for the long-term neutrino physics program. In the U.S. there has been a significant investment in developing the concepts and technologies required for a Neutrino Factory, but no equivalent investment in developing Beta Beams. In the following we consider first the Neutrino Factory, and then the Beta Beam case. New accelerator technologies offer the possibility of building, in the not-too-distant future, an accelerator complex to produce and capture more than $10^{20}$ muons per year~\cite{status_report}. It has been proposed to build a Neutrino Factory by accelerating the muons from this intense source to energies of several tens of GeV, injecting them into a storage ring having long straight sections, and exploiting the intense neutrino beams that are produced by muons decaying in the straight sections. The decays
\begin{equation}
\mu^{-} \to e^{-}\nu_{\mu}\bar{\nu}_{e}\; , \qquad \mu^{+} \to e^{+}\bar{\nu}_{\mu}\nu_{e}
\label{mumpdk}
\end{equation}
offer exciting possibilities to pursue the study of neutrino oscillations and neutrino interactions with exquisite precision. To create a sufficiently intense muon source, a Neutrino Factory requires an intense multi-GeV proton source capable of producing a primary proton beam with a beam power of 1~MW or more on target. This is just the proton source required in the medium term for Neutrino Superbeams. Hence, there is a natural evolution from Superbeam experiments in the medium term to Neutrino Factory experiments in the longer term. The physics case for a Neutrino Factory will depend upon results from the next round of planned neutrino oscillation experiments.
If the unknown mixing angle $\theta_{13}$ is small, such that $\sin^{2}2\theta_{13} < O(10^{-2})$, or if there is a surprise and three-flavor mixing does not completely describe the observed phenomenology, then answers to some or all of the most important neutrino oscillation questions will require a Neutrino Factory. If $\sin^{2}2\theta_{13}$ is large, just below the present upper limit, and if there are no experimental surprises, the physics case for a Neutrino Factory will depend on the values of the oscillation parameters, the achievable sensitivity that will be demonstrated by the first generation of $\nu_e$ appearance experiments, and the nature of the second generation of basic physics questions that will emerge from the first round of results. In either case (large or small $\theta_{13}$), in about a decade the neutrino community may need to insert a Neutrino Factory into the global neutrino plan. The option to do this in the next 10--15~years will depend upon the accelerator R\&D that is done during the intervening period. In the U.S., the \textit{Neutrino Factory and Muon Collider Collaboration} (referred to herein as the Muon Collaboration, or MC)~\cite{MC} is a collaboration of 130 scientists and engineers devoted to carrying out the accelerator R\&D that is needed before a Neutrino Factory could be inserted into the global plan. Much technical progress has been made over the last few years, and the required key accelerator experiments are now in the process of being proposed and approved. The 2001 HEPAP subpanel~\cite{HEPAP} recommended a level of support that is sufficient to perform the critical accelerator R\&D during the next 10--15 years. This support level significantly exceeds the present investment in Neutrino Factory R\&D. In addition to the U.S. effort, there are active Neutrino Factory R\&D groups in Europe~\cite{UK},~\cite{CERN} and Japan~\cite{JAPAN}, and much of the R\&D is performed and organized as an international endeavor. Thus, because a Neutrino Factory is potentially the key facility for the long-term neutrino program, Neutrino Factory R\&D is an important part of the \textit{present} global neutrino program. Indeed, the key R\&D experiments are seeking funding now, and will need to be supported if Neutrino Factories are to be an option for the future. Consider next Beta Beam facilities~\cite{zucchelli}, \cite{autin}. It has been proposed to modify the Neutrino Factory concept by injecting beta-unstable radioactive ions, rather than muons, into a storage ring with long straight sections. This would produce a pure $\nu_e$ or $\bar{\nu_e}$ beam, depending on the stored ion species. The very low \textsl{Q} value for the decay means that the resulting neutrino beam will have a very small divergence, but it also means that the parent ions must be accelerated to high energies to produce neutrinos with even modest energies. The baseline Beta Beam concept involves accelerating the radioactive ions in the CERN SPS, which yields neutrino beams with energies of a few hundred MeV. The sensitivity of these Beta Beams to small values of $\theta_{13}$ appears to be comparable with the ultimate sensitivity of Superbeam experiments. Better performance might be achieved with higher energy Beta Beams, requiring the ions to be accelerated to at least TeV energies. This requires further study. 
This R\&D is currently being pursued in Europe, where the proponents hope that a Beta Beam facility, together with a Superbeam at CERN and a very massive water Cerenkov detector in the Fr\'{e}jus tunnel, would yield a very exciting neutrino program. In this report, we summarize the expected sensitivities of Neutrino Factory and Beta Beam neutrino oscillation experiments, and the status of the R\&D required before these exciting facilities could become a part of the neutrino community's global plan. Exploiting the enthusiastic involvement of the Muon Collaboration in the study, we also describe an updated Neutrino Factory design that demonstrates significant progress toward cost reduction for this ambitious facility. The report is organized as follows. Section~\ref{sec2} describes in some detail the Neutrino Factory and Beta Beam design concepts. In Section~\ref{sec3}, Neutrino Factory and Beta Beam properties are described and compared with conventional neutrino beams. The neutrino oscillation physics reach is presented in Section~\ref{sec4}. Progress on Neutrino Factory designs along with some comments on the possibility of a U.S.-based Beta Beam facility are discussed in Section~\ref{sec5}. The Neutrino Factory and Beta Beam R\&D programs are described in Section~\ref{sec6}. A summary is given in Section~\ref{sec7} and some recommendations are presented in Section~\ref{sec8}. Finally, in Appendix A a cost scaling with respect to the Feasibility Study-II cost numbers is presented.
\section{Machine Concepts}
In this Section we describe the basic concepts that are used to create a Neutrino Factory or a Beta Beam facility. Though the details of the two facilities are quite different, many of the required features have common origins. Both facilities are ``secondary beam'' machines, that is, a production beam is used to create the secondary beam that eventually provides the neutrino flux for the detector. For a Neutrino Factory, the production beam is a high intensity proton beam of moderate energy (beams of 2--50 GeV have been considered by various groups) that impinges on a target, typically a high-$Z$ material. The collisions between the proton beam and the target nuclei produce a secondary pion beam that quickly decays into a longer-lived (2.2 $\mu$s) muon beam. The remainder of the Neutrino Factory is used to condition the muon beam (see Section~\ref{neufact}), accelerate it rapidly to the desired final energy of a few tens of GeV, and store it in a decay ring having a long straight section oriented such that decay neutrinos produced there will hit a detector located thousands of kilometers from the source. A Beta Beam facility is one in which a pure electron neutrino $\left(\text{from }\beta^{+}\right)$ or antineutrino $\left(\text{from }\beta^{-}\right)$ beam is produced from the decay of beta unstable radioactive ions circulating in a storage ring. As was the case for the Neutrino Factory, current Beta Beam facility concepts are based on using a proton beam to hit a layered production target. In this case, nuclear reactions are used to produce secondary particles of a beta-unstable nuclide. The proposed approach uses either spallation neutrons from a high-$Z$ target material or the incident protons themselves to generate the required reactions in a low-$Z$ material. The nuclide of interest is then collected, ionized, accumulated, and accelerated to its final energy.
The process is relatively slow, but this is acceptable as the lifetimes of the required nuclides, of order 1~s, are sufficiently long. \subsection{Neutrino Factory} \label{neufact} The various components of a Neutrino Factory, based in part on the most recent Feasibility Study (Study-II, referred to herein as FS2)~\cite{fs2} that was carried out jointly by BNL and the U.S. \textit{Neutrino Factory and Muon Collider Collaboration}, are described briefly below. Details of the design discussed here are based on the specific scenario of sending a neutrino beam from BNL to a detector in Carlsbad, New Mexico. More generally, however, the design exemplifies a Neutrino Factory for which two Feasibility Studies~\cite{fs1,fs2} have demonstrated technical feasibility (provided the challenging component specifications are met), established a cost baseline, and established the expected range of physics performance. It is worth noting that the Neutrino Factory design we envision could fit comfortably on the site of an existing laboratory, such as BNL or FNAL. As part of the current Study, we have developed improved methods for accomplishing some of the needed beam manipulations. These improvements are included in the description below. The main ingredients of a Neutrino Factory include: \begin{itemize} \item{\textbf{Proton Driver:}} Provides 1--4~MW of protons on target from an upgraded AGS; a new booster at Fermilab would perform equivalently. \item{\textbf{Target and Capture:}} A high-power target immersed in a 20~T superconducting solenoidal field to capture pions produced in proton-nucleus interactions. The high magnetic field at the target is smoothly tapered down to a much lower value, 1.75~T, which is then maintained through the bunching and phase rotation sections of the Neutrino Factory. \item{\textbf{Bunching and Phase Rotation:}} We first accomplish the bunching with rf cavities of modest gradient, whose frequencies change as we proceed down the beam line. After bunching the beam, another set of rf cavities, with higher gradients and again having decreasing frequencies as we proceed down the beam line, is used to rotate the beam in longitudinal phase space to reduce its energy spread. \item{\textbf{Cooling:}} A solenoidal focusing channel, with high-gradient 201.25~MHz rf cavities and LiH absorbers, cools the transverse normalized rms emittance from 17~mm$\cdot$rad to about 7~mm$\cdot$rad. This takes place at a central muon momentum of 220~MeV/c. \item{\textbf{Acceleration:}} A superconducting linac with solenoidal focusing is used to raise the muon beam energy to 1.5~GeV, followed by a Recirculating Linear Accelerator (RLA), arranged in a ``dogbone'' geometry, to provide a 5~GeV muon beam. Thereafter, a pair of cascaded Fixed-Field, Alternating Gradient (FFAG) rings, having quadrupole triplet focusing, is used to reach 20~GeV. Additional FFAG stages could be added to reach a higher beam energy, if the physics requires this. \item{\textbf{Storage Ring:}} We employ a compact racetrack-shaped superconducting storage ring in which $\approx35$\% of the stored muons decay toward a detector located some 3000~km from the ring. Muons survive for roughly 500 turns. \end{itemize} \subsubsection{Proton Driver} The proton driver considered in FS2, and taken here as well, is an upgrade of the BNL Alternating Gradient Synchrotron (AGS) and uses most of the existing components and facilities; parameters are listed in Table~\ref{Proton:tb1}. 
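As a quick consistency check on the parameters listed in Table~\ref{Proton:tb1}, the tabulated average current and beam power follow directly from the protons per fill, the cycle time, and the beam energy. The short Python sketch below makes the arithmetic explicit (all inputs are taken from the table; the small difference from the tabulated 42~$\mu$A reflects rounding of the table entries):
\begin{verbatim}
# Consistency check of the AGS proton-driver parameters (Table "Proton:tb1").
# All inputs are taken from the table; this is an illustrative sketch only.
e = 1.602e-19              # proton charge [C]
protons_per_fill = 1e14    # protons per fill
cycle_time = 0.4           # cycle time [s], i.e., 2.5 Hz repetition rate
energy_GeV = 24.0          # beam energy [GeV]

rep_rate = 1.0 / cycle_time                     # 2.5 Hz
avg_current = protons_per_fill * e * rep_rate   # ~4.0e-5 A, i.e., ~40 uA
beam_power = avg_current * energy_GeV * 1e9     # ~0.96e6 W, i.e., ~1 MW

print("current ~ %.0f uA, power ~ %.2f MW"
      % (avg_current * 1e6, beam_power / 1e6))
\end{verbatim}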
To serve as the proton driver for a Neutrino Factory, the existing booster would be replaced by a 1.2~GeV superconducting proton linac. The modified layout is shown in Fig.~\ref{Proton:bnl}. \begin{figure}[tbh] \includegraphics[width=5.5in]{sec2-Proton_driver_bnl} \caption{(Color) AGS proton driver layout.}% \label{Proton:bnl}% \end{figure} The AGS repetition rate would be increased from 0.5~Hz to 2.5~Hz by adding power supplies to permit ramping the ring more quickly. No new technology is required for this---the existing supplies would be replicated and the magnet strings would be split into six sectors rather than the two used presently. The total proton charge ($10^{14}$ ppp in six bunches) is only 40\% higher than the current performance of the AGS. However, the bunches required for a Neutrino Factory are shorter than those used in the AGS at present, so there is a large increase in peak current and concomitant need for an improved vacuum chamber; this is included in the upgrade. The six bunches are extracted separately, spaced by 20~ms, so that the target and rf systems that follow need only deal with single bunches at an instantaneous repetition rate of 50~Hz (average rate of 15~Hz). The average proton beam power is 1~MW. A possible future upgrade to $2\times10^{14}$~ppp and 5~Hz could give an average beam power of 4~MW. At this higher intensity, a superconducting bunch compressor ring would be needed to maintain the rms bunch length at 3~ns. If the facility were built at Fermilab, the proton driver would be newly constructed. A number of technical options are presently being explored~\cite{fnal1},\cite{fnal2}. \begin{table}[tbh] \caption{Proton driver parameters for BNL design.}% \label{Proton:tb1} \begin{ruledtabular} \begin{tabular}[c]{lc} \multicolumn{2}{c}{AGS}\\% \hline Total beam power (MW) & 1\\ Beam energy (GeV) & 24\\ Average beam current ($\mu$A) & 42\\ Cycle time (ms) & 400\\ Number of protons per fill & $1\times10^{14}$\\ Average circulating current (A) & 6\\ No. of bunches per fill & 6\\ No. of protons per bunch & $1.7\times10^{13}$\\ Time between extracted bunches (ms) & 20\\ Bunch length at extraction, rms (ns) & 3\\ \end{tabular} \end{ruledtabular} \end{table} \subsubsection{Target and Capture} A mercury-jet target is chosen to give a high yield of pions per MW of incident proton power. The 1-cm-diameter jet is continuous, and is tilted with respect to the beam axis. The target layout is shown in Fig.~\ref{tgtc}. \begin{figure}[tbh] \includegraphics*{sec2-study2-target} \caption{(Color) Target, capture solenoids and mercury containment.}% \label{tgtc}% \end{figure} We assume that the thermal shock from the interacting proton bunch fully disperses the mercury, so the jet must have a velocity of 20--30 m/s to allow the target material to be renewed before the next proton bunch arrives. Calculations of pion yields that reflect the detailed magnetic geometry of the target area have been performed with the MARS code~\cite{mars1} and are reported in Section~\ref{sec5}. The FS2 design was updated for the present study to improve muon throughput. To avoid mechanical fatigue problems, a mercury pool serves as the beam dump. This pool is part of the overall target system---its mercury is circulated through the mercury jet nozzle after passing through a heat exchanger. Pions emerging from the target are captured and focused down the decay channel by a solenoidal field that is 20~T at the target center, and tapers down, over 12~m, to 1.75~T. 
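The role of the taper can be illustrated with a minimal sketch that assumes conservation of the usual adiabatic invariant $p_T^2/B$ along the slowly varying field (the pion transverse momentum used below is illustrative, not a design value):
\begin{verbatim}
import math

# Adiabatic solenoid taper: if pT^2/B is conserved, transverse momenta
# shrink, and the beam radius grows, by sqrt(B0/B1) from 20 T to 1.75 T.
B0, B1 = 20.0, 1.75            # field at target and at taper exit [T]
scale = math.sqrt(B0 / B1)     # ~3.4

pT_in = 200.0                  # illustrative captured-pion pT [MeV/c]
pT_out = pT_in / scale         # ~59 MeV/c after the taper
print("pT: %.0f -> %.0f MeV/c; radius grows by ~%.1fx"
      % (pT_in, pT_out, scale))
\end{verbatim}
This exchange of transverse momentum for beam size is what allows the downstream channel to operate at the much lower 1.75~T field.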
The 20~T solenoid, with a resistive magnet insert and superconducting outer coil, is similar in character to the higher-field (up to 45~T) but smaller-bore magnets existing at several laboratories~\cite{ITERmag}. The magnet insert is made with hollow copper conductor having ceramic insulation to withstand radiation. MARS simulations~\cite{mars2} of radiation levels show that, with the shielding provided, both the copper and superconducting magnets will have reasonable lifetime. \subsubsection{Buncher and Phase Rotation} Pions, and the muons into which they decay, are generated in the target over a very wide range of energies, but in a short time pulse ($\approx3$~ns rms). To prepare the muon beam for acceleration thus requires significant ``conditioning.'' First, the bunch is drifted to develop an energy correlation, with higher energy particles at the head and lower energy particles at the tail of the bunch. Next, the long bunch is separated into a number of shorter bunches suitable for capture and acceleration in a 201-MHz rf system. This is done with a series of rf cavities having frequencies that decrease along the beam line, separated by suitably chosen drift spaces. The resultant bunch train still has a substantial energy correlation, with the higher energy bunches first and progressively lower energy bunches coming behind. The large energy tilt is then ``phase rotated'', using additional rf cavities and drifts, into a bunch train with a longer time duration and a lower energy spread. The beam at the end of the buncher and phase rotation section has an average momentum of about 220~MeV/c. The proposed system is based on standard rf technology, and is expected to be much more cost effective than the induction-linac-based system considered in Ref.~\cite{fs2}. A fringe benefit of the rf-based system is the ability to transport both signs of muon simultaneously. \subsubsection{Cooling} Transverse emittance cooling is achieved by lowering the beam energy in LiH absorbers, interspersed with rf acceleration to keep the average energy constant. Both transverse and longitudinal momenta are lowered in the absorbers, but only the longitudinal momentum is restored by the rf system. The emittance increase from Coulomb scattering is controlled by maintaining the focusing strength such that the angular spread of the beam at the absorber locations is reasonably large. In the present cooling lattice, the energy absorbers are attached directly to the apertures of the rf cavities, thus serving the dual purposes of closing the cavity apertures electromagnetically (increasing the cavity shunt impedance) and providing energy loss. Compared with the approach used in FS2, the absorbers are more distributed, and do not lend themselves to being located at an optical focus. Therefore, the focusing is kept essentially constant along the cooling channel, but at a beta function somewhat higher than the minimum value achieved in FS2. A straightforward Focus-Focus (FOFO) lattice is employed. The solenoidal fields in each half-cell alternate in sign, giving rise to a sinusoidal field variation along the channel. Use of solid absorbers instead of the liquid-hydrogen absorbers assumed in FS2 will considerably simplify the cooling channel, and the new magnet requirements are also more modest, since fewer and weaker components are needed compared with FS2. Together, these features reduce the cost of the cooling channel with respect to the FS2 design.
Although the cooling performance is reduced, the overall throughput is comparable to that in FS2 due to the increased acceptance built into the downstream acceleration system. Here, too, the ability to utilize both signs of muon is available. \subsubsection{Acceleration} Parameters of the acceleration system are listed in Table~\ref{tab:acc:parm}. A matching section, using normal conducting rf systems, matches the cooling channel optics to the requirements of a superconducting rf linac with solenoidal focusing which raises the energy to 1.5~GeV. The linac is in three parts (see Section~\ref{sec5-sub2}). The first part has only a single-cell 201~MHz cavity per period. The second part, with longer period, has a 2-cell rf cavity unit per period. The third part, as a still longer period becomes possible, accommodates two 2-cell cavity units per period. Figure~\ref{fig:acc:cryomod} shows the three cryomodule types that make up the pre-accelerator linac. \begin{figure}[tbhp!] \includegraphics{sec2-pict5v4.eps} \caption{(Color) Layouts of superconducting linac pre-accelerator cryomodules. Blue lines are the SC walls of the cavities and solenoid coils are indicated in red. The dimensions of the cryomodules are shown in Table~\ref{tab:acc:cryo}, and Table~\ref{tab:acc:linac} summarizes parameters for the linac.}% \label{fig:acc:cryomod}% \end{figure} This linac is followed by a 3.5-pass \textsl{dogbone} RLA (see Fig.~\ref{fig:acc:rlalinac}) that raises the energy from 1.5 to 5~GeV. The RLA uses four 2-cell superconducting rf cavity structures per cell, and utilizes quadrupole triplet (as opposed to solenoidal) focusing. \begin{figure}[tbh!] \includegraphics{sec2-Dogbone-only} \caption{(Color) Layout of the RLA.}% \label{fig:acc:rlalinac}% \end{figure} Following the RLA are two cascaded FFAG rings that increase the beam energy from 5 to 10~GeV and from 10 to 20~GeV, respectively. Each ring uses combined-function magnets arranged in a triplet (F-D-F) focusing arrangement. The lower energy FFAG ring has a circumference of about 400~m; the higher energy ring is about 500~m in circumference. As discussed in Section~\ref{sec5-sub2}, an effort was made to achieve a reasonably cost-optimized design. Without detailed engineering, it is not possible to fully optimize costs, but we have employed general formulae that properly represent the cost trends and that were considered adequate to make choices at the present stage of the design. As the acceleration system was one of the dominant cost items in FS2, we are confident that the approach adopted here will result in a less expensive Neutrino Factory facility with essentially the same performance as calculated for the FS2 design. Achieving a higher beam energy would require additional FFAG acceleration stages.
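As a cross-check, the injection momentum and kinetic energy quoted in Table~\ref{tab:acc:parm} below are related by relativistic kinematics; the following sketch, with the muon mass as the only additional input, reproduces the tabulated values:
\begin{verbatim}
import math

# Relativistic kinematics check of the injection values in Table "tab:acc:parm".
m_mu = 105.658     # muon mass [MeV/c^2]
p_inj = 273.0      # injection momentum [MeV/c], from the table

E_tot = math.sqrt(p_inj**2 + m_mu**2)   # total energy, ~292.7 MeV
T_kin = E_tot - m_mu                    # ~187 MeV, matching the table
gamma_final = 20.0e3 / m_mu             # ~189 at the 20 GeV final energy

print("injection KE ~ %.0f MeV, final gamma ~ %.0f" % (T_kin, gamma_final))
\end{verbatim}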
\begin{table}[tbh] \caption{Main parameters of the muon accelerator driver.}% \label{tab:acc:parm} \begin{ruledtabular} \begin{tabular}[c]{lc} Injection momentum (MeV/c) & 273\\ Injection kinetic energy (MeV) & 187\\ Final total energy (GeV) & 20\\ Initial normalized acceptance (mm-rad) & 30\\ \quad rms normalized emittance (mm-rad) & 3.84\\ Initial longitudinal acceptance, $\Delta pL_{b}/m_{\mu}c$ (mm) & 150\\ \quad Total energy spread, $\Delta E$ (MeV)& $\pm45.8$\\ \quad Total time-of-flight (ns) & $\pm1.16$\\ \quad rms energy spread (MeV) &19.8\\ \quad rms time-of-flight (ns) &0.501 \\ Number of bunches per pulse & 89\\ Peak number of particles per bunch & $1.1\times10^{11}$\\ Number of particles per pulse (per charge) & $3\times10^{12}$\\ Bunch frequency\textbf{/}accelerating frequency (MHz) & 201.25\textbf{/}201.25\\ Average beam power (per charge) (kW) & 144\\ \end{tabular} \end{ruledtabular} \end{table} \subsubsection{Storage Ring} After acceleration in the final FFAG ring, the muons are injected into the upward-going straight section of a racetrack-shaped storage ring with a circumference of 358~m. Parameters of the ring are summarized in Table~\ref{SRING:tb}. High-field superconducting arc magnets are used to minimize the arc length and maximize the fraction (35\%) of muons that decay in the downward-going straight, generating neutrinos headed toward the detector located some 3000~km away. All muons are allowed to decay; the maximum heat load from their decay electrons is 42~kW (126~W/m). This load is too high to be dissipated in the superconducting coils. For FS2, a magnet design was chosen that allows the majority of these electrons to exit between separate upper and lower cryostats, and be dissipated in a dump at room temperature. To maintain the vertical cryostat separation in focusing elements, skew quadrupoles are employed in place of standard quadrupoles. In order to maximize the average bending field, Nb$_{3}$Sn pancake coils are employed. One coil of the bending magnet is extended and used as one half of the previous (or following) skew quadrupole to minimize unused space. For site-specific reasons, the ring is kept above the local water table and is placed on a roughly 30-m-high berm. This requirement places a premium on a compact storage ring. In the present study, no attempt was made to revisit the design of the FS2 storage ring. For further technical details on this component, see FS2, Ref.~\cite{fs2}. \begin{table}[tb] \caption{Muon storage ring parameters.}% \label{SRING:tb} \begin{ruledtabular} \begin{tabular}[c]{ll} Energy (GeV) & 20\\ Circumference (m) & 358.18\\ Normalized transverse acceptance (mm-rad) & 30\\ Energy acceptance (\%) & 2.2\\ \hline \multicolumn{2}{c}{Arc}\\ \hline Length (m) & 53.09\\ No. cells per arc & 10\\ Cell length (m) & 5.3\\ Phase advance ($\deg$) & 60\\ Dipole length (m) & 1.89\\ Dipole field (T) & 6.93\\ Skew quadrupole length (m) & 0.76\\ Skew quadrupole gradient (T/m) & 35\\ $\beta_{\text{max}}$ (m) & 8.6\\ \hline \multicolumn{2}{c}{Production Straight}\\ \hline Length (m) & 126\\ $\beta_{\text{max}}$ (m) & 200\\ \end{tabular} \end{ruledtabular} \end{table} The footprint of a Neutrino Factory is reasonably small, and such a machine would fit easily on the site of an existing laboratory. \subsection{Beta Beam Facility} The idea of a Beta Beam facility was first proposed by P. Zucchelli in 2002~\cite{zucchelli}. As the name suggests, it employs beams of beta-unstable nuclides. 
By accelerating these ions to high energy and storing them in a decay ring (analogous to that used for a muon-based Neutrino Factory) a very pure beam of electron neutrinos (or antineutrinos) can be produced. As the kinematics of the beta decay is well understood, the energy distribution of the neutrinos can be predicted to a very high accuracy. Furthermore, as the energy of the beta decay is low compared with that for muon decay, the resulting neutrino beam has a small divergence. For low-$Z$ beta-unstable nuclides, typical decay times are measured in seconds. Thus, there is not so high a premium on rapid acceleration as is true for a Neutrino Factory, and conventional (or even existing) accelerators could be used for acceleration in a Beta Beam facility. Two ion species, both having lifetimes on the order of 1 s, have been identified as optimal candidates: $^{6}$He for producing antineutrinos and $^{18}$Ne for neutrinos. Following the initial proposal, a study group was formed at CERN to investigate the feasibility of the idea, and, in particular, to evaluate the possibility of using existing CERN machines to accelerate the radioactive ions. Their study took an energy of $\gamma=150$ for $^{6}$He, which corresponds to the top energy of the SPS for this species and also matches the distance to the proposed neutrino laboratory in the Fr\'{e}jus tunnel rather well. In the spring of 2003, a European collaboration, the Beta Beam Study Group, was formed. Eventually, they obtained funding from the European Union to produce a conceptual design study. Here, we take our information from recent presentations made by members of this group~\cite{BetaBeamWGpage}. The EU Beta Beam Study Group has undertaken the study of a Beta Beam facility with the goal of presenting a coherent and realistic scenario for such a device. Their present ``boundary conditions'' are to re-use a maximum of existing (CERN) infrastructure and to base the design on known technology---or reasonable extrapolations thereof. In this sense, the approach taken is similar to that of the Neutrino Factory feasibility studies. For practical reasons, the Beta Beam study was included in the larger context of the EURISOL study, due to the large synergies between the two at the low energy end. (The EURISOL study aims to build a next-generation facility for on-line production of radioactive isotopes, including those needed for the Beta Beam facility.) The basic ingredients of a Beta Beam facility are: \begin{itemize} \item{\textbf{Proton Driver:}} A 2.2\textbf{\ }GeV proton beam from the proposed Super Proton Linac (SPL) at CERN would be used to initiate the nuclear reactions that ultimately generate the required beta-unstable nuclides ($^{6}$He is used as the antineutrino source and $^{18}$Ne is used as the neutrino source). \item{\textbf{ISOL Target and Ion Source:}} The target system is patterned after that of the EURISOL facility~\cite{EURISOLref}. For $^{6}$He production, the target core would be a heavy metal (mercury~\cite{helge}) that converts the incoming proton beam into a neutron flux. Surrounding the core is a cylinder of BeO~\cite{koster} that produces $^{6}$He via the $^{9}$Be(n,$\alpha$) reaction. $^{18}$Ne would be produced via direct proton spallation on a MgO target~\cite{ravn2}. The nuclides of interest will be extracted from the target as neutral species, and so must be ionized to produce the beam to be accelerated. 
The proposed ion source technology, shown in Fig.~\ref{ISOLtargetFig}, is based on a pulsed ``ECR-duoplasmatron.'' \begin{figure} \includegraphics[width=3.5in,angle=-90]{sec2-Moriond-Sortais-1} \caption{(Color) Proposed ion source system for production of $^{6}$He beam.} \label{ISOLtargetFig} \end{figure} \item{\textbf{Acceleration:}} Low energy acceleration would make use of a linac to accelerate the nuclide of interest to 20--100~MeV/u, followed by a Rapid Cycling Synchrotron (RCS) with multi-turn injection that would accelerate the ion beam to 300~MeV/u. This system would feed the CERN PS with 16~bunches (2.5 $\times$ 10$^{12}$ ions per bunch), which would be merged to 8 bunches during the acceleration cycle to $\gamma=9$. Finally, the bunches would be transferred to the SPS and accelerated to $\gamma\thickapprox150$, which corresponds to the maximum magnetic rigidity of that accelerator. \item{\textbf{Decay Ring:}} The racetrack decay ring would have the same circumference as the SPS (6880 m), with a long straight section, some 2500~m, aimed at the detector. At the final energy, the lifetime of the beam becomes minutes rather than seconds. Stacking is required to load the ring with enough ions to get an acceptable neutrino flux. \end{itemize} The parallels with the Neutrino Factory are obvious. The main difference between the two types of facility is in the initial capture and beam preparation. In the Neutrino Factory, the beam must be bunched, phase rotated, and ionization cooled. In the Beta Beam facility, the beam must be collected, ionized, and bunched. \subsubsection{Proton Driver} The proposed proton driver for the Beta Beam facility is the SPL, a 2.2~GeV Superconducting Proton Linac~\cite{SPLref}\ presently being designed at CERN to serve both the LHC and the EURISOL facility. The machine will operate at 50 Hz and will be designed to provide up to 4~MW of proton beam power. The present scenario is illustrated in Fig.~\ref{fig:SPLlayout}. It is anticipated that the ISOL target will require only about 5\% of the proton beam power, i.e., about 200~kW. % \begin{figure}[hptb!] \includegraphics{sec2-spl_layout_02_04}% \caption{(Color) Baseline layout of the SPL facility at CERN.}% \label{fig:SPLlayout}% \end{figure} \subsubsection{ISOL Target and Ion Source} As noted earlier, the target for $^{6}$He production will use a heavy-metal core (mercury or, alternatively, water-cooled tungsten or liquid lead) to serve as a proton-to-neutron converter. Surrounding this core will be a cylinder of BeO, as shown in Fig.~\ref{ISOLtgtPict}. In the case of $^{18}$Ne, a more straightforward approach will suffice. The proton beam will impinge directly on a MgO target, producing the required nuclide via spallation. An ion source capable of producing the required intense pulses is proposed; development work on this device (see Fig.~\ref{ISOLtargetFig}) is under way at Grenoble~\cite{sortais}. The device uses a very high density plasma ($n_{e}\thicksim10^{14}$ cm$^{-3}$) in a 2--3~T solenoidal field and operates at 60--90~GHz. It is expected to provide pulses of 10$^{12}$--10$^{13}$ ions per bunch.% \begin{figure}[hptb!] \includegraphics[width=3.6391in]{sec2-ISOL_target}% \caption{(Color) Proposed ISOL-type target for production of $^{6}$He beam.}% \label{ISOLtgtPict}% \end{figure} \subsubsection{Acceleration} The proposed acceleration scheme is based on the existing CERN machines (PS and SPS). Initial acceleration would be via a linac, followed by a rapid cycling synchrotron that would be filled by multiturn injection.
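The magnetic-rigidity bookkeeping behind the quoted $\gamma$ values is straightforward: at fixed rigidity $B\rho = p/(0.2998\,Z)$ and for $\beta\simeq1$, the achievable $\gamma$ scales as $Z/A$. The sketch below, in which the approximate ion masses are the only assumptions, checks that $\gamma\approx150$ for $^{6}$He indeed sits near the SPS rigidity limit and reproduces the $\gamma_{Ne}=250$ required for equal-rigidity storage of both species (see the Decay Ring discussion below):
\begin{verbatim}
# Rigidity bookkeeping for the Beta Beam ions (sketch; masses approximate).
m_u = 0.9315                     # atomic mass unit [GeV/c^2]
Z_He, A_He = 2, 6                # 6He (fully stripped)
Z_Ne, A_Ne = 10, 18              # 18Ne (fully stripped)

gamma_He = 150.0                          # quoted SPS value for 6He
p_He = gamma_He * A_He * m_u              # ~838 GeV/c
Brho_He = p_He / (0.2998 * Z_He)          # ~1400 T.m
Brho_SPS = 450.0 / 0.2998                 # ~1500 T.m (450 GeV protons)

# At equal rigidity, p/Z is fixed, so gamma scales as Z/A:
gamma_Ne = gamma_He * (Z_Ne / A_Ne) / (Z_He / A_He)   # = 250
print("B*rho(6He) ~ %.0f T.m (SPS max ~ %.0f T.m); gamma(18Ne) = %.0f"
      % (Brho_He, Brho_SPS, gamma_Ne))
\end{verbatim}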
The RCS would provide a single bunch, 150~ns long, at 300~MeV/u. The PS is a relatively slow machine, and this results in substantial radiation levels due to decays while the beam energy, and hence the lifetime, is low. A rapid-cycling PS replacement would be of considerable benefit in this regard, though it is not part of the baseline scenario. Another idea that merits consideration is the use of an FFAG, which perhaps could be used to accelerate muons at a later time. The SPS space-charge limit at injection is another issue to deal with, and managing it will likely require a deliberate transverse emittance blowup. A new 40~MHz rf system will be added to the existing 200~MHz system in the SPS to accelerate the beam to $\gamma=150$. \subsubsection{Decay Ring} The beam is transferred at full energy to a racetrack-shaped Decay Ring having the same circumference as the SPS. The length of the decay straight section (the one aimed at the detector) is chosen to permit about 35\% of the decays to occur there. At full energy, the lifetime is minutes rather than seconds. This allows---and also \textit{requires}---the beam to be stacked in the Decay Ring to provide the required decay intensity. The proposed stacking technique, asymmetric bunch merging, is based on somewhat complicated rf gymnastics, but has already been demonstrated experimentally~\cite{BunchMergeRef}. An interesting possibility that has arisen only recently is the idea of storing both $^{6}$He and $^{18}$Ne in the ring simultaneously. This requires that the neon beam have the same rigidity as the helium beam, which corresponds to $\gamma_{Ne}=250$. For a detector at Fr\'{e}jus, the optimum energies~\cite{mezzetto} are $\gamma_{He}=60$ and $\gamma_{Ne}=100$. \section{Beam Properties} The most important neutrino oscillation physics questions that we wish to address in the coming decades require the study of $\nu_e \leftrightarrow \nu_\mu$ transitions in long baseline experiments. Conventional neutrino beams are almost pure $\nu_\mu$ beams, which therefore permit the study of $\nu_\mu \to \nu_e$ oscillations. The experiments must look for $\nu_e$ CC interactions in a distant detector. Backgrounds that fake $\nu_e$ CC interactions, together with a small $\nu_e$ component in the initial beam, account for $O(1\%)$ of the total interaction rate. This makes it difficult for experiments using conventional neutrino beams to probe very small oscillation amplitudes, below the 0.01--0.001 range. This limitation motivates new types of neutrino facilities that provide $\nu_e$ beams, permitting the search for $\nu_e \to \nu_\mu$ oscillations, and if the beam energy is above the $\nu_\tau$ CC interaction threshold, the search for $\nu_e \to \nu_\tau$ oscillations. Neutrino Factory and Beta Beam facilities both provide $\nu_e$ (and $\bar{\nu}_e$) beams, but with somewhat different beam properties. We will begin by describing Neutrino Factory beams, and then describe Beta Beam facility beams. \subsection{Neutrino Factory Beams} Neutrino Factory beams are produced from muons decaying in a storage ring with long straight sections. Consider an ensemble of polarized negatively-charged muons.
When the muons decay they produce muon neutrinos with a distribution of energies and angles in the muon rest--frame described by~\cite{gaisser}: \begin{eqnarray} \frac{d^2N_{\nu_\mu}}{dxd\Omega_{c.m.}} &\propto& \frac{2x^2}{4\pi} \left[ (3-2x) + (1-2x) P_\mu \cos\theta_{c.m.} \right] \, , \label{eq:n_numu} \end{eqnarray} where $x\equiv 2E_\nu/m_\mu$, $\theta_{c.m.}$ is the angle between the neutrino momentum vector and the muon spin direction, and $P_\mu$ is the average muon polarization along the beam direction. The electron antineutrino distribution is given by: \begin{eqnarray} \frac{d^2N_{\bar\nu_e}}{dxd\Omega_{c.m.}} &\propto& \frac{12x^2}{4\pi} \left[ (1-x) + (1-x) P_\mu\cos\theta_{c.m.} \right] \, , \label{eq:n_nue} \end{eqnarray} and the corresponding distributions for $\bar\nu_\mu$ and $\nu_e$ from $\mu^+$ decay are obtained by the replacement $P_{\mu} \to -P_{\mu}$. Only neutrinos and antineutrinos emitted in the forward direction ($\cos\theta_{lab}\simeq1$) are relevant to the neutrino flux for long-baseline experiments; in this limit $E_\nu = x E_{max}$ and at high energies the maximum $E_\nu$ in the laboratory frame is given by $E_{max} = \gamma (1 + \beta \cos\theta_{c.m.})m_{\mu}/2 $, where $\beta$ and $\gamma$ are the usual relativistic factors. The $\nu_\mu$ and $\bar{\nu}_{e}$ distributions as a function of the laboratory frame variables are then given by: \begin{eqnarray} \frac{d^2N_{\nu_{\mu}}}{dxd\Omega_{lab}} &\propto& \frac{1}{\gamma^2 (1- \beta\cos\theta_{lab})^2}\frac{2x^2}{4\pi} \left[ (3-2x) + (1-2x)P_{\mu}\cos\theta_{c.m.} \right] , \label{eq:numu} \end{eqnarray} and \begin{eqnarray} \frac{d^2N_{\bar{\nu}_{e}}}{dxd\Omega_{lab}} &\propto& \frac{1}{\gamma^2 (1- \beta\cos\theta_{lab})^2}\frac{12x^2}{4\pi} \left[ (1-x) + (1-x)P_{\mu}\cos\theta_{c.m.} \right] \; . \label{eq:nue} \end{eqnarray} Thus, for a high energy muon beam with no beam divergence, the neutrino and antineutrino energy and angular distributions depend upon the parent muon energy, the decay angle, and the direction of the muon spin vector. With the muon beam intensities that could be provided by a muon--collider type muon source~\cite{status_report} the resulting neutrino fluxes at a distant site would be large. For example, Fig.~\ref{fluxes} shows as a function of muon energy and polarization, the computed fluxes per $2\times 10^{20}$ muon decays at a site on the other side of the Earth ($L = 10000$~km). Note that the $\nu_e$ ($\bar{\nu}_e$) fluxes are suppressed when the muons have $P = +1\, (-1).$ This can be understood by examining Eq.~(\ref{eq:nue}) and noting that for $P = -1$ the two terms cancel in the forward direction for all $x$. \begin{figure}[hbtp!] \includegraphics[width=4in]{sec3-s2_fluxes_fig} \caption{Calculated $\nu$ and $\bar{\nu}$ fluxes in the absence of oscillations at a far site located 10000 km from a Neutrino Factory in which $2 \times 10^{20}$ muons have decayed in the storage ring straight section pointing at the detector. The fluxes are shown as a function of the energy of the stored muons for negative muons (top two plots) and positive muons (bottom two plots), and for three muon polarizations as indicated. The calculated fluxes are averaged over a circular area of radius 1~km at the far site. Calculation from Ref.~\cite{geer98}.} \label{fluxes} \end{figure} At low energies, the neutrino CC interaction cross section is dominated by quasi-elastic scattering and resonance production. 
However, if $E_\nu$ is greater than $\sim10$~GeV, the total cross section is dominated by deep inelastic scattering and is approximately~\cite{CCFRsigma}: \begin{eqnarray} \sigma(\nu +N \rightarrow \ell^- + X) &\approx& 0.67\times 10^{-38} \; \times E_{\nu}(\hbox{\rm GeV})\, \hbox{\rm cm}^2\, , \\ \sigma(\overline{\nu} +N \rightarrow \ell^+ + X) &\approx& 0.34\times10^{-38} \; \times E_{\overline{\nu}}(\hbox{\rm GeV})\, \hbox{\rm cm}^2 \; . \end{eqnarray} The number of $\nu$ and $\bar{\nu}$ CC events per incident neutrino observed in an isoscalar target is given by: \begin{eqnarray} N(\nu +N \rightarrow \ell^- + X) &=& 4.0 \times 10^{-15}\times E_{\nu}(\hbox{\rm GeV}) \;\text{events per}\; \text{g/cm}^2, \\ N(\overline{\nu} +N \rightarrow \ell^+ + X) &=& 2.0 \times 10^{-15}\times E_{\overline{\nu}}(\hbox{\rm GeV}) \;\text{events per}\; \text{g/cm}^2. \end{eqnarray} Using this simple form for the energy dependence of the cross section, the predicted energy distributions for $\nu_e$ and $\nu_\mu$ interacting in a far detector $(\cos\theta = 1)$ at a Neutrino Factory are shown in Fig.~\ref{polarization}. The interacting $\nu_\mu$ energy distribution is compared in Fig.~\ref{minos_wbb} with the corresponding distribution arising from the high--energy NUMI~\cite{numi} wide-band beam. Note that neutrino beams from a Neutrino Factory have no high energy tail, and in that sense can be considered narrow-band beams. \begin{figure}[hbtp!] \includegraphics[width=4in]{sec3-s2_polarization} \caption{Charged current event spectra at a far detector. The solid lines indicate zero polarization, the dotted lines indicate polarization of $\pm 0.3$ and the dashed lines indicate full polarization. The $P=1$ case for electron neutrinos results in no events and is hidden by the $x$ axis.} \label{polarization} \end{figure} \begin{figure}[hbtp!] \mbox{ \includegraphics*[width=3.in]{sec3-s2_minos_wbb} } \mbox{ \includegraphics*[width=3.in]{sec3-s2_minos_wbb_2900km} } \caption{(Color) Comparison of interacting $\nu_\mu$ energy distributions for the NUMI high energy wide-band beam (Ref.~\cite{numi}) with a 20~GeV Neutrino Factory beam (Ref.~\cite{geer98}) at $L = 730$~km and a 30~GeV Neutrino Factory beam at $L = 2900$~km. The Neutrino Factory distributions have been calculated based on Eq.~(\ref{eq:n_numu}) (no approximations), and include realistic muon beam divergences and energy spreads. } \label{minos_wbb} \end{figure} \begin{figure}[hbtp!] \includegraphics*[width=3.5in]{sec3-s2_elept} \caption{Lepton energy spectra for CC $\bar{\nu}_\mu$ (top left), $\nu_\mu$ (top right), $\nu_e$ (bottom left), and $\bar{\nu}_e$ (bottom right) interactions. Note that $z$ is the energy normalized to the primary muon energy $z = E_{\ell}/E_\mu$. Calculation from Ref.~\cite{bgw99}.} \label{fig:elept} \end{figure} In practice, CC interactions can only be cleanly identified when the final state lepton exceeds a threshold energy. The calculated final state lepton distributions are shown in Fig.~\ref{fig:elept}. 
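The numerical coefficients in the event-yield expressions above are simply Avogadro's number times the cross-section slopes: for an (isoscalar) target the mass number cancels, so one gram of material presents $N_A$ nucleons. A minimal sketch:
\begin{verbatim}
# Converting the CC cross sections above into event yields per g/cm^2.
N_A = 6.022e23            # nucleons per gram of (isoscalar) target material
sigma_nu = 0.67e-38       # nu CC cross section [cm^2 per GeV]
sigma_nubar = 0.34e-38    # nubar CC cross section [cm^2 per GeV]

print("nu:    %.1e events per g/cm^2 per GeV" % (N_A * sigma_nu))     # ~4.0e-15
print("nubar: %.1e events per g/cm^2 per GeV" % (N_A * sigma_nubar))  # ~2.0e-15
\end{verbatim}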
Integrating over the energy distribution, the total $\nu$ and $\bar{\nu}$ interaction rates per muon decay are given by: \begin{eqnarray} N_\nu &=& 1.2 \times 10^{-14} \; \biggl[\frac{E_{\mu}^3(\hbox{\rm GeV})}{L^2(\hbox{\rm km})}\biggr] \times C(\nu) \;\; \hbox{events per kton} \end{eqnarray} and \begin{eqnarray} N_{\bar{\nu}}&=&0.6\times10^{-14} \; \biggl[\frac{E_{\mu}^3(\hbox{\rm GeV})}{L^2(\hbox{\rm km})}\biggr] \times C(\nu) \;\; \hbox{events per kton} \, , \end{eqnarray} where \begin{equation} C(\nu_{\mu})= \frac{7}{10} + P_{\mu} \frac{3}{10}\quad , \quad C(\nu_{e})=\frac{6}{10} - P_{\mu} \frac{6}{10}. \end{equation} The calculated $\nu_e$ and $\nu_\mu$ CC interaction rates resulting from $10^{20}$ muon decays in the storage ring straight section of a Neutrino Factory are compared in Table~\ref{table:rates} with expectations for the corresponding rates at the next generation of accelerator--based neutrino experiments. Note that event rates at a Neutrino Factory increase as $E_\mu^3$, and are significantly larger than expected for the next generation of approved experiments if $E_\mu > 20$~GeV. The radial dependence of the event rate is shown in Fig.~\ref{fig:radial} for a 20~GeV Neutrino Factory and three baselines. \begin{table}[hbtp!] \caption{\label{table:rates} Muon neutrino and electron antineutrino CC interaction rates in the absence of oscillations, calculated for MINOS using the wide-band beam at a baseline of $L = 732$~km (FNAL $\to$ Soudan), and for muon storage rings delivering $10^{20}$ decays with $E_\mu=10, 20$, and $50$~GeV at baselines of 732, 2900, and 7300~km. The Neutrino Factory calculation includes a realistic muon beam divergence and energy spread.} \begin{ruledtabular} \begin{tabular}{c|cc|cc|cc} Experiment& &Baseline & $\langle E_{\nu_\mu} \rangle$ & $\langle E_{\bar \nu_e} \rangle$& N($\nu_\mu$ CC) & N($\bar\nu_e$ CC) \\ & &(km) & (GeV) & (GeV) & (per kton--yr) & (per kton--yr) \\ \hline MINOS& Low energy &732& 3 & -- & 458 & 1.3 \\ & Medium energy &732& 6 & -- & 1439 & 0.9 \\ & High energy &732& 12 & -- & 3207 & 0.9 \\ \hline Muon ring & $E_\mu$ (GeV) & & & & & \\ \hline & 10 &732& 7.5 & 6.6 & 1400 & 620 \\ & 20 &732& 15 & 13 & 12000 & 5000\\ & 50 &732& 38 & 33 & 1.8$\times$10$^5$ & 7.7$\times$10$^4$ \\ \hline Muon ring& $E_\mu$ (GeV)& & & & & \\ \hline & 10 &2900& 7.6 & 6.5 & 91 &41\\ & 20 &2900& 15 & 13 & 740 & 330\\ & 50 &2900& 38 & 33 & 11000& 4900 \\ \hline Muon ring& $E_\mu$ (GeV)& & & & & \\ \hline & 10 &7300& 7.5 & 6.4 & 14 & 6 \\ & 20 &7300& 15 & 13 & 110 & 51 \\ & 50 &7300& 38 & 33 & 1900 & 770 \\ \end{tabular} \end{ruledtabular} \end{table} \begin{figure}[hbtp!] \includegraphics[width=3in]{sec3-s2_20gev} \caption{(Color) Events per kton of detector as a function of distance from the beam center for a 20 GeV muon beam.} \label{fig:radial} \end{figure} \begin{table}[bhtp!] \caption{Dependence of predicted charged current event rates on muon beam properties at a Neutrino Factory. The last column lists the required precisions with which each beam property must be determined if the uncertainty on the neutrino flux at the far site is to be less than $\sim1$\%. Here $\Delta$ denotes uncertainty while $\sigma$ denotes the spread in a variable. Table from Ref.~\cite{cg00}. \label{tab:flux}} \begin{ruledtabular} \begin{tabular}{c|c|cc} Muon Beam & Beam & Rate & Target\\ property & Type & Dependence & Precision \\ \hline Energy ($E_\mu$) & $\nu$ (no osc.)
& $\Delta N / N = 3 \; \Delta E_\mu/E_\mu$ & $\Delta(E_\mu)/E_\mu < 0.003$ \\ & $\nu_{e} \to \nu_{\mu}$ &$\Delta N / N = 2 \; \Delta E_\mu/E_\mu$ & $\Delta(E_\mu)/E_\mu < 0.005$ \\ \hline Direction ($\Delta\theta$) & $\nu$ (no osc.) & $\Delta N/N \leq 0.01$ & $\Delta\theta < 0.6 \; \sigma_\theta$ \\ & & (for $\Delta\theta < 0.6\; \sigma_\theta$) & \\ \hline Divergence ($\sigma_\theta$) & $\nu$ (no osc.) & $\Delta N / N \sim 0.03 \; \Delta\sigma_\theta / \sigma_\theta$ & $\Delta\sigma_\theta / \sigma_\theta < 0.2$ \\ & & (for $\sigma_\theta \sim 0.1/\gamma$) & (for $\sigma_\theta \sim 0.1/\gamma$)\\ \hline Momentum spread ($\sigma_p$) & $\nu$ (no osc.) & $\Delta N / N \sim 0.06 \; \Delta\sigma_p / \sigma_p$ & $\Delta\sigma_p / \sigma_p < 0.17$ \\ \hline Polarization ($P_\mu$) & $\nu_e$ (no osc.) & $\Delta N_{\nu_e} / N_{\nu_e} = \Delta P_\mu$ & $\Delta P_\mu < 0.01$ \\ & $\nu_{\mu}$ (no osc.) & $\Delta N_{\nu_\mu} / N_{\nu_\mu} = 0.4 \; \Delta P_\mu$ & $\Delta P_\mu < 0.025$ \\ \end{tabular} \end{ruledtabular} \end{table} We next consider the systematic uncertainties on the neutrino flux. Since muon decay kinematics is very well understood, and the beam properties of the muons in the storage ring can be well determined, we expect the systematic uncertainties on the neutrino beam intensity and spectrum to be small compared to the corresponding uncertainties on the properties of conventional neutrino beams. In the muon decay straight section of a Neutrino Factory, the muon beam is designed to have an average divergence given by $\sigma_\theta = O(0.1/\gamma)$. The neutrino beam divergence will therefore be dominated by muon decay kinematics, and uncertainties on the beam direction and divergence will yield only small uncertainties in the neutrino flux at a far site. However, if precise knowledge of the flux is required, the uncertainties on $\theta$ and $\sigma_\theta$ must be taken into account, along with uncertainties on the flux arising from uncertainties on the muon energy distribution and polarization. The relationships between the uncertainties on the muon beam properties and the resulting uncertainties on the neutrino flux are summarized in Table~\ref{tab:flux}. If, for example, we wish to know the $\nu_e$ and $\nu_{\mu}$ fluxes at a far site with a precision of 1\%, we must determine the beam divergence, $\sigma_\theta$, to 20\% (see Fig.~\ref{fig:flux_xy}), and ensure that the beam direction is within $0.6\times \sigma_\theta$ of the nominal direction~\cite{cg00} (see Fig.~\ref{fig:flux_d}). We point out that it should be possible to do much better than this, and consequently, to know the fluxes at the far site with a precision much better than 1\%. \begin{figure}[hbtp!] \includegraphics*[width=3.5in]{sec3-s2_flux_xy}% \vspace{-2.3cm} \caption{(Color) Dependence of CC interaction rates on the muon beam divergence for a detector located at $L = 2800$~km from a muon storage ring containing 30~GeV unpolarized muons. Rates are shown for $\nu_e$ (boxes) and $\nu_\mu$ (circles) beams in the absence of oscillations, and for $\nu_e \to \nu_\mu$ oscillations (triangles) with the three--flavor oscillation parameters, $\delta m_{12}^2=5\times10^{-5}\, \text{eV}^2/\text{c}^4,$ $\delta m_{32}^2=3.5\times10^{-3}\, \text{eV}^2/\text{c}^4,$ $s_{13}=0.10,$ $s_{23}=0.71,$ $s_{12}=0.53,$ $\delta=0.$ The calculation is from Ref.~\cite{cg00}.} \label{fig:flux_xy} \end{figure} \begin{figure}[thbp!]
\vspace{1.5cm} \includegraphics*[width=3.5in]{sec3-s2_flux_d}% \vspace{-2.0cm} \caption{(Color) Dependence of CC interaction rates on the neutrino beam direction. Relative rates are shown for a detector at a far site located downstream of a storage ring containing 30~GeV unpolarized muons, and a muon beam divergence of 0.33~mrad. Rates are shown for $\nu_e$ (triangles) and $\nu_\mu$ (circles) beams in the absence of oscillations, and for $\nu_e \to \nu_\mu$ oscillations (boxes) with the three--flavor oscillation parameters shown in Fig.~\ref{fig:flux_xy}. The calculation is from Ref.~\cite{cg00}. } \label{fig:flux_d} \end{figure} We now consider the event distributions in a detector at a near site, close to the Neutrino Factory, which will be quite different from the corresponding distributions at a far site. There are two main reasons for this difference. First, the near detector accepts neutrinos over a large range of muon decay angles $\theta$, not just those neutrinos traveling in the extreme forward direction. This results in a broader neutrino energy distribution that is sensitive to the radial size of the detector (Fig.~\ref{nearspectra}). \begin{figure}[hbtp!] \includegraphics[width=3.5in]{sec3-s2_nearspectra} \caption{Events per g/cm$^2$ per GeV for a detector 40~m from a muon storage ring with a 600~m straight section. The three curves show all events and those falling within 50 and 20~cm of the beam center.} \label{nearspectra} \end{figure} Second, if the distance of the near detector from the end of the decay straight section is of the order of the straight section length, then the $\theta$ acceptance of the detector varies with the position of the muon decay along the straight section. This results in a more complicated radial flux distribution than expected for a far detector. However, since the dominant effects are decay length and muon decay kinematics, it should be modeled quite accurately (Fig.~\ref{xplot}). \begin{figure}[hbtp!] \includegraphics[width=3in]{sec3-s2_x} \caption{(Color) Events per g/cm$^2$ as a function of the transverse coordinate, $x,$ 50~m downstream of a 50~GeV neutrino factory providing $10^{20}$ muon decays. The central peak is mainly due to decays in the last hundred meters of the decay pipe while the large tails are due to upstream decays.} \label{xplot} \end{figure} Note that, even in a limited angular range, the event rates in a near detector are very high. Figure~\ref{eventrates} illustrates the event rates per g/cm$^2$ as a function of energy. Because most of the neutrinos produced forward in the center of mass traverse the detector fiducial volume, the factor of $\gamma^2$ present in the flux for $\theta \sim0$ is canceled and the event rate increases linearly with $E_{\mu}$. For a 50~GeV muon storage ring, the interaction rate per 10$^{20}$ muon decays is $7\times10^6 \text{ events per g/cm}^2.$ Finally, in the absence of special magnetized shielding, the high neutrino event rates in any material upstream of the detector will cause substantial backgrounds. The event rate in the last three interaction lengths ($300~\text{g/cm}^2$) of the shielding between the detector and the storage ring would be 30 interactions per beam spill at a 15~Hz machine delivering $2\times 10^{20}$ muon decays per year. These high background rates will require clever magnetized shielding designs and fast detector readout to avoid overly high accidental rates in low mass experiments. \begin{figure}[hbtp!] 
\includegraphics[width=3in]{sec3-s2_eventrates} \caption{(Color) Events per year and per g/cm$^2$ at a near detector as a function of muon beam energy in GeV. The solid curves indicate all events, the dashed and dotted curves show the effects of radial position cuts.} \label{eventrates} \end{figure} \subsection{Beta Beams} We now consider the beam properties at a Beta Beam facility. In a Beta Beam facility the neutrinos are generated by the decay of radioactive nuclei rather than muons. The two ions deemed optimal are $^{18}$Ne for $\nu_e$ and $^6$He for $\bar{\nu}_e$ production. The resulting initial neutrino beam consists of a single flavor. In addition, since the decay kinematics is well known, the uncertainties on the neutrino energy spectrum are expected to be small. The electron energy spectrum produced by a nuclear $\beta$-decay at rest is \begin{equation} \frac{dN^{\rm rest}}{dE_e} \sim E^2_e (E_e-E_0)^2 \end{equation} where $E_0$ is the electron end-point energy, which is 3.5~MeV for $^6$He and 3.4~MeV for $^{18}$Ne. In the rest frame of the ion, the spectrum of the neutrinos~\cite{jj-beta} is \begin{equation} \frac{dN^{\rm rest}}{d \cos\theta d E_{\nu}} \sim E^2_{\nu} (E_0-E_{\nu})\sqrt{(E_0-E_{\nu})^2-m^2_e}. \end{equation} After performing a boost and normalizing to the total decays (in the straight section) $N_\beta$, the neutrino flux per solid angle in a detector located at a distance $L$ and aligned with the straight section can be calculated as \begin{equation} \frac{d\Phi^{\rm lab}}{dS dy}\Bigg|_{\theta\simeq 0} \approx \frac{N_\beta}{\pi L^2} \frac{\gamma^2}{g(y_e)} y^2 (1-y) \sqrt{(1-y)^2-y_e^2} \end{equation} where $0\leq y=\frac{E_\nu}{2 \gamma E_0} \leq 1-y_e$, $y_e=m_e/E_0$ and \begin{equation} g(y_e)=\frac{1}{60} \left\{ \sqrt{1-y_e^2}(2-9y_e^2-8y_e^4)+15 y_e^4 \log \left[\frac{y_e}{1-\sqrt{1-y_e^2}}\right] \right\}. \end{equation} The neutrino flux and energy distribution depend upon the boost $\gamma$, and hence upon the energies of the stored radioactive ions. The original Beta Beam proposal~\cite{autin} was to use the CERN SPS to accelerate the ions. The desire to simultaneously store ions of both species in the storage ring, and build a large detector in the Fr\'{e}jus tunnel in France (which fixes the baseline), has led to proposed Beta Beam energies corresponding to $\gamma \sim 60$ and 100 for the two ion species, yielding mean neutrino energies of 0.2~GeV and 0.3~GeV. Recently, it has been suggested that these energies are too low for optimal sensitivity to the interesting physics, and hence higher energy scenarios are being considered, using the Fermilab Tevatron or the CERN LHC to accelerate the ions. Figure \ref{fig:betabeam} shows the expected fluxes for the three scenarios, ``low'' energy (e.g., SPS), ``medium'' energy (e.g., Tevatron), and ``high'' energy (e.g., LHC). Although the integrated fluxes are similar, the cross section grows with energy, yielding more events for the higher energies. Table~\ref{tab:betabeam} shows the expected charged current event rates for the three setups. \begin{table}[thbp!] \caption{Number of charged current events without oscillations per kton-year for the three reference setups described in the text. The average neutrino energy is also shown. Table from Ref.~\cite{jj-beta}.
\label{tab:betabeam}} \begin{ruledtabular} \begin{tabular}{ccccc} $\gamma$ & L (km) & $\bar{\nu}_e$ CC & $\nu_e$ CC & $\langle E_\nu \rangle$ (GeV) \\ \hline 60/100 & 130 & 1.9 & 25.7 & 0.2/0.3 \\ 350/580 & 730 & 48.6 & 194.2 & 1.17/1.87 \\ 1500/2500 & 3000 & 244.5 & 800.2 & 5.01/7.55 \\ \end{tabular} \end{ruledtabular} \end{table} However, it should be noted that the higher energy options require both a TeV (or multi-TeV) accelerator and storage ring, which are expensive and introduce additional technical challenges. Finally, further study is needed to fully explore the systematic uncertainties on the beam properties of a Beta Beam facility. Note, however, that the neutrino beam divergence is controlled by the \textit{Q} value of the beta decay and by the divergence of the parent beam in the straight section. In the CERN (low energy) case, the typical decay angle is 7~mrad. By contrast, the parent beam divergence would be O(100)~$\mu$rad, assuming a 200~m beta function in the decay section. For higher energies, both the inherent neutrino divergence and the parent beam divergence scale like $1/\gamma$. Hence the decay kinematics is expected to dominate the beam divergence for all the Beta Beam scenarios. A more complete understanding of the systematics must await a detailed design for the storage ring and an understanding of the beam halo, etc. Background conditions for near detectors also deserve study. \begin{figure}[bhtp!] \mbox{ \includegraphics*[viewport=0 0 650 560,width=3.5in]{sec3-newleft} } \mbox{ \includegraphics*[viewport=0 0 540 557,width=3.0in]{sec3-newright} } \caption{(Color) Comparison of Beta Beam neutrino fluxes for the three setups described in the text, shown as a function of the neutrino energy for $\bar{\nu}_e$ (solid) and $\nu_e$ (dashed). Figures from Ref.~\cite{jj-beta}. } \label{fig:betabeam} \end{figure} \section{Physics Reach} Ultimately, to fully test the three-flavor mixing framework, determine all of the relevant neutrino oscillation parameters, and answer the most important neutrino-oscillation related physics questions, we would like to measure the oscillation probabilities $P(\nu_\alpha \to \nu_\beta)$ as a function of the baseline $L$ and neutrino energy $E$ (and hence $L/E$) for all possible initial and final flavors $\alpha$ and $\beta$. This requires a beam with a well known initial flavor content, and a detector that can identify the flavor of the interacting neutrino. The neutrinos interact in the detector via charged current (CC) and neutral current (NC) interactions to produce a lepton accompanied by a hadronic shower arising from the remnants of the struck nucleon. In CC interactions, the final-state lepton tags the flavor ($\beta$) of the interacting neutrino. To accomplish our ultimate goal, we will need $\nu_e$ in addition to $\nu_\mu$ beams, and detectors that can distinguish between NC, $\nu_e$ CC, $\nu_\mu$ CC, and $\nu_\tau$ CC interactions. Conventional neutrino beams are $\nu_\mu$ beams, Beta Beams provide $\nu_e$ beams, and Neutrino Factories provide $\nu_e$ and $\nu_\mu$ beams. The sensitivities of experiments at the different facilities will depend on their statistical precision, the background rates, the ability of the experiments to discriminate between true and false solutions within the three-flavor mixing parameter space, and the ability of the experimental setups to detect as many of the oscillation modes as possible.
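To make the connection between measured rates and oscillation parameters concrete, the leading (atmospheric-scale) vacuum term of the $\nu_e \to \nu_\mu$ appearance probability can be sketched as follows; matter effects and the subleading solar and \textsl{CP}-violating terms, which are central to much of the discussion in this Section, are deliberately omitted, and the parameter values are illustrative only:
\begin{verbatim}
import math

# Leading vacuum term of the nu_e -> nu_mu appearance probability (sketch).
# Matter effects and the subleading solar/CP terms are omitted here.
def p_appearance(sin2_2th13, sin2_th23, dm2_31_eV2, L_km, E_GeV):
    return (sin2_2th13 * sin2_th23
            * math.sin(1.267 * dm2_31_eV2 * L_km / E_GeV) ** 2)

# Illustrative inputs: sin^2(2 theta_13) = 0.004, maximal theta_23,
# |dm^2_31| = 3.5e-3 eV^2, L = 3000 km, E = 15 GeV.
print("P(nu_e -> nu_mu) ~ %.1e" % p_appearance(0.004, 0.5, 3.5e-3, 3000.0, 15.0))
\end{verbatim}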
In the following, we will first consider the experimental signatures and sensitivities at a Neutrino Factory, and then the corresponding signatures and sensitivities at a Beta Beam facility. \subsection{Neutrino Factory Sensitivity} \subsubsection{Wrong-Sign Muons} At a Neutrino Factory in which, for example, positive muons are stored, the initial beam consists of 50\% $\nu_e$ and 50\% $\bar{\nu}_\mu$. In the absence of oscillations, the $\nu_e$ CC interactions produce electrons and the $\bar{\nu}_\mu$ CC interactions produce positive muons. Note that the charge of the final state lepton tags the flavor of the initial neutrino or antineutrino. In the presence of $\nu_e \to \nu_\mu$ oscillations, the $\nu_\mu$ CC interactions produce negative muons (i.e., wrong--sign muons). This is a very clean experimental signature since, with a segmented magnetized iron-scintillator sampling calorimeter for example, it is straightforward to suppress backgrounds to 1 part in $10^4$ of the total CC interaction rate, or better. This means that at a Neutrino Factory backgrounds to the $\nu_e \to \nu_\mu$ oscillation signal are extremely small. The full statistical sensitivity can therefore be exploited down to values of $\sin^2 2\theta_{13}$ approaching $10^{-4}$; below this, backgrounds must be subtracted, and further advances in sensitivity scale like $\sqrt{N}$ rather than $N$. This enables Neutrino Factories to go about two orders of magnitude beyond the sensitivities achievable with conventional neutrino Superbeams. A more complete discussion of backgrounds at a Neutrino Factory can be found in Refs.~\cite{fn-692,DeRujula:1998hd,Cervera:2000kp,Apollonio:2002en}. \begin{figure}[!btph] \includegraphics*[width=4in]{sec4-fig3} \caption{(Color) Predicted ratios of wrong--sign muon event rates when positive and negative muons are stored in a 20~GeV Neutrino Factory, shown as a function of baseline. A muon measurement threshold of 4~GeV is assumed. The lower and upper bands correspond, respectively, to the two possible neutrino mass eigenstate orderings, as labeled. The widths of the bands show how the predictions vary as the \textsl{CP} violating phase $\delta$ is varied from $-\frac{\pi}{2}$ to $+\frac{\pi}{2}$, with the thick lines showing the predictions for $\delta = 0$. The statistical error bars correspond to a Neutrino Factory yielding a data sample of $10^{21}$ decays with a 50~kton detector. Figure from Ref.~\cite{comments}.} \label{sec4:fig2} \end{figure} We now consider how wrong-sign muon measurements at a Neutrino Factory are used to answer the most important neutrino oscillation physics questions. Suppose we store positive muons in the Neutrino Factory, and measure the number of events tagged by a negative muon in a distant detector, and then store negative muons and measure the rate of events tagged by a positive muon. To illustrate the dependence of the expected measured rates on the chosen baseline, the neutrino mass hierarchy, and the complex phase $\delta$, we will fix the other oscillation parameters and consider an experiment downstream of a 20~GeV Neutrino Factory. Let half of the data taking be with $\mu^+$ stored, and the other half with $\mu^-$ stored. In Fig.~\ref{sec4:fig2}, the predicted ratio of wrong-sign muon events $R \equiv N(\bar{\nu}_e \to \bar{\nu}_\mu) / N(\nu_e \to \nu_\mu)$ is shown as a function of baseline for $\Delta m^2_{32} = +0.0035$~eV$^2$ and $- 0.0035$~eV$^2$, with $\sin^2 2\theta_{13}$ set to the small value 0.004.
(Although these $\Delta m^2$ values are now a little different from those emerging from global analyses of the atmospheric and solar neutrino data, they are the ones used for the figure, which comes from Ref.~\cite{comments}, and are still useful to illustrate how the measurements can be used to determine the oscillation parameters.) Figure~\ref{sec4:fig2} shows two bands. The upper (lower) band corresponds to $\Delta m^2_{32} < 0\, (> 0).$ Within the bands, the \textsl{CP} phase $\delta$ is varying. At short baselines the bands converge, and the ratio $R = 0.5$ since the antineutrino CC cross section is half of the neutrino CC cross section. At large distances, matter effects enhance $R$ if $\Delta m^2_{32} < 0$ and reduce $R$ if $\Delta m^2_{32} > 0,$ and the bands diverge. Matter effects become significant for baselines exceeding about 2000~km. The error bars indicate the expected statistical uncertainty on the measured $R$ with a data sample of $5\times 10^{22}$~kton-decays. With these statistics, the sign of $\Delta m^2_{32}$ is determined with very high statistical significance. With an order of magnitude smaller data sample (entry level scenario~\cite{entry-level}) or with an order of magnitude smaller $\sin^2 2\theta_{13}$ the statistical uncertainties would be $\sqrt{10}$ larger, but the sign of $\Delta m^2_{32}$ could still be determined with convincing precision. \begin{figure}[tbph!] \includegraphics*[width=3.5in]{sec4-fig_v3} \caption{(Color) Predicted measured energy distributions for CC events tagged by a wrong-sign (negative) muon from $\nu_e \to\nu_\mu$ oscillations (no cuts or backgrounds), shown for various $\delta m^2_{32}$, as labeled. The predictions correspond to $2 \times 10^{20}$ decays, $E_\mu = 30$~GeV, $L = 2800$~km, and a representative set of values for $\delta m^2_{12}$, $\sin^22\theta_{13}$, $\sin^22\theta_{23}$, $\sin^22\theta_{12}$, and $\delta.$ Results are from Ref.~\cite{bgrw99}.} \label{fig:v3} \end{figure} \begin{figure}[btph!] \includegraphics*[width=3.5in]{sec4-fig_v4} \caption{(Color) Same as in Fig.~\ref{fig:v3}, for CC events tagged by a wrong-sign (positive) muon from $\bar{\nu}_e \to \bar{\nu}_\mu$ oscillations. } \label{fig:v4} \end{figure} In addition to the ratio of wrong--sign muon signal rates $R$, the two energy-dependent wrong-sign muon event energy distributions can be separately measured. To show how this additional information can help, the predicted measured energy distributions 2800~km downstream of a 30~GeV Neutrino Factory are shown in Figs.~\ref{fig:v3} and \ref{fig:v4} for, respectively, $\nu_e \to \nu_\mu$ and $\bar{\nu}_e \to \bar{\nu}_\mu$ wrong--sign muon events. The distributions are shown for a range of positive and negative values of $\delta m^2_{32}$. Note that, after allowing for the factor of two difference between the neutrino and antineutrino cross sections, for a given $|\delta m^2_{32}|$, if $\delta m^2_{32} > 0$ we would expect to observe a lower wrong--sign muon event rate and a harder associated spectrum when positive muons are stored in the Neutrino Factory than when negative muons are stored. On the other hand, if $\delta m^2_{32} < 0$ we would expect to observe a higher wrong--sign muon event rate and a softer associated spectrum when positive muons are stored in the Neutrino Factory than when negative muons are stored. 
Hence, measuring the differential spectra when positive and negative muons are alternately stored in the Neutrino Factory can enable the sign of $\delta m^2_{32}$ to be determined unambiguously~\cite{bgrw99}, and can also provide a measurement of $\delta m^2_{32}$ together with a consistency check between the behavior of the rates and that of the energy distributions. \begin{figure}[!bhtp] \includegraphics*[width=5in]{sec4-geer-figure} \caption{(Color) The predicted number of wrong--sign muon events when negative muons are stored in the Neutrino Factory, versus the corresponding rate when positive muons are stored, shown as a function of $\theta_{13}, \theta_{23}, \delta$ and the assumed mass hierarchy, as labeled. The calculation corresponds to a 16~GeV Neutrino Factory with a baseline of 2000~km, and 10~years of data taking with a 100~kton detector and $2 \times 10^{20} \; \mu^+$ and $2 \times 10^{20} \; \mu^-$ decays in the beam-forming straight section per year. The ellipses show how the predicted rates vary as the \textsl{CP} phase $\delta$ varies.} \label{fig:ellipses} \end{figure} \subsubsection{Other Channels} In practice, to measure $\theta_{13}$, determine the mass hierarchy, and search for \textsl{CP} violation, the analysis of the wrong-sign muon rates must be performed allowing all of the oscillation parameters to vary simultaneously within their uncertainties. Since the relationship between the measured quantities and the underlying mixing parameters is complicated, with a minimal set of measurements it may not be possible to identify a unique region of parameter space consistent with the data. For Superbeams a detailed discussion of this problem can be found in Refs.~\cite{Minakata,Fogli:1996pv,Winter,Burguet-Castell:2001ez,Barger:2001yr,Burguet-Castell:2002qx}. To understand the nature of the challenge, Fig.~\ref{fig:ellipses} shows, as a function of $\theta_{13}, \theta_{23}, \delta$ and the assumed mass hierarchy, the predicted number of wrong--sign muon events when negative muons are stored in the Neutrino Factory, versus the corresponding rate when positive muons are stored. The example is for a 16~GeV Neutrino Factory with a baseline of 2000~km, and 10~years of data taking with a 100~kton detector and $2 \times 10^{20} \; \mu^+$ and $2 \times 10^{20} \; \mu^-$ decays in the beam-forming straight section per year. The ellipses show how the predicted rates vary as the \textsl{CP} phase $\delta$ varies. All of the \textsl{CP} conserving points ($\delta = 0$ and $\pi$) lie on the diagonal lines. Varying the mixing angles moves the ellipses up and down the lines. Varying the mass hierarchy moves the family of ellipses from one diagonal line to the other. Note that the statistics are large, and the statistical errors would be barely visible if plotted on this figure. Given these statistical errors, for the parameter region illustrated by the figure, determining the mass hierarchy (i.e., which diagonal line the measured point lies closest to) will be straightforward. Determining whether there is \textsl{CP} violation in the lepton sector will amount to determining whether the measured point is consistent with being on the \textsl{CP} conserving line. Determining the exact values for the mixing angles and $\delta$ is more complicated, since various combinations can result in the same predicted values for the two measured rates. This is the origin of possible false solutions in the three--flavor mixing parameter space.
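To make the rate measurements discussed above concrete, the short Python sketch below evaluates approximate $\nu_e \to \nu_\mu$ and $\bar{\nu}_e \to \bar{\nu}_\mu$ appearance probabilities in matter of constant density, using the standard expansion in the small parameters $\alpha \equiv \Delta m^2_{21}/\Delta m^2_{31}$ and $\sin 2\theta_{13}$ (see, e.g., Ref.~\cite{Cervera:2000kp}). It is an illustrative sketch only: the oscillation parameter values, the average density, and the use of a single representative neutrino energy are assumptions made here for illustration (a real analysis integrates over the neutrino flux and cross sections), and the sign convention for the interference term varies in the literature.
\begin{verbatim}
import math

def p_app(E, L, dm31, dm21=7.5e-5, s22th13=0.004, s22th12=0.86,
          th23=math.pi/4, delta=0.0, rho_ye=1.5, antinu=False):
    """Approximate P(nu_e -> nu_mu) in constant-density matter.
    E [GeV], L [km], dm31 and dm21 [eV^2], rho_ye = density * Y_e [g/cm^3]."""
    alpha = dm21 / dm31
    D = 1.267 * dm31 * L / E           # Delta = dm31 * L / 4E (1.267 converts units)
    A = 7.63e-5 * rho_ye * E / dm31    # matter parameter 2*sqrt(2)*G_F*n_e*E / dm31
    if antinu:                         # antineutrinos: A -> -A, delta -> -delta
        A, delta = -A, -delta
    s2th13, s2th12 = math.sqrt(s22th13), math.sqrt(s22th12)
    s23, c23 = math.sin(th23), math.cos(th23)
    f = math.sin((1.0 - A) * D) / (1.0 - A)   # atmospheric factor
    g = math.sin(A * D) / A                   # solar/matter factor
    return (s23**2 * s22th13 * f * f
            + alpha**2 * c23**2 * s22th12 * g * g
            + alpha * s2th13 * s2th12 * 2.0 * s23 * c23
              * f * g * math.cos(D - delta))  # interference (convention-dependent)

E = 13.0   # GeV: a single representative energy (assumption)
for dm31 in (+3.5e-3, -3.5e-3):
    for L in (732.0, 2800.0, 7332.0):
        R = 0.5 * p_app(E, L, dm31, antinu=True) / p_app(E, L, dm31)
        print(f"dm31 = {dm31:+.1e} eV^2  L = {L:6.0f} km  R = {R:7.2f}")
\end{verbatim}
At short baselines the ratio $R$ stays near the cross-section value of 0.5, while at long baselines matter effects drive $R$ far below (above) 0.5 for the normal (inverted) ordering, as in Fig.~\ref{sec4:fig2}. Scanning $\delta$ at fixed mixing angles traces out ellipses in the plane of the two measured rates, and different $(\theta_{13},\theta_{23},\delta)$ combinations can yield nearly identical rate pairs, which is the degeneracy referred to below.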
To eliminate those false solutions, event samples other than $\nu_e \to \nu_\mu$ transitions tagged by wrong-sign muons will be important. We have seen that, in the presence of $\nu_e \to \nu_\mu$ oscillations, the $\nu_\mu$ CC interactions produce negative muons (i.e., wrong--sign muons). Similarly, $\bar{\nu}_\mu \to \bar{\nu}_e$ oscillations produce wrong--sign electrons, $\bar{\nu}_\mu \to \bar{\nu}_\tau$ oscillations produce events tagged by a $\tau^+,$ and $\nu_e \to \nu_\tau$ oscillations produce events tagged by a $\tau^-$. Hence, there is a variety of information that can be used to measure or constrain neutrino oscillations at a Neutrino Factory, namely the rates and energy distributions of events tagged by \begin{description} \item{(a)} right--sign muons \item{(b)} wrong--sign muons \item{(c)} electrons or positrons (their charge is difficult to determine in a massive detector) \item{(d)} positive $\tau$--leptons \item{(e)} negative $\tau$--leptons \item{(f)} no charged lepton. \end{description} If these measurements are made when there are alternately positive and negative muons decaying in the storage ring, there are a total of 12~spectra that can be used to extract information about the oscillations. Some examples of the predicted measured spectra are shown as a function of the oscillation parameters in Figs.~\ref{fig:m1} and \ref{fig:m2} for a 10~kton detector sited 7400~km downstream of a 30~GeV Neutrino Factory. These distributions are sensitive to the oscillation parameters, and can be fit simultaneously to extract the maximum information. Clearly, the high intensity $\nu_e$, $\bar{\nu}_e$, $\nu_\mu$, and $\bar{\nu}_\mu$ beams at a Neutrino Factory would provide a wealth of precision oscillation data. The full value of this wealth of information has not yet been explored, but some specific things to be noted are: \begin{enumerate} \item It has been shown~\cite{donini,synergy1,huber,lindner} that the various measurements at a Neutrino Factory provide sufficient information to eliminate false solutions within the three--flavor parameter space. Indeed the wealth of information in the Neutrino Factory data is essential for this purpose. \item If $\sin^2 2\theta_{13}$ exceeds $\sim 0.001$ the $\nu_e \to \nu_\tau$ channel is particularly important, both as a means to suppress the false solutions~\cite{donini,synergy1,donini2}, and also as the only direct experimental probe of $\nu_e \leftrightarrow \nu_\tau$ transitions. The ability of the $\nu_e \to \nu_\tau$ measurements to eliminate false solutions is illustrated in Fig.~\ref{fig:olga}, which, for a representative set of oscillation parameters, shows as a function of the \textsl{CP} phase $\delta$ the location of the false solution with respect to the correct solution in $\theta_{13}$--space (or more precisely, the distance $\Delta\theta$ between the two solutions). Note that, when compared to the $\bar{\nu}_e \to \bar{\nu}_\mu$ case, $\Delta\theta$ has the opposite sign for $\bar{\nu}_e \to \bar{\nu}_\tau$. In practice, this means that together the two measurements enable the false solution to be effectively eliminated. \item Within the three--flavor framework, the relationship between the measured oscillation probabilities and the associated oscillation parameters is complicated. Experimental redundancy, permitting the over-determination of the oscillation parameters, is likely to prove essential, both to weed out misleading measurements and to verify that the three-flavor framework is correct.
\end{enumerate} \begin{figure}[thbp!] \includegraphics*[width=4.5in]{sec4-mum} \caption{(Color) Visible energy spectra for four event classes when $10^{21}\,\mu^-$ decay in a 30~GeV Neutrino Factory at $L = 7400$~km. Black solid histogram: no oscillations. Blue dotted histogram: $\delta m^2_{32}=3.5\times 10^{-3}$~eV$^2$/c$^4$, $\sin^22\theta_{23}=1$. Red dashed histogram: $\delta m^2_{32}=7\times 10^{-3}$~eV$^2$/c$^4$, $\sin^22\theta_{23}=1$. The distributions in this figure and the following figure are for an ICANOE-type detector, and are from Ref.~\cite{camp00}.} \label{fig:m1} \end{figure} \begin{figure}[bthp!] \includegraphics*[width=4.5in]{sec4-mup} \caption{(Color) Same as in Fig.~\ref{fig:m1}, but with positive muons circulating in the storage ring. The difference between the two figures is due to the different cross sections for neutrinos and antineutrinos, and to matter effects.} \label{fig:m2} \end{figure} \begin{figure}[thbp!] \includegraphics*[width=3.5in]{sec4-olgafig} \caption{(Color) Equiprobability curves in the ($\Delta \theta, \delta$) plane, for $\bar\theta_{13} = 5^\circ,$ $\bar\delta = 60^\circ$, $E_\nu \in [5, 50]$~GeV and $L = 732$~km for the $\nu_e \to \nu_\mu$ and $\nu_e \to \nu_\tau$ oscillations (neutrinos on the left, antineutrinos on the right). $\Delta \theta$ is defined as the difference between the reconstructed parameter $\theta_{13}$ and the input parameter $\bar{\theta}_{13}$, i.e., $\Delta \theta = \theta_{13} - \bar{\theta}_{13}$. From Ref.~\cite{donini}.} \label{fig:olga} \end{figure} \subsubsection{Neutrino Factory Calculations} To understand how sensitive Neutrino Factory measurements will be in determining $\theta_{13}$ and the neutrino mass hierarchy, and the sensitivity to \textsl{CP} violation in the lepton sector, we must consider the impact of statistical and systematic uncertainties, correlations between the parameters that vary within fits to the measured distributions, and the presence or absence of false solutions in the three-flavor mixing parameter space. To take account of these effects, and to see which different neutrino oscillation experiments best complement one another, a global fitting program has been created~\cite{lindner,globes} that uses simulated right-sign muon and wrong-sign muon data sets, and includes: \begin{enumerate} \item Beam spectral and normalization uncertainties. \item Matter density variations of 5\% about the average value. \item Constraint of solar neutrino oscillation parameters within the post-KamLAND LMA region. \item Simulation of $\nu_\mu$ CC QE, $\nu_\mu$ and $\nu_e$ CC inelastic, and NC events for all flavors. Note that the NC events are included in the analysis as a source of background. The NC signal is not yet exploited as an additional constraint. \item A check of the influence of cross section uncertainties (this mostly affects energies lower than those of interest for Neutrino Factories). \item Energy-dependent detection efficiencies, enabling energy threshold effects to be taken into account. \item Gaussian energy resolutions. \item Flavor, charge, and event misidentification. \item Overall energy-scale and normalization errors. \item An analysis of statistical and systematic precisions, and the ability to eliminate false solutions. \end{enumerate} \begin{table}[thbp!] \caption{Signal and background rates for a CERN SPS Beta Beam, a high performance Superbeam (a 4~MW JHF beam with a 1~Mton water Cerenkov detector), and a Neutrino Factory.
The numbers correspond to $\sin^2 2\theta_{13} = 0.1$ and $\delta = 0$. The rates have been calculated by the authors of Ref.~\cite{lindner}. \label{tab:SoverB}} \begin{ruledtabular} \begin{tabular}{lccc} & $\beta$-Beam & JHF-HK & Nu-Factory\\ \hline \multicolumn{4}{c}{$\nu$}\\ \hline Signal & 4967 & 13171 & 69985 \\ Background & 397 & 2140 & 95.2 \\ Signal/Background & 12.5 & 6.2 & 735 \\ \hline \multicolumn{4}{c}{$\bar{\nu}$}\\ \hline Signal & 477 & 9377 & 15342 \\ Background & 1 & 3326 & 180 \\ Signal/Background & 477.5 & 2.8 & 85.2 \\ \end{tabular} \end{ruledtabular} \end{table} \begin{figure}[thbp!] \includegraphics*[width=3.5in]{sec4-figure3} \caption{(Color) The sensitivity reaches, as functions of $\sin^2 2 \theta_{13}$, for $\sin^2 2 \theta_{13}$ itself, for the sign of $\Delta m_{31}^2$ (with a true $\Delta m_{31}^2>0$), and for (maximal) \textsl{CP} violation $\delta_{\mathrm{CP}}=\pi/2$, for each of the indicated baseline combinations. The bars show the ranges in $\sin^2 2 \theta_{13}$ where sensitivity to the corresponding quantity can be achieved at the $3\sigma$ confidence level. The dark bars mark the variations in the sensitivity limits obtained by allowing the true value of $\Delta m_{21}^2$ to vary in the $3\sigma$ LMA-allowed range given in Ref.~\cite{Maltoni:2002aw} and elsewhere $(\Delta m_{21}^2 \sim 4\times 10^{-5} \, \text{eV}^2 - 3\times 10^{-4} \, \text{eV}^2).$ The arrows/lines correspond to the LMA best-fit value. Figure from Ref.~\cite{huber}.} \label{fig:lindner} \end{figure} The calculated signal and background rates are listed in Table~\ref{tab:SoverB}. The roughly two orders of magnitude improvement in the signal/background ratio at a Neutrino Factory, compared with the corresponding ratio at a high performance Superbeam, is evident. The results from the full calculations are shown in Fig.~\ref{fig:lindner}. The calculation is more fully described in Ref.~\cite{lindner}. The figure shows the minimum value of $\sin^2 2\theta_{13}$ for which three experimental goals could be achieved (with $3\sigma$ significance): first, the observation of a finite value of $\theta_{13}$; second, the determination of the neutrino mass hierarchy; and third, the observation of \textsl{CP} violation in the lepton sector if the underlying $\delta$ corresponds to maximal \textsl{CP} violation. The three groups of bars correspond to three different experimental scenarios, with different baselines. The favored scenario is the one illustrated by the bottom group of three bars, for which there are two detectors, one at $L = 7500$~km and the other at $L = 3000$~km. Note that: \textbf{At a Neutrino Factory ${\sin^2 2\theta_{13}}$ can be measured, the neutrino mass hierarchy determined, and a search for \textsl{CP} violation in the lepton sector made for all values of $\sin^2 2\theta_{13}$ down to \textit{O}$(10^{-4})$, or even a little less.} \begin{figure}[thbp!] \includegraphics*[width=4in]{sec4-Figure_delta_sensitivity} \caption{(Color) The $1\sigma$ precision on the determination of the phase $\delta$ at a Neutrino Factory, and at a representative high-performance Superbeam, together with the combined Neutrino Factory plus Superbeam sensitivity. The sensitivities are shown as a function of the underlying value of $\sin^2 2\theta_{13}.$ The thin curves correspond to cases where the \textit{sign-degeneracy} is not taken into account.
Calculation from the authors of Ref.~\cite{lindner}.} \label{fig:delta} \end{figure} If $\sin^2 2\theta_{13}$ is fairly large, Superbeam experiments may also establish its value, and perhaps determine the mass hierarchy and begin the search for \textsl{CP} violation. Figure~\ref{fig:delta} illustrates the role of a Neutrino Factory over a broad range of $\sin^2 2\theta_{13}$ values. The figure shows, as a function of the underlying value of $\sin^2 2\theta_{13}$, the $1\sigma$ precision on the determination of the phase $\delta$ at a Neutrino Factory, and at a representative high-performance Superbeam, together with the combined Neutrino Factory plus Superbeam sensitivity. Below values of $\sin^2 2\theta_{13} \sim 10^{-2}$ the Neutrino Factory sensitivity is significantly better than the sensitivity that can be achieved with Superbeams, and indeed provides the only sensitivity to the \textsl{CP} phase if $\sin^2 2\theta_{13}$ is significantly smaller than $10^{-2}$. Above $\sin^2 2\theta_{13} \sim 10^{-2}$ the Neutrino Factory measurements still enable a modest improvement to the \textsl{CP} violation measurement sensitivity, but the exact impact that a Neutrino Factory might have in this case is less clear. The uncertainty on the matter density, which is believed to be \textit{O}$(5\%),$ is likely to be a limiting uncertainty for \textsl{CP} violation measurements~\cite{Ohlsson}. Improved knowledge of the matter density along the neutrino flight-path would improve the expected Neutrino Factory sensitivity. In addition, Bueno \textit{et al.}~\cite{camp00} have shown that the energy dependencies of matter and \textsl{CP} violating effects are different, and can be exploited to further separate the two effects. For $\sin^2 2\theta_{13} > 0.01,$ the case for a Neutrino Factory will depend upon just how well Superbeam experiments ultimately perform, whether any new discoveries are made along the way that complicate the analysis, whether any theoretical progress is made along the way that leads to an emphasis on the type of measurements at which a Neutrino Factory excels, how important further tests of the oscillation formalism are in general, and the importance of observing and measuring $\nu_e \to \nu_\tau$ oscillations in particular. \textbf{We conclude there is a strong physics case for a Neutrino Factory if $\sin^2 2\theta_{13}$ is less than $\sim 0.01$. There may also be a strong case if $\sin^2 2\theta_{13}$ is larger than this, but it is too early to tell.} \subsubsection{Special Case: $\theta_{13} = 0$} The case $\theta_{13} = 0$ is very special. The number of mixing angles needed to describe the $3 \times 3$ unitary neutrino mixing matrix would be reduced from three to two, suggesting the existence of a new conservation law that imposes an additional constraint on the elements of the mixing matrix. The discovery of a new conservation law happens rarely in physics, and almost always leads to revolutionary insights into our understanding of how the physical universe works. Hence, if it were possible to establish that $\theta_{13} = 0$, it would be a major discovery.
Note that in the limit $\theta_{13} \to 0,$ the oscillation probability for $\nu_e \leftrightarrow \nu_\mu$ transitions remains finite, and is given by: \begin{eqnarray} P\left( \nu_e \to \nu_\mu \right) & = & \left(\frac{\Delta m_{21}^2}{\Delta m_{31}^2}\right)^2 \sin^2 2\theta_{12} \cos^2 \theta_{23} \frac{\sin^2 (A \Delta)}{A^2} \, , \end{eqnarray} where $\Delta \equiv \Delta m_{31}^2 L/4E$ and the matter parameter $A \equiv 2\sqrt{2}\, G_F n_e E/\Delta m_{31}^2$; $A = 1$ if the neutrino energy corresponds to the matter resonance, which for a long-baseline terrestrial experiment means neutrino energies $E \sim 12$~GeV. In addition, if the baseline $L$ is chosen such that $L/E$ corresponds to the oscillation maximum, then $\sin^2\Delta = 1$, and we have that \begin{equation} P(\nu_e \to \nu_\mu)\sim\sin^2 2\theta_{12} \cos^2 \theta_{23} \left(\frac{\Delta m_{21}^2}{\Delta m_{31}^2}\right)^2. \end{equation} Substituting into this expression values for the oscillation parameters that are consistent with the present solar and atmospheric neutrino data (for example, $\Delta m_{21}^2/\Delta m_{31}^2 \approx 0.03$, $\sin^2 2\theta_{12} \approx 0.8$, and $\cos^2\theta_{23} \approx 0.5$ give $P \approx 4 \times 10^{-4}$), we are led to conclude that even if $\theta_{13} = 0$, provided the neutrino energy and baseline are chosen appropriately, $\nu_e \leftrightarrow \nu_\mu$ transitions are still directly observable in an appearance experiment if oscillation probabilities of \textit{O}$(10^{-4})$ are observable. Hence, if $\theta_{13}$ is very small, the ideal neutrino oscillation experiment will be a long baseline experiment that uses neutrinos with energies close to 12~GeV, i.e., uses a baseline such that $L/E$ corresponds to the oscillation maximum, and is sensitive to values of $P(\nu_e \leftrightarrow \nu_\mu)\sim 10^{-4}$ or smaller. Neutrino Factories provide the only way we know to satisfy these experimental requirements. \textbf{If $\theta_{13} = 0$ a Neutrino Factory experiment would enable (i) the first observation of $\nu_e \leftrightarrow \nu_\mu$ transitions in an appearance experiment, and (ii) an upper limit on $\sin^2 2\theta_{13}$ of \textit{O}($10^{-4}$) or smaller.} These are major experimental results that would simultaneously provide a final confirmation of the three-flavor mixing framework (by establishing $\nu_e \leftrightarrow \nu_\mu$ transitions in an appearance experiment) while strongly suggesting the existence of a new conservation law. In considering the case $\theta_{13} = 0,$ it should be noted that within the framework of GUT theories, radiative corrections will change the value of $\sin^2 2\theta_{13}$ measured in the laboratory from the underlying value of $\sin^2 2\theta_{13}$ at the GUT scale. Recent calculations~\cite{radiative} have suggested that these radiative corrections to $\sin^2 2\theta_{13}$ will be \textit{O}$(10^{-4}).$ If this is the case, the ultimate Neutrino Factory experiment would not only provide the first direct observation of $\nu_e \to \nu_\mu$ transitions, but would also \begin{itemize} \item establish a finite value for $\theta_{13}$ at laboratory scales consistent with being zero at the GUT scale, \item determine the sign of $\Delta m_{31}^2$, and hence determine whether the neutrino mass hierarchy is normal or inverted, and \item detect maximal \textsl{CP} violation in the lepton sector. \end{itemize} These would be tremendously important results. \subsection{Beta Beam Calculations and Results} The Beta Beam concept is more recent than the Neutrino Factory idea, and the performance of Beta Beam experiments is less well established.
Recent calculations of the $\sin^2 2\theta_{13}$ sensitivity for low energy Beta Beam scenarios~\cite{lindner-beta,Donini:2004hu} have included the effects of systematic uncertainties, correlations, and false solutions in parameter space. Expected signal and background rates are summarized in Table~\ref{tab:SoverB}. The expected signal rates are relatively modest. The neutrino Beta Beam signal would be a factor of 2--3 less than expected at a high--performance Superbeam, and a factor of 14 less than at a Neutrino Factory. The rates are even lower for an antineutrino Beta Beam: a factor of 20 less than the rates at a high--performance Superbeam, and a factor of 32 less than at a Neutrino Factory. In addition, it has been pointed out~\cite{jj-beta} that the neutrino energies are comparable to the target nucleon kinetic energies due to Fermi motion, and therefore there is no useful spectral information in the low energy Beta Beam measurements. Hence, the useful information is restricted to the measured muon neutrino (and antineutrino) appearance rates. Nevertheless, the signal/background ratios are good: 12.5 for the neutrino Beta Beam (compared with 6.2 for the Superbeam and 735 for the Neutrino Factory), and an impressive 478 for the antineutrino Beta Beam (compared with 2.8 for the Superbeam and 85 for the Neutrino Factory). Hence the interest in Beta Beams. \begin{figure}[hbtp!] \includegraphics*[width=3in,angle=-90]{sec4-Huber} \caption{(Color) CERN SPS Beta Beam sensitivity.} \label{betabeam_comp} \end{figure} The ability of a low-energy Beta Beam to discover a finite value for $\sin^2 2\theta_{13}$ is compared in Fig.~\ref{betabeam_comp} with the corresponding $3\sigma$ sensitivities at a Neutrino Factory and high performance Superbeam. The leftmost limits of each of the bars in Fig.~\ref{betabeam_comp} show the statistical sensitivities, and the shaded regions within the bars show the degradation of the sensitivities due to irreducible experimental systematics, the effects of correlations, and the effects of false solutions in the three-flavor mixing parameter space. The rightmost limit of the bars therefore gives the expected sensitivities for each experiment. The sensitivity of the low--energy Beta Beam experiment is expected to be comparable to the corresponding Superbeam sensitivity. A Neutrino Factory would improve on the Beta Beam sensitivity by about a factor of 40. Combining low-energy Beta Beam results (the two measured rates) with Superbeam results would enable the impact of correlations and ambiguities to be reduced, which would potentially enable an improvement in the $\sin^2 2\theta_{13}$ sensitivity by a factor of 2--3 over the standalone results. Hence, low energy Beta Beams offer only a modest improvement in the $\sin^2 2\theta_{13}$ sensitivity beyond that achievable with a high--performance Superbeam, and this realization has led to the consideration of higher energy Beta Beams~\cite{jj-beta,Terranova:2004hu}. In particular, it has been proposed that the energies be increased by at least a factor of a few so that the neutrino and antineutrino energies are well above the Fermi motion region, which would enable useful spectral information to be extracted from the Beta Beam measurements. In addition, this would increase the signal rates (Table~\ref{tab:betabeam}), and if the energy were sufficiently high to result in significant matter effects, then it would be possible (if $\theta_{13}$ is sufficiently large) to use Beta Beams to determine the neutrino mass hierarchy.
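To see why raising $\gamma$ restores spectral information, recall that an ion with Lorentz factor $\gamma$ boosts a beta-decay neutrino of rest-frame energy $E^{\ast}$ up to a maximum on-axis laboratory energy of $\approx 2\gamma E^{\ast}$. The short Python sketch below evaluates this endpoint for the three scenarios described next; the rest-frame beta-decay endpoint energies used for $^6$He and $^{18}$Ne are approximate textbook values and are assumptions made here for illustration.
\begin{verbatim}
# Boosted beta-decay neutrino kinematics (illustrative sketch).
# Rest-frame endpoint energies in MeV; approximate values (assumption).
endpoint = {"6He": 3.5, "18Ne": 3.4}

scenarios = [("low energy (SPS)", 60),
             ("medium energy (Tevatron)", 350),
             ("high energy (LHC)", 1500)]

for label, gamma in scenarios:
    for ion, e0 in endpoint.items():
        e_max = 2.0 * gamma * e0 / 1000.0   # on-axis lab-frame endpoint, GeV
        print(f"{label:25s} gamma={gamma:4d} {ion:5s} "
              f"E_nu(max) ~ {e_max:5.2f} GeV")
\end{verbatim}
At $\gamma = 60$ the spectrum ends at a few hundred MeV, comparable to the Fermi-motion scale of the target nucleons, consistent with the observation above that the low energy measurements carry no useful spectral information; at $\gamma = 350$ and above, the spectrum lies well clear of that region.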
The particular scenarios that have been considered~\cite{jj-beta} are: \begin{description} \item{\textsl{Low Energy Beta Beam}:} This is the standard CERN scenario using the SPS for acceleration, and a 1~megaton water Cerenkov detector in the Fr\'{e}jus tunnel ($\gamma = 60$, $L = 130$~km). \item{\textsl{Medium Energy Beta Beam}:} This would require the Fermilab Tevatron (or equivalent) for acceleration, and a 1~megaton water Cerenkov detector in the Soudan mine ($\gamma = 350$, $L = 730$~km). \item{\textsl{High Energy Beta Beam}:} This would require the LHC for acceleration, with $\gamma = 1500$, $L = 3000$~km. \end{description} In all three cases, the running time is assumed to be 10~years. The improvement in statistical precision enabled by the higher energy Beta Beam scenarios is illustrated in Table~\ref{tab:betabeam} and Fig.~\ref{betabeam_fig}. \begin{figure}[thbp!] \includegraphics*[width=3.5in]{sec4-hep-ph0312068_fig14} \caption{(Color) Low-, Medium-, and High-Energy Beta Beam sensitivities. The estimated $1\sigma$, $2\sigma$, and $3\sigma$ contours are shown for the setups described in the text. See Ref.~\cite{jj-beta}.} \label{betabeam_fig} \end{figure} The figure shows, for the three scenarios, the 1$\sigma$, 2$\sigma$, and 3$\sigma$ contours in the ($\theta_{13},\delta$)--plane. Note that the expected sensitivity for the medium energy case with a ``small'' water Cerenkov detector is comparable to the low energy case with the megaton water Cerenkov detector. However, the medium energy sensitivity is dramatically improved with the much bigger detector. The further improvement obtained by going to LHC energies seems to be marginal. Given the likelihood that the LHC would not be available as a Beta Beam accelerator for a very long time, perhaps the most interesting scenario is the medium energy one. To understand the ability of medium energy Beta Beams to establish a finite value for $\theta_{13}$, determine the neutrino mass hierarchy, and search for \textsl{CP} violation in the lepton sector, the full analysis must be performed, taking care of all known systematic effects, and the impact of correlations and degeneracies. Although this full analysis has not yet been done, a step towards it has been made, and the results are encouraging. \begin{figure}[bthp!] \includegraphics*[width=3in]{sec4-exclu} \caption{Region where $\delta$ can be distinguished from $\delta=0^{\circ}$ or $\delta=180^{\circ}$ at the $99\%$ C.L. for the low energy Beta Beam (solid), the medium energy Beta Beam with a UNO-type detector of 400~kton (dashed) and with the same detector with a factor 10 smaller mass (dashed-dotted), and finally for the high energy Beta Beam (dotted) with a 40~kton tracking calorimeter. Figure from Ref.~\cite{jj-beta}.} \label{betabeam_cpv_fig} \end{figure} Figure~\ref{betabeam_cpv_fig} shows the region of the ($\theta_{13},\delta$)--plane within which $\sin\delta = 1$ (maximal \textsl{CP} violation) can be separated from $\sin\delta = 0$ (no \textsl{CP} violation) at the 99\% C.L. The medium energy setup is sensitive to maximal \textsl{CP} violation for values of $\theta_{13}$ exceeding $\sim 0.5$~degrees ($\sin^2 2\theta_{13} \sim 3 \times 10^{-4}$). This is within a factor of a few of the expected sensitivity that can be achieved at a Neutrino Factory.
It will be interesting to see if this calculated medium energy Beta Beam sensitivity is significantly degraded when the uncertainties on all the oscillation parameters and the systematic uncertainties on the neutrino cross sections, etc., are included in the calculation. \begin{figure}[bhtp!] \includegraphics*[width=3in]{sec4-sign} \caption{Regions where the true $\mathrm{sign}(\Delta m^2_{23})=+1$ can be measured at $99\%$ C.L. (i.e., no solution at this level of confidence exists for the opposite sign). The lines correspond to the medium energy Beta Beam with a 400~kton water Cerenkov (solid), a 40~kton detector (dashed), and to the high energy Beta Beam (dotted). Figure from Ref.~\cite{jj-beta}.} \label{betabeam_hierarchy_fig} \end{figure}% Finally, Fig.~\ref{betabeam_hierarchy_fig} shows, for the medium energy Beta Beam scenario, the region of the ($\theta_{13},\delta$)--plane within which the neutrino mass hierarchy can be determined. The smallest value of $\theta_{13}$ for which this can be accomplished is seen to be 2--3~degrees ($\sin^2 2\theta_{13} = 0.005$--$0.01$), which is perhaps a little better than with a Superbeam, but is not competitive with a Neutrino Factory.% \subsubsection{Other Variants} The present front-end scenario (Section~\ref{sec5-sub1}) is not completely optimized in either performance or cost. In this section we discuss some of the options that have already been studied briefly or that might be developed in future studies. Some variations we have considered are: \begin{itemize} \item Be absorbers in place of LiH absorbers \item Shorter buncher and phase rotator \item Shorter bunch train \item Different rf frequency \item Gas-filled cavities \item Quadrupole-based cooling channel \end{itemize} The configuration of LiH absorbers with Be windows could be simplified by substituting thicker Be windows as the end plates of the cavities, with a thickness chosen to make the total energy loss the same as that in the baseline foil + LiH absorber case. This would eliminate the need for thin Be windows. Cooling would be a bit less effective because of the greater multiple scattering in Be absorbers. An initial evaluation~\cite{NeufferRef} of a Be-only scenario showed somewhat less capture into the acceleration channel acceptance ($\thickapprox$15\% less). A scenario in which Be absorbers are initially installed and then upgraded later to more efficient LiH absorbers is, of course, also possible. The baseline scenario requires a roughly 110~m drift, a 51~m buncher and a 52~m high-voltage phase rotation section. These parameters have not been optimized. For comparison, we considered an example having only a 26~m phase rotation section~\cite{NeufferRef}. The shorter rotation section would be significantly less expensive since it is not only shorter but provides about 200~MV less high-gradient rf voltage. Initial evaluations indicate only small decreases in captured muons ($\thickapprox$10\%). The baseline case generates $\mu^{+}$ and $\mu^{-}$ bunch trains that are about 100~m long. These bunch trains are matched to the FS2 scenario requirements; in particular, they fit within the circumferences of the FS2 (and presently envisioned) accelerators and storage ring. However, other scenarios might make use of smaller circumference ring coolers, accelerators, and storage rings, and thus require shorter bunch trains. For example, a scenario with a 20~m drift, 20~m buncher and 20~m phase rotator has been explored~\cite{NeufferRef}. This produces a roughly 20~m long bunch train.
Although this shorter system would be much less expensive than the present roughly 200~m long system, an initial evaluation showed that the total number of captured muons was substantially reduced (by about 50\%). (On the other hand, a longer system, capturing longer bunch trains, might produce more muons, at a small incremental cost.) Both FS2 and our present scenario use 201.25~MHz rf as the baseline final operating frequency, because of the availability of rf components at that frequency and because it is a plausible optimum for large-aperture and high-gradient operation. Other baseline frequencies could be considered, e.g., scenarios at 50, 100, 300 or 400~MHz. Lower frequencies (larger bunches) may be desirable if the accelerator longitudinal motion requires larger phase-space buckets. Muons Inc. has an STTR grant to explore the use of hydrogen gas-filled rf cavities for muon cooling~\cite{MuonsIncRef}. This approach simplifies the cooling channel design by integrating the energy-loss material into the rf system. Moreover, it may be more effective in permitting high-gradient operation of the cavities. Such cavities could also be used in the cooling and phase-rotation (and possibly buncher) sections; an exploration with cost-performance optimization is planned. The transport and cooling system in the front-end scenario considered here uses high-field solenoids for focusing. A cooling system with similar performance parameters using large-aperture quadrupoles has also been examined~\cite{JohnstoneRef}, though a cost-performance comparison has not yet been made. \subsection{Neutrino Factory Front End\label{sec5-sub1}} The front end of the Neutrino Factory (the part of the facility between the target and the first linear accelerator) represented a large fraction of the total facility costs in FS2~\cite{fs2}. However, several recent developments suggest that a significantly less expensive front-end design may be possible: \begin{itemize} \item A new approach to bunching and phase rotation using the concept of adiabatic rf bunching~\cite{adiab1,adiab2,adiab3,adiab4,adiab5} eliminates the very expensive induction linacs used in FS2. \item For a moderate cost, the transverse acceptance of the accelerator chain could be doubled from its FS2 value. \item The increased acceptance diminished the demands on the transverse ionization cooling section and allowed the design of a simplified cooler with fewer components and reduced magnetic field strength. \end{itemize} We denote as ``Study 2a'' the simulations that have been made of the performance of this new front end, together with the new scheme for acceleration. The Monte Carlo simulations were performed with the code ICOOL~\cite{icool}. \begin{figure}[ptbh!] \includegraphics*[viewport=20 275 570 750]{sec5-sub1-ST2-study2a}% \caption{(Color) Comparison of the buncher concept used here with the bunching system used in FS2.}% \label{fig100}% \end{figure} The concept of the adiabatic buncher is compared with the system used in FS2 in Fig.~\ref{fig100}. The longitudinal phase space after the target is the same in both cases. Initially, there is a small spread in time, but a very large spread in energy. The target is followed by a drift space in both cases, where a strong correlation develops between time and energy. In FS2, the energy spread in the correlated beam was first flattened using a series of induction linacs. The induction linacs did an excellent job, reducing the final rms energy spread to 4.4\%.
The beam was then sent through a series of rf cavities for bunching, which increased the energy spread to $\approx8\%.$ In the new scheme, the correlated beam is first adiabatically bunched using a series of rf cavities with decreasing frequencies and increasing gradients. The beam is then phase rotated with a second string of rf cavities with decreasing frequencies and constant gradient. The final rms energy spread in the new design is 10.5\%. This spread is adequate for the new cooling channel. \begin{figure}[ptbh] \includegraphics[width=4.5in,clip]{sec5-sub1-d-b-r-c} \caption{(Color) Overall layout of the front-end.}% \label{fig101}% \end{figure} The overall layout of the new front--end design is shown in Fig.~\ref{fig101}. The first $\approx$12~m is used to capture pions produced in the target. The field here drops adiabatically from 20~T over the target down to 1.75~T. At the same time, the radial aperture of the beam pipe increases from 7.5~cm at the target up to 25~cm. Next comes $\approx$100~m for the pions to decay into muons and for the energy-time correlation to develop. The adiabatic bunching occupies the next $\approx$50~m and the phase rotation $\approx$50~m following that. Lastly, the channel has $\approx$80~m of ionization cooling. The total length of the new front end is 295~m. \begin{figure}[ptbh] \includegraphics[angle=90,width=5in]{sec5-sub1-magnetic-field}% \caption{Longitudinal field component $B_{z}$ on-axis along the Study 2a front end.}% \label{fig102}% \end{figure} The longitudinal field component on-axis is shown for the full front-end in Fig.~\ref{fig102}. The field falls very rapidly in the collection region to a value of 1.75~T. It keeps this value with very little ripple over the decay, buncher and rotator regions. After a short matching section, the 1.75~T field is changed adiabatically to the alternating field used in the cooler. The beam distributions used in the simulations were generated using MARS~\cite{mars1}. The distribution is calculated for a 24~GeV proton beam interacting with a Hg jet~\cite{target}. The jet is incident at an angle of 100~mrad to the solenoid axis, while the beam is incident at an angle of 67~mrad to the solenoid axis. An independent study showed that the resulting 33~mrad crossing angle gives near-peak acceptance for the produced pions. An examination of particles that were propagated to the end of the front-end channel shows that they have a peak initial longitudinal momentum of $\approx$300~MeV/c with a long high-energy tail, and a peak initial transverse momentum of $\approx$180~MeV/c. \begin{figure}[ptbh] \includegraphics{sec5-sub1-desired} \caption{(Color) Comparison of the capture region magnetic field used in the present simulation with that used in FS2.}% \label{fig103}% \end{figure} We used an improved axial field profile in the capture region that increased the final number of muons per proton in the accelerator acceptance by $\approx$10\%. The new axial field profile (marked KP) is compared in Fig.~\ref{fig103} with the profile used in FS2. Figure~\ref{fig104} shows the actual coil configuration in the collection region. The end of the 60~cm long target region is defined as $z = 0.$ The three small radius coils near $z=0$ are Cu coils, while the others are superconducting. The left axis shows the error field on-axis compared with the desired field profile. We see that the peak error field is $\approx0.07$~T.
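As a consistency check on the capture-section parameters quoted above, note that an adiabatic taper transports an approximately constant magnetic flux, $B r^2 \approx \text{const}$, and conserves the adiabatic invariant $p_T^2/B$, so the taper trades transverse momentum for beam size. A minimal Python sketch, using only the numbers given above:
\begin{verbatim}
import math

# Adiabatic capture taper: B * r^2 is approximately conserved.
B_i, r_i = 20.0, 7.5    # T, cm: field and beam-pipe radius over the target
B_f, r_f = 1.75, 25.0   # T, cm: values at the end of the taper

print(B_i * r_i**2, B_f * r_f**2)   # ~1125 vs ~1094 T cm^2: equal to a few %
print(math.sqrt(B_f / B_i))         # pT reduction factor, ~0.30
\end{verbatim}
The two flux values agree to a few percent, and the invariant $p_T^2/B$ implies that the transverse momenta of the captured pions are reduced by a factor of $\approx 0.3$ in exchange for the larger beam radius.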
\begin{figure}[ptbh] \includegraphics{sec5-sub1-diff_F34}\caption{(Color) Actual coil configuration in the collection region. The left axis shows the error field on-axis compared with the optimal capture field profile, denoted KP in Fig.~\ref{fig103}.}% \label{fig104}% \end{figure} Figure~\ref{fig105} shows a MARS calculation of the absorbed radiation dose in the collection region. The peak deposition in the superconducting coils is $\approx1$~MGy/yr for a 1~MW beam running for 1~Snowmass year of $10^{7}$~s. Assuming a lifetime dose for the insulation of 100~MGy, there should be no problem with radiation damage in the coils. \begin{figure}[ptbh] \includegraphics[scale=1.75]{sec5-sub1-dose} \caption{(Color) MARS calculation of the absorbed radiation dose in the collection region.}% \label{fig105}% \end{figure} Two cells of the buncher lattice are shown schematically in Fig.~\ref{fig106}. Most of the 75~cm cell length is occupied by the 50-cm-long rf cavity. The cavity iris is covered with a Be window. The limiting radial aperture in the cell is determined by the 25~cm radius of the window. The 50-cm-long solenoid was placed outside the rf cavity in order to decrease the magnetic field ripple on the axis and minimize beam losses from momentum stop bands. The buncher section contains 27~cavities with 13~discrete frequencies and gradients varying from 5 to 10~MV/m. The frequencies decrease from 333 to 234~MHz in the buncher region. The cavities are not equally spaced. Fewer cavities are used at the beginning where the required gradients are small. Figure~\ref{fig108} shows the correlated longitudinal phase space and the bunching produced by the buncher. \begin{figure}[ptbh] \includegraphics[width=4in]{sec5-sub1-latt-rot} \caption{(Color) Schematic of two cells of the buncher or phase rotator section.}% \label{fig106}% \end{figure} The rotator cell is very similar to the buncher cell. The major difference is the use of tapered Be windows on the cavities because of the higher rf gradient. There are 72~cavities in the rotator region, with 15~different frequencies. The frequencies decrease from 232 to 201~MHz in this part of the front end. All cavities have a gradient of 12.5~MV/m. The energy spread in the beam is significantly reduced. \begin{figure}[ptbh] \includegraphics[width=3in,angle=90]{sec5-sub1-endbuncher-p-vs-z}% \caption{(Color) Longitudinal phase space after the buncher section.}% \label{fig108}% \end{figure} The cooling channel was designed to have a relatively flat transverse beta function with a magnitude of about 80~cm. One cell of the channel is shown in Fig.~\ref{fig111}. \begin{figure}[ptbh] \includegraphics[width=4in]{sec5-sub1-latt-cool} \caption{(Color) Schematic of one cell of the cooling section.}% \label{fig111}% \end{figure} Most of the 150~cm cell length is taken up by the 50-cm-long rf cavities. The cavities have a frequency of 201.25~MHz and a gradient of 15.25~MV/m. A novel aspect of this design comes from using the windows on the rf cavity as the cooling absorbers. This is possible because the near constant $\beta$ function does not significantly increase the emittance heating at the window location. The window consists of a 1~cm thickness of LiH with $25~\mu$m thick Be coatings. (The Be will, in turn, have a thin coating of TiN to prevent multipactoring~\cite{multipac}.) The alternating 2.8~T solenoidal field is produced with one solenoid per half cell, located between the rf cavities.
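The cooling performance quoted below can be checked against the standard equilibrium-emittance estimate for ionization cooling, $\epsilon_{T}^{\text{equil.}} \approx \beta_\perp E_s^2 / (2 \beta\, m_\mu c^2\, X_0\, \langle dE/ds\rangle)$. The Python sketch below evaluates this for the 80~cm $\beta$ function of this channel; the LiH material constants and the 220~MeV/c reference momentum are approximate values assumed here for illustration.
\begin{verbatim}
import math

# Equilibrium transverse emittance for ionization cooling:
#   eps_eq = beta_perp * E_s^2 / (2 * beta * m_mu * X0 * dE/ds)
beta_perp = 0.80    # m, transverse beta function in the cooling cell
E_s       = 13.6    # MeV, multiple-scattering constant
m_mu      = 105.7   # MeV, muon mass
p         = 220.0   # MeV/c, typical muon momentum (assumption)
beta      = p / math.hypot(p, m_mu)   # relativistic beta, ~0.9
X0        = 0.97    # m, LiH radiation length (approximate)
dEds      = 160.0   # MeV/m, ionization loss in LiH (approximate)

eps_eq = beta_perp * E_s**2 / (2.0 * beta * m_mu * X0 * dEds)
print(f"eps_eq ~ {1e3 * eps_eq:.1f} mm rad")   # ~5 mm rad
\end{verbatim}
The result, $\approx 5$~mm~rad, is consistent with the $\epsilon_{T}^{\text{equil.}}\approx 5.5$~mm~rad quoted below.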
Figure~\ref{fig111a} shows the longitudinal phase space at the end of the cooling section. The reduction in normalized transverse emittance along the cooling channel is shown in the left plot of Fig.~\ref{fig112} and the right plot shows the normalized longitudinal emittance. \begin{figure}[ptbh!] \mbox{ \includegraphics[width=0.35\linewidth,angle=90]{sec5-sub1-endcool-p-vs-z}% \includegraphics[width=0.35\linewidth,angle=90]{sec5-sub1-density-long}% }\caption{(Color) Longitudinal phase space at the end of the channel.}% \label{fig111a}% \end{figure} \begin{figure}[ptbh] \mbox{ \includegraphics[angle=90,width=3in]{sec5-sub1-emitt-vs-z} \includegraphics[angle=90,width=3in]{sec5-sub1-emitl-vs-z}} \caption{(Color) Normalized transverse emittance (left) and longitudinal emittance (right) along the front-end for a momentum cut $0.1 \leq p \leq 0.3$~GeV/c.}% \label{fig112}% \end{figure} The channel produces a final value of $\epsilon_{T} = 7.1$~mm rad, that is, more than a factor of two reduction from the initial value. The equilibrium value for a LiH absorber with an 80~cm $\beta$ function is about $\epsilon_{T}^{\text{equil.}}\approx5.5$~mm rad. Figure~\ref{fig113} shows the muons per proton that fit into the accelerator transverse normalized acceptance of $A_{T}=30$~mm rad and normalized longitudinal acceptance of $A_{L}=150$~mm. The 80-m-long cooling channel raises this quantity by about a factor of 1.7. The current best value is $0.170\pm0.006.$ This is the same value obtained in FS2. Thus, we have achieved the identical performance at the entrance to the accelerator as FS2, but with a significantly simpler, shorter, and presumably less expensive channel design. In addition, unlike FS2, this channel transmits both signs of muons produced at the target. With appropriate modifications to the transport line going into the storage ring, this design could deliver both (time tagged) neutrinos and antineutrinos to the detector. \begin{figure}[ptbh!] \includegraphics[angle=90,width=4in]{sec5-sub1-number-vs-z} \caption{(Color) The muons per proton into the accelerator transverse normalized acceptance of $A_{T}=30$~mm rad and normalized longitudinal acceptance of $A_{L}=150$~mm for a momentum cut $0.1 \leq p \leq 0.3$~GeV/c.}% \label{fig113}% \end{figure} The beam at the end of the cooling section consists of a train of bunches (one sign) with a varying population of muons in each one; this is shown in Fig.~\ref{fig114}. \begin{figure}[tpbh!] \includegraphics[width=4in]{sec5-sub1-bunches-scott} \caption{Bunch structure of the beam delivered to the accelerator transverse normalized acceptance of $A_{T}=30$~mm rad and normalized longitudinal acceptance of $A_{L}=150$~mm for a momentum cut $0.1 \leq p \leq 0.3$~GeV/c.}% \label{fig114}% \end{figure} Figure~\ref{fig115} depicts the longitudinal phase space of one of the bunches and Fig.~\ref{fig116} shows a few interleaved $\mu^{+}$ and $\mu^{-}$ bunches exiting the cooling section. \begin{figure}[ptbh!] \includegraphics[width=4.5in]{sec5-sub1-long-scott} \caption{Longitudinal phase space of one bunch in the train at the end of the cooling section. The open circles are all the particles that reach the end of the channel and the filled circles are particles within the accelerator transverse normalized acceptance of $A_{T}=30$~mm rad and normalized longitudinal acceptance of $A_{L}=150$~mm for a momentum cut $0.1 \leq p \leq 0.3$~GeV/c.}% \label{fig115}% \end{figure} \begin{figure}[bpth!]
\includegraphics[width=4.5in]{sec5-sub1-bunch-pm} \caption{(Color) A sample of the train of interleaved $\mu^{+}$~(red) and $\mu^{-}$~(blue) bunches exiting the cooling section.}% \label{fig116}% \end{figure} \printfigures \subsection{Considerations on a Beta Beam Facility in the U.S. \label{sec5-sub2-1}} Motivated by the recent suggestion that a higher energy Beta Beam facility might have considerable scientific merit~\cite{jj-beta}, we consider here some of the possibilities of a U.S.-based scenario. There was neither the time nor the effort available to carry out a study equivalent to the baseline scenario prepared by the European Beta Beam Study Group, and we make no pretense of having done so. Nonetheless, it was felt to be interesting to look briefly at U.S. options that have the potential for higher energy beams than are likely to be available at CERN in the foreseeable future. In particular, an ``intermediate'' beam energy of $\gamma =350,$ which corresponds well to the top energy available at the Tevatron, is expected to give much better sensitivity than the CERN low-energy option to both \textsl{CP} violation and the mass hierarchy. The two possible U.S. machines to consider are RHIC at BNL and the Tevatron at Fermilab. Of these, the Tevatron looks more attractive due to its higher energy reach. (RHIC has a top energy comparable to the SPS at CERN.) Both BNL and Fermilab are interested in Superbeams, which complement the Beta Beam in terms of physics reach, and both Laboratories are pursuing the possibility of obtaining a high-intensity proton driver, a prerequisite for a Superbeam and helpful, but not critical, for a Beta Beam facility. It is worth noting, of course, that an intermediate-energy Beta Beam facility would require a very large decay ring to store a $\gamma=350$ beam. This would add substantially to the cost of implementing such a facility in the U.S. \subsubsection{Estimate of Decay Losses} It is important to know how many ions survive the acceleration process without decaying. This can be calculated as \begin{equation} N=N_{0}e^{-\frac{1}{\tau }\int_{0}^{T}\frac{dt^{\prime }}{\gamma (t^{\prime })}} \label{eqn:one} \end{equation} where $\tau$ is the decay time at rest. If the ramp rate is constant, the integral (which represents the time elapsed in the rest frame) becomes \begin{equation} \int_{0}^{T}\frac{dt^{\prime }}{\gamma (t^{\prime })}=\frac{T\ln (\gamma_{1}/\gamma_{0})}{\gamma_{1}-\gamma_{0}}. \end{equation} Here, $T$ is the ramp time, $\gamma_{0}$ the initial energy, and $\gamma_{1}$ the final energy. In most accelerators, the energy changes by an order of magnitude ($\gamma_{1}\approx 10\gamma_{0}$), so \begin{equation} \frac{T\ln (\gamma_{1}/\gamma_{0})}{\gamma_{1}-\gamma_{0}}\approx \frac{T}{0.4\gamma_{1}} \end{equation} Substituting this back into Eq.~(\ref{eqn:one}) yields \begin{equation} N\approx N_{0}e^{-\frac{T}{0.4\gamma_{1}\tau }} \label{eqn:two} \end{equation} which is just the standard decay formula with the decay time calculated at an ``effective'' or average energy equal to 40\% of the top energy. In reality, the ramp rate is not constant. In particular, the start of the ramp is usually slower, leading to a larger total decay loss than the above estimate would predict. Nonetheless, Eq.~(\ref{eqn:two}) gives a useful first approximation. If a more accurate result is needed and the ramp function is known, losses can be calculated directly from Eq.~(\ref{eqn:one}).
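As an illustration, the Python sketch below applies Eq.~(\ref{eqn:two}) to a subset of the machines considered, using the $\gamma_{\text{max}}$ values derived below (Table~\ref{t:gammas}); the rest-frame half-lives of $^6$He and $^{18}$Ne ($\approx 0.81$~s and $\approx 1.67$~s, with $\tau = t_{1/2}/\ln 2$) are approximate values assumed here. It reproduces the losses listed in Table~\ref{t:losses}.
\begin{verbatim}
import math

half_life = {"6He": 0.807, "18Ne": 1.672}              # s, approximate
tau = {ion: t / math.log(2.0) for ion, t in half_life.items()}

# (machine, ramp time T [s], gamma_1 for 6He, gamma_1 for 18Ne);
# RHIC uses the unimproved 100 s ramp.
machines = [("FNAL Booster",  0.03,   3.3,   5.4),
            ("Main Injector", 0.7,   64.0,  89.0),
            ("Tevatron",     17.0,  349.0, 581.0),
            ("RHIC",        100.0,   89.0, 149.0)]

for name, T, g_he, g_ne in machines:
    for ion, g in (("6He", g_he), ("18Ne", g_ne)):
        loss = 1.0 - math.exp(-T / (0.4 * g * tau[ion]))   # Eq. (eqn:two)
        print(f"{name:14s} {ion:5s} loss ~ {100.0 * loss:4.1f}%")
\end{verbatim}
For the Tevatron this gives losses of about 10\% ($^6$He) and 3\% ($^{18}$Ne), and for RHIC about 91\% and 50\%, in agreement with Table~\ref{t:losses}.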
To calculate the Lorentz gamma of the ions, based on the corresponding gamma of full energy protons in the same machine, we recall that the magnetic rigidity is the same for both particles: \begin{equation} B\rho =\frac{p}{q}\Big|_{\mathrm{proton}}=\frac{p}{q}\Big|_{\mathrm{ion}} \end{equation} which, since the momentum is \begin{equation} p=m_{0}c\beta \gamma =m_{0}c\sqrt{\gamma ^{2}-1} \end{equation} gives \begin{equation} \gamma _{\mathrm{ion}}=\sqrt{1+\left( \gamma _{\mathrm{proton}}^{2}-1\right) \left( q/M\right) ^{2}} \end{equation} where $q/M$ is the charge-to-mass ratio of the ion, expressed in units of the proton's charge-to-mass ratio. Table~\ref{t:gammas} lists the U.S. machines relevant to Beta Beam acceleration, and their maximum gamma $(\gamma_{\text{max}})$ for protons, $^{6}$He and $^{18}$Ne. Table~\ref{t:losses} gives the approximate amount of beam loss due to beta decay during acceleration. Due to its long ramp time, it is clear that RHIC would not be very efficient at accelerating the ions in question. The Tevatron, on the other hand, would be relatively efficient, as it was originally designed as a fixed-target machine and hence has a reasonably fast ramp. The Tevatron has the additional feature of being, for now, the world's highest energy machine. Assuming that the LHC will be busy with collider physics for the foreseeable future, the Tevatron would thus seem to be the obvious candidate for generating an ``intermediate energy'' Beta Beam. There remains one key question, however---we must assess how the Tevatron's superconducting magnets would be affected by the decay products of the Beta Beam. This, in turn, determines how many ions the machine could accelerate to top energy in a single batch. If there is continued interest in exploring the possibility of using the Tevatron for an intermediate energy Beta Beam facility, this issue must be studied. \begin{table}[tbph!] \caption{Parameters of U.S. machines that could potentially be used to accelerate ions for a Beta Beam.} \label{t:gammas} \begin{ruledtabular} \begin{tabular}{ccccc} Machine &Proton kinetic energy (GeV)&$\gamma (p)$&$\gamma (^{6}\mathrm{He}^{2+})$&$\gamma (^{18}\mathrm{Ne}^{10+})$ \\\hline FNAL Booster & 8 & 9.5 & 3.3 & 5.4 \\ Main Injector & 150 & 161 & 64 & 89 \\ Tevatron & 980 & 1045 & 349 & 581 \\ \hline\hline BNL Booster & 2 & 3.1 & 1.4 & 1.9 \\ AGS & 30 & 34 & 11 & 19 \\ RHIC & 250 & 268 & 89 & 149 \\ \end{tabular} \end{ruledtabular} \end{table} \begin{table}[tbp!] \caption{Expected losses during acceleration, calculated using Eq.~(\ref{eqn:two}). For RHIC, the improved values in parentheses would require a modest modification to the power supplies.} \label{t:losses} \begin{ruledtabular} \begin{tabular}{cccc} Machine&Ramp time (s)&$^{6}\mathrm{He}^{2+}$ loss (\%) & $^{18}\mathrm{Ne}^{10+}$ loss (\%) \\ \hline FNAL Booster & 0.03 & 2 & 1 \\ Main Injector & 0.7 & 2 & 1 \\ Tevatron & 17 & 10 & 3 \\ \hline\hline BNL Booster & 0.1 & 14 & 5 \\ AGS & 0.5 & 9 & 3 \\ RHIC & 100 (40) & 91 (62) & 50 (24) \\ \end{tabular} \end{ruledtabular} \end{table} \subsubsection{Estimate of Power Deposition} The total deposited power from decay products per unit machine length can be written \begin{equation} P=\frac{-\dot{N}E_{\mathrm{kin}}}{L}=\frac{N}{\gamma \tau }\frac{E_{0}(\gamma -1)}{L} \end{equation} where $\dot{N}$ is the number of decays per unit of time, $\tau$ is the decay time at rest, and $E_{0}$ is the rest energy.
For high energies (${\gamma\to \infty}$), the deposited power $P\approx\frac{NE_{0}}{L \tau}$ is independent of gamma, depending only on the number of ions per unit machine length. To obtain the time-averaged power, one must multiply by the duty factor $f$ (the fraction of time with beam in the machine): \begin{equation} \left\langle P\right\rangle \approx f\frac{NE_{0}}{L \tau}. \end{equation} For the Tevatron, this equation can be used to estimate the number of ions that would generate 1~W/m from decay losses, which is about $1\times 10^{13}$ for both types of ions. For the lower energy machines, supplying the Tevatron with this intensity would yield much lower power deposition, even though their circumferences are smaller. This is because their duty factor (ramp time divided by Tevatron cycle time) is very small. Anticipated levels are about 0.05~W/m in the Main Injector, and 0.03~W/m in the Booster, based on the simplified formula above. \subsubsection{Design of Combined-Function Superconducting Magnet for FFAGs} An initial, first-cut design of a superconducting combined-function (dipole--quadrupole) magnet has been developed~\cite{CaspiRef} and is outlined here. The design is for one of the QD magnets requiring the highest field and gradient. The parameters of the QD cell are shown in Table~\ref{qdcell}, \begin{table}[htbp!] \caption{Parameters of the QD cell.} \label{qdcell} \begin{ruledtabular} \begin{tabular*}{10cm}{cc} $E_{\text{min}}$ (GeV)& 10\\ $E_{\text{max}}$ (GeV) &20\\ $L_0$ (m) &2\\ $L_q$ (m) &0.5\\ Type & QD\\ $L$ (m)&1.762\\ $r$ (m)& 18.4\\ $X_0$ (mm)&1.148\\ $R$ (cm)&10.3756\\ $B_0$ (T)&2.7192\\ $B_1$ (T/m)&-15.495\\ \end{tabular*} \end{ruledtabular} \end{table} where $L_0$ is the length of the long drift between the QF magnets, $L_q$ is the length of the short drift between QF and QD magnets, $L$ is the length of the reference orbit inside the magnet, $r$ is the radius of curvature of the reference orbit, $X_0$ is the displacement of the center of the magnet from the reference orbit, $R$ is the radius of the magnet bore, $B_0$ is the vertical magnetic field at the reference orbit, and $B_1$ is the derivative of the vertical magnetic field at the reference orbit. The magnet design is based on a cosine-theta configuration with two double layers for each function. The quadrupole coil is located within the dipole coil and both coils are assembled using key-and-bladder technology. All coils are made with the same Nb--Ti cable, capable of generating the operating dipole field and gradient with about the same current of 1800~A (a single power supply is thus possible with a bit of fine tuning). The maximum central dipole field and gradient at short sample are 4.1~T and 26~T/m, as compared with the requirements of 2.7~T and 15.4~T/m, respectively. At this early design stage, excess margin is left for safety and perhaps a field-rise in the magnet end region. The maximum azimuthal forces required for magnet pre-stress are of the order of 1~MN/m (assuming maximum safety). The conductor strand size and cable parameters common to both dipole and quadrupole are listed in Table~\ref{nbti}. \begin{table}[htbp!] \caption{Nb--Ti conductor for dipole and quadrupole coils.\label{nbti}} \begin{ruledtabular} \begin{tabular}{lc} Strand diameter (mm) &0.6477\\ Cable width, bare (mm) & 6.4\\ Cable thickness, insulated (mm) &1.35\\ Keystone angle (deg.)
&0.6814\\ Conductor type &Nb--Ti\\ Cu:SC ratio & 1.8:1\\ Current density (at 5~T, 4.2~K) (A/{mm}$^2$)& 2850\\ Number of strands& 20\\ \end{tabular} \end{ruledtabular} \end{table} The initial cross sections of both dipole and quadrupole were designed to give less than one part in one hundred units of systematic multipole errors at a radius of 70~mm. It is straightforward to readjust the design to cancel the end-field multipoles as proposed in Section~\ref{sec5-sub2}. The combined cross section is shown in Fig.~\ref{first-quad} for one quadrant. Figure~\ref{first-dip} shows the combined dipole-quadrupole magnetic flux. The calculated mid-plane field profile of the magnet (plotted in Fig.~\ref{first-field}) clearly shows the superposed dipole and quadrupole fields. A mechanical layout for the magnet has also been developed, as shown in Figs.~\ref{nbti-struc}, \ref{nbti-explo}, and \ref{nbti-close}. \begin{figure}[hbtp!] \includegraphics*[width=4in]{sec5-sub2-1-cfn03} \caption{First quadrant of the combined magnet cross section.} \label{first-quad} \end{figure} \begin{figure}[hbtp!] \includegraphics*[width=4in]{sec5-sub2-1-cfn04a} \caption{(Color) Flux plot corresponding to a dipole field of 2.7~T and a gradient of 15~T/m.} \label{first-dip} \end{figure} \begin{figure}[hbtp!] \includegraphics[width=4in]{sec5-sub2-1-cfn04b} \caption{(Color) $B_y$ along the mid-plane showing a central 2.7~T field and a 15~T/m gradient.} \label{first-field} \end{figure} \begin{table}[htbp!] \caption{Coil current parameters.\label{tab:current}} \begin{ruledtabular} \begin{tabular}{ccc} Current density (A/{mm}$^2$)&Central field (T)&Gradient (T/m)\\ \hline 730&2.5&15.4\\ 800&2.7&17\\ 1220 (maximum)&4.1&26\\ \end{tabular} \end{ruledtabular} \end{table} \begin{figure}[hbtp!] \includegraphics[width=4in]{sec5-sub2-1-cfn06a} \caption{(Color) End view of magnet structure.} \label{nbti-struc} \end{figure} \begin{figure}[hbtp!] \includegraphics[width=4in]{sec5-sub2-1-cfn07a} \caption{(Color) Exploded view showing the two quadrupole layers (dipole coils not shown).} \label{nbti-explo} \end{figure} \begin{figure}[hbtp!] \includegraphics[width=4in]{sec5-sub2-1-cfn07b} \caption{(Color) Close-up of quadrupole coil return-end windings.} \label{nbti-close} \end{figure} \subsection{Neutrino Factory Acceleration\label{sec5-sub2}} \begin{table}[tbp] \caption{Acceleration system requirements.} \label{tab:acc:pars}% \begin{ruledtabular} \begin{tabular}{lr} Initial kinetic energy (MeV)&187\\ Final total energy (GeV)&20 \\ Normalized transverse acceptance (mm)&30\\ Normalized longitudinal acceptance (mm)&150\\ Bunching frequency (MHz)&201.25\\ Maximum muons per bunch&$1.1\times10^{11}$\\ Muons per bunch train per sign&$3.0\times10^{12}$\\ Bunches in train&89\\ Average repetition rate (Hz)&15\\ Minimum time between pulses (ms)&20 \end{tabular} \end{ruledtabular} \end{table} The acceleration system takes the beam from the end of the cooling channel and accelerates it to the energy required for the decay ring. Table~\ref{tab:acc:pars} gives the design parameters of the acceleration system. Acceptance is defined such that if $A_\bot$ is the transverse acceptance and $\beta_\bot$ is the beta function, then the maximum particle displacement (of the particles we transmit) from the reference orbit is $\sqrt{\beta_\bot A_\bot mc/p}$, where $p$ is the particle's total momentum, $m$ is the particle's rest mass, and $c$ is the speed of light. The acceleration system is able to accelerate bunch trains of both signs simultaneously.
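As a concrete instance of this acceptance definition, the Python sketch below computes the momentum corresponding to the 187~MeV initial kinetic energy of Table~\ref{tab:acc:pars} and the resulting maximum displacement; the $\beta_\perp = 1.5$~m beta function used is an assumed, illustrative value, not a design parameter.
\begin{verbatim}
import math

m_mu = 105.7                  # MeV/c^2, so m*c = 105.7 MeV/c
A_T  = 0.030                  # m rad, normalized transverse acceptance
KE   = 187.0                  # MeV, initial kinetic energy
p    = math.sqrt((KE + m_mu)**2 - m_mu**2)   # ~273 MeV/c

beta_perp = 1.5               # m, illustrative beta function (assumption)
x_max = math.sqrt(beta_perp * A_T * m_mu / p)
print(f"p = {p:.0f} MeV/c, x_max = {100.0 * x_max:.1f} cm")
\end{verbatim}
The resulting $x_{\max} \approx 13$~cm is comparable to the FFAG magnet aperture radii of 10--15~cm listed in Table~\ref{tab:acc:ffag}, showing how the large acceptance drives the magnet apertures.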
To reduce costs, the RLA acceleration systems from FS2 will be replaced, as much as possible, by Fixed-Field Alternating Gradient (FFAG) accelerators. \subsubsection{Initial Parameter Sets} \begin{table}[tbp] \caption{Parameters for FFAG lattices. See Fig.~\ref{fig:acc:ffaggeom} to understand the signs of the parameters.} \label{tab:acc:ffag} \begin{ruledtabular} \begin{tabular}{lrrrr} Maximum energy gain per cavity (MeV)&\multicolumn{4}{c}{7.5}\\ Stored energy per cavity (J)&\multicolumn{4}{c}{368}\\ Cells without cavities&\multicolumn{4}{c}{8}\\ RF drift length (m)&\multicolumn{4}{c}{2}\\ Drift length between quadrupoles (m)&\multicolumn{4}{c}{0.5}\\ Initial total energy (GeV)&\multicolumn{2}{c}{5}&\multicolumn{2}{c}{10}\\ Final total energy (GeV)&\multicolumn{2}{c}{10}&\multicolumn{2}{c}{20}\\ Number of cells&\multicolumn{2}{c}{90}&\multicolumn{2}{c}{105}\\ Magnet type&\multicolumn{1}{c}{Defocusing}&\multicolumn{1}{c}{Focusing}& \multicolumn{1}{c}{Defocusing}&\multicolumn{1}{c}{Focusing}\\ Magnet length (m)&1.612338&1.065600&1.762347&1.275747\\ Reference orbit radius of curvature (m)&15.2740&-59.6174&18.4002&-70.9958\\ Magnet center offset from reference orbit (mm)&-1.573&7.667&1.148&8.745\\ Magnet aperture radius (cm)&14.0916&15.2628&10.3756&12.6256\\ Field on reference orbit (T)&1.63774&-0.41959&2.71917&-0.70474\\ Field gradient (T/m)&-9.1883&8.1768&-15.4948&12.5874\\ \end{tabular} \end{ruledtabular} \end{table} \begin{figure}[htbp] \includegraphics[width=0.65\textwidth]{sec5-sub2-TripGeometry-old} \caption{(Color) Geometry of the Triplet Lattice. The ``magnet center offset from reference orbit'' listed in Table~\ref{tab:acc:ffag} is positive for both magnets in this diagram.} \label{fig:acc:ffaggeom} \end{figure} Based on an earlier version of the cost optimization process described below, it was decided that two factor-of-two FFAG stages would be used: one from 5 to 10~GeV, the other from 10 to 20~GeV total energy. Triplet lattices were chosen for their good longitudinal performance and the extensive study that had been done on them. The parameters that we adopted are given in Table~\ref{tab:acc:ffag}. The 201.25~MHz cavities were taken to be single-cell superconducting cavities. The energy gain per rf cavity was chosen based on the gradient already achieved in the cavity studied at Cornell, namely 10~MV/m at 4.2~K (11~MV/m has been achieved at 2.5~K)~\cite{pac03:1309}. This is a very conservative number, as there is every reason to believe that improved sputtering techniques will allow the cavity to achieve a gradient of 15~MV/m or higher. Energy gain and stored energy are computed by scaling from the values for the 300~mm aperture FS2 cavities~\cite{fs2,pac03:1309}. With the beam intensity given in Table~\ref{tab:acc:pars}, and both signs of muons, about 16\% of the stored energy will be extracted from the cavities in the 5--10~GeV FFAG, and about 27\% in the 10--20~GeV FFAG. While this may seem substantial, it is easily handled. To keep the average voltage at 7.5~MV per cavity, one need only increase the initial voltage to 7.8~MV for the 5--10~GeV FFAG and 8.1~MV for the 10--20~GeV FFAG. The most important effect is a differential acceleration between the head and tail of the bunch train, which is about 1\% in both cases. This may be at least partially correctable by a phase offset between the cavity and the bunch train.
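The beam-loading numbers quoted above can be reproduced with a short back-of-the-envelope script. The sketch below is a rough reconstruction rather than the design calculation: it assumes that the full train of both muon signs passes every cavity once per turn, that cavity voltage scales as the square root of the stored energy, and that the average gain can be approximated by the mean of the initial and final voltages.
\begin{verbatim}
import math

E_CHARGE = 1.602e-19   # C
N_MUONS = 2 * 3.0e12   # both signs, muons per bunch train (Table above)
U_STORED = 368.0       # J stored per cavity
V_GAIN = 7.5e6         # V, average energy gain per cavity per pass

def loading(delta_e_gev, n_cells, n_empty=8):
    """Extracted-energy fraction and initial voltage for a 7.5 MV average."""
    n_cavities = n_cells - n_empty
    passes = delta_e_gev * 1e9 / (n_cavities * V_GAIN)
    extracted = N_MUONS * E_CHARGE * V_GAIN * passes   # J per cavity
    frac = extracted / U_STORED
    v0_mv = 2 * 7.5 / (1 + math.sqrt(1 - frac))        # V ~ sqrt(U)
    return frac, v0_mv

for label, d_e, cells in (("5-10 GeV", 5.0, 90), ("10-20 GeV", 10.0, 105)):
    frac, v0 = loading(d_e, cells)
    print(f"{label}: {100 * frac:.0f}% extracted, initial voltage {v0:.1f} MV")
\end{verbatim}
Under these assumptions the script returns the 16\% and 27\% extraction fractions and the 7.8 and 8.1~MV initial voltages quoted above.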
\subsubsection{Low Energy Acceleration} Based on cost considerations (see Section~\ref{sec5-sub2}), we have chosen not to use FFAGs below 5~GeV total energy. Therefore, we must provide alternative acceleration up to that point. As in FS2, we use a linac from the lowest energies to 1.5~GeV, followed by a recirculating linear accelerator (RLA). The linac turns out to be strongly constrained by the transverse acceptance. In FS2, there were three types of cryomodules, containing one, two, and four cavities, respectively. With our larger acceptance, the cryomodules from FS2 would require the beam to have a momentum of at least 420~MeV/c, 672~MeV/c, and 1783~MeV/c, respectively. Note that the lowest of these momenta is much higher than the average momentum in the cooling channel, which is about 220~MeV/c. Thus, we need to make adjustments to the FS2 design to be able to accelerate this larger beam. \begin{table}[tbp] \caption{Linac cryomodule structure. Numbers are lengths in m.\label{tab:acc:cryo}} \begin{ruledtabular} \begin{tabular}{rlrlrl} \multicolumn{2}{c}{Cryostat I}&\multicolumn{2}{c}{Cryostat II}&\multicolumn{2}{c}{Cryostat III}\\ \hline End to solenoid&0.25&To solenoid&0.25&To solenoid&0.25\\ Solenoid&1.50&Solenoid&1.50&Solenoid&1.50\\ Input coupler&0.50&Input coupler&0.50&Input coupler&0.50\\ Cavity&0.75&Cavity&1.50&Cavity&1.50\\ To end&0.25&Input coupler&0.50&Between cavities&0.75\\ \cline{1-2} Total&3.25&To end&0.25&Cavity&1.50\\ \cline{3-4} &&Total&4.50&Input coupler&0.50\\ &&&&To end&0.25\\ \cline{5-6} &&&&Total&6.75 \end{tabular} \end{ruledtabular} \end{table} In particular, to increase the acceptance, we must reduce the lengths of the cryomodules. First, we construct a very short cryomodule by using a single one-cell cavity instead of the two-cell cavities in the FS2 cryomodules. Not only does this shorten the cavity itself, but it also eliminates one of the input couplers. Second, we remove 50~cm between the solenoid and the input coupler. We intend to run the cavities with up to 0.1~T on them \cite{Ono99}; this is acceptable provided the cavities are cooled down before the magnets are powered. The field profile of the solenoids shown in FS2 indicates that the iron shield on the solenoids is sufficient to bring the field down to that level even immediately adjacent to the solenoid shield. Finally, the FS2 cryomodules left 75~cm for the end of the cryostat; we have reduced this to 50~cm. Together, these changes permit a total length for the first module type of 3.25~m. Table~\ref{tab:acc:cryo} shows the dimensions of this cryostat. The two shortest cryostats from FS2 have been adjusted to meet these specifications; in addition, for the ``intermediate'' cryostat, the spacing between the cavities was reduced to 75~cm, under the assumptions that the cavities will be allowed to couple weakly and that the entire module will be tuned appropriately to take this into account. \begin{table}[tbp] \caption{Linac cryomodule parameters.\label{tab:acc:linac}} \begin{ruledtabular} \begin{tabular}{lrrr} &{Cryo I}&{Cryo II}&{Cryo III}\\ \hline Length (m)&3.25&4.50&6.75\\ Minimum allowed momentum (MeV/c)&273&378&567\\ Number of modules&18&12&23\\ Cells per cavity&1&2&2\\ Cavities per module&1&1&2\\ Maximum energy gain per cavity (MeV)&11.25&22.5&22.5\\ RF Frequency (MHz)&201.25&201.25&201.25\\ Solenoid length (m)&1&1&1\\ Solenoid field (T)&2.1&2.1&2.1 \end{tabular} \end{ruledtabular} \end{table} Table~\ref{tab:acc:linac} summarizes parameters for the linac.
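A consistency check on Table~\ref{tab:acc:linac} is worth noting. If the peak beta function scales with the cryomodule length and all three module types share a common acceptance-limited aperture---our inference, not a stated design rule---then the displacement formula of Section~\ref{sec5-sub2} predicts a minimum momentum proportional to the module length. A few lines of Python confirm the proportionality:
\begin{verbatim}
# Minimum allowed momentum vs. cryomodule length, from the table above.
# If beta_max ~ L and the aperture radius r is common to all modules,
# p_min = A * (m c) * beta_max / r**2 is proportional to L.
modules = {"Cryo I": (3.25, 273.0),
           "Cryo II": (4.50, 378.0),
           "Cryo III": (6.75, 567.0)}
for name, (length_m, p_min) in modules.items():
    print(f"{name}: p_min / L = {p_min / length_m:.1f} (MeV/c)/m")
# All three ratios are 84.0 (MeV/c)/m, consistent with a single
# acceptance-limited aperture across the three cryomodule types.
\end{verbatim}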
The phase of the cavities in the linac will be varied linearly with length, from about 65$^\circ$ at the beginning of the linac to 0$^\circ$ at the end. As indicated in Table~\ref{tab:acc:linac}, we must inject into the linac at a momentum of 273~MeV/c, which is still higher than the average momentum in the cooling channel. We deal with this by designing a matching section from the cooling channel to the linac in which sufficient acceleration occurs to reach the momentum required by the linac. That matching section will consist of cavities similar to those in the cooling channel, but with thinner windows. \begin{figure}[tbp] \includegraphics[width=\textwidth]{sec5-sub2-Dogbone.eps} \caption{(Color) Dogbone (top) and racetrack (bottom) layout for the RLA.} \label{fig:acc:layout} \end{figure} Compared to FS2, we are injecting into the RLA at a lower energy and are accelerating over a much smaller energy range. This will make it more difficult to have a large number of turns in the RLA. To mitigate this, we choose a ``dogbone'' layout for the RLA \cite{pac01:3323}. For a given amount of installed rf, the dogbone layout has twice the energy separation of the racetrack layout at the spreaders and recombiners, making the switchyard much easier and allowing more passes through the linac. One disadvantage of the dogbone layout is that, because of the longer linac and the very low injection energy, there is a significant phase shift of the reference particle with respect to the cavity phases along the length of the linac in the first pass (or the last pass, depending on which energy the cavities are phased for). To reduce this effect, we inject into the center of the linac (as shown in Fig.~\ref{fig:acc:layout}). In the dogbone RLA, we have just over 1~GeV of linac, and we make three and a half passes through that linac to accelerate from a total energy of 1.5~GeV to 5~GeV. The linac will use the same cryomodules that were used in the RLA in FS2. \begin{figure}[tbp] \includegraphics[width=\textwidth]{sec5-sub2-Bogacz-040420-12b-fix.eps} \caption{(Color) A section of the dogbone arc where the bend changes direction, showing the dispersion (solid) and beta functions (dashed).} \label{fig:acc:disp} \end{figure} Since the direction of bend changes twice in each dogbone arc, dispersion matching must be handled carefully. This is done by having a 90$^\circ$ phase advance per cell and removing the dipoles from two consecutive cells. This causes the dispersion to switch to the other sign, as desired, as shown in Fig.~\ref{fig:acc:disp}. \begin{figure}[tbp] \includegraphics[width=\textwidth]{sec5-sub2-StudyIIa-layout.eps} \caption{(Color) Potential layout for the acceleration systems.} \label{fig:acc:alllayout} \end{figure} Figure~\ref{fig:acc:alllayout} shows a compact potential layout for all the acceleration systems described here. \subsubsection{FFAG Tracking Results} \begin{figure}[tbp] \includegraphics[width=0.65\textwidth]{sec5-sub2-040224a-nu} \caption{(Color) Tunes as a function of energy in the 5--10~GeV FFAG reference design.} \label{fig:acc:nu} \end{figure} Initial experience with FFAG lattices having a linear midplane field profile has shown them to have a good dynamic aperture at fixed energies. We are careful to avoid single-cell linear resonances to prevent beam loss. However, since the tune is not constant (see Fig.~\ref{fig:acc:nu}), the single-cell tune will pass through many nonlinear resonances.
Nonlinearities in the magnetic field due to end effects are capable of driving those nonlinear resonances, and we must be sure that they cause no beam loss and only minimal emittance growth. Furthermore, there is the potential to weakly drive multi-cell linear resonances, because the changing energy makes subsequent cells appear slightly different from each other. These effects can be studied through tracking. ICOOL~\cite{icool} is used for the tracking for several reasons: it allows a fairly arbitrary end-field description, it attempts to make that description consistent with Maxwell's equations, and it tracks accurately when the lattice acceptances, beam sizes, and energy spread are all large. \begin{figure}[tbp] \includegraphics[width=0.5\textwidth]{sec5-sub2-end0} \includegraphics[width=0.5\textwidth]{sec5-sub2-end1} \caption{(Color) Relative dipole field (left) and quadrupole field (right) near the magnet end. The dashed line is the field from TOSCA, while the solid line is our model.} \label{fig:acc:end0} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=0.65\textwidth]{sec5-sub2-end2} \caption{(Color) Peak magnitude of the sextupole end field at radius $R$ (the magnet aperture), divided by the dipole field. The dashed line is the field from TOSCA, while the solid line is our model.} \label{fig:acc:end2} \end{figure} We begin by constructing a simple model of both a quadrupole and a dipole $\cos\theta$-type magnet, without iron, using TOSCA \cite{opera3d}. At the end of the magnet, the field does not immediately drop to zero, but falls gradually, as shown in Fig.~\ref{fig:acc:end0}. The end-field falloff in the dipole and quadrupole generates nonlinear fields, which ICOOL calculates. In addition, there are higher-order multipoles generated by breaking the magnet symmetry at the ends, where the coils form closed loops. We use TOSCA to compute the sextupole components that arise from this effect, as shown in Fig.~\ref{fig:acc:end2}, and include them in our computation. The TOSCA computation is done without iron, which leads to the overshoot in the field values in Figs.~\ref{fig:acc:end0}--\ref{fig:acc:end2}. Iron in the magnet will likely eliminate that overshoot. Thus, we approximate the fields from TOSCA using functions without the overshoot. Fitting roughly to the TOSCA results, the fields are approximated by \begin{equation} \begin{gathered} B_0(z) = B_{00}\dfrac{1+\tanh{\dfrac{z}{0.7R}}}{2},\qquad B_1(z) = B_{10}\dfrac{1+\tanh{\dfrac{z}{0.35R}}}{2}\\ B_2(z) = -0.2B_{00}\exp\left[-\dfrac{1}{2} \left(\dfrac{z-0.36 R}{0.57 R}\right)^2\right], \end{gathered} \end{equation} where $R$ is the magnet aperture radius, $B_0(z)$ is the dipole field, $B_{00}$ is the dipole field in the center of the magnet, $B_1(z)$ is the quadrupole field, $B_{10}$ is the quadrupole field in the center of the magnet, and $B_2(z)$ is the sextupole field at the radius $R$, whose maximum magnitude is $0.2B_{00}$. These fitted functions are shown in their corresponding plots in Figs.~\ref{fig:acc:end0}--\ref{fig:acc:end2}. \begin{figure}[tbp] \centering \includegraphics[width=0.65\textwidth]{sec5-sub2-res3} \caption{(Color) Horizontal phase space of tracking at 5.1~GeV/c at the outer edge of the acceptance.
Open circles are without the body sextupole fields and show a third-order resonance; filled circles are with the body sextupole fields.} \label{fig:acc:res3} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=0.65\textwidth]{sec5-sub2-shapek} \caption{(Color) Sextupole field components in the 5--10~GeV FFAG reference lattice. The dotted line is the dipole field, the dashed line is $B_2/(20R^2)$ with zero body sextupole field, and the solid line is with sufficient body sextupole field to eliminate the third-order resonance.} \label{fig:acc:sex} \end{figure} \begin{figure}[tbp] \includegraphics[width=0.5\textwidth]{sec5-sub2-accres} \includegraphics[width=0.5\textwidth]{sec5-sub2-partial} \caption{(Color) Tracking of a particle at the edge of the acceptance with uniform acceleration in the 5--10~GeV reference lattice. On the left, the dashed line is without any body sextupole, and the solid line is with the corrected body sextupole. On the right, a smaller integrated sextupole correction is used (40\% instead of 68\%), and significant emittance growth is observed.} \label{fig:acc:trkacc} \end{figure} Injecting particles at the outer edge of the acceptance and tracking them through several cells revealed a large third-order resonance at around 5.1~GeV/c, as shown in Fig.~\ref{fig:acc:res3}. This resonance is presumably driven by the sextupole fields at the magnet ends. The strength of the resonance can be reduced if the integrated sextupole in the magnet is made zero. With some experimentation, it was found that if the integrated body sextupole was set to 68\% of the integrated end sextupoles (see Fig.~\ref{fig:acc:sex}), the resonance was eliminated (also shown in Fig.~\ref{fig:acc:res3}); a numerical sketch of this correction is given at the end of this subsubsection. When acceleration is included, one sees particle loss when accelerating through the resonance if there is no body sextupole correcting the end sextupoles, while there appears to be almost none with the body correction included (see Fig.~\ref{fig:acc:trkacc}). If the body correction is only partially included, there is significant emittance growth, as seen in Fig.~\ref{fig:acc:trkacc}. With these sextupole corrections, we can uniformly accelerate over the entire 5--10~GeV energy range of the lower energy reference FFAG without losing a high-amplitude particle or having its amplitude grow by a large amount. \begin{figure}[tbp] \includegraphics[width=0.5\textwidth]{sec5-sub2-ephi2bk} \includegraphics[width=0.5\textwidth]{sec5-sub2-ephi2b} \caption{(Color) Longitudinal tracking starting from an upright ellipse for the 5--10~GeV FFAG. On the left, with only 201.25~MHz rf. On the right, with third-harmonic rf having voltage equal to 2/9 of the fundamental rf voltage. Curves are labeled with their corresponding acceptance. Crosses in both cases started out as horizontal and vertical lines in phase space.} \label{fig:acc:longtrk} \end{figure} When tracking with rf is considered, the longitudinal behavior is complex \cite{KoscielniakPAC03}. If one begins with an upright ellipse, there is considerable emittance growth if only the 201.25~MHz rf is used (see Fig.~\ref{fig:acc:longtrk}). Adding third-harmonic rf considerably reduces the emittance growth, as shown in Fig.~\ref{fig:acc:longtrk}. The amount of third-harmonic rf required is substantial, however, and that, combined with space considerations, makes this alternative unattractive. Tilting the initial ellipse in phase space would reduce the emittance growth, but a means to produce that tilt must be developed.
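To make the end-field bookkeeping above concrete, the following sketch integrates the Gaussian sextupole profile of the fitted field model and sizes the corresponding body sextupole at the empirical 68\% level. The magnet parameters are taken from the defocusing magnet of the 5--10~GeV lattice in Table~\ref{tab:acc:ffag}; the sign convention, with the body term opposing the end contributions, is our reading of the correction rather than a stated prescription.
\begin{verbatim}
import math

R = 0.140916        # aperture radius (m), 5-10 GeV defocusing magnet
B00 = 1.63774       # central dipole field (T)
L_MAG = 1.612338    # magnet length (m)

# Gaussian end-field sextupole B2(z) from the fitted model above:
# amplitude -0.2*B00, centered at 0.36*R, width sigma = 0.57*R.
AMP = -0.2 * B00
SIGMA = 0.57 * R

# Integrating the Gaussian over the full profile gives amp*sigma*sqrt(2*pi);
# the offset of the peak does not affect the integral.
end_integral = AMP * SIGMA * math.sqrt(2.0 * math.pi)  # T m, per magnet end
both_ends = 2.0 * end_integral
# Body sextupole spread over the magnet length, opposing 68% of the ends:
body_b2 = -0.68 * both_ends / L_MAG                    # T, at radius R
print(f"integrated end sextupole (both ends): {both_ends:+.4f} T m")
print(f"body sextupole at radius R:           {body_b2:+.4f} T")
\end{verbatim}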
\subsubsection{FFAG Cost Optimization} \begin{figure}[tbp] \centering \includegraphics[width=0.65\textwidth]{sec5-sub2-040224a-dt} \caption{(Color) Time-of-flight deviation per cell as a function of energy for the 5--10~GeV reference design.} \label{fig:acc:tof} \end{figure} The designs for the FFAG lattices are chosen based on a cost optimization. The ``costs'' computed by our model are not intended to estimate the cost of an actual machine; they are used only to compare the cost of one design relative to another. For a given lattice type and a given energy range, the magnet lengths, their dipole fields, and their quadrupole fields are allowed to vary to produce the minimum cost for the lattice. Magnet apertures are determined by finding a circle that encloses all of the beam ellipses within a magnet (at all energies and positions) for transverse amplitudes equal to the transverse acceptance. The difference between the time-of-flight at the minimum energy and the minimum time-of-flight for all energies (see Fig.~\ref{fig:acc:tof}) is constrained to be a certain value; similarly for the time-of-flight at the maximum energy. That value is chosen to make the quantity $V/(\omega\Delta T\Delta E)$ take on specific values, which depend on the energy of the lattice as well as the desired longitudinal acceptance. In this expression, $V$ is the total rf voltage in the ring expressed as an energy gain, $\omega$ is the angular rf frequency, $\Delta T$ is the time-of-flight difference described above, and $\Delta E$ is the energy range over which the beam is accelerated. We examined FFAG lattice designs that minimize the relative cost. Some constraints were assumed in performing the optimization: \begin{itemize} \item There is at least one 2-m-long drift in each cell to make room for a superconducting rf cavity. This is substantially longer than the rf cavity, but the extra length is needed to reduce the field from the magnets to below 0.1~T at the cavity surface \cite{Ono99}. \item All shorter drifts in the cell are 0.5~m long. This is needed to maintain sufficient space between magnets for cryostats and other necessary equipment. \item The minimum and maximum energy of each accelerating stage are fixed. \item The type of lattice (doublet, triplet, FODO, etc.) is fixed. \item The time-of-flight on the energy-dependent closed orbit is the same at the minimum and maximum energies (see Fig.~\ref{fig:acc:tof}). This minimizes the deviation of the bunch from the rf crest and therefore, presumably, maximizes the longitudinal acceptance of the lattice. \item The quantity $V/(\omega\Delta T\Delta E)$ is a fixed value that depends on the energy range being considered. This value characterizes the longitudinal acceptance of the system. \item The rf gradient is not allowed to exceed a specified value. Here we take a conservative value of 10~MV/m, corresponding to a gradient that has already been achieved~\cite{pac03:1309}.
\end{itemize} \begin{table}[tbp] \caption{Cost-optimum lattices with cavities in all but 8 cells.} \label{tab:acc:shortopt}% \begin{ruledtabular} \begin{tabular}{l|rrr|rrr|rrr} Minimum total energy (GeV)&\multicolumn{3}{c|}{2.5}&\multicolumn{3}{c|}{5}&\multicolumn{3}{c}{10}\\ Maximum total energy (GeV)&\multicolumn{3}{c|}{5}&\multicolumn{3}{c|}{10}&\multicolumn{3}{c}{20}\\ $V/(\omega\Delta T\Delta E)$&\multicolumn{3}{c|}{1/6}&\multicolumn{3}{c|}{1/8}&\multicolumn{3}{c}{1/12}\\ Type &FD &FDF &FODO&FD &FDF &FODO&FD &FDF &FODO\\ No.\ of cells &65 &60 &76 &79 &72 &91 &93 &85 &105 \\ D length (cm) &62 &96 &56 &82 &119 &77 &105 &143 &98 \\ D radius (cm) &13.6&16.5&16.0&10.2&12.7&11.7&7.8 &9.7 &8.7 \\ D pole tip field (T) &3.7 &3.3 &1.9 &4.6 &4.2 &3.8 &5.8 &5.5 &5.0 \\ F length (cm) &99 &48 &93 &126 &64 &119 &162 &85 &151 \\ F radius (cm) &19.1&15.8&22.8&15.3&12.8&17.8&12.7&10.9&14.6\\ F pole tip field (T) &2.2 &2.4 &1.7 &2.8 &3.1 &2.2 &3.5 &3.7 &2.8 \\ No.\ of cavities &57 &52 &68 &71 &64 &83 &85 &77 &97 \\ RF voltage (MV) &428 &390 &510 &533 &480 &623 &638 &578 &728 \\ $\Delta E/V$ &5.8 &6.4 &4.9 &9.4 &10.4&8.0 &15.7&17.3&13.7\\ Circumference (m) &268 &295 &418 &362 &393 &543 &481 &521 &681 \\ Decay (\%) &6.8 &8.2 &8.8 &7.4 &8.9 &9.4 &8.5 &10.1&10.4\\ Magnet cost (A.U.)\footnote{Arbitrary units} &36.4&41.6&49.6&32.8&37.4&40.0&34.1&39.2&38.4\\ RF cost (A.U.) &27.7&25.3&33.0&34.5&31.1&40.3&41.3&37.4&47.1\\ Linear cost (A.U.) &6.7 &7.4 &10.4&9.1 &9.8 &13.6&12.0&13.0&17.0\\ Total cost (A.U.) &70.8&74.3&93.1&76.3&78.3&93.8&87.4&89.6&102.5\\ Cost per GeV (A.U.)&28.3&29.7&37.2&15.3&15.7&18.8&8.7 &9.0 &10.2 \end{tabular} \end{ruledtabular} \end{table} To get an idea of what is achievable, we developed a set of cost-optimized lattices where a cavity is placed in every cell, except for 8 cells left open for injection and extraction hardware. The goal is to minimize the time spent accelerating and therefore minimize the decays. This drives the design toward a smaller ring but more rf. The results of this optimization are shown in Table~\ref{tab:acc:shortopt}. The costs are a significant improvement over the FS2 acceleration costs. Table~\ref{tab:acc:shortopt} leads to several conclusions: \begin{itemize} \item The doublet lattice is the most cost-effective design. The triplet lattice requires less voltage, but has a higher magnet cost due to having more magnets per cell. \item The cost per GeV of acceleration increases as the energy decreases. The RLA from FS2 has a cost per GeV around 30 in the units of Table~\ref{tab:acc:shortopt}, so this in some sense sets a baseline for determining when an FFAG approach becomes cost effective. Thus, a 2.5--5~GeV FFAG is borderline in its cost effectiveness, while the higher energy FFAGs are clearly cost effective. \end{itemize} \begin{figure}[tbp] \includegraphics[width=0.65\textwidth]{sec5-sub2-040528a-c-au.eps} \caption{(Color) Costs of the doublet lattices in Table~\ref{tab:acc:shortopt} with the rf cost modified for higher cavity gradients.} \label{fig:acc:shortvsv} \end{figure} The effects of increasing the rf gradient on the costs of the doublet designs in Table~\ref{tab:acc:shortopt} are shown in Fig.~\ref{fig:acc:shortvsv}. 
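As a rough cross-check of the tabulated doublet designs, the longitudinal constraint can be inverted to recover the number of turns and the time-of-flight window for each stage. The snippet below is illustrative only; in particular, we read $\Delta T$ as the full-ring time-of-flight deviation, which is our interpretation rather than a convention stated in the text.
\begin{verbatim}
import math

OMEGA = 2.0 * math.pi * 201.25e6   # angular rf frequency (rad/s)

# Doublet (FD) columns of the cost-optimization table:
# (E_min GeV, E_max GeV, V/(omega dT dE), rf voltage MV)
designs = [(2.5, 5.0, 1 / 6, 428.0),
           (5.0, 10.0, 1 / 8, 533.0),
           (10.0, 20.0, 1 / 12, 638.0)]

for e_min, e_max, ratio, v_mv in designs:
    d_e = (e_max - e_min) * 1e9      # eV
    v = v_mv * 1e6                   # total rf voltage, as energy gain in eV
    turns = d_e / v                  # should match the Delta E / V row
    d_t = v / (OMEGA * d_e * ratio)  # allowed time-of-flight deviation, s
    print(f"{e_min:.1f}-{e_max:.1f} GeV: {turns:.1f} turns, "
          f"Delta T = {1e9 * d_t:.2f} ns")
\end{verbatim}
The computed turn numbers (5.8, 9.4, and 15.7) match the $\Delta E/V$ row of Table~\ref{tab:acc:shortopt}, and the implied time-of-flight windows are below a nanosecond.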
\begin{figure}[tbp] \centering \includegraphics[width=0.65\textwidth]{sec5-sub2-040726a-c-au.eps} \caption{(Color) Costs of the optimized doublet lattices as a function of the transverse normalized acceptance.} \label{fig:acc:accopt} \end{figure} The choice of the acceptance also has a strong effect on the optimized cost, as shown in Fig.~\ref{fig:acc:accopt}. \section{Neutrino Factory and Beta Beam R\&D} \label{r_and_d} As should be clear from the design descriptions in Section~\ref{sec5}, both the muon-based Neutrino Factory and the Beta Beam facility are demanding projects. Both types of machine make use of novel components and techniques that are, in some cases, at or beyond the state of the art. For this reason, it is critical that R\&D efforts to study these matters be carried out. In this Section we describe the main areas of R\&D effort under way in support of the two projects. We give an overview of the R\&D program goals and list the specific questions we expect ultimately to answer. We also summarize briefly the R\&D accomplishments to date and give an indication of R\&D plans for the future. Since neither of these projects is expected to begin construction in the near future, it might be asked why it is necessary to pursue a vigorous R\&D program now. One answer is that this R\&D is what allows us to determine---with some confidence---both the expected performance and expected cost of such machines. This information must be available in a timely way to permit the scientific community to make informed choices on which project(s) they wish to request at some future time. Experience has shown that large, complex accelerator projects take many years of preparatory R\&D in advance of construction. It is only by supporting this R\&D effort now that we can be ready to provide a Neutrino Factory or Beta Beam facility when the proper time comes. \subsection{Neutrino Factory R\&D} Successful construction of a muon storage ring to provide a copious source of neutrinos requires many novel approaches to be developed and demonstrated; a high-luminosity Muon Collider, which might someday follow, would require an even greater extension of the present state of accelerator design. Thus, reaching the desired facility performance requires an extensive R\&D program. Each of the major systems has significant issues that must be addressed by R\&D activities. Component specifications need to be verified. For example, the cooling channel assumes a normal conducting rf (NCRF) cavity gradient of 15 MV/m at 201.25 MHz, and the acceleration section demands similar performance from superconducting rf (SCRF) cavities at this frequency. In both cases, the requirements are beyond the performance reached to date for cavities in this frequency range. The ability of the target to withstand a proton beam power of up to 4 MW must be confirmed. Finally, an ionization cooling experiment should be undertaken to validate the implementation and performance of the cooling channel, and to confirm that our simulations of the cooling process are accurate. \subsubsection{R\&D Program Overview} A Neutrino Factory comprises the following major systems: Proton Driver; Target, (Pion) Capture, and (Pion-to-Muon) Decay Section; Bunching and Phase Rotation Section; Cooling Section; Acceleration Section; and Storage Ring. The R\&D program we envision is designed to answer first the key questions needed to embark upon a Zeroth-order Design Report (ZDR). 
The ZDR will examine the complete systems of a Neutrino Factory, making sure that nothing is forgotten, and will show how the parts merge into a coherent whole. While it will not present a fully engineered design with a detailed cost estimate, enough detail will be presented to ensure that the critical items are technically feasible and that the proposed facility could be successfully constructed and operated at its design specifications. By the end of the full R\&D program, it is expected that work on a formal Conceptual Design Report (CDR) for a Neutrino Factory could begin. The CDR would document a complete and fully engineered design for the facility, including a detailed bottom-up cost estimate for all components. This document would form the basis for a full technical, cost, and schedule review of the construction proposal, subsequent to which construction could commence (assuming strong community support and government approval). The R\&D issues for each of the major systems must be addressed by a mix of theoretical, simulation, modeling, and experimental studies, as appropriate. A list of the key physics and technology issues for each major Neutrino Factory system is given below. These issues are being actively pursued as part of the ongoing worldwide Neutrino Factory R\&D program, with participation from Europe, Japan, and the U.S. \textbf{Proton Driver} \begin{itemize} \item Production of intense, short proton bunches, e.g., with space-charge compensation and/or high-gradient, low-frequency rf systems \end{itemize} \textbf{Target, Capture, and Decay Section} \begin{itemize} \item Optimization of target material (low-\textit{Z} or high-\textit{Z}) and form (solid, moving band, liquid-metal jet) \item Design and performance of a high-field solenoid ($\approx$20~T) in a very high radiation environment \end{itemize} \textbf{Bunching and Phase Rotation Section} \begin{itemize} \item Design of an efficient and cost-effective bunching system \item Examination of alternative approaches, e.g., based upon combined rf phase rotation and bunching systems or fixed-field, alternating gradient (FFAG) rings \end{itemize} \textbf{Cooling Section} \begin{itemize} \item Development and testing of high-gradient normal conducting rf (NCRF) cavities at a frequency near 200~MHz \item Development and testing of efficient high-power rf sources at a frequency near 200~MHz \item Development and testing of LH$_{2}$, LiH, and other absorbers for muon cooling \item Development and testing of candidate diagnostics to measure emittance and optimize cooling channel performance \end{itemize} \textbf{Acceleration Section} \begin{itemize} \item Optimization of acceleration techniques to increase the energy of a muon beam (with a large momentum spread) from a few GeV to a few tens of GeV (e.g., recirculating linacs, rapid cycling synchrotrons~\cite{rcsync}, FFAG rings) \item Development of high-gradient superconducting rf (SCRF) cavities at frequencies near 200~MHz, along with efficient power sources (about 10~MW peak) to drive them \item Design and testing of components (rf cavities, magnets, diagnostics) that will operate in the muon-decay radiation environment \end{itemize} \textbf{Storage Ring} \begin{itemize} \item Design of large-aperture, well-shielded superconducting magnets that will operate in the muon-decay radiation environment \end{itemize} \subsubsection{Recent R\&D Accomplishments} \paragraph{Targetry} The BNL Targetry experiment, E951, has carried out initial beam tests~\cite{TgtRef} of both a solid carbon target and
a mercury target at a proton beam intensity of about $4\times 10^{12}$ protons per pulse (ppp). In the case of the solid carbon target, it was found that a carbon-carbon composite having a nearly zero coefficient of thermal expansion is largely immune to beam-induced pressure waves. A carbon target in a helium atmosphere is expected to have negligible sublimation loss. A program to verify this is under way at ORNL~\cite{ORNLsublimation}. If radiation damage is the limiting effect for a carbon target, the predicted lifetime would be about 12 weeks when bombarded with a 1~MW proton beam. For a mercury jet target, tests with about $2\times 10^{12}$ ppp showed that the jet is not dispersed until long after the beam pulse has passed through the target (see Fig.~\ref{fig:HgJet}). Measurements of the velocity of droplets emanating from the jet as it is hit with the proton beam pulse from the AGS ($\thickapprox$10~m/s for 25~J/g energy deposition) compare favorably with simulation estimates. High-speed photographs indicate that the beam disruption at the present intensity does not propagate back upstream toward the jet nozzle. If this remains true at the higher intensity of $1.6\times 10^{13}$ ppp, it will ease mechanical design issues for the nozzle. \begin{figure}[hptb!] \includegraphics[width=5.5in]{sec6-HgJetPic} \caption{Disruption of Hg jet hit with AGS beam bunch containing $2\times10^{12}$ protons. Frames from left to right correspond to time steps of 0, 0.75, 2, 7, and 18~ms, respectively.} \label{fig:HgJet} \end{figure} \paragraph{MUCOOL} A primary effort has been to carry out high-power tests of 805-MHz rf cavities in the Lab G test area at Fermilab. A 5-T test solenoid for the facility, capable of operating either in solenoid mode (its two independent coils powered with the same polarity) or gradient mode (with the two coils opposed), was used to study the effects of magnetic field on cavity performance. Most recently, a single-cell 805-MHz pillbox cavity (Fig.~\ref{fig:pillbox-805}) having Be foils to close the beam iris was tested. This cavity permitted an assessment of the behavior of the foils under rf heating and was used to study dark current effects \cite{DarkCurrentnote}. The cavity reached 40~MV/m (exceeding its design specification) in the absence of a magnetic field, but was limited by breakdown to less than 15~MV/m at high magnetic field ($\thickapprox 2$~T). Understanding the effects of the magnetic field on cavity performance is crucial, as this is the environment required for cavities in a muon cooling channel. \begin{figure}[ptbh!] \includegraphics[scale=0.3]{sec6-Mutac} \caption{(Color) 805-MHz pillbox rf cavity used for testing. The cavity has removable windows to permit tests of different window materials, and a thin exit port to permit dark current studies.} \label{fig:pillbox-805} \end{figure} Development of a prototype LH$_{2}$ absorber, the material chosen for FS2 and also for MICE~\cite{MICEref} (the Muon Ionization Cooling Experiment; see Section~\ref{MICEtext}), is well along. Several large-diameter, thin (125--350~$\mu$m) aluminum windows have been successfully fabricated by machining from solid disks. These have been pressure tested with water and found to break at a pressure consistent with finite-element design calculations \cite{AbsorberRef}. Another absorber material that must be studied is LiH, the material on which the cooling channel used in this report is based. In the new scheme, the LiH serves as both an absorber and an rf window.
This configuration could be tested in the 805-MHz pillbox cavity described above. A new area, the MUCOOL Test Area (MTA), is nearing completion at FNAL and will be used for initial testing of the liquid-hydrogen absorbers. It will also have access to both 805-MHz and 201-MHz high-power rf amplifiers for continuing rf tests of the 805-MHz pillbox cavity and, soon, for testing a prototype 201-MHz cavity. The MTA is located at the end of the Fermilab proton linac, and is designed to eventually permit beam tests of components and detectors with 400~MeV protons. \paragraph{Beam Simulations and Theory} Subsequent to the work on FS2, effort has focused on further optimization of Neutrino Factory performance and costs. The more cost-effective front-end design reported in this paper is a result of this work. \paragraph{SCRF Development} This work is aimed at development of a high-gradient 201-MHz SCRF cavity for muon acceleration. (The choice of SCRF for a cooling channel is excluded because of the surrounding high magnetic field; the acceleration system does not suffer this limitation.) A test area of suitable dimensions was constructed at Cornell (Fig.~\ref{SCRFpict}) and used to test a prototype cavity fabricated for the Cornell group by CERN colleagues. The cavity reached 11~MV/m in initial tests, but exhibited a significant ``$Q$ slope'' as the gradient increased~\cite{pac03:1309}. To better understand the origins of this phenomenon, effort will shift to studies on a smaller 500-MHz cavity. Different coating and cleaning techniques will be explored to learn how to mitigate the observed $Q$ slope. \begin{figure}[ptbh!] \includegraphics[width=4in]{sec6-SCRFpict1} \caption{(Color) 201-MHz SCRF cavity being prepared for testing at Cornell.} \label{SCRFpict} \end{figure} \subsubsection{R\&D Plans} \paragraph{Targetry} For the targetry experiment, the design of a pulsed solenoid and its power supply is under way. A cost-effective design capable of providing up to a 15~T field has been developed (see Fig.~\ref{fig:TgtMag}). \begin{figure}[ptbh!] \includegraphics[width=2in,angle=90]{sec6-cryostat_side_082802} \includegraphics[width=2in]{sec6-cryo_front} \caption{(Color) Design of targetry test magnet. The magnet has three nested coils that permit operation at 5, 10, and 15~T. The coils are normal conducting but cooled to liquid-nitrogen temperature to ease the requirements on the power supply.} \label{fig:TgtMag} \end{figure} Tests of a higher-velocity mercury jet (about 20~m/s, compared with about 2.5~m/s in the jet system initially tested) will be carried out. To complement the experimental program, target simulation efforts are ongoing. These aim at a sufficiently detailed understanding of the processes involved to reproduce the observed experimental results both with and without a magnetic field. Fully 3D magnetohydrodynamics codes are being used for this effort. \paragraph{MUCOOL} Further testing work for 805-MHz components will continue in the MTA. Work will focus on understanding and mitigating dark current and breakdown effects at high gradient. Many aspects of cavity design, such as cleaning and coating techniques, will be investigated. In addition, tests of alternative designs for window or grid electromagnetic terminations for the rf cavity will initially be explored to identify the best candidates for the full-sized 201-MHz prototype cavity. Fabrication of the 201-MHz cavity by a group from LBNL, JLab, and the University of Mississippi is nearly complete.
This cavity will also be tested in the MTA. Thermal tests of a prototype absorber in the MTA are just getting under way. Fabrication of other cooling channel components required for the initial phase of testing will be carried out, including a large-bore superconducting solenoid and diagnostics that could be used for the experiment. With these components, it will eventually be possible to assemble and bench-test a full prototype cell of a realistic cooling channel. Provision will be made to test curved Be windows and grids in the 805-MHz cavity, followed by tests on the 201-MHz prototype. As already noted, the site of the MTA was selected with the goal of permitting beam tests of the cooling channel components with a high-intensity beam of 400~MeV protons. While not the same as using an intense muon beam, such a test would permit a much better understanding of how the cooling channel would perform operationally, especially the high-gradient rf cavity and the LH$_{2}$ or LiH absorber. \paragraph{Beam Simulations and Theory} A major simulation effort will continue to focus on iterating the front-end channel design to optimize it for cost and performance. Further effort will be given to beam dynamics studies in the FFAG rings and storage ring, including realistic errors. The optics design will be further optimized. Assessments of field-error effects on the beam transport will be made to define acceptance criteria for the magnets. This will require the use of sophisticated tracking codes, such as COSY~\cite{COSYref}, that permit rigorous treatment of field errors and fringe-field effects. In many ways, the storage ring is one of the most straightforward portions of a Neutrino Factory complex. However, beam dynamics is an issue here, as the muon beam must circulate for many hundreds of turns. Use of a tracking code such as COSY is required to assess fringe-field and large-aperture effects. As with the FFAG rings, the relatively large emittance and large energy spread enhance the sensitivity to magnetic field and magnet placement errors. Suitable magnet designs are needed, with the main technical issue being the relatively high radiation environment. Another lattice issue that must be studied is polarization measurement. In the initial implementation of a Neutrino Factory it is expected that polarization will not be considered, but its residual value may nonetheless be important in analyzing the experiment. Simulation efforts in support of MICE will continue. We also plan to participate in a so-called ``World Design Study'' of an optimized Neutrino Factory. This study, an international version of the two previous U.S. Feasibility Studies, will likely be hosted in the UK by Rutherford Appleton Laboratory (RAL), the site of the MICE experiment (see Section~\ref{MICEtext}). It will be organized jointly by representatives from Europe, Japan, and the U.S. \paragraph{SCRF Development} A prototype 500-MHz SCRF cavity will be used to study the $Q$-slope phenomenon, with the goal of developing coating and cleaning techniques that reduce or eliminate it. Detuning issues at 201~MHz, associated with the very large cavity dimensions and the pulsed rf system, will be evaluated. Tests of the 201-MHz SCRF cavity will include operation in the vicinity of a shielded solenoid magnet, to demonstrate our ability to adequately reduce nearby magnetic fields in a realistic lattice configuration. If funds permit, design of a prototype high-power rf source will be explored, in collaboration with industry.
This source---presently envisioned to be a multibeam klystron---must be developed for operation at two different duty factors, because the cooling channel requires a duty factor of about 0.002 whereas the acceleration chain requires 0.045. Magnet designs suitable for the FFAG rings and the muon storage ring will be examined further. Conventional and superconducting designs will be compared wherever both are possible. With SC magnets, radiation heating becomes an issue and must be assessed and dealt with. \subsubsection{Cooling Demonstration Experiment} \label{MICEtext}Clearly, one of the most important R\&D tasks needed to validate the design of a Neutrino Factory is to measure the cooling effects of the hardware we propose. Participation in the International Muon Ionization Cooling Experiment (MICE) will accomplish this, and is therefore expected eventually to grow into a primary activity. Unquestionably, the experience gained from this experiment will be invaluable for the design of an actual cooling channel. At the NUFACT'01 Workshop in Japan, a volunteer group was formed to organize a cooling demonstration experiment that might begin as soon as 2004. Membership in this group includes representatives from Europe, Japan, and the U.S. The experimental collaboration now numbers some 140 members from the three geographical regions. The MICE Collaboration has received scientific approval for the experiment from RAL management, and is now in the process of seeking funding. The experiment will involve measuring, on a particle-by-particle basis, the emittance reduction produced by a single cell of the FS2 cooling channel. A schematic of the layout is shown in Fig.~\ref{fig:MICElayout}. The cooling channel cell is preceded and followed by nearly identical detector modules that accomplish particle identification and emittance measurement. Provision for testing a series of absorber materials, including both LH$_{2}$ and solid absorbers, has been made. The liquid-hydrogen system has passed a preliminary safety review, and permission to begin detailed engineering has been granted. \begin{figure}[ptbh!] \includegraphics*[bb=0 350 600 697]{sec6-MICE} \caption{(Color) Schematic of the MICE layout.} \label{fig:MICElayout} \end{figure} \subsection{Beta Beam R\&D} Constructing a Beta Beam facility requires a number of new techniques to be developed. In the CERN-based scenario, where the facility incorporates a number of existing machines, a significant technical challenge is to ensure that the proposed parameters for the Beta Beam case are compatible with the capabilities of the present accelerators. If not, the required modifications must be identified and demonstrated. An additional constraint in the CERN scenario---or a corresponding U.S. scenario based on existing machines---is that the modifications to accommodate Beta Beams must maintain compatibility with existing programs. There are several significant challenges in providing Beta Beams of the required intensity, and R\&D is required to validate the concepts proposed to deal with them. These issues include: \begin{itemize} \item \textbf{Target:} To provide the required ion intensities, the target must be able to handle the driver beam intensity for a reasonable lifetime. While the production of $^{6}$He looks fairly straightforward, the production of $^{18}$Ne is less so. The baseline production method for $^{18}$Ne relies on direct bombardment of the target material with the proton beam.
The beam intensity that can be tolerated while maintaining a reasonable target lifetime must be determined. The alternative production technique suggested for $^{18}$Ne, namely the $^{16}$O($^{3}$He,n) reaction, clearly requires a different incoming beam, which might call for an additional driver accelerator. In the scenario where the outputs from three targets are combined, techniques for splitting the driver beam into multiple paths and combining the target outputs into a single ion source need to be worked out and tested. Since the ion source must necessarily be remote from the highly active target area, good transport efficiency for the nuclides of interest must be demonstrated. \item \textbf{Ion Source:} With an ISOL-type production system, the ions are produced continuously. To prepare the beam for acceleration, it must then be bunched considerably, to below $20\,\mu$s. It is proposed to obtain the required intensity and bunch structure by using a new ion source concept with state-of-the-art specifications. The source is an ECR source operating at high frequency (60~GHz) with a high magnetic field (2--3~T) and high plasma density ($n_{e}\thicksim 10^{14}\,\text{cm}^{-3}$). Such a source has never been built, though a development effort is now under way at Grenoble~\cite{sortais}. It will be necessary to study the influence of the carrier gas on the ionization efficiency. Also, the ability to produce fully stripped beams of $^{18}$Ne must be demonstrated. Typically, ECR sources produce high charge states, but not as high as Ne$^{10+}$. Even with an enhanced ion source of the type proposed, the expected~\cite{BouchezRef} antineutrino intensity is $2.1\times 10^{18}$ per ``Snowmass year'' ($1\times 10^{7}$~s), and that for neutrinos (assuming that the outputs from three production targets are combined) is $3.5\times 10^{17}$. \item \textbf{Decay Losses:} The losses in the low-energy portion of the accelerator chain are high because the beam is intense and its relativistic $\gamma$ is low. In the CERN scenario, the PS is somewhat vulnerable, as it has already accumulated a high radiation dose over its operational lifetime and its cycle time is not rapid. For $^{6}$He, the PS losses are estimated~\cite{LindroosRef} to be about 1.2~W/m, while those in the storage ring are 28~W/m. In the latter case, specially designed superconducting magnets with no coils in the midplane are required to avoid quenches. (A variant of this approach is used for the muon storage ring of a Neutrino Factory, for the same reason.) Building prototype magnets and measuring their field quality and quench resistance should be undertaken to validate the proposed approach. \item \textbf{Storage Ring Issues:} Since the storage ring must be frequently topped off, injection requires the use of bunch-merging techniques. A concept has been worked out for this, and an initial test was encouraging. Since many of the problems with rf manipulation techniques are intensity dependent, it will probably still be necessary to validate the proposed scheme under fully realistic conditions. The recent proposal to run both $^{6}$He and $^{18}$Ne beams simultaneously in the decay ring is another non-trivial complication. At the same rigidity, the heavier beam has a relativistic $\gamma$ that is $\frac{5}{3}$ that of the lighter beam (a short numerical check of this ratio follows this list), so the orbits must be adjusted to provide the same revolution period. This may well have some beam dynamics implications for the off-center beam, given that the proposed magnets may not have ideal fields far off axis.
It will certainly complicate beam manipulations for the two beams, especially the injection and bunch merging. Another issue that needs to be evaluated in detail is the influence of the beam parameters (orbits, emittance, beta functions) on the neutrino spectrum at the detector. \end{itemize}
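As referenced in the list above, the $\frac{5}{3}$ ratio of Lorentz factors follows from elementary kinematics: at equal magnetic rigidity the momentum is proportional to the charge number, so in the ultrarelativistic limit $\gamma$ is proportional to $Z/A$. A minimal check:
\begin{verbatim}
# At equal rigidity B*rho, p = Z e B rho, so gamma ~ p/(A m_u c) ~ Z/A
# in the ultrarelativistic limit (a good approximation at these energies).
def gamma_ratio(z1, a1, z2, a2):
    """Ratio of Lorentz factors of ion 1 to ion 2 at equal rigidity."""
    return (z1 / a1) / (z2 / a2)

r = gamma_ratio(10, 18, 2, 6)   # 18Ne (Z=10, A=18) vs 6He (Z=2, A=6)
print(f"gamma(18Ne)/gamma(6He) at equal rigidity = {r:.4f}")  # 1.6667 = 5/3
\end{verbatim}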
\section{Introduction} Planets form in disks of dust and gas that surround young stars. In recent years the spatial resolution and sensitivity achieved by instruments such as the Atacama Large Millimeter/submillimeter Array (ALMA) have allowed us to characterize these objects in detail. A key feature that we can only now properly study, if data at an adequate resolution are available, is the disk vertical structure. Protoplanetary disks have a vertical extent that depends on system properties such as the stellar mass and radiation field, but also on the disk temperature structure and surface density distribution \citep{Bell_1997, Dalessio_1998, Aikawa_1999, van_Zadelhoff_2001_vert_models, Walsh_2010, Fogel_2011, Rosenfeld_2013}. The vertical extent of a disk is set by hydrostatic equilibrium: the balance between stellar gravity and pressure support \citep{Armitage_2015}. Tracing the vertical structure may be used to detect deviations from equilibrium that can be related to meridional flows in the presence of planets \citep{Morbidelli_2014, Szulagyi_2022}, shadowing due to warped structures \citep{nealon_2019}, or infall of material \citep{Hennebelle_2017}, among others. While larger, millimeter-sized grains are expected to be settled in the midplane of disks, molecular and scattered light observations can be used to recover the vertical structure of the disk \citep[e.g.][]{Pinte_2018_method, Avenhaus_2018, Villenave_2020, MAPS_Law_Surf}. Studying the vertical location of various molecular tracers is particularly powerful for accurately mapping the disk structure, due to the sensitivity of line emission to temperature, radiation, and density variations. Several works have used the excitation temperature of line emission to infer the expected vertical location of a given molecule \citep[e.g.][]{van_Zadelhoff_2001_vert_models, Dartois_2003, Teague_2020_CN, MAPS_Bergner, MAPS_Ilee}. This indirect approach is promising, but it relies on physical-chemical models and assumptions about the disk conditions at the location of the emission. Direct tracing of molecular emission layers is fundamental for comparing with current models. Previous studies have focused on edge-on disks to do this \citep[e.g.][]{Podio_2020, vanthoff_2020, Villenave_2020, RuizRodriguez_2021}; however, with the development of new analysis techniques it is now possible to directly extract the vertical surfaces from moderately inclined (35--60$^{\circ}$) sources \citep[e.g.][]{Pinte_2018_method, Paneque-Carreno_Elias1, Rich_2021, MAPS_Law_Surf, Law_2022_12CO}. Analyzing systems that host moderately inclined disks allows us to study them in three dimensions, as it is also possible to trace the azimuthal and radial distribution of material. Of the handful of moderately inclined systems with direct extraction of their vertical surfaces, the vast majority have been analyzed using only CO isotopologue emission \citep{Pinte_2018_method, Paneque-Carreno_Elias1, Rich_2021, MAPS_Law_Surf, leemker_2022_LkCa15, Law_2022_12CO}. The exceptions are AS 209, HD163296 and MWC 480, where the vertical location of HCN and C$_2$H emission was constrained in the inner $\sim$150\,au \citep{MAPS_Law_Surf}. However, the emission extends to over 300\,au \citep{MAPS_Law_radial}; therefore, a large portion of the vertical structure is still unconstrained.
For Elias 2-27, the vertical location of CN has also been directly measured from the channel maps and, by comparing its location to that of CO isotopologues, the density and optical depth conditions of the CN emission were constrained \citep{Paneque-Carreno_2022_Elias_CN}. $^{12}$CO and $^{13}$CO emission is expected to trace optically thick slabs of material in distinct vertical regions, while C$^{18}$O likely traces closer to the midplane and the CO freeze-out region \citep{van_Zadelhoff_2001_vert_models,Miotello_2014}. Through the study of CO isotopologues it is possible to trace the temperature gradient in the vertical direction \citep{MAPS_Law_Surf} and to study CO abundance variations \citep{MAPS_Zhang}. Tracing the location of less abundant molecules can offer complementary constraints on the disk structure and on the physical-chemical processes that set the environmental conditions of planet formation. Emission from CN, HCN and C$_{2}$H is expected to originate from UV-dominated regions \citep{Aikawa_2002, Cazzoletti_2018_CN, Visser_2018, MAPS_Bergner, MAPS_Guzman}. For these molecules, models predict that the emission should arise from elevated regions \citep{van_zadelhoff_2003, Cazzoletti_2018_CN, Cleeves_2018, MAPS_Bergner, MAPS_Guzman}. HCO$^{+}$ is a molecular ion whose abundance is expected to be highest in the CO molecular layer; combined with the CO isotopologue distribution, it can be used to study the disk ionization \citep{MAPS_Aikawa}. Formaldehyde (H$_{2}$CO) has two main formation pathways: gas-phase chemistry \citep{Loomis_2015} and grain-surface chemistry through the hydrogenation of CO ices \citep{Hiraoka_1994, Fuchs_2009}. H$_{2}$CO could be a better tracer of cold gas in the outer disk if it originates from desorption processes off the dust grains; however, the emission may also arise from warmer and higher layers if gas-phase chemistry is dominant in its formation \citep{MAPS_Guzman, Loomis_2015}. Direct tracing of the vertical location of the H$_{2}$CO emission could shed light on the dominant mechanism leading to its formation. In this work we recover the location of the emitting regions for a sample of seven disks, using data from the MAPS ALMA Large Program \citep[\#2018.1.01055.L, ][]{MAPS_Oberg, MAPS_Czekala} and archival ALMA data of Elias 2-27 (\#2016.1.00606.S and \#2017.1.00069.S, P.I. L. P\'erez) and WaOph\,6 (\#2015.1.00168.S, P.I. G. Blake). The data selected for all the disks have sufficient signal-to-noise ratio (SNR) and resolution (spectral and spatial) to recover the vertical structure from multiple CO isotopologues and additional molecular tracers in each system. Elias 2-27 and WaOph\,6 have spiral structures in the dust emission that have been studied at high angular resolution \citep{DSHARP_Andrews, DSHARP_Huang_Spirals} and that may originate from gravitational instabilities \citep{Perez_2016_Elias, DSHARP_Huang_Spirals, Paneque-Carreno_Elias1, Veronesi_2021_Elias}. IM Lup, GM Aur, HD 163296, MWC 480 and AS 209 are sources studied by the MAPS collaboration, and each presents a variety of dust and gas substructures \citep{MAPS_Oberg, MAPS_Law_radial}. Our complete sample covers a broad range of stellar masses (0.5--2\,M$_{\odot}$) and disk substructures (spirals, rings, gaps). The goal of this study is to offer observational constraints on the vertical location of the emission for a wide molecular reservoir in a heterogeneous sample, and to relate the measured system properties to theoretical predictions.
The remainder of this paper is organized as follows. Section 2 details the data that were used and the methodology for extracting the vertical profiles. Section 3 presents our results: we first highlight the case of HD 163296, where we obtain vertical and radial profiles for ten molecules; then we show the CO emission location for all disks and study vertical modulations. This is followed by results on the radial brightness temperature profiles and an estimate of the disk gas pressure scale height. Finally, we present results for multiple molecules other than CO and detail our findings on the structured vertical profiles of HCN and H$_2$CO. Section 4 presents a discussion of our main results, comparing our observational constraints to theoretical estimates, and in Section 5 the main conclusions of this study are highlighted. \section{Observations and Method} \subsection{Data} In this work we use the publicly available data from the MAPS ALMA Large Program \citep{MAPS_Oberg}. For each disk in the sample we download the image cubes that were produced using the JvM correction \citep{MAPS_Czekala}. An initial visual assessment is done to determine if a specific molecule/disk combination can be used for extracting the vertical structure from the channel maps, following the methodology detailed in Section 2.2. Reasons for rejecting a molecule include low SNR or insufficient resolution to identify a Keplerian morphology in the channel emission. Table~\ref{MAPS_files} shows the details of the selected files for each tracer, corresponding to those molecules for which it is possible to conduct our analysis in at least three disks. The data set used may be identified by either the robust parameter or the beam size. For each molecule, the selected robust or beam size is the same for all disks in the MAPS sample. For datasets selected based on the robust parameter, the difference in beam value between major and minor axis is typically of order $\leq$0.02\arcsec. Between sources there is also a slight beam size difference of $\leq$0.02\arcsec. At a distance of $\sim$120\,pc this represents a difference of $\sim$2.5\,au; therefore, we consider beam size variations negligible in our calculations of the vertical profile. In addition to the information presented in Table~\ref{MAPS_files}, for HD\,163296 we also study CN ($N = 1-0$, 113.499\,GHz) and c-C$_{3}$H$_{2}$ ($J = 7_{07}-6_{16}$, 251.314\,GHz). The selection of these data is done by the beam size, 0.5\arcsec\ for CN and 0.3\arcsec\ for c-C$_{3}$H$_{2}$. A detailed study of HD\,163296 is presented in Section 3.1. \begin{table}[h] \centering \renewcommand{\arraystretch}{1.3} \setlength{\tabcolsep}{4pt} \caption{Selected files for molecules in MAPS sample disks} \begin{tabular}{c| c |c|c|c} \hline \hline Molecule & Transition & Freq. [GHz] & robust & beam $^a$ [\arcsec]\\ \hline $^{12}$CO & $J=2-1$ & 230.538 & 0.5& $\sim$0.13\\ $^{13}$CO & $J=2-1$ & 220.398 & 0.5& $\sim$0.13\\ C$^{18}$O & $J=2-1$ & 219.560 & 0.5& $\sim$0.13\\ $^{13}$CO & $J=1-0$ & 110.201 & 0.5& $\sim$0.26\\ \hline HCN & $J=3-2$ & 265.886 & - & 0.30\\ C$_{2}$H & $N=3-2$ & 262.040& - &0.15\\ H$_{2}$CO & $J=3_{03}-2_{02}$ & 218.222 & - & 0.30 \\ HCO$^{+}$ & $J=1-0$ & 89.188 & 0.5 & $\sim$0.32\\ \hline \end{tabular} \tablefoot{ \tablefoottext{a}{For molecules that were selected based on the robust parameter, the beam value corresponds to the mean value of all sources.
In all $J = 2-1$ transitions the typical deviation from the mean value between sources is 6.7\%, and in the $J = 1-0$ transitions it is 8.3\%.} } \label{MAPS_files} \end{table}

\begin{table*}[h!] \centering \def\arraystretch{1.3} \setlength{\tabcolsep}{6pt} \caption{Studied systems, their properties and the analyzed molecules in each case.} \begin{tabular}{c|c|c|c|c|c|c} \hline \hline Star $^a$& M$_*$ & L$_*$ & Distance & Inclination & PA & Studied Molecules $^b$ \\ & [M$_{\odot}$] & [L$_{\odot}$] & [pc] & [deg] & [deg] & \\ \hline IM Lup & 1.1 & 2.57 & 158& 47.5 & 144.5 & $^{12}$CO, $^{13}$CO, C$^{18}$O, HCN, HCO$^+$, H$_2$CO \\ GM Aur & 1.1 & 1.2 & 159& 53.2 & 57.2 & $^{12}$CO, $^{13}$CO, C$^{18}$O, HCN, HCO$^+$, H$_2$CO \\ AS 209 & 1.2 & 1.41 & 121 & 35.0 & 85.8 & $^{12}$CO, $^{13}$CO, C$^{18}$O, HCN, HCO$^+$, H$_2$CO, C$_2$H \\ HD 163296 & 2.0 & 17.0 & 101& 46.7 & 133.3 & $^{12}$CO, $^{13}$CO, C$^{18}$O, HCN, CN, HCO$^+$, H$_2$CO, C$_2$H, c-C$_3$H$_2$ \\ MWC 480 & 2.1 & 21.9 & 162 & 37.0& 148.0 & $^{12}$CO, $^{13}$CO, C$^{18}$O, HCN, HCO$^+$, H$_2$CO, C$_2$H \\ \hline Elias 2-27 & 0.46 & 0.91 & 116 & 56.7 & 118.8 & $^{12}$CO ($2-1$), $^{13}$CO ($3-2$), C$^{18}$O ($3-2$), CN ($7/2 - 5/2$) \\ WaOph\,6 & 1.1 & 0.68 & 123 &47.3 & 174.2 & $^{12}$CO ($3-2$), $^{12}$CO ($2-1$), $^{13}$CO ($3-2$), HCO$^+$ ($3-2$)\\ \hline \end{tabular} \tablefoot{ \tablefoottext{a}{Stellar and disk parameters taken from \citet{MAPS_Oberg} for the five disks in the MAPS sample and from \citet{DSHARP_Huang_radial} and \citet{DSHARP_Andrews} for Elias\,2-27 and WaOph\,6, except for stellar masses. The stellar mass is estimated dynamically in \citet{Veronesi_2021_Elias} for Elias 2-27 and in \citet{Law_2022_12CO} for WaOph\,6.} \tablefoottext{b}{The transitions of each molecule for the disks of the MAPS sample are detailed in Section 2.1. For Elias 2-27 and WaOph\,6 the $J$ transition of each molecule is shown in parentheses.} } \label{table_sample_all} \end{table*}

To the MAPS sample we add publicly available data of Elias 2-27 and WaOph\,6. In both sources we study $^{12}$CO $J=2-1$ emission from DSHARP \citep{DSHARP_Andrews}. We also include $^{13}$CO and C$^{18}$O $J=3-2$ data of Elias 2-27 from ALMA programs \#2016.1.00606.S and \#2017.1.00069.S (P.I. L. P\'erez). Calibration and imaging procedures for Elias 2-27 can be found in \citet{DSHARP_Andrews} and \citet{Paneque-Carreno_Elias1}. For WaOph\,6 we include archival data from ALMA program \#2015.1.00168.S (P.I. G. Blake) and the ALMA Large Program DSHARP \citep{DSHARP_Andrews}. After self-calibration we detect emission from $^{12}$CO, $^{13}$CO, C$^{18}$O, HCN, CN and HCO$^{+}$ $J = 3-2$ in WaOph\,6. Due to the moderate spatial resolution of the data set (0.3-0.4$\arcsec$), in this study we only analyze the $^{12}$CO, $^{13}$CO and HCO$^{+}$ data, for which we can confidently study the vertical distribution of the disk. The integrated moment maps for all molecules observed in WaOph\,6 and details on the self-calibration steps are found in Appendix A. Overall, our sample consists of two Herbig and five T Tauri stars \citep{MAPS_Oberg, DSHARP_Andrews}. There are three disks with spirals, and four with rings and gaps \citep{DSHARP_Huang_radial, DSHARP_Huang_Spirals}.
Additionally, various sources show distinct kinematical features, such as signatures of late infall in GM Aur \citep{MAPS_huang} and Elias 2-27 \citep{Paneque-Carreno_Elias1}, and possible planetary perturbations in HD 163296 \citep{Pinte_2018_hd16planet, Teague_2018_hd16planet, Izquierdo_2021_hd16planet, MAPS_Teague} and MWC 480 \citep{MAPS_Teague}.

\subsection{Method}
\subsubsection{Vertical profile extraction}

To extract the emission surfaces we use the geometrical method outlined in \citet{Pinte_2018_method}, as applied in \citet{Paneque-Carreno_Elias1}. Knowing the central star position and the geometrical properties of the disk, the vertical profile of the upper emitting layer can be recovered directly from the channel map observations by tracing the location of the emission maxima in each channel, assuming that they trace the isovelocity curve. For the MAPS sample, the data have been centered and aligned following the location of the continuum peak emission \citep{MAPS_Oberg}, where we assume the central star is located. The same centering has been performed on the Elias 2-27 and WaOph\,6 data sets; therefore, we take the center of the image as the stellar location. If the inclination, flaring and resolution of the disk allow the identification of the upper and lower surfaces, both vertical profiles can be traced independently with confidence. In cases where the surfaces cannot be visually identified separately, we assume that the peak of emission in the channel map comes from the upper surface, which is expected to be brighter than the lower surface. From the extracted location of the maxima, geometrical relations can be used to obtain the vertical profile of the emission \citep[see][for more details]{Pinte_2018_method}; a minimal sketch of these relations is given below.

To trace the maxima we use our own implementation of the method (ALFAHOR), which relies on masking each channel through visual inspection to identify separately the near and far sides of the emission from the upper layer (see Figure \ref{masks} for clarification). The masks are drawn after the channel map emission has been rotated, using the position angles indicated in Table \ref{table_sample_all}, such that the bottom side of the disk emission is towards the south. The emission maxima are retrieved automatically, but only from sampling the pixels within the masks. The masks are drawn with margins as conservative as possible, to avoid biasing the recovered structure, and there are no particular criteria regarding maximum outer radius or any other value\footnote{A repository with the masks and code developed for this study can be found at https://github.com/teresapaz/alfahor.}. In this work we only study the upper layer of emission, but in those cases where the lower layer can be identified (as can be observed for $^{12}$CO in Figure \ref{masks}) it would be possible to additionally trace the vertical extent of the lower layer.

\begin{figure}[h!] \centering \includegraphics[width=\hsize]{masks_show.pdf} \caption{Rotated 7.86\,km\,s$^{-1}$ channel map emission of the $J = 2-1$ transition of CO isotopologues in HD 163296. Overlaid are the regions masked as far and near sides of the upper surface in dashed and solid black lines, respectively. White dots trace the emission maxima within the masked regions. } \label{masks} \end{figure}
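For reference, the geometrical relations of \citet{Pinte_2018_method} used above can be written compactly. The following Python sketch is only a minimal illustration under our assumptions (star-centered maps rotated so that the major axis is horizontal); the function name and the example values are ours and are not part of ALFAHOR:

\begin{verbatim}
import numpy as np

def deproject_maxima(x, y_far, y_near, inc_rad, dist_pc):
    # Geometrical relations of Pinte et al. (2018):
    # x, y_far, y_near are sky offsets from the star [arcsec],
    # measured on a PA-rotated channel map.
    y_c = 0.5 * (y_far + y_near)                  # projected ring center
    r = np.hypot(x, (y_far - y_c) / np.cos(inc_rad))
    z = y_c / np.sin(inc_rad)                     # height above midplane
    return r * dist_pc, z * dist_pc               # arcsec * pc -> au

# Illustrative values only (HD 163296: inc = 46.7 deg, d = 101 pc)
r_au, z_au = deproject_maxima(1.0, 0.72, -0.19,
                              np.radians(46.7), 101.0)  # ~(121, 37) au
\end{verbatim}

Each matched pair of far- and near-side maxima at the same offset $x$ along the major axis thus yields one $(r, z)$ point of the emitting surface.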
Previous work for some of the disks and molecules in our sample \citep{MAPS_Law_Surf} used DISKSURF \citep{disksurf}, a publicly available implementation of the \citet{Pinte_2018_method} method. In DISKSURF the search for the emission maxima can be blind or constrained with initial expected values of the minimum and maximum $z/r$, a minimum SNR and selected channels. While this allows for a systematic search and characterization of the channel emission, in some cases pixels from the bottom surface are incorrectly classified as belonging to the upper surface, and vice versa. This induces noisier vertical profiles, as there is a large data spread caused by the contamination from pixels of the lower surface of the disk. The SNR cut-off is also problematic when dealing with emission from less abundant molecules. By masking the channels after visual inspection, we reach locations of lower SNR and avoid contamination from the lower surface, obtaining a more accurate description of the vertical profile. Our implementation has been tested and illustrated in previous works \citep{Paneque-Carreno_Elias1, leemker_2022_LkCa15}, and in this analysis we also compare it to the results of \citet{MAPS_Law_Surf} using DISKSURF (see Appendix B). Our results are consistent for bright tracers such as $^{12}$CO and $^{13}$CO, particularly in the inner region ($<$200\,au); however, through our methodology we are able to trace a larger inventory of molecules out to larger radii.

For each disk and molecule we obtain the maxima from the channel maps by sampling every quarter of the beam semi-major axis in Cartesian coordinates, after correcting for the disk position angle (PA). From all the obtained data points (see the panels in Appendix B with all of the retrieved data points for each disk and tracer) we present the vertical profiles and the associated dispersion as the average value and the standard deviation within radial bins as wide as the beam semi-minor axis. In some disks, due to low SNR, there are fewer data points. To avoid biases due to lack of sampling, we only consider the data from radial bins that contain at least two independent data points. For Elias 2-27 it has been shown that there are azimuthal asymmetries in the vertical extent \citep{Paneque-Carreno_Elias1}, but we do not find any indication of this behaviour in the other disks and molecules of our sample. Therefore, all of the data points are considered to compute the radial profiles; in the case of Elias 2-27, only the data points from the west side are considered. Table \ref{table_sample_all} details, for each system, the molecular emission from which we are able to produce vertical profiles.

\begin{figure*}[h!] \centering \includegraphics[width=\hsize]{panel_hd163296.pdf} \caption{For HD 163296, vertical emission profiles (left panel) and azimuthally averaged peak brightness temperature profiles (right panel) for the various tracers studied, including molecules beyond the CO isotopologues. Solid colored lines show the mean value of each profile and shaded regions the 1$\sigma$ data dispersion. The vertical blue lines in the right panel indicate the location of millimeter continuum gaps and rings detected by \citet{DSHARP_Huang_radial}. } \label{panel_hd16} \end{figure*}

Each emission surface, in each disk and isotopologue, can be parametrized using an exponentially tapered power law, defined as
\begin{equation}
z(r) = z_0\times \left(\frac{r}{100\,\mathrm{au}}\right)^{\phi} \times \exp\left[-\left(\frac{r}{r_{\mathrm{taper}}}\right)^{\psi}\right].
\end{equation}
The best-fit values of $z_0$, $\phi$, $r_{\mathrm{taper}}$ and $\psi$ for each system are found using a Markov chain Monte Carlo (MCMC) sampler as implemented by emcee \citep{emcee_ref}. For the fitting procedure we consider all data points. If convergence is not reached for $r_{\mathrm{taper}}$ and $\psi$, or if $r_{\mathrm{taper}}$ is much larger than the radial extent of the profiles, we assume a simple power-law profile and fit only for $z_0$ and $\phi$. Table \ref{table_vertical_co} presents the computed parameters of the exponentially tapered or single power laws for each disk and CO isotopologue.
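As an illustration of this fitting step, the sketch below fits equation (1) with emcee. It is a minimal example under assumed flat positive priors and a Gaussian likelihood with fixed uncertainty; all names and numbers are ours for illustration and do not reproduce our exact setup:

\begin{verbatim}
import numpy as np
import emcee

def z_model(r, z0, phi, r_taper, psi):
    # Exponentially tapered power law, equation (1); r in au
    return z0 * (r / 100.0)**phi * np.exp(-(r / r_taper)**psi)

def log_prob(theta, r, z, sigma):
    z0, phi, r_taper, psi = theta
    if z0 <= 0 or phi <= 0 or r_taper <= 0 or psi <= 0:
        return -np.inf                 # assumed flat positive priors
    resid = (z - z_model(r, *theta)) / sigma
    return -0.5 * np.sum(resid**2)

# Placeholder data standing in for the extracted (r, z) points
rng = np.random.default_rng(1)
r = rng.uniform(20, 400, 200)
z = z_model(r, 30.0, 1.2, 300.0, 2.0) + rng.normal(0, 2, r.size)

ndim, nwalkers = 4, 32
p0 = np.array([30.0, 1.0, 300.0, 2.0]) \
     + 1e-3 * rng.normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob,
                                args=(r, z, 2.0))
sampler.run_mcmc(p0, 2000)
fit = np.median(sampler.get_chain(discard=500, flat=True), axis=0)
\end{verbatim}

In practice one would adopt the priors and convergence diagnostics appropriate to each data set; the median of the flattened chain is used here only as a compact point estimate.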
\subsubsection{Brightness temperature calculation}

The brightness temperature ($T_b$) profiles are obtained from the peak intensity ($I$) using the full Planck law. The relationship between the two quantities is
\begin{equation}
T_b = \frac{h \nu}{k} \left[\ln\left(1 + \frac{2h\nu^3}{I c^2}\right)\right]^{-1},
\end{equation}
where $h$ is the Planck constant, $k$ the Boltzmann constant, $c$ the speed of light and $\nu$ the frequency of the emission. The temperature profiles are computed from the azimuthally averaged peak intensity maps. To accurately deproject the data, we use the best-fit power-law or exponentially tapered power-law model of each molecule. The azimuthally averaged peak intensity profile is obtained using the GoFish package \citep{gofish}, a $\pm$30$^{\circ}$ wedge across the semi-major axis \citep[as done in][]{MAPS_Law_radial} and radial bins half the size of the beam semi-major axis.
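For reference, equation (2) can be applied directly to the averaged peak intensities. The sketch below is a minimal illustration in cgs units, with a purely hypothetical intensity value:

\begin{verbatim}
import numpy as np

H_PLANCK = 6.626e-27   # [erg s]
K_BOLTZ = 1.381e-16    # [erg / K]
C_LIGHT = 2.998e10     # [cm / s]

def brightness_temperature(intensity_cgs, nu_hz):
    # Full Planck inversion, equation (2); intensity in
    # erg s^-1 cm^-2 Hz^-1 sr^-1, frequency in Hz.
    arg = 1.0 + 2.0 * H_PLANCK * nu_hz**3 / (intensity_cgs * C_LIGHT**2)
    return (H_PLANCK * nu_hz / K_BOLTZ) / np.log(arg)

# Hypothetical peak intensity at the 12CO J=2-1 frequency
tb = brightness_temperature(5e-14, 230.538e9)   # ~7 K
\end{verbatim}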
\section{Results}
\subsection{The special case of HD\,163296}

HD\,163296 stands out from the rest of the sample, as it is the disk where we were able to trace the emission layer for the highest number of tracers (see Table \ref{table_sample_all}). For instance, it is the only system from the MAPS sample where we could obtain vertical profiles for CN and c-C$_3$H$_2$. HD\,163296 is also a system of interest due to the detection of at least two planetary signatures, at 94 and 261\,au \citep{Pinte_2018_hd16planet, Teague_2018_hd16planet, MAPS_Teague, Izquierdo_2021_hd16planet}. Figure \ref{panel_hd16} presents the emission surface and brightness temperature profiles of each studied molecule. The emission surface of $^{12}$CO has the highest $z/r$; it traces a value of 0.3 up to $\sim$350\,au, where it has a turning point and steeply declines. The emission surfaces of CN, HCN and $^{13}$CO $J=2-1$ are located in the interval $z/r \sim$ 0.1-0.3. c-C$_3$H$_2$ and H$_2$CO trace close to $z/r \sim$ 0.1. C$_2$H also follows $z/r \sim$ 0.1, but its emission surface rapidly declines at $r\sim$100\,au. This is in agreement with the C$_2$H vertical profile derived in \citet{MAPS_Law_Surf} for HD\,163296. Below $z/r \sim$ 0.1 lies the emission of HCO$^+$, $^{13}$CO $J=1-0$ and C$^{18}$O (only in the inner $\sim$150\,au in the case of C$^{18}$O). Beyond $\sim$200\,au, the vertical profile of C$^{18}$O rises and traces a region closer to $^{13}$CO $J=2-1$.

The right panel of Figure \ref{panel_hd16} shows our results for the brightness temperature profiles of each tracer. By comparing molecules found at similar vertical locations, but with different brightness temperatures, we can estimate the optical depth of the emission \citep[not presented in this work, but see][for an example]{Paneque-Carreno_2022_Elias_CN}. In the case of optically thick tracers, the brightness temperature will be a probe of the kinetic temperature of the studied region.

\begin{figure*}[h!] \centering \includegraphics[width=\hsize]{panel_CO_all.pdf} \caption{Vertical profiles for CO isotopologues as extracted from the channel maps of each disk. Shaded regions show the dispersion of the data points and solid colored lines trace the average values within each radial bin. Note that Elias 2-27 and WaOph\,6 were observed in a higher transition ($J = 3-2$) for some CO isotopologues. Solid grey lines show constant $z/r$ values of 0.1, 0.3 and 0.5. Each row has a different vertical extent. } \label{panel_CO_all} \end{figure*}

As expected, due to its location, high abundance and optical depth, $^{12}$CO has the highest brightness temperature. CN, HCN and $^{13}$CO $J=2-1$ emit from the same vertical region; however, $^{13}$CO $J=2-1$ has a temperature of $\sim$20\,K, while HCN is found at $\sim$10\,K and CN at $\sim$5\,K. We expect $^{13}$CO $J=2-1$ to be marginally optically thick, while theoretical models and observational constraints \citep{Cazzoletti_2018_CN, MAPS_Bergner} predict that CN and HCN are likely optically thin, which would lead to a lower brightness temperature, as measured in this work. For the $z/r \sim$ 0.1 molecules, we trace similar temperatures for c-C$_3$H$_2$ and H$_2$CO, at $T_b\sim$5\,K, and a higher temperature for C$_2$H, with a strong dip at 75-85\,au. This dip is also seen in the HCN and c-C$_3$H$_2$ profiles and coincides with a gap found in the line emission of these molecules \citep{MAPS_Law_radial}. Finally, among the molecules closer to the midplane, $^{13}$CO $J=1-0$ and C$^{18}$O have a similar $T_b\sim$18\,K. HCO$^+$ has a lower temperature profile, with $T_b\sim$8\,K, indicating that it is likely optically thin.

Besides optical depth differences, \citet{leemker_2022_LkCa15} show that differences in the spectral resolution may lead to variations in the extracted brightness temperature. Lower spectral resolution data will result in brightness temperatures lower by 10-60\%, depending on the resolution difference \citep[see Appendix A.3 in][]{leemker_2022_LkCa15}. In the MAPS data, all observations in Band 6 (211-275\,GHz) have a velocity resolution of 0.2\,km\,s$^{-1}$, whereas observations taken in Band 3 (84-116\,GHz) have a resolution of 0.5\,km\,s$^{-1}$. Therefore, CN, HCO$^{+}$ and $^{13}$CO $J=1-0$ would likely have higher brightness temperature profiles if we compared them at the same spectral resolution as the rest of the tracers. The brightness temperature of all molecules shows a steep turnover in the inner 50\,au, which is likely due to beam smearing: at the source distance, the beam major axis of each tracer corresponds to 13-30\,au, and to 50\,au in the case of CN. Additional perturbations in the temperature profiles may be due to line flux subtraction from the continuum emission, as we are using the continuum-subtracted data sets. The dust features are indicated at the top of the right panel of Fig. \ref{panel_hd16} for reference.

\subsection{CO isotopologue vertical structure of the full sample}

The extracted emission surfaces of the CO isotopologues for the complete sample of disks are presented in Figure \ref{panel_CO_all}. As explained in Section 2.2, we are able to trace the emitting layer out to a larger radial extent than the previous work on the MAPS sample \citep{MAPS_Law_Surf}. The main difference is in the C$^{18}$O $J = 2-1$ emission, which at larger radii is observed to trace a region very similar to $^{13}$CO $J = 2-1$. This is seen in all disks where both isotopologues are available and is in agreement with previous analyses of IM Lup \citep{Pinte_2018_method} and Elias 2-27 \citep{Paneque-Carreno_Elias1}.
We are also able to compute a vertical profile for $^{13}$CO $J = 1-0$, which mostly traces a layer below $^{13}$CO $J = 2-1$, except in MWC 480 and AS 209, where both transitions are very close to the midplane. The sample is divided into three groups, indicated by the three separate rows of Figure \ref{panel_CO_all}. This classification depends on the $z/r$ values traced by $^{12}$CO and on the radial extension of the emission. IM Lup, GM Aur and Elias 2-27 are the most vertically extended disks, with $z/r \geq$0.3. HD\,163296 and MWC\,480, the two Herbig Ae disks, have a $^{12}$CO vertical profile that traces $z/r \sim$0.3. Finally, AS\,209 and WaOph\,6 are the flattest and least radially extended disks in our sample, with $z/r \sim$ 0.1-0.3. In some cases, such as for $^{12}$CO in Elias 2-27, we lack sampling of the outer regions due to cloud absorption \citep{Perez_2016_Elias, Paneque-Carreno_Elias1}. In this case the obtained best fit for the vertical profile (methodology described in Section 2.2) follows a single power law; however, this description of the data may only be valid in the inner $\sim$100\,au of the disk. For C$^{18}$O and $^{13}$CO ($J = 1-0$) it is necessary to fit a single power-law model in all disks, due to the flat morphology or lack of turnover in the measured profiles. It is important to emphasize that this does not mean there is no turnover, as our parametric description is only valid up to the sampled radial location.

\subsubsection{Modulations in the surface and correlation to kinematical and dust features}

Even though the data can be fitted using a power law, the vertical profiles in Figure \ref{panel_CO_all} show modulations or ``bumps''. This behaviour is identified in the $^{12}$CO surface of HD\,163296, MWC\,480 and IM\,Lup. To trace the deviations from the possibly smooth vertical profile, a reference profile is assumed, which we refer to as the ``baseline'' (see Fig. \ref{example_mod}). This baseline does not coincide with the previously derived best-fit power-law or exponential profiles, which are not considered for this analysis because they cut through some of the features seen in Figure \ref{panel_CO_all}. It is important to note that the retrieval of vertical modulations strongly depends on the assumed baseline, which we cannot know with certainty without dedicated modeling and understanding of the properties of each disk. We assume a baseline that traces the averaged vertical profile such that it follows a positive gradient at all radial locations. If there is a turnover after which modulations are observed, we allow the baseline to follow only negative gradients from the location where no further positive gradients are found. The final baseline profile is subtracted from all the data points and we compute the binned residuals. Data lying below the baseline are assumed to trace modulations, and we fit Gaussians at these locations. The number of Gaussian features to be fitted and the initial guesses for their radial locations are determined through visual inspection of the residual data points. The best-fit surface is obtained by simultaneously fitting multiple Gaussians to the assumed baseline. The fit considers all of the data points retrieved from the channel map analysis; the binned data are only used to determine the baseline and to aid visual inspection. This process is schematically shown in Figure \ref{example_mod}.
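A minimal sketch of this modulation fit is given below. The baseline, the placeholder data and all names are illustrative only, and scipy's least-squares optimizer stands in for the actual fitting machinery:

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def surface_with_dips(r, baseline, *params):
    # Baseline surface minus a sum of Gaussian modulations;
    # params holds (amplitude, center, width) triplets, one per dip.
    z = baseline(r)
    for amp, r0, sig in zip(*[iter(params)] * 3):
        z = z - amp * np.exp(-0.5 * ((r - r0) / sig)**2)
    return z

baseline = lambda r: 0.25 * r          # toy linear baseline [au]
model = lambda r, *p: surface_with_dips(r, baseline, *p)

# Placeholder data with two injected dips
rng = np.random.default_rng(2)
r_pts = np.linspace(20, 400, 300)
z_pts = model(r_pts, 5.0, 100.0, 10.0, 8.0, 250.0, 15.0)
z_pts = z_pts + rng.normal(0, 1.0, r_pts.size)

# Initial guesses taken from visual inspection of the residuals
p_guess = [4.0, 95.0, 12.0, 6.0, 240.0, 12.0]
p_best, _ = curve_fit(model, r_pts, z_pts, p0=p_guess)
\end{verbatim}

The fitted amplitudes, centers and widths correspond to the quantities quoted at the top of each panel in Figure \ref{CO_bumps}.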
\begin{figure}[h!] \centering \includegraphics[width=\hsize]{example_baseline.pdf} \caption{Example of the baseline and of the best-fit Gaussian modulation extraction for the MWC\,480 $^{12}$CO data set. } \label{example_mod} \end{figure}

\begin{figure*}[h!] \centering \includegraphics[width=\hsize]{all_together_bumps.pdf} \caption{Data corresponding to $^{12}$CO $J = 2-1$ (black) and $^{13}$CO $J = 2-1$ (orange) for the three disks where modulations in the vertical surface are detected. Colored dots indicate the extracted data from the channel maps for each tracer. Continuous lines trace the best-fit surface considering the reference baseline and a number of fitted Gaussians. The location and width of the Gaussians for each molecule are indicated at the top of each panel. Vertical blue lines trace the dust structure from \citet{DSHARP_Huang_radial}. Vertical green lines show the reported kinematic residuals from \citet{MAPS_Teague} and \citet{Izquierdo_2021_hd16planet}. } \label{CO_bumps} \end{figure*}

The $^{12}$CO and $^{13}$CO $J = 2-1$ isotopologues are analyzed for HD\,163296, MWC\,480 and IM\,Lup, the disks and tracers where this behaviour is most clearly identified. C$^{18}$O is also analyzed for HD\,163296 and MWC\,480, but as no modulations are found in IM Lup the corresponding analysis is not shown in this section. Detailed plots and values for each tracer and disk can be found in Appendix B. The best-fit models of $^{12}$CO and $^{13}$CO are shown overlaid on the data in Figure \ref{CO_bumps}, where we compare the locations of the modulations in the different tracers with the location of the dust rings and gaps found in ALMA millimeter continuum emission images \citep{DSHARP_Huang_radial} and with the reported kinematic features \citep{MAPS_Teague, Izquierdo_2021_hd16planet}. HD\,163296 displays a tight correlation between all of the modulations in both tracers. Additional analysis of the C$^{18}$O emission in HD\,163296 also finds modulations at radial locations similar to the inner three structures reported in $^{12}$CO and $^{13}$CO. In MWC\,480 we find a feature in $^{12}$CO and C$^{18}$O, at 150\,au, that does not seem to have a counterpart in $^{13}$CO. IM\,Lup has a strong feature at 393\,au that is recovered in both tracers; however, the other modulations do not match between isotopologues. Contrary to the other disks, IM\,Lup does not have visually identifiable drops in the vertical height in the inner 300\,au, and the features we obtain are broader than in the other systems. It may be that in this case we are tracing the flaring of the inner disk, which, due to our linear baseline, is not properly captured by our reference surface model.

We find that HD\,163296 and MWC\,480 present a strong correlation between the location of millimeter continuum emission gaps \citep{DSHARP_Huang_radial} and vertical modulations in the gas emission. A correlation is also found between the radial location of kinematic deviations detected in residuals from the velocity maps \citep{MAPS_Teague, Izquierdo_2021_hd16planet} and the vertical modulations traced in both HD\,163296 and MWC\,480. In contrast, there is no relation between the emission gaps and rings traced from the integrated emission maps \citep{MAPS_Law_radial} in either CO isotopologue for any of the disks.
An additional caveat to these results is that the extraction of the emission surfaces assumes smooth Keplerian motion of the material \citep{Pinte_2018_method}, and strong deviations could be responsible for the features we detect; precisely quantifying this effect will be the subject of future work. However, the vertical modulations are recovered over the complete azimuthal range; they are not localized in azimuth, as is the case for planetary kinks \citep[e.g.][]{Pinte_2018_method, Perez_2018_dopplerflip, izquierdo_2021_discminer1}. Considering that the emission surfaces extracted for $^{12}$CO and $^{13}$CO are expected to be related to the $\tau = 1$ surfaces, it is likely that the recovered modulations trace actual physical perturbations and decreases in the emitting surface or in the column density of CO. We comment on this further in Section 4. A previous analysis of vertical modulations by \citet{MAPS_Law_Surf} studied the inner $\sim$100\,au of the MAPS sample, and our results recover all of the features previously reported in HD\,163296 and MWC\,480. The depths and widths of our features differ slightly in some cases from the previous characterization, due to the differences in extracting the vertical profile and in computing the assumed surface. The depth of each modulation is computed in the same way as in \citet{MAPS_Law_Surf}. We consider the height of the assumed surface at the feature location ($z(r_0)$) and the height depletion caused by the Gaussian feature ($\Delta z$), which is equivalent to the fitted amplitude. The ratio of these two numbers gives the fractional depletion with respect to the assumed surface value: higher depth values indicate a larger relative decrease in the surface height. Our results show that, for features found in both tracers at similar locations, the relative depth in $^{13}$CO is larger than in $^{12}$CO in all cases. Exact values are given in Appendix B.

\subsubsection{Brightness temperature profiles}

\begin{figure*}[h!] \centering \includegraphics[scale=0.72]{temperature_panel.pdf} \caption{Brightness temperature profiles of the CO isotopologue emission in each disk. Solid lines indicate the mean value and shaded regions the standard deviation within each radial bin, divided by the number of beams in each annulus. Vertical and radial scales are shared between panels. Dashed vertical lines in IM Lup, HD 163296 and MWC 480 indicate the location of the vertical modulations determined in this work for $^{12}$CO. } \label{panel_temp} \end{figure*}

As done for HD\,163296, we obtain the brightness temperature profiles for the CO isotopologue emission of the complete sample. In particular, for Elias 2-27 we use only the emission from the east side, due to the elevation and brightness asymmetries \citep{Perez_2016_Elias, Paneque-Carreno_Elias1}. WaOph\,6 and AS\,209 also show strong absorption and a brightness asymmetry; therefore, for the azimuthal profiles we only consider the south and west sides, respectively. For the rest of the systems the profiles are extracted using a $\pm$30$^\circ$ wedge across the semi-major axis. Figure \ref{panel_temp} shows the results for each system and CO isotopologue. The ordering of the panels matches Figure \ref{panel_CO_all}: the most vertically extended disks are in the top row, the intermediate disks in the middle row and the smaller, flatter disks in the bottom row. All disks show a slowly decreasing temperature profile towards larger radii.
Elias 2-27, HD\,163296, MWC\,480 and WaOph\,6 show a steeper slope in the temperature decrease of $^{12}$CO. HD\,163296 and MWC\,480 also display higher temperatures for all isotopologues, which is expected, as they are both warm disks around more luminous Herbig stars. In all disks from the MAPS sample, $^{13}$CO $J = 1-0$ and C$^{18}$O $J = 2-1$ trace the same brightness temperatures, which are mostly below the CO freeze-out value ($\sim$21\,K). This happens at radii beyond 80\,au for the Herbig stars, but at all radii for IM Lup, GM Aur and AS 209. We note that, as discussed previously, the spectral resolution of $^{13}$CO $J = 1-0$ is lower than that of the other transitions, which may artificially result in lower brightness temperatures \citep{leemker_2022_LkCa15}. Very low brightness temperatures may also be indicative of low optical depths of the studied isotopologues, in which case they do not represent the kinetic temperature of the gas at that location. High optical depth tracers, such as $^{12}$CO, are expected to have brightness temperature profiles that better trace the kinetic temperature at their location.

The vertical dashed lines in Figure \ref{panel_temp} show the location of the modulations in the vertical surface of $^{12}$CO for IM Lup, HD 163296 and MWC 480 (see Section 3.2.1). While the temperature profiles do not show such strong features, slight modulations are distinguished that may be consistent with the locations of the vertical modulations. We tentatively detect relations between dips in the brightness temperature profiles of $^{12}$CO and the locations of the vertical modulations. Future detailed modeling of each source must be conducted to determine whether the variations in the temperature profile are sufficient to explain the drops in the CO emission layer, and to determine their causes. Even though the measured $^{12}$CO vertical profiles of the disks are different, and allow the classification of the sample into three categories based on vertical extension, the brightness temperature profiles of $^{12}$CO are very similar. Considering only the disks from the MAPS sample, there is a clear difference between the $^{12}$CO temperatures of T Tauri and Herbig Ae stars, as already reported in \citet{MAPS_Law_Surf}. However, within the T Tauri stars, the vertical extent of the $^{12}$CO emission is twice as large in IM Lup and GM Aur as in AS 209. WaOph\,6 and Elias 2-27 are also T Tauri stars, and both show a warmer $^{12}$CO inner disk than the other low-mass stars from the MAPS sample.

\subsubsection{Calculation of disk scale height}

We have presented the vertical location of several CO isotopologues in a sample of disks; however, this emission surface depends on several physical-chemical conditions. A more general characterization of a disk's vertical structure is given by the gas pressure scale height ($H$), which can be used to compare theoretical models with observations. In order to derive this information, we design a simple single-layer model. Despite its simplicity, this model allows us to provide a first quantitative estimate of the material distribution.
Assuming that the disk is vertically isothermal, it is possible to relate the scale height to the total volumetric gas density ($\rho_\mathrm{gas}$) and the surface density of the disk ($\Sigma$) through
\begin{equation}
\rho_\mathrm{gas}(z) = \Sigma \frac{e^{-z^2/2H^2}}{\sqrt{2\pi}H}.
\end{equation}
We expect our derived vertical profile of $^{12}$CO to trace the region where the emission becomes optically thick ($\tau\geq1$) or where CO becomes self-shielding. For $^{12}$CO ($J = 2-1$) at a temperature of 40-60\,K this occurs when the vertically integrated column density of $^{12}$CO reaches the critical value $N_{\mathrm{CO}, \mathrm{crit}} \simeq 5 \times 10^{16}\,\mathrm{cm}^{-2}$ \citep[resulting in $\tau_{CO}\sim$1; values obtained using the RADEX online simulator,][]{van_der_Tak_2007_RADEX}. As we are studying the CO column density, we assume a constant abundance of $x$(CO) = 2.8$\times$10$^{-4}$ with respect to H$_2$ in the disk \citep{Lacy_1994_H2_CO, MAPS_Zhang}, such that $\Sigma_{\mathrm{CO}} = \Sigma x$(CO). Therefore, we can relate our derived vertical $^{12}$CO location ($z_{\mathrm{CO}}$) to the scale height of the disk and the critical column density following
\begin{equation}
N_{\mathrm{CO}, \mathrm{crit}} = \frac{\Sigma_{\mathrm{CO}}}{\sqrt{2\pi}H}\int_{z_{\mathrm{CO}}}^{+\infty} \mathrm{exp} \left ( -\frac{z^2}{2H^2} \right) \,dz.
\end{equation}
To solve equation (4) we change variables to $z' = z/H$ and obtain an equation that can be solved numerically, at each radial location where we have information on the $^{12}$CO emitting layer ($z_{\mathrm{CO}}$), to obtain the disk scale height if the emission is optically thick:
\begin{equation}
\int_{z_{\mathrm{CO}}/H}^{+\infty} e^{-z'^2/2} \,dz'= \frac{N_{\mathrm{CO}, \mathrm{crit}}\sqrt{2\pi}}{\Sigma_{\mathrm{CO}}}.
\end{equation}

\begin{figure}[h!] \centering \includegraphics[scale=0.5]{hydro_height_all.pdf} \caption{Derived gas pressure scale height ($H$) for each of the disks in the MAPS sample (solid colored lines), from our analysis of the $^{12}$CO emitting surface. Dashed lines show the predicted scale height for each disk from \citet{MAPS_Zhang}. Grey solid lines show the location of constant $H/R$ at 0.1 and 0.2. } \label{scale_height} \end{figure}

Considering the surface density best-fit parameters derived from models in \citet{MAPS_Zhang}, we obtain the scale height profiles from our measurements for the disks in the MAPS sample. We note that the CO depletion profiles from \citet{MAPS_Zhang} are not considered, because they are obtained for C$^{18}$O and C$^{17}$O; we would therefore have to make assumptions on the isotope ratios to convert them into $^{12}$CO depletion profiles, adding an additional uncertainty \citep{Miotello_2014}. This analysis is not done for Elias 2-27 and WaOph\,6, as we do not have a model of their surface density, and replicating the study of \citet{MAPS_Zhang}, which involves thermochemical modeling and radiative transfer, is beyond the scope of this work. Figure \ref{scale_height} shows our estimate of the scale height based on the best-fit parametrization of the $^{12}$CO surfaces, compared to the scale height profiles from the best-fit models studied in \citet{MAPS_Zhang}. Our results are in agreement with the models and with the canonical assumption of $H/R$ = 0.1 for a standard irradiated disk beyond $\sim$100\,au \citep[see equation 5 in][]{Lodato_2019_H_R}.
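In practice, equation (5) can be solved with a standard root finder, since the left-hand side can be written in terms of the complementary error function, $\int_{a}^{+\infty} e^{-z'^2/2}\,dz' = \sqrt{\pi/2}\;\mathrm{erfc}(a/\sqrt{2})$. The sketch below is a minimal illustration with toy input values; in our analysis $\Sigma_{\mathrm{CO}}$ follows from the \citet{MAPS_Zhang} surface densities:

\begin{verbatim}
import numpy as np
from scipy.special import erfc
from scipy.optimize import brentq

N_CRIT = 5e16            # critical CO column density [cm^-2]
AU = 1.496e13            # [cm]

def scale_height(z_co, sigma_co):
    # Solve equation (5) for H; z_co in cm, sigma_co in cm^-2.
    # A solution requires sigma_co > 2 * N_CRIT (otherwise the
    # column above z_co never reaches the critical value).
    rhs = N_CRIT * np.sqrt(2.0 * np.pi) / sigma_co
    lhs = lambda H: np.sqrt(np.pi / 2.0) * erfc(z_co / (np.sqrt(2.0) * H))
    return brentq(lambda H: lhs(H) - rhs, 1e-3 * z_co, 1e3 * z_co)

# Toy example: z_CO = 30 au at r = 100 au, Sigma_CO = 1e19 cm^-2
H_au = scale_height(30 * AU, 1e19) / AU   # ~11.6 au, i.e. H/R ~ 0.12
\end{verbatim}

As the comment in the sketch notes, low $\Sigma_{\mathrm{CO}}$ drives the inferred $H$ upward, consistent with the sensitivity to the assumed density profile and CO abundance discussed below.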
\begin{figure*}[h!] \centering \includegraphics[width=\hsize]{plot_DALI_model_test.pdf} \caption{Results on the vertical location of $^{12}$CO (pink line) and the inferred pressure scale height (blue line). The $^{12}$CO vertical location is obtained with ALFAHOR through the analysis of mock channel maps generated from a DALI model which uses the surface density prescription of \citet{MAPS_Zhang} for each disk. Points extracted from the synthetic channel maps are shown in grey for comparison with the average value ($z_{\mathrm{CO}}$). The inferred pressure scale height is calculated following equation 5. The pressure scale height used as input for the DALI models ($H$) is shown for comparison (yellow line). Dashed, dot-dashed and dotted lines show the model location of CO millimeter optical depths ($\tau_{CO}$) of 1, 10 and 100, respectively. } \label{DALI_test} \end{figure*}

For IM Lup and HD\,163296 there are also estimates of the pressure scale height from the location of the scattered light surfaces \citep{ginski_2016_97048, Avenhaus_2018, Rich_2021}. Our values for the gas pressure scale height are below these estimates in both cases. The results show a layering where the $^{12}$CO emitting surface is the most vertically extended, followed by the scattering surface and then the gas pressure scale height (see Fig. \ref{comp_hd16_imlup_height}). This layering is in agreement with expectations from theoretical models, where the gas pressure scale height is estimated to be 3-4 times lower than the scattering surface \citep{chiang_2001_height, ginski_2016_97048}; our results, however, display a factor of only 1.5-2 between the scattering surface and the gas pressure scale height. We note that our estimates of the pressure scale height strongly depend on the assumed density profile and CO abundance ($x(\mathrm{CO})$). Low densities ($\Sigma_{\mathrm{CO}}$ below or close to the assumed critical CO column density) push the pressure scale height to large values, up to a factor of a few with respect to the derived CO emission layer. This indicates that, if we considered a depletion factor in the CO abundance, the $H/R$ value would increase.

To test the robustness of our method we compute 2D thermochemical models with DALI \citep{Bruderer_DALI_2013}, using the gas surface density prescription of \citet{MAPS_Zhang}, and create mock channel maps from which we retrieve the $^{12}$CO $J = 2-1$ vertical profile ($z_{\mathrm{CO}}$) by applying our methodology. The model details can be found in Appendix C.3 and the results of our test are displayed in Figure \ref{DALI_test}. We observe that, as expected in our model, the emitting layer we retrieve for $^{12}$CO from the mock channel maps is located at a height similar to that of the $\tau_{CO}\geq1$ surface. The optical depth is an output of the DALI models and is calculated at each radial point by vertically integrating, towards the disk midplane, the sum of the line and dust opacities and then subtracting the integrated dust-only opacity. In some cases the measured vertical profile of the $^{12}$CO emission traces a location closer to the disk midplane, and therefore an optically thicker region, reaching values of $\tau_{CO}\sim100$. This difference in the expected location of the $^{12}$CO emission does not seem to be related to any projection effects, such as disk inclination (see Figure \ref{DALI_inc}).
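As a toy illustration of this bookkeeping (DALI performs the integration internally on its own grid; the array contents and names below are ours), the height of the $\tau_{CO}=1$ surface can be located by cumulatively integrating the line extinction from the disk surface downwards:

\begin{verbatim}
import numpy as np

def tau_one_height(z_top_down, alpha_line):
    # Cumulative line optical depth from the surface down to the
    # midplane; z_top_down is a descending height grid [cm] and
    # alpha_line the line extinction coefficient [cm^-1] on it.
    dz = -np.diff(z_top_down)                        # positive steps
    tau = np.cumsum(0.5 * (alpha_line[1:] + alpha_line[:-1]) * dz)
    idx = np.searchsorted(tau, 1.0)
    return z_top_down[1:][idx] if idx < tau.size else np.nan
\end{verbatim}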
IM\,Lup is the system that traces higher $\tau_{CO}$ values in the outer disk, and it is also the source with the highest disk mass and radial extent (see Table \ref{table_DALI_param}). The resulting synthetic channel maps of this system are harder to trace at outer radii than those of the other sources, due to sudden brightness variations in the emission. This difficulty is reflected in the spread of the data points at 150-200\,au and the low sampling further out (see Figure \ref{DALI_test}). Additionally, HD\,163296 and MWC\,480 show features in the model vertical structure that resemble the modulations studied in the previous section; however, it is uncertain whether this is an actual physical effect or a structure that appears due to the coarser model resolution in the outer disk region. A future study focused on thermochemical models will aim at understanding the parameters that affect the exact location of the emission. The models shown in this section are only first approximations to future theoretical work and are not analyzed in detail here.

Using the vertical profiles extracted from the models, we apply our method to obtain the pressure scale height and recover an almost exact match to the scale height used as input for each source (see Figure \ref{DALI_test}). Differences between the inferred pressure scale height and the model input value are most apparent in the outer regions (r$\geq$250\,au), where the emission comes from a more optically thick zone (most apparent in IM Lup, as mentioned before; see Figure \ref{DALI_test}). A plot of the conversion factor between the $^{12}$CO emitting layer and the gas pressure scale height for different $\tau_{CO}$ values can be found in Appendix C.

\begin{figure*}[h!] \centering \includegraphics[scale=0.7]{panel_othermolec_all.pdf} \caption{Vertical profiles of the emission surface for molecules other than CO isotopologues in the studied disks. Elias 2-27 is not shown due to the lack of data in other tracers. Black curves show the location of the CO isotopologues in the $J = 2-1$ transition for reference: the solid line is $^{12}$CO, the dashed line $^{13}$CO and the dotted line C$^{18}$O. Each colored curve shows a different tracer, where the solid line is the mean value and the shaded region shows the dispersion of the retrieved data points within each radial bin. Grey lines mark constant $z/r$ values of 0.1, 0.3 and 0.5. } \label{panel_other_molec} \end{figure*}

While this observationally driven derivation of $H$ should be considered an approximate first attempt at describing the disk pressure scale height using the location of the CO emission, our initial tests with thermochemical models show it is a good approximation. We expect this method to hold if the gas in the disk follows Keplerian rotation, the $^{12}$CO emission is optically thick and the vertical distribution is similar to a Gaussian profile. Under these assumptions, our estimates show that all systems have scale heights with $H/R$ between 0.1 and 0.2 (Figure \ref{scale_height}), in agreement with theoretical predictions. An important caveat to our comparison with DALI models is that we have tested our method (equation 5) on a model that uses a Gaussian distribution for the vertical profile (see Appendix C for details), which is our initial assumption for inferring the pressure scale height (equation 3).
While our tests are an important check to ensure that our method is consistent, future work must focus on the errors that may appear if the vertical density profiles deviate from a Gaussian, and consider an analytical disk prescription that gives a better representation of the vertical temperature structure \citep[e.g. ][]{Chiang_Goldreich_1997, Dullemond_2002}.

\subsection{Other tracers}

For the MAPS program targets we are able to trace the vertical profiles of HCN, H$_2$CO, HCO$^+$ and C$_2$H in the same way as for the CO isotopologues. In the case of WaOph\,6, the lack of spectral and spatial resolution in the data only allowed us to recover the surface of HCO$^+$, even though HCN and H$_2$CO emission data are available too (see Appendix A). The vertical profiles of these molecules are shown, compared to the CO vertical layers, in Figure \ref{panel_other_molec}. Elias 2-27 has observations of CN $N=3-2$ hyperfine transitions from which the emitting surface has been recovered \citep{Paneque-Carreno_2022_Elias_CN}; however, as we are able to recover the emitting surface of CN $N=1-0$ only in HD\,163296 (shown in Figure \ref{panel_hd16}), we do not display this molecule in Figure \ref{panel_other_molec}.

\begin{figure*}[h!] \centering \includegraphics[scale=0.42]{z_over_r_all.pdf} \caption{Average $z/r$ values of each tracer and disk under study. To avoid the effect of the turnover in the case of vertically extended molecules, only data within 80\% of the $^{13}$CO (or $^{12}$CO for MWC\,480 and AS\,209) $r_{\mathrm{taper}}$ are considered. Note that for Elias 2-27 the CN transition ($N = 3-2$) is higher than that shown for HD\,163296 ($N = 1-0$). } \label{z_over_r} \end{figure*}

\subsubsection{Complete sample}

Molecules beyond the CO isotopologues seem to mostly reside close to the midplane. Due to this, we are not able to visually separate the upper and lower sides of the disk when masking the channel maps to extract the vertical profiles. This implies that there could be some contamination and scatter caused by the lower side of the disk. Additionally, for some tracers the vertical extent of the emission is comparable to the beam size; therefore, the profiles of these molecules are considered tentative results. Higher spatial resolution and better SNR data may allow us to constrain them in more detail in the future. The gaps in the vertical profiles are due to a lack of data (fewer than two points) in the radial bin (see Method section). We note that our results are in agreement with those obtained by \citet{MAPS_Law_Surf} for HCN and C$_2$H in the inner $\sim$150\,au.

\begin{figure*}[h!] \centering \includegraphics[width=\hsize]{panel_H2CO_HCN.pdf} \caption{Vertical surface profiles for HCN (top row) and H$_2$CO (bottom row). The green vertical line marks the edge of the dust continuum as reported in \citet{MAPS_Law_radial}. Solid and dashed lines in each panel mark rings and gaps, respectively, found in the emission maps of each molecule by \citet{MAPS_Law_radial}. In the profiles, the solid line marks the mean value and the grey shaded area the dispersion of the retrieved data points in each radial bin. } \label{panel_H2CO_HCN} \end{figure*}

The overall average $z/r$ values of each tracer are shown in Figure \ref{z_over_r}. As the various tracers have different turnover radii and extents, we follow a similar approach to \citet{Law_2022_12CO} and consider the emission only from within 80\% of the $^{13}$CO $r_{\mathrm{taper}}$ of each disk.
For MWC\,480 and AS\,209 we use $^{12}$CO, as no turnover is detected in $^{13}$CO (see Table \ref{table_vertical_co}). The emission from most tracers beyond the CO isotopologues has a large scatter, but overall comes from regions with $z/r \leq$0.1 (see right panel of Fig. \ref{z_over_r}). This is in agreement with the flat morphology of the channel maps and with the flat-disk assumption used in MAPS for emission other than CO \citep[e.g.][]{MAPS_Law_radial, MAPS_Guzman, MAPS_Bergner}. HCN arises from an elevated layer only in HD 163296, and HCO$^{+}$ shows a moderate elevation in IM Lup, AS 209 and WaOph\,6. H$_{2}$CO has a consistent behaviour across all stellar masses, with $z/r \sim$0.1. The emitting surface of CN can only be traced in Elias 2-27 and HD\,163296. In both disks CN is the highest emitting molecule aside from $^{12}$CO; however, in Elias 2-27 it traces the same region as $^{12}$CO, while in HD\,163296 it lies in a middle layer, closer to the rest of the molecular reservoir.

The stars in Figure \ref{z_over_r} are ordered along the horizontal axis by increasing stellar mass; however, the scale is not linear, as several of them have almost equal masses (see Table \ref{table_sample_all}). A tentative relation between the $z/r$ values of the CO tracers and the stellar mass may be recovered from the left panel of Figure \ref{z_over_r}, such that $z/r$ decreases with increasing stellar mass. This relationship has been tested for $^{12}$CO \citep{MAPS_Law_Surf, Law_2022_12CO}, and in this work we tentatively recover it for $^{13}$CO $J=2-1$ and C$^{18}$O $J = 2-1$ too (orange and light blue dots in Fig. \ref{z_over_r}). Contrary to the behaviour of its higher transitions, $^{13}$CO $J=1-0$ emits from a lower layer, at $z/r \leq$0.1, and does not show any correlation with the stellar mass. Considering the mean values, we note that $^{13}$CO $J=2-1$ lies in a vertical region 2-3 times lower than $^{12}$CO $J=2-1$ for all the disks in the MAPS sample. This also applies to WaOph\,6 in the $J=3-2$ transitions, but in Elias 2-27 $^{13}$CO is located closer to $^{12}$CO, and the ratio between their mean values is 1.5.

\subsubsection{Structure in HCN and H$_2$CO}

The HCN and H$_2$CO vertical profiles are well sampled (see all of the extracted data points in Appendix B) and show distinct morphologies that in most cases do not follow an exponentially tapered power law or a single power-law model. These profiles are presented separately in Figure \ref{panel_H2CO_HCN} for all disks in the MAPS sample except MWC 480, which is not included due to the lack of data points for HCN emission. Overlaid on the profiles are the locations of rings and gaps found in the integrated emission of each corresponding molecule and the edge of the millimeter continuum \citep{MAPS_Law_radial}. HCN shows a step-like profile in HD 163296, with peaks that mostly coincide with gaps in the integrated HCN emission. IM Lup shows a peak in the HCN profile at $\sim 270$\,au, which lies between a gas gap and a ring and also coincides with the single peak seen in the H$_2$CO vertical profile. GM Aur and AS 209 may have tentative HCN peaks around $\sim 150$\,au; however, the data do not allow us to properly characterize the profiles, and there is a large dispersion in the data points recovered from the channel maps. While HCN is expected to trace high layers in the disk atmosphere \citep{Visser_2018, MAPS_Bergner}, it is clear that this is only the case in HD 163296 (see also Fig. \ref{z_over_r}).
The H$_2$CO vertical profiles can be traced with less scatter in all disks, and for HD 163296, IM Lup and GM Aur we see clear peaks. In all cases there seems to be a peak close to, and just inward of, the edge of the millimeter continuum. The peaks in the vertical profiles of HD 163296 and IM Lup coincide with gas gaps, as was the case for HCN. GM Aur has gas rings and gaps detected only in the molecular emission at small ($<$100\,au) radii; however, it shows a strong, well-constrained rise in the emitting layer close to the continuum edge. In HD 163296 and IM Lup there seems to be a correlation between the presence of line emission gaps and the peaks in the vertical structure for both HCN and H$_2$CO. It could be that the substructure of the molecular emission affects our profiles, as we are tracing the data directly from the channel maps, assuming that the emission maxima trace an isovelocity curve. However, we do not see this effect in GM Aur, where H$_2$CO also has a strong vertical perturbation, and we note that most of the gaps in the molecular emission at outer radii in HCN and H$_2$CO have very low contrast \citep{MAPS_Law_radial}; therefore, we do not expect them to have such a noticeable impact on the retrieved profiles. All vertical profiles show peaked features beyond the continuum emission border; therefore, it is unlikely that this is an effect of continuum over-subtraction either. Overall, if the emission is optically thin, as expected from the brightness temperature profiles of HD\,163296 (Fig. \ref{panel_hd16}) and previous studies \citep{MAPS_Bergner, MAPS_Guzman}, the vertical profiles seem to be tracing distinct variations in the physical location from which HCN and H$_2$CO are emitted.

\section{Discussion}
\subsection{CO isotopologue vertical layering}

There have been dedicated studies predicting the regions from which molecules emit in protoplanetary disks \citep{van_Zadelhoff_2001_vert_models, Aikawa_2002, Walsh_2010, Miotello_2014, woitke_2016_prodimo}. We trace the location of the emitting region for various CO isotopologues in our seven disks, including different transitions of $^{13}$CO in the MAPS sample and of $^{12}$CO in WaOph\,6. Our analysis shows that $^{12}$CO emission comes from a wide range of vertical locations, in agreement with previous studies \citep{MAPS_Law_Surf, Law_2022_12CO}. This does not correlate directly with the emission location of the other CO isotopologues or of different molecules, except possibly $^{13}$CO (see Section 3.3 and Figure \ref{z_over_r}). We take as reference the work by \citet{van_Zadelhoff_2001_vert_models}, where literature disk models are processed using a full 2D Monte Carlo radiative transfer code to determine the vertical location of the emitting surfaces of CO and HCO$^{+}$ isotopologues. The locations traced for the CO isotopologues in our work are qualitatively compared with the expected location of CO for a chemical abundance similar to that of the interstellar medium, and for a more depleted abundance. In both scenarios a layering is expected where $^{12}$CO traces the upper layers, followed by $^{13}$CO and C$^{18}$O, considering all transitions \citep[see also ][]{Miotello_2014}. For the disks in the MAPS sample, we clearly recover that the $^{12}$CO emission comes from a higher region than $^{13}$CO. Also, for $^{13}$CO, the $J=2-1$ emission surface lies above that of the $J=1-0$ transition, as expected.
However, we observe that C$^{18}$O $J = 2-1$ emits from a layer very close to $^{13}$CO $J = 2-1$ and, in most cases, above $^{13}$CO $J = 1-0$, which is not expected from the models. This overlap between the emission regions of C$^{18}$O $J = 2-1$ and $^{13}$CO $J = 1-0$ could be explained by both of them emitting from just above the CO freeze-out region, which is also consistent with the measured low brightness temperatures (see Fig. \ref{panel_temp}). Indeed, it has been proposed that the C$^{18}$O $J = 2-1$ emission in IM Lup traces the freeze-out \citep{Pinte_2018_method}. Our new results, showing that $^{13}$CO $J = 1-0$ emits from the same region or even closer to the midplane, may be additional evidence of the location of the freeze-out region. C$^{18}$O $J = 1-0$ is also detected for the MAPS sample but, unfortunately, due to the low SNR of this transition it is not possible to trace its emitting surface, which would be useful for an additional comparison.

In WaOph\,6, $^{12}$CO $J = 3-2$ and $J = 2-1$ trace the same region, which is in agreement with the previously discussed theoretical models. Future observations of higher transitions, such as $^{12}$CO $J = 6-5$, could help differentiate between a depleted and an ISM-like abundance, depending on how similar the emitting region is to that traced by the lower transitions \citep{van_Zadelhoff_2001_vert_models}. Elias 2-27 shows a highly elevated emission layer, where the $J = 3-2$ transitions of $^{13}$CO and C$^{18}$O are very close to $^{12}$CO $J = 2-1$. These two disks do not have a detailed description of their surface density or temperature structure, which would be useful for comparing them with the rest of the sample and for obtaining the gas pressure scale height. Both disks have spiral structures in the dust continuum emission, which have been proposed to be linked to the effects of disk self-gravity \citep{DSHARP_Huang_Spirals, Paneque-Carreno_Elias1}. Our results show that in the line emission they have very different structures, both radially and vertically; therefore, it remains unclear how the dust continuum features relate to the vertical distribution of material.

\subsection{Vertical distribution of other molecules}

Our work presents direct constraints on the emitting location of several molecules, extending to radial distances beyond $\sim$150\,au. Overall, the results show that most of these molecules emit from regions very close to the midplane ($z/r < 0.1$, see Fig. \ref{z_over_r}), which conflicts with theoretical predictions, in particular for UV-sensitive tracers such as HCN and C$_2$H \citep{Aikawa_2002, Walsh_2010, MAPS_Bergner, MAPS_Guzman}. Various physical-chemical models have predicted that CN and HCN emission should be sensitive to the UV flux \citep{Cazzoletti_2018_CN, MAPS_Bergner}. The emitting layer of HCN should be located just below CN, due to the higher photodissociation rate of HCN compared to CN \citep{MAPS_Bergner}. Overall, the emission of both tracers is expected to arise from the upper atmosphere of disks, in regions close to the $^{12}$CO emitting layer \citep[see also][]{Walsh_2010}. The only system where we are able to extract the emitting surfaces of both tracers is HD\,163296. In this disk we find that the CN $N = 1-0$ emission is located in an intermediate layer, between $^{12}$CO and $^{13}$CO $J = 2-1$ (see Figs. \ref{panel_hd16} and \ref{z_over_r}).
In all the other disks except Elias 2-27 it is not possible to extract CN surfaces, and HCN traces a region close to the midplane ($z/r < 0.1$). Observing emission from regions closer to the midplane may be an indicator of deeply penetrating UV flux, either from the central star or from an external source \citep[see models by][]{Flores_2021}. Another alternative is that X-ray radiation is responsible for the emission, as suggested for the Flying Saucer system \citep{RuizRodriguez_2021}. UV radiation is strongly affected by the location of small dust particles; therefore, the constraints on the emitting surfaces shown in this work may be used for dust growth and settling models in disks, through simultaneous modeling of the dust location, UV radiation and molecular excitation. C$_2$H is also expected to be a good UV tracer \citep{Miotello_2019_c2h, MAPS_Guzman} and, even though it is less radially extended, in the inner disk it traces the same region as HCN. Future observations of higher CN transitions may allow us to study the distribution of this molecule in the rest of the sample and to compare its location with the retrieved HCN emitting region.

Models suggest that HCO$^+$ should emit from a region similar to the CO emission layer \citep{Aikawa_2002}. We find that the average $z/r$ of this molecule is indeed close to that of C$^{18}$O. In IM Lup, AS 209 and WaOph\,6 the emission comes from above $z/r$ = 0.1. These disks are expected to be the youngest of the sample \citep{DSHARP_Huang_Spirals, MAPS_Oberg}, which could imply that they have undergone less chemical reprocessing and have CO abundances similar to that of the ISM. Disks in which CO has been converted into other species (expected for older systems) have HCO$^+$ emission located closer to the midplane, compared to models with ISM-like abundances \citep{van_Zadelhoff_2001_vert_models}. Constraints from the measured excitation temperature of H$_2$CO \citep{Pegues_2020} and chemical models \citep{Walsh_2010} also predict that this molecule should be located in the molecular layer, but above the CO freeze-out region. Indeed, we trace the H$_2$CO emission in a region very similar to that of C$^{18}$O and HCO$^+$.

The morphology of the HCN and H$_2$CO emitting layers shows a distinct peaked structure (see Fig. \ref{panel_H2CO_HCN}), and a possible relation is observed between the location of radial gaps in the molecular distribution and the peaks in the vertical profiles. H$_2$CO has two formation pathways: it may emit following desorption from dust grains close to the midplane, or through warm gas-phase chemistry \citep{van_Scheltinga_2021_h2co, Carney_2017, MAPS_Guzman}. The distinct surfaces we trace in H$_2$CO could be the first direct indication of regions where these different emission mechanisms are at work. In particular, modeling of HD\,163296 by \citet{Carney_2017} predicts that, in order to reproduce the radial intensity profile, the outer disk (beyond the millimeter edge) must have an increased H$_2$CO abundance. The model considered an abundance higher than that of the inner disk and extending vertically \citep[see Fig. 5 of][for the R-step case]{Carney_2017}, which is coincident with the peak traced in the emission surface of H$_2$CO in HD\,163296 beyond the millimeter edge (see Fig. \ref{panel_H2CO_HCN}). It is important to note that the results of \citet{Carney_2017} showed that the H$_2$CO could not be produced uniquely by gas-phase reactions.
Our vertical profile shows that at radius $\sim$240\,au the emission comes from closer to the midplane, which would be consistent with thermal desorption of the molecule from icy grains. Previous works focused on H$_2$CO emission have not recovered the structure we see in this study; however, models for DM Tau considering gas-phase and grain chemistry recover a bulged H$_2$CO density structure in the vertical profile \citep{Loomis_2015}. This bulged feature is expected at a higher elevation than what we trace in our vertical profiles and may not necessarily be reproduced in the emission surface. In a study by \citet{Pegues_2020} it was found that measurements of the H$_2$CO excitation temperature in various systems show mostly constant radial temperatures; however, clear peaks are detected in LkCa15 and J1604-2130. If the densities are sufficient to assume that the emission is in Local Thermodynamic Equilibrium (LTE), then the excitation temperature may be a proxy for the kinetic temperature and thus for the vertical emission distribution. For HCN there are no sources in the literature that show evidence of a structured vertical distribution, and the excitation temperature profiles derived for the MAPS sample do not have any noticeable relation to the features seen in our vertical profiles \citep{MAPS_Guzman}. \subsection{CO modulations: correlation with millimeter gaps and kinematic features} Vertical structure is found in the CO isotopologue emission of IM Lup, HD\,163296 and MWC 480; this structure can be modelled as drops from a determined baseline, using Gaussian functions (see Section 3.2.1). We name these drops modulations. There seems to be a consistent relation between the locations of the modulations, gaps observed in the millimeter continuum and kinematical residuals. As the emission surfaces are traced directly from the channel maps, there are several possibilities regarding their origin. It could be that, if the kinematical features are caused by a planetary companion, the vertical modulations are a tracer of the material infalling and possibly accreting onto the planet. The kinematic residuals used in this work have all been related with the presence of planetary companions \citep{Pinte_2018_hd16planet, Teague_2018_hd16planet, Izquierdo_2021_hd16planet, MAPS_Teague}. If there are planets in these disks, it has been shown that meridional flows of material \citep{Fung&Chiang_2016} will be capable of disturbing the vertical density structure \citep{Morbidelli_2014, Szulagyi_2017, Szulagyi_2022}. Meridional flows have been detected observationally in HD\,163296 \citep{Teague_2018_hd16planet}; however, it is unclear if the modulations we detect in the vertical disk structure are related to the meridional flows, as simulations do not provide clear predictions on the vertical structure, for they are mainly focused on density and velocity perturbations. It is expected that the vertical profiles trace the location where the gas becomes optically thick, therefore the modulations could also be related to density drops. However, as mentioned in Section 3.2.1, the locations of vertical modulations do not coincide with the gaps and rings in the integrated emission map of each molecule \citep{MAPS_Law_radial}, which should be a direct proxy for density variations.
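To make the modulation modelling sketched above concrete, the following minimal Python example (our illustration, not the exact prescription of Section 3.2.1) fits an extracted surface $z(r)$ with a power-law baseline minus a single Gaussian drop; the functional form, parameter values and variable names are illustrative assumptions, and the data are mocked.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def surface_model(r, z0, phi, a, r0, s):
    # Power-law baseline minus one Gaussian drop (a "modulation").
    baseline = z0 * (r / 100.0) ** phi
    drop = a * np.exp(-0.5 * ((r - r0) / s) ** 2)
    return baseline - drop

# r_obs, z_obs: radii and surface heights [au] as extracted from the
# channel maps; mocked here with a drop near 393 au plus noise.
r_obs = np.linspace(20.0, 500.0, 200)
z_obs = surface_model(r_obs, 30.0, 1.2, 8.0, 393.0, 25.0)
z_obs += np.random.normal(0.0, 1.0, r_obs.size)

p0 = [30.0, 1.0, 5.0, 390.0, 30.0]  # guesses: z0, phi, depth, centre, width
popt, _ = curve_fit(surface_model, r_obs, z_obs, p0=p0)
print("modulation centre %.0f au, depth %.1f au" % (popt[3], popt[2]))
\end{verbatim}
In practice, one Gaussian component would be added per detected modulation; subtracting the fitted baseline then isolates the drops discussed in this section.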
Another possibility is that there are no real vertical modulations and we are instead recovering artifacts: we assume that the emission maxima trace an isovelocity curve in a smooth disk, yet in the sample there are perturbed channel maps \citep[MWC 480, HD 163296,][]{MAPS_Teague}. Alternatively, it could also be that the emitting layer is indeed perturbed and causing the channel map to appear perturbed due to projection effects. Future detailed studies on the effect of velocity perturbations in the extraction of emission surfaces will aid in breaking this degeneracy and understanding the observational characteristics of each effect. However, as we clearly recover the modulations at a constrained radial distance on both sides of the disk, with respect to the semi-minor axis and in all of the studied channels, we propose that they are real perturbations present in the vertical structure. If the surface modulations are related to planetary companions in the disks, the feature at $\sim$393\,au in IM Lup is particularly relevant. It is a coherent, strong and well-defined modulation present in $^{12}$CO and $^{13}$CO emission, with no kinematical feature associated with it. The radial location of this dip is close to where the gas component of the disk is expected to have a sharp change in the surface density \citep{Panic_2009}. Searches for planetary companions in IM Lup have been performed \citep{Mawet_2012, Launhardt_2020} but no object has been detected, considering a detection probability of $\sim$30\% for a planet of 13\,M$_J$ at 400\,au. For MWC 480 and HD\,163296 there are coherent inner modulations that are coincident with dust gaps. For MWC 480 these have no kinematical counterpart, whereas for HD\,163296 there is a strong linewidth residual at $\sim$38\,au and a positive velocity gradient at $\sim$45\,au \citep{Teague_2018_hd16planet, Izquierdo_2021_hd16planet}. Detailed models studying the effect of planetary companions on the CO emitting layer will be useful to determine if the detection of vertical features may be an indirect measure of the presence of a planet, but this goes beyond the scope of this paper. \section{Conclusions} In this work we have presented observational constraints on the vertical location of the molecular emission of seven disks. Using data of high angular and spectral resolution, we directly trace the emission location for up to ten different molecules. Our main findings and conclusions are the following: \begin{enumerate} \item We have characterized the emission surfaces of multiple molecules through a geometrical analysis of the channel map emission. We have also detected structured emission layers with modulations and spikes. Using an implementation of the \citet{Pinte_2018_method} method, where we mask the observed emission layer, we trace the emission surfaces at larger radii and for lower SNR molecular emission compared to previous work. \item The derived emitting surfaces for CO isotopologues show a clear layering, with $^{12}$CO being the most vertically extended tracer. $^{13}$CO and C$^{18}$O of the same transition trace similar regions. From our available data we can also corroborate that, in the case of $^{13}$CO, lower transitions trace regions closer to the midplane than higher transitions, consistent with thermochemical models. \item If the emission is optically thick and good approximations for the surface density and CO abundance are available, the $^{12}$CO emission surface can be used to obtain the gas pressure scale height of the disk.
Our values indicate that the scale height is consistent with $H/R = 0.1$. These results are obtained using a simple analytical one-layer disk model, and initial testing with thermochemical DALI models indicates that the method is robust, at least if the vertical material distribution in the disk follows a Gaussian profile. \item The locations of the detected modulations in the CO isotopologue emitting surfaces are correlated with those of millimeter continuum gaps and kinematic perturbations. These modulations may be related to the presence of planetary companions or other mechanisms causing variations in the gas density distribution. \item HD\,163296 shows a rich molecular reservoir from which most tracers can be located in distinct vertical regions. Overall, in our sample most of the molecular emission for tracers other than CO originates close to the midplane at $z/r < 0.15$. In the case of HCO$^+$ and H$_2$CO the observational constraints agree with theoretical predictions and constraints on the excitation temperature. Both molecules seem to be tracing the molecular layer above the CO freeze-out region. The particular morphology of the H$_2$CO emitting surface may be the first direct indicator of this molecule originating from both ice grain desorption and gas-phase chemistry. \item HD\,163296 displays CN and HCN in an intermediate vertical region. Their location is in agreement with theoretical predictions \citep[e.g.][]{Cazzoletti_2018_CN, MAPS_Bergner} about HCN tracing a layer just below CN. The rest of the systems in the MAPS sample show HCN very close to the midplane, and it is not possible to retrieve CN emission surfaces for comparison. This is not expected from theoretical models. Future observations of higher CN transitions will allow us to compare our results on the location of HCN and better understand the disk radiation conditions. \item This sample of disks and molecules represents the largest survey to date on the direct characterization of the emitting regions for multiple tracers. Dedicated chemical-physical modelling is crucial to understand the diversity in the location of each molecule and how to relate the vertical profiles to actual disk properties. \end{enumerate} We aim for this work to be used as a reference catalogue for future dedicated models on each of the sources, so that the chemical-physical origin of each emission line can be adequately studied. With the sensitivity of instruments like ALMA we hope to enlarge the sample of disks where this kind of study is possible. \begin{acknowledgements} We thank the referee for the constructive comments. This paper makes use of the following ALMA data: \#2015.1.00168.S, \#2016.1.00484.L, \#2016.1.00606.S, \#2017.1.00069.S and \#2018.1.01055.L. ALMA is a partnership of ESO (representing its member states), NSF (USA), and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO, and NAOJ. Astrochemistry in Leiden is supported by the Netherlands Research School for Astronomy (NOVA), and by funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 101019751 MOLDISK). \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} Given the social and ethical impact that some affective computing systems may have~\cite{hupont2022landscape}, it becomes of the utmost importance to clearly identify and document their context of use, envisaged operational scenario or intended purpose. Undertaking such use case documentation practices would benefit, among others, system vendors and developers, helping them make key design decisions from early development stages (e.g. target user profile/population, data gathering strategies, human oversight mechanisms to be put in place); authorities and auditors, in assessing the potential risks and misuses of a system; end users, in understanding the permitted uses of a commercial system; the people on whom the system is used, in knowing how their data is processed; and, in general, the wider public, in gaining a better informed knowledge of the technology. The need for transparency and documentation practices in the field of Artificial Intelligence (AI) has been widely acknowledged in the recent literature~\cite{hupont2022documenting}. Several methodologies have been proposed for AI documentation, but their focus is rather on data~\cite{gebru2018datasheets} and models~\cite{mitchell2019model} than on AI systems as a whole, limiting at most the documentation of use cases to a brief textual description. Nowadays, voluntary AI documentation practices are in the process of becoming legal requirements in some countries. The European Commission presented in April 2021 its pioneering proposal for the Regulation of Artificial Intelligence, the AI Act~\cite{AIact}, which regulates software systems that are developed with AI techniques such as machine or deep learning. Interestingly, the legal text does not mandate any specific technical solutions or approaches to be adopted; instead, it focuses on the \textit{intended purpose} of an AI system, which determines its risk profile and, consequently, a set of legal requirements that must be met. The AI Act's approach further reinforces the need to properly document AI use cases. The concept of \textit{use case} has been used in classic software development for more than 20 years. Use cases are powerful documentation tools to capture the context of use, scope and functional requirements of a software system. They allow structuring requirements according to user goals~\cite{cockburn2001writing} and provide a means to specify the interaction between a certain software system and its environment~\cite{fantechi2003applications}. This work revisits classic software use case documentation methodologies, in particular those based on the Unified Modeling Language (UML) specification~\cite{UML251}, and proposes a template-based approach for AI use case documentation considering current information needs identified in the research literature and the European AI Act. Although the documentation methodology we propose is horizontal, i.e. it can be applied to different domains (e.g. AI for medicine, social media, law enforcement), we address the specific information needs of affective computing use cases. The objective is to provide a standardised basis for an AI and affective computing technology-agnostic use case repository, where different aspects such as intended users, opportunities or risk levels can be easily assessed. To the best of our knowledge, this is the first methodology specific to the documentation of AI use cases. The remainder of the paper is organized as follows.
Section~\ref{sec:background} provides an overview of the current AI regulatory framework, existing approaches for the documentation of AI and affective computing systems, and a background on UML. Section~\ref{sec:uml_template} identifies use case information needs and proposes a UML-based methodology for their unified documentation. In Section~\ref{sec:examples}, we put the methodology into practice with some concrete exemplar affective computing use cases. Finally, Section~\ref{sec:conclusions} concludes the paper. \section{Background} \label{sec:background} \subsection{``Intended purpose'' and ``emotion recognition systems'' in the AI Act} \label{subsec:aia_intended} The \textit{intended purpose} of an AI system is central to the European AI Act. It is defined as {\it ``the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation''}\footnote{The definitions provided in this manuscript are as of August 2022. The legal text is currently under negotiation and may be subject to change.}. An AI system's intended purpose determines its risk profile, which can be, from highest to lowest: (1) \textit{unacceptable risk}, covering harmful uses of AI or uses that contradict ethical values; (2) \textit{high-risk}, covering uses identified through a list of high-risk application areas that may create an adverse impact on people's safety, health or fundamental rights; (3) \textit{transparency risk}, covering uses that are subject to a set of transparency rules (e.g. conversational agents, \textit{deepfakes}); and (4) \textit{minimal risk}, covering all other AI systems. The AI Act explicitly and implicitly refers to affective computing systems in several parts of the legal text\footnote{Please note that this assessment is based on the authors' own interpretation of the legal text as of August 2022.}. A transparency risk generally applies to affective computing systems, but there are some clearly identified prohibited practices and high-risk areas. Prohibited practices include systems used to distort a person's behaviour to cause psychological harm, and systems used by public authorities to perform social scoring based on predicted personality or social behaviour. AI systems intended to be used as \textit{``polygraphs and similar tools or to detect the emotional state of a person''} are listed as high-risk in the areas of \textit{``law enforcement''} and \textit{``migration, asylum and border control management''}. There might be situations where emotion recognition is exploited in recruitment contexts or to determine access to educational institutions, which would also be high-risk, as would emotion recognition systems that are a safety component of a product (e.g. a system integrated in a car that detects a driver’s drowsiness and undertakes a safety action) or part of a machine or medical device (e.g. a companion robot for autistic children). Therefore, the AI Act establishes a clear set of harmonised rules that link use cases --including affective computing ones-- to risk levels, which in turn imply different legal requirements.
This opens the door to the creation of a use case documentation methodology allowing for an unambiguous assessment of risk levels, such as the one proposed in this work, which could be a valuable tool for different stakeholders, ranging from system providers to authorities. \subsection{Current approaches for the documentation of AI systems} In recent years, both key academic and industry players have proposed methodologies aiming at defining documentation approaches that increase transparency and trust in AI. Among the most successful initiatives, we find some that focus on documenting the datasets used for AI, such as \textit{Datasheets for Datasets}~\cite{gebru2018datasheets}, \textit{The Dataset Nutrition Label}~\cite{holland2018dataset,chmielinski2022dataset} and \textit{Data Cards}~\cite{pushkarna2022data}, as well as some that address the documentation of AI models and algorithms from a technical perspective, such as \textit{Model Cards}~\cite{mitchell2019model} and \textit{AI Factsheets}~\cite{arnold2019factsheets}. Very recently, the Organisation for Economic Co-operation and Development (OECD) has proposed a policy-oriented \textit{Framework for the classification of AI systems}~\cite{OECD} to which high-calibre institutions and a large number of AI practitioners have contributed. Being in the form of questionnaires or more visual factsheets, these methodologies are not based on formal documentation standards or specifications. Moreover, even though some of them do explicitly ask about the intended use of the AI system (e.g. \textit{``What is the intended use of the service output?''}\cite{arnold2019factsheets} and the \textit{``Intended Use''} section in~\cite{mitchell2019model}), it is just in very broad terms, and the provided examples lack sufficient detail to address complex legal concerns. To date, there is no unified and comprehensive AI documentation approach focusing exclusively on use cases. \subsection{Documentation of affective computing use cases} The aforementioned documentation approaches have scarcely been used in the field of affective computing. Only the original \textit{Model Cards} paper comes with a ``smiling detection in images'' example and a ``toxicity in text'' detection example. Use cases in the field have rather been presented to the community in plain text form (i.e. without following any documentation template), either in survey papers~\cite{aranha2019adapting,weninger2015emotion,zhao2019affective}, in papers presenting a very concrete application~\cite{xu2018automated,murali2021affectivespotlight,setiono2021enhancing} or in articles discussing ethical issues~\cite{hernandez2021guidelines,ong2021ethical,hupont2022landscape}. Interestingly, the Association for the Advancement of Affective Computing (AAAC) has recently launched the \textit{affective computing commercial products database}~\cite{aaac_productdb}, which presents a table with a list of commercial products, a brief description of each one and associated tags such as modality (e.g. speech, text, face), format (e.g. software, hardware) and application domain (e.g. general purpose, education, health). It is, however, limited to a high-level description of real products in the market. \subsection{Unified Modeling Language (UML) for use case reporting} The Unified Modeling Language (UML) specification has been widely used in software engineering in the last two decades~\cite{UML251,kocc2021uml}.
It provides a standard way to visualize the design and behaviour of a system by introducing a set of graphical notation elements. In particular, it allows for use case modelling, without entering into implementation details, in the form of intuitive \textit{use case diagrams} whose main elements are depicted in Figure~\ref{fig:uml_icons}. \begin{figure}[htb!] \centering \includegraphics[width=0.65\linewidth]{figures/UML_icons.png} \caption{Main UML graphical notation elements for use case modeling.} \label{fig:uml_icons} \end{figure} Use cases capture a system's requirements, i.e., what the system is supposed to do. A use case is triggered by an \textit{actor} (it might be a person or group of persons), who is called the \textit{primary actor}. The use case describes the sets of interactions that can occur between the actors while the primary actor is in pursuit of a goal. A use case is completed successfully when the goal that is associated with it is reached. Use case descriptions also include possible extensions to this sequence, e.g., alternative sequences that may also satisfy the goal, as well as sequences that may lead to failure in completing the service. Once use cases have been modelled in a diagrammatic form, the next step is to describe them in an easy-to-understand and structured written manner. Traditional use case modelling always includes this step, and several standards have been suggested for the layout of use case descriptions. The most widely used is the table format proposed by Cockburn in~\cite{cockburn2001writing} and shown in Figure~\ref{fig:use_case_tables}-left. UML is a powerful tool for use case documentation and communication, even to non-technical audiences. Nevertheless, it has not yet been exploited to document AI use cases. \section{A UML-based documentation methodology for AI and affective computing use cases} \label{sec:uml_template} We propose a novel methodology for the documentation of AI use cases which is grounded in (1) the UML standard specification for use case modeling and (2) the requirements for use case documentation under the European AI Act. Our methodology pays particular attention to information needs related to affective computing use cases. It is intended to be a tool to increase transparency and facilitate the understanding of the intended purpose of an AI system, in order to ease the assessment of its risk level and other relevant contextual considerations. \subsection{Information needs related to the ``intended purpose'' under the AI Act} \label{sec:aia} As discussed in Section~\ref{subsec:aia_intended}, the European AI Act centres around the concept of \textit{intended use}. Several key information elements are essential to document the intended use of an AI system according to the legal text. We have compiled them in the list presented in Table~\ref{tab:info_elements_aia}. As can be seen, the intended purpose of the system shall be put into context by providing additional information on: who will be the users and the target persons on which the system is intended to be used; the operational, geographical, behavioural and functional contexts of use that are foreseen, including a description of the hardware on which the system is intended to run (e.g. to highlight whether it is part of a device/machine); what the system's inputs and outputs are; and, if applicable, whether the system is a safety component of a product.
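As an illustration of how these information elements could be captured in machine-readable form, the following Python sketch encodes the fields of Table~\ref{tab:info_elements_aia} as a simple data structure; the field names and the high-risk area labels are our own illustrative shorthand, not the legal enumeration, and the risk check is a toy heuristic rather than a legal assessment.
\begin{verbatim}
from dataclasses import dataclass, field
from typing import List

# Illustrative shorthand for the high-risk areas of Table 2
# (not the legal enumeration in the AI Act).
HIGH_RISK_AREAS = {
    "education", "employment", "essential_services",
    "law_enforcement", "migration_border_control", "justice",
}

@dataclass
class UseCase:
    """Minimal machine-readable sketch of the use case template."""
    intended_purpose: str
    user: str
    target_persons: str
    context_of_use: str
    application_areas: List[str] = field(default_factory=list)
    foreseeable_misuses: List[str] = field(default_factory=list)
    inputs: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)

    def flags_high_risk(self) -> bool:
        # Flag the use case when any declared area is on the list.
        return any(a in HIGH_RISK_AREAS for a in self.application_areas)
\end{verbatim}
For instance, a leisure-only system declaring the single area \texttt{entertainment\_leisure} would not be flagged, whereas a system declaring \texttt{law\_enforcement} would be.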
Additionally, it is as important to clearly specify the intended use of the system as its foreseeable potential misuses and unintended purposes. Finally, the \textit{application areas} information element is one of the most important to assess when it comes to identifying a system's risk level. The legal text links some practices, areas and concrete applications within these areas to prohibited practices and high-risk profiles. Table~\ref{tab:areas} compiles the prohibited practices (top) and high-risk application areas (bottom) mentioned in the legal text that are directly related to emotion recognition, or where some kind of affective computing technique could potentially be used (e.g. personality prediction for social scoring, facial expression recognition for student proctoring, pain detection for establishing priority in emergency services). In order to facilitate the identification of the level of risk of an affective computing system, it is therefore essential to indicate whether its intended application area(s) or any foreseeable misuse are among those on the list. It should be noted that Table~\ref{tab:info_elements_aia} is not meant to be a final and exhaustive list of information elements needed for compliance with any future legal requirement. First and foremost, because the AI regulation is still under negotiation, and is therefore subject to modification on its road towards adoption. Second, because the objective of this work is the documentation of use cases, which is just a small part of the technical documentation required to demonstrate conformity with the legal text. \newcolumntype{I}{>{\raggedright\arraybackslash}m{0.08\textwidth}} \newcolumntype{D}{>{\raggedright\arraybackslash}m{0.36\textwidth}} \begin{table}[h] \centering \begin{tabular}{ID} \textbf{Element} & \textbf{Description} \\ \toprule Intended purpose & Use for which an AI system is intended by the provider. If the system is a safety component of a product, it must be clearly stated. \\ \midrule User & Natural or legal person using an AI system under its authority. \\ \midrule Target persons & Persons or group of persons on which the system is intended to be used. \\ \midrule Context of use & Description of all forms on which the system is deployed (e.g. characteristics of the specific geographical, behavioural or functional setting) and of the hardware on which it is intended to run. \\ \midrule Application areas & List of areas in which the AI system is intended to be applied, including those in Table \ref{tab:areas}.\\ \midrule Reasonably foreseeable misuses & Uses of an AI system in a way that is not in accordance with its intended purpose, which may lead to errors, faults, inconsistencies, or risks to health, safety or fundamental rights. \\ \midrule Inputs & Data provided to or directly acquired by the system, on the basis of which the system produces an output. \\ \midrule Outputs & Outputs of the AI system as provided to the user.
\\ \bottomrule \end{tabular} \caption{Key use case information elements that are needed to assess an AI system's risk level according to the AI Act.} \label{tab:info_elements_aia} \end{table} \newcolumntype{A}{>{\raggedright\arraybackslash}m{0.46\textwidth}} \begin{table}[h] \centering \begin{tabular}{A} \textbf{AREA \textcolor{violet}{$>$ POTENTIAL AFFECTIVE COMPUTING USE}} \\ \toprule - Deploy subliminal techniques beyond a person's consciousness \\ \textcolor{violet}{\hspace{0.75cm} $>$ Distort a person's behaviour to cause psychological harm}\\ - Exploit the vulnerabilities of a specific group of persons \\ \textcolor{violet}{\hspace{0.75cm} $>$ Distort a person's behaviour to cause psychological harm}\\ - Social scoring by public authorities or on their behalf \\ \textcolor{violet}{\hspace{0.75cm} $>$ Evaluation of trustworthiness based on predicted personality}\\ \textcolor{violet}{\hspace{0.75cm} $>$ Evaluation of trustworthiness based on social behaviour}\\ \toprule - Education and vocational training \\ \textcolor{violet}{\hspace{0.75cm} $>$ Determine access to educational institutions}\\ \textcolor{violet}{\hspace{0.75cm} $>$ Assess students in educational institutions}\\ - Employment, workers management and access to self-employment \\ \textcolor{violet}{\hspace{0.75cm} $>$ Recruitment or selection of natural persons}\\ \textcolor{violet}{\hspace{0.75cm} $>$ Make decisions on promotion/termination of contract}\\ \textcolor{violet}{\hspace{0.75cm} $>$ Monitoring and evaluation of performance and behaviour}\\ - Access to essential private/public services and benefits \\ \textcolor{violet}{\hspace{0.75cm} $>$ Evaluate eligibility of natural persons for public assistance}\\ \textcolor{violet}{\hspace{0.75cm} $>$ Evaluate creditworthiness of natural persons}\\ \textcolor{violet}{\hspace{0.75cm} $>$ Establish priority in the dispatching of emergency services}\\ - Law enforcement \\ \textcolor{violet}{\hspace{0.75cm} $>$ Make individual risk assessments of natural persons}\\ \textcolor{violet}{\hspace{0.75cm} $>$ Detect the emotional state of a natural person}\\ \textcolor{violet}{\hspace{0.75cm} $>$ Crime profiling of natural persons}\\ - Migration, asylum and border control management \\ \textcolor{violet}{\hspace{0.75cm} $>$ Make individual risk assessments of natural persons}\\ \textcolor{violet}{\hspace{0.75cm} $>$ Detect the emotional state of a natural person}\\ \textcolor{violet}{\hspace{0.75cm} $>$ Examine applications for asylum/visa/residence}\\ - Administration of justice and democratic processes \\ \textcolor{violet}{\hspace{0.75cm} $>$ Assist judicial authority in researching and interpreting facts}\\ \bottomrule \end{tabular} \caption{Practices and application areas listed as \textit{prohibited} (top) and \textit{high-risk} (bottom) in the AI Act that are directly related, or could be indirectly related, to affective computing. Please note that this table has been generated by the authors based on their own interpretation of the AI Act as of August 2022. } \label{tab:areas} \end{table} \subsection{Revisiting UML for AI and affective computing use case documentation} \label{sec:revisit_uml} The idea of \textit{intended use} defined in the AI Act is closely related to the traditional software concept of \textit{use case}, as defined in the UML specification. UML use case diagrams do not enter into technical details (e.g.
implementation details, algorithm architectures) but rather focus on the context of use, the main actors using the system, and actor-actor and actor-system interactions, which is a focus aligned with that proposed by the AI Act to assess a system's risk level. The UML language is thus a powerful, standardized and highly visual tool to operationalise the need for a unified documentation of AI use cases. In Figure~\ref{fig:use_case_tables}-right, we propose an adaptation of the classic table template accompanying UML use case diagrams~\cite{cockburn2001writing} to the AI Act's taxonomy. As can be seen, the adaptations are minimal and there is an almost perfect correspondence with the original template. We have only renamed some key words (in blue in the table), namely \textit{scope} to \textit{intended purpose}, \textit{primary actor} to \textit{user}, \textit{stakeholders and interests} to \textit{target persons}, and \textit{open issues} to \textit{misuses}. We have also included a new field called \textit{application areas} (in green), which allows one to clearly identify the area(s) in which the system is intended to be used and, if applicable, to specify whether they correspond to those listed in Table~\ref{tab:areas}. \begin{figure*}[htb!] \centering \includegraphics[width=\linewidth]{figures/use_case_tables.png} \caption{Left: classic table template for the documentation of UML use case diagrams, as in~\cite{cockburn2001writing}. Right: proposed adaptation for the documentation of AI use cases, inspired by the European AI Act's definitions. Green text corresponds to added fields, while blue text is used for fields that have been adapted.} \label{fig:use_case_tables} \end{figure*} \section{Methodology in practice: example of affective computing use cases} \label{sec:examples} In this section, we apply the proposed methodology to the documentation of three representative affective computing systems. Figures~\ref{fig:uc_smile}-\ref{fig:uc_car} show their corresponding UML use case diagrams and accompanying tables, which are further described below. \\ \noindent \textbf{Smart camera}. In this first use case, the system is a smart camera that shoots a picture only when all the people posing in front of it are smiling. There are several products in the market with this feature~\cite{canon,nikon}, which have inspired this example. The UML diagram of the \textit{smart shooting} use case and its corresponding table are shown in Figure~\ref{fig:uc_smile} left and right, respectively. This application may seem simple and naive a priori, but it has recently caused controversy. Workers at a Beijing office were forced to smile to an AI camera to get through the front doors, change the temperature or print documents, in an attempt to improve the working environment by keeping workers happy~\cite{news_happyface}. However, some workers felt their emotions were manipulated. Our proposed UML table makes it clear that the target application domain is \textit{entertainment and leisure} exclusively, and the \textit{misuses} field explicitly emphasises that the system is not conceived to be used to monitor or manipulate emotions in contexts such as working environments. This important claim excludes the use case from the high-risk area of \textit{workers management $>$ monitoring and evaluation of performance and behaviour} (cf. Table~\ref{tab:areas}). \\ \noindent \textbf{Affective music recommender}.
Figure~\ref{fig:uc_music} shows the UML diagram and table for the second use case, corresponding to an affective music recommender system proposing songs to the user based on her personality, current mood and playlist history. This use case has been inspired by the work presented in~\cite{amini2019affective}. Several studies have shown that users' music playlists can be used to infer emotions, personality traits and vulnerabilities~\cite{deshmukh2018survey}; conversely, certain music pieces can induce behaviours and manipulate listeners' emotions~\cite{gomez2021music}. The proposed methodology makes it possible to frame the ethical use of the system by documenting step by step its conceived functioning, and how and for what purpose personality and mood predictions are extracted and used (based on profile data voluntarily provided by the platform's users, with the sole purpose of making the most appropriate and enjoyable music recommendations). The \textit{misuses} field further strengthens the system's ethical principles by explicitly signaling the prohibition of proposing music pre-conceived to exploit vulnerabilities, manipulate, distort or induce certain emotions or behaviour in users, which would be a \textit{prohibited practice} according to the AI Act (cf. Table~\ref{tab:areas}).\\ \noindent \textbf{Driver attention monitoring}. The third example is a use case where a driver's face is recorded with a car in-cabin camera, and monitored in order to recognise drowsiness and distraction. When such situations are detected, the vehicle's attention monitoring system sends alerts in the form of beep tones and light symbols in the car dash (Figure~\ref{fig:uc_car}). Driver monitoring systems have been a popular affective computing application in the last decade, and the modelling of this use case is inspired by different papers~\cite{kumar2018driver,govindarajan2018affective} as well as real commercial products~\cite{subaru,tesla}. The \textit{intended purpose} field in the proposed UML-based table clearly states that the system is part of a safety component of the vehicle, which immediately gives it a high-risk profile according to the AI Act. Further, the documentation methodology makes it possible to indicate that the system is conceived to alert the driver, but in no case to let the vehicle take full control of the car in an autonomous manner. \begin{figure*}[h!] \centering \includegraphics[width=\linewidth]{figures/use_case_smile.png} \caption{First use case: methodology applied to a smart camera system with embedded smile detection capabilities.} \label{fig:uc_smile} \end{figure*} \begin{figure*}[h!] \centering \includegraphics[width=\linewidth]{figures/use_case_music.png} \caption{Second use case: proposed methodology applied to an affective music recommender system.} \label{fig:uc_music} \end{figure*} \begin{figure*}[h!] \centering \includegraphics[width=\linewidth]{figures/use_case_driving.png} \caption{Third use case: proposed methodology applied to a driver attention monitoring system.} \label{fig:uc_car} \end{figure*} \section{Conclusions and future work} \label{sec:conclusions} In this paper, we propose a methodology for the documentation of AI use cases which covers the particular information elements needed to address affective computing ones. The methodology has a solid grounding, being based on two strong pillars: (1) the UML use case modelling standard, and (2) the recently proposed European AI regulatory framework.
Each use case is represented in a highly visual way by means of a UML diagram, accompanied by a structured and concise table that compiles the relevant information to understand the intended use of a system, and to assess its risk level and foreseeable misuses. Our approach is not intended to be an exhaustive methodology for the technical documentation of AI or affective computing systems (e.g. to demonstrate compliance with legal acts). Rather, it aims to provide a template for compiling related use cases with a simple but effective and unified language, understandable even by non-technical audiences. We have demonstrated the power of this language through practical affective computing exemplar use cases. In the near future, we plan to develop a collaborative repository compiling a catalogue of AI --including affective computing-- use cases following the proposed template. The first step will be to transcribe the 60 facial processing applications presented in~\cite{hupont2022landscape}, which contain 18 emotion recognition use cases, in order to add them to this catalogue. \section*{Ethical Impact Statement} The methodology presented in this paper proposes the first unified documentation approach for AI use cases, with a strong focus on affective computing ones, which makes it possible to differentiate intended uses from potential misuses. In recent years, the need for trustworthy AI has been raised by both private and public key institutions and researchers in the field~\cite{OECD,gebru2018datasheets,arnold2019factsheets,madaio2020co,hupont2022landscape}. In particular, documentation has been identified as a key factor towards the fulfilment of \textit{transparency}~\cite{hupont2022documenting}, one of the seven pillar requirements for trustworthy AI established by the High-Level Expert Group on Artificial Intelligence (AI HLEG)~\cite{HLEG}. Therefore, this work represents a major step towards ethical AI and affective computing, and could even constitute a basis for future standardisation activities in this area. \section*{Acknowledgment} This work is partially supported by the European Commission under the HUMAINT project of the Joint Research Centre. \bibliographystyle{IEEEtran}
\section{Introduction} Extremal graph theory is nowadays one of the most significant branches of discrete mathematics, and it has experienced impressive growth during the last few decades. With the rapid development of combinatorial number theory and combinatorial geometry, extremal graph theory has a large number of applications to these areas of mathematics. Problems in extremal graph theory usually deal with the question of determining or estimating the maximum or minimum possible size of graphs satisfying certain requirements, and further characterize the extremal graphs attaining the bound. For example, one of the most well-studied problems is the Tur\'{a}n-type problem, which asks for the maximum number of edges in a graph forbidding the occurrence of some specific substructures. Such problems are related to other areas including theoretical computer science, discrete geometry, information theory and number theory. \subsection{The classical extremal graph problems} Given a graph $F$, we say that a graph $G$ is $F$-free if it does not contain an isomorphic copy of $F$ as a subgraph. For example, every bipartite graph is $C_{3}$-free, where $C_3$ is a triangle. The {\em Tur\'{a}n number} of a graph $F$ is the maximum number of edges in an $F$-free $n$-vertex graph, and it is usually denoted by $\mathrm{ex}(n, F)$. An $F$-free graph on $n$ vertices with $\mathrm{ex}(n, F)$ edges is called an {\em extremal graph} for $F$. A well-known theorem of Mantel \cite{Man1907}, now over a century old, states that every $n$-vertex graph with more than $ \lfloor \frac{n^2}{4} \rfloor$ edges must contain a triangle as a subgraph. We denote by $K_{s,t}$ the complete bipartite graph with parts of sizes $s$ and $t$. \begin{theorem}[Mantel, 1907] \label{thmMan} Let $G$ be an $n$-vertex graph. If $G$ is triangle-free, then $ e(G) \le e(K_{\lfloor \frac{n}{2}\rfloor, \lceil \frac{n}{2} \rceil }) = \lfloor \frac{n^2}{4} \rfloor $, equality holds if and only if $G=K_{\lfloor \frac{n}{2}\rfloor, \lceil \frac{n}{2} \rceil }$. \end{theorem} Let $K_{r+1}$ be the complete graph on $r+1$ vertices. In 1941, Tur\'{a}n \cite{Turan41} proposed the natural question of determining $\mathrm{ex}(n,K_{r+1})$ for every integer $r\ge 2$. Let $T_r(n)$ denote the complete $r$-partite graph on $n$ vertices whose part sizes are as equal as possible. That is, $T_{r}(n)=K_{t_1,t_2,\ldots ,t_r}$ with $\sum_{i=1}^r t_i=n$ and $|t_i-t_j| \le 1$ for $i\neq j$. This implies that each vertex part of $T_r(n)$ has size either $\lfloor \frac{n}{r}\rfloor$ or $\lceil \frac{n}{r}\rceil$. The graph $T_r(n)$ is usually called the Tur\'{a}n graph. In particular, we have $T_2(n)=K_{\lfloor \frac{n}{2}\rfloor, \lceil \frac{n}{2} \rceil }$. Importantly, Tur\'{a}n \cite{Turan41} extended Mantel's theorem and proved the following result. \begin{theorem}[Tur\'{a}n, 1941] \label{thmturanstrong} Let $G$ be a graph on $n$ vertices. If $G$ is $K_{r+1}$-free, then \[ e(G)\le e(T_r(n)), \] equality holds if and only if $G$ is the $r$-partite Tur\'{a}n graph $T_r(n)$. \end{theorem} Many different proofs of Tur\'{a}n's theorem have been found in the literature; see \cite[pp. 269--273]{AZ2014} and \cite[pp. 294--301]{Bollobas78} for more details. Furthermore, there are various extensions and generalizations of Tur\'{a}n's result; see, e.g., \cite{BT1981,Bon1983}. In the language of the extremal number, Tur\'{a}n's theorem can be stated as \begin{equation*} \mathrm{ex}(n,K_{r+1}) = e(T_r(n)).
\end{equation*} Moreover, we can easily see that $(1-\frac{1}{r}) \frac{n^2}{2} - \frac{r}{8}\le e(T_r(n)) \le (1-\frac{1}{r}) \frac{n^2}{2} $ and $e(T_r(n))= \lfloor (1-\frac{1}{r}) \frac{n^2}{2}\rfloor$ for every integer $r\le 7$. Thus Theorem \ref{thmturanstrong} implies the explicit numerical bound $e(G)\le (1-\frac{1}{r}) \frac{n^2}{2}$ for every $n$-vertex $K_{r+1}$-free graph $G$. This bound is more concise and is called the weak version of Tur\'{a}n's theorem. The problem of determining $\mathrm{ex}(n, F)$ is usually referred to as the Tur\'{a}n-type extremal graph problem. It is a cornerstone of extremal graph theory to understand $\mathrm{ex}(n, F)$ for various graphs $F$; see \cite{FS13, Sim13} for comprehensive surveys. \subsection{The spectral extremal graph problems} Let $G$ be a simple graph on $n$ vertices. The \emph{adjacency matrix} of $G$ is defined as $A(G)=[a_{ij}]\in \mathbb{R}^{n\times n}$ where $a_{ij}=1$ if two vertices $v_i$ and $v_j$ are adjacent in $G$, and $a_{ij}=0$ otherwise. We say that $G$ has eigenvalues $\lambda_1 , \lambda_2,\ldots ,\lambda_n$ if these values are eigenvalues of the adjacency matrix $A(G)$. Since $A(G)$ is a symmetric real matrix, we write $\lambda_1,\lambda_2,\ldots ,\lambda_n$ for the eigenvalues of $G$ in decreasing order. Let $\lambda (G)$ be the maximum absolute value among all eigenvalues of $G$, which is known as the {\it spectral radius} of graph $G$. Since the adjacency matrix $A(G)$ is nonnegative, the Perron--Frobenius theorem (see, e.g., \cite[p. 120--126]{Zhan13}) implies that the spectral radius of a graph $G$ is actually the largest eigenvalue of $G$ and it corresponds to a nonnegative eigenvector. Moreover, if $G$ is further connected, then $A(G)$ is an irreducible nonnegative matrix, $\lambda (G)$ is an eigenvalue with multiplicity one and there exists an entry-wise positive eigenvector corresponding to $\lambda (G)$. \medskip The classical extremal graph problems usually study the maximum or minimum number of edges that the extremal graphs can have. Correspondingly, the extremal spectral problems are well-studied in the literature. The spectral Tur\'{a}n function $\mathrm{ex}_{\lambda}(n,F)$ is defined to be the largest spectral radius (eigenvalue) of the adjacency matrix in an $F$-free $n$-vertex graph, that is, \[ \mathrm{ex}_{\lambda}(n,F):=\max \bigl\{ \lambda(G): |G|=n~\text{and}~F\nsubseteq G \bigr\}. \] In 1970, Nosal \cite{Nosal1970} determined the largest spectral radius of a triangle-free graph in terms of the number of edges, which also provided the spectral version of Mantel's theorem. Note that when we consider a graph with a given number of edges, we shall ignore possible isolated vertices if there is no confusion. \begin{theorem}[Nosal, 1970] \label{thmnosal} Let $G$ be a graph on $n$ vertices with $m$ edges. If $G$ is triangle-free, then \begin{equation} \label{eq1} \lambda (G)\le \sqrt{m} , \end{equation} equality holds if and only if $G$ is a complete bipartite graph. Moreover, we have \begin{equation} \label{eq2} \lambda (G)\le \lambda (K_{\lfloor \frac{n}{2}\rfloor, \lceil \frac{n}{2} \rceil } ), \end{equation} equality holds if and only if $G$ is the balanced complete bipartite graph $K_{\lfloor \frac{n}{2}\rfloor, \lceil \frac{n}{2} \rceil }$. \end{theorem} Nosal's theorem implies that if $G$ is a bipartite graph, then $ \lambda (G)\le \sqrt{m} $, equality holds if and only if $G$ is a complete bipartite graph. On the one hand, Nosal's theorem implies the classical Mantel theorem.
Indeed, applying the Rayleigh inequality, we have $\frac{2m}{n}\le \lambda (G)\le \sqrt{m}$, which yields $ m \le \lfloor \frac{n^2}{4} \rfloor$. On the other hand, applying the Mantel theorem to (\ref{eq1}), we obtain that $ \lambda (G)\le \sqrt{m} \le \sqrt{\lfloor {n^2}/{4}\rfloor} =\lambda (K_{\lfloor \frac{n}{2}\rfloor, \lceil \frac{n}{2} \rceil })$. So inequality (\ref{eq1}) in Nosal's theorem implies inequality (\ref{eq2}), which is called the spectral Mantel theorem. \medskip Over the past few years, various extensions and generalizations of Nosal's theorem have been obtained in the literature; see, e.g., \cite{Niki2002cpc,Niki2007laa2,Niki2009jctb,Wil1986} for extensions of $K_{r+1}$-free graphs, \cite{LNW2021,ZLS2021,ZS2022dm,LP2022oddcycle} for extensions of graphs with given size. In addition, many spectral extremal problems are also obtained recently; see \cite{CFTZ20,CDT2022} for the friendship graph and the odd wheel, \cite{LP2021,DKLNTW2021} for intersecting odd cycles and cliques, \cite{Wang2022} for a recent conjecture. We recommend the surveys \cite{NikifSurvey,CZ2018,LFL2022} for interested readers. The eigenvalues of the adjacency matrix can sometimes give information about the structure of a graph. There is a rich history of bounding the eigenvalues of a graph in terms of various parameters; see \cite{BN2007jctb} for spectral radius and cliques, \cite{TT2017,LN2021} for eigenvalues of outerplanar and planar graphs, and \cite{Tait2019} for the Colin de Verdi\`{e}re parameter, excluded minors, and the spectral radius. \medskip In 1986, Wilf \cite{Wil1986} provided the first result regarding the spectral version of Tur\'{a}n's theorem and proved that for every $n$-vertex $K_{r+1}$-free graph $G$, we have \begin{equation} \label{eqwilf} \lambda (G)\le \left(1-\frac{1}{r} \right)n. \end{equation} In 2002, Nikiforov \cite{Niki2002cpc} proved that for every $m$-edge $K_{r+1}$-free graph $G$, \begin{equation} \label{eqniki} \lambda (G)\le \sqrt{2m\left( 1-\frac{1}{r} \right)}. \end{equation} It is worth mentioning that both (\ref{eqwilf}) and (\ref{eqniki}) are direct consequences of the celebrated Motzkin--Straus theorem. On the one hand, combining with $\frac{2m}{n} \le \lambda (G)$, we see that both Wilf's result and Nikiforov's result imply the weak version of Tur\'{a}n's theorem: $m\le (1-\frac{1}{r}) \frac{n^2}{2}$. On the other hand, using the weak Tur\'{a}n theorem $m\le (1-\frac{1}{r}) \frac{n^2}{2}$, we know that Nikiforov's result (\ref{eqniki}) implies Wilf's result (\ref{eqwilf}). \medskip In 2007, Nikiforov \cite{Niki2007laa2} showed a spectral version of Tur\'{a}n's theorem. \begin{theorem}[Nikiforov, 2007] \label{thm460} Let $G$ be a graph on $n$ vertices. If $G$ is $K_{r+1}$-free, then \[ \lambda (G)\le \lambda ({T_r(n)}),\] equality holds if and only if $G$ is the $r$-partite Tur\'{a}n graph $T_r(n)$. \end{theorem} In other words, Nikiforov's result implies that \[ \mathrm{ex}_{\lambda}(n,K_{r+1})= \lambda (T_r(n)).\] By calculation, we can obtain that $(1-\frac{1}{r})n - \frac{r}{4n} \le \lambda (T_r(n))\le (1-\frac{1}{r})n$. Thus, Theorem \ref{thm460} implies Wilf's result (\ref{eqwilf}). It should be mentioned that the spectral version of Tur\'{a}n's theorem was studied earlier and independently by Guiduli in his Ph.D. dissertation \cite[pp. 58--61]{Gui1996}, dating back to 1996, under the guidance of L\'{a}szl\'{o} Babai.
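The estimates above are easy to check numerically. The following short Python sketch (an illustration under our own naming, not part of any cited work) builds the adjacency matrix of $T_r(n)$, computes its spectral radius, and verifies $(1-\frac{1}{r})n - \frac{r}{4n} \le \lambda (T_r(n))\le (1-\frac{1}{r})n$ for a sample choice of $n$ and $r$.
\begin{verbatim}
import numpy as np

def turan_adjacency(n, r):
    # Balanced complete r-partite (Turan) graph T_r(n).
    parts = [n // r + (1 if i < n % r else 0) for i in range(r)]
    labels = np.repeat(np.arange(r), parts)
    return (labels[:, None] != labels[None, :]).astype(float)

n, r = 50, 3
lam = np.linalg.eigvalsh(turan_adjacency(n, r))[-1]
upper = (1.0 - 1.0 / r) * n
lower = upper - r / (4.0 * n)
print(f"lambda(T_{r}({n})) = {lam:.4f} in [{lower:.4f}, {upper:.4f}]")
assert lower <= lam <= upper + 1e-9
\end{verbatim}
When $r$ divides $n$, the Tur\'{a}n graph is regular and $\lambda (T_r(n))$ attains the upper end of the window exactly.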
\medskip A natural question is the following: what is the relation between the spectral Tur\'{a}n theorem and the edge Tur\'{a}n theorem? Does the spectral bound imply the edge bound of Tur\'{a}n's theorem? This question was also proposed and answered in \cite{Gui1996,Niki2009jctb}. The answer is positive. It is well-known that ${e(G)} \le \frac{n}{2}\lambda (G)$, with equality if and only if $G$ is regular. Although the Tur\'{a}n graph $T_r(n)$ is not always regular, it is nearly regular. Upon calculation, it follows that $ {e(T_r(n))} = \left\lfloor \frac{n}{2}\lambda (T_r(n)) \right\rfloor$. With the help of this observation, the spectral Tur\'{a}n theorem implies that \begin{equation*} \label{eqeqq3} e(G) \le \left\lfloor \frac{n}{2} \lambda (G) \right\rfloor \le \left\lfloor \frac{n}{2} \lambda (T_r(n)) \right\rfloor =e(T_r(n)). \end{equation*} Thus the spectral Tur\'{a}n Theorem \ref{thm460} implies the classical Tur\'{a}n Theorem \ref{thmturanstrong}. To some extent, this interesting implication has shed new light on the study of spectral extremal graph theory. \medskip Recently, Lin, Ning and Wu \cite[Theorem 1.4]{LNW2021} proved a generalization of Nosal's theorem for non-bipartite triangle-free graphs (Theorem \ref{thmLNW}). In this paper, we shall extend the result of Lin, Ning and Wu to non-$r$-partite $K_{r+1}$-free graphs. Our result is also a refinement of Theorem \ref{thm460} in the sense of a stability result (Theorem \ref{thm214}). The motivation is inspired by the works \cite{Gui1996,Niki2007laa2,HJZ2013,KN2014}, and it mainly uses the spectral extension of the Zykov symmetrization \cite{Zykov1949}. This article is organized as follows. In Section \ref{sec2}, we shall give an alternative proof of the spectral Tur\'{a}n Theorem \ref{thm460}. To make the proof of Theorem \ref{thm214} more transparent, we will present a different proof of the result of Lin, Ning and Wu \cite{LNW2021} in Section \ref{sec3}. In Section \ref{sec4}, we shall show the detailed proof of our main result (Theorem \ref{thm214}). In Section \ref{sec5}, we shall discuss the spectral extremal problem in terms of the $p$-spectral radius. Section \ref{sec6} contains some possible open problems, including the spectral extremal problems for $F$-free graphs with the chromatic number $\chi (G)\ge t$, the problems in terms of the signless Laplacian spectral radius, and the $A_{\alpha}$-spectral radius of a graph. \section{Alternative proof of Theorem \ref{thm460}} \label{sec2} The proof of Nikiforov \cite{Niki2007laa2} for Theorem \ref{thm460} is more algebraic and based on the characteristic polynomial of the complete $r$-partite graph. Moreover, his proof also relies on a theorem relating the spectral radius and the number of cliques \cite{Niki2002cpc}, as well as an old theorem of Zykov \cite{Zykov1949} (also proved independently by Erd\H{o}s \cite{Erd1962}), which is now known as the clique version of Tur\'{a}n's theorem. In addition, the proof of Guiduli \cite[pp. 58--61]{Gui1996} for the spectral Tur\'{a}n theorem is completely different from that of Nikiforov. The main idea of Guiduli's proof reduces the problem of bounding the largest spectral radius among $K_{r+1}$-free graphs to complete $r$-partite graphs by applying a spectral extension of Erd\H{o}s' degree majorization algorithm \cite{Erd1970}.
Then one can show further that the balanced complete $r$-partite graph attains the maximum spectral radius among all complete $r$-partite graphs; see, e.g., \cite{HJZ2013,KN2014} for more spectral applications, and \cite{Fur2015,BBCLMS2017} for related topics. In this section, we shall provide an alternative proof of Theorem \ref{thm460}. The proof is motivated by the papers \cite{Gui1996,HJZ2013,KN2014}, and it is based on a spectral extension of the Zykov symmetrization \cite{Zykov1949}, which is becoming a powerful tool for extremal graph problems; see, e.g., \cite{FM2017} for a recent application on the minimum number of triangular edges. For $\bm{x}=(x_1,x_2,\ldots ,x_n)^T \in \mathbb{R}^n$, we denote $\lVert \bm{x}\rVert_2= (\sum_{i=1}^n |x_i|^2)^{1/2}$. Since the adjacency matrix $A(G)$ is real and symmetric, the Rayleigh formula gives \[ \lambda (G)=\max_{\lVert \bm{x}\rVert_2=1} \bm{x}^TA(G) \bm{x} =\max_{\lVert \bm{x}\rVert_2=1} 2\sum_{\{i,j\} \in E(G)} x_ix_j. \] We denote $|\bm{x}| = (|x_1|, |x_2|, \ldots ,|x_n|)^T$. If $\bm{x}\in \mathbb{R}^n$ is an optimal vector, i.e., $\lVert \bm{x}\rVert_2=1$ and $\lambda (G)=\bm{x}^TA(G)\bm{x}$, then so is $|\bm{x}|$. Thus there is always a non-negative unit vector $\bm{x}\in \mathbb{R}_{\ge 0}^n$ such that $\lambda (G)= \bm{x}^TA(G)\bm{x}$. Given a vector $\bm{x}$, we know from Rayleigh's formula (or Lagrange's multiplier method) that $\bm{x}$ is an optimal vector if and only if $\bm{x}$ is a unit eigenvector corresponding to $\lambda (G)$. Namely, for every $v\in V(G)$, we have \[ \lambda (G) x_v =(A(G)\bm{x})_v =\sum_{u\in V(G)} a_{vu}x_u = \sum_{u\in N_G(v)} x_u. \] This equation implies that if $G$ is connected, then every nonnegative optimal vector is entry-wise positive. Indeed, otherwise, if $x_v=0$ for some $v\in V(G)$, then we get $x_u=0$ for every $u\in N(v)$. Similarly, we get $x_w=0$ for every $w\in N(u)$. The connectivity of $G$ leads to $x_w=0$ for every $w\in V(G)$, and so $\bm{x}$ is a zero vector, which is a contradiction. Thus there exists an entry-wise positive eigenvector $\bm{x}\in \mathbb{R}_{>0}^n$ corresponding to $\lambda (G)$ whenever $G$ is a connected graph. This fact will be frequently used throughout the paper. \medskip The following lemma was proved by Feng, Li and Zhang in \cite[Theorem 2.1]{FLZ2007} by using the characteristic polynomial of a complete multipartite graph; see, e.g., \cite[Theorem 2]{KNY2015} for an alternative proof and more extensions. \begin{lemma}[Feng--Li--Zhang, 2007] \label{lem-FLZ} If $G$ is an $r$-partite graph on $n$ vertices, then \[ \lambda (G) \le \lambda (T_r(n)), \] equality holds if and only if $G$ is the $r$-partite Tur\'{a}n graph $T_r(n)$. \end{lemma} Now, we present our alternative proof of Theorem \ref{thm460}. \begin{proof}[{\bf Proof of Theorem \ref{thm460}}] Let $G$ be a $K_{r+1}$-free graph on $n$ vertices with the maximum spectral radius, and let $V(G)=\{1,2,\ldots ,n\}$. Firstly, we show that $G$ is a connected graph. Otherwise, adding a new edge between a component attaining the spectral radius of $G$ and any other component will strictly increase the spectral radius of $G$, and it does not create a copy of $K_{r+1}$. Since $G$ is connected, we take $\bm{x} \in \mathbb{R}_{>0}^n$ as a unit positive eigenvector of $\lambda (G)$. Hence, we have \[ \lambda (G)= 2 \sum_{\{i,j\} \in E(G)} x_ix_j . \] Our goal is to show that $G$ is the Tur\'{a}n graph $T_r(n)$.
First of all, we will prove that $G$ must be a complete $t$-partite graph for some integer $t$. Since $G$ is $K_{r+1}$-free, this implies $2\le t\le r$. Since $G$ attains the maximum spectral radius, Lemma \ref{lem-FLZ} implies that $G$ is moreover balanced, i.e., $G$ is the $t$-partite Tur\'{a}n graph $T_t(n)$. Note that $\lambda (T_t(n))\le \lambda (T_r(n))$. The maximality gives $t=r$ and $G=T_r(n)$. We assume on the contrary that $G$ is not complete $t$-partite for any $ t\in [2, r]$, so there are three vertices $u,v,w \in V(G)$ such that $vu\notin E(G)$ and $uw\notin E(G)$ while $vw\in E(G)$. (This reveals that the non-adjacency relation between vertices is not an equivalence relation here, as it does not satisfy transitivity.) Throughout the paper, we denote by $s_G(v,\bm{x}) $ the sum of weights of vertices in $N_G(v)$. Namely, \[ \boxed{s_G(v,\bm{x}) :=\sum_{i\in N_G(v)} x_i.} \] \begin{figure}[htbp] \centering \includegraphics[scale=0.8]{proofTuran.png} \caption{The spectral Zykov symmetrization.} \label{fig-1} \end{figure} {\bf Case 1.}~ $s_G(u,\bm{x}) < s_G(v,\bm{x}) $ or $ s_G(u,\bm{x}) < s_G(w,\bm{x}) $. We may assume that $s_G(u,\bm{x}) < s_G(v,\bm{x})$. Then we duplicate the vertex $v$, that is, we create a new vertex $v'$ which has exactly the same neighbors as $v$, but $vv'$ is not an edge, and we delete the vertex $u$ and its incident edges; see the left graph in Figure \ref{fig-1}. Moreover, we distribute the value $x_u$ to the new vertex $v'$, and keep the other coordinates of $\bm{x}$ unchanged. It is not hard to verify that the new graph $G'$ still has no copy of $K_{r+1}$ and \begin{equation*} \begin{aligned} \lambda (G')\ge 2\sum_{\{i,j\} \in E(G')} x_ix_j &= 2 \sum_{\{i,j\} \in E(G)} x_ix_j - 2x_u s_G(u,\bm{x}) + 2x_u s_G(v,\bm{x})\\ &> 2 \sum_{\{i,j\} \in E(G)} x_ix_j =\lambda (G), \end{aligned} \end{equation*} where we used the positivity of vector $\bm{x}$. This contradicts the choice of $G$. {\bf Case 2.}~ $s_G(u,\bm{x}) \ge s_G(v,\bm{x})$ and $s_G(u,\bm{x}) \ge s_G(w,\bm{x})$. We copy the vertex $u$ twice and delete vertices $v$ and $w$ with their incident edges; see the right graph in Figure \ref{fig-1}. Similarly, we distribute the value $x_v$ to the new vertex $u'$, and $x_w$ to the new vertex $u''$, and keep the other coordinates of $\bm{x}$ unchanged. Moreover, the new graph $G''$ contains no copy of $K_{r+1}$ and \begin{equation*} \begin{aligned} \lambda (G'')\ge 2\sum_{\{i,j\} \in E(G'')} x_ix_j &=2 \sum_{\{i,j\} \in E(G)} x_ix_j - 2x_v s_G(v,\bm{x}) - 2x_w s_G(w,\bm{x}) \\ & \quad \quad \quad \,\,\,\, + 2x_vx_w + 2x_v s_G(u,\bm{x})+ 2x_w s_G(u,\bm{x}) \\ &> 2 \sum_{\{i,j\} \in E(G)} x_ix_j =\lambda (G). \end{aligned} \end{equation*} So we get a contradiction again. \end{proof} We conclude here that the spectral version of Zykov's symmetrization starts with a $K_{r+1}$-free graph $G$ with vertex set $V=\{1,2,\ldots ,n\}$, and at each step takes two {\it non-adjacent} vertices $v_i$ and $v_j$ such that $s_G(v_i, \bm{x}) > s_G(v_j, \bm{x})$, deletes all edges incident to $v_j$, and adds new edges between the vertex $v_j$ and the neighborhood $N(v_i)$. We do the same if $s_G(v_i, \bm{x}) = s_G(v_j, \bm{x})$ and $N(v_i) \neq N(v_j)$ for $i<j$.
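For concreteness, the following Python sketch (our illustration, not part of the original proofs) performs one step of this spectral Zykov symmetrization on an adjacency matrix: it computes the Perron vector, locates a non-adjacent pair with differing neighborhoods, and makes the vertex with the smaller weighted neighborhood sum a copy of the other.
\begin{verbatim}
import numpy as np

def zykov_step(A):
    # One spectral Zykov symmetrization step on adjacency matrix A.
    # Returns the new matrix, or None when no admissible pair remains,
    # i.e. when A is already complete multipartite.
    n = A.shape[0]
    x = np.abs(np.linalg.eigh(A)[1][:, -1])  # Perron vector
    s = A @ x                                # s[i] = s_G(i, x)
    for i in range(n):
        for j in range(i + 1, n):
            if A[i, j] == 0 and not np.array_equal(A[i], A[j]):
                hi, lo = (i, j) if s[i] >= s[j] else (j, i)
                B = A.copy()
                B[lo, :] = B[hi, :]  # lo inherits hi's neighborhood
                B[:, lo] = B[:, hi]  # keep B symmetric; diagonal stays 0
                return B
    return None
\end{verbatim}
Iterated with the tie-breaking rule described above, the procedure turns a $K_{r+1}$-free graph into a complete multipartite graph with at most $r$ parts, as discussed next.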
The spectral version of Zykov's symmetrization does not increase the size of the largest clique and does not decrease the spectral radius\footnote{Using Rayleigh's formula or Lagrange's multiplier method, one can show further that the spectral radius increases strictly whenever all coordinates of the vector $\bm{x}$ are positive.}. When the process terminates, it yields a complete multipartite graph with at most $r$ vertex parts. Indeed, if the terminal graph were not complete multipartite, there would be three vertices $u,v,w\in V(G)$ such that $vu\notin E(G)$ and $uw\notin E(G)$ but $vw\in E(G)$; applying the same case analysis as in the proof of Theorem \ref{thm460}, we would get a new graph with larger spectral radius, which is a contradiction.

\medskip

We now illustrate the difference between the spectral extensions of the Erd\H{o}s degree majorization algorithm and the Zykov symmetrization. Recall that the spectral version of the Erd\H{o}s degree majorization algorithm asks us to choose a vertex $v\in V(G)$ with maximum value of $s_G(v ,\bm{x})$ among all vertices of $G$; we remove all edges incident to vertices of $V(G)\setminus ( N_G(v) \cup \{v\})$, and then add all edges between $N_G(v)$ and $V(G)\setminus N_G(v)$. We observe that this operation makes each vertex of $V(G)\setminus (N_G(v)\cup \{v\})$ a copy of the vertex $v$. Since $G$ is $K_{r+1}$-free, we see that the subgraph of $G$ induced by $N_G(v)$ is $K_r$-free. We denote $V_1=V(G) \setminus N_G(v)$. Next, we do the same operation on the vertex set ${V_1^c}=N_G(v)$. More precisely, we further choose a vertex $u\in {V_1^c}$ with maximum value of $s_G(u,\bm{x})$ over all vertices of $ V_1^c$; we remove all edges incident to vertices of $V_1^c \setminus (N_{V_1^c}(u) \cup \{u\})$, and then add all edges between $N_{V_1^c}(u)$ and $V_1^c \setminus N_{V_1^c}(u)$. By using this operation repeatedly, we get a complete $r$-partite graph $H$ on the same vertex set $V(G)$. Furthermore, one can verify that the majorization inequality $s_G(v,\bm{x}) \le s_H(v, \bm{x})$ holds for every vertex $v\in V(G)$; see, e.g., \cite{Gui1996, HJZ2013, KN2014}.

\medskip

The spectral extensions of the Erd\H{o}s majorization algorithm and the Zykov symmetrization share some similarities. For example, both operations ask us to compare sums of weights of neighbors, and both turn a $K_{r+1}$-free graph into a complete $r$-partite graph. Importantly, these two operations do not create a copy of $K_{r+1}$ and do not decrease the value of the spectral radius. The only difference between them is that one step of the Erd\H{o}s operation changes many vertices and their incident edges, while one step of the Zykov operation changes only two vertices and their incident edges. This subtle difference will bring great convenience later, in Sections \ref{sec3} and \ref{sec4}. As a matter of fact, each step of the Erd\H{o}s operation comprises many applications of the Zykov operation; in other words, each step of the Erd\H{o}s operation can be decomposed into a series of Zykov operations.

\section{Refinement for triangle-free graphs}
\label{sec3}

Mantel's theorem has many interesting applications and miscellaneous generalizations in the literature; see, e.g., \cite{Bollobas78,BT1981,Bon1983,Sim13} and references therein. In particular, Mantel's Theorem \ref{thmMan} was refined into the following stability form.

\begin{theorem}[Erd\H{o}s] \label{thmErd}
Let $G$ be an $n$-vertex graph containing no triangle. If $G$ is not bipartite, then $ e(G)\le \lfloor \frac{(n-1)^2}{4} \rfloor+1 $.
\end{theorem}

\begin{figure}[htbp]
\centering
\includegraphics[scale=0.75]{Erdos.png} \\
\caption{Two drawings of extremal graphs in Theorem \ref{thmErd}.}
\label{fig-2}
\end{figure}

This stability result is attributed to Erd\H{o}s; see \cite[Page 306, Exercise 12.2.7]{BM2008}. The bound in Theorem \ref{thmErd} is best possible and the extremal graph is not unique. To show that the bound is sharp for all integers $n$, we take two vertex sets $X$ and $Y$ with $|X|= \lfloor \frac{n}{2} \rfloor$ and $|Y|= \lceil \frac{n}{2} \rceil$. We take two vertices $u,v \in Y$ and join them; then we add every edge between $X$ and $Y\setminus \{u,v\}$. We partition $X$ into two parts $X_1$ and $X_2$ {\it arbitrarily} (this shows that the extremal graph is not unique), and then we connect $u$ to every vertex in $X_1$, and $v$ to every vertex in $X_2$; see Figure \ref{fig-2}. This yields a graph $G$ which contains no triangle and satisfies $e(G)= \lfloor \frac{n^2}{4}\rfloor - \lfloor \frac{n}{2}\rfloor +1 = \lfloor \frac{(n-1)^2}{4} \rfloor +1 $. Note that $G$ has a $5$-cycle, so $G$ is not bipartite.

\medskip

In 2021, Lin, Ning and Wu \cite[Theorem 1.4]{LNW2021} proved a generalization of (\ref{eq2}) in Nosal's Theorem \ref{thmnosal} for non-bipartite graphs. Let $SK_{\lfloor \frac{n-1}{2}\rfloor,\lceil \frac{n-1}{2}\rceil}$ denote the graph obtained from the complete bipartite graph $K_{\lfloor \frac{n-1}{2}\rfloor,\lceil \frac{n-1}{2}\rceil}$ by subdividing one edge; see Figure \ref{fig-3}. Clearly, $SK_{\lfloor \frac{n-1}{2}\rfloor,\lceil \frac{n-1}{2}\rceil}$ is one of the extremal graphs in Theorem \ref{thmErd}, obtained by setting $|X_1|=\lfloor \frac{n}{2}\rfloor -1$ and $|X_2|=1$ in Figure \ref{fig-2}.

\begin{figure}[htbp]
\centering
\includegraphics[scale=0.75]{Non-bipartite-LNW.png}
\caption{Two drawings of the graph $SK_{\lfloor \frac{n-1}{2}\rfloor,\lceil \frac{n-1}{2}\rceil}$.}
\label{fig-3}
\end{figure}

\begin{theorem}[Lin--Ning--Wu, 2021] \label{thmLNW}
Let $G$ be an $n$-vertex graph. If $G$ is triangle-free and non-bipartite, then
\[ \lambda (G) \le \lambda (SK_{\lfloor \frac{n-1}{2}\rfloor,\lceil \frac{n-1}{2}\rceil}), \]
equality holds if and only if $G=SK_{\lfloor \frac{n-1}{2}\rfloor,\lceil \frac{n-1}{2}\rceil}$.
\end{theorem}

Theorem \ref{thmLNW} is also a spectral version of Erd\H{o}s' stability Theorem \ref{thmErd}, while the extremal graph in the spectral problem is unique, namely $SK_{\lfloor \frac{n-1}{2}\rfloor,\lceil \frac{n-1}{2}\rceil}$; see \cite{LG2021,LSY2022} for a recent extension to graphs without short odd cycles, and \cite{LP2022} for more stability theorems on spectral graph problems.

\medskip

In this section, we shall provide an alternative proof of Theorem \ref{thmLNW}. One of the key ideas in the proof is to use the spectral Zykov symmetrization, which conveniently yields a clear approximate structure of the required extremal graph. Moreover, the ideas in this proof enable us to extend Theorem \ref{thmLNW} to $K_{r+1}$-free non-$r$-partite graphs, which will be discussed in Section \ref{sec4}. Before starting the proof, we include the following lemma, which follows by direct computation; see, e.g., \cite[Appendix A]{LNW2021}.

\begin{lemma} \label{lempp}
If $G$ is a graph on $n=a+b+1$ vertices obtained from $K_{a,b}$ by subdividing an edge arbitrarily, then
\[ \lambda (G) \le \lambda (SK_{\lfloor \frac{n-1}{2}\rfloor,\lceil \frac{n-1}{2}\rceil}), \]
equality holds if and only if $G=SK_{\lfloor \frac{n-1}{2}\rfloor,\lceil \frac{n-1}{2}\rceil}$.
\end{lemma}

\begin{proof}
We denote by $SK_{a,b}$ the graph obtained from $K_{a,b}$ by subdividing an edge. Let $s,t$ be two positive integers with $t\ge s\ge 1$. It suffices to show that
\[ \lambda (SK_{s+1,t+3}) < \lambda (SK_{s+2,t+2}). \]
By computation, the spectral radius of $SK_{a,b}$ is the largest root of
\begin{equation*} \begin{aligned} F_{a,b}(x) :=x^5 - (ab+1) x^3+ (3ab -2a -2b +1)x -2ab +2a +2b -2. \end{aligned} \end{equation*}
Hence $\lambda(SK_{s+2,t+2})$ is the largest root of
\[ F_{s+2,t+2}(x)= x^5 - (2s+2t +st +5)x^3 + (4s+4t +3st +5)x -2s-2t -2st -2. \]
Similarly, $\lambda (SK_{s+1,t+3})$ is the largest root of $F_{s+1,t+3}(x)$. Note that
\[ F_{s+2,t+2}(x) - F_{s+1,t+3}(x) = - (x-1)^2(x+2)(t-s+1). \]
This implies $ F_{s+2,t+2}(x) < F_{s+1,t+3}(x)$ for every $x>1$. Since $K_{2,3}$ is a subgraph of $SK_{s+1,t+3}$, we know that $\lambda (SK_{s+1,t+3}) \ge \lambda (K_{2,3})=\sqrt{6}$. Thus, we have
\[ F_{s+2,t+2}(\lambda (SK_{s+1,t+3})) < F_{s+1,t+3}(\lambda (SK_{s+1,t+3}))=0. \]
Therefore, we obtain $\lambda (SK_{s+1,t+3}) < \lambda (SK_{s+2,t+2})$.
\end{proof}

Now we are ready to present our proof of Theorem \ref{thmLNW}. For two {\it non-adjacent} vertices $u,v\in V(G)$, we define the Zykov symmetrization $Z_{u,v}(G)$ to be the graph obtained from $G$ by replacing $u$ with a twin of $v$, that is, deleting all edges incident to the vertex $u$, and then adding new edges from $u$ to $N_G(v)$. We can verify that the Zykov symmetrization increases neither the clique number $\omega (G)$ nor the chromatic number $\chi (G)$. More precisely, we have $\omega (Z_{u,v}(G)) = \omega (G \setminus \{u\})$ and $\chi (Z_{u,v}(G)) = \chi (G \setminus \{u\})$.

Let $\bm{x} \in \mathbb{R}_{>0}^n$ be a {\it positive unit eigenvector} corresponding to $\lambda (G)$. Recall that $s_G(v,\bm{x}) =\sum_{i\in N_G(v)} x_i$ denotes the sum of weights of all neighbors of $v$ in $G$. For two {\it non-adjacent} vertices $u,v$, if $s_G(u,\bm{x}) < s_G(v,\bm{x})$, then we replace $G$ with $Z_{u,v}(G)$. Clearly, the spectral Zykov symmetrization does not create triangles. More importantly, it strictly increases the spectral radius, since
\begin{equation*} \begin{aligned} \lambda (Z_{u,v}(G)) \ge 2\sum_{\{i,j\} \in E(Z_{u,v}(G))} x_ix_j &= 2 \sum_{\{i,j\} \in E(G)} x_ix_j - 2x_u s_G(u,\bm{x}) + 2x_u s_G(v,\bm{x})\\ &> 2 \sum_{\{i,j\} \in E(G)} x_ix_j =\lambda (G). \end{aligned} \end{equation*}
If $s_G(u,\bm{x}) = s_G(v,\bm{x})$ and $N_G(u)\neq N_G(v)$, then we can apply either $Z_{u,v}$ or $Z_{v,u}$, which leads to $N(u)=N(v)$ after the spectral Zykov symmetrization, while the operation still increases the spectral radius strictly. Indeed, we can easily see that
\[ \lambda (Z_{u,v}(G)) \ge 2\sum_{\{i,j\} \in E(Z_{u,v}(G))} x_ix_j = 2 \sum_{\{i,j\} \in E(G)} x_ix_j =\lambda (G). \]
We claim that $\lambda (Z_{u,v}(G)) > \lambda (G)$. Assume on the contrary that $\lambda (Z_{u,v}(G)) = \lambda (G)$; then the inequality above becomes an equality, and thus $\bm{x}$ is an eigenvector of $\lambda (Z_{u,v}(G))$. Namely, $A(Z_{u,v}(G))\bm{x} = \lambda (Z_{u,v}(G)) \bm{x} =\lambda (G)\bm{x}$. Taking any vertex $z\in N_G(v) \setminus N_G(u)$, we observe that $\lambda (Z_{u,v}(G))x_z=\sum_{t\in N_G(z)\cup \{u\}} x_t > \sum_{t\in N_G(z)} x_t = \lambda (G)x_z$. Consequently, we get $\lambda (Z_{u,v}(G)) > \lambda (G)$, which contradicts our assumption. It is worth emphasizing that the positivity of $\bm{x}$ is necessary in the above discussion.
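Incidentally, polynomial identities such as $F_{s+2,t+2}(x)-F_{s+1,t+3}(x)=-(x-1)^2(x+2)(t-s+1)$ from the proof of Lemma \ref{lempp} are easy to double-check with a computer algebra system; the following sympy sketch (illustrative only, not part of any proof) verifies the identity symbolically:

\begin{verbatim}
# Hedged sketch: symbolic check of the identity in the proof of Lemma lempp.
import sympy as sp

x, s, t = sp.symbols('x s t')

def F(a, b):
    # the quintic whose largest root is the spectral radius of SK_{a,b}
    return (x**5 - (a*b + 1)*x**3 + (3*a*b - 2*a - 2*b + 1)*x
            - 2*a*b + 2*a + 2*b - 2)

diff = sp.expand(F(s + 2, t + 2) - F(s + 1, t + 3))
target = sp.expand(-(x - 1)**2 * (x + 2) * (t - s + 1))
print(sp.simplify(diff - target) == 0)   # True: the identity holds
\end{verbatim}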
Roughly speaking, applying the spectral Zykov symmetrization makes a $K_{r+1}$-free graph more regular in some sense, according to the weights of the eigenvector.

\begin{proof}[{\bf Proof of Theorem \ref{thmLNW}}]
Let $G$ be a non-bipartite triangle-free graph on $n$ vertices with the largest spectral radius. Our goal is to show that $G=SK_{\lfloor \frac{n-1}{2}\rfloor,\lceil \frac{n-1}{2}\rceil}$. Clearly, we know that $G$ is connected; otherwise, adding an edge between a component with the maximum spectral radius and any other component would strictly increase the spectral radius. Since $G$ is connected, there exists a positive unit eigenvector corresponding to $\lambda (G)$, and we denote such a vector by $\bm{x}=(x_1,\ldots ,x_n)^T$, where $x_i>0$ for every $i$.

Since $G$ is triangle-free, we repeatedly apply the spectral Zykov symmetrization to every pair of non-adjacent vertices until the graph becomes bipartite. Without loss of generality, we may assume that $G$ is triangle-free and non-bipartite, while $Z_{u,v}(G)$ is bipartite. We are next going to show that $\lambda (G) \le \lambda (SK_{\lfloor \frac{n-1}{2}\rfloor,\lceil \frac{n-1}{2}\rceil})$, where equality holds if and only if $G=SK_{\lfloor \frac{n-1}{2}\rfloor,\lceil \frac{n-1}{2}\rceil}$.

Since $Z_{u,v}(G)$ is bipartite, we know that $G\setminus \{u\}$ is bipartite. We denote $V(G)\setminus \{u\}=V_1\cup V_2 $, where $V_1,V_2$ are disjoint and $|V_1| + |V_2|=n-1$. Write $C=N(u)\cap V_1$ and $D=N(u)\cap V_2$. We denote $A=V_1 \setminus C$ and $B=V_2\setminus D$. Since $G$ is triangle-free, there are no edges between the parts $C$ and $D$. As $G$ attains the largest spectral radius, we know that the pairs of parts $(A,B),(A,D)$ and $(B,C)$ induce complete bipartite subgraphs; see Figure \ref{fig-4}.

\begin{figure}[htbp]
\centering
\includegraphics[scale=0.8]{proofLNW.png}
\caption{An approximate structure of $G$.}
\label{fig-4}
\end{figure}

Since each vertex in $A$ has the same neighborhood, the coordinates $\{x_v:v\in A\}$ are all equal. This property holds similarly for the vertices in $B,C$ and $D$ respectively. Thus, we write $x_a$ for the common value of the entries of $\bm{x}$ on the vertex set $A$; the values $x_b,x_c $ and $x_d$ are defined similarly. The remaining steps of our proof are outlined as follows.

\begin{itemize}
\item[\ding{73}] If $|A|x_a \ge |B|x_b$, then we delete $|C|-1$ vertices in $C$ with their incident edges, and add $|C|-1$ vertices to $D$ and connect these vertices to $A\cup \{u\}$. We keep the weight of these new vertices equal to $x_c$ and denote the new graph by $G'$. We can verify that $\lambda (G') \ge 2 \sum_{\{i,j\}\in E(G)} x_ix_j - 2(|C|-1)|B| x_cx_b + 2(|C|-1)|A|x_cx_a \ge 2\sum_{\{i,j\}\in E(G)} x_ix_j =\lambda (G)$. In fact, we can further prove that $\lambda (G') > \lambda (G)$. Otherwise, if $\lambda (G')= \lambda (G)$, then $\bm{x}$ is the Perron vector of $G'$, that is, $A(G')\bm{x}=\lambda (G')\bm{x}=\lambda (G)\bm{x}$. Taking any vertex $z\in A$, we observe that $\lambda (G')x_z=\sum_{v\in N_{G'}(z)} x_v = \sum_{v\in N_G(z)} x_v + (|C|-1)x_c > \sum_{v\in N_G(z)} x_v = \lambda (G) x_z$, and then $\lambda (G') > \lambda (G)$, which is a contradiction.

\item[\ding{78}] If $|A|x_a < |B|x_b$, then we can delete $|D|-1$ vertices from $D$ with their incident edges, and add $|D|-1$ vertices to $C$ and join these vertices to $B\cup \{u\}$. Similarly, we can show that this process strictly increases the spectral radius.
From the above discussion, we can always move vertices so as to force either $|C|=1$ or $|D|=1$. Without loss of generality, we may assume that $|C|=1$ and $C=\{c\}$.

\item[\ding{77}] If $x_u\ge x_c$, then we remove $|B|-1$ vertices from $B$ with their incident edges, and add $|B|-1$ vertices to $D$ and join these vertices to $A\cup \{u\}$. We keep the weight of these new vertices equal to $x_b$ and denote the new graph by $G^*$. Then $\lambda (G^*) \ge 2 \sum_{\{i,j\}\in E(G)} x_ix_j - 2(|B|-1)x_bx_c +2(|B|-1)x_bx_u \ge 2 \sum_{\{i,j\}\in E(G)} x_ix_j =\lambda (G)$. Furthermore, by Rayleigh's formula, we know that the first inequality holds strictly. Thus we conclude that in the new graph $G^*$ the set $B$ is a single vertex, say $B=\{b\}$. We observe that the graph $G^*$ is obtained from a complete bipartite graph on $(A\cup \{u\}, \{b\} \cup D)$ by subdividing the edge $\{b,u\}$.

\item[\ding{72}] If $x_u< x_c$, then we delete $|D|-1$ vertices from $D$ with their incident edges, and add $|D|-1$ vertices to $B$ and join these new vertices to $A\cup \{c\}$. Keeping the weights of the vertices unchanged, we denote the new graph by $G^{\star}$. Then we can similarly get $\lambda (G^{\star}) > \lambda (G)$. In the graph $G^{\star}$, we have $|D|=1$ and write $D=\{d\}$. Thus $G^{\star}$ is obtained from a complete bipartite graph on $(A\cup \{c\}, B\cup \{d\})$ by subdividing the edge $\{c,d\}$.
\end{itemize}

From our discussion above, we know that if $G$ is an $n$-vertex triangle-free non-bipartite graph attaining the maximum spectral radius, then $G$ is obtained from a complete bipartite graph by subdividing exactly one edge. Lemma \ref{lempp} then implies that $G$ is obtained by subdividing an edge of a balanced complete bipartite graph on $n-1$ vertices.
\end{proof}

\section{Refinement of spectral Tur\'{a}n theorem}
\label{sec4}

In 1981, Brouwer \cite{Bro1981} proved the following improvement on Tur\'{a}n's Theorem \ref{thmturanstrong}.

\begin{theorem}[Brouwer, 1981] \label{thmBrouwer}
Let $n\ge 2r+1$ be an integer and $G$ be an $n$-vertex graph. If $G$ is $K_{r+1}$-free and $G$ is not $r$-partite, then
\[ e(G)\le e(T_r(n)) - \left\lfloor \frac{n}{r} \right\rfloor +1. \]
\end{theorem}

Theorem \ref{thmBrouwer} was also independently studied in many references, e.g., \cite{AFGS2013,HT1991,KP2005,TU2015}. As with Theorem \ref{thmErd}, the bound of Theorem \ref{thmBrouwer} is sharp and there are many extremal graphs attaining it. Let us explain why we are interested in the study of the family of non-$r$-partite graphs. On the one hand, the Erd\H{o}s degree majorization algorithm \cite{Erd1970} or \cite[pp. 295--296]{Bollobas78} implies that if $G$ is an $n$-vertex $K_{r+1}$-free graph, then there exists an $r$-partite graph $H$ on the same vertex set $V(G)$ such that $d_G(v) \le d_H(v)$ for every vertex $v$. Consequently, we get $e(G)\le e(H)\le e(T_r(n))$. Hence it is meaningful to determine the family of graphs attaining the second largest value of the extremal function. This problem is also called the stability problem. On the other hand, there are various ways to study extremal graph problems under some reasonable constraints. For example, the condition of being non-$r$-partite is equivalent to saying that the chromatic number satisfies $\chi (G)\ge r+1$. Moreover, we can also consider the extremal problem under the restriction $\alpha (G) \le f(n)$ for a given function $f(n)$, where $\alpha (G)$ is the independence number of $G$.
This is the well-known Ramsey--Tur\'{a}n problem; see \cite{SS2001} for a comprehensive survey.

The proof of Theorem \ref{thmLNW} presented in Section \ref{sec3} provides an effective treatment of the extremal spectral problem when $K_{r+1}$ is a forbidden subgraph. In what follows, we shall extend Lin--Ning--Wu's Theorem \ref{thmLNW}. Our result is also a spectral version of Brouwer's Theorem \ref{thmBrouwer}.

Recall that $T_r(n)$ is the $n$-vertex $r$-partite Tur\'{a}n graph in which the parts $T_1$, $T_2$, $\ldots ,T_r$ have sizes $t_1,t_2,\ldots ,t_r$ respectively. We may assume that $\lfloor \frac{n}{r}\rfloor = t_1\le t_2 \le \cdots \le t_r = \lceil \frac{n}{r}\rceil$. Now, we are going to define a new graph obtained from $T_r(n)$. Firstly, we choose two vertex parts $T_1$ and $T_r$. Secondly, we add a new edge into $T_r$, denoted by $uw$, and then remove all edges between $T_1$ and $\{u,w\}$. Finally, we connect $u$ to a vertex $v\in T_1$, and connect $w$ to the remaining vertices of $T_1$. The resulting graph is denoted by $Y_r(n)$; see Figure \ref{fig-5}. Clearly, $Y_r(n)$ is one of the extremal graphs of Brouwer's theorem.

\begin{figure}[htbp]
\centering
\includegraphics[scale=0.75]{Impro-Turan.png}
\caption{The graph $Y_r(n)$ for $n=13 $ and $r=3$.}
\label{fig-5}
\end{figure}

\begin{lemma} \label{lem42}
Let $K_{b_1,b_2,\ldots ,b_r}$ be the complete $r$-partite graph with parts $B_1,B_2,\ldots ,B_r$ satisfying $|B_i|=b_i$ for every $i\in [r]$ and $\sum_{i=1}^r b_i=n-1$. Let $G$ be an $n$-vertex graph obtained from $K_{b_1,b_2,\ldots ,b_r}$ by adding a new vertex $u$, choosing $v\in B_1$ and $w\in B_2$, removing the edge $vw$, and adding the edges $uv, uw$ and $ut$ for every $t\in \cup_{i=3}^r B_i$. Then
\[ \lambda (G) \le \lambda (Y_r(n)). \]
Moreover, the equality holds if and only if $G=Y_r(n)$.
\end{lemma}

Next, we illustrate the construction of $Y_r(n)$ in another way. Let $T_r(n-1)$ be the $r$-partite Tur\'{a}n graph on $n-1$ vertices whose parts $S_1$, $S_2, \ldots $, $S_r$ have sizes $s_1,s_2,\ldots ,s_r$ such that $\lfloor \frac{n-1}{r}\rfloor = s_1\le s_2 \le \cdots \le s_r = \lceil \frac{n-1}{r}\rceil$. Note that $Y_r(n)$ can also be obtained from $T_r(n-1)$ by adding a new vertex $u$, choosing two vertices $v\in S_1$ and $w\in S_2$, deleting the edge $vw$, and adding the edges $uv,uw$ and $ut$ for every vertex $t \in \cup_{i=3}^r S_i$. Hence Lemma \ref{lem42} states that $G$ attains the maximum spectral radius when its part sizes $b_1,b_2,\ldots ,b_r$ are as equal as possible and the two special vertices $v,w$ are located in the two smallest parts respectively. We know that $\lambda (G)$ is the largest root of the characteristic polynomial $P_G(x)=\det (xI_n- A(G))$. It is feasible to compute $\lambda (G)$ exactly for some small integers $r$ by computer, while this seems complicated for large $r$.

\begin{proof}[{\bf Proof of Lemma \ref{lem42}}]
Let $G$ be a graph satisfying the requirements of Lemma \ref{lem42} that attains the maximum spectral radius. We will show that $G=Y_r(n)$. Since $G$ is connected, there exists a positive unit eigenvector $\bm{x}\in \mathbb{R}^n$ corresponding to $\lambda (G)$. Then $A(G) \bm{x}=\lambda (G) \bm{x}$ and
\[ \lambda (G)= \bm{x}^TA(G)\bm{x}=2\sum_{\{i,j\}\in E(G)} x_ix_j. \]
Moreover, the eigen-equation gives that $ \lambda (G) x_v= \sum_{u\in N(v)} x_u$ for every $v\in V(G)$.
It follows that if two non-adjacent vertices have the same neighborhood, then they have the same value in the corresponding coordinate of $\bm{x}$. Thus all coordinates of $\bm{x}$ corresponding to the vertices of $B_i$ are equal for every $i\in \{3,4,\ldots ,r\}$. We write $x_i$ for the value of the entries of $\bm{x}$ corresponding to vertices of $B_i$ for each $i\in \{3,\ldots ,r\}$. We denote $B_1^-=B_1 \setminus \{v\}$ and $B_2^-=B_2 \setminus \{w\}$. Similarly, all coordinates of $\bm{x}$ corresponding to the vertices of $B_i^-$ are equal for $i\in \{1,2\}$.

Assume on the contrary that $G$ is not isomorphic to $Y_r(n)$. In other words, there are two parts $B_i$ and $B_j$ such that $|b_i-b_j| \ge 2$, or $b_i \le b_j-1$ for some $i\in \{3,4,\ldots ,r\}$ and $j\in \{1,2\}$. By symmetry, there are four cases, listed below.

\begin{itemize}
\item[(i)] $b_i \le b_j-2$ for some $i,j\in \{3,\ldots ,r\}$;
\item[(ii)] $b_1 \le b_2 -2$;
\item[(iii)] $b_1\le b_i-2$ for some $i\in \{3,\ldots ,r\}$;
\item[(iv)] $b_i\le b_1-1 $ for some $i\in \{3,\ldots ,r\}$.
\end{itemize}

{\bf Case 1.} First and foremost, we consider case (i), namely $b_i \le b_j -2$ for some $i, j \in \{3,\ldots , r\}$. The treatment for this case has its roots in \cite{KNY2015}. If $b_i+b_j=2b$ for some integer $b$, then we balance the numbers of vertices of the parts $B_i$ and $B_j$. Namely, we define a new graph $G'$ obtained from $G$ by deleting all edges between $B_i$ and $B_j$; then we move some vertices from $B_j$ to $B_i$ such that the resulting sets, say $B_i',B_j'$, have size $b$, and then we add all edges between $B_i'$ and $B_j'$. In this process, we keep the other edges unchanged. We define a vector $\bm{y}\in \mathbb{R}^n$ such that $y_s=(\frac{b_ix_i^2 +b_jx_j^2}{2b})^{1/2}$ for every vertex $s \in B_i'\cup B_j'$, and $y_t=x_t$ for every $t \in V(G')\setminus (B_i' \cup B_j')$. Clearly, $\sum_{v\in V(G')} y_v^2=1$. Furthermore, we have
\begin{align*} \bm{y}^TA(G')\bm{y} - \bm{x}^TA(G)\bm{x} = 2((b y_s)^2- b_ix_ib_jx_j) + 2(2b y_s- (b_ix_i + b_j x_j))\sum_{t\notin B_i'\cup B_j'} x_t. \end{align*}
Note that $b=\frac{b_i+b_j}{2}> \sqrt{b_ib_j}$ and
\[ (b y_s)^2=b^2\cdot \frac{b_ix_i^2 +b_jx_j^2}{2b} \ge b \sqrt{b_ix_i^2b_jx_j^2}> b_ix_ib_jx_j. \]
Moreover, the weighted power-mean inequality gives
\[ 2by_s =2b \left(\frac{b_ix_i^2 +b_jx_j^2}{b_i+b_j} \right)^{1/2} \ge 2b \frac{b_ix_i +b_jx_j}{b_i+b_j}=b_ix_i +b_jx_j. \]
Thus we get $\bm{y}^TA(G')\bm{y} > \bm{x}^TA(G)\bm{x}$. Rayleigh's formula gives
\[ \lambda (G') \ge \bm{y}^TA(G')\bm{y} > \bm{x}^TA(G)\bm{x} = \lambda (G), \]
which contradicts the choice of $G$.

If $b_i+b_j=2b+1$ for some integer $b$, then we similarly move some vertices from $B_j$ to $B_i$ such that the resulting sets $B_i',B_j'$ satisfy $|B_i'|=b$ and $|B_j'|=b+1$. We construct a vector $\bm{y}\in \mathbb{R}^n$ satisfying $y_s=(\frac{b_ix_i^2 +b_jx_j^2}{2b+1})^{1/2}$ for every vertex $s \in B_i'\cup B_j'$, and $y_t=x_t$ for every $t \in V(G')\setminus (B_i' \cup B_j')$. Similarly, we get
\begin{equation*} \begin{aligned} \bm{y}^TA(G')\bm{y} - \bm{x}^TA(G)\bm{x} &= 2( b(b+1) y_s^2- b_ix_ib_jx_j) \\ & \quad + 2( (2b+1) y_s- (b_ix_i + b_j x_j))\sum_{t\notin B_i'\cup B_j'} x_t. \end{aligned} \end{equation*}
We are going to show that
\[ b(b+1) y_s^2- b_ix_ib_jx_j> 0, \quad \text{and} \quad (2b+1) y_s- (b_ix_i + b_j x_j) \ge 0.
\]
For the first inequality, by applying the AM--GM inequality, we get
\[ b(b+1)y_s^2 =b(b+1)\frac{b_ix_i^2+b_jx_j^2}{b_i+b_j} \ge \frac{2b(b+1)}{b_i+b_j} \sqrt{b_ib_j} x_ix_j. \]
Then it is sufficient to prove that $2b(b+1) > (b_i+b_j)\sqrt{b_ib_j}$. Note that $b_i\le b_j-2$ and $b_i+b_j =2b+1$ is odd. Then $b_i \le b-1$ and $ b_j\ge b+2$, so that $b_ib_j \le (b-1)(b+2) = b(b+1)-2$; since $(2b+1)^2 = 4b(b+1)+1$, we obtain $\bigl( (b_i+b_j)\sqrt{b_ib_j} \bigr)^2 \le (4b(b+1)+1)(b(b+1)-2) < 4b^2(b+1)^2$, and the desired inequality holds. For the second one, the weighted power-mean inequality yields
\[ (2b+1)y_s =(2b+1) \left(\frac{b_ix_i^2 +b_jx_j^2}{b_i+b_j} \right)^{1/2} \ge (2b+1) \frac{b_ix_i +b_jx_j}{b_i+b_j}=b_ix_i +b_jx_j. \]
This case also contradicts the choice of $G$.

For the remaining three cases, we present our proof by considering the characteristic polynomial of $G$ and then applying induction on $r$.

{\bf Case 2.} Now, we consider case (ii), namely $b_1\le b_2-2$. Recall that $B_1^-=B_1\setminus \{v\}$ and $B_2^-=B_2\setminus \{w\}$. We define a graph $G'$ obtained from $G$ by deleting a vertex of $B_2^-$ and adding a copy of a vertex of $B_1^-$. This makes the two parts $B_1^-,B_2^-$ more balanced. Our goal is to prove that $\lambda (G) < \lambda (G')$, which contradicts the maximality of $G$.

Let $x_v, x_w$ and $x_u$ be the weights of the vertices $v,w$ and $u$ respectively. We denote by $x_1^-$ and $x_2^-$ the weights of the vertices of $B_1^-$ and $B_2^-$ respectively. The eigen-equation $A(G)\bm{x}=\lambda (G) \bm{x}$ gives $\sum_{j\in N(i)} x_j = \lambda (G) x_i$ for every $i\in [n]$. Then
\begin{equation*} \begin{cases} \phantom{x_v+x_w+} x_u \phantom{+(b_1-1)x_1^-} + (b_2-1)x_2^- + b_3x_3 + \cdots + b_rx_r =\lambda (G) x_v, \\ \phantom{x_v+x_w+} x_u + (b_1-1)x_1^- \phantom{+(b_2-1)x_2^-} + b_3x_3 + \cdots +b_r x_r = \lambda (G) x_w, \\ x_v +x_w \phantom{+x_u+(b_1-1)x_1^-+(b_2-1)x_2^-} \!\! + b_3x_3 + \cdots +b_rx_r = \lambda (G) x_u, \\ \phantom{x_v+}\, \,x_w \phantom{+x_u +(b_1-1)x_1^-} + (b_2-1)x_2^- \! + b_3x_3 + \cdots +b_rx_r = \lambda (G) x_1^-, \\ x_v \phantom{+x_w+x_u} \, + (b_1-1)x_1^- \phantom{+(b_2-1)x_2^-} + b_3x_3 + \cdots +b_rx_r =\lambda (G) x_2^- , \\ x_v+x_w+x_u + (b_1\!-\!1)x_1^- + (b_2\!-\!1)x_2^- + b_4x_4 + \cdots +b_rx_r = \lambda (G) x_3, \\ \quad \,\,\, \vdots \\ x_v+x_w+x_u + (b_1\!-\!1)x_1^- + (b_2\!-\!1)x_2^- + b_3x_3 + \cdots +b_{r-1}x_{r-1} = \lambda (G) x_r. \end{cases} \end{equation*}
Thus $\lambda (G)$ is the largest eigenvalue of the matrix $A_r$ corresponding to the eigenvector $(x_v,x_w,x_u,x_1^-,x_2^-, x_3 ,\ldots ,x_r)$, where $A_r$ $(r\ge 3)$ is defined as follows.
\[ A_r:= \left[ \begin{array}{ccccc;{2pt/2pt}ccc} 0 & 0 & 1 & 0 & b_2-1 & b_3 & \cdots & b_r \\ 0&0&1&b_1-1 &0 & b_3 & \cdots & b_r \\ 1&1&0&0&0&b_3 & \cdots & b_r \\ 0&1&0&0 & b_2-1 & b_3 & \cdots & b_r \\ 1 & 0&0& b_1-1 & 0 & b_3 & \cdots & b_r \\ \hdashline[2pt/2pt] 1&1&1&b_1-1 & b_2-1 & 0 & \cdots & b_r \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & & \vdots \\ 1&1&1&b_1-1 & b_2-1 & b_3 & \cdots & 0 \end{array} \right]. \]
For notational convenience, we denote
\[ A_2:= \begin{bmatrix} 0 & 0 & 1 & 0 & b_2-1 \\ 0&0&1&b_1-1 &0 \\ 1&1&0&0&0 \\ 0&1&0&0 & b_2-1 \\ 1 & 0&0& b_1-1 & 0 \end{bmatrix}, \]
and
\[ R_{b_1,b_2}(x):=\det \begin{bmatrix} x+1&1 & 0 & b_1-1 &0 \\ 1&x+1&0&0&b_2-1\\ 0&0&x+1 &b_1-1&b_2-1 \\ 1&0&1&x+b_1-1&0\\ 0&1&1&0&x+b_2-1 \end{bmatrix}. \]
For every $r\ge 2$, the characteristic polynomial of $A_r$ is denoted by
\[ F_{b_1,b_2,\ldots ,b_r}(x) = \det (xI_{r+3} - A_r).\]
In particular, the polynomial $F_{b_1,b_2}(x)$ is the same as that in the proof of Lemma \ref{lempp}.
By expanding the last column of $\det (xI_{r+3}-A_r)$, we get the following recurrence relations:
\begin{equation} \label{eq5} F_{b_1,b_2,b_3}(x) = (x+b_3) F_{b_1,b_2}(x) - b_3 R_{b_1,b_2}(x), \end{equation}
and for every integer $r\ge 4$,
\begin{equation} \label{eq6} F_{b_1,b_2,\ldots ,b_r}(x) = (x+b_r) F_{b_1,b_2,\ldots ,b_{r-1}}(x) - b_r\prod_{i=3}^{r-1} (x+b_i) R_{b_1,b_2}(x), \end{equation}
where $F_{b_1,b_2}(x) $ and $R_{b_1,b_2}(x)$ are computed as below:
\begin{equation*} \begin{aligned} F_{b_1,b_2}(x) &= x^5 - (b_1b_2+1) x^3+ (3b_1b_2 -2b_1 -2b_2 +1)x -2b_1b_2 +2b_1 +2b_2 -2, \\ R_{b_1,b_2}(x) &= x^5+(b_1+b_2+1)x^4 + (b_1b_2+1)x^3-(b_1b_2+b_1+b_2-3)x^2 \\ & \quad + (2b_1 + 2b_2 -3b_1b_2-1)x + 3(b_1-1)(b_2-1). \end{aligned} \end{equation*}
Note that $b_1\le b_2-2$. By computation, we obtain
\begin{equation*} F_{b_1+1,b_2-1}(x) - F_{b_1,b_2}(x) = (b_1 -b_2+1)(x-1)^2(x+2) < 0, \end{equation*}
and
\begin{equation*} R_{b_1+1,b_2-1}(x) - R_{b_1,b_2}(x) = - (b_1-b_2+1)(x-1)(x^2-3) > 0. \end{equation*}
Note that $b_1-b_2+1\le -1$. Combining with equation (\ref{eq5}), we obtain
\begin{align*} &F_{b_1+1,b_2-1,b_3}(x) - F_{b_1,b_2,b_3}(x) \\ &= (x+b_3)(F_{b_1+1,b_2-1}(x)- F_{b_1,b_2}(x)) - b_3 ( R_{b_1+1,b_2-1}(x)- R_{b_1,b_2}(x) ) \\ &= (b_1-b_2+1)(x-1)^2(x+2)(x+b_3) + b_3 (b_1-b_2+1)(x-1)(x^2-3) \\ &= (b_1-b_2+1)(x-1) \bigl( (x-1)(x+2)(x+b_3) +b_3 (x^2-3) \bigr) < 0. \end{align*}
Next we prove by induction that for every $r\ge 3$ and $x\ge 2$,
\begin{equation} \label{eqcase2} \begin{aligned} F_{b_1+1,b_2-1,b_3,\ldots ,b_r}(x) - F_{b_1,b_2,b_3,\ldots ,b_r}(x) <0. \end{aligned} \end{equation}
Firstly, the base case $r=3$ was verified above. For $r\ge 4$, we get from (\ref{eq6}) that
\begin{align*} &F_{b_1+1,b_2-1,b_3,\ldots ,b_r}(x) - F_{b_1,b_2,b_3,\ldots ,b_r}(x) \\ &=(x+b_r) \bigl( F_{b_1+1,b_2-1,b_3,\ldots ,b_{r-1}}(x)- F_{b_1,b_2,b_3,\ldots ,b_{r-1}}(x) \bigr) \\ &\quad -b_r\prod_{i=3}^{r-1} (x+b_i) \bigl( R_{b_1+1,b_2-1}(x) - R_{b_1,b_2}(x) \bigr) <0, \end{align*}
where the last inequality holds by applying the inductive hypothesis to the case $r-1$ and invoking the fact that $R_{b_1+1,b_2-1}(x) - R_{b_1,b_2}(x) >0$. From inequality (\ref{eqcase2}), we know that $F_{b_1+1,b_2-1,b_3,\ldots ,b_r}( \lambda (G)) < F_{b_1,b_2,b_3,\ldots ,b_r}(\lambda (G))=0$. Since $\lambda (G')$ is the largest root of $F_{b_1+1,b_2-1,b_3,\ldots ,b_r}(x)$, this implies $\lambda (G) < \lambda (G')$.

{\bf Case 3.} Thirdly, we consider case (iii), namely $b_1\le b_i-2$ for some $i\in \{3,\ldots ,r\}$. We may assume by symmetry that $b_1\le b_3-2$. Our treatment in this case is similar to that of case (ii). Let $G^*$ be the graph obtained from $G$ by deleting a vertex of $B_3$ with its incident edges, adding a new vertex to $B_1^-$, and connecting this new vertex to all remaining vertices of $B_3$ and all vertices of $B_2\cup B_4 \cup \cdots \cup B_r$. We will prove that $\lambda (G) < \lambda (G^*)$. By case (ii), we may assume that $|b_1-b_2|\le 1$. Clearly, $\lambda (G^*)$ is the largest root of $F_{b_1+1,b_2,b_3-1,b_4,\ldots ,b_r}(x)$. First of all, we will show that
\begin{equation} \label{eqcase3} F_{b_1+1,b_2,b_3-1}(x) - F_{b_1,b_2,b_3}(x) <0, \end{equation}
and then, by applying induction, we will prove that for each $r\ge 4$,
\begin{equation} \label{eqcase3second} F_{b_1+1,b_2,b_3-1,b_4,\ldots ,b_r}(x) - F_{b_1,b_2,b_3,b_4,\ldots ,b_r}(x) <0.
\end{equation}
Indeed, we verify inequalities (\ref{eqcase3}) and (\ref{eqcase3second}) for the case $r=4$ only, since the inductive steps are the same as those of case (ii), with slight differences. By computation, we obtain
\begin{align*} &(x+b_3-1)F_{b_1+1,b_2}(x)- (x+b_3)F_{b_1,b_2}(x)\\ &= -x^5 -b_2x^4 +(b_2(b_1-b_3+1)+1)x^3+ (3b_2-2)x^2 \\ &\quad + (3b_2b_3-3b_1b_2+2b_1-3b_2-2b_3+3)x + 2b_1b_2 - 2b_1 - 2b_2b_3+2b_3, \end{align*}
and
\begin{align*} &-(b_3-1)R_{b_1+1,b_2}(x) +b_3R_{b_1,b_2}(x) \\ &=x^5 + (b_2+b_1-b_3+2)x^4 + (b_2(b_1-b_3+1)+1)x^3 \\ & \quad +(-b_1b_2-b_1+b_2b_3-2b_2+b_3+ 2)x^2 \\ & \quad +(3b_2b_3-3b_1b_2+2b_1-b_2-2b_3+1)x \\ & \quad + 3b_1b_2-3b_1-3b_2b_3+ 3b_3. \end{align*}
Combining these two equations with (\ref{eq5}), we get
\begin{align*} &F_{b_1+1,b_2,b_3-1}(x) - F_{b_1,b_2,b_3}(x) \\ &=(x+b_3-1)F_{b_1+1,b_2}(x)-(x+b_3)F_{b_1,b_2}(x) - (b_3-1)R_{b_1+1,b_2}(x) + b_3 R_{b_1,b_2}(x) \\ &=(b_1-b_3+2)x^4 + 2(b_2(b_1-b_3+1)+1)x^3 + (b_2b_3-b_1b_2-b_1+b_2+b_3)x^2 \\ &\quad +(6b_2b_3-6b_1b_2 + 4b_1 - 4b_2 - 4b_3 + 4)x + 5b_1b_2-5b_1-5b_2b_3+5b_3 . \end{align*}
Combining $|b_1-b_2|\le 1$ and $b_1- b_3\le -2$, one can verify that $F_{b_1+1,b_2,b_3-1}(x) < F_{b_1,b_2,b_3}(x)$ for every $x\ge 2(b_1-2)$. This completes the proof of (\ref{eqcase3}).

We now consider (\ref{eqcase3second}) in the case $r=4$. Note that $b_1- b_3+2\le 0$ and
\begin{align*} &-(x+b_3-1)R_{b_1+1,b_2}(x) + (x+b_3)R_{b_1,b_2}(x) \\ &=(b_1-b_3+2)x^4 + (b_2(b_1 -b_3+ 2)+2)x^3 + (b_2(b_3-b_1+1)+b_3-b_1)x^2 \\ & \quad +(3b_2b_3 -3b_1b_2+ 2b_1- 4b_2- 2b_3+4)x + 3b_1b_2- 3b_1- 3b_2b_3+ 3b_3 <0, \end{align*}
which together with (\ref{eq6}) and the case $r=3$ yields
\begin{align*} &F_{b_1+1,b_2,b_3-1,b_4}(x) - F_{b_1,b_2,b_3,b_4}(x) \\ & = (x+b_4) (F_{b_1+1,b_2,b_3-1}(x)- F_{b_1,b_2,b_3}(x) ) \\ & \quad - b_4(x+b_3-1)R_{b_1+1,b_2}(x) + b_4(x+b_3)R_{b_1,b_2}(x)<0. \end{align*}
Let $t=\min\{b_i: 1\le i\le r\} -1$. Since the complete $r$-partite graph $K_{t,t,\ldots ,t}$ is a subgraph of $G$, we know that $\lambda (G) \ge \lambda (K_{t,t,\ldots ,t})=(r-1)t$. Thus, we can similarly get that $F_{b_1+1,b_2,b_3-1,b_4, \ldots ,b_r} (\lambda (G)) < F_{b_1,b_2,b_3,b_4,\ldots ,b_r}(\lambda (G))=0$, which yields $\lambda (G) < \lambda (G^*)$ and contradicts the choice of $G$.

{\bf Case 4.} Finally, we consider case (iv), namely $b_i \le b_1 -1$ for some $i\ge 3$. We may assume that $b_3\le b_1-1$. This case can be completed by an argument similar to that of case (iii). Let $G^*$ be the graph obtained from $G$ by removing a vertex of $B_1^-$ with its incident edges, and adding a copy of a vertex of $B_3$. In what follows, we will show that
\begin{equation} \label{eqcase4} F_{b_1-1,b_2,b_3+1}(x) - F_{b_1,b_2,b_3}(x)<0, \end{equation}
and then we prove by induction that for every $r\ge 4$,
\begin{equation} \label{eqcase4second} F_{b_1-1,b_2,b_3+1,b_4,\ldots ,b_r}(x) - F_{b_1,b_2,b_3,b_4,\ldots ,b_r}(x)<0. \end{equation}
By computation, we obtain that
\begin{align*} &(x+b_3+1)F_{b_1-1,b_2}(x) - (x+b_3)F_{b_1,b_2}(x) \\ &= x^5 + b_2x^4 + ( b_2(b_3-b_1+1)-1)x^3 + (-3b_2+2)x^2 \\ & \quad + (3b_1b_2- 2b_1- 3b_2b_3- 3b_2+ 2b_3+1)x - 2b_1b_2+ 2b_1+ 2b_2b_3+ 4b_2- 2b_3-4, \end{align*}
and
\begin{align*} &- (b_3+1) R_{b_1-1,b_2}(x) + b_3 R_{b_1,b_2}(x) \\ &= -x^5 + (b_3- b_1- b_2)x^4 + (b_2(b_3-b_1+1)-1)x^3 \\ &\quad + (b_1b_2-b_2b_3+b_1-b_3-4)x^2 \\ &\quad +(3b_1b_2- 2b_1-3b_2b_3- 5b_2+ 2b_3+3)x \\ & \quad +3b_2b_3- 3b_1b_2+ 3b_1+6b_2-3b_3-6.
\end{align*}
Combining with the recurrence equation (\ref{eq5}), we get
\begin{align*} &F_{b_1-1,b_2,b_3+1}(x) - F_{b_1,b_2,b_3}(x) \\ &= (x+b_3+1)F_{b_1-1,b_2}(x) - (x+b_3)F_{b_1,b_2}(x) - (b_3+1) R_{b_1-1,b_2}(x) + b_3 R_{b_1,b_2}(x) \\ &= (b_3-b_1)x^4 + ( 2b_2(b_3-b_1+1)-2)x^3 + (b_1b_2-b_2b_3+b_1-b_3-3b_2-2)x^2 \\ &\quad + (6b_1b_2-6b_2b_3- 4b_1- 8b_2+ 4b_3+4)x - 5b_1b_2+ 5b_1+5b_2b_3+10b_2-5b_3-10. \end{align*}
Since $b_3-b_1\le -1$ and $|b_1-b_2|\le 1$, one can verify that $F_{b_1-1,b_2,b_3+1}(x) - F_{b_1,b_2,b_3}(x)<0$ for every $x\ge 2(b_3-1)$. This completes the proof of (\ref{eqcase4}). Next we will prove (\ref{eqcase4second}) for the case $r=4$ only, since the inductive steps are similar to those of cases (ii) and (iii). By computation, we have
\begin{align*} &-(x+b_3+1)R_{b_1-1,b_2}(x) + (x+b_3)R_{b_1,b_2}(x) \\ &=(b_3-b_1)x^4 + (b_2(b_3-b_1)-2)x^3 + (b_1b_2+b_1- b_2b_3- 3b_2- b_3- 2)x^2 \\ &\quad + (3b_1b_2- 2b_1- 3b_2b_3- 2b_2+ 2b_3)x - 3b_1b_2 + 3b_1+ 3b_2b_3+ 6b_2- 3b_3-6<0, \end{align*}
which together with (\ref{eq6}) and the case $r=3$ gives
\begin{align*} &F_{b_1-1,b_2,b_3+1,b_4}(x) - F_{b_1,b_2,b_3,b_4}(x) \\ &= (x+b_4)(F_{b_1-1,b_2,b_3+1}(x) - F_{b_1,b_2,b_3}(x) ) \\ &\quad -b_4(x+b_3+1)R_{b_1-1,b_2}(x) + b_4(x+b_3)R_{b_1,b_2}(x)<0. \end{align*}
Since $F_{b_1-1,b_2,b_3+1,b_4, \ldots ,b_r} (\lambda (G)) < F_{b_1,b_2,b_3,b_4,\ldots ,b_r}(\lambda (G))=0$ and $\lambda (G^*)$ is the largest root of $F_{b_1-1,b_2,b_3+1,b_4,\ldots ,b_r}(x)$, we know that $\lambda (G) < \lambda (G^*)$, which contradicts the choice of $G$. In summary, this completes the proof in all possible cases.
\end{proof}

\noindent {\bf Remark.} It seems possible to prove the last three cases by using a weight-balancing argument similar to that of the first case. Nevertheless, a great deal of tedious calculation appears unavoidable in these cases. Moreover, applying the recursive technique for determinants used in the proof of Lemma \ref{lem42}, one can compute the characteristic polynomials of the adjacency matrix and the signless Laplacian matrix of the $n$-vertex complete $r$-partite graph $K_{t_1,\ldots ,t_r}$. More precisely,
\begin{align*} \det (xI_n-A(K_{t_1,\ldots ,t_r})) = x^{n-r} \left( 1-\sum_{i=1}^r \frac{t_i}{x+t_i}\right) \prod_{i=1}^r(x +t_i), \end{align*}
and
\begin{align*} \det (xI_n-Q(K_{t_1,\ldots ,t_r})) = \prod_{i=1}^r(x-n+t_i)^{t_i-1} (x-n+2t_i) \left( 1-\sum_{i=1}^r \frac{t_i}{x-n+2t_i}\right). \end{align*}
Computing the eigenvalues of complete multipartite graphs is of independent interest; see, e.g., \cite{Del2012,YWS2011,Obo2019,YYSW2019} for different proofs and related results.

\medskip

We next present the main result of this paper.

\begin{theorem} \label{thm214}
Let $G$ be an $n$-vertex graph. If $G$ is $K_{r+1}$-free and $G$ is not $r$-partite, then
\[ \lambda (G) \le \lambda (Y_r(n)). \]
Moreover, the equality holds if and only if $G=Y_r(n)$.
\end{theorem}

Theorem \ref{thm214} is a refinement of the spectral Tur\'{a}n Theorem \ref{thm460} and also an extension of Theorem \ref{thmLNW}. Our proof is mainly based on Zykov's symmetrization.

\begin{proof}
First of all, we assume that $G$ is a $K_{r+1}$-free non-$r$-partite graph with maximum spectral radius. Our goal is to prove that $G=Y_r(n)$. Clearly, $G$ must be a connected graph. Let $\bm{x} \in \mathbb{R}_{>0}^n$ be a positive unit eigenvector of $\lambda (G)$.

\begin{claim} \label{claim4.1}
There exists a vertex $u\in V(G)$ such that $G\setminus \{u\}$ is $r$-partite.
\end{claim}

\begin{proof}[Proof of Claim \ref{claim4.1}]
Recall that for two non-adjacent vertices $u,v\in V(G)$, the spectral Zykov symmetrization $Z_{u,v}(G)$ is defined as the graph obtained from $G$ by removing all edges incident to the vertex $u$ and then adding new edges from $u$ to $N_G(v)$. We can verify that the spectral Zykov symmetrization increases neither the clique number nor the chromatic number. Recall that $s_G(v,\bm{x}) =\sum_{i\in N_G(v)} x_i$ is the sum of weights of all neighbors of $v$ in $G$. For two non-adjacent vertices $u,v$, if $s_G(u,\bm{x}) < s_G(v,\bm{x})$, then we replace $G$ with $Z_{u,v}(G)$. If $s_G(u,\bm{x}) = s_G(v,\bm{x})$, then we can apply either $Z_{u,v}$ or $Z_{v,u}$, which leads to $N(u)=N(v)$ after the spectral Zykov symmetrization. Obviously, the spectral Zykov symmetrization does not create a copy of $K_{r+1}$. More significantly, it strictly increases the spectral radius, since $\bm{x}$ is entry-wise positive.

The proof of Claim \ref{claim4.1} is based on the spectral Zykov symmetrization stated above. Since $G$ is $K_{r+1}$-free, we can repeatedly apply the Zykov symmetrization to every pair of non-adjacent vertices until $G$ becomes an $r$-partite graph. Without loss of generality, we may assume that $G$ is $K_{r+1}$-free and $G$ is not $r$-partite, while $Z_{u,v}(G)$ is $r$-partite. Thus $G \setminus \{u\}$ is $r$-partite, and we assume that $V(G) \setminus \{u\}=V_1\cup V_2 \cup \cdots \cup V_r$, where $V_1,V_2,\ldots ,V_r$ are pairwise disjoint and $\sum_{i=1}^r |V_i| =n-1$.
\end{proof}

We denote $A_i=N(u)\cap V_i$ for every $i\in [r]:=\{1,\ldots ,r\}$. Note that $G$ has maximum spectral radius among all $K_{r+1}$-free non-$r$-partite graphs. Then for each $i\in [r]$, every vertex of $ V_i \setminus A_i$ is adjacent to every vertex of $V_j$ for every $j\in [r]$ with $j\neq i$. We remark here that the difference between the $K_{r+1}$-free case (Theorem \ref{thm214}) and the triangle-free case (Theorem \ref{thmLNW}) is that there may exist some edges between the pair of sets $A_i$ and $A_j$, which makes the problem seem more difficult.

\begin{claim} \label{claim4.2}
There exists a pair $\{i,j\} \subseteq [r]$ such that $G[A_i,A_j]$ forms an empty graph, and for every other pair $\{s,t\} \neq \{i,j\}$, $G[A_s,A_t]$ is a complete bipartite subgraph in $G$.
\end{claim}

\begin{proof}[Proof of Claim \ref{claim4.2}]
Let $G[A_1,A_2,\ldots ,A_r]$ be the subgraph of $G$ induced by the vertex sets $A_1,A_2$, $\ldots ,A_r$. Claim \ref{claim4.2} is equivalent to saying that $G[A_1\cup A_2,A_3,\ldots ,A_r]$ forms a complete $(r-1)$-partite subgraph in $G$. Since $G$ is $K_{r+1}$-free, we know that the subgraph $G[A_1,A_2,\ldots ,A_r]$ is a $K_{r}$-free subgraph of $G$.

First of all, we choose a vertex $v_1\in A_1$ such that $s_G(v_1,\bm{x})$ is maximum among all vertices of $ A_1$, and then we apply the Zykov operation $Z_{u,v_1}$ on $G$ for every $u\in A_1\setminus \{v_1\}$. These operations make all vertices of $A_1$ equivalent, that is, every pair of vertices in $A_1$ has the same neighbors. Secondly, we choose a vertex $v_2\in A_2$ such that $s_G(v_2,\bm{x})$ is maximum over all vertices of $ A_2$, and then we similarly apply the Zykov operation $Z_{u,v_2}$ on $G$ for every $u\in A_2\setminus \{v_2\}$. {\it Note that all vertices in $A_1$ have the same neighbors.} After performing Zykov's operations on the vertices of $A_2$, we claim that the induced subgraph $G[A_1,A_2]$ is either a complete bipartite graph or an empty graph.
Indeed, if $v_2\in \cap_{v\in A_1}N(v)$, then the operations $Z_{u,v_2}$ for all $u\in A_2 \setminus \{v_2\}$ will lead to a complete bipartite graph between $A_1$ and $A_2$. If $v_2\notin \cap_{v\in A_1}N(v)$, then $v_2$ is adjacent to no vertex of $A_1$, and the same holds for every $u\in A_2 \setminus \{v_2\}$, which yields that $G[A_1,A_2]$ is an empty graph. Moreover, by applying similar operations on $A_3,A_4, \ldots ,A_r$, we can obtain that for all $i,j\in [r]$ with $i\neq j$, the induced bipartite subgraph $G[A_i,A_j]$ is either complete bipartite or empty. Since $G[A_1,A_2,\ldots ,A_r]$ is $K_r$-free and $G$ attains the maximum spectral radius, we know that there is exactly one pair $\{i,j\}\subseteq [r]$ such that $G[A_i,A_j]$ is an empty graph.
\end{proof}

We may assume that $\{i,j\}=\{1,2\}$ for convenience. In what follows, we intend to enlarge $ A_i$ to the whole set $V_i$ for every $i\in \{3,4,\ldots ,r\}$. Observe that every vertex of $V_i \setminus A_i$ is adjacent to every vertex of $V_j$ for every $j\in [r]$ with $j\neq i$, and adding all edges between $u$ and $V_i \setminus A_i$ neither creates a copy of $K_{r+1}$ in $G$ nor decreases the spectral radius of $G$. From this observation, we know that $u$ is adjacent to every vertex of $V_i$ for each $i\in \{3,4,\ldots ,r\}$; see (a) in Figure \ref{fig-6}.

\begin{figure}[htbp]
\centering
\includegraphics[scale=0.7]{F6a.png} \quad
\includegraphics[scale=0.7]{F6b.png} \quad
\includegraphics[scale=0.7]{F6c.png}
\caption{Further movement step.}
\label{fig-6}
\end{figure}

Set $C:=N(u)\cap V_1$ and $D:=N(u)\cap V_2$. We denote $A:=V_1 \setminus C$ and $B:=V_2\setminus D$; see (a) in Figure \ref{fig-6}. Note that there is no edge between $C$ and $D$, since $G$ does not contain $K_{r+1}$ as a subgraph. In the remainder of our proof, we will prove in two steps that both $C$ and $D$ are single-vertex sets.

\begin{claim} \label{claim4.3}
The set $C$ is a single vertex, i.e., $|C|=1$.
\end{claim}

\begin{proof}[Proof of Claim \ref{claim4.3}]
The treatment is similar to that in our proof of Theorem \ref{thmLNW}. If $\sum_{v\in A}x_v \ge \sum_{v\in B}x_v$, then we choose $|C|-1$ vertices of $C$ and delete only their edges into $B$; then we move these $|C|-1$ vertices into $D$ and connect them to $A$. In this process, the edges between these $|C|-1$ vertices and $V_3 \cup \cdots \cup V_r$ are unchanged. We write $G'$ for the resulting graph. Using a computation similar to that in Section \ref{sec3}, we can verify that $\lambda (G') > \lambda (G)$.

If $\sum_{v\in A} x_v < \sum_{v\in B} x_v$, then we can choose $|D|-1$ vertices of $D$ and delete only their edges into $A$, and then move these $|D|-1$ vertices into $C$ and join them to $B$. This process strictly increases the spectral radius. From the above case analysis, we can always move vertices of $G$ to force either $|C|=1$ or $|D|=1$. Without loss of generality, we may assume that $|C|=1$ and denote $C=\{c\}$; see (b) in Figure \ref{fig-6}.
\end{proof}

\begin{claim} \label{claim4.4}
The set $D$ is a single vertex, i.e., $|D|=1$.
\end{claim}

\begin{proof}[Proof of Claim \ref{claim4.4}]
If $x_u< x_c$, then we choose $|D|-1$ vertices of $D$ and delete their edges to the vertex $u$; then we move these $|D|-1$ vertices into $B$ and join these vertices to $c$, keeping the other edges unchanged, and we denote the new graph by $G^{\star}$. Then we can similarly get $\lambda (G^{\star}) > \lambda (G)$.
In the graph $G^{\star}$, we have $|D|=1$ and write $D=\{d\}$. Thus $G^{\star}$ is the graph obtained from a complete $r$-partite graph $K_{t_1,t_2,\ldots ,t_r}$, where $\sum_{i=1}^r t_i=n-1$, by adding a new vertex $u$ and then joining $u$ to a vertex $c\in V_1$, joining $u$ to a vertex $d\in V_2$, joining $u$ to all vertices of $V_3 \cup \cdots \cup V_r$, and finally removing the edge $cd\in E(K_{t_1,t_2,\ldots ,t_r})$.

If $x_u\ge x_c$, then we choose $|B|-1$ vertices of $B$ and delete their edges to the vertex $c$; then we move these $|B|-1$ vertices into $D$ and join these vertices to the vertex $u$. We denote the new graph by $G^*$. Then $\lambda (G^*) > \lambda (G)$. Thus, in the new graph $G^*$, the set $B$ is a single vertex, say $B=\{b\}$; see (c) in Figure \ref{fig-6}. In what follows, we will exchange the positions of $u$ and $c$. Note that $c\in V_1$ is adjacent to the vertex $u$, to a vertex $b\in V_2$, and to all vertices of $V_3 \cup \cdots \cup V_r$. Now, we move the vertex $c$ outside of $V_1$ and put the vertex $u$ into $V_1$. Thus the new center $c$ is adjacent to a vertex $u\in V_1$, a vertex $b\in V_2$ and all vertices of $V_3 \cup \cdots \cup V_r$. Note that $bu\notin E(G^*)$. Hence $G^*$ has the same structure as in the previous case, and so we may assume that $|D|=1$.
\end{proof}

From the above discussion, we know that $G$ is isomorphic to a graph as defined in Lemma \ref{lem42}. By applying Lemma \ref{lem42}, we know that $\lambda (G) \le \lambda (Y_r(n))$. Moreover, the equality holds if and only if $G=Y_r(n)$. This completes the proof.
\end{proof}

\section{Unified extension to the $p$-spectral radius}
\label{sec5}

Recall that the spectral radius of a graph is defined as the largest eigenvalue of its adjacency matrix. By Rayleigh's theorem, we know that it is also equal to the maximum value of $\bm{x}^TA(G)\bm{x}=2\sum_{\{i,j\}\in E(G)} x_ix_j$ over all $\bm{x}\in \mathbb{R}^n$ with $|x_1|^2 + \cdots +|x_n|^2=1$. The definition of the spectral radius was recently extended to {\it the $p$-spectral radius}. We denote the $p$-norm of $\bm{x}$ by $\lVert \bm{x}\rVert_p =( |x_1|^p + \cdots +|x_n|^p)^{1/p}$. For every real number $p\ge 1$, the $p$-spectral radius of $G$ is defined as
\begin{equation} \label{psp} \lambda^{(p)} (G) : = 2 \max_{\lVert \bm{x}\rVert_p =1} \sum_{\{i,j\} \in E(G)} x_ix_j. \end{equation}
We remark that $\lambda^{(p)}(G)$ is a versatile parameter. Indeed, $\lambda^{(1)}(G)$ is known as the Lagrangian function of $G$, $\lambda^{(2)}(G)$ is the spectral radius of its adjacency matrix, and
\begin{equation} \label{eqlimit} \lim_{p\to +\infty} \lambda^{(p)} (G)=2e(G), \end{equation}
which is guaranteed by the following inequalities:
\begin{equation*} 2e(G)n^{-2/p} \le \lambda^{(p)}(G) \le (2e(G))^{1-1/p}. \end{equation*}
To some extent, the $p$-spectral radius can be viewed as a unified extension of the classical spectral radius and the size of a graph. In addition, it is worth mentioning that if $ 1\le q\le p$, then $\lambda^{(p)}(G)n^{2/p} \le \lambda^{(q)}(G)n^{2/q}$ and $(\lambda^{(p)}(G)/2e(G))^p \le (\lambda^{(q)}(G)/2e(G))^q$; see \cite[Propositions 2.13 and 2.14]{Niki2014laa} for more details. As commented by Kang and Nikiforov in \cite[p. 3]{KN2014}, linear-algebraic methods are irrelevant for the study of $\lambda^{(p)}(G)$ in general, and in fact no efficient methods are known for it. Thus the study of $\lambda^{(p)}(G)$ for $p\neq 2$ is far more complicated than that of the classical spectral radius.
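Although no efficient general method is known, for small graphs one can still estimate $\lambda^{(p)}(G)$ by direct numerical optimization of (\ref{psp}). The following is a minimal Python sketch using networkx and scipy (the random restarts and the example graph are illustrative, and the output is only a heuristic lower bound on the true value):

\begin{verbatim}
# Hedged sketch: estimate the p-spectral radius of a small graph by
# maximizing x^T A x (= 2 * sum over edges of x_i x_j) over ||x||_p = 1.
import numpy as np
import networkx as nx
from scipy.optimize import minimize

def p_spectral_radius(G, p, trials=20, seed=0):
    A = nx.to_numpy_array(G)
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    best = -np.inf
    for _ in range(trials):
        x0 = rng.random(n)
        x0 /= np.linalg.norm(x0, ord=p)
        res = minimize(lambda x: -(x @ A @ x), x0,
                       constraints=[{'type': 'eq',
                                     'fun': lambda x: np.sum(np.abs(x)**p) - 1}])
        best = max(best, -res.fun)
    return best

T = nx.complete_multipartite_graph(2, 3)  # the Turan graph T_2(5) = K_{2,3}
print(p_spectral_radius(T, 2.0))          # close to sqrt(6) = 2.449...
print(p_spectral_radius(T, 10.0))         # approaches 2e(G) = 12 as p grows
\end{verbatim}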
The extremal function for the $p$-spectral radius is given as
\[ \mathrm{ex}_{\lambda}^{(p)}(n, {F}) := \max\{ \lambda^{(p)} (G) : |G|=n ~\text{and $G$ is ${F}$-free} \}. \]
To some extent, the proofs of results on the $p$-spectral radius share some similarities with those for the usual spectral radius when $p>1$; see \cite{Niki2014laa,KN2014,KNY2015} for extremal problems for the $p$-spectral radius. In 2014, Kang and Nikiforov \cite{KN2014} extended the Tur\'{a}n theorem to the $p$-spectral version for $p>1$. They proved that
\[ \mathrm{ex}_{\lambda}^{(p)} (n,K_{r+1}) =\lambda^{(p)}(T_r(n)).\]

\begin{theorem}[Kang--Nikiforov, 2014] \label{thmKN-p}
If $G$ is a $K_{r+1}$-free graph on $n$ vertices, then for every $p>1$,
\[ \lambda^{(p)}(G)\le \lambda^{(p)}(T_r(n)), \]
equality holds if and only if $G$ is the $n$-vertex Tur\'{a}n graph $T_r(n)$.
\end{theorem}

\noindent {\bf Remark.} We remark that a theorem of Motzkin and Straus implies that Theorem \ref{thmKN-p} is also valid in the case $p=1$, except for the characterization of the extremal graphs attaining equality.

\medskip

Keeping (\ref{eqlimit}) in mind, we can see that Theorem \ref{thmKN-p} is a unified extension of both Tur\'{a}n's Theorem \ref{thmturanstrong} and the spectral Tur\'{a}n Theorem \ref{thm460}, obtained by taking $p\to +\infty$ and $p=2$ respectively. We can obtain by detailed computation that $ \lambda^{(p)} (T_r(n))=(1+O(\frac{1}{n^2})) 2e(T_r(n)) n^{-2/p} $ and $ \lambda^{(p)} (T_r(n)) = (1-O(\frac{1}{n^2})) \left(1-\frac{1}{r} \right)n^{2-2/p} $, where $O(\frac{1}{n^2})$ stands for a positive error term. This theorem implies $ \lambda^{(p)} (G) \le \left(1-\frac{1}{r} \right) n^{2-(2/p)}$, where equality holds if and only if $r$ divides $n$ and $G=T_r(n)$.

\medskip

Recall that the proof of Theorem \ref{thm214} relies on the Rayleigh representation of $\lambda (G)$ and the existence of a positive eigenvector of $\lambda (G)$. For the $p$-spectral radius, there is also a positive vector corresponding to $\lambda^{(p)}(G)$. Indeed, we choose $G$ as a $K_{r+1}$-free graph on $n$ vertices with maximum value of the $p$-spectral radius, where $p>1$. Clearly, we can assume further that $G$ is connected. A vector $\bm{x}\in \mathbb{R}^n$ is called a unit (optimal) eigenvector corresponding to $\lambda^{(p)} (G)$ if it satisfies $\sum_{i=1}^n |x_i|^p=1$ and $\lambda^{(p)} (G)=2\sum_{\{i,j\} \in E(G)} x_ix_j$. From the definition (\ref{psp}), we know that there is always a non-negative eigenvector of $\lambda^{(p)} (G)$. Moreover, since $p>1$, Lagrange's multiplier method gives that for every $v\in V(G)$, we have
\begin{equation} \label{eqeq} \lambda^{(p)} (G) x_v^{p-1} = \sum_{u\in N_G(v)} x_u. \end{equation}
Therefore, if $G$ is connected and $p>1$, then applying (\ref{eqeq}) shows that a non-negative eigenvector of $\lambda^{(p)} (G)$ must be entry-wise positive. Hence there exists a positive unit optimal vector $\bm{x} \in \mathbb{R}_{>0}^n$ corresponding to $\lambda^{(p)} (G)$ such that
\[ \lambda^{(p)} (G)= 2 \sum_{\{i,j\} \in E(G)} x_ix_j . \]
By following a line of argument similar to the proof of Theorem \ref{thm214}, one can extend Theorem \ref{thm214} to the $p$-spectral radius. We leave the details to interested readers.

\begin{theorem}
Let $G$ be an $n$-vertex graph. If $G$ does not contain $K_{r+1}$ and $G$ is not $r$-partite, then for every $p>1$, we have
\[ \lambda^{(p)} (G) \le \lambda^{(p)} (Y_r(n)). \]
Moreover, the equality holds if and only if $G=Y_r(n)$.
\end{theorem}

\section{Concluding remarks} \label{sec6}

In this paper, we studied spectral extremal graph problems for graphs with a given number of vertices. By extending Mantel's theorem and Nosal's theorem, we presented an alternative proof of an extension of Nikiforov for $K_{r+1}$-free graphs, and provided a different proof of a refinement of Lin, Ning and Wu for non-bipartite $K_3$-free graphs. Furthermore, we generalized these two results to non-$r$-partite $K_{r+1}$-free graphs. Our result is not only a refinement of the spectral Tur\'{a}n theorem, but also a spectral version of Brouwer's theorem. In a forthcoming paper \cite{LP2022oddcycle}, we shall present some extensions and generalizations of Nosal's theorem for graphs with a given number of edges. We conclude with some possible problems for interested readers.

To begin with, we define an extremal function as
\begin{equation*} \psi (n,F,t):= \max \{ e (G) : F\nsubseteq G , \chi (G)\ge t\}. \end{equation*}
Brouwer's theorem says that $\psi (n,K_{r+1}, r+1) = e(T_r(n)) - \lfloor \frac{n}{r} \rfloor +1$. Similarly, we can define the spectral extremal function as
\[ \psi_{\lambda} (n,F,t):= \max \{ \lambda (G) : F\nsubseteq G , \chi (G)\ge t\}. \]
In Theorem \ref{thm214}, we proved that $\psi_{\lambda} (n,K_{r+1}, r+1) = \lambda (Y_r(n))$. Note that the extremal graph $Y_r(n)$ has chromatic number $\chi (Y_r(n))=r+1$. A natural next step is to determine the function $\psi_{\lambda} (n,K_{r+1}, r+2)$. {\it More generally, it would be interesting to determine the functions $ \psi (n,F,t)$ and $ \psi_{\lambda} (n,F,t)$ for a general graph $F$ and an integer $t$. For instance, one could study these extremal functions by setting $F$ as the odd cycle $C_{2k+1}$, the book graph $B_k=K_2 \vee kK_1$, the fan graph $F_k=K_1\vee kK_2$, the wheel graph $W_k=K_1 \vee C_k$, or a color-critical graph $F$.}

\medskip

We write $q(G)$ for the signless Laplacian spectral radius, i.e., the largest eigenvalue of the {\it signless Laplacian matrix} $Q(G)=D(G) + A(G)$, where $D(G)= \mathrm{diag} (d_1,\ldots ,d_n)$ is the degree diagonal matrix and $A(G)$ is the adjacency matrix. In 2013, He, Jin and Zhang \cite[Theorem 1.3]{HJZ2013} showed some bounds for the signless Laplacian spectral radius in terms of the clique number. As a consequence, they proved the signless Laplacian spectral version of Theorem \ref{thmturanstrong}, which states that if $G$ is a $K_{r+1}$-free graph on $n$ vertices, then $q(G)\le q(T_r(n))$, where equality holds if and only if $r=2$ and $G=K_{t,n-t}$ for some $t$, or $r\ge 3$ and $G=T_r(n)$. This spectral extension also implies the classical edge Tur\'{a}n theorem. {\it It is possible to establish analogues of the results of our paper in terms of the signless Laplacian spectral radius; for example, one may ask whether $Y_r(n)$ is the extremal graph attaining the maximum signless Laplacian spectral radius among all non-$r$-partite $K_{r+1}$-free graphs.}

\medskip

In 2017, Nikiforov \cite{NikiMerge} provided a unified extension of both the adjacency spectral radius and the signless Laplacian spectral radius. It was proposed by Nikiforov \cite{NikiMerge} to study the family of matrices $A_{\alpha}$ defined for any real $\alpha \in [0,1]$ as
\[ A_{\alpha}(G) =\alpha D(G) + (1-\alpha )A(G). \]
In particular, we can see that $A_0(G)=A(G)$ and $2A_{1/2}(G)=Q(G)$. Nikiforov \cite[Theorem 27]{NikiMerge} presented some extremal spectral results in terms of the spectral radius of $A_{\alpha}$.
It was proved that for every $r\ge 2$ and every $K_{r+1}$-free graph $G$: if $0\le \alpha <1-\frac{1}{r}$, then $\lambda (A_{\alpha}(G)) < \lambda (A_{\alpha}(T_r(n)))$, unless $G=T_r(n)$; if $\alpha =1-\frac{1}{r}$, then $\lambda (A_{\alpha}(G)) < (1-\frac{1}{r})n$, unless $G$ is a complete $r$-partite graph; and if $1-\frac{1}{r}< \alpha <1$, then $\lambda (A_{\alpha}(G)) < \lambda (A_{\alpha}(S_{n,r-1}))$, unless $G=S_{n,r-1}$, where $S_{n,k}=K_k \vee I_{n-k}$ is the graph consisting of a clique on $k$ vertices and an independent set on $n-k$ vertices in which each vertex of the clique is adjacent to each vertex of the independent set. {\it In view of this evidence, it seems possible to extend the results of our paper to the $A_{\alpha}$-spectral radius in the range $ \alpha \in [0,1-\frac{1}{r})$ for non-$r$-partite $K_{r+1}$-free graphs.}

\subsection*{Acknowledgements}
This work was supported by NSFC (Grant No. 11931002). This article was completed during a quarantine period due to the COVID-19 pandemic. The authors would like to express their sincere gratitude to all of the volunteers and medical staff for their kind help and support, which has made our daily life more secure.

\frenchspacing
\section{Introduction} As the scale of cyber attacks and the volume of network data increase exponentially, organizations must continually adapt to the dynamic nature of evolving cyber threat actors. With more security tools and sensors being deployed in modern enterprise networks, the number of security events being generated continues to increase, making it more challenging to detect malicious activities. Organizations must adopt new techniques to augment human analysts in monitoring, preventing, detecting, and responding to cybersecurity events and potential attacks. Machine learning has been deemed by many a game changer in cyber defense. However, the usefulness of deep learning in the context of Network Intrusion Detection Systems (NIDSs) has not been systematically understood, despite its tremendous success in other application domains (e.g., image recognition). \subsection{Our contributions} The contribution of this work is two-fold. First, we propose using a feedforward fully connected Deep Neural Network (DNN) to train a NIDS via supervised learning. We also propose using an autoencoder to detect and classify attack traffic via unsupervised learning in the absence of labeled malicious traffic. Second, we evaluate these models using two recent network intrusion detection datasets with known ground truth of malicious vs. benign traffic. We show that (i) the DNN outperforms other machine learning based network intrusion detection systems; (ii) the DNN is robust in the presence of dynamic IP addresses assigned by the Dynamic Host Configuration Protocol (DHCP), which is important when we need to use IP addresses as features in training DNNs; and (iii) the autoencoder is effective for anomaly detection. \subsection{Related work} \input{related-work.tex} \smallskip The rest of the paper is organized as follows. Section \ref{sec:preliminaries} reviews DNNs and autoencoders. Section \ref{sec:case-study} presents the case study. Section \ref{sec:limitations} discusses the limitations of the present study. Section \ref{sec:conclusion} concludes the paper. \section{Preliminaries} \label{sec:preliminaries} DNNs are a powerful mechanism for supervised learning. They can represent functions of increasing complexity by the inclusion of more layers and more units per layer in a neural network \cite{Goodfellow-et-al-2016}. In the context of NIDSs, DNNs can be used to discover patterns of benign and malicious traffic hidden within large amounts of structured data. Figure \ref{fig:dnn} is an example of a standard deep learning representation, where nodes represent inputs, edges represent weights, superscript $(i)$ denotes the $i$th training example, and superscript $[l]$ denotes the $l$th layer. Our case study focuses on DNNs because they can cope with tabular data and categorical variables of high cardinality, both of which are exhibited by the datasets we analyze. \begin{figure}[!htbp] \centerline{\includegraphics[scale=0.28]{dnn-example.png}} \caption{Deep neural network representation} \label{fig:dnn} \end{figure} An autoencoder is another type of neural network; it is trained to copy its input to its output, thereby learning a lower-dimensional, latent-space representation of the input data \cite{Goodfellow-et-al-2016}.
Unlike other popular dimensionality reduction techniques such as Principal Component Analysis (PCA), it achieves this goal in a non-linear fashion. Figure \ref{fig:autoencoder} shows an example of a standard autoencoder, where the number of input neurons is equal to the number of output neurons. We choose the autoencoder for our case study on anomaly detection because it is effective when a large amount of normal data is available, and applicable in situations where it is difficult to specify explicitly what constitutes anomalous data. \begin{figure}[!htbp] \centerline{\includegraphics[scale=0.24]{autoencoder.png}} \caption{Example autoencoder neural network architecture} \label{fig:autoencoder} \end{figure} \section{Case Study} \label{sec:case-study} \subsection{Methodology} \input{methodology.tex} \subsection{Data description} The ISCX IDS 2012 dataset \cite{Shiravi:2012} was created by modeling a given network environment with a testbed and then using agents to perform attacks on the testbed network. When compared with the outdated datasets \cite{Lippmann1999results, Lippmann20001999,Cup1999data}, this dataset can be characterized as follows \cite{Shiravi:2012}: realistic network configuration because of the real testbed; realistic traffic because of the real attacks/exploits; labeled ground truth of benign and malicious traffic; total capture of communications; and diverse attack scenarios. This dataset is provided in PCAP as well as a custom XML file of {\em network flows} created with the IBM QRadar appliance; the XML flow file contains ground truth labels. Recall that a {\em network flow} is assembled from a number of IP packets and consists of source and destination IP addresses, source and destination port numbers, and protocol. Moreover, flows are often used as a unit for detecting attacks, which is our focus in the present study (another unit is the IP packet). Table \ref{table:UNB_ISCX_2012} provides an overview of this dataset. \begin{table}[!htbp] \caption{Overview of the ISCX IDS 2012 dataset, where ``\# of attacks'' is the subset of flows that contain an attack.} \begin{center} \begin{tabular}{|c|c|c|l|} \hline \textbf{Date} & \textbf{\# of Flows}& \textbf{\# of Attacks}& \textbf{Description} \\ \hline 6/11/2012& 474,278& 0& Benign network activities\\ \hline 6/12/2012& 133,193& 2,086& Brute-force against SSH\\ \hline 6/13/2012& 275,528& 20,358& Infiltrations internally\\ \hline 6/14/2012& 171,380& 3,776& HTTP DoS attacks\\ \hline 6/15/2012& 571,698& 37,460& DDoS using IRC bots\\ \hline 6/16/2012& 522,263& 11& Brute-force against SSH\\ \hline 6/17/2012& 397,595& 5,219& Brute-force against SSH\\ \hline \textbf{Total}& \textbf{2,545,935}& \textbf{68,910}& \textbf{2.71\% malicious} \\ \hline \end{tabular} \label{table:UNB_ISCX_2012} \end{center} \end{table} Table \ref{table:iscx2012-features} summarizes the 14 features that can be extracted from the labeled XML file of network flows. \begin{table}[!htbp] \caption{Description of the 14 features of the ISCX IDS 2012 dataset, where ``uniques'' means the number of possible values of a categorical feature.} \centering \label{table:iscx2012-features} \begin{tabular}{|c|c|l|c|c|} \hline \textbf{No.} & \textbf{Feature} & \textbf{Description} & \textbf{Type} & \textbf{Uniques} \\ \hline 1 &SrcIP & Source IP address & Categorical & 2,478 \\ \hline 2 &DstIP & Dest. IP address & Categorical & 34,552 \\ \hline 3 &SrcPort & Source port & Categorical & 64,482 \\ \hline 4 &DstPort & Dest. port & Categorical & 24,238\\ \hline
5 &AppName & Application name & Categorical & 107 \\ \hline 6 &Direction & Direction of flow & Categorical & 4 \\ \hline 7 &Protocol & IP protocol & Categorical & 6 \\ \hline 8 &Duration & Flow duration & Continuous & N/A\\ \hline 9 &TotalSrcBytes & Total source bytes & Continuous & N/A\\ \hline 10 &TotalDstBytes & Total dest. bytes & Continuous & N/A\\ \hline 11&TotalBytes & Total bytes & Continuous & N/A\\ \hline 12 &TotalSrcPkts & Total source packets & Continuous & N/A\\ \hline 13 &TotalDstPkts & Total dest. packets & Continuous & N/A\\ \hline 14 &TotalPkts & Total packets & Continuous & N/A\\ \hline \end{tabular} \end{table} The CIC IDS 2017 dataset \cite{Sharafaldin:2018jr} improves on the ISCX IDS 2012 dataset by containing, along with benign traffic, attack traffic from seven different kinds of attacks (i.e., brute-force against SSH and the Web, Heartbleed, botnet, denial of service (DoS), distributed denial of service (DDoS), cross-site scripting (XSS) and SQL injection attacks against websites, and infiltration). This dataset includes not only the raw PCAP data, but also network flow data pre-processed from the PCAP data (using the CICFlowMeter tool \cite{LashkariDraper-Gil:icissp17}). This pre-processed network flow data is provided as CSV files that can be fed into the machine learning pipeline. The pre-processed network flow data has 83 columns (e.g., duration, number of packets, number of bytes, length of packets) that can be used as features, plus one label column and one flow ID column. Since seven different kinds of attacks are contained in this dataset, we can conduct multiclass classification research. Table \ref{table:cicids2017-dataset-overview} shows a summary of this dataset. \begin{table}[!htbp] \caption{Overview of the CIC IDS 2017 dataset, where the columns have the same meanings as in Table \ref{table:UNB_ISCX_2012}.} \label{table:cicids2017-dataset-overview} \begin{center} \begin{tabular}{|c|c|c|l|} \hline \textbf{Date} & \textbf{\# of Flows}& \textbf{\# of Attacks}& \textbf{Description} \\ \hline Monday & 529,918 & 0& Normal activities\\ \hline \multirow{2}{*}{Tuesday} & \multirow{2}{*}{445,909} & 7,938 & FTP-Patator \\ \cline{3-4} & & 5,897 & SSH-Patator \\ \hline \multirow{5}{*}{Wednesday} & \multirow{5}{*}{692,703} & 5,796 & DoS slowloris\\ \cline{3-4} & & 5,499 & DoS Slowhttptest \\ \cline{3-4} & & 231,073 & DoS Hulk \\ \cline{3-4} & & 10,293 & DoS GoldenEye \\ \cline{3-4} & & 11 & Heartbleed \\ \hline \multirow{3}{*}{Thursday AM} & \multirow{3}{*}{170,366} & 1,507 & Web - Brute Force\\ \cline{3-4} & & 652 & Web - XSS\\ \cline{3-4} & & 21 & Web - SQL Injection \\ \hline Thursday PM & 288,602 & 36& Infiltration\\ \hline Friday AM & 191,033 & 1,966& Bot\\ \hline Friday PM 1 & 286,467 & 158,930& PortScan\\ \hline Friday PM 2 & 225,745 & 128,027& DDoS\\ \hline \textbf{Total}& \textbf{2,830,743} & \textbf{557,646} & \textbf{19.70\% malicious} \\ \hline \end{tabular} \label{table:CICIDS2017-overview} \end{center} \end{table} Table \ref{table:cicids2017-features} highlights some of the 74 features that were ``usable'' in the CIC IDS 2017 dataset. Among the remaining $85-74=11$ columns, eight continuous features either exhibit no variability or contain missing values and are therefore discarded; the remaining three are the flow ID, the timestamp, and the label (used as the predicted class).
\begin{table}[!htbp] \caption{Description of some of the 74 features of the CIC IDS 2017 dataset, where the columns have the same meanings as in Table \ref{table:iscx2012-features}.} \centering \label{table:cicids2017-features} \begin{tabular}{|c|c|l|c|c|} \hline \textbf{No.} & \textbf{Feature} & \textbf{Description} & \textbf{Type} & \textbf{Uniques} \\ \hline 1 &SrcIP & Source IP address & Categorical & 17,002 \\ \hline 2&DstIP & Dest. IP address & Categorical & 19,112 \\ \hline 3 &SrcPort & Source port & Categorical & 64,638 \\ \hline 4 &DstPort & Dest. port & Categorical & 53,791\\ \hline 5 &Protocol & IP protocol & Categorical & 3 \\ \hline 6 &Duration & Flow duration & Continuous & N/A\\ \hline 7 &total\_fpackets & \makecell[l]{Total num. \\ forward packets} & Continuous & N/A\\ \hline 8 &total\_bpackets & \makecell[l]{Total num. \\ backward packets} & Continuous & N/A\\ \hline 9 &total\_fpktl & \makecell[l]{Total size of \\ forward packets} & Continuous & N/A\\ \hline 10 &total\_bpktl & \makecell[l]{Total size of \\ backward packets} & Continuous & N/A\\ \hline $\vdots$ &$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\ \hline 70 &std\_active & \makecell[l]{Std. dev. of time flow \\ active before idle} & Continuous & N/A\\ \hline 71 &min\_idle & \makecell[l]{Min time flow \\ idle before active} & Continuous & N/A\\ \hline 72 &mean\_idle & \makecell[l]{Mean time flow \\ idle before active} & Continuous & N/A\\ \hline 73 &max\_idle & \makecell[l]{Max time flow \\ idle before active} & Continuous & N/A\\ \hline 74 &std\_idle & \makecell[l]{Std. dev. of time flow \\ idle before active} & Continuous & N/A\\ \hline \end{tabular} \vspace{-3mm} \end{table} \subsection{Using DNNs for network intrusion detection} \subsubsection{Pre-processing} We propose formatting a dataset (more specifically, network flows) in such a way that it can be input into a DNN. Recall that the ISCX IDS 2012 dataset is provided in PCAP as well as a custom XML file of network flows with associated ground-truth labels (indicating malicious or benign flows). The XML file is parsed and converted to a CSV file of flows, which becomes the input to the machine learning pipeline. Recall that the CIC IDS 2017 dataset is in the form of both PCAP and flows characterized by 74 usable features (5 categorical and 69 statistical). In order to make machine learning algorithms train models in the same feature space, it is common practice to normalize or scale all continuous features. For this purpose, we use standard \textit{min-max scaling}, which normalizes data to $[0,1]$ as follows: $X_{norm} = \frac{X - X_{min}}{X_{max} - X_{min}}$, where $X_{min}$ and $X_{max}$ are respectively the minimum and maximum values of feature $X$. In order to train DNNs over categorical data, we need to convert categorical features to numerical values. For this purpose, we propose adopting the \textit{entity embedding} technique \cite{guo2016entity} because it can cope with categorical features that take a large number of possible values. This is the case for the datasets we analyze because there are many possible values for source IP addresses, destination IP addresses, source port numbers, and destination port numbers.
In the entity embedding method, the number of embedding dimensions is determined according to the following rule of thumb \cite{google2019mlcrashcourse}: \begin{equation} dimensions = \ceil[\Big]{\sqrt[4]{possible\:values}\:}, \label{eq:embedding} \end{equation} where $possible\:values$ is the number of possible values a categorical feature can take. Specifically, a categorical feature is first mapped to an integer between $0$ and $n-1$, where $n$ is the number of unique values that can be taken by the feature, and then encoded as a {\em dense} vector with the dimensionality calculated in Eq. \eqref{eq:embedding}. Table \ref{table:embedding-cicids} summarizes the embedding of the four categorical features in the CIC IDS 2017 dataset. \begin{table}[!htbp] \caption{Embedding of the four categorical features in the CIC IDS 2017 dataset.} \centering \label{table:embedding-cicids} \begin{tabular}{|c|c|c|} \hline \textbf{Feature}&\textbf{Possible Values}&\textbf{Embedded Dimensions} \\ \hline Source IP& 17,002 & 12 \\ \hline Destination IP& 19,112 & 12 \\ \hline Source Port& 64,638 & 16 \\ \hline Destination Port& 53,791 & 15 \\ \hline \end{tabular} \end{table} The parameters (weights) for the vector representation of the categorical features are initialized using a random uniform distribution over the support $[-0.05, 0.05]$. This representation is not only computationally efficient; the entity embedding layer also learns intrinsic properties of each categorical feature, and the deeper layers of the neural network form complex combinations of them \cite{guo2016entity}. Since these vectors are inputs into the first layer of the neural network, their weights are updated in the back-propagation step at each epoch. \subsubsection{Training} The neural network consists of three hidden layers with 64 units each. Feeding into these three hidden layers is an initial input layer consisting of the embedded categorical variables concatenated with the statistical input features. The activation function on each hidden layer is the ReLU activation function, $R(z) = \max(0, z)$, while the last output layer uses a sigmoid activation function, $\sigma(z) = \frac{1}{1 + e^{-z}}$. A dropout rate of $0.40$ is used on each of the hidden layers. The optimizer used is RMSProp, with a default learning rate of $0.001$. The loss function used is binary crossentropy: \begin{equation} H_p(q) = -\frac{1}{N} \sum_{i=1}^N \left[ y_i \cdot \log(p(y_i)) + (1 - y_i) \cdot \log(1 - p(y_i)) \right], \label{eq:binary-crossentropy} \end{equation} where $y_i$ is the label ($1$ for malicious and $0$ for benign), $p(y_i)$ is the predicted probability of a given flow, and $N$ is the total number of flows. Intuitively, Eq. \ref{eq:binary-crossentropy} says that for each malicious flow ($y_i=1$), the loss contribution is $-\log(p(y_i))$, the negative logarithm of the predicted probability that the flow is malicious; for each benign flow ($y_i=0$), it is $-\log(1-p(y_i))$, the negative logarithm of the predicted probability that the flow is benign. \subsubsection{Experiments and results} We aim to use experiments to answer two questions: (i) Is deep learning more effective than other machine learning methods? (ii) Is deep learning robust in the presence of dynamic IP addresses? Note that (ii) is important because a trained DNN, which uses IP addresses as an important feature, can easily become useless in the presence of dynamic IP addresses, which are assigned in networks using the Dynamic Host Configuration Protocol (DHCP).
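To make the pre-processing and architecture described above concrete, the following is a minimal sketch in TensorFlow/Keras; it is an illustration rather than our exact implementation, and the variable names and the placement of the continuous features are illustrative. The embedding widths follow the rule of Eq. \eqref{eq:embedding} with the cardinalities of Table \ref{table:embedding-cicids}. \begin{verbatim}
# A minimal sketch (not the exact implementation) of the entity
# embeddings and the DNN described above, using TensorFlow/Keras.
import math
import tensorflow as tf
from tensorflow.keras import layers, Model

def embedding_dim(cardinality):
    # Rule of thumb of Eq. (1): ceiling of the 4th root of the cardinality.
    return math.ceil(cardinality ** 0.25)

# Cardinalities of the four categorical features (cf. the table above).
categorical = {"SrcIP": 17002, "DstIP": 19112,
               "SrcPort": 64638, "DstPort": 53791}
n_continuous = 69  # min-max scaled statistical features

inputs, embedded = [], []
init = tf.keras.initializers.RandomUniform(-0.05, 0.05)
for name, card in categorical.items():
    inp = layers.Input(shape=(1,), name=name)
    emb = layers.Embedding(card, embedding_dim(card),
                           embeddings_initializer=init)(inp)
    inputs.append(inp)
    embedded.append(layers.Flatten()(emb))

cont = layers.Input(shape=(n_continuous,), name="continuous")
inputs.append(cont)

x = layers.Concatenate()(embedded + [cont])
for _ in range(3):                 # three hidden layers, 64 ReLU units each
    x = layers.Dense(64, activation="relu")(x)
    x = layers.Dropout(0.40)(x)
out = layers.Dense(1, activation="sigmoid")(x)  # P(flow is malicious)

model = Model(inputs, out)
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.001),
              loss="binary_crossentropy", metrics=["accuracy"])
\end{verbatim}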
In order to answer the aforementioned question (i), we compare the effectiveness of deep learning and other machine learning methods using two standard metrics \cite{Pendleton16}, namely the {\em True-Positive Rate} (TPR) and the {\em False-Positive Rate} (FPR). \begin{table}[!htbp] \caption{Comparison of deep learning based intrusion detection with other machine learning based intrusion detection methods \cite{ahmim2018novel} using the CIC IDS 2017 dataset.} \label{table:cicids-result-comparison} \noindent \centering{}% \begin{tabular}{|c|c|c|} \hline \textbf{Technique} & \textbf{TPR} & \textbf{FPR}\tabularnewline \hline \makecell{Hybrid IDS \\ Decision Tree + Rule-based} \cite{ahmim2018novel} & $0.94475$ & $0.01145$ \tabularnewline \hline WISARD \cite{de2018experimental} & $0.48175$ & $0.02865$ \tabularnewline \hline Forest PA \cite{adnan2017forest} & $0.92920$ & $0.03550$ \tabularnewline \hline J48 Consolidated \cite{ibarguren2015coverage} & $0.92020$ & $0.06645$ \tabularnewline \hline LIBSVM \cite{chang2001libsvm} & $0.54595$ & $0.05130$ \tabularnewline \hline FURIA \cite{huhn2009furia} & $0.90500$ & $0.03165$ \tabularnewline \hline Random Forest \cite{ahmim2018novel} & $0.93050$ & $0.01880$ \tabularnewline \hline REP Tree \cite{ahmim2018novel} & $0.91640$ & $0.04835$ \tabularnewline \hline MLP \cite{ahmim2018novel} & $0.77830$ & $0.07350$ \tabularnewline \hline Naive Bayes \cite{ahmim2018novel} & $0.82510$ & $0.33455$ \tabularnewline \hline Jrip \cite{ahmim2018novel} & $0.93400$ & $0.04470$ \tabularnewline \hline J48 \cite{ahmim2018novel} & $0.91990$ & $0.05040$ \tabularnewline \hline \textbf{DNN with IPs} & $\textbf{0.9993}$ & $\textbf{0.0003}$ \tabularnewline \hline \textbf{DNN without IPs} & $\textbf{0.9677}$ & $\textbf{0.0052}$ \tabularnewline \hline \end{tabular} \end{table} Table \ref{table:cicids-result-comparison} compares deep learning against the other approaches evaluated in \cite{ahmim2018novel} using the CIC IDS 2017 dataset. We observe that the DNN using IP addresses achieves the highest True-Positive Rate (detection rate) and the lowest False-Positive Rate. This leads to the following: \begin{insight} The DNN using IP addresses achieves the highest effectiveness when compared with the other machine learning methods studied in the literature.
\end{insight} \begin{figure*}[!htbp] \begin{subfigure}[b]{0.19\textwidth} \centering \includegraphics[width=\textwidth]{iscx-keras-cf.png} \caption{ISCX2012 w/ IP} \label{fig:iscx-confusion-matrix-embeddings} \end{subfigure} \begin{subfigure}[b]{0.19\textwidth} \centering \includegraphics[width=\textwidth]{iscx-cf-without-ips.png} \caption{ISCX2012 w/o IP} \label{fig:iscx-confusion-matrix-noips} \end{subfigure} \begin{subfigure}[b]{0.19\textwidth} \centering \includegraphics[width=\textwidth]{cicids-keras-cf.png} \caption{CIC2017 w/ IP} \label{fig:cicids-confusion-matrix-embeddings} \end{subfigure} \begin{subfigure}[b]{0.19\textwidth} \centering \includegraphics[width=\textwidth]{cicids-keras-cf-noips.png} \caption{CIC2017 w/o IP} \label{fig:cicids-confusion-matrix-embeddings-no-ips} \end{subfigure} \begin{subfigure}[b]{0.19\textwidth} \centering \includegraphics[width=1.10\textwidth]{cicids-cf-first-3-octets.png} \caption{CIC2017 - first 3 octets} \label{fig:cicids-confusion-matrix-first-3-octets} \end{subfigure} \caption{Confusion matrix results for both datasets, where the $x$-axis is the predicted class and the $y$-axis is the true class. } \label{fig:cf} \end{figure*} In order to answer the aforementioned question (ii), we train the deep learning model using only a portion of each IP address. This is reasonable because DHCP typically operates within the same network, meaning that the network identity is static (e.g., the first 24 bits of an IP address in a class C network). Figure \ref{fig:cf} shows the results for the two datasets with and without IP address features. Figure \ref{fig:cicids-confusion-matrix-first-3-octets} shows the results of using just the first three octets of the source and destination IP addresses. In Figure \ref{fig:iscx-confusion-matrix-noips} we observe that for the ISCX IDS 2012 dataset, when the IP address features are removed, the performance degrades considerably (lower TPR and higher FNR). For the CIC IDS 2017 dataset, Figure \ref{fig:cicids-confusion-matrix-embeddings-no-ips} shows that removing the IP addresses degrades the performance only slightly in comparison to ISCX IDS 2012. Note that there is a considerably larger proportion of malicious examples in CIC IDS 2017 ($19.68\%$) than in ISCX IDS 2012 ($3.32\%$). We also notice that embedding only the first three octets of the IP address (Figure \ref{fig:cicids-confusion-matrix-first-3-octets}) achieves results similar to using the full IP address (Figure \ref{fig:cicids-confusion-matrix-embeddings}). This leads to the following: \begin{insight} A DNN using only the first three octets of the IP address is as effective as one using the full IP address, meaning that deep learning based intrusion detection is robust in the presence of DHCP. However, using full IP addresses is important when the dataset is imbalanced (i.e., the proportion of labeled malicious traffic is small). \end{insight} \subsection{Using Autoencoders for network intrusion detection} \subsubsection{Pre-processing} For the autoencoder experiments, all 69 usable continuous features of flow statistics in the CIC IDS 2017 dataset are used, normalized using the \textit{min-max} technique mentioned above. The categorical ``protocol'' feature, which has only 3 unique values, is also used and is converted to floating point numbers via one-hot encoding. The high-cardinality features of IP address and port are not used; we leave it to future work to incorporate these into the training of autoencoders.
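As an illustration, the pre-processing just described might look as follows; the CSV file name and the column names are assumptions for the sketch, since the released CSV files use slightly different headers. \begin{verbatim}
# A minimal sketch (assumed file and column names) of the autoencoder
# pre-processing described above: min-max scaling of the 69 continuous
# flow statistics plus one-hot encoding of the 3-valued Protocol feature.
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.read_csv("cicids2017_flows.csv")   # hypothetical file name
drop = ("Protocol", "Label", "FlowID", "Timestamp",
        "SrcIP", "DstIP", "SrcPort", "DstPort")  # non-continuous columns
continuous_cols = [c for c in df.columns if c not in drop]

scaled = MinMaxScaler().fit_transform(df[continuous_cols])  # values in [0,1]
protocol = pd.get_dummies(df["Protocol"], prefix="proto")   # 3 one-hot columns

X = pd.concat([pd.DataFrame(scaled, columns=continuous_cols),
               protocol.reset_index(drop=True)], axis=1).to_numpy()
# X has 69 + 3 = 72 columns, matching the autoencoder's input layer.
\end{verbatim}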
\subsubsection{Training} The autoencoder configuration consists of 7 layers: the first and last layers use the sigmoid activation function, while all hidden layers use ReLU. The first and last layers consist of 72 units each (one per input feature), and the hidden layers consist of 140, 35, 16, 16 and 35 units, respectively. In addition, L1 regularization is applied to the first input layer. The objective function for the autoencoder is the squared error. Written out in terms of weights and inputs, this function is shown in Eq. \eqref{eq:squared-error} below. \begin{equation} \label{eq:squared-error} J = \| X - \hat{X} \|_F^2 = \| X - \sigma(\sigma(X W)\,W^{\top}) \|_F^2, \end{equation} where $\sigma(\cdot)$ denotes the element-wise sigmoid and, for compactness, $\hat{X}$ is written for a single-hidden-layer autoencoder with tied weights $W$. \subsubsection{Experiment and results} Figure \ref{fig:cicids-autoencoder-reconstruction-error} plots the experimental results. We observe that there is a higher reconstruction error for the malicious traffic flows than for the benign flows. The number of false positives can be adjusted via the threshold; setting the threshold at a reconstruction error of $0.03$ yields only $89$ false positives in total, i.e., a False-Positive Rate of $0.00013$. However, the False-Negative Rate is high at $0.7670$. In addition, we observe that a majority of the malicious flows are clustered in groups, lending credence to future work that can incorporate the time domain as a feature. We draw the following insight: \begin{figure}[!htbp] \centering \includegraphics[width=.4\textwidth]{reconstruction-error.png} \caption{Experimental results using the CIC IDS 2017 dataset: Autoencoder reconstruction error with threshold.} \label{fig:cicids-autoencoder-reconstruction-error} \end{figure} \begin{insight} Autoencoders can be effective as anomaly detection mechanisms for network intrusion detection (in terms of a low False-Positive Rate) when trained on benign traffic only. \end{insight} \section{Limitations} \label{sec:limitations} \input{limitations.tex} \section{Conclusion} \label{sec:conclusion} \input{conclusion.tex} \noindent{\bf Acknowledgements}. This work was supported in part by ARL grant \#W911NF-17-2-0127 and NSF CREST Grant \#1736209. \bibliographystyle{ieeetr}
\section{Introduction} Disease progression refers to the evolution of a disease over time. Modeling the temporal characteristics of a disease may be useful for various purposes, including scientific discovery (e.g., understanding how a disease manifests itself by discovering the stages patients typically go through) and clinical decision-making (e.g., evaluating the health status of a patient by identifying the stage the patient is in). Probabilistic time-series models are a natural choice for disease progression modeling as they take into account temporal relations in data. However, the task remains challenging for these models mainly because of (i) limited availability of data, (ii) data quality problems (e.g., missing data), (iii) the need for interpretability and (iv) the heterogeneous nature of diseases such as Alzheimer's disease (AD) and Parkinson's disease (PD). A practical solution to these problems has been the use of hidden Markov models (HMMs), which (i) can be trained using small datasets, (ii) can handle missing data in a principled way and (iii) are interpretable models, e.g., it is possible to relate inferred latent states to particular symptoms. Most existing HMMs \citep{jackson2003multistate,sukkar2012disease,guihenneuc2000modeling,wang2014unsupervised,sun2019probabilistic,severson2020personalized,severson2021discovery}, however, assume that each patient follows the same latent state transition dynamics, ignoring the heterogeneity in the disease progression dynamics. The need for heterogeneous disease progression modeling has been highlighted by the works on \emph{disease subtyping}, which is defined as the task of identifying subpopulations of similar patients that can guide treatment decisions for a given individual \citep{saria2015subtyping}. Disease subtyping can be useful especially for complex diseases which are often poorly understood, such as autism \citep{state2012emerging}, cardiovascular disease \citep{de2009heart} and Parkinson’s disease \citep{lewis2005heterogeneity}. The discovery of subtypes can further benefit both scientific discovery (e.g., studying the associations between the shared characteristics of similar patients and potential causes) and clinical decision-making (e.g., reducing the uncertainty in an individual's expected outcome) \citep{saria2015subtyping}. Traditionally, disease subtyping has been carried out by clinicians who may notice the presence of subgroups \citep{barr1999patterns,ewing1921diffuse}. More recently, the growing availability of medical datasets and computational resources has facilitated the rapid adoption of data-driven approaches that offer objective methods to discover underlying disease subtypes \citep{schulam2015clustering,lewis2005heterogeneity}. For instance, \citet{lewis2005heterogeneity} discover the presence of four subtypes of PD; however, they apply k-means clustering, which may have limited capability to capture complex patterns in the data. \citet{schulam2015clustering} develop a more sophisticated approach based on a mixture model that is robust against variability unrelated to disease subtyping; however, their proposed model does not take into account the temporal relations in the clinical visits. In this work, we relax the assumption of HMMs that the disease dynamics, as specified by the transition matrix, is shared among all patients.
Instead, we propose the use of hierarchical HMMs for disease progression modeling, particularly mixtures of HMMs (mHMMs) and their variants, which can explicitly model group-level similarities of patients. We are motivated by the applications of mHMMs in other domains where they have been shown to outperform HMMs, such as modeling activity levels in accelerometer data \citep{de2020mixture}, modeling clickstreams of web surfers \citep{ypma2008categorization} and modeling human mobility using geo-tagged social media data \citep{zhang2016gmove}. We summarize our contributions and the organization of the paper below: \paragraph{Contributions:} To our knowledge, this is the first attempt to apply mHMMs to disease progression modeling. Particularly, we show that the mixture of input-output HMMs (mIOHMMs) suits disease progression modeling better than IOHMMs, as it can discover multiple disease progression dynamics in addition to taking into account the medication information. Moreover, we develop mixtures of a number of HMM variants, namely mIOHMMs, the mixture of personalized HMMs (mPHMMs) and the mixture of personalized IOHMMs (mPIOHMMs), which have not been explored before by the machine learning community. \paragraph{Organization:} We first introduce our notation for HMMs and present three HMM variants with their mixture extensions (Section \ref{label:methodology}). We then discuss the related work (Section \ref{label:related-work}), which is followed by the experiments and the results (Section \ref{label:experiments}). Finally, we summarize our work and discuss possible future research directions (Section \ref{label:summary}). \section{Methodology} \label{label:methodology} This section describes the background information on HMMs, introduces our proposed models and the training procedure we apply. \subsection{Background} Below we introduce our notation for HMMs and describe its three variants proposed by \citet{severson2020personalized}. \subsubsection*{HMM} We consider an HMM with a Gaussian observation model and define it as a tuple $M= (\pi, A, \mu, \Sigma)$, where $\pi$ denotes the initial-state probabilities, $A$ the state-transition probabilities, and $\mu$ and $\Sigma$ the mean and covariance parameters of the observation model with Gaussian densities. The generative model of an HMM is as follows: \begin{gather} x_1^{(i)} \sim \mathcal{C}at(\pi), \qquad x_t^{(i)} | x_{t-1}^{(i)} = l \sim \mathcal{C}at(A_l), \nonumber \\ y_t^{(i)} | x_t^{(i)}=l \sim \mathcal{N}(y_t^{(i)}; \mu_l, \Sigma_l), \end{gather} where $x_t^{(i)}$ and $y_t^{(i)}$ are respectively the hidden state and observation at time $t$ for the $i^{th}$ time-series sequence, and $\mathcal{C}at(\cdot)$ and $\mathcal{N}(\cdot)$ respectively denote the Categorical and Gaussian distributions. Here, $x_{t}^{(i)}$ is conditionally generated given that the hidden state at time $t-1$ for the $i^{th}$ sequence, denoted by $x_{t-1}^{(i)}$, is the $l^{th}$ hidden state. Similarly, $y_{t}^{(i)}$ is generated conditionally on $x_{t}^{(i)}=l$. \subsubsection*{PHMM} We can train an HMM using multiple medical time-series sequences collected from different patients. This approach relies on the assumption that every patient shares the same state means and covariances, which may not be realistic when individuals deviate from the state means by different amounts.
To address this issue, \citet{severson2020personalized} propose a personalized HMM (PHMM) by modifying the observation model of the HMM as follows: \begin{eqnarray} y_t^{(i)} | x_t^{(i)}=l \sim \mathcal{N}(y_t^{(i)}; \mu_l + r^{(i)}, \Sigma_l), \end{eqnarray} where $r^{(i)}$ denotes the individual deviation from the states. \subsubsection*{IOHMM} The observed variables of an HMM are typically the clinical assessments made during hospital visits. However, the medication information can also be informative about the health status of a patient. To incorporate such information into disease progression modeling, \citet{severson2020personalized} introduce the following observation model: \begin{eqnarray} \label{eqn:iohmm-likelihood} y_t^{(i)} | x_t^{(i)}=l &\sim& \mathcal{N}(y_t^{(i)}; \mu_l +v_l d_t^{(i)}, \Sigma_l), \end{eqnarray} where $d_t^{(i)}$ is the observed medication data at time $t$ for the $i^{th}$ patient and $v_l$ denotes the state medication effects. The proposed model resembles input-output HMMs \cite{bengio1994input}, except that the hidden states are not conditioned on the input variables (which are used to incorporate medication data), as the medication is assumed to have no disease-modifying impact. This assumption is valid for diseases such as PD, for which there is no cure but only treatments that help reduce the symptoms. \subsubsection*{PIOHMM} Finally, combining PHMM and IOHMM provides a personalized model that takes the medications into account: \begin{eqnarray} y_t^{(i)} | x_t^{(i)}=l &\sim& \mathcal{N}(y_t^{(i)}; \mu_l + r^{(i)} + (v_l + m^{(i)}) d_t^{(i)}, \Sigma_l), \end{eqnarray} where $m^{(i)}$ denotes the personalized medication effects. \subsection{The Proposed Models} Below we follow a general recipe to construct hierarchical mixture models. We first extend the HMM and then its three variants to their mixture counterparts. For simplicity, we construct the mixture version of an HMM variant (e.g., mPHMMs from PHMM) by concatenating the parameters of its components; however, it would also be possible to apply alternative schemes, e.g., see \citet{smyth1996clustering} for a hierarchical clustering-based approach. \subsubsection*{mHMMs} We define an mHMM as a set $M= \{M_1, M_2, \dots, M_K\}$ where $M_k = (\pi_k, A_k, \mu_k, \Sigma_k)$ is the $k^{th}$ HMM mixture component. The generative model is as follows: \begin{align} z^{(i)} &\sim \mathcal{C}at(\alpha), \nonumber \\ x_1^{(i)} | z^{(i)}=k &\sim \mathcal{C}at(\pi_k), \nonumber \\ x_t^{(i)} | x_{t-1}^{(i)} = l, z^{(i)}=k &\sim \mathcal{C}at(A_{k,l}), \nonumber \\ y_t^{(i)} | x_t^{(i)}=l, z^{(i)}=k &\sim \mathcal{N}(y_t^{(i)}; \mu_{k,l}, \Sigma_{k,l}), \end{align} where $z^{(i)}$ denotes the HMM component that the $i^{th}$ time-series sequence belongs to, and $x_t^{(i)}$ and $y_t^{(i)}$ are respectively the corresponding hidden state and observation at time $t$. Note that when the cardinality of $z$ is 1, the model reduces to the standard HMM. Fig.\ \ref{fig:mHMMs-graphical} presents a graphical representation of mHMMs. mHMMs assume that each time-series sequence belongs to one HMM component. This construction allows us to cluster similar sequences so that each cluster is represented using different parameter values. As we have mentioned earlier, training a single HMM for all sequences may not be expressive enough. On the other hand, training a separate HMM for each sequence can be challenging due to data sparsity and computational cost.
mHMMs overcome these problems by combining a number of HMMs that is greater than one and smaller than the number of sequences. \begin{figure}[htb!] \begin{center} \includegraphics[width=0.45\textwidth]{figures/mHMM-graphical-model.pdf} \caption{A graphical representation of mHMMs.} \label{fig:mHMMs-graphical} \end{center} \end{figure} \subsubsection*{mPHMMs} Similarly to mHMMs, we obtain the mixture versions of the HMM variants. For example, we modify the observation model of PHMM to obtain its mixture version as follows: \begin{eqnarray} y_t^{(i)} | x_t^{(i)}=l, z^{(i)}=k \sim \mathcal{N}(y_t^{(i)}; \mu_{k,l} + r^{(i)}, \Sigma_{k,l}). \end{eqnarray} \subsubsection*{mIOHMMs} We obtain mIOHMMs using the observation model given below: \begin{eqnarray} \label{eqn:miohmm-likelihood} y_t^{(i)} | x_t^{(i)}=l, z^{(i)}=k \sim \mathcal{N}(y_t^{(i)}; \mu_{k,l} + v_{k,l} d_t^{(i)}, \Sigma_{k,l}). \end{eqnarray} \subsubsection*{mPIOHMM} Finally, mPIOHMM has the following observation model: \begin{eqnarray} y_t^{(i)} \sim \mathcal{N}(y_t^{(i)}; \mu_{k,l} + r^{(i)} + (v_{k,l} + m^{(i)}) d_t^{(i)}, \Sigma_{k,l}), \end{eqnarray} where $l=x_t^{(i)}$ and $k=z^{(i)}$. \subsection{The Training of the Models} We follow the same training procedure proposed by \citet{severson2020personalized}, where variational inference is used to approximate the posterior distributions over the latent variables $x$, $m$ and $r$ as follows: \begin{align} q(x,m,r|y, \lambda) &= \prod_{i=1}^N q(m^{(i)}|\lambda) q(r^{(i)}|\lambda) q(x^{(i)}| y^{(i)}, m^{(i)}, r^{(i)}), \nonumber \\ &= \prod_{i=1}^N q(m^{(i)}|\lambda) q(r^{(i)}|\lambda) \nonumber \\ & \qquad \qquad \prod_{t=2}^{T_i} q(x^{(i)}_t|x^{(i)}_{t-1},y^{(i)}_t, m^{(i)}, r^{(i)}), \end{align} where $\lambda$ denotes the variational free parameters. The corresponding evidence lower bound (ELBO) is maximized using coordinate ascent, alternating between updates of the variational parameters $\lambda$ and the model parameters $\theta$. Please see \citet{severson2020personalized} for the details of the training algorithm. Note that we simplify the inference by not explicitly inferring the latent variables $z$. Instead, we obtain the cluster membership of each sequence based on its state trajectory estimated via the Viterbi algorithm, thanks to the block-diagonal structure of the transition matrices. However, it would be possible to explicitly infer the variables $z$ by introducing the corresponding variational distribution $q(z^{(i)} | \lambda_{z^{(i)}})$. \section{Related Work} \label{label:related-work} The most common approach to disease progression modeling has been the use of HMMs. For example, \citet{guihenneuc2000modeling} employ an HMM with discrete observations for modeling the progression of Acquired Immune Deficiency Syndrome (AIDS). \citet{sukkar2012disease} apply the same model to Alzheimer's disease. \citet{wang2014unsupervised} introduce additional hidden variables to incorporate the \emph{comorbidities} of a disease into the transition dynamics. Note that comorbidities are defined as syndromes co-occurring with the target disease; e.g., hypertension is a common comorbidity of diabetes. Other applications of HMMs to disease progression include the work on Huntington’s disease \cite{sun2019probabilistic} and abdominal aortic aneurysm \cite{jackson2003multistate}. Lastly, standard HMMs have been modified for personalized disease progression modeling. \citet{altman2007mixed} introduce random effects to better capture individual deviations from states.
\citet{severson2020personalized,severson2021discovery} propose a model that is both personalized and takes medication information into account for disease progression modeling. An alternative approach to personalizing disease progression is through Gaussian processes (GPs). \citet{peterson2017personalized} propose a GP model personalized based on each patient’s previous visits. \citet{lorenzi2019probabilistic} combine a GP with a set of random effect variables, where the former is used to model progression dynamics shared among patients and the latter is used to represent their individual differences. \citet{schulam2015framework} propose a more general framework based on a hierarchical GP model with population, subpopulation and individual components, which has been applied to the measurements of a single biomarker. \citet{futoma2016predicting} later generalize this model to the case of multiple biomarkers. Another common approach to disease progression modeling has been the use of deep learning, especially when interpretability is not a major concern and a large amount of clinical data is available \citep{che2018hierarchical,eulenberg2017reconstructing,pham2017predicting,alaa2019attentive,lee2020temporal,chen2022clustering}. Among these methods, the most relevant works to ours are the approaches proposed by \citet{lee2020temporal} and \citet{chen2022clustering}, which can identify ``similar'' patients via time-series clustering. Perhaps the closest related works are the studies on disease subtyping, particularly those focusing on Parkinson's disease (PD). \citet{lewis2005heterogeneity} discover the presence of four subtypes of PD by applying k-means clustering. \citet{schulam2015clustering} develop a mixture model that is robust against variability unrelated to disease subtyping. Neither of these approaches, however, takes into account the temporal relations in the clinical visits. Finally, mHMMs have been shown to outperform HMMs in other domains, such as modeling activity levels in accelerometer data \citep{de2020mixture}, modeling clickstreams of web surfers \citep{ypma2008categorization} and modeling human mobility using geo-tagged social media data \citep{zhang2016gmove}. We also note a couple of works on the training of mHMMs, such as the hierarchical clustering-based approach proposed by \citet{smyth1996clustering} and the spectral-learning based training algorithm proposed by \citet{subakan2014spectral}. \section{Experiments} \label{label:experiments} We present two sets of experiments. The goal of the first experiment is to demonstrate the ability of mPHMM to simultaneously learn personalized state effects and multiple disease progression dynamics using synthetically generated data, for which we know the true disease progression dynamics. We then show that mIOHMM provides a better fit to a real-world dataset than IOHMM by discovering multiple disease progression dynamics. The code to reproduce the experiments is publicly available at \url{https://github.com/tahaceritli/mIOHMM}. \begin{figure*}[htb!] \begin{center} \includegraphics[width=0.9\textwidth]{figures/x_hats.png} \caption{A comparison of the models for simulated data based on the original study by \citet{severson2020personalized}. The three rows correspond to different pairs of models being compared. The standard HMM incorrectly assigns states and compensates for the personalization with large variances, as shown in the first row.
We observe the same phenomenon in the middle row with mHMMs, although the variance is lower than that of the HMM, as the mixture components provide a richer representation of the state means. In the bottom row, PHMM and mPHMM overlap, showing that the model can still handle individual variations in the data.} \label{fig:synt-fit-plot} \end{center} \end{figure*} \subsection{Synthetic Data} Combining the settings used by \citet{severson2020personalized} and \citet{smyth1996clustering}, we build a 2-component mPHMM with 2 latent states for each PHMM. The state transition matrices of the PHMM mixtures are given below: \begin{equation*} A_1 = \begin{bmatrix} 0.8 & 0.2 \\ 0.2 & 0.8 \end{bmatrix}, A_2 = \begin{bmatrix} 0.2 & 0.8 \\ 0.8 & 0.2 \end{bmatrix}, \end{equation*} where $A_k$ denotes the state transition matrix of the $k^{th}$ PHMM. The observation model is built using Gaussian densities with the means $\mu_1 = \mu_2 = \begin{bmatrix} 0 \\ 2 \end{bmatrix}$ and variances $\sigma_1^2 = \sigma_2^2 = \begin{bmatrix} 0.1 \\ 0.1 \end{bmatrix}$. Note that the state means and variances are the same for each PHMM whereas the transition dynamics are different, i.e., transitions between the latent states are less likely to occur in the first PHMM than in the second PHMM. The initial state probabilities are assumed to be uniformly distributed. We use the noisy observation model $y_t^{(i)} \,|\, x_t^{(i)} = l \sim \mathcal{N}(\mu_l + r^{(i)}, \Sigma_T)$, where $\Sigma_T$ is specified via the squared exponential kernel $\kappa(t, t') = \sigma^2 \exp \left( -\frac{(t-t')^2}{2 l^2} \right)$, with $l$ and $\sigma$ set to 1 and 0.1, respectively. Lastly, the personalized state offset $r^{(i)}$ is sampled uniformly for each sequence, with range parameter $b=1$. Fixing the dimensionality of the data to 1, we generate 200 sequences of length 30 using this model. The training of mPHMM yields the parameter estimates given below: \begin{align*} \hat{A_1}= \begin{bmatrix} 0.80 & 0.20 \\ 0.19 & 0.81 \end{bmatrix}, \hspace{.5cm} \hat{\mu_1} = \begin{bmatrix} 0.11 \\ 2.10 \end{bmatrix}, \hspace{.5cm} \hat{\sigma^2_1} = \begin{bmatrix} 0.10 \\ 0.11 \end{bmatrix}, \\ \hat{A_2} = \begin{bmatrix} 0.21 & 0.79 \\ 0.80 & 0.20 \end{bmatrix}, \hspace{.5cm} \hat{\mu_2} = \begin{bmatrix} 0.04 \\ 2.05 \end{bmatrix}, \hspace{.5cm} \hat{\sigma^2_2} = \begin{bmatrix} 0.10 \\ 0.10 \end{bmatrix}. \end{align*} On the other hand, we obtain the following parameter estimates using PHMM: $A = \begin{bmatrix} 0.53 & 0.47 \\ 0.46 & 0.54 \end{bmatrix}$, $\mu = \begin{bmatrix} 0.05 & 2.05 \end{bmatrix}$ and $\sigma = \begin{bmatrix} 0.10 & 0.11 \end{bmatrix}$, which indicates that PHMM cannot distinguish the heterogeneous state-transitions. Note that we could have adapted PHMM to this example by using 4 latent states; however, the distinction between the states would not be clear as the block-diagonal structure is not introduced in PHMM (see the additional experimental results in Appendix \ref{appendix:additional-experimental-results}). Finally, we demonstrate that our model retains the personalization capabilities of the original PHMM discussed in \citet{severson2020personalized}. Fig.\ \ref{fig:synt-fit-plot} presents a number of sequences and the corresponding estimates obtained using HMM, PHMM, mHMM and mPHMM. The figure indicates that mPHMM performs similarly to PHMM in fitting the data. However, mPHMM has the advantage over PHMM of discovering the heterogeneous transition matrices, as discussed above.
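For reproducibility, a minimal sketch of the synthetic-data generator is given below. It simplifies the temporally correlated noise to i.i.d. Gaussian noise and assumes the offset $r^{(i)}$ is drawn uniformly from $[-b, b]$ and the mixture assignment uniformly at random, so it is an illustration rather than the exact generator. \begin{verbatim}
# A minimal sketch (not the exact generator) of sampling from the
# 2-component mPHMM above; the GP-correlated noise is replaced by
# i.i.d. Gaussian noise for brevity.
import numpy as np

rng = np.random.default_rng(0)

A = [np.array([[0.8, 0.2], [0.2, 0.8]]),   # mixture 1: sticky states
     np.array([[0.2, 0.8], [0.8, 0.2]])]   # mixture 2: frequent switches
mu = np.array([0.0, 2.0])                  # shared state means
sigma = np.sqrt(0.1)                       # shared state std. deviations

def sample_sequence(T=30, b=1.0):
    k = rng.integers(2)                    # mixture assignment z^(i), assumed uniform
    r = rng.uniform(-b, b)                 # personalized offset r^(i), assumed in [-b,b]
    x = rng.integers(2)                    # uniform initial state
    ys = []
    for _ in range(T):
        ys.append(rng.normal(mu[x] + r, sigma))
        x = rng.choice(2, p=A[k][x])       # Markov transition within mixture k
    return k, np.array(ys)

data = [sample_sequence() for _ in range(200)]  # 200 sequences of length 30
\end{verbatim}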
\subsection{Real Data} \subsubsection*{Data} Following the experimental setup in \citet{severson2020personalized}, we use the Parkinson's Progression Markers Initiative (PPMI) dataset \cite{marek2011parkinson} for the real data experiments. PPMI is a longitudinal dataset collected from 423 PD patients, including clinical, imaging and biospecimen information. We focus on the clinical assessments measured via the Movement Disorder Society Unified Parkinson’s Disease Rating Scale (MDS-UPDRS) \cite{goetz2008movement}. The MDS-UPDRS consists of a combination of patient-reported measures and physician-assessed measures: (i) non-motor experiences of daily living, (ii) motor experiences of daily living, (iii) motor examination and (iv) motor complications. Each item on the scale is rated from 0 (normal) to 4 (severe). We do not use the motor complications part, obtaining 59 observation features. As the medication data, we use the levodopa equivalent daily dose (LEDD) \cite{tomlinson2010systematic}, which is provided in the PPMI dataset. \subsubsection*{Metrics} To compare the models, we use three information criteria: the Akaike Information Criterion (AIC, \citealt{akaike1998information}), the Bayesian Information Criterion (BIC, \citealt{schwarz1978estimating}) and the Integrated Completed Likelihood (ICL, \citealt{biernacki2000assessing}), defined below: \begin{eqnarray*} \mathrm{AIC} &=& - 2\ell + 2k, \\ \mathrm{BIC} &=& - 2\ell + k \log N, \\ \mathrm{ICL} &=& - 2\hat{\ell} + 2k, \end{eqnarray*} where $\ell$ is the log-likelihood of the training data, $k$ is the number of free parameters, $N$ is the number of training data instances, and $\hat{\ell}$ is the log-likelihood of the training data under the most likely trajectory. Here, $k$ is calculated as $L^2 + 3LD - 1$, where $D$ is the dimension of the observations and $L$ is the total number of hidden states aggregated over the HMM mixtures, as we use diagonal covariance matrices. Additionally, the log-likelihoods are calculated based on Equations \ref{eqn:iohmm-likelihood} and \ref{eqn:miohmm-likelihood}. \subsubsection*{Model} We compare IOHMM and a number of mIOHMMs with a varying number of components (i.e., $K \in \{2,3,4,5\}$). Note that these models are not personalized, meaning that they are equivalent to PIOHMM with the personalized effect variables fixed to zero, i.e., $r^{(i)}=0$ and $m^{(i)}=0$. In this work, we evaluate the impact of using mIOHMM over IOHMM. One could similarly apply model selection for PIOHMM without fixing the personalized effect variables to zero; however, in our experience, this is computationally expensive and more efficient algorithms need to be developed, which is out of the scope of this work. Additionally, we fix the number of hidden states to 8 following the setting in \citet{severson2020personalized}, and use diagonal covariance matrices. \subsubsection*{Results} Table \ref{tab:all-methods-performance} presents the values of the information criteria obtained using the models, which indicate that mIOHMMs are favoured over IOHMM. As per the table, AIC and BIC select four components whereas ICL selects two components. This result is not surprising as AIC and BIC tend to be overoptimistic about the model size \cite{biernacki2000assessing}.
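For concreteness, the model-selection computation described above can be sketched as follows; this is an illustrative helper, not the exact code, and it mirrors the formulas for AIC, BIC and ICL as stated, with $k = L^2 + 3LD - 1$. \begin{verbatim}
# A small helper (for illustration) computing the information criteria
# above for a mixture of K IOHMMs with 8 hidden states per component
# and D-dimensional observations (diagonal covariances).
import numpy as np

def information_criteria(loglik, loglik_viterbi, K, D, N,
                         states_per_mixture=8):
    L = K * states_per_mixture          # hidden states aggregated over mixtures
    k = L**2 + 3 * L * D - 1            # number of free parameters
    aic = -2 * loglik + 2 * k
    bic = -2 * loglik + k * np.log(N)
    icl = -2 * loglik_viterbi + 2 * k   # log-likelihood under the most
                                        # likely (Viterbi) trajectory
    return aic, bic, icl
\end{verbatim}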
\begin{table}[h] \centering \caption{Performance of IOHMM ($K=1$) and mIOHMMs ($K \ge 2$) in terms of AIC, BIC and ICL (lower is better; best values in bold).} \begin{tabular}{lccc} \toprule K & AIC & BIC & ICL \\ \midrule 1 & -5.5370e+07 & -5.5365e+07 & -5.4256e+07 \\ 2 & -5.5532e+07 & -5.5520e+07 & \textbf{-5.5330e+07} \\ 3 & -5.5540e+07 & -5.5521e+07 & -5.5246e+07 \\ 4 & \textbf{-5.5567e+07} & \textbf{-5.5542e+07} & -5.2234e+07 \\ 5 & -5.5536e+07 & -5.5503e+07 & -5.5250e+07 \\ \bottomrule \end{tabular} \label{tab:all-methods-performance} \end{table} Following the ICL criterion, we compare and interpret the parameter estimates obtained using the 2-component mIOHMM and IOHMM. In addition to the ICL criterion, which reflects the overall performance of the models, we report a measure of performance per patient in Appendix Fig.\ \ref{fig:appendix:ppmi-test-diff}, which indicates that mIOHMM leads to a higher likelihood per patient than IOHMM (on average). Next, we inspect the initial-state probabilities, state-transition probabilities, state means and medication means. \begin{figure*}[t!] \centering \includegraphics[width=.95\textwidth]{figures/state-summaries-combined.pdf} \caption{A summary of the state and medication means obtained using IOHMM and the 2-component mIOHMM.} \label{fig:mean-summaries} \end{figure*} Note that the two clusters obtained using the 2-component mIOHMM contain 105 and 227 patients, respectively. Table \ref{tab:summary-patients-char} presents a summary of the age and sex distribution for each cluster of patients, which indicates that the clusters are not picking up on simple subject demographics. \begin{table}[h] \centering \caption{A summary of the patients' characteristics: age in years as mean (standard deviation), and sex as count (percentage).} \begin{tabular}{llcrr} \toprule & & Overall & 1st cluster & 2nd cluster \\ \midrule Age & & 61.6 (9.8) & 60.6 (10.1) & 62.1 (9.6)\\ \multirow{2}{*}{Sex} & Female & 217 (65\%) & 76 (72\%) & 141 (62\%)\\ & Male & 115 (35\%) & 29 (28\%) & 86 (38\%)\\ \bottomrule \end{tabular} \label{tab:summary-patients-char} \end{table} Since there are 59 features in total, the complete state and medication means are reported in Appendix \ref{appendix:additional-experimental-results}. Here, we present their summaries, which are calculated based on the primary clinical symptoms used for the diagnosis of PD, as done in \citet{severson2020personalized}. Fig.\ \ref{fig:mean-summaries} presents the average state and medication means for each hidden state based on the tremor, bradykinesia, rigidity and postural instability/gait (PI/G) related features. The relevant features are selected based on the MDS-UPDRS as follows: tremor, 2.10, 3.15-3.18; postural instability/gait, 2.12-2.13, 3.10-3.12; bradykinesia, 3.4-3.8, 3.14; and rigidity, 3.3 (see \citet{stebbins2013identify} for the details). We first discuss the initial-state probabilities. Recall that state-transitions are allowed only in the forward direction. Therefore, we expect the most likely initial-states to represent a patient's health condition at enrollment, which is often mild. This is indeed the case for both IOHMM and each mixture of the mIOHMM, where the most likely initial-states have mild symptoms. For instance, the state 2 of IOHMM has the highest initial-state probability and the lowest total MDS-UPDRS score. For the first and second mixtures of the mIOHMM, these are respectively the states 1 and 5. Note that the score for each symptom is not recommended to be collapsed into a single total score \cite{goetz2008movement}.
Therefore, we also consider the score per symptom and discuss the characteristics of the states based on the intensities of individual scores. One common state characteristic is co-occurring severity in the PI/G, bradykinesia and rigidity symptoms. For example, the state 6 of IOHMM, the state 2 of the first mixture of the mIOHMM and the state 6 of the second mixture of the mIOHMM have severe PI/G, bradykinesia and rigidity issues but no tremor issues. However, these states differ in terms of the level of severity; e.g., the least severe of these is the state 2 of the first mixture of the mIOHMM. Another state characteristic we observe is co-occurring severity in the bradykinesia and rigidity symptoms. This characteristic is seen in the states 3 and 8 of IOHMM, and the states 2 and 8 of the second mixture of the mIOHMM. We characterize the states based on the subtype methodology proposed by \citet{stebbins2013identify}, where each state is labeled with one of the following subtypes: (i) tremor, (ii) PI/G and (iii) indeterminate. For IOHMM, the states 1, 2, 4, 5 and 7 are tremor dominant, state 6 is PI/G dominant, and states 3 and 8 are indeterminate. For the first mixture of the mIOHMM, the states 1, 3, 5, 7 and 8 are tremor dominant, states 2 and 4 are PI/G dominant, and state 6 is indeterminate. For the second mixture of the mIOHMM, the states 2, 6 and 7 are PI/G dominant and the remaining states are tremor dominant. These observations are consistent with the findings of \citet{severson2020personalized} in that the number of tremor dominant states is higher than the number of PI/G dominant states. \begin{figure*}[t!] \begin{center} \includegraphics[width=.7\textwidth]{figures/data_states_3_2nd_patno_241.pdf} \caption{A histogram of the data of a deteriorating PD patient (female, 57 years old) (top) and the corresponding state-trajectory (bottom). The data is clustered into the second mixture of the mIOHMM. The subject appears to deteriorate over time, as denoted by visiting the states 1, 4 and 8, which indicates increasing means in the bradykinesia- and rigidity-based features. Notice that a data column is entirely missing when the patient misses a hospital visit, whereas some features are non-missing but still zero-valued because a 0 rating has been given for the corresponding symptom.} \label{fig:data-state-traj} \end{center} \end{figure*} In addition to the state means, each state is associated with a medication variable. When a patient is on medication, the symptoms are modeled using the state means and state medication variables. This can help to distinguish states with similar means but different medication effects. For example, the states 5 and 7 of the first mixture of the mIOHMM are similar in terms of the state means; however, the state 5 has a higher medication effect for the PI/G symptom than the state 7. Note that, for a better understanding of the model findings, one should take into account whether a patient is on medication at a hospital visit and, if so, the dose of the medication. We note that the state-transition probabilities favour self-transitions, which is not surprising as disease progression occurs slowly between hospital visits. Again, the transitions need to be interpreted considering the medication effects. Finally, we discuss the state-trajectories obtained via the Viterbi algorithm. Fig.\ \ref{fig:data-state-traj} plots the data of a patient clustered into the second mIOHMM mixture and the corresponding state-trajectory. We observe that the changes in the regime are successfully captured by the states.
For the corresponding feature list and indices, we refer the reader to Appendix Fig.\ \ref{fig:appendix:IOHMM-mIOHMM-full-state-means}. We also visualize the disease progression trajectories for a number of patients in each cluster, together with their overall severity scores, in Appendix Fig.\ \ref{fig:appendix:state-traj}. \section{Summary} \label{label:summary} In this paper, we have applied mixtures of hidden Markov models to disease progression modeling. The proposed models can identify similar groups of patients through time-series clustering and represent the progression of each group separately, unlike hidden Markov models, which assume that a single transition dynamics is shared among all patients. Our experiments on a real-world dataset have demonstrated the benefits of mixture models over a single hidden Markov model for disease progression modeling. Future work includes the development of efficient training algorithms for mPIOHMMs. \section*{Acknowledgements} We would like to thank Kristen A.\ Severson for comments and discussion. Data used in the preparation of this article were obtained from the Parkinson’s Progression Markers Initiative (PPMI) database (\url{www.ppmi-info.org/access-data-specimens/download-data}). For up-to-date information on the study, visit \url{www.ppmi-info.org}. PPMI – a public-private partnership – is funded by the Michael J. Fox Foundation for Parkinson’s Research and funding partners, including 4D Pharma, Abbvie, AcureX Therapeutics, Allergan, Amathus Therapeutics, Aligning Science Across Parkinson’s, Avid Radiopharmaceuticals, Bial Biotech, Biogen, BioLegend, Bristol-Myers Squibb, Calico Life Sciences LLC, Celgene, DaCapo Brainscience, Denali, the Edmond J. Safra Foundation, Eli Lilly and Company, GE Healthcare, Genentech, GlaxoSmithKline, Golub Capital, Handl Therapeutics, Insitro, Janssen, Lundbeck, Merck, Meso Scale Diagnostics, Neurocrine Biosciences, Pfizer, Primal, Prevail Therapeutics, Roche, Sanofi Genzyme, Servier, Takeda, Teva, UCB, Vanqua Bio, Verily, Voyager Therapeutics and Yumanity.
Most existing HMMs \citep{jackson2003multistate,sukkar2012disease,guihenneuc2000modeling,wang2014unsupervised,sun2019probabilistic,severson2020personalized,severson2021discovery}, however, assume that each patient follows the same latent state transition dynamics, ignoring the heterogeneity in the disease progression dynamics. The need for heterogeneous disease progression modeling has been highlighted by the works on \emph{disease subtyping}, which is defined as the task of identifying subpopulations of similar patients that can guide treatment decisions for a given individual \citep{saria2015subtyping}. Disease subtyping can be useful especially for complex diseases which are often poorly understood, such as autism \citep{state2012emerging}, cardiovascular disease \citep{de2009heart} and Parkinson’s disease \citep{lewis2005heterogeneity}. The discovery of subtypes can further benefit both scientific discovery (e.g., studying the associations between the shared characteristics of similar patients and potential causes) and clinical decision-making (e.g., reducing the uncertainty in an individual's expected outcome) \citep{saria2015subtyping}. Traditionally, disease subtyping has been carried out by clinicians who may notice the presence of subgroups \citep{barr1999patterns,ewing1921diffuse}. More recently, the growing availability of medical datasets and computational resources has facilitated the rapid adoption of data-driven approaches that offer objective methods to discover underlying disease subtypes \citep{schulam2015clustering,lewis2005heterogeneity}. For instance, \citet{lewis2005heterogeneity} discover the presence of four subtypes of PD; however, they apply k-means clustering, which may provide a limited capability to capture complex patterns in the data. \citet{schulam2015clustering} develop a more sophisticated approach based on a mixture model that is robust against the variability unrelated to disease subtyping; however, their proposed model does not take into account the temporal relations in the clinical visits. In this work, we relax the assumption of HMMs that the disease dynamics, as specified by the transition matrix, is shared among all patients. Instead, we propose the use of hierarchical HMMs for disease progression modeling, particularly mixtures of HMMs (mHMMs) and their variants, which can explicitly model group-level similarities of patients. We are motivated by the applications of mHMMs in other domains where they have been shown to outperform HMMs, such as modeling activity levels in accelerometer data \citep{de2020mixture}, modeling clickstreams of web surfers \citep{ypma2008categorization} and modeling human mobility using geo-tagged social media data \citep{zhang2016gmove}. We summarize our contributions and the organization of the paper below: \paragraph{Contributions:} To our knowledge, this is the first attempt to apply mHMMs to disease progression modeling. In particular, we show that mixtures of input-output HMMs (mIOHMMs) suit disease progression modeling better than IOHMMs, as they can discover multiple disease progression dynamics in addition to taking into account the medication information. Moreover, we develop mixtures of a number of HMM variants, namely mIOHMMs, mixtures of personalized HMMs (mPHMMs) and mixtures of personalized IOHMMs (mPIOHMMs), which have not been explored before by the machine learning community. 
\paragraph{Organization:} We first introduce our notation for HMMs and present three HMM variants with their mixture extensions (Section \ref{label:methodology}). We then discuss the related work (Section \ref{label:related-work}), which is followed by the experiments and the results (Section \ref{label:experiments}). Finally, we summarize our work and discuss possible future research directions (Section \ref{label:summary}). \section{Methodology} \label{label:methodology} This section provides background information on HMMs, and introduces our proposed models and the training procedure we apply. \subsection{Background} Below we introduce our notation for HMMs and describe the three HMM variants proposed by \citet{severson2020personalized}. \subsubsection*{HMM} We consider an HMM with a Gaussian observation model and define it as a tuple $M= (\pi, A, \mu, \Sigma)$, where $\pi$ denotes the initial-state probabilities, $A$ the state-transition probabilities, and $\mu$ and $\Sigma$ the mean and covariance parameters of the Gaussian observation model. The generative model of an HMM is as follows: \begin{gather} x_1^{(i)} \sim \mathcal{C}at(\pi), \qquad x_t^{(i)} | x_{t-1}^{(i)} = l \sim \mathcal{C}at(A_l), \nonumber \\ y_t^{(i)} | x_t^{(i)}=l \sim \mathcal{N}(y_t^{(i)}; \mu_l, \Sigma_l), \end{gather} where $x_t^{(i)}$ and $y_t^{(i)}$ are respectively the hidden state and observation at time $t$ for the $i^{th}$ time-series sequence, and $\mathcal{C}at(\cdot)$ and $\mathcal{N}(\cdot)$ respectively denote the Categorical and Gaussian distributions. Here, $x_{t}^{(i)}$ is conditionally generated given that the hidden state at time $t-1$ for the $i^{th}$ sequence, denoted by $x_{t-1}^{(i)}$, is the $l^{th}$ hidden state. Similarly, $y_{t}^{(i)}$ is generated conditionally on $x_{t}^{(i)}=l$. \subsubsection*{PHMM} We can train an HMM using multiple medical time-series sequences collected from different patients. This approach would rely on the assumption that each patient follows the same state means and covariances, which may not be realistic when individuals deviate from the state means by different amounts. To address this issue, \citet{severson2020personalized} propose a personalized HMM (PHMM) by modifying the observation model of the HMM as follows: \begin{eqnarray} y_t^{(i)} | x_t^{(i)}=l \sim \mathcal{N}(y_t^{(i)}; \mu_l + r^{(i)}, \Sigma_l), \end{eqnarray} where $r^{(i)}$ denotes the individual deviation from the states. \subsubsection*{IOHMM} The observed variables of an HMM are typically the clinical assessments made during hospital visits. However, medication information can also be informative about the health status of a patient. To incorporate such information into the disease progression modeling, \citet{severson2020personalized} introduce the following observation model: \begin{eqnarray} \label{eqn:iohmm-likelihood} y_t^{(i)} | x_t^{(i)}=l &\sim& \mathcal{N}(y_t^{(i)}; \mu_l +v_l d_t^{(i)}, \Sigma_l), \end{eqnarray} where $d_t^{(i)}$ is the observed medication data at time $t$ for the $i^{th}$ patient and $v_l$ denotes the state medication effects. The proposed model resembles input-output HMMs \cite{bengio1994input}, except that the hidden states are not conditioned on the input variables (which are used here to incorporate the medication data), as the medication is assumed to have no disease-modifying impact. This assumption is valid for diseases such as PD, where there is no cure but only treatments that help reduce the symptoms. 
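To make the emission models concrete, the following minimal NumPy sketch samples one sequence from the IOHMM generative process above; the state count, feature dimension, medication inputs and parameter values are illustrative placeholders, not the settings used in our experiments.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
L, D, T = 3, 2, 20                        # states, features, visits (toy)

pi  = np.full(L, 1.0 / L)                 # initial-state probabilities
A   = 0.85 * np.eye(L) + 0.05             # sticky transitions (rows sum to 1)
mu  = rng.normal(size=(L, D))             # state means mu_l
v   = rng.normal(scale=0.1, size=(L, D))  # state medication effects v_l
sig = 0.1 * np.ones((L, D))               # diagonal standard deviations

d = rng.binomial(1, 0.5, size=T).astype(float)  # toy medication input d_t

x = np.empty(T, dtype=int)
y = np.empty((T, D))
x[0] = rng.choice(L, p=pi)
for t in range(T):
    if t > 0:
        x[t] = rng.choice(L, p=A[x[t - 1]])
    # IOHMM emission: the state mean is shifted by the medication effect
    y[t] = rng.normal(mu[x[t]] + v[x[t]] * d[t], sig[x[t]])
\end{verbatim}
Setting \texttt{v} to zero recovers the plain HMM emission, and adding a per-patient offset to \texttt{mu} gives the PHMM variant.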
\subsubsection*{PIOHMM} Finally, combining PHMM and IOHMM provides a personalized model that also takes the medications into account: \begin{eqnarray} y_t^{(i)} | x_t^{(i)}=l &\sim& \mathcal{N}(y_t^{(i)}; \mu_l + r^{(i)} + (v_l + m^{(i)}) d_t^{(i)}, \Sigma_l), \end{eqnarray} where $m^{(i)}$ denotes the personalized medication effects. \subsection{The Proposed Models} Below we follow a general recipe to construct hierarchical mixture models. We first extend the HMM and then its three variants to their mixture counterparts. For simplicity, we construct the mixture version of an HMM variant (e.g., mPHMMs) by concatenating the parameters of the component models (e.g., PHMMs); however, it would also be possible to apply alternative schemes, e.g., see \citet{smyth1996clustering} for a hierarchical clustering-based approach. \subsubsection*{mHMMs} We define an mHMM as a set $M= \{M_1, M_2, \dots, M_K\}$ where $M_k = (\pi_k, A_k, \mu_k, \Sigma_k)$ is the $k^{th}$ mixture component. The generative model is as follows: \begin{align} z^{(i)} &\sim \mathcal{C}at(\alpha), \nonumber \\ x_1^{(i)} | z^{(i)}=k &\sim \mathcal{C}at(\pi_k), \nonumber \\ x_t^{(i)} | x_{t-1}^{(i)} = l, z^{(i)}=k &\sim \mathcal{C}at(A_{k,l}), \nonumber \\ y_t^{(i)} | x_t^{(i)}=l, z^{(i)}=k &\sim \mathcal{N}(y_t^{(i)}; \mu_{k,l}, \Sigma_{k,l}), \end{align} where $z^{(i)}$ denotes the mixture component that the $i^{th}$ time-series sequence belongs to, and $x_t^{(i)}$ and $y_t^{(i)}$ are respectively the corresponding hidden state and observation at time $t$. Note that when the cardinality of $z$ is 1, the model reduces to the standard HMM. Fig.\ \ref{fig:mHMMs-graphical} presents a graphical representation of mHMMs. mHMMs assume that each time-series sequence belongs to one of the mixture components. This construction allows us to cluster similar sequences so that each cluster is represented using different parameter values. As we have mentioned earlier, training a single HMM for all sequences may not be expressive enough. On the other hand, training a separate HMM for each sequence can be challenging due to the sparsity of the data and the computational cost. mHMMs overcome these problems by combining a number of HMMs that is greater than 1 and smaller than the number of sequences. \begin{figure}[htb!] \begin{center} \includegraphics[width=0.45\textwidth]{figures/mHMM-graphical-model.pdf} \caption{A graphical representation of mHMMs.} \label{fig:mHMMs-graphical} \end{center} \end{figure} \subsubsection*{mPHMMs} Similarly to mHMMs, we obtain the mixture versions of the HMM variants. For example, we modify the observation model of PHMM to obtain its mixture version as follows: \begin{eqnarray} y_t^{(i)} | x_t^{(i)}=l, z^{(i)}=k \sim \mathcal{N}(y_t^{(i)}; \mu_{k,l} + r^{(i)}, \Sigma_{k,l}). \end{eqnarray} \subsubsection*{mIOHMMs} We obtain mIOHMMs using the observation model given below: \begin{eqnarray} \label{eqn:miohmm-likelihood} y_t^{(i)} | x_t^{(i)}=l, z^{(i)}=k \sim \mathcal{N}(y_t^{(i)}; \mu_{k,l} + v_{k,l} d_t^{(i)}, \Sigma_{k,l}). \end{eqnarray} \subsubsection*{mPIOHMMs} Finally, mPIOHMMs have the following observation model: \begin{eqnarray} y_t^{(i)} \sim \mathcal{N}(y_t^{(i)}; \mu_{k,l} + r^{(i)} + (v_{k,l} + m^{(i)}) d_t^{(i)}, \Sigma_{k,l}), \end{eqnarray} where $l=x_t^{(i)}$ and $k=z^{(i)}$. 
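Operationally, an mHMM can be viewed as one large HMM whose transition matrix is block-diagonal, so that a sequence can never leave the component it starts in; this is the structure exploited by the training procedure described next. A minimal sketch of this construction, with illustrative sizes and values:
\begin{verbatim}
import numpy as np
from scipy.linalg import block_diag

L, K = 2, 2                                # states per component, components
alpha = np.array([0.5, 0.5])               # mixture weights
A_k = [np.array([[0.8, 0.2], [0.2, 0.8]]),
       np.array([[0.2, 0.8], [0.8, 0.2]])]
pi_k = [np.full(L, 1.0 / L) for _ in range(K)]

# Equivalent single HMM over K*L joint states: the block-diagonal
# transition matrix forbids switching between components.
A_big = block_diag(*A_k)
pi_big = np.concatenate([alpha[k] * pi_k[k] for k in range(K)])

# The cluster label of a decoded (e.g., Viterbi) joint-state path is
# simply the block index of the visited states.
path = np.array([2, 3, 2, 2, 3])            # toy path over joint states
cluster = int(np.unique(path // L).item())  # -> 1, i.e., second component
\end{verbatim}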
\subsection{The Training of the Models} We follow the training procedure proposed by \citet{severson2020personalized}, where variational inference is used to approximate the posterior distributions over the latent variables $x$, $m$ and $r$ as follows: \begin{align} q(x,m,r|y, \lambda) &= \prod_{i=1}^N q(m^{(i)}|\lambda) q(r^{(i)}|\lambda) q(x^{(i)}| y^{(i)}, m^{(i)}, r^{(i)}), \nonumber \\ &= \prod_{i=1}^N q(m^{(i)}|\lambda) q(r^{(i)}|\lambda) \nonumber \\ & \qquad \qquad \prod_{t=2}^{T_i} q(x^{(i)}_t|x^{(i)}_{t-1},y^{(i)}_t, m^{(i)}, r^{(i)}), \end{align} where $\lambda$ denotes the variational free parameters. The corresponding evidence lower bound (ELBO) is maximized using coordinate ascent, alternating between the updates for the variational parameters $\lambda$ and the model parameters $\theta$. Please see \citet{severson2020personalized} for the details of the training algorithm. Note that we simplify the inference by not explicitly inferring the latent variables $z$. Instead, we obtain the cluster membership of each sequence based on its state trajectory estimated via the Viterbi algorithm, thanks to the block-diagonal structure of the transition matrices. However, it would be possible to explicitly infer the variables $z$ by introducing the corresponding variational distribution $q(z_i | \lambda_{z_i})$. \section{Related Work} \label{label:related-work} The most common approach to disease progression modeling has been the use of HMMs. For example, \citet{guihenneuc2000modeling} employ an HMM with discrete observations for modeling the progression of Acquired Immune Deficiency Syndrome (AIDS). \citet{sukkar2012disease} apply the same model to Alzheimer's disease. \citet{wang2014unsupervised} introduce additional hidden variables to incorporate the \emph{comorbidities} of a disease into the transition dynamics. Note that comorbidities are defined as syndromes co-occurring with the target disease, e.g., hypertension is a common comorbidity of diabetes. Other applications of HMMs to disease progression include the work on Huntington’s disease \cite{sun2019probabilistic} and abdominal aortic aneurysm \cite{jackson2003multistate}. Lastly, standard HMMs have been modified for personalized disease progression modeling. \citet{altman2007mixed} introduce random effects to better capture individual deviations from the states. \citet{severson2020personalized,severson2021discovery} propose a model that is both personalized and takes medication information into account. An alternative approach to personalizing disease progression models is through Gaussian processes (GPs). \citet{peterson2017personalized} propose a GP model personalized based on each patient’s previous visits. \citet{lorenzi2019probabilistic} combine a GP with a set of random effect variables, where the former is used to model the progression dynamics shared among patients and the latter is used to represent their individual differences. \citet{schulam2015framework} propose a more general framework based on a hierarchical GP model with population, subpopulation and individual components, which has been applied to the measurements of a single biomarker. \citet{futoma2016predicting} later generalize this model to the case of multiple biomarkers. 
Another common approach to disease progression modeling has been the use of deep learning, especially when interpretability is not a major concern and a large amount of clinical data is available \citep{che2018hierarchical,eulenberg2017reconstructing,pham2017predicting,alaa2019attentive,lee2020temporal,chen2022clustering}. Among these methods, the most relevant works to ours are the approaches proposed by \citet{lee2020temporal} and \citet{chen2022clustering}, which can identify ``similar'' patients via time-series clustering. Perhaps the closest related works are the studies on disease subtyping, particularly those focusing on Parkinson's Disease (PD). \citet{lewis2005heterogeneity} discover the presence of four subtypes of PD by applying k-means clustering. \citet{schulam2015clustering} develop a mixture model that is robust against the variability unrelated to disease subtyping. Neither of these approaches, however, takes into account the temporal relations in the clinical visits. Finally, mHMMs have been shown to outperform HMMs in other domains such as modeling activity levels in accelerometer data \citep{de2020mixture}, modeling clickstreams of web surfers \citep{ypma2008categorization} and modeling human mobility using geo-tagged social media data \citep{zhang2016gmove}. We also note a couple of works on the training of mHMMs, such as the hierarchical clustering-based approach proposed by \citet{smyth1996clustering} and the spectral-learning based training algorithm proposed by \citet{subakan2014spectral}. \section{Experiments} \label{label:experiments} We present two sets of experiments. The goal of the first experiment is to demonstrate the ability of mPHMMs to simultaneously learn personalized state effects and multiple disease progression dynamics using synthetically generated data, for which we know the true disease progression dynamics. Then, we show that mIOHMMs provide a better `fit' of a real-world dataset than IOHMM by discovering multiple disease progression dynamics. The code to reproduce the experiments is publicly available at \url{https://github.com/tahaceritli/mIOHMM}. \begin{figure*}[htb!] \begin{center} \includegraphics[width=0.9\textwidth]{figures/x_hats.png} \caption{A comparison of the models for simulated data based on the original study by \citet{severson2020personalized}. The three rows correspond to different pairs of models being compared. The standard HMM incorrectly assigns states and compensates for the personalization with large variances, as shown in the first row. We observe the same phenomenon in the middle row with mHMMs, although the variance is lower than that of the HMM, as the mixture components provide a richer representation of the state-means. In the bottom row, PHMM and mPHMM overlap, showing that our model can still handle individual variations in the data.} \label{fig:synt-fit-plot} \end{center} \end{figure*} \subsection{Synthetic Data} Combining the settings used by \citet{severson2020personalized} and \citet{smyth1996clustering}, we build a 2-component mPHMM with 2 latent states per component. The state transition matrices of the two PHMM components are given below: \begin{equation*} A_1 = \begin{bmatrix} 0.8 & 0.2 \\ 0.2 & 0.8 \end{bmatrix}, A_2 = \begin{bmatrix} 0.2 & 0.8 \\ 0.8 & 0.2 \end{bmatrix}, \end{equation*} where $A_k$ denotes the state transition matrix of the $k^{th}$ PHMM. 
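Before the emission model is specified in the next paragraph, the following sketch illustrates how differently the two regimes switch; the sequence length and sample count are arbitrary choices for this illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
A1 = np.array([[0.8, 0.2], [0.2, 0.8]])   # persistent regime
A2 = np.array([[0.2, 0.8], [0.8, 0.2]])   # rapidly switching regime

def sample_path(A, T=30):
    x = [rng.integers(2)]                 # uniform initial state
    for _ in range(T - 1):
        x.append(rng.choice(2, p=A[x[-1]]))
    return np.array(x)

for A in (A1, A2):
    paths = np.stack([sample_path(A) for _ in range(100)])
    print(np.mean(paths[:, 1:] != paths[:, :-1]))
# empirical switch rates: ~0.2 under A1 and ~0.8 under A2,
# matching the off-diagonal transition probabilities
\end{verbatim}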
The observation model is built using Gaussian densities with the means $\mu_1 = \mu_2 = \begin{bmatrix} 0 \\ 2 \end{bmatrix}$ and variances $\sigma_1^2 = \sigma_2^2 = \begin{bmatrix} 0.1 \\ 0.1 \end{bmatrix}$. Note that the state means and variances are the same for each PHMM whereas the transition dynamics are different, i.e., the transitions between the latent states are less likely to occur in the first PHMM than they are in the second PHMM. The initial state probabilities are assumed to be uniformly distributed. We use the noisy observation model $\hat{y}_t^{(i)} \,|\, x_t^{(i)} = l \sim \mathcal{N}(\mu_l + r^{(i)}, \Sigma_T)$ where $\Sigma_T$ is specified via a squared exponential kernel $\kappa(t, t') = \sigma^2 \exp\big(\!-(t-t')^2/(2\ell^2)\big)$ with the lengthscale $\ell$ and the scale $\sigma$ set to 1 and 0.1, respectively. Lastly, the personalized state offset $r^{(i)}$ is sampled uniformly from $[-b,b]$ with $b=1$ for each sequence. Fixing the dimensionality of the data to 1, we generate 200 sequences of length 30 using this model. The training of mPHMM yields the parameter estimates given below: \begin{align*} \hat{A_1}= \begin{bmatrix} 0.80 & 0.20 \\ 0.19 & 0.81 \end{bmatrix}, \hspace{.5cm} \hat{\mu_1} = \begin{bmatrix} 0.11 \\ 2.10 \end{bmatrix}, \hspace{.5cm} \hat{\sigma^2_1} = \begin{bmatrix} 0.10 \\ 0.11 \end{bmatrix}, \\ \hat{A_2} = \begin{bmatrix} 0.21 & 0.79 \\ 0.80 & 0.20 \end{bmatrix}, \hspace{.5cm} \hat{\mu_2} = \begin{bmatrix} 0.04 \\ 2.05 \end{bmatrix}, \hspace{.5cm} \hat{\sigma^2_2} = \begin{bmatrix} 0.10 \\ 0.10 \end{bmatrix}. \end{align*} On the other hand, we obtain the following parameter estimates using PHMM: $A = \begin{bmatrix} 0.53 & 0.47 \\ 0.46 & 0.54 \end{bmatrix}$, $\mu = \begin{bmatrix} 0.05 & 2.05 \end{bmatrix}$ and $\sigma = \begin{bmatrix} 0.10 & 0.11 \end{bmatrix}$, which indicates that PHMM cannot distinguish the heterogeneous state-transition dynamics. Note that we could have adapted PHMM to this example by using 4 latent states; however, the distinction between the states would not be clear as the block-diagonal structure is not introduced in PHMM (see the additional experimental results in Appendix \ref{appendix:additional-experimental-results}). Finally, we demonstrate that our model retains the personalization capabilities of the original PHMM discussed in \citet{severson2020personalized}. Fig.\ \ref{fig:synt-fit-plot} presents a number of sequences and the corresponding estimates obtained using HMM, PHMM, mHMM and mPHMM. The figure indicates that mPHMM performs similarly to PHMM in fitting the data. However, mPHMM has the advantage over PHMM of discovering the heterogeneous transition matrices, as discussed above. \subsection{Real Data} \subsubsection*{Data} Following the experimental setup in \citet{severson2020personalized}, we use the Parkinson Progression Marker Initiative (PPMI) dataset \cite{marek2011parkinson} for the real data experiments. PPMI is a longitudinal dataset collected from 423 PD patients, including clinical, imaging and biospecimen information. We focus on the clinical assessments measured via the Movement Disorder Society Unified Parkinson’s Disease Rating Scale (MDS-UPDRS) \cite{goetz2008movement}. The MDS-UPDRS consists of a combination of patient-reported measures and physician-assessed measures: (i) non-motor experiences of daily living, (ii) motor experiences of daily living, (iii) motor examination and (iv) motor complications. Each item on the scale is rated from 0 (normal) to 4 (severe). We do not use the motor complications, obtaining 59 features for the observations. 
As the medication data, we use the levodopa equivalent daily dose (LEDD) \cite{tomlinson2010systematic}, which is provided in the PPMI dataset. \subsubsection*{Metrics} To compare the models, we use three information criteria: the Akaike Information Criterion (AIC, \citealt{akaike1998information}), the Bayesian Information Criterion (BIC, \citealt{schwarz1978estimating}) and the Integrated Completed Likelihood (ICL, \citealt{biernacki2000assessing}), which are defined below: \begin{eqnarray*} \mathrm{AIC} &=& - 2\ell + 2k \\ \mathrm{BIC} &=& - 2\ell + k \log N \\ \mathrm{ICL} &=& - 2\hat{\ell} + k \log N, \end{eqnarray*} where $\ell$ is the log-likelihood of the training data, $k$ is the number of free parameters, $N$ is the number of training data instances, and $\hat{\ell}$ is the log-likelihood of the training data under the most likely trajectory. Here, $k$ is calculated as $L^2 + 3LD - 1$, where $D$ is the dimension of the observations and $L$ is the total number of hidden states aggregated over the mixture components, as we use diagonal covariance matrices. Additionally, the log-likelihoods are calculated based on Equations \ref{eqn:iohmm-likelihood} and \ref{eqn:miohmm-likelihood}. \subsubsection*{Model} We compare IOHMM and a number of mIOHMMs with a varying number of components (i.e., $K \in \{2,3,4,5\}$). Note that these models are not personalized, meaning that they are equivalent to PIOHMM with the personalized effect variables fixed to zero, i.e., $r^{(i)}=0$ and $m^{(i)}=0$. In this work, we evaluate the impact of using mIOHMMs over IOHMM. One could similarly apply model selection to PIOHMM without fixing the personalized effect variables to zero; however, in our experience, this is computationally expensive and more efficient algorithms need to be developed, which is out of the scope of this work. Additionally, we fix the number of hidden states to 8 following the setting in \citet{severson2020personalized}, and use diagonal covariance matrices. \subsubsection*{Results} Table \ref{tab:all-methods-performance} presents the values of the information criteria obtained using the models, which indicate that mIOHMMs are favoured over IOHMM. As per the table, AIC and BIC select four components whereas ICL selects two components. This result is not surprising as AIC and BIC tend to be overoptimistic about the model size \cite{biernacki2000assessing}. \begin{table}[h] \centering \caption{Performance of IOHMM ($K=1$) and mIOHMMs ($K \geq 2$) in terms of AIC, BIC and ICL (lower is better).} \begin{tabular}{lccc} \toprule K & AIC & BIC & ICL \\ \midrule 1 & -5.5370e+07 & -5.5365e+07 & -5.4256e+07 \\ 2 & -5.5532e+07 & -5.5520e+07 & \textbf{-5.5330e+07} \\ 3 & -5.5540e+07 & -5.5521e+07 & -5.5246e+07 \\ 4 & \textbf{-5.5567e+07} & \textbf{-5.5542e+07} & -5.2234e+07 \\ 5 & -5.5536e+07 & -5.5503e+07 & -5.5250e+07 \\ \bottomrule \end{tabular} \label{tab:all-methods-performance} \end{table} Following the ICL criterion, we compare and interpret the parameter estimates obtained using mIOHMMs with 2 components and IOHMM. In addition to the ICL criterion, which reflects the overall performance of the models, we report a measure of performance per patient in Appendix Fig.\ \ref{fig:appendix:ppmi-test-diff}, which indicates that mIOHMM leads to a higher likelihood per patient than IOHMM (on average). 
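For reference, the computation of the three criteria can be sketched as follows, assuming that the training log-likelihoods (exact and Viterbi-based) are available for each candidate model; all names and values are placeholders.
\begin{verbatim}
import numpy as np

def criteria(loglik, loglik_viterbi, L, D, N):
    # AIC/BIC/ICL for a model with L total hidden states,
    # D-dimensional observations (diagonal covariances) and
    # N training instances; k as given in the text above.
    k = L**2 + 3 * L * D - 1
    aic = -2.0 * loglik + 2.0 * k
    bic = -2.0 * loglik + k * np.log(N)
    icl = -2.0 * loglik_viterbi + k * np.log(N)
    return aic, bic, icl

# e.g., with 8 hidden states per component and D = 59 features:
# scores = {K: criteria(ll[K], ll_hat[K], L=8 * K, D=59, N=N)
#           for K in range(1, 6)}
# and the selected K minimises the criterion of interest.
\end{verbatim}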
Next, we inspect the initial-state probabilities, state-transition probabilities, state-means and medication-means. \begin{figure*}[t!] \centering \subfigure{\includegraphics[width=.95\textwidth]{figures/state-summaries-combined.pdf}} \caption{A summary of the state and medication means obtained using IOHMM and the 2-component mIOHMM.} \label{fig:mean-summaries} \end{figure*} Note that the two clusters obtained using mIOHMMs contain 105 and 227 patients, respectively. Table \ref{tab:summary-patients-char} presents a summary of the age and sex distribution for each cluster of patients, which indicates that the clusters are not picking up on simple subject demographics. \begin{table}[h] \centering \caption{A summary of the patients' characteristics.} \begin{tabular}{llcrr} \toprule & & Overall & 1st cluster & 2nd cluster \\ \midrule Age & & 61.6 (9.8) & 60.6 (10.1) & 62.1 (9.6)\\ \multirow{2}{*}{Sex} & Female & 217 (65\%) & 76 (72\%) & 141 (62\%)\\ & Male & 115 (35\%) & 29 (28\%) & 86 (38\%)\\ \bottomrule \end{tabular} \label{tab:summary-patients-char} \end{table} We have a total of 59 features; the complete state and medication means are therefore reported in Appendix \ref{appendix:additional-experimental-results}. Here, we present their summaries, which are calculated based on the primary clinical symptoms used for the diagnosis of PD, as done in \citet{severson2020personalized}. Fig.\ \ref{fig:mean-summaries} presents the average state and medication means for each hidden state based on the tremor, bradykinesia, rigidity and postural instability/gait (PI/G) related features. The relevant features are selected based on the MDS-UPDRS as follows: tremor, 2.10, 3.15-3.18; postural instability/gait, 2.12-2.13, 3.10-3.12; bradykinesia, 3.4-3.8, 3.14; and rigidity, 3.3 (see \citet{stebbins2013identify} for the details). We first discuss the initial-state probabilities. Recall that the state-transitions are allowed only in the forward direction. Therefore, we expect the most likely initial-states to represent a patient's health condition at enrollment, which is often mild. This is indeed the case for both IOHMM and each mixture component of mIOHMMs, where the most likely initial-states have mild symptoms. For instance, the state 2 of IOHMM has the highest initial-state probability and the lowest total MDS-UPDRS score. For the first and second mixture components of mIOHMMs, these are the states 1 and 5, respectively. Note that collapsing the scores of the individual symptoms into a single total score is not recommended \cite{goetz2008movement}. Therefore, we also consider the score per symptom and discuss the characteristics of the states based on the intensities of the individual scores. One common state characteristic is the co-occurring severity of the PI/G, bradykinesia and rigidity symptoms. For example, the state 6 of IOHMM, the state 2 of the first mixture component of mIOHMMs and the state 6 of the second mixture component of mIOHMMs have severe PI/G, bradykinesia and rigidity issues but no tremor issues. However, these states differ in terms of the level of severity, e.g., the state with the least severe symptoms among these is the state 2 of the first mixture component of mIOHMMs. Another state characteristic we observe is the co-occurring severity of the bradykinesia and rigidity symptoms, seen in the states 3 and 8 of IOHMM, and the states 2 and 8 of the second mixture component of mIOHMMs. We characterize the states based on the subtype methodology proposed by \citet{stebbins2013identify}, where each state is labeled with one of the following subtypes: (i) tremor dominant, (ii) PI/G dominant and (iii) indeterminate. 
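The subtype labels reported next can be reproduced from the state means along the lines of the following sketch; the feature-index lists and the 1.15/0.90 ratio cut-offs are our reading of the classification of \citet{stebbins2013identify} and should be checked against that reference.
\begin{verbatim}
import numpy as np

def subtype(state_mean, tremor_idx, pig_idx, hi=1.15, lo=0.90):
    # Label a hidden state as tremor dominant, PI/G dominant or
    # indeterminate from the ratio of its mean tremor score to
    # its mean PI/G score (assumed cut-offs: 1.15 and 0.90).
    ratio = np.mean(state_mean[tremor_idx]) / np.mean(state_mean[pig_idx])
    if ratio >= hi:
        return "tremor dominant"
    if ratio <= lo:
        return "PI/G dominant"
    return "indeterminate"

# labels = [subtype(mu_l, tremor_idx, pig_idx) for mu_l in state_means]
\end{verbatim}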
For IOHMM, the states 1, 2, 4, 5 and 7 are tremor dominant, the state 6 is PI/G dominant, and the states 3 and 8 are indeterminate. For the first mixture component of mIOHMMs, the states 1, 3, 5, 7 and 8 are tremor dominant, the states 2 and 4 are PI/G dominant and the state 6 is indeterminate. For the second mixture component of mIOHMMs, the states 2, 6 and 7 are PI/G dominant and the remaining states are tremor dominant. These observations conform with the findings of \citet{severson2020personalized} in that the number of tremor dominant states is higher than the number of PI/G dominant states. \begin{figure*}[t!] \begin{center} \includegraphics[width=.7\textwidth]{figures/data_states_3_2nd_patno_241.pdf} \caption{The observed data of a deteriorating PD patient (female, 57 years old) (Top) and the corresponding state-trajectory (Bottom). The patient is clustered into the second mixture component of mIOHMMs. The subject appears to deteriorate over time, as denoted by visiting the states 1, 4 and 8---indicating increasing means in the bradykinesia- and rigidity-based features. Notice that a data column is entirely missing when the patient misses a hospital visit, whereas some features are non-missing but still zero-valued because a 0 rating has been given for the corresponding symptom.} \label{fig:data-state-traj} \end{center} \end{figure*} In addition to the state-means, each state is associated with a medication variable. When a patient is on medication, the symptoms are modeled using the state means and the state medication variables. This can help to distinguish states with similar means but different medication effects. For example, the states 5 and 7 of the first mixture component of mIOHMMs are similar in terms of the state-means; however, the state 5 has a higher medication effect for the PI/G symptom than the state 7. Note that, for a better understanding of the model findings, one should take into account whether a patient is on medication at a hospital visit and, if so, the dose of the medication. We note that the state-transition probabilities favour self-transitions, which may not be surprising as the disease progression occurs slowly between hospital visits. Again, the transitions need to be interpreted considering the medication effects. Finally, we discuss the state-trajectories obtained via the Viterbi algorithm. Fig.\ \ref{fig:data-state-traj} plots the data of a patient who is clustered into the second mIOHMM component, together with the corresponding state-trajectory. We observe that the changes in the regime are successfully captured by the states. For the corresponding feature list and indices, we refer the reader to Appendix Fig.\ \ref{fig:appendix:IOHMM-mIOHMM-full-state-means}. We also visualize the disease progression trajectories for a number of patients in each cluster, together with their overall severity scores, in Appendix Fig.\ \ref{fig:appendix:state-traj}. \section{Summary} \label{label:summary} In this paper, we have applied mixtures of hidden Markov models to disease progression modeling. The proposed models can identify similar groups of patients through time-series clustering and separately represent the progression of each group, unlike hidden Markov models, which assume that a single dynamics is shared among all patients. Our experiments on a real-world dataset have demonstrated the benefits of mixture models over a single hidden Markov model for disease progression modeling. Future work includes the development of efficient training algorithms for mPIOHMMs. 
\section*{Acknowledgements} We would like to thank Kristen A.\ Severson for comments and discussion. Data used in the preparation of this article were obtained from the Parkinson’s Progression Markers Initiative (PPMI) database (\url{www.ppmi-info.org/access-data-specimens/download-data}). For up-to-date information on the study, visit \url{www.ppmi-info.org}. PPMI – a public-private partnership – is funded by the Michael J. Fox Foundation for Parkinson’s Research and funding partners, including 4D Pharma, Abbvie, AcureX Therapeutics, Allergan, Amathus Therapeutics, Aligning Science Across Parkinson’s, Avid Radiopharmaceuticals, Bial Biotech, Biogen, BioLegend, Bristol-Myers Squibb, Calico Life Sciences LLC, Celgene, DaCapo Brainscience, Denali, the Edmond J. Safra Foundation, Eli Lilly and Company, GE Healthcare, Genentech, GlaxoSmithKline, Golub Capital, Handl Therapeutics, Insitro, Janssen, Lundbeck, Merck, Meso Scale Diagnostics, Neurocrine Biosciences, Pfizer, Primal, Prevail Therapeutics, Roche, Sanofi Genzyme, Servier, Takeda, Teva, UCB, Vanqua Bio, Verily, Voyager Therapeutics and Yumanity.
\section{Introduction}\label{sec:introduction} The fundamental theory of the strong interactions is nowadays taken to be quantum chromodynamics (QCD); see, e.g., Refs.~\cite{ChengLi1985,Marshak1993} and other references therein. In the framework of this theory, there is evidence for the existence of a gluon condensate~\cite{ShifmanVainshteinZakharov1978,Narison1996,Rakow2006,AndreevZakharov2007}. The question, then, is how the gluon condensate gravitates and evolves as the Universe expands. Here, a tentative answer is obtained by use of the so-called $q$--theory approach for the gravitational effects of vacuum energy density~\cite{KlinkhamerVolovik2008a,KlinkhamerVolovik2008b,KlinkhamerVolovik2008c,KlinkhamerVolovik2009a}. The outline of this article is as follows. In Sec.~\ref{sec:Gluon-condensate-dynamics-FRW}, an example of a gluon-condensate-induced modification of gravity is presented and the corresponding field equations are derived, which are then reduced for the case of a spatially flat Friedmann--Robertson--Walker universe. In Sec.~\ref{sec:Three-component-universe}, the resulting evolution of a simple three-component model universe is studied both analytically and numerically, in order to establish whether or not a model universe can be obtained that resembles the observed ``accelerating Universe''~\cite{Riess-etal1998,Perlmutter-etal1998}. In Sec.~\ref{sec:Conclusion}, concluding remarks are presented. \section{QCD--scale modified gravity and cosmology} \label{sec:Gluon-condensate-dynamics-FRW} \subsection{Theory: Action and field equations} \label{subsec:Theory} It has been argued~\cite{KlinkhamerVolovik2009a} that, in a de-Sitter universe with Hubble constant $H$, a QCD--scale vacuum energy density $\rho_{V} \sim |H|\,\Lambda_\text{QCD}^3$ could arise from infrared effects of the gluon propagator. Since the de-Sitter universe has Ricci curvature scalar $|R| \sim H^2$ and the particular gluon condensate $q$ has energy scale $q \sim \Lambda_\text{QCD}^4$, one is led to consider the following modified-gravity action ($\hbar=c=1$):\begin{subequations}\label{eq:action-S-f}\begin{eqnarray} S_\text{eff}&=& \int_{\mathbb{R}^4} \,d^4x\, \sqrt{-g}\; \Big[ K\, \widetilde{f}(R,q) +\epsilon(q)+\mathcal{L}_{M}(\psi)\Big]\,, \label{eq:action-S}\\[2mm] \widetilde{f}&\equiv& R + \widetilde{h} \equiv R + \eta\,K^{-1}\,|R|^{1/2}\,|q|^{3/4} \,, \label{eq:action-f} \end{eqnarray} \end{subequations} with gravitational coupling constant $K\equiv(16\pi G)^{-1}>0$, dimensionless coupling constant $\eta > 0$ [standard general relativity has $\eta= 0\,$], energy density $\epsilon(q)$ of the gluon condensate $q(x)$, and matter field $\psi(x)$ [later on, this single matter component will be generalized to $N$ matter components]. The precise definition of the gluon-condensate variable $q(x)$ in the context of QCD has been given in Ref.~\cite{KlinkhamerVolovik2009a}, to which the reader is referred for details. In the following, $q$ is simply assumed to be nonzero and is, in fact, taken to be positive. The relation between the gravitational constant $G$ and Newton's constant $G_{N}$~\cite{Cavendish1798,MohrTaylorNewell2008} will be discussed in Sec.~\ref{subsec:Analytic-results}. Throughout, the conventions of Ref.~\cite{Weinberg1972} are used, in particular, those for the Riemann tensor and the metric signature $(-+++)$. The field equations from \eqref{eq:action-S-f} are of fourth order, and it is, therefore, worthwhile to switch to the scalar-tensor formulation, which has field equations of second order. 
The equivalent Jordan-frame Brans--Dicke theory~\cite{Weinberg1972,BransDicke1961,Will1993,Uzan2003} has action \begin{subequations}\label{eq:BDaction-S-U} \begin{eqnarray} S_\text{eff}^\text{\,(BD)}&=& \int_{\mathbb{R}^4} \,d^4x\, \sqrt{-g}\; \Big[ K\, \Big( \phi\, R - U(\phi,q) \Big) +\epsilon(q)+\mathcal{L}_{M}(\psi)\Big]\,, \label{eq:BDaction-S}\\[2mm] U&\equiv& -(1/4)\, (\eta^2/K^2)\,|q|^{3/2}/(1-\phi) \,, \label{eq:BDaction-U} \end{eqnarray} \end{subequations} in terms of a dimensionless scalar field $\phi$ restricted to values less than $1$ [$\phi$ would be greater than $1$ for the $\eta < 0$ case not considered here]. The $\phi$ dependence of potential \eqref{eq:BDaction-U} allows for the so-called chameleon effect~\cite{KhouryWeltman2004}, which will be briefly discussed at the end of this subsection.\footnote{See also Ref.~\cite{MotaBarrow2004} for chameleon-type effects in a different context and Ref.~\cite{Tamaki_etal2008} for recent analytic and numerical work on the scalar profiles from compact objects, extending the original analysis of Ref.~\cite{KhouryWeltman2004}.} The proof of the classical equivalence of the actions \eqref{eq:action-S-f} and \eqref{eq:BDaction-S-U}, for $\eta \ne 0$ and $q \ne 0$, is not affected by the presence of the $q$--field in the function $\widetilde{f}$ of \eqref{eq:action-f}. See, e.g., Refs.~\cite{Faulkner-etal2007,Brax-etal2008,SotiriouFaraoni2008} for details of the proof, which is straightforward and need not be repeated here. Anyway, the classical equivalence of \eqref{eq:action-S-f} and \eqref{eq:BDaction-S-U} can be verified directly by eliminating $\phi$ from \eqref{eq:BDaction-S}, using its field equation $R=\partial U/\partial\phi$ with $U(\phi)$ given by \eqref{eq:BDaction-U}. At this moment, two remarks may be helpful to place the theory considered in context. First, the rigorous microscopic derivation of the effective action \eqref{eq:action-S-f} remains a major outstanding problem, because only a rough argument has been given in the appendix of Ref.~\cite{KlinkhamerVolovik2009a}, where $\eta$ was called $f$ (see also Ref.~\cite{ThomasUrbanZhitnitsky2009} for a general discussion of the physics involved and \cite{endnote-heuristics} for a heuristic argument). Awaiting this derivation, the main motivation of \eqref{eq:action-S-f} is that it naturally gives the correct order of magnitude for the present vacuum energy density (see Ref.~\cite{KlinkhamerVolovik2009a} and Sec.~\ref{sec:Conclusion}). Just to be crystal clear: the term $\widetilde{h}$ in \eqref{eq:action-f} is, at present, purely hypothetical and the aim of this article is to explore its cosmological consequences, leaving aside its theoretical derivation. Second, the effective action \eqref{eq:action-S-f} is only considered to be valid on cosmological length scales and additional nonstandard terms in $\widetilde{f}(R,q)$ can be expected to be operative at smaller length scales, relevant to solar-system tests and laboratory experiments~\cite{Faulkner-etal2007,Brax-etal2008}. Purely phenomenologically, the $\widetilde{h}$ term in \eqref{eq:action-f} could, for example, be replaced by an extended term \begin{equation}\label{eq:h_ext} \widetilde{h}_\text{ext}= \eta\,\,K^{-1}\,|q|^{9/4}\,|R|^{1/2} \big/\big(|q|^{3/2}+\zeta\, K^2|R|\big)\,, \end{equation} with constants $0<\eta\ll |\zeta| \lesssim 1$. 
This term $\widetilde{h}_\text{ext}$ vanishes as $|R|^{-1/2}$ at large enough curvatures and, for $\eta \sim 10^{-3}$ and $|\zeta| \sim 1$, is consistent with the relevant bound in Ref.~\cite{Brax-etal2008} based on the E\"{o}t--Wash laboratory experiment~\cite{Kapner-etal2006}. Returning to the action \eqref{eq:BDaction-S-U}, the field equations are obtained from the variational principle for variations $\delta g_{\mu\nu}$ of the metric $g_{\mu\nu}$, variations $\delta\phi$ of the Brans--Dicke field $\phi$, and variations $\delta A$ of the microscopic field $A$ responsible for the $q$ condensate (see, in particular, Refs.~\cite{KlinkhamerVolovik2008b,KlinkhamerVolovik2009a}). Specifically, the field equations are \begin{subequations}\label{eq:BDfield-eqs-Gmunu-R-mu-tmp} \begin{eqnarray} R^{\mu\nu}-\frac{1}{2}\,R\,g^{\mu\nu} &=& -\frac{1}{2\phi\,K}\, \Big( T_{M}^{\mu\nu} -\widetilde{\epsilon}\, g^{\mu\nu}\Big) -\frac{1}{2\phi}\,\widetilde{U}\,g^{\mu\nu} -\frac{1}{\phi}\,\Big( \nabla^\mu\nabla^\nu- g^{\mu\nu}\,\Box\Big)\phi\,, \label{eq:BDfield-eqs-Gmunu-tmp}\\[2mm] R&=&\frac{\partial U}{\partial\phi} \,, \label{eq:BDfield-eqs-R-tmp} \\[2mm] \frac{\partial\epsilon}{\partial q}-K\,\frac{\partial U}{\partial q} &=& \mu \,, \label{eq:BDfield-eqs-mu-tmp} \end{eqnarray} \end{subequations} with the covariant derivative $\nabla_\mu$, the invariant d'Alembertian $\Box \equiv \nabla^\nu \nabla_\nu$, the energy-momentum tensor $T_{M}^{\mu\nu}$ of the matter field $\psi$, the integration constant $\mu$, and the effective energy densities \begin{subequations}\label{eq:widetilde-epsilon-U} \begin{eqnarray} \widetilde{\epsilon} &\equiv& \epsilon -q\,\frac{\partial\epsilon}{\partial q}\,, \label{eq:widetilde-epsilon}\\[1mm] \widetilde{U} &\equiv& U -q\,\frac{\partial U}{\partial q}\,. \label{eq:widetilde-U} \end{eqnarray} \end{subequations} Two comments are in order. First, the reason for having the extra term $-q\,\partial\epsilon/\partial q$ in \eqref{eq:widetilde-epsilon} and $-q\,\partial U/\partial q$ in \eqref{eq:widetilde-U} is the fact that the field $q$ is not fundamental but contains, in addition to the microscopic field $A$ mentioned above, the inverse metric $g^{\mu\nu}$ (see Sec. II of Ref.~\cite{KlinkhamerVolovik2009a}). Second, the constant $\mu$ on the right-hand side of \eqref{eq:BDfield-eqs-mu-tmp} can be interpreted, for spacetime-independent $q$ and $dU/dq=0$, as the chemical potential corresponding to the conserved charge $q$ (see, in particular, the detailed discussion in Secs.~II A and B of Ref.~\cite{KlinkhamerVolovik2008a}). For completeness, the generalized Klein--Gordon equation is also given; it is obtained by taking the trace of \eqref{eq:BDfield-eqs-Gmunu-tmp} and using \eqref{eq:BDfield-eqs-R-tmp}: \begin{equation} \Box\, \phi = \frac{1}{6\,K}\,\Big( T_{M} -4\,\widetilde{\epsilon}\Big) +\frac{2}{3}\,\widetilde{U} -\frac{1}{3}\,\phi\,\frac{\partial U}{\partial \phi}\,, \label{eq:BDfield-eqs-Box-eta-tmp} \end{equation} with the matter energy-momentum trace $T_{M}\equiv T_{M}^{\mu\nu}\,g_{\mu\nu}$. 
Eliminating $q\,\partial U/\partial q$ from \eqref{eq:BDfield-eqs-Gmunu-tmp} and \eqref{eq:BDfield-eqs-mu-tmp}, the final field equations are \begin{subequations}\label{eq:BDfield-eqs-Gmunu-R-drhoVdq} \begin{eqnarray} R^{\mu\nu}-\frac{1}{2}\,R\,g^{\mu\nu} &=& -\frac{1}{2\phi\,K}\, \Big( T_{M}^{\mu\nu} -\rho_{V}\, g^{\mu\nu}\Big) -\frac{1}{2\phi}\,U\,g^{\mu\nu} -\frac{1}{\phi}\,\Big( \nabla^\mu\nabla^\nu- g^{\mu\nu}\,\Box\Big)\phi\,, \label{eq:BDfield-eqs-Gmunu}\\[2mm] R&=&\frac{\partial U}{\partial\phi} \,, \label{eq:BDfield-eqs-R} \\[2mm] \frac{\partial\rho_{V}}{\partial q} &=& K\,\frac{\partial U}{\partial q}\,, \label{eq:BDfield-eqs-drhoVdq} \end{eqnarray} \end{subequations} in terms of the gravitating vacuum energy density \begin{equation} \rho_{V}(q)\equiv \epsilon(q) -\mu\, q\,, \label{eq:EinsteinFRW-rhoV} \end{equation} with the integration constant $\mu$. Equally, the generalized Klein--Gordon equation \eqref{eq:BDfield-eqs-Box-eta-tmp} becomes \begin{equation} \Box\, \phi = \frac{1}{6\,K}\,\Big( T_{M} -4\,\rho_{V}\Big) +\frac{2}{3}\,U -\frac{1}{3}\,\phi\,\frac{\partial U}{\partial \phi}\,, \label{eq:BDfield-eqs-Box-eta} \end{equation} where the very last term on the right-hand side, in particular, is relevant to the previously mentioned chameleon effect. With \eqref{eq:BDfield-eqs-R}, this last term of \eqref{eq:BDfield-eqs-Box-eta} becomes $(-R/3)\,\phi$ and corresponds to an effective mass square term for the scalar field, with a mass square of the order of $\rho_{M}/K$ for the case of a pressureless perfect fluid. This is indeed one aspect of the chameleon effect, namely, an effective mass value dependent on the environment~\cite{KhouryWeltman2004}. \subsection{Differential equations for a flat FRW universe} \label{subsec:FRW-equations} For a spatially flat ($k=0$) Friedmann--Robertson--Walker (FRW) universe~\cite{Weinberg1972} with scale factor $a(\dimfultime)$ and matter described by a perfect fluid, the $00$ and $11$ components of the generalized Einstein field equation \eqref{eq:BDfield-eqs-Gmunu} can be combined to give a generalized Friedmann equation. Together with equations obtained directly from \eqref{eq:BDfield-eqs-R} and \eqref{eq:BDfield-eqs-Box-eta}, the relevant equations are then \begin{subequations}\label{eq:field-eqs-FRW-phidot-Hdot-phiddot} \begin{eqnarray} H^2\,\phi &=& \frac{1}{6\,K}\,\rho_\text{tot}-\frac{1}{6}\,U- H\, \dot{\phi}\,, \label{eq:field-eqs-FRW-phidot-second}\\[2mm] \dot{H} &=&-2H^2- \frac{1}{6}\, \frac{\partial U}{\partial \phi}\,, \label{eq:field-eqs-FRW-Hdot}\\[2mm] \ddot{\phi} &=& -3H\,\dot{\phi} +\frac{1}{6\,K}\,\Big( \rho_\text{tot}-3\, P_\text{tot}\Big) -\frac{2}{3}\,U +\frac{1}{3}\,\phi\,\frac{\partial U}{\partial \phi}\,, \label{eq:field-eqs-FRW-phiddot} \end{eqnarray} \end{subequations} with the overdot standing for the derivative with respect to $\dimfultime$ (the somewhat unusual notation $\dimfultime$ is used for the dimensionful cosmic time, in order to reserve the letter $\dimlesstime$ for the dimensionless time later on). The total energy density and pressure are given by \begin{subequations} \begin{equation}\label{eq:rho-total} \rho_\text{tot}\equiv \rho_{V}+\rho_{M}\,,\quad P_\text{tot} \equiv P_{V}+P_{M}\,, \end{equation} for the gravitating vacuum energy density \begin{equation} \rho_{V}(q)= -P_{V}(q) = \epsilon(q) -\mu\, q\,, \label{eq:EinsteinFRW-rhoV-PV} \end{equation} \end{subequations} as discussed in the previous subsection. 
Observe that \eqref{eq:field-eqs-FRW-phidot-second} reproduces the standard Friedmann equation for $U=0$, $\phi=1$, and $K\equiv(16\pi G)^{-1}=(16\pi G_{N})^{-1} \equiv K_{N}$. The last two equations in \eqref{eq:field-eqs-FRW-phidot-Hdot-phiddot} are, respectively, first- and second-order ordinary differential equations (ODEs) for $H$ and $\phi$. Two further ODEs can be obtained as follows. First, multiplying \eqref{eq:BDfield-eqs-drhoVdq} by $\dot{q}$ gives an equation for the time dependence of the vacuum energy density, \begin{subequations} \begin{equation} \dot{\rho}_{V} = K\,\left(\dot{U}-\dot{\phi}\;\frac{\partial U}{\partial \phi}\right)\,, \label{eq:field-eqs-FRW-rhoVdot} \end{equation} which describes the energy exchange between the vacuum and the nonstandard gravitational field ($U\ne 0$). Second, the standard energy conservation of matter gives \begin{equation} \dot{\rho}_{M} = -3H\,\Big(\rho_{M}+P_{M}\Big) = -3H\,\Big(1+w_{M}\Big)\,\rho_{M}\,, \label{eq:matter-energy-conservation} \end{equation} \end{subequations} where the matter equation-of-state (EOS) parameter $w_{M}\equiv P_{M}/\rho_{M}$ has been introduced (henceforth, $w_{M}$ will be assumed to be time independent). Equation \eqref{eq:matter-energy-conservation} implies that, for the theory considered, there is no energy exchange between vacuum and matter (such an energy exchange for a different version of $q$--theory has been studied in Ref.~\cite{Klinkhamer2008}). \subsection{Dimensionless variables and ODEs} \label{subsec:Dimensionless-equations} Now rewrite the cosmological equations in appropriate microscopic units. The gluon condensate $q$ from Refs.~\cite{ShifmanVainshteinZakharov1978,KlinkhamerVolovik2009a} has the dimension of energy density, $[q]=[\epsilon]$, which implies that the corresponding integration constant $\mu$ is dimensionless, $[\mu]=[1]$. The equilibrium value $q_{0}$ of the gluon-condensate variable $q$ is taken to be determined by a laboratory experiment in an environment with negligible spacetime curvature and has the order of magnitude $q_{0}\equiv E_\text{QCD}^4 =\text{O}(10^{9}\,\text{eV}^4)$; see Sec.~\ref{subsec:Exploratory-numerical-results} for further remarks. From this moment on, consider $N$ matter components, labeled by an index $n=1, \ldots , N$. Specifically, the following dimensionless variables $\dimlesstime$, $h$, $f$, $r$, $u$, and $\dimlessscalar$ can be introduced: \begin{subequations}\label{eq:Dimensionless-var} \begin{align} \hspace*{-.8cm} \dimfultime&\equiv \dimlesstime \;K\big/q_{0}^{3/4}\,, &H(\dimfultime)&\equiv h(\dimlesstime)\;q_{0}^{3/4}\big/K\,, \label{eq:Dimensionless1-tau-h} \\[2mm] \hspace*{-.8cm} q(\dimfultime)&\equiv f(\dimlesstime)\; q_{0}\,, &\rho(\dimfultime) &\equiv r(\dimlesstime)\;q_{0}^{3/2}\big/K\,, \label{eq:Dimensionless1-f-rS} \\[2mm] \hspace*{-.8cm} U(\dimfultime)&\equiv u(\dimlesstime)\;q_{0}^{3/2}\big/K^2\,, &\phi(\dimfultime)&\equiv \dimlessscalar(\dimlesstime)\,. \label{eq:Dimensionless1-u-s} \end{align} \end{subequations} Observe that all dimensionless quantities are denoted by lower-case Latin letters. A further rescaling $t=t^\prime/\eta$ and $h=h^\prime\,\eta$ will not be used in the present article, as the effects from the unknown coupling constant $\eta$ are preferred to be kept as explicit as possible. 
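To get a feel for these microscopic units, the following schematic fragment evaluates the time unit $K/q_{0}^{3/4}$ and the dimensionless Hubble parameter $h$ corresponding to the present Hubble rate, assuming $E_\text{QCD} \approx 0.2\;\text{GeV}$, $G = G_{N}$, and $H_0 \approx 70\;\text{km}\,\text{s}^{-1}\,\text{Mpc}^{-1}$:
\begin{verbatim}
import numpy as np

hbar_eVs = 6.582e-16           # hbar in eV s
E_P   = 1.22e28                # Planck energy scale in eV
E_QCD = 2.0e8                  # QCD energy scale in eV (0.2 GeV)

K  = E_P**2 / (16.0 * np.pi)   # gravitational coupling, [energy]^2
q0 = E_QCD**4                  # equilibrium condensate, [energy]^4

tau_unit = (K / q0**0.75) * hbar_eVs   # time unit in seconds, ~2e14 s
print(tau_unit / 3.156e7)              # ~8e6 yr
print(np.sqrt(q0) / K)                 # ~1e-38, cf. the quantity Z below
print(2.3e-18 * tau_unit)              # dimensionless h for H_0, ~6e-4
\end{verbatim}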
It is, then, straightforward to obtain the dimensionless versions of the algebraic equation \eqref{eq:BDfield-eqs-drhoVdq}, the last two ODEs in \eqref{eq:field-eqs-FRW-phidot-Hdot-phiddot}, and the matter conservation equation \eqref{eq:matter-energy-conservation} generalized to $N$ matter components. This gives a closed system of $4+N$ equations for the $4+N$ dimensionless variables $f(\dimlesstime)$, $h(\dimlesstime)$, $\dimlessscalar(\dimlesstime)$, $v(\dimlesstime)$, and $r_{M,n}(\dimlesstime)$. Specifically, this system of equations consists of a single algebraic equation, \begin{eqnarray} \hspace*{-5mm} \frac{\partial r_{V}(f)}{\partial f} &=& \frac{\partial u(s,f)}{\partial f}\,, \label{eq:alg-eq-FRWdim-f} \end{eqnarray} and $3+N$ ODEs, \begin{subequations}\label{eq:4ODEsFRWdim} \begin{eqnarray} \hspace*{-5mm} \dot{h} &=& -2\,h^2 - \frac{1}{6}\,\frac{\partial u}{\partial\dimlessscalar}\,, \label{eq:4ODEsFRWdim-h}\\[2mm] \hspace*{-5mm} \dot{s} &=& v\,, \label{eq:4ODEsFRWdim-s}\\[2mm] \hspace*{-5mm} \dot{v} &=& \frac{1}{6}\,\big(r_\text{tot}-3\, p_\text{tot}\big)-3\,h\,v- \frac{2}{3}\,u + \frac{1}{3}\,\dimlessscalar\,\frac{\partial u}{\partial \dimlessscalar}\,, \label{eq:4ODEsFRWdim-v}\\[2mm] \hspace*{-5mm} \dot{r}_{M,n} &=& -3\,h \,\big(1+w_{M,n}\big)\,\,r_{M,n}\,, \label{eq:4ODEsFRWdim-rM} \end{eqnarray} \end{subequations} where, now, the overdot stands for differentiation with respect to the dimensionless cosmic time $\dimlesstime$ and the dimensionless total energy density and pressure are given by \begin{subequations}\label{eq:rtot-ptot} \begin{eqnarray} \hspace*{-0mm} r_\text{tot} &=& +r_{V}+ \sum_{n=1}^{N}\; r_{M,n}\,, \label{eq:rtot}\\[2mm] \hspace*{-0mm} p_\text{tot} &=& -r_{V}+\sum_{n=1}^{N}\; w_{M,n}\,r_{M,n}\,, \label{eq:ptot} \end{eqnarray} \end{subequations} with matter EOS parameters $w_{M,n}$ still to be specified. The dimensionless vacuum energy density $r_{V}$ appearing in the above equations will be discussed in Sec.~\ref{subsec:Ansatz-rV-solution-f(s)}. The dimensionless potential $u$ has already been defined by \eqref{eq:BDaction-U} and \eqref{eq:Dimensionless1-u-s}, but will be given again in Sec.~\ref{subsec:Ansatz-rV-solution-f(s)}. With the solution of Eqs.~\eqref{eq:alg-eq-FRWdim-f}--\eqref{eq:4ODEsFRWdim} for appropriate boundary conditions, it is possible to verify \emph{a posteriori} the Friedmann-type equation \eqref{eq:field-eqs-FRW-phidot-second} in dimensionless form: \begin{equation} h^2\,\dimlessscalar+h\,v = \big(r_\text{tot} - u\big)\big/ 6 \,, \label{eq:Friedmann-type-eq} \end{equation} which, in general, is guaranteed to hold by the contracted Bianchi identities and energy conservation (cf. Refs.~\cite{Weinberg1972,Klinkhamer2008}). Specifically, if the solution of Eqs.~\eqref{eq:alg-eq-FRWdim-f}--\eqref{eq:4ODEsFRWdim} satisfies \eqref{eq:Friedmann-type-eq} at one particular time, then \eqref{eq:Friedmann-type-eq} is satisfied at all the times considered. The additional constraint \eqref{eq:Friedmann-type-eq} will provide a valuable check on the numerical solution of the equations. \subsection{Ansatz for $\boldsymbol{r_{V}(f)}$ and solution for $\boldsymbol{f(s)}$} \label{subsec:Ansatz-rV-solution-f(s)} The only further input needed for the cosmological Eqs.~\eqref{eq:alg-eq-FRWdim-f}--\eqref{eq:4ODEsFRWdim} is an \emph{Ansatz} for the gravitating vacuum energy density $\rho_{V}(q)$ from \eqref{eq:EinsteinFRW-rhoV} or the corresponding dimensionless quantity $r_{V}$ from \eqref{eq:Dimensionless1-f-rS}. 
In Refs.~\cite{KlinkhamerVolovik2008a,KlinkhamerVolovik2008b,KlinkhamerVolovik2008c,KlinkhamerVolovik2009a}, it was argued that the vacuum variable $q$ of the late Universe is close to its flat-spacetime equilibrium value $q_{0}$ and the quadratic approximation can be used \begin{equation} r_{V} =\gamma\, (1-f)^2\,, \label{eq:rV-Ansatz} \end{equation} with positive constant $\gamma$. From the $r_{V}$ definition in \eqref{eq:Dimensionless1-f-rS}, the constant $\gamma$ in \eqref{eq:rV-Ansatz} can be expected to be of order $Z^{-1}$, with definition \begin{equation} Z \equiv q_{0}^{1/2}\;K^{-1} \sim 16\pi\;\big(E_\text{QCD}/E_\text{Planck}\big)^2 \sim 10^{-38}\,, \label{eq:Z-definition} \end{equation} for the quantum-chromodynamics energy scale $E_\text{QCD} \approx 0.2 \;\text{GeV}$ and the standard gravitational energy scale $E_\text{Planck} \equiv \sqrt{\hbar\, c^5/G_{N}} \approx 1.22 \times 10^{19}\;\text{GeV}$ (having set $G \sim G_{N}$; see Sec.~\ref{subsec:Analytic-results}). According to the discussion in Refs.~\cite{KlinkhamerVolovik2008a,KlinkhamerVolovik2008b,KlinkhamerVolovik2008c,KlinkhamerVolovik2009a}, $f$ can also be expected to be sufficiently close to $1$, in order to reproduce an $r_{V}$ value of order unity or less for the present Universe. For technical reasons, the value $Z=10^{-2}$ is taken in a first numerical study (Sec.~\ref{subsec:Exploratory-numerical-results}). Later, the proper boundary conditions and scaling behavior are considered (Sec.~\ref{subsec:Elementary-scaling-analysis}). The dimensionless scalar potential $u(\dimlessscalar,f)$ from \eqref{eq:BDaction-U} and \eqref{eq:Dimensionless1-u-s} can be written as \begin{equation} u(t) \equiv U\,K^2\,q_{0}^{-3/2} = -(\eta^2/4)\;\frac{f(t)^{3/2}}{1-\dimlessscalar(t)}\,, \label{eq:dimensionless-potential-u} \end{equation} where a relatively small value for $\eta$ appears to be indicated~\cite{KlinkhamerVolovik2009a} by the measured value of the vacuum energy density; see Secs.~\ref{subsec:Analytic-results} and \ref{subsec:Elementary-scaling-analysis} for further discussion on the numerical value of $\eta$. With the specific functions \eqref{eq:rV-Ansatz} and \eqref{eq:dimensionless-potential-u}, Eq.~\eqref{eq:alg-eq-FRWdim-f} is a quadratic in $\sqrt{f}$ and the positive root gives \begin{subequations}\label{eq:fsolution-B-zeta} \begin{eqnarray} \overline{f}_{\pm}(s) &=& \left( \sqrt{1+D(s)^2} \pm D(s) \,\right)^2\,, \label{eq:fsolution}\\[2mm] D(s) &\equiv& \kappa/|1-s| \geq 0\,, \label{eq:C}\\[2mm] \kappa &\equiv& (3/32)\, \eta^2/\gamma \geq 0\,, \label{eq:kappa} \end{eqnarray} \end{subequations} where the minus sign inside the outer parentheses on the right-hand side of \eqref{eq:fsolution} holds for $s<1$ [the plus sign appears for the $s>1$ case not considered here]. Expression \eqref{eq:fsolution} can then be used to eliminate all occurrences of $f$ in the $3+N$ ODEs~\eqref{eq:4ODEsFRWdim} for the remaining $3+N$ variables $h(\dimlesstime)$, $\dimlessscalar(\dimlesstime)$, $v(\dimlesstime)$, and $r_{M,n}(\dimlesstime)$. Referring to the ODEs~\eqref{eq:4ODEsFRWdim} in the following, it will be understood that $f$ has been replaced by $\overline{f}_{-}(s)$ from \eqref{eq:fsolution-B-zeta}. 
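As an illustration of how the resulting closed system can be integrated, the following schematic fragment evolves the ODEs~\eqref{eq:4ODEsFRWdim} for $N=2$ matter components, with the initial value of $h$ fixed by the constraint \eqref{eq:Friedmann-type-eq}; the coupling values and initial data are purely illustrative (in particular, $Z=10^{-2}$ as in the first numerical study mentioned above, not the physical value).
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

eta, Z = 0.1, 1.0e-2            # illustrative coupling and Z value
gamma  = 1.0 / Z                # r_V = gamma (1 - f)^2
kappa  = (3.0 / 32.0) * eta**2 / gamma
w      = np.array([1.0 / 3.0, 0.0])    # EOS parameters w_{M,n}

def fbar(s):                    # root fbar_-(s), valid for s < 1
    D = kappa / abs(1.0 - s)
    return (np.sqrt(1.0 + D**2) - D)**2

def u_us(s):                    # u(s, fbar(s)) and du/ds at fixed f
    u = -0.25 * eta**2 * fbar(s)**1.5 / (1.0 - s)
    return u, u / (1.0 - s)

def rhs(t, y):
    h, s, v, r1, r2 = y
    rV = gamma * (1.0 - fbar(s))**2
    u, us = u_us(s)
    rtot = rV + r1 + r2
    ptot = -rV + w[0] * r1 + w[1] * r2
    return [-2.0 * h**2 - us / 6.0,
            v,
            (rtot - 3.0 * ptot) / 6.0 - 3.0 * h * v
                - 2.0 * u / 3.0 + s * us / 3.0,
            -3.0 * h * (1.0 + w[0]) * r1,
            -3.0 * h * (1.0 + w[1]) * r2]

# Initial data: fix h > 0 from the Friedmann-type constraint
# h^2 s + h v = (r_tot - u)/6 at the starting time.
s0, v0, r10, r20 = 0.9, 0.0, 1.0, 1.0e-2
u0, _ = u_us(s0)
c = (gamma * (1.0 - fbar(s0))**2 + r10 + r20 - u0) / 6.0
h0 = (-v0 + np.sqrt(v0**2 + 4.0 * s0 * c)) / (2.0 * s0)

sol = solve_ivp(rhs, (1.0, 1.0e3), [h0, s0, v0, r10, r20],
                rtol=1.0e-10, atol=1.0e-12)

# a posteriori check of the constraint at the final time:
h, s, v, r1, r2 = sol.y[:, -1]
u, _ = u_us(s)
print(h**2 * s + h * v
      - (gamma * (1.0 - fbar(s))**2 + r1 + r2 - u) / 6.0)
\end{verbatim}
The printed residual of the Friedmann-type constraint should stay at the level of the integration tolerances, in accordance with the remark below \eqref{eq:Friedmann-type-eq}.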
\section{Three-component model universe} \label{sec:Three-component-universe} \vspace*{0mm} \subsection{Preliminaries} \label{subsec:Preliminaries} The modified-gravity theory considered in this article has been presented in Sec.~\ref{subsec:Theory} and the corresponding dynamical equations for a spatially flat FRW universe in Secs.~\ref{subsec:FRW-equations}--\ref{subsec:Ansatz-rV-solution-f(s)}. The specific model studied in this section is a simplified version with only three components labeled $n=0,1,2$: \begin{enumerate} \setcounter{enumi}{-1} \item A gluon condensate [described by the dimensionless variable $f$] with dimensionless energy density $r_{V}(f)$ from \eqref{eq:rV-Ansatz} and constant equation-of-state parameter $w_{V}=-1$, which is taken to give rise to a nonanalytic term in the modified-gravity action \eqref{eq:action-S-f}. \item A perfect fluid of ultrarelativistic matter [e.g., photons] with energy density $r_{M,1}$ and constant EOS parameter $w_{M,1}=1/3$. \item A perfect fluid of nonrelativistic matter [e.g., cold dark matter (CDM) and baryons (B)] with energy density $r_{M,2}$ and constant EOS parameter $w_{M,2}=0$. \end{enumerate} From the scalar-tensor formalism of the gluon-condensate-induced modification of gravity, there is also the auxiliary Brans--Dicke scalar $\dimlessscalar(t)$ to consider, with the dimensionless potential $u(\dimlessscalar,f)$ from \eqref{eq:dimensionless-potential-u}. The relevant ODEs follow from \eqref{eq:4ODEsFRWdim} by letting the matter label run over $n=1,2$. The ideal starting point of the calculations would be some time after the QCD crossover at $T \sim \Lambda_\text{QCD}$ with $r_{M,1} \gg r_{M,2}$. The physical idea is that the expansion of the Universe was standard up till that time and that, then, a type of phase transition occurred with the creation of the gluon condensate. Clearly, the gluon condensate can be expected to start out in a nonequilibrium state, $f \ne 1$ and $s \ne 1$. These issues will be discussed further in Sec.~\ref{subsec:Elementary-scaling-analysis}. At this moment, it is useful to recall the basic equations of a standard flat FRW universe~\cite{Weinberg1972,Weinberg2008} with gravitational coupling constant $G=G_{N}$ or $K=K_{N}$. For two components, a pressureless material fluid labeled $M$ and an unknown fluid labeled $X$, these equations are \begin{subequations}\label{eq:standard-FRW-dota-ddota} \begin{eqnarray} 6\,h^2 &\equiv& 6\,(\dot{a}/a)^2 = r_{M} + r_{X}\,, \label{eq:standard-FRW-dota}\\[2mm] -12\,\ddot{a}/a &=& r_{M} + r_{X} + 3\,p_{M} + 3\,p_{X} = r_{M} + r_{X}\,\big(1 + 3\,w_{X}\big)\,, \label{eq:standard-FRW-ddota} \end{eqnarray} \end{subequations} where $p_{M}$ in \eqref{eq:standard-FRW-ddota} has been set to zero and the EOS parameter $w_{X}\equiv p_{X}/r_{X}$ has been introduced. The standard energy-density parameters are defined as follows: \begin{subequations}\label{eq:standard-OmegaMX-wX} \begin{eqnarray} \Omega_{M} &\equiv& r_{M}/(6\,h^2)\,,\quad \Omega_{X} \equiv r_{X}/(6\,h^2) = 1-\Omega_{M}\,. \label{eq:standard-OmegaMX} \end{eqnarray} In addition, the following combination of observables can be introduced to determine the unknown EOS parameter: \begin{equation} \overline{w}_{X} \equiv -\frac{2}{3}\,\left(\frac{\ddot{a}\,a}{(\dot{a})^2}+\frac{1}{2}\right)\; \frac{1}{1-\Omega_{M}} = w_{X}\,, \label{eq:standard-wX} \end{equation} \end{subequations} where the last equality holds, again, for $p_{M}=0$. 
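In a numerical study, these observables can be evaluated directly from the solution; a minimal helper (the function names are ours):
\begin{verbatim}
def omega_matter(r_m, h):
    # energy-density parameter Omega_M = r_M / (6 h^2)
    return r_m / (6.0 * h**2)

def w_x_bar(a, adot, addot, omega_m):
    # effective EOS parameter of the X component, for p_M = 0
    return -(2.0 / 3.0) * (addot * a / adot**2 + 0.5) / (1.0 - omega_m)
\end{verbatim}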
See, e.g., Refs.~\cite{WellerAlbrecht2002,SahniStarobinsky2006} for details on how to reconstruct the dark-energy equation of state from observations. In order to be specific, take the following fiducial values: \begin{equation} \big\{ \Omega_{M},\, \Omega_{X},\, \overline{w}_{X} \big\}^{\text{standard\;FRW}}_{\text{present}} = \big\{ 0.25,\,0.75,\, -1 \big\}\,, \label{eq:FRW-OmegaXM-wX} \end{equation} which agree more or less with the recent data compiled in Refs.~\cite{Freedman2001,Eisenstein2005,Astier2006,Riess2007,Komatsu2008,Vikhlinin-etal2008}. The standard flat FRW universe with parameters \eqref{eq:FRW-OmegaXM-wX} corresponds, in fact, to the basic $\Lambda$CDM model~\cite{Weinberg2008} with CDM energy density $r_{M}\propto 1/a^3$ (with constant EOS parameter $w_{M}=0$) and time-independent vacuum energy density $l \equiv r_{X}\propto a^0$ (with constant EOS parameter $w_{X}=-1$ and $l$ the dimensionless version of the cosmological constant $\Lambda$). Returning to the modified-gravity theory \eqref{eq:action-S-f}--\eqref{eq:BDaction-S-U}, the same observables $\Omega$ and $\overline{w}_{X}$ can be identified. Specifically, the generalized Friedmann equation \eqref{eq:Friedmann-type-eq} gives \begin{subequations}\label{eq:Omegabar} \begin{eqnarray} \Omega_{X}+\Omega_{M}&=&1\,, \label{eq:Omegabar-XplusM}\\[2mm] \Omega_{X} &\equiv& \Omega_\text{grav}+\Omega_{V}\,, \label{eq:Omegabar-X}\\[2mm] \Omega_\text{grav}&\equiv& 1-s-\dot{s}/h-u/(6h^2)\,, \label{eq:Omegabar-grav}\\[2mm] \Omega_{V} &\equiv& r_{V}/(6h^2)\,, \label{eq:Omegabar-V}\\[2mm] \Omega_{M} &\equiv& r_{M}/(6h^2)\,, \label{eq:Omegabar-M} \end{eqnarray} \end{subequations} where $\Omega_\text{grav}$ is the new ingredient, as it vanishes for the standard theory with $u=0$ and $s=1$. Similarly, the effective EOS parameter of the unknown component $X$ can be extracted from \eqref{eq:4ODEsFRWdim} and \eqref{eq:Friedmann-type-eq} for $p_{M}=0$: \begin{eqnarray}\label{eq:modgrav-wXbar} \overline{w}_{X} &\equiv& -\frac{2}{3}\,\left(\frac{\ddot{a}\,a}{(\dot{a})^2}+\frac{1}{2}\right)\; \frac{1}{1-\Omega_{M}} = -\;\frac{r_{V} - u -4\,h\,\dot{s} -2\,\ddot{s} \phantom{\;(1-s)}} {r_{V} - u -6\,h\,\dot{s} +r_{M}\,(1-s)}\;. \end{eqnarray} The right-hand side of \eqref{eq:modgrav-wXbar} shows that $\overline{w}_{X}$ of the modified-gravity model \eqref{eq:BDaction-S-U} approaches the value $-1$ in the limit of vanishing matter content and constant Brans--Dicke scalar $s$ as $t\to\infty$. \emph{A priori\,}, there is no reason why this approach cannot be from below, so that $1+\overline{w}_{X}$ would be negative for a while (cf. Ref.~\cite{CarrollDeFeliceTrodden2004}). The main goal of this section is to get a quasirealistic model for the ``present universe,'' which is taken to be defined by a value of approximately $0.25$ for the matter energy-density parameter $\Omega_{M}$. This can only be done with a numerical solution of the ODEs, but, first, analytic results relevant to the asymptotic behavior at early and late times are discussed. \subsection{Analytic results} \label{subsec:Analytic-results} It is not difficult to get two types of analytic solutions of the combined ODEs~\eqref{eq:4ODEsFRWdim} and \eqref{eq:Friedmann-type-eq} for the specific functions \eqref{eq:rV-Ansatz} and \eqref{eq:dimensionless-potential-u}, having used solution \eqref{eq:fsolution-B-zeta} to eliminate $f$ in favor of $s$. The first corresponds to a Friedmann universe with relativistic matter and without vacuum energy.
The second corresponds to a de-Sitter-type universe without matter and with an effective form of vacuum energy. For $\eta=0$, the first analytic solution of \eqref{eq:4ODEsFRWdim}--\eqref{eq:fsolution-B-zeta} has only relativistic matter ($w_{M,1}=1/3$) contributing to the expansion. Specifically, this Friedmann solution (labeled ``$\text{F}$'') is given by \begin{subequations}\label{eq:Fsolution} \begin{eqnarray} h^\text{(F)} &=& (1/2)\;\dimlesstime^{-1}\,,\quad \dimlessscalar^\text{(F)}=f^\text{(F)}=1\,, \\[2mm] r_{M,1}^\text{(F)} &=& (3/2)\;\dimlesstime^{-2}\,,\quad r_{M,2}^\text{(F)} = 0\,. \end{eqnarray} \end{subequations} Remark that standard general relativity [which has, from the start, the action equal to \eqref{eq:action-S-f} for $\eta=0$ and $G=G_{N}$] allows for arbitrary values $r_{M,1}(1)$ and $r_{M,2}(1)$ at reference time $\dimlesstime=1$. For $\eta > 0$, the second set of analytic solutions of \eqref{eq:4ODEsFRWdim}--\eqref{eq:fsolution-B-zeta} has only vacuum energy contributing to the expansion, together with the effects of the gluon-condensate-induced modification of gravity ($\overline{w}_{X}=-1$). This type of solution has constant (time-independent) variables $h>0$ and $s \in (0,\,1)$, with $f$ given by \eqref{eq:fsolution}. From \eqref{eq:4ODEsFRWdim-h} and \eqref{eq:4ODEsFRWdim-v}, using \eqref{eq:dimensionless-potential-u}, a cubic in $s$ is obtained, which needs to be discussed first. Specifically, the cubic in $x\equiv 1-s$ reads \begin{equation}\label{eq:cubic} 9\,x^3 - 6\, x^2 + \big(1 + 9\,\kappa^2\big)\,x -6\, \kappa^2 =0\,, \end{equation} with parameter $\kappa$ defined by \eqref{eq:kappa}. This cubic has three distinct real solutions for $0<\kappa^2< (5\, \sqrt{5}-11)/18 \approx \big(0.100094\big)^2$. Two of these solutions (with $2/3<s<1$) give stationary de-Sitter-type solutions of the ODEs~\eqref{eq:4ODEsFRWdim}--\eqref{eq:fsolution-B-zeta}. These two roots can be written in manifestly real form by use of the Chebyshev cube root \begin{subequations}\begin{eqnarray} C_{1/3}(t)\,\Big|_{|t|<2} &\equiv& 2 \cos\big[(1/3) \arccos(t/2)\big]\,,\\ C_{1/3}(0) &\equiv& \sqrt{3}\,. \end{eqnarray}\end{subequations} Defining the auxiliary parameters \begin{subequations}\label{eq:parameters-p-q} \begin{eqnarray} p &\equiv& (1/3)\,\big(1/27-\kappa^2 \big)\,,\\[2mm] q &\equiv& (2/9)\,\big(2\,\kappa^2-1/81 \big)\,, \end{eqnarray} \end{subequations} the relevant roots of \eqref{eq:cubic} are \begin{subequations}\label{eq:sroots} \begin{eqnarray} s_\text{high} &=& 7/9 +\sqrt{p}\; \;C_{1/3}\big(\! -q\,p^{-3/2}\big)\,, \\[2mm] s_\text{mid} &=& 7/9 +\sqrt{p}\;\Big[ C_{1/3}\big(q\,p^{-3/2}\big) -C_{1/3}\big(\!-q\,p^{-3/2}\big)\Big]\,, \end{eqnarray} \end{subequations} where the third solution $s_\text{low}=7/3-s_\text{high}-s_\text{mid}$ can be omitted, as it lies below $2/3$ for $\kappa$ in the domain considered [the stationary limit of, e.g., Eq.~\eqref{eq:4ODEsFRWdim-v} requires $s\geq2/3$ because $r_{V}$ from \eqref{eq:rV-Ansatz} is non-negative by definition].
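The root formulas \eqref{eq:sroots} can be cross-checked against a direct numerical solution of the cubic \eqref{eq:cubic} (an illustrative Python check; $\kappa=0.05$ lies inside the three-real-root domain):
\begin{verbatim}
import numpy as np

# Compare the closed-form roots (eq. sroots) with a direct numerical
# solution of eq. (cubic) in x = 1 - s, for kappa = 0.05.
kappa = 0.05

x = np.roots([9.0, -6.0, 1.0 + 9.0*kappa**2, -6.0*kappa**2])
print(np.sort(1.0 - x.real))   # s-roots: approx 0.6228, 0.7266, 0.9839

def C13(t):                    # Chebyshev cube root, |t| <= 2
    return 2.0*np.cos(np.arccos(t/2.0)/3.0)

p = (1.0/3.0)*(1.0/27.0 - kappa**2)
q = (2.0/9.0)*(2.0*kappa**2 - 1.0/81.0)

s_high = 7.0/9.0 + np.sqrt(p)*C13(-q*p**-1.5)
s_mid  = 7.0/9.0 + np.sqrt(p)*(C13(q*p**-1.5) - C13(-q*p**-1.5))
print(s_mid, s_high)           # approx 0.7266, 0.9839
\end{verbatim}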
The first de-Sitter-type solution (labeled ``$\text{deS,0}$'' because $f\sim 0$ for $|\kappa|\ll 1$) is then given by \begin{subequations}\label{eq:deSsolution0} \begin{eqnarray} s^\text{(deS,0)} &=& s_\text{high} = 1 - 6\, \kappa^2 - 162\, \kappa^4 +\text{O}\big(\kappa^6\big)\,, \\[2mm] f^\text{(deS,0)} &=& \overline{f}_{-}\big( s_\text{high} \big) = 9\, \big(\kappa^2 + 36\, \kappa^4\big) +\text{O}\big(\kappa^6\big)\,, \\[2mm] h^\text{(deS,0)} &=& \eta \big/\big(4\sqrt{3}\big)\; \big|f^\text{(deS,0)}\big|^{3/4}\;\big|1-s^\text{(deS,0)}\big|^{-1} = \sqrt{\gamma/6}\; \nonumber\\ &&\times \left[ 1 - (81/2)\, \kappa^4 +\text{O}\big(\kappa^6\big)\right]\,, \label{eq:deSsolution0-h}\\[2mm] r_{M,n}^\text{(deS,0)}&=& 0\,, \end{eqnarray} \end{subequations} in terms of the function $\overline{f}_{-}(s)$ defined by \eqref{eq:fsolution} and with an integer $n=1,2$ to label the different matter components. Note that the expression in the middle of \eqref{eq:deSsolution0-h} simply follows from \eqref{eq:4ODEsFRWdim-h} for $\dot{h}=0$ and $u$ from \eqref{eq:dimensionless-potential-u}. The second solution (labeled ``$\text{deS,1}$'' because $f\sim 1$ for $|\kappa|\ll 1$) is given by \begin{subequations}\label{eq:deSsolution1} \begin{eqnarray} s^\text{(deS,1)} &=& s_\text{mid} = 2/3 + \kappa + 3\, \kappa^2 + (27/2)\, \kappa^3+ 81\, \kappa^4 +\text{O}\big(\kappa^5\big)\,, \label{eq:deSsolution1-s}\\[2mm] f^\text{(deS,1)} &=& \overline{f}_{-}\big( s_\text{mid} \big) =1 - 6\, \kappa - 27\, \kappa^3 - 162\, \kappa^4 +\text{O}\big(\kappa^5\big)\,, \label{eq:deSsolution1-f}\\[2mm] h^\text{(deS,1)} &=& \eta \big/\big(4\sqrt{3}\big)\; \big|f^\text{(deS,1)}\big|^{3/4}\;\big|1-s^\text{(deS,1)}\big|^{-1} = \sqrt{2\gamma\kappa}\big/ 1024 \; \nonumber\\ &&\times \Big[ 1024 - 1536\, \kappa + 1152\, \kappa^2 + 1728\, \kappa^3 + 17496\, \kappa^4 +\text{O}\big(\kappa^5\big) \Big]\,, \label{eq:deSsolution1-h}\\[2mm] r_{M,n}^\text{(deS,1)}&=& 0\,, \end{eqnarray} \end{subequations} where $\kappa$ is non-negative according to the original definition \eqref{eq:kappa}. Note that the last expressions of both \eqref{eq:deSsolution0-h} and \eqref{eq:deSsolution1-h} are proportional to $\sqrt{\gamma}$ with all further dependence on $\gamma$ entering through the parameter $\kappa \propto \eta^2/\gamma$, as can be expected on general grounds from the ODEs~\eqref{eq:4ODEsFRWdim} without matter. It is not quite trivial that there indeed exist de-Sitter-type solutions in the modified-gravity theory \eqref{eq:action-S-f}. The first solution \eqref{eq:deSsolution0} is far from the equilibrium state $f_\text{equil}=1$ and the second solution \eqref{eq:deSsolution1} is close to it, at least for $|\kappa|\ll 1$. The scaling behavior of both solutions under the limit $\gamma\to\infty$ for constant $\eta$ is also different, with $h$ diverging for the first solution and staying constant for the second. For fixed parameters $\gamma$ and $\eta$, numerical results suggest that the first solution \eqref{eq:deSsolution0} is unstable and the second solution \eqref{eq:deSsolution1} stable [and possibly an attractor]. In the following, the focus is on the second solution close to the equilibrium value $f_\text{equil}=1$ (corresponding to $q=q_0$). In fact, two remarks on the de-Sitter-type solution \eqref{eq:deSsolution1} are in order. 
First, observe that local experiments in this model universe with $\phi^\text{(deS,1)} \sim 2/3 < 1$ would have an increased effective gravitational coupling \begin{eqnarray}\label{eq:Geff} \overline{G}_{N} &\equiv& G_\text{\,eff}^\text{\,local\;exps}\,\Big|^\text{(deS,1)} \sim \big(1/\phi^\text{(deS,1)}\big)\; G \sim (3/2)\; G \,, \end{eqnarray} where the term $G/\phi^\text{(deS,1)}$ in the middle comes directly from the combination $K\,\phi=\phi/(16\pi\, G)$ present in the action \eqref{eq:BDaction-S-U}. Here, ``local experiments'' denote experiments on length scales very much less than the typical length scale of de-Sitter-type spacetime, the horizon distance $L_\text{hor}= c/H^\text{(deS,1)}$, whose numerical value will be discussed shortly. It would then appear that the quantity \eqref{eq:Geff} must be identified with Newton's gravitational constant $G_{N}$ as measured by Cavendish~\cite{Cavendish1798} and modern-day experimentalists~\cite{MohrTaylorNewell2008}; see \cite{endnote-G_Newton} for additional comments. Second, the de-Sitter-type solution \eqref{eq:deSsolution1} of model \eqref{eq:BDaction-S-U} or equivalently model \eqref{eq:action-S-f} has the inverse Hubble constant \begin{equation}\label{eq:hinverse} \left(h^\text{(deS,1)}\right)^{-1} = 4/\sqrt{3}\,\;\eta^{-1} \approx 2.3 \times 10^{3}\;\left( \frac{10^{-3}}{\eta} \right)\,, \end{equation} as follows from \eqref{eq:deSsolution1-h} by neglecting terms suppressed by powers of $\kappa=\text{O}(1/\gamma)=\text{O}(10^{-38})$ and anticipating a particular order of magnitude for the model parameter $\eta$. With the conversion factor from \eqref{eq:Dimensionless1-tau-h}, the dimensionless quantity \eqref{eq:hinverse} corresponds to \begin{eqnarray}\label{eq:Hinverse} \left(H^\text{(deS,1)}\right)^{-1} &\sim& 4/\sqrt{3}\;\,\eta^{-1}\;(3/2)\,K_{N}\:q_{0}^{-3/4} \sim 8 \times 10^{17}\,\text{s}\,\left( \frac{10^{-3}}{\eta} \right) \left(\frac{200\;\text{MeV}}{q_{0}^{1/4}}\right)^3\;, \end{eqnarray} where, according to \eqref{eq:Geff}, an approximate factor $3/2$ appears in going from $K$ to the Newtonian value $K_{N}\equiv (16\pi G_{N})^{-1}$. The time scale found in \eqref{eq:Hinverse} is of the same order as the inverse Hubble constant $(H_0)^{-1} \approx 4.5\times 10^{17}\,\text{s}\;(0.70/h_{0})$ for the measured value $h_{0}\approx 0.70$ as reported in Refs.~\cite{Freedman2001,Komatsu2008,Vikhlinin-etal2008}. By equating the theoretical quantity $1/H^\text{(deS,1)}$ from \eqref{eq:Hinverse} multiplied by an \emph{ad hoc} factor $g=\half$ with the measured value $1/H_0$, a first estimate of the model parameter $\eta$ in the original action \eqref{eq:action-S-f} is obtained, \begin{equation}\label{eq:eta-first-estimate} \eta \sim \sqrt{3}\, K_{N}\:q_{0}^{-3/4} \:H_0 \sim 10^{-3}\,, \end{equation} for the $q_{0}$ and $H_0$ values mentioned in the previous paragraph. Admittedly, the choice of one-half for the factor $g$ is somewhat arbitrary, but consistent with the physical picture of our present Universe entering a de-Sitter phase. A more reliable estimate of $\eta$ will come from the numerical study of a model universe with both vacuum and matter energies. The numerical solution found will be seen to interpolate between the analytic solutions \eqref{eq:Fsolution} and \eqref{eq:deSsolution1}.
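The order of magnitude in \eqref{eq:eta-first-estimate} follows from elementary arithmetic in natural units (an illustrative spot-check; the input numbers are those quoted above):
\begin{verbatim}
import math

# Spot-check of eq. (eta-first-estimate): eta ~ sqrt(3) K_N q0^(-3/4) H_0,
# in natural units with energies in GeV.
hbar_GeV_s = 6.582e-25              # hbar in GeV s
E_Planck   = 1.22e19                # GeV
q0_quarter = 0.2                    # q0^(1/4) in GeV
H0         = hbar_GeV_s / 4.5e17    # measured H_0 converted to GeV

K_N = E_Planck**2 / (16.0*math.pi)  # (16 pi G_N)^(-1) in GeV^2
eta = math.sqrt(3.0) * K_N * q0_quarter**-3 * H0
print(eta)                          # approx 9e-4, i.e. of order 10^-3
\end{verbatim}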
\subsection{Exploratory numerical results} \label{subsec:Exploratory-numerical-results} Equation \eqref{eq:4ODEsFRWdim-h} for the potential $u(\dimlessscalar,f)$ from \eqref{eq:dimensionless-potential-u} makes clear that a model universe with an asymptotically nonvanishing Hubble constant, $h(\dimlesstime) \to \text{const} \ne 0$, requires a nonvanishing modified-gravity parameter, $\eta \ne 0$. The analytic de-Sitter solution with $\dot{h}= \dot{s}=\dot{f} = 0$ has already been given in Sec.~\ref{subsec:Analytic-results}. The numerical solution of ODEs~\eqref{eq:4ODEsFRWdim} for $\eta \sim 10^{-3}$ is presented in Fig.~\ref{fig:1} and several observations can be made: \begin{itemize} \item[(i)] The boundary conditions on the functions will be discussed in Sec.~\ref{subsec:Elementary-scaling-analysis}. \item[(ii)] There is a transition from deceleration in the early universe to acceleration in the late universe. \item[(iii)] The values for $s$, $1-f$, and $h$ at the largest time shown in Fig.~\ref{fig:1} agree already at the $10\,\%$ level with those of the analytic de-Sitter-type solution \eqref{eq:deSsolution1}. \item[(iv)] The ratio $r_{M,\text{tot}}/\big(6\,h^2\big)$ is equal to $0.25$ at the dimensionless cosmic time $\dimlesstime \approx 1.4\times 10^3$. \end{itemize} Points (ii)--(iv) suggest that, for the model parameter values chosen, the model universe at $\dimlesstime_{p}= 1.432\times 10^3$ resembles our own present Universe, characterized by the values \eqref{eq:FRW-OmegaXM-wX}. \begin{figure*}[ht] \vspace*{1mm} \begin{center} \includegraphics[width=0.85\textwidth]{gluon-cond-cosmology_FIG1_v6.eps} \end{center} \vspace*{1mm} \caption{Numerical solution of ODEs~\eqref{eq:4ODEsFRWdim}, with vacuum energy density \eqref{eq:rV-Ansatz}, Brans--Dicke scalar potential \eqref{eq:dimensionless-potential-u}, and both relativistic matter (energy density $r_{M,1}$) and nonrelativistic matter (energy density $r_{M,2}$). The figure panels are organized as follows: the panels of the first column from the left concern the expansion factor $a(t)$, those of the second column the modified-gravity scalar $s(t)$, those of the third column the gluon-condensate vacuum variable $f(t)$, and those of the fourth column the matter energy densities $r_{M,n}$. The model parameters are $\big(\gamma,\,\eta^2,\, w_{M,1},\,w_{M,2}\big)$ $=$ $\big( {10^{2},\, 9 \times 10^{-7},\, 1/3,\, 0}\big)$, with the resulting parameter $\kappa\equiv (3/32)\,\eta^2/\gamma= 8.4375\times 10^{-10}$. The boundary conditions at $\dimlesstime_\text{start}=0.1$ are $\big(a,\, h,\, \dimlessscalar,\, v,\, 1-f,\, r_{M,1},\, r_{M,2}\big)$ $=$ $\big( 1,\, 4.082483,\, 0.8,\, 0.8164966,\, 8.437500\times 10^{-9},\,75.97469,\, 24.02531 \big)$; see Sec.~\ref{subsec:Elementary-scaling-analysis} for details. The several energy-density parameters $\Omega$ and the effective ``dark-energy'' equation-of-state parameter $\overline{w}_{X}$ are defined in \eqref{eq:Omegabar} and \eqref{eq:modgrav-wXbar}, respectively. With $\gamma/\eta^2 \gg 1$, the values of $\Omega_{V}$ are negligible compared to those of $\Omega_\text{grav}$ for the time interval shown.} \label{fig:1} \end{figure*} More quantitatively, the following three estimates can be obtained. 
First, the product of the dimensionless age $\dimlesstime_{p}$ of the present universe with its dimensionless expansion rate $h(\dimlesstime_{p}) \approx 0.6351\times 10^{-3}$ gives \begin{subequations}\label{eq:results-age-wX-z_inflect} \begin{equation} t_{p}\,h(t_{p})\approx 0.91\,, \label{eq:results-age} \end{equation} which also holds for the product of the dimensionful quantities, $\tau_{p}\,H(\tau_{p})\approx 0.91$. Second, evaluating the particular combination \eqref{eq:modgrav-wXbar} of first and second derivatives of $a(t)$ and the matter energy density $\rho_{M}$, the present effective EOS parameter of the unknown component is found to be \begin{eqnarray} \overline{w}_{X}(\dimlesstime_{p}) &\equiv& -\frac{2}{3}\,\left.\left(\frac{\ddot{a}\,a}{(\dot{a})^2}+\frac{1}{2}\right)\; \frac{1}{1-\Omega_{M}}\;\right|_{\dimlesstime=\dimlesstime_{p}} \approx -0.66\,. \label{eq:results-wX} \end{eqnarray} For larger times $t \gg \dimlesstime_{p}$, this parameter $\overline{w}_{X}(t)$ drops to the value $-1$, as can be expected from the right-hand side of \eqref{eq:modgrav-wXbar}. Additional numerical values are $\overline{w}_{X}=-0.75082$, $-0.98921$, $-0.99780$, and $-0.99989$ for $t=2000$, $4000$, $8000$, and $16000$, respectively. Observe that the particular combination of observables \eqref{eq:modgrav-wXbar} is designed to be interpreted as the effective EOS parameter of the unknown component $X$ only if matter-pressure effects are negligible ($t \gtrsim 500$ in Fig.~\ref{fig:1}). Third, consider the transition of deceleration to acceleration mentioned in point (ii) above. In mathematical terms, this time corresponds to the nonstationary inflection point of the function $a(t)$, that is, the value $t_\text{inflect}$ at which the second derivative of $a(t)$ vanishes but not the first derivative. Referring to the model universe at $\dimlesstime_{p}=1.432\times 10^3$, the inflection point $\dimlesstime_\text{inflect}\approx 0.863\times 10^3$ corresponds to a redshift \begin{equation} z_\text{inflect} \equiv a(t_{p})/a(t_\text{inflect})-1 \approx 0.5\,, \label{eq:results-z_inflect} \end{equation} \end{subequations} which implies that the acceleration is a relatively recent phenomenon in this model universe. Inspection of the lower panels of Fig.~\ref{fig:1} shows that the acceleration sets in when the ratio of $\Omega_{X}=\Omega_\text{grav}+\Omega_{V}$ and $\Omega_{M,\text{tot}}$ is approximately unity, whereas the standard $\Lambda$CDM model would have $\Omega_{X}/\Omega_{M,\text{tot}}\sim 1/2$ according to \eqref{eq:standard-FRW-ddota}. Returning to the first estimate \eqref{eq:results-age}, note that this quantity can be interpreted as the age of the present universe in time units obtained from the present expansion rate. But it is also possible to obtain the absolute age of the model universe, using the time scale contained in \eqref{eq:Dimensionless1-tau-h}, which requires as input the experimental value of the QCD gluon condensate $q_{0}$ and the one of Newton's constant $G_{N}$, taken to be equal to the effective gravitational coupling $\overline{G}_{N}$ from \eqref{eq:Geff}. 
With the conversion factors from \eqref{eq:Dimensionless1-tau-h} and the relation $G \sim s(t_{p})\, G_{N}$ for $K\equiv 1/(16\pi G)$, the numerical results $t_{p} \approx 1432$, $h(t_{p}) \approx 1/1575$, and $s(t_{p}) \approx 0.7267$ give the following two dimensionful quantities of the present universe: \begin{subequations}\label{eq:results-tp-Hp} \begin{eqnarray} \tau_{p} &=& t_{p}\,K\,q_{0}^{-3/4} \sim 13.1\;\text{Gyr}\,, \label{eq:results-tp}\\[2mm] H_{p} &=& h(t_{p})\,K^{-1}\,q_{0}^{3/4} \sim 68 \;\text{km}\;\text{s}^{-1}\;\text{Mpc}^{-1}\,, \label{eq:results-Hp} \end{eqnarray} \end{subequations} where the numerical values have been calculated with $q_{0} = (210\;\text{MeV})^4$. Remark that, if the relation $G \sim G_{N}$ holds for Cavendish-type experiments as mentioned in \cite{endnote-G_Newton}, the same numerical values are obtained in \eqref{eq:results-tp-Hp} by taking $q_{0} \approx (190\;\text{MeV})^4$ and, if $G \sim G_{N}/2 $ holds, by taking $q_{0} \approx (230\;\text{MeV})^4$. All of these three $q_{0}$ values lie below the value $q_{0}\approx (330\;\text{MeV})^4$ indicated by particle physics~\cite{ShifmanVainshteinZakharov1978}, but the uncertainty in the latter value appears to be large~\cite{Narison1996,Rakow2006,AndreevZakharov2007}. In addition, it may be that certain particle-physics experiments are more appropriate than others to determine the truly homogeneous condensate $q_{0}$ relevant to cosmology. Compared with the observations~\cite{Riess-etal1998,Perlmutter-etal1998,Freedman2001,Eisenstein2005,Astier2006,Riess2007,Komatsu2008,Vikhlinin-etal2008}, the values obtained in \eqref{eq:results-age-wX-z_inflect} and \eqref{eq:results-tp-Hp} have the correct order of magnitude, which is all that can be hoped for at the present stage. Still, it is remarkable that more or less reasonable values appear at all~\cite{endnote-variable-G-Newton}. For comparison, the standard flat--$\Lambda$CDM model \eqref{eq:standard-FRW-dota-ddota}--\eqref{eq:FRW-OmegaXM-wX} with boundary condition $r_{M}(t_{p})/r_{V}=1/3$ gives the product $\tau_{p}\,H(\tau_{p})\approx 1.01$, the effective EOS parameter $\overline{w}_{X}=-1$, and the inflection-point redshift $z_\text{inflect}=(6)^{1/3}-1 \approx 0.82$. These three numbers fit the observational data perfectly well, but the $\Lambda$CDM model is purely phenomenological and cannot explain, without further input,\footnote{Taking as additional input the \emph{measured} value~\cite{Freedman2001} $ h_{0}\approx 0.70$ of the Hubble constant $H_{0} \equiv h_{0}\;100\;\text{km}\;\text{s}^{-1}\;\text{Mpc}^{-1} = h_{0}\, (9.778 \times 10^{9}\,\text{yr})^{-1}$, the $\Lambda$CDM-model result $\tau_{0}\,H_{0}\approx 1.01$ gives the dynamic age $\tau_{0} \approx 14.2\;\text{Gyr}$.} the absolute age of the Universe as in \eqref{eq:results-tp} or the absolute vacuum energy density as will be discussed in Sec.~\ref{sec:Conclusion}.
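The conversions behind \eqref{eq:results-tp-Hp}, as well as the quoted $\Lambda$CDM reference numbers, can be reproduced with elementary arithmetic (an illustrative spot-check; input values as quoted above):
\begin{verbatim}
import math

# Spot-check of eq. (results-tp-Hp), with q0^(1/4) = 210 MeV and
# K = K_N / s(t_p) from G ~ s(t_p) G_N.
hbar_GeV_s = 6.582e-25
E_Planck, q0_quarter = 1.22e19, 0.210          # GeV
t_p, h_p, s_p = 1432.0, 1.0/1575.0, 0.7267

K = (E_Planck**2/(16.0*math.pi))/s_p           # GeV^2
unit_s = K * q0_quarter**-3 * hbar_GeV_s       # K q0^(-3/4) in seconds
print(t_p*unit_s/3.156e16)                     # tau_p approx 13.1 Gyr
print((h_p/unit_s)*3.0857e19)                  # H_p approx 68 km/s/Mpc

# Flat-LambdaCDM reference numbers quoted above:
OL, OM = 0.75, 0.25
print((2.0/(3.0*math.sqrt(OL)))*math.asinh(math.sqrt(OL/OM)),
      (2.0*OL/OM)**(1.0/3.0) - 1.0)            # approx 1.01 and 0.82
\end{verbatim}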
\subsection{Elementary scaling analysis} \label{subsec:Elementary-scaling-analysis} In the previous subsection, the ODEs~\eqref{eq:4ODEsFRWdim} have been solved numerically for certain parameter values and boundary conditions at $t=t_\text{start}$, which need to be discussed further. As explained in Sec.~\ref{subsec:Preliminaries}, $t_\text{start}$ is considered to correspond to a time just after the QCD crossover has happened. This implies, in particular, that the starting value $h(t_\text{start})$ for the expansion rate is approximately given by the value $[(r_{V}+r_{M,\text{tot}})/6]^{1/2}$ of the corresponding standard FRW universe \eqref{eq:standard-FRW-dota}. The $f$ value at $t_\text{start}$ follows from \eqref{eq:fsolution-B-zeta} for the chosen $s$ value (see below) and the starting value for $v$ is obtained by solving \eqref{eq:Friedmann-type-eq}, considered as a linear equation in $v$ with all other quantities given. \begin{table}[t] \begin{center} \caption{Numerical results for the ``present epoch'' [defined by $\Omega_{M}(t_{p})=0.25$] in model universes with different numerical values for the parameters $Z$ and $\eta$, where the latter parameter controls the modified-gravity term in the action \eqref{eq:action-S-f} and the former is defined by \eqref{eq:Z-definition} in terms of the physical energy scales. Other parameters and boundary conditions are given by \eqref{eq:scaling-gamma-tstart-rM1start-rM2start}, with constants $\widehat{\gamma}$, $\widehat{t}$, and $\widehat{r}$ set equal to $1$. A further boundary condition is $s(t_\text{start})=0.8\,$; see Sec.~\ref{subsec:Elementary-scaling-analysis} for details. The effective equation-of-state parameter $\overline{w}_{X}$ and the inflection-point redshift $z_\text{inflect}$ are defined in \eqref{eq:results-wX} and \eqref{eq:results-z_inflect}, respectively. Figure~\ref{fig:1} for $Z=10^{-2}$ illustrates the general behavior of $h(t)$, $\overline{w}_{X}(t)$, and other physical quantities. \vspace*{2mm}} \label{tab-scaling-results} \renewcommand{\tabcolsep}{1pc} \renewcommand{\arraystretch}{1.0} \begin{tabular}{cc|cccccc} \hline\hline $Z$ & $10^{6}\;\eta^2$ & $10^{-3}\;t_{p}$ & $10^{4}\;h(t_{p})$ & $s(t_{p})$ & $t_{p}\,h(t_{p})$ & $\overline{w}_{X}(t_{p})$&$z_\text{inflect}$\\ \hline $10^{-1\phantom{0}}$ & $0.8$ & $1.522$ & $5.980$ &$0.7272$ & $0.910$ & $-0.669$ & $0.541$\\ $10^{-2\phantom{0}}$ & $0.9$ & $1.432$ & $6.351$ &$0.7267$ & $0.910$ & $-0.662$ & $0.538$\\ $10^{-4\phantom{0}}$ & $0.7$ & $1.629$ & $5.584$ &$0.7259$ & $0.910$ & $-0.663$ & $0.515$\\ $10^{-8\phantom{0}}$ & $0.8$ & $1.523$ & $5.967$ &$0.7255$ & $0.909$ & $-0.660$ & $0.505$\\ $10^{-16}$ & $0.9$ & $1.436$ & $6.330$ &$0.7256$ & $0.909$ & $-0.660$ & $0.506$ \\ \hline\hline \end{tabular} \end{center} \vspace*{2cm}\end{table} Next, the value of $t_\text{start}$ itself and the corresponding values for $r_{M,1}$ and $r_{M,2}$ need to be specified. These values depend on the physical ratio $Z$ defined by \eqref{eq:Z-definition}. Following the results for the standard FRW universe, take \begin{subequations}\label{eq:scaling-gamma-tstart-rM1start-rM2start} \begin{eqnarray} \gamma &=&\widehat{\gamma}\;Z^{-1}\,, \label{eq:scaling-gamma}\\[2mm] t_\text{start} &=&\widehat{t}\;\sqrt{Z}\,, \label{eq:scaling-tstart}\\[2mm] r_{M,1}\big(t_\text{start}\big)&=&\widehat{r}\;Z^{-1}\big/\big(1+Z^{1/4}\big)\,, \label{eq:scaling-rM1start}\\[2mm] r_{M,2}\big(t_\text{start}\big)&=&\widehat{r}\;Z^{-3/4}\big/\big(1+Z^{1/4}\big)\,, \label{eq:scaling-rM2start} \end{eqnarray} \end{subequations} where the constants $\widehat{\gamma}$, $\widehat{t}$, and $\widehat{r}$ are numbers of order unity [in the present elementary analysis, they are just set equal to $1$].
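For $Z=10^{-2}$ and hat-constants equal to $1$, these \emph{Ans\"{a}tze} reproduce the boundary conditions quoted in the caption of Fig.~\ref{fig:1} (an illustrative numerical check):
\begin{verbatim}
import math

# Boundary conditions of Fig. 1 from the scaling Ansaetze for Z = 10^-2.
Z = 1.0e-2
gamma   = 1.0/Z                             # = 100
t_start = math.sqrt(Z)                      # = 0.1
r_M1 = Z**-1.0  / (1.0 + Z**0.25)           # = 75.97469...
r_M2 = Z**-0.75 / (1.0 + Z**0.25)           # = 24.02531...
h_start = math.sqrt((r_M1 + r_M2)/6.0)      # = 4.082483 (r_V negligible)
print(gamma, t_start, r_M1, r_M2, h_start)
\end{verbatim}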
With $\widehat{t}=1$ and the particular \emph{Ans\"{a}tze} \eqref{eq:scaling-rM1start}--\eqref{eq:scaling-rM2start}, there is equality of the relativistic (label $n=1$) and nonrelativistic (label $n=2$) energy densities around $t\sim 1$, which is not entirely unrealistic if the present universe has $t\sim 10^3$. Finally, the boundary condition value $s(t_\text{start})$ is taken between $0$ and $1$. The results are, however, rather insensitive to the precise value of $s(t_\text{start})$; see \cite{endnote-sbcs} for selected numerical results. The explanation is that, independent of the precise starting value, $s(t)$ increases rapidly until, at $t\sim 1$, it bounces back from the $s=1$ ``wall'' and, then, slowly descends towards the de-Sitter value, with some initial oscillations. Having specified the boundary conditions of the physical variables, the optimal model parameter $\eta$ needs to be determined. The strategy is as follows: for a given $Z$ value, assume an $\eta$ value, determine $t_{p}$ from the condition $\Omega_{M,\text{tot}}(t_{p})=0.25$, evaluate the product $t_{p}\,h(t_{p})$, and, if necessary, return to a new value of $\eta$ in order to get $t_{p}\,h(t_{p})$ closer to the asymptotic value of approximately $0.909$. Numerical results are given in Table~\ref{tab-scaling-results}. Three physical quantities, the relative age of the present universe $t_{p}\,h(t_{p})$, the effective EOS parameter $\overline{w}_{X}$, and the inflection-point redshift $z_\text{inflect}$, appear to approach constant values as $Z$ drops to zero. This nontrivial result suggests that the behavior shown in Fig.~\ref{fig:1} and the corresponding estimates \eqref{eq:results-age-wX-z_inflect}--\eqref{eq:results-tp-Hp} also apply to the physical case with $Z\sim 10^{-38}$ as given by \eqref{eq:Z-definition}. \section{Conclusion} \label{sec:Conclusion} The bottom-row panels of Fig.~\ref{fig:1}, if at all relevant to our Universe, suggest that the present accelerated expansion may be due primarily to the nonanalytic modified-gravity term in the action \eqref{eq:action-S-f} rather than the direct vacuum energy density $\rho_{V}(q)$, because $q$ is already very close to its equilibrium value $q_0$, making $\rho_{V}(q)\sim \rho_{V}(q_{0})=0$. Referring to the definitions in \eqref{eq:Omegabar}, the second panel of the bottom row shows the effective energy-density parameter $\Omega_\text{grav}$ due to the gluon-condensate-induced modification of gravity and the third panel the energy-density parameter $\Omega_{V}$ from the vacuum energy density proper [with EOS parameter $w_{V}=-1$], their total giving $\Omega_{X}$ which equals $1-\Omega_{M}$ for a flat FRW universe. As discussed in Secs.~\ref{subsec:Preliminaries} and \ref{subsec:Exploratory-numerical-results}, the total unknown `$X$' component has an effective EOS parameter $\overline{w}_{X}$ which drops to the value $-1$ as the de-Sitter-type universe is approached. Remark that, in contrast to the results of, e.g., Refs.~\cite{Faulkner-etal2007,Brax-etal2008}, nontrivial dark-energy dynamics has been obtained, because the effective action \eqref{eq:action-S-f} is assumed to be valid only on cosmological length scales, not solar-system or laboratory length scales [see also the discussion in the paragraph of Sec.~\ref{subsec:Theory} containing Eq.~\eqref{eq:h_ext}]. 
As it stands, the effective action \eqref{eq:action-S-f} can be viewed as an efficient way to describe the main aspects of the late evolution of the Universe, with only two fundamental energy scales, $E_\text{QCD} \sim 10^{8}\;\text{eV}$ and $E_\text{Planck}\sim 10^{28}\;\text{eV}$, and a single dimensionless coupling constant, $\eta \sim 10^{-3}$. Moreover, this effective coupling constant $\eta$ can, in principle, be calculated from quantum chromodynamics and general relativity, which may or may not confirm our numerical value of approximately $10^{-3}$; cf. Refs.~\cite{KlinkhamerVolovik2009a,ThomasUrbanZhitnitsky2009} and the third remark in the Note Added. Elaborating on the source of the present acceleration, consider the second term on the right-hand side of \eqref{eq:BDfield-eqs-Gmunu}, which can be rewritten as $+(2\phi K)^{-1}\,\big(\rho_\text{V,\,BD}\big)\,g_{\mu\nu}$ for the Brans--Dicke vacuum energy density $\rho_\text{V,\,BD}\equiv -K U$. The exact de-Sitter-type solution \eqref{eq:deSsolution1} for $\kappa \ll 1$, together with the conversion factor from \eqref{eq:Dimensionless1-u-s} and Newton's constant from \eqref{eq:Geff}, then allows for the following estimate: \begin{eqnarray} \rho_\text{V,\,BD}\,\Big|^\text{(deS,1)} &=& -u\,q_{0}^{3/2}/K\,\Big|^\text{(deS,1)} = 12\pi\,\eta^2\;q_{0}^{3/2}\,G \sim (\pi/8)\,\eta^2\; K_\text{QCD}^3/E_\text{Planck}^2 \nonumber\\[1mm] \hspace*{-0.5cm} &\sim& \big( 2 \times 10^{-3}\,\text{eV}\big)^4\; \times \left(\frac{\eta}{10^{-3}}\right)^2 \left(\frac{K_\text{QCD}}{\big(420\,\text{MeV}\big)^2}\right)^{3}\,, \label{eq:rhoV-BD} \end{eqnarray} where $q_{0}$ has been expressed in terms of the QCD string tension $K_\text{QCD}$~\cite{ChengLi1985}, specifically, $q_{0}=E_\text{QCD}^4 \approx (K_\text{QCD}/4)^2$. The parametric dependence of the above expression, $\rho_{V}\propto K_\text{QCD}^3/E_\text{Planck}^2$, is the same as that of the previous estimate (6.7) in Ref.~\cite{KlinkhamerVolovik2009a}, but expression \eqref{eq:rhoV-BD} now comes from the solution of field equations. Two other dimensionful quantities, the age and expansion rate of the Universe, have already been given in \eqref{eq:results-tp-Hp}. Before the asymptotic de-Sitter-type universe with effective energy density \eqref{eq:rhoV-BD} is reached, the Brans--Dicke scalar $\phi$ evolves and allows for an effective EOS parameter $\overline{w}_{X}$ different from $-1$ [the scalar $\phi$ has no direct kinetic term in the action \eqref{eq:BDaction-S}, but the $\phi R$ term does give, by partial integration, an effective kinetic term for $\phi$, which, in fact, leads to the generalized Klein--Gordon equation \eqref{eq:BDfield-eqs-Box-eta}]. For the present Universe, the general lesson may be that the deformation of the QCD gluon condensate $q$ by the spacetime curvature of the expanding Universe can result in an effective EOS parameter $\overline{w}_{X}$ which evolves with time and, for the present epoch, can still be somewhat above its asymptotic value of $-1$. In turn, a possible discovery of a $\overline{w}_{X}$ time dependence may provide an additional incentive to theoretical investigations of the physics of the gravitating gluon condensate. \vspace*{-0\baselineskip}\newline \emph{Note Added. ---} After completion of the work reported here, we became aware of two earlier articles and a third article recently posted on the archive. 
The first article~\cite{Amendola-etal2007} is a systematic study of the cosmology of $f(R)$ modified-gravity models and identifies the modified-gravity term \eqref{eq:action-f}, for constant $q$, as cosmologically viable [observe the different sign definition of $R$ compared to ours]. The second article~\cite{PogosianSilvestri2007} investigates the growth of density perturbations in $f(R)$ modified-gravity models and establishes, in Eq.~(42), the effective gravitational coupling parameter for subhorizon CDM density perturbations, which turns out to be close to $G_{N}$ for the model universe of Fig.~\ref{fig:1} at times $t \lesssim 500$ (redshifts $z \gtrsim 1$). The third article~\cite{UrbanZhitnitsky2009} presents a QCD calculation for the origin of the modified-gravity term \eqref{eq:action-f} and may also explain the smallness of the coupling constant $\eta$, even though many conceptual and technical issues remain to be resolved. \vspace*{-2mm}
\section{Introduction} Modern eye tracking sensors offer a suitable alternative to conventional input devices (i.e. keyboard and mouse) for users for whom manual interaction might be difficult or impossible. However, gaze-based interaction has well-known challenges, the most important of which are (1) \textit{Midas touch}, where a system cannot distinguish the basic function of the eye (i.e. looking and perceiving) from deliberate interaction with the system, and (2) \textit{eye jitter}, which is caused by small physiological eye movements occurring during a fixation to perceive a scene visually \cite{JitterDefinition}. In this paper, we propose EyeTAP (Eye tracking point-and-select by Targeted Acoustic Pulse), an effective multimodal solution to the Midas touch problem. Specifically, our method integrates the user's gaze, which controls the mouse pointer, with audio input captured by a microphone, which triggers button-press events for real-time interaction. The contributions of this paper are twofold. Firstly, we have designed and developed an effective multimodal interaction technique, EyeTAP. The proposed approach is low-cost and allows for a completely hands-free interaction solution between the user and the computer system using only an eye-tracker and an audio input device. Secondly, we present two independent user studies, each with two experiments, comparing EyeTAP with the other widely-used interaction techniques. The analysis of the results clearly shows that EyeTAP has at least comparable performance with the mouse. Furthermore, EyeTAP reaches competitive performance with the remaining eye-based interaction methods in cases where users would have restricted physical movement, or where manual interaction with an input device is not possible, e.g. a medical practitioner having both hands busy. \section{Related Work} In eye-based interaction, the Midas touch problem occurs when a user accidentally activates a computer command by looking when the intention was simply to look around and perceive the scene. According to Jacob \cite{MidasTouchDefinition}, this problem occurs because eye movements are natural, e.g. the eyes are used to look around an object or to scan a scene, often without any intention to activate a command or function. This phenomenon is one of the major challenges in eye interaction techniques, and diverse methods have been proposed to address the Midas touch problem. The solutions can be categorized into four groups according to the interaction technique they employ: (a) dwell-time processing, (b) smooth pursuits, (c) gaze gestures, and (d) multimodal interaction. Below, we describe each of these solutions and provide example use-cases. \subsection{Dwell-time processing} Dwell-time is the amount of time that the eye gaze must remain on a specific target in order to trigger an event. Researchers have tried to determine specific thresholds to handle the Midas touch problem \cite{ProbabilisticDwellTime, FocalFixations}. For example, Pi \emph{et al.} proposed a probabilistic model for text entry using eye gaze \cite{ProbabilisticDwellTime}. They reduced the Midas touch problem by assigning each letter a probability value based on the previously chosen letter, such that a letter with lower probability requires a longer activation time, and vice versa. Velichkovsky \emph{et al.} applied focal fixations to resolve the Midas touch problem by using the mean duration (empirically set to 325 ms) of fixations in a visual search task to trigger a function \cite{FocalFixations}.
Dwell time has been shown to be even faster than the mouse in certain tasks, e.g. selecting a letter given an auditory cue~\cite{SibertJacob2000}. However, with dwell time there is a trade-off between accuracy and speed \cite{10.1145/1028014.1028045, 10.1145/1452392.1452443, majaranta2006effects}. The method of applying focal fixations may be very subjective since searching time varies across users \cite{Bednarik_Gowases_Tukiainen_2009}. Moreover, increasing the threshold may increase the duration of the entire interaction. Conversely, reducing the amount of dwell-time may lead to more errors for some users \cite{10.1145/1452392.1452443}. \subsection{Smooth pursuits} Smooth pursuits are a form of eye movement that occurs when a moving stimulus (e.g. an object or animation) is followed with gaze \cite{Barnes2012}. The method is typically implemented by using two visual points on the interface that appear above and below each target. Then, to activate the target, the user must fixate on one of these points. This technique has been used to select targets~\cite{Pursuits}, to control home appliances~\cite{AmbiGaze}, to activate functions such as mouse clicks~\cite{GazeEverywhere}, or to use the music player on a smartwatch~\cite{OrbitsSmartWatch}. Schenk \emph{et al.} proposed a framework (GazeEverywhere) which enables users to replace mouse inputs \cite{GazeEverywhere}. This solution includes a computer to process gaze interactions (gaze PC) and a computer to show the results (unmodified PC), which are connected via a micro-controller to trigger mouse click events, and a glass pane to project gaze targets onto a second screen. Vidal \emph{et al.} introduced an interaction technique (Pursuits) for large screens using moving objects to be activated by eye gaze \cite{Pursuits}. They used a Tobii X300 eye tracker and a public display to select targets on the screen. Velloso \emph{et al.} presented a framework (AmbiGaze) to control ambient devices such as TVs and stereos (each assigned an infrared (IR) beacon) with eye gaze using a head-mounted eye tracker \cite{AmbiGaze}. The system employs a server to process gaze inputs and control the devices. Esteves \emph{et al.} presented a framework for a multi-touch Android smartwatch (Callisto 300) to input commands using a head-mounted eye tracker (Pupil Pro) \cite{OrbitsSmartWatch}. They developed three use-cases: a music player, a notifications panel with six colored points on the smartwatch screen representing six applications (e.g. social media apps), and a missed-call menu with four commands: call back, reply text, save number, and clear the notification. \subsection{Gaze gestures} Gaze gestures are sequences of eye movements that follow a predefined pattern in a specific order~\cite{GazeGesture2007}. Researchers have proposed techniques which can be applied to analyze eye movements to detect unique gestures (e.g. ~\cite{GazeGesture2016, GazeGesture2007, GazeGesture2012, GazeGesture2010}). Drewes \emph{et al.} assigned up, down, left, right and diagonal directions to different characters on the keyboard, thereby allowing a user to select a letter by moving the eye gaze in any direction \cite{GazeGesture2007}. In addition, they tried to distinguish between natural and intentional eye movements by using short fixation times during gesture detection and long fixation times to reset the gesture recognition.
Istance \emph{et al.} developed two-legged and three-legged gaze gestures (up, down and diagonal patterns) for command selection to play World of Warcraft for users with motor impairments \cite{GazeGesture2010}. In a similar work, Hyrskykari \emph{et al.} studied both dwell-time and gaze gesture interactions in the context of video games and found that gaze gestures had better performance for command activation \cite{GazeGesture2012}. Moreover, gaze gestures produced fewer errors than dwell-time and led to fewer visual distractions. B{\^a}ce \emph{et al.} proposed an AR prototype, containing a head-mounted eye tracker and a smartwatch, to embed virtual messages in real-world objects to be shared with peer users~\cite{GazeGesture2016}. The authors integrated eye gaze gestures as a pattern to encode and decode messages attached to a specific object previously tagged by another peer user, thus using gaze gestures as an authentication mechanism for secure communication. \subsection{Multimodal Interaction} Multimodal techniques apply extra inputs from another modality (e.g. touch, audio, etc.) as the trigger of a function in addition to eye tracking. They can be divided into the following sub-categories: using mechanical switches, touch interaction, or facial gestures. \subsubsection{Applying a specific (mechanical) switch} For some specific domains, such as rehabilitation, and user groups (i.e. users with motor impairments or severe disabilities), researchers have applied specific switches to activate an event or function. For instance, Rajanna \emph{et al.} proposed a combined framework for users with disabilities which applies a foot pedal device to click on objects and to enter text~\cite{FootSwitch}. Meena \emph{et al.} applied a soft button on a wheelchair to control the movements of the wheelchair in different directions (horizontal, vertical and diagonal)~\cite{WheelchairSwitch}. Sidorakis \emph{et al.} applied a switch for a gaze-controlled multimedia framework on virtual reality head-mounted displays (Oculus Rift) to resolve the Midas touch problem~\cite{BinocularEyeTracking}. Biswas \emph{et al.} proposed a joystick to control point-and-select tasks for combat aviation platforms to address the Midas touch problem~\cite{JoystickSwitch}. \subsubsection{Touch interaction} Some researchers have proposed the integration of touch interaction, for a limited number of functions, to increase the accuracy of target selection. Pfeuffer \emph{et al.} applied a cursor at the gaze point, controlled by a finger of the hand holding the tablet, where a finger tap on the screen leads to a click at the current location of the pointer (CursorShift method)~\cite{GazeAndTouch01}. In a similar study by Pfeuffer \emph{et al.}, the authors investigated the integration of finger touch and pen inputs on a tablet for zooming or annotating tasks on images \cite{GazeAndTouch02}. Although this technique was not introduced as a solution to the Midas touch problem, it can increase the accuracy of selection, which helps reduce Midas touch errors. \subsubsection{Facial gestures recognition} In \cite{FaceGestures}, Rozado \emph{et al.} studied the potential of using live video monitoring to detect facial gestures to enhance eye tracking interaction. In their work (FaceSwitch), they associated facial gestures (opening mouth, raising eyebrows, smiling and twitching the nose up and down) to simulate left and right mouse clicks and customized some keyboard functions such as page down key press.
Using a multimodal solution that combines eye-gaze with acoustic inputs (audio or speech detection) can be regarded as an alternative to the reviewed solutions and has the advantage of not requiring either extra hardware or a specialized user interface design. For this reason, we designed EyeTAP to use audio processing for selection. Our solution: (1) provides a hands-free interaction technique for users with special needs, and (2) addresses the Midas touch problem. Although there has been some work done on audio detection to simulate system events for computer interactions (e.g.~\cite{Blui, Cappella,DirectManipulationPatternRecognition}), the focus has been on signal processing for complex interactions. Conversely, in our work we applied acoustic inputs only as a way of sending commands. \section{EyeTAP Prototype} A simple mouse interaction consists of moving the pointer to a target (pointing phase), and clicking on it to trigger a function (selection phase). In the EyeTAP prototype the mouse pointer position is captured using the Tobii 4C tracker ~\footnote{https://tobiigaming.com/product/tobii-eye-tracker-4c/} and selection is done by generating an acoustic pulse by mouth (e.g. a mouth click), which is captured by a headset microphone (Logitech H370). The EyeTAP prototype was developed and the experiments were run on a commodity computer system: 64-bit Windows 10 PC with Intel i7 2.67GHz CPU, 12 GB RAM, 1 TB hard disk and NVIDIA GeForce GTX 770 graphics card. Thus, EyeTAP is a cost-effective system that can be applied in almost any workspace. Figure \ref{fig:system_overview} illustrates the EyeTAP system setup. \begin{figure}[htbp] \centering \includegraphics[width=0.30 \textwidth]{figures/system_overview.pdf} \caption{EyeTAP system: The eye tracker is used to move the pointer from A to B. The user makes an acoustic pulse by mouth and the signal processing module interprets the signal as an input and triggers a click event to select B. The system has an ambient noise tolerance of up to 70 dB.} \label{fig:system_overview} \end{figure} \subsection{Eye Tracking: Pointing Phase} The Tobii SDK (TobiiEyeXSdk$-$Cpp$-$1.8.498) supports different events related to eye tracking activities such as providing the location of the current eye gaze, positions of both eyes, fixation points and user presence in front of the eye tracker. We employed the eye gaze library (API) to obtain users' gaze locations. These locations show the current gaze position on the screen in pixels. The SDK supports eye movements in a 3D coordinate system (horizontal, vertical, depth) but we applied a 2D coordinate system (x,y) such that the mouse cursor was synchronized with the gaze positions to control the mouse pointer on the screen. Eye-tracking for the EyeTAP prototype was developed in C++ and integrated as a new plug-in into the Tobii SDK. \subsection{Auditory Processing: Selection Phase} To simulate a click on the item to be selected, a headset microphone listens to the user while suppressing the background ambient sounds/noise (conversations in office and equipment sounds) in real-time. The intensity of the mouth noise and the distance of the microphone are adjusted by the user before the test. A detected pulse in the real-time audio signal (a value larger than a predefined threshold) is regarded as a click. The threshold's value can be adjusted based on the environment to reduce background ambient noise. The EyeTAP prototype has an ambient noise tolerance of up to 70 dB. When a significant increase in the frequency spectrum (greater than the threshold) is detected, a mouse click event is triggered. In general, recording is categorized into two phases: audible and silent periods. Any audible period with an intensity greater than the predefined threshold is detected as an input signal to the system as binary 1; similarly, values smaller than the threshold are regarded as binary 0. The intuition behind the auditory processing was inspired by the simplicity of Morse code~\cite{morse_code}, which consists of a series of ON/OFF signals triggered by tone or light. Information is interpreted using dots and dashes and can therefore be represented as a sequence of True/False variables.
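For illustration only, the selection mechanism can be prototyped along the following lines (a minimal sketch, not the actual EyeTAP implementation described above; the threshold value and the third-party packages \texttt{sounddevice} and \texttt{pyautogui} are assumptions of this sketch):
\begin{verbatim}
import numpy as np
import sounddevice as sd
import pyautogui

THRESHOLD = 0.3    # normalized amplitude, tuned to ambient noise
armed = [True]     # re-arm only after the signal falls below threshold

def callback(indata, frames, time, status):
    peak = float(np.max(np.abs(indata)))  # peak of current audio block
    if peak > THRESHOLD and armed[0]:
        pyautogui.click()                 # audible pulse -> click (binary 1)
        armed[0] = False
    elif peak < THRESHOLD:
        armed[0] = True                   # silent period (binary 0)

with sd.InputStream(channels=1, samplerate=16000, callback=callback):
    sd.sleep(60_000)                      # listen for one minute
\end{verbatim}
The re-arming flag mimics the audible/silent (binary 1/0) distinction described above, so that one continuous pulse produces a single click.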
\section{Evaluation} \label{Evaluation} To evaluate the effectiveness of the developed EyeTAP prototype, we ran two independent user studies, each with two internal experiments, with 33 participants (13 female, from 22 to 35 years old, SD=2.96). All subjects took part in both experiments. Prior to running the experiments, subjects were informed about the purpose of the study, trained on each of the methods to be tested, and participated in a pre-test questionnaire probing them on their background in the fields of eye tracking and voice recognition technologies, and on their preferred kind of interaction in the case of hands-free alternatives. The Tobii calibration software was used to calibrate the system for each participant before starting the study. At the end of the two experiments subjects filled out a post-test questionnaire, which consisted of the NASA TLX questionnaire \cite{nasa_tlx} followed by specific questions about the subjects' perceptions of the different interaction methods. The order of interaction method was randomly selected for each participant. We played artificial ambient noise at 50 dB through stereo desktop speakers to simulate a typical work environment, since EyeTAP and voice recognition rely on audio inputs. \subsection{User Study 1: Matrix-based Test} In the first experiment, the EyeTAP interaction method was compared with: (a) the mouse, (b) dwell-time, and (c) eye tracking with voice-recognition. In this experiment, a matrix of buttons (targets) was randomly distributed across the screen. The task of the subjects was to point and click on buttons shown on the screen in increasing numerical order for various levels of difficulty from 1 (easy) to 5 (hard), described in detail below. The order of interaction methods seen by each subject was randomly selected for each participant; however, the level of difficulty was presented in ascending order. \subsubsection{Stimulus} The stimulus consisted of 77 buttons (11 columns $\times$ 7 rows), some labeled with numbers and others not, which covered the entire screen at a resolution of 1920 $\times$ 1080 pixels on a Dell P2411Hb monitor. Two marginal columns (far left, far right) and two rows (top, bottom) were removed from the active selection because they were too difficult for users to select during the pilot test. Buttons that were not labeled are considered \textit{barriers} or \textit{distractions}. To provide feedback to the subject, labeled buttons change color after the user has successfully pointed at and selected the correct button. Wrongly selected barriers (buttons with no label) are highlighted in red. The level of difficulty of the stimulus was also increased across subject trials.
This was done by increasing the number of targets that had to be selected by the subject. Five levels of difficulty were used for each interaction method: level 1 (4 targets), level 2 (6 targets), level 3 (8 targets), level 4 (10 targets) and level 5 (12 targets). Targets were randomly distributed over the entire screen for each level. Figure \ref{fig:test_screenshot} shows the matrix-based test during difficulty level 5. A black circle was used as the cursor instead of a pointer because it is easier for users to keep it within the target's boundary. \begin{figure}[htp] \centering \includegraphics[scale=0.35]{figures/matrix_test_screenshot.pdf} \caption{The matrix-based test for difficulty level 5. Target buttons are distributed randomly across the screen. The red button illustrates an error. The black circle on number 12 shows the current eye gaze location. Labels were enlarged for higher visibility.} \label{fig:test_screenshot} \end{figure} \subsubsection{Mouse} For the mouse method (our baseline method for comparison), subjects simply used a mouse to move to targets and select them in numerical order. \subsubsection{Dwell-time} For the dwell-time method, an internal timer was used to determine whether a target was selected. Typical dwell-times for target selection lie in the range of 300--1100 milliseconds \cite{vspakov2004line}. We set the target activation threshold to 500 milliseconds, since this value showed the best performance in \cite{mackenzie2012evaluating} and participants preferred a dwell-time around 500 ms in a user study \cite{vspakov2004line}. In other words, a target was selected when a subject focused on a target for 0.5 seconds, and if the subject moved their gaze away from the target prior to 0.5 seconds the target selection process would restart. \subsubsection{Eye Tracking with Voice recognition} For voice recognition, eye tracking was used for pointing and voice for selection. The method was developed using the built-in Windows 10 speech recognition capabilities available in the .NET framework. We implemented a C\# application to respond to the activation keyword 'select' to trigger a mouse click. The same microphone was used as for the EyeTAP test. \subsubsection{Measures} \label{MatrixBasedVariablesDetails} The following variables were recorded: \textit{completion time}, \textit{path cost of selecting targets}, \textit{error locations}, and \textit{cognitive load} (based on the NASA TLX scores). An internal logging module recorded subjects' actions, selection times, as well as the number of correct and wrong selections. For the path cost measure, the shortest path between targets and the path produced by each interaction method were processed. To compare the shapes of the generated paths, we used the dynamic time warping (DTW) algorithm~\cite{1104847, 1163491,1163055}. Since DTW works on a time-value domain, the paths produced by the eye tracker were decomposed into their horizontal and vertical values and compared with the \textit{X} and \textit{Y} values of their associated shortest-path models. We applied the built-in \textit{DTW} function in the Python DTW 1.3.3 module \footnote{https://github.com/pierre-rouanet/dtw} to measure the deviations of each path from the shortest path model.
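For concreteness, the per-axis comparison can be sketched as follows (an illustrative sketch, assuming the interface of the \textit{dtw} 1.3.3 package; the function and array names are hypothetical):
\begin{verbatim}
import numpy as np
from dtw import dtw

def path_cost(recorded, ideal):
    """recorded, ideal: (n, 2) arrays of screen coordinates in pixels."""
    cost = 0.0
    for axis in (0, 1):                   # X and Y series, as in the text
        d, _, _, _ = dtw(recorded[:, axis].reshape(-1, 1),
                         ideal[:, axis].reshape(-1, 1),
                         dist=lambda a, b: float(np.abs(a - b)))
        cost += d
    return cost
\end{verbatim}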
\subsection{User Study 1: Dart-based Test} The purpose of this experiment was to measure the accuracy of EyeTAP in comparison to the previously proposed eye-based interaction methods. The task of the subject was to select, as accurately as possible, the bull's-eye of a dart target using each interaction method. In this experiment, the eye tracker was used for the pointing phase for each of the interaction methods; however, selection of the target was triggered by different methods, i.e. dwell-time, voice command or the EyeTAP acoustic pulse. In order to take into consideration the fact that eye tracking has different accuracy in different regions of the monitor, we computed an average value based on five trials for each interaction method, where the stimulus was shown at randomly chosen areas near the center of the screen. Each new randomly chosen trial began two seconds after selection of the previous target, allowing users time to change their gaze and to focus on the new target. For the dwell-time method, a countdown (from 5 to 0, each step representing 100 milliseconds of remaining time) was displayed during the selection phase, and users needed to focus on the dart shape before the countdown ended. \subsubsection{Stimulus} The stimulus for this experiment consisted of a dart-like target with three circles: green (0 to 30 pixels radius), blue (30 to 60 pixels radius) and red (60 to 90 pixels radius), as in Figure \ref{fig:circular_based}. Points within the center area (green) have the lowest range of distances to the bull's-eye; each further concentric circle has a larger range of distance values. Any point lying outside the three concentric circular areas is assigned a fixed maximum distance of 90 pixels. For this experiment, a cross-hair icon was used. \begin{figure}[htbp] \centering \includegraphics[width=0.23\textwidth]{figures/circular_based_test.pdf} \caption{Dart-based test stimuli: the accuracy is highest in the green area. The cross-hair icon indicates the correct eye gaze location.} \label{fig:circular_based} \end{figure} \subsubsection{Measures} The purpose of this test was to measure the distance of the selected point on the dart target to the center of the core (green) circle; thus, accuracy is measured in pixels. Since the trial locations are chosen randomly, the average over trials is used to compare the selection accuracy of the different methods. \subsection{User Study 2: Ribbon-shaped Test} In order to compare our method to other studies, we performed the FittsStudy~\cite{wobbrock2011effects}. This study is used to analyze pointing interaction methods in accordance with well-established academic standards. As part of this study, we measured three metrics to compare the performance of all interaction techniques for point-and-select tasks: (1) \textit{throughput}, (2) \textit{movement time} and (3) \textit{error rates} for ribbon-shaped targets (see figure \ref{fig:fitts_overview}). We applied the FittsStudy application \footnote{http://depts.washington.edu/acelab/proj/fittsstudy/index.html} by Wobbrock \emph{et al.} \cite{wobbrock2011effects}. The test session includes three distances (256, 384, and 512 pixels) and two widths (96 and 128 pixels). \subsection{User Study 2: Circle-shaped Test} This test is similar to the ribbon-shaped test but uses a different target shape. Figure \ref{fig:fitts_overview} shows screenshots of both test applications. This experiment uses univariate endpoint deviation (\textit{SD\textsubscript{x}}) along one axis and bivariate endpoint deviation (\textit{SD\textsubscript{x,y}}) along both axes for the throughput calculations, which results in a better Fitts' law model \cite{wobbrock2011effects}.
\begin{figure}[htbp] \centering \includegraphics[width=0.25 \textwidth]{figures/fitts_overview.pdf} \caption{Screenshots of the 'FittsStudy' application \cite{wobbrock2011effects}. The top figure illustrates the ribbon-shaped stimuli and the bottom figure shows the circle-shaped stimuli. The highlighted targets are shown in blue to represent the active target to be selected.} \label{fig:fitts_overview} \end{figure} \section{Results} To determine the effectiveness of the EyeTAP method, we analyzed the results of our experiments using an analysis of variance (ANOVA) followed by Bonferroni posthoc tests with the IBM SPSS software \footnote{https://www.ibm.com/analytics/spss-statistics-software}. \subsection{User Study 1: Matrix-based User Study} A two-way repeated measures ANOVA (methods $\times$ difficulty levels) was performed to examine the effect of interaction type on: (1) \textit{completion time} and (2) \textit{path costs of target selection} for each method and difficulty level. \subsubsection{Completion time} We found a significant effect of interaction method on completion time (F(12,384)=8.51, \textit{p} < .001). A posthoc Bonferroni comparison test showed a significant difference between mouse ($M=8017.955~ms$, $SE=645.433~ms$) and all eye tracking methods (see figure~\ref{fig:matrix_completion_time}). In addition, EyeTAP ($M=19998.812~ms$, $SE=2122.329~ms$), dwell-time ($M=11154.830~ms$, $SE=788.395~ms$) and voice recognition ($M=26904.333~ms$, $SE=2467.576~ms$) are significantly different from one another (\textit{p} < .05). Figure \ref{fig:matrix_completion_time} illustrates the average completion time per method for 8 targets per level ($\frac{40~targets}{5~levels}$). \begin{figure}[htbp] \centering \includegraphics[width=0.47 \textwidth]{figures/matrix_overall_completion_time.pdf} \caption{Average completion time of point-and-select tasks for all participants obtained from the matrix-based user study for 8 targets per level ($\frac{40~targets}{5~levels}$). Completion time was significantly different for all techniques ($p < .001$).} \label{fig:matrix_completion_time} \end{figure} \subsubsection{Path costs of target selections} To examine the paths produced by selecting targets, we compared the recorded path between the original locations of the targets with the shortest path (the ideal path model), as described in Section \ref{Evaluation}. For each method, we thus obtained a distance (cost) relative to the shortest path. This metric can be regarded as the \textit{footprint} of each interaction technique on the display. A two-way repeated measures ANOVA (methods $\times$ difficulty levels) showed that there was a significant effect of interaction type on path cost (F(12,384)=2.57, \textit{p} < .05). A Bonferroni posthoc test showed that dwell-time ($M=76.73~pixels$, $SE=5.09~pixels$) produced the shortest path among all interaction techniques, even shorter than the mouse interaction ($M=109.25~pixels, SE=3.82~pixels$) with \textit{p} < .05. However, there is no significant difference between dwell-time ($M=76.73~pixels$, $SE=5.09~pixels$), EyeTAP ($M=84.80~pixels$, $SE=3.59~pixels$) and voice recognition ($M=82.03~pixels$, $SE=4.41~pixels$). Figure \ref{fig:dtw_all}, which shows the path costs for all interaction methods, reveals that eye tracking produces significantly shorter paths than the mouse on a large screen. \begin{figure}[htbp] \includegraphics[width=0.47 \textwidth]{figures/dtw_all.pdf} \caption{Mean path cost comparison calculated using the dynamic time warping (DTW) algorithm.
All eye tracking techniques have shorter path lengths than mouse interaction for traversing items on a screen ($p < .05$).} \label{fig:dtw_all} \end{figure} \subsubsection{Errors in target selections} To measure the effectiveness of each Midas touch solution, we need to consider a penalty for wrongly selected neighboring targets. Those targets are shown in red on the screen (see figure \ref{fig:test_screenshot}). We plotted the locations of errors for each interaction method; since difficulty level 5 has the highest number of targets (12) on the screen, we show the locations for this difficulty level in Figure \ref{fig:error_locations}. EyeTAP has the highest number of errors; more importantly, the figure reveals the regions of the screen that are most error prone. As shown in the figure, most errors occurred from the center towards the right side of the screen: the right side of the screen produces more errors than the left side, and the lower side produces more errors than the top side. Feit \emph{et al.} showed that the same bottom and right regions of the screen have lower accuracy \cite{feit2017toward}. We confirm their results and also demonstrate that the same regions are more error prone. \begin{figure}[htbp] \centering \includegraphics[width=0.43 \textwidth]{figures/error_locations.pdf} \caption{The locations of errors during the matrix-based user study (figure \ref{fig:test_screenshot}) for difficulty level 5. The right side of the screen as well as the bottom side are more error prone than the left and top sides.} \label{fig:error_locations} \end{figure} \subsection{User Study 2: Dart-based User Study} We performed a one-way repeated measures ANOVA to compare the effect of the different interaction methods on accuracy. The results of the ANOVA showed a statistically significant difference between the methods (F(3,96)=104.92, \textit{p} < 0.001) in selection accuracy. The mouse interaction has the lowest distance to target (highest accuracy) compared to the eye tracking techniques. Among the eye tracking techniques, EyeTAP ($M=45.11~pixels$, $SE=2.28~pixels$) showed the largest mean distance to target (lowest accuracy) compared to dwell-time ($M=35.30~pixels$, $SE=2.11~pixels$) and voice recognition ($M=29.27~pixels$, $SE=2.07~pixels$). Figure \ref{fig:dart_results_chart} depicts the results of the accuracy test. \begin{figure}[htbp] \centering \includegraphics[width=0.47 \textwidth]{figures/dart_results_chart.pdf} \caption{The mean distance to target in pixels for the dart-based experiment ($p < .001$).} \label{fig:dart_results_chart} \end{figure} \subsection{User Study 2: Ribbon-shaped Test} A one-way repeated measures ANOVA was performed to examine the effect of interaction type on: (1) \textit{movement time}, (2) \textit{throughput} and (3) \textit{error rates} for each interaction method. \subsubsection{Movement time} We found a significant effect of the interaction method on movement time (F(3,96)=69.42, \textit{p} < .001). A posthoc Bonferroni comparison test showed a significant difference between mouse ($M=684.15~ms$, $SE=16.80~ms$) and all eye tracking methods (figure~\ref{fig:ribbon_movement_time}). In addition, among all eye tracking methods, dwell-time ($M=599.39~ms$, $SE=18.76~ms$) achieved a significantly lower movement time than the EyeTAP ($M=1794.89~ms$, $SE=170.90~ms$) and voice recognition ($M=2014.20~ms$, $SE=89.28~ms$) techniques. However, there is no statistical significance between EyeTAP and voice recognition.
The lower movement time of the dwell-time method compared to the mouse interaction is associated with the short activation time (500 ms). \begin{figure}[htbp] \includegraphics[width=0.47 \textwidth]{figures/ribbon_movement_time.pdf} \caption{The calculated movement time per method for the ribbon-shaped test ($p < .001$).} \label{fig:ribbon_movement_time} \end{figure} \subsubsection{Throughput} We found a significant effect of the interaction method on throughput (F(3,96)=75.13, \textit{p} < .001). A posthoc Bonferroni comparison test showed a significant difference between dwell-time ($M=3.30~bits/sec$, $SE=0.36~bits/sec$) and all other eye tracking methods (figure~\ref{fig:ribbon_tp}). The mouse ($M=4.81~bits/sec$, $SE=0.11~bits/sec$) achieved a higher throughput than the eye tracking methods. However, there is no statistical difference between voice recognition ($M=1.15~bits/sec$, $SE=0.09~bits/sec$) and EyeTAP ($M=1.34~bits/sec$, $SE=0.12~bits/sec$). \begin{figure}[htbp] \centering \includegraphics[width=0.47 \textwidth]{figures/ribbon_tp.pdf} \caption{The calculated throughput per method for the ribbon-shaped test ($p < .001$).} \label{fig:ribbon_tp} \end{figure} \subsubsection{Error rates} We found a significant effect of the interaction method on error rates (F(3,96)=27.15, \textit{p} < .001). A posthoc Bonferroni comparison test showed a significant difference between mouse ($M=0.01~errors$, $SE=0.005~errors$) and all eye tracking interactions (see Figure~\ref{fig:ribbon_error_rates}). In addition, dwell-time ($M=0.28~errors$, $SE=0.03~errors$) reached a higher error rate than EyeTAP ($M=0.18~errors$, $SE=0.02~errors$) and voice recognition ($M=0.10~errors$, $SE=0.02~errors$). \begin{figure}[htbp] \centering \includegraphics[width=0.47 \textwidth]{figures/ribbon_error_rate.pdf} \caption{The calculated error rates per method for the ribbon-shaped test ($p < .001$).} \label{fig:ribbon_error_rates} \end{figure} \subsection{User Study 2: Circle-shaped Test} A one-way repeated measures ANOVA was performed to examine the effect of interaction type on: (1) \textit{movement time}, (2) \textit{throughput} and (3) \textit{error rates} for each interaction method. This experiment is similar to the ribbon-shaped test but contains an extra metric to measure the throughput of each method. \subsubsection{Movement time} We found a significant effect of the interaction method on movement time (F(3,96)=67.48, \textit{p} < .001). A posthoc Bonferroni comparison test showed a significant difference between EyeTAP ($M=1578.95~ms$, $SE=95.34~ms$), dwell-time ($M=638.80~ms$, $SE=24.35~ms$), voice recognition ($M=2123.35~ms$, $SE=132.42~ms$) and mouse ($M=727.91~ms$, $SE=46.12~ms$). However, there is no statistical difference between mouse ($M=727.91~ms$, $SE=46.12~ms$) and dwell-time ($M=638.80~ms$, $SE=24.35~ms$). Figure \ref{fig:circle_movement_time} illustrates the mean movement time per method for the circle-shaped test. \begin{figure}[htbp] \centering \includegraphics[width=0.47 \textwidth]{figures/circle_movement_time.pdf} \caption{The calculated movement time per method for the circle-shaped test ($p < .001$).} \label{fig:circle_movement_time} \end{figure} \subsubsection{Throughput} Since the circle-shaped test contains two variations (uni-variate, bi-variate) to measure throughput \cite{wobbrock2011effects}, we ran a two-way repeated measures ANOVA (throughput $\times$ variation) and found a significant effect of the interaction method on throughput (F(3,96)=19.75, \textit{p} < .001).
A posthoc Bonferroni comparison test showed a significant difference between mouse ($M=4.16~bits/sec$, $SE=0.18~bits/sec$), dwell-time ($M=3.20~bits/sec$, $SE=0.25~bits/sec$), voice recognition ($M=1.24~bits/sec$, $SE=0.07~bits/sec$) and EyeTAP ($M=1.04~bits/sec$, $SE=0.13~bits/sec$). However, there is no statistical difference between voice recognition ($M=1.24~bits/sec$, $SE=0.07~bits/sec$) and EyeTAP ($M=1.04~bits/sec$, $SE=0.13~bits/sec$). Figure \ref{fig:circle_tp_chart} shows both variations of throughput per interaction method. \begin{figure*}[htbp] \centering \includegraphics[width=0.73 \textwidth]{figures/circle_tp_chart.pdf} \caption{The calculated throughput for both the uni- and bi-variate variations per method for the circle-shaped test ($p < .001$).} \label{fig:circle_tp_chart} \end{figure*} \subsubsection{Error rates} We found a significant effect of the interaction method on error rates (F(3,96)=18.25, \textit{p} < .001). A posthoc Bonferroni comparison test showed a significant difference between mouse ($M=0.02~errors$, $SE=0.01~errors$), dwell-time ($M=0.23~errors$, $SE=0.03~errors$), voice recognition ($M=0.13~errors$, $SE=0.02~errors$) and EyeTAP ($M=0.28~errors$, $SE=0.02~errors$). Voice recognition ($M=0.13~errors$, $SE=0.02~errors$) reached the lowest error rate among the eye tracking methods; however, there is no statistical difference between dwell-time ($M=0.23~errors$, $SE=0.03~errors$) and EyeTAP ($M=0.28~errors$, $SE=0.02~errors$). Figure \ref{fig:circle_error_rates} illustrates the calculated error rates for the circle-shaped test. \begin{figure}[htbp] \centering \includegraphics[width=0.47 \textwidth]{figures/circle_error_rates.pdf} \caption{The calculated error rates per method for the circle-shaped test ($p < .001$).} \label{fig:circle_error_rates} \end{figure} \subsection{EyeTAP rating by users} We asked participants to rate the overall performance of EyeTAP in the post-test questionnaire on a scale from 1 (worst) to 5 (best). EyeTAP received an average rating of 3.64 ($SD=0.99$) from the 33 users. In addition, users were asked to select their preferred interaction techniques (multiple selections were allowed). Figure \ref{fig:techniques_rating_chart} shows the popularity of the interaction techniques as obtained from the post-test questionnaire; EyeTAP was the second most preferred eye tracking technique. \begin{figure}[htbp] \centering \includegraphics[width=0.47 \textwidth]{figures/techniques_rating.pdf} \caption{The users' preferences among the interaction techniques (multiple selections allowed) for the 33 participants.} \label{fig:techniques_rating_chart} \end{figure} \subsection{NASA TLX scores} Figure \ref{fig:nasa_tlx_chart} shows the NASA TLX scores for all interaction methods obtained during the user study. The overall workload is the average of the scale values: we assume all scales to be equally important and therefore omit the weighting calculations, applying a simplified version \cite{nasa_tlx_20} of the basic NASA TLX ratings \cite{nasa_tlx}. According to our findings, the dwell-time method has the lowest workload among the eye tracking techniques. However, EyeTAP shows a relatively lower workload compared to the voice recognition technique. \begin{figure*}[htbp] \centering \includegraphics[width=.99 \textwidth]{figures/nasa_tlx.pdf} \caption{The NASA TLX scores for the interaction methods. (Left) Comparison of each method based on different scales.
(Right) The overall mean workload of the tested interaction methods.} \label{fig:nasa_tlx_chart} \end{figure*} \section{Discussion} Our experiments with the examined Midas touch solutions revealed benefits and disadvantages for each method, which we discuss individually below. \subsection{Voice Recognition} This interaction method showed acceptable results but suffers from several limitations. In general, a voice recognition engine depends on the user's voice, gender, language, and accent, and it is not applicable to users with speech impediments. Another drawback is the need for prior training samples to detect words correctly. Furthermore, similar words may lead to false recognition, as we experienced during our user study. The quality of the microphone and its distance to the user are further factors to be considered for this kind of interaction, and the choice of recognition software plays an important role in the accuracy of recognition. Finally, speaking out loud may not be suitable in certain working environments. In our study, voice recognition presented several challenges for the users, namely wrongly recognized words, the need to repeat the action word, and the delay between input and feedback. The subjects' rating of this technique was very low (9.1\%). Voice recognition showed the highest completion time in the matrix-based test, the highest movement time in the circle-shaped test, and the highest cognitive workload among all interaction techniques. However, voice recognition showed the lowest error rates in both Fitts' law experiments and reached the lowest distance to target (highest selection accuracy) among the eye tracking techniques. \subsection{Dwell-Time processing} The dwell-time method showed the fastest completion time in the matrix-based test, and the fastest movement time and highest throughput in both Fitts' law experiments, owing to its short activation time (500 ms). In addition, it reached the lowest cognitive workload. However, it showed the highest error rate in the ribbon-shaped test and, together with EyeTAP, in the circle-shaped test. Moreover, some users complained about eye fatigue over the course of the test sessions. \subsection{EyeTAP} We found several benefits of using EyeTAP in comparison to the other interaction techniques. First of all, it does not depend on user-specific features: it requires only an acoustic pulse (a sound made with the mouth) near a microphone to send a signal, and its output remains consistent across repetitions even in a noisy environment (up to 70 dB). According to the results of our study, it achieved a faster completion time in the matrix-based test and a faster movement time in the circle-shaped experiment than voice recognition. In addition, it showed a path cost (pointer footprint on the display) similar to the other eye tracking techniques, and a lower cognitive workload in comparison to the voice recognition technique. Furthermore, EyeTAP was a more popular choice of interaction (36.4\%) than voice recognition (9.1\%). However, EyeTAP showed relatively lower accuracy and higher error rates than voice recognition, since most users had no prior experience with this kind of interaction. The performance of EyeTAP can be improved with more training.
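To make the selection mechanism concrete, the following is a deliberately simplified sketch of an amplitude-threshold pulse detector of the kind EyeTAP relies on. The frame size and thresholds are illustrative assumptions rather than the actual EyeTAP parameters, and audio capture is assumed to be handled by the surrounding application.

\begin{verbatim}
import numpy as np

FRAME_LEN = 256        # samples per analysis frame (illustrative)
RMS_THRESHOLD = 0.30   # normalized loudness threshold (illustrative)

def detect_pulse(samples):
    # `samples` is a 1-D array of microphone samples normalized to
    # [-1, 1], e.g. one capture buffer delivered by the audio API.
    n = len(samples) // FRAME_LEN
    frames = samples[:n * FRAME_LEN].reshape(n, FRAME_LEN)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))  # loudness per frame
    # A pulse is a loud frame surrounded by much quieter frames; this
    # distinguishes a short mouth click from sustained speech or noise.
    for i in range(1, n - 1):
        if (rms[i] > RMS_THRESHOLD
                and rms[i - 1] < RMS_THRESHOLD / 2
                and rms[i + 1] < RMS_THRESHOLD / 2):
            return True   # trigger a mouse click at the gaze position
    return False
\end{verbatim}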
In general, EyeTAP is simple, integrates well into existing user interfaces, and allows for easy and accurate point-and-select interaction because it separates the actions of \textit{pointing} and \textit{selecting} into two different modalities while relaxing the requirement for accurate voice recognition. The results of our user study demonstrate that EyeTAP is a feasible alternative interaction technique. Moreover, it is a robust and effective solution to the Midas touch problem for eye tracking platforms and can be regarded as an alternative to the voice recognition technique. \section{Conclusion and Future Work} In this paper, we proposed EyeTAP (Eye tracking point-and-select by Targeted Acoustic Pulse), an eye-tracking interface that addresses the Midas touch problem with acoustic input detection. EyeTAP allows for accurate and effective interaction without the need for extra equipment or special user interface design for gaze-based interactions. The performance of the prototype was measured in two independent user studies with 33 participants based on eight criteria: (1) \textit{completion time}, (2) \textit{path cost of target selection}, (3) \textit{error rate}, (4) \textit{error locations on screen}, (5) \textit{accuracy of target selection}, (6) \textit{movement time}, (7) \textit{throughput}, and (8) \textit{cognitive workload}. The results of our user studies showed that the dwell-time method outperformed the other eye tracking techniques, including EyeTAP, on most criteria. At the same time, we found that EyeTAP is a competitive and promising solution in comparison to the other tested methods: it provides a faster task completion time, faster movement time, and lower workload than voice recognition. In addition, EyeTAP showed performance similar to the dwell-time method and a lower error rate in the ribbon-shaped experiment. Moreover, our study showed that eye tracking has a smaller on-screen footprint than a mouse pointer. We also confirmed that the central regions towards the right and bottom sides of the screen are more error prone than the left and top sides. Finally, we developed two user tests that can be used to study target selection for gaze-based interaction techniques. Although we implemented only the left mouse click event, EyeTAP demonstrates a completely hands-free, touchless alternative to mouse interaction for users with disabilities and for users who need to avoid physical contact with input devices because of their workplace or situation. Thus, we believe EyeTAP can be regarded as a competitive technique to both dwell-time and voice recognition. In future work, we will apply the EyeTAP technique to AR/VR headsets to measure its usability in different scenarios.
\section{\label{sec:intro}Introduction} The top quark is the heaviest known fundamental particle and was discovered in 1995~\cite{cdftopobs,d0topobs} at the Tevatron proton-antiproton collider at Fermilab. The dominant top quark production mode at the Tevatron is $p\bar{p} \rightarrow t\bar{t}X$. Since the time of discovery, over 100 times more integrated luminosity has been collected, providing a large number of \ttbar\ events with which to study the properties of the top quark. In the standard model (SM), the branching ratio for the top quark to decay to a $W$ boson and a $b$ quark is $> 99.8$\%. The on-shell $W$ boson from the top quark decay has three possible helicity states, and we define the fractions of $W$ bosons produced in these states as $f_0$ (longitudinal), $f_-$ (left-handed), and $f_+$ (right-handed). In the SM, the top quark decays via the $V-A$ charged weak current interaction, which strongly suppresses right-handed $W$ bosons and predicts $f_0$ and $f_-$ at leading order in terms of the top quark mass ($m_t$), $W$ boson mass ($M_W$), and $b$ quark mass ($m_b$) to be~\cite{fval} \begin{eqnarray} f_0 = \frac{(1-y^2)^2-x^2(1+y^2)}{(1-y^2)^2+x^2(1-2x^2+y^2)} \\ f_- = \frac{x^2(1-x^2+y^2+\sqrt{\lambda})}{(1-y^2)^2+x^2(1-2x^2+y^2)} \\ f_+=\frac{x^2(1-x^2+y^2-\sqrt{\lambda})}{(1-y^2)^2+x^2(1-2x^2+y^2)} \end{eqnarray} where $x=M_W/m_t$, $y=m_b/m_t$, and $\lambda = 1+x^4+y^4-2x^2y^2-2x^2-2y^2$. With the present measurements of $m_t = 173.3 \pm 1.1$ GeV/$c^2$~\cite{topmass} and $M_W=80.399 \pm 0.023$ GeV$/c^2$~\cite{wmass}, and taking $m_b$ to be 5 GeV/$c^2$, the SM expected values are $f_0=0.698$, $f_-=0.301$, and $f_+=4.1 \times10^{-4}$. The absolute uncertainties on the SM expectations, which arise from uncertainties on the particle masses as well as contributions from higher-order effects, are $\approx (0.01 - 0.02)$ for $f_0$ and $f_-$, and ${\cal O}(10^{-3})$ for $f_+$~\cite{fval}. In this paper, we present a measurement of the $W$ boson helicity fractions $f_0$ and $f_+$ and constrain the fraction $f_-$ through the unitarity requirement $f_- + f_+ + f_0 = 1$. Any significant deviation from the SM expectation would be an indication of new physics, arising either from a deviation from the expected $V-A$ coupling of the $tWb$ vertex or from the presence of non-SM events in the data sample. The most recently published results are summarized in Table~\ref{tab:prevMeas}. \begin{table} \caption{\label{tab:prevMeas} Summary of the most recent $W$ boson helicity measurements from the D0~\cite{prevd0result} and CDF~\cite{prevcdfresult} collaborations. The first uncertainty is statistical and the second systematic. } \begin{tabular}{cl} \hline \hline D0, 1 fb$^{-1}$ \cite{prevd0result} & $f_0 = 0.425 \pm 0.166 \pm 0.102,$ \\ & $f_+ = 0.119 \pm 0.090 \pm 0.053$ \\ & $f_+$ fixed: $f_0 = 0.619 \pm 0.090 \pm 0.052$ \\ & $f_0$ fixed: $f_+ = -0.002 \pm 0.047 \pm 0.047$ \\ \hline CDF, 2.7 fb$^{-1}$ \cite{prevcdfresult} & $f_0 = 0.88 \pm 0.11 \pm 0.06,$ \\ & $f_+ = -0.15 \pm 0.07 \pm 0.06$ \\ & $f_+$ fixed: $f_0 = 0.70 \pm 0.07 \pm 0.04$ \\ & $f_0$ fixed: $f_+ = -0.01 \pm 0.02 \pm 0.05$ \\ \hline \hline \end{tabular} \end{table} The extraction of the $W$ boson helicities is based on the measurement of the angle $\theta^{\star}$ between the direction opposite to the top quark momentum and the direction of the down-type fermion (charged lepton or $d$, $s$ quark) from the $W$ boson decay, both taken in the $W$ boson rest frame.
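As a numerical cross-check, the leading-order expressions for $f_0$, $f_-$, and $f_+$ given above reproduce the quoted values directly; a minimal sketch:

\begin{verbatim}
import math

m_t, m_W, m_b = 173.3, 80.399, 5.0   # GeV/c^2
x, y = m_W / m_t, m_b / m_t
lam = 1 + x**4 + y**4 - 2*x**2*y**2 - 2*x**2 - 2*y**2
den = (1 - y**2)**2 + x**2 * (1 - 2*x**2 + y**2)

f0 = ((1 - y**2)**2 - x**2 * (1 + y**2)) / den
fm = x**2 * (1 - x**2 + y**2 + math.sqrt(lam)) / den
fp = x**2 * (1 - x**2 + y**2 - math.sqrt(lam)) / den

# Prints f0 = 0.698, f- = 0.301, f+ = 4.1e-04; the three fractions
# sum to unity by construction.
print(f"f0 = {f0:.3f}, f- = {fm:.3f}, f+ = {fp:.1e}")
\end{verbatim}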
The dependence of the distribution of \coss on the $W$ boson helicity fractions is given by \begin{eqnarray} \omega(c) \propto 2(1-c^2)f_0 + (1-c)^2 f_- + (1+c)^2 f_+ \label{eq:expcost} \end{eqnarray} with $c=\coss$. After selection of a \ttbar-enriched sample, the four-momenta of the \ttbar\ decay products in each event are reconstructed as described below, permitting the calculation of \coss. Once the \coss\ distribution is measured, the values of $f_0$ and $f_+$ are extracted with a binned Poisson likelihood fit to the data. The measurement presented here is based on \ppbar\ collisions at a center-of-mass energy $\sqrt s$ = 1.96 TeV corresponding to an integrated luminosity of 5.4~fb$^{-1}$, five times more than the amount used for the result in Ref.~\cite{prevd0result}. \section{\label{sec:detector} Detector} The D0 Run II detector~\cite{d0nim} is a multipurpose detector which consists of three primary systems: a central tracking system, calorimeters, and a muon spectrometer. We use a standard right-handed coordinate system. The nominal collision point is the center of the detector, with coordinates (0,0,0). The direction of the proton beam defines the $+z$ axis. The $+x$ axis is horizontal, pointing away from the center of the Tevatron ring. The $+y$ axis points vertically upwards. The polar angle, $\theta$, is defined such that $\theta = 0$ is the $+z$ direction. Usually, the polar angle is replaced by the pseudorapidity $\eta = - \ln \tan\left(\frac{\theta}{2}\right)$. The azimuthal angle, $\phi$, is defined such that $\phi =0$ points along the $+x$ axis, away from the center of the Tevatron ring. The silicon microstrip tracker (SMT) is the innermost part of the tracking system and has a six-barrel longitudinal structure, where each barrel consists of a set of four layers arranged axially around the beam pipe. A fifth layer of SMT sensors was installed near the beam pipe in 2006~\cite{smtl0}. The data set recorded before this addition is referred to as the ``Run IIa'' sample, and the subsequent data set is referred to as the ``Run IIb'' sample. Radial disks are interspersed between the barrel segments. The SMT provides a spatial resolution of approximately 10 $\mu$m in $r-\phi$ and 100 $\mu$m in $r-z$ (where $r$ is the radial distance in the $x$-$y$ plane) and covers $|\eta| < 3$. The central fiber tracker (CFT) surrounds the SMT and consists of eight concentric carbon fiber barrels holding doublet layers of scintillating fibers (one axial and one small-angle stereo layer), with the outermost barrel covering $|\eta| < 1.7$. The solenoid surrounds the CFT and provides a 2 T uniform axial magnetic field.\\ The liquid-argon/uranium calorimeter system is housed in three cryostats, with the central calorimeter (CC) covering $|\eta|<1.1$ and two end calorimeters (EC) covering $1.5 < |\eta| < 4.2$. The calorimeter is made up of unit cells consisting of an absorber plate and a signal board; liquid argon, the active material of the calorimeter, fills the gap. The inner part of the calorimeter is the electromagnetic (EM) section and the outer part is the hadronic section.\\ The muon system is the outermost part of the D0 detector and covers $|\eta| < 2$. It is primarily made of two types of detectors, drift tubes and scintillators, and consists of three layers (A, B and C). Between layer A and layer B, there is magnetized steel with a 1.8 T toroidal field.
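For reference, the mapping between the polar angle and the pseudorapidity quoted above is monotonic and easily inverted; a minimal sketch:

\begin{verbatim}
import math

def eta_from_theta(theta):
    # Pseudorapidity from the polar angle (radians).
    return -math.log(math.tan(theta / 2.0))

def theta_from_eta(eta):
    # Inverse mapping back to the polar angle.
    return 2.0 * math.atan(math.exp(-eta))

# Example: the CFT coverage |eta| < 1.7 corresponds to polar angles
# between about 20.7 and 159.3 degrees.
print(math.degrees(theta_from_eta(1.7)))
\end{verbatim}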
\section{\label{sec:samples} Data and Simulation Samples} At the Tevatron, with proton and antiproton bunches colliding at intervals of 396 ns, the collision rate is about 2.5 MHz. Out of these $2.5\times10^{6}$ beam crossings per second at D0, only those producing events that are identified by a three-level trigger system as having properties matching the characteristics of physics events of interest are retained, at a rate of $\sim$100~Hz~\cite{d0nim,l1cal2b}. This analysis is performed using events collected with the triggers applicable for $\ell+$jets and dilepton final states between April 2002 and June 2009, corresponding to a total integrated luminosity of 5.4 fb$^{-1}$. Analysis of the Run IIa sample, which totals about 1 fb$^{-1}$, was presented in Ref.~\cite{prevd0result}. Here we describe the analysis of the Run IIb data sample and then combine our result with the result from Ref.~\cite{prevd0result} when reporting our measurement from the full data sample. The Monte Carlo (MC) simulated samples used for modeling the data are generated with {\sc alpgen}~\cite{ref:alpgen} interfaced to {\sc pythia}~\cite{ref:pythia} for parton shower simulation, passed through a detailed detector simulation based on {\sc geant}~\cite{geant}, overlaid with data collected from a random subsample of beam crossings to model the effects of noise and multiple interactions, and reconstructed using the same algorithms that are used for data. For the signal (\ttbar) sample, we must model the distribution of \coss\ corresponding to any set of values for the $W$ boson helicity fractions, a task that is complicated by the fact that {\sc alpgen} can only produce linear combinations of $V-A$ and $V+A$ $tWb$ couplings. Hence, for this analysis, we use samples that are either purely $V-A$ or purely $V+A$, and use a reweighting procedure (described below) to form models of arbitrary helicity states. {\sc alpgen} is also used for generating all $V+$jets processes, where $V$ represents a $W$ or $Z$ boson. {\sc pythia} is used for generating diboson ($WW$, $WZ$, and $ZZ$) backgrounds in the dilepton channels. Background from multijet production is modeled using data. \section{\label{sec:eventselection1}Event Selection} We expect {\it a priori} that our measurement will be limited by statistics, so our analysis strategy aims to maximize the acceptance for \ttbar\ events. The selection is done in two steps. In the first step, a loose initial selection using data quality, trigger, object identification, and kinematic criteria is applied to define a sample with the characteristics of \ttbar\ events. Subsequently, a multivariate likelihood discriminant is defined to separate the \ttbar\ signal from the background in the data. We use events in the $\ell+$jets and dilepton \ttbar\ decay channels, which are defined below. In the $\ell+$jets decay $\ttbar \rightarrow W^{+}W^{-}\bbbar \rightarrow \ell\nu\,qq^{'}\bbbar$, events contain one charged lepton (where lepton here refers to an electron or a muon), at least four jets with two of them being $b$ quark jets, and significant missing transverse energy \met\ (defined as the opposite of the vector sum of the transverse energies in each calorimeter cell, corrected for the energy carried by identified muons and for energy added or subtracted due to the jet energy calibration described below). The event selection requires at least four jets with transverse momentum $p_T > 20$ GeV/$c$ and $|\eta| < 2.5$, with the leading jet $p_T > 40$ GeV/$c$.
At least one lepton is required with $p_T > 20$ GeV/$c$ and $|\eta| < $ 1.1 (2.0) for electrons (muons). Requirements are also made on the value of \met\ and on the angle between the \met\ vector and the lepton (to reduce the contribution of events in which mismeasurement of the lepton energy gives rise to spurious \met): in the $e+$jets channel the requirement is $\met > 20$ GeV and $\Delta\phi(e,\met) > 0.7\pi - 0.045\cdot\met\hbox{/GeV}$, and in the $\mu+$jets channel the requirement is $\met > 25$ GeV and $\Delta\phi(\mu,\met) > 2.1 - 0.035\cdot\met\hbox{/GeV}$. In addition, for the $\mu+$jets channel, the invariant mass of the selected muon and any other muon in the event is required to be outside of the $Z$ boson mass window ($< 70$ GeV/$c^2$ or $> 100$ GeV/$c^2$). For the dilepton decay channel, $\ttbar \rightarrow W^{+} W^{-} \bbbar \rightarrow \bar{\ell}\nu\ell^\prime\bar{\nu^\prime} \bbbar$, the signature is two leptons of opposite charge, two $b$ quark jets, and significant \met. The event selection requires at least two jets with $p_T > 20$ GeV/$c$ and $|\eta| < 2.5$ and two leptons (electron or muon) with $p_T > 20$ GeV/$c$. The muons are required to have $|\eta| < 2.0$, and the electrons are required to have $|\eta| < 1.1$ or $1.5 < |\eta| < 2.5$. Jets are defined using a mid-point cone algorithm~\cite{jetalg} with radius 0.5. Their energies are first calibrated to be equal, on average, to the sums of the energies of the particles within the jet cone. This calibration accounts for the energy response of the calorimeters, the energy that crosses the cone boundary due to the transverse shower size, and the additional energy from event pileup and multiple $p\bar{p}$ interactions in a single beam crossing. The energy added to or subtracted from each jet due to this calibration is propagated to the calculation of \met. Subsequently, an additional correction for the average energy radiated by gluons outside of the jet cone is applied to the jet energy. Electrons are identified by their energy deposition and shower shape in the calorimeter combined with information from the tracking system. Muons are identified using information from the muon detector and the tracking system. We require the (two) highest-$p_T$ lepton(s) to be isolated from other tracks and calorimeter energy deposits in the $\ell+$jets (dilepton) channel. For all channels, we require a well-reconstructed $p\bar{p}$ vertex (PV), with the distance in $z$ between this vertex and the point of closest approach of the lepton track being less than 1 cm. The main sources of background after the initial selection in the $\ell+$jets channel are $W+$jets and multijet production; in the dilepton channels they are $Z$ boson and diboson production as well as multijet and $W$+jets production. Events with fewer leptons than required (multijet events, or $W+$jets events in the dilepton channel) can enter the sample when jets are either misidentified as leptons or contain a lepton from semileptonic quark decay that passes the electron likelihood or muon isolation criterion. In all cases they are modeled using data with relaxed lepton identification or isolation criteria. The multijet contribution to the $\ell+$jets final states in the initially-selected sample is estimated from data following the method described in Ref.~\cite{matrix}.
This method relies on the selection of two data samples, one (the tight sample) with the standard lepton criteria, and the other (the loose sample) with relaxed isolation or identification criteria. The numbers of events in each sample are: \begin{align} N_{\rm loose} &= \phantom{\varepsilon_{\ell}} N^{ \ttbar + W}+\phantom{\varepsilon_{\rm MJ}} N^{\rm MJ} \label{eq:matrix1} \\ N_{\rm tight} &= \varepsilon_{\ell} N^{ \ttbar + W}+\varepsilon_{\rm MJ} N^{\rm MJ} \label{eq:matrix2} \end{align} Here the coefficient $\varepsilon_{\ell}$ is the efficiency for isolated leptons in \ttbar\ or $Wjjjj$ events to satisfy the standard lepton requirements, while $\varepsilon_{\rm MJ}$ is the efficiency for a jet in multijet events to satisfy those requirements. We measure $\varepsilon_{\ell}$ in $Z\rightarrow\ell\ell$ control samples and $\varepsilon_{\rm MJ}$ in multijet control samples. Inserting the measured values, we solve Eqs.~\ref{eq:matrix1} and~\ref{eq:matrix2} to obtain the number of multijet events ($N^{\rm MJ}$) and the number of events with isolated leptons ($N^{\ttbar + W}$); explicitly, $N^{\rm MJ} = (\varepsilon_{\ell} N_{\rm loose} - N_{\rm tight})/(\varepsilon_{\ell} - \varepsilon_{\rm MJ})$ and $N^{\ttbar + W} = N_{\rm loose} - N^{\rm MJ}$. In the dilepton channels we model the background due to jets being misidentified as isolated leptons using data events where both leptons have the same charge. This background originates from multijet events with two jets misidentified as leptons and from $W+$jets events with one jet misidentified as a lepton.\\ To separate the \ttbar\ signal from these sources of background, we define a multivariate likelihood and retain only events above a certain threshold in the value of that likelihood. The set of variables used in the likelihood and the threshold value are optimized separately for each \ttbar\ decay channel. The first step in the optimization procedure is to identify a set of candidate variables that may be used in the likelihood. The set we consider is: \begin{itemize} \item{{\bf Aplanarity $\boldsymbol{{\cal A}}$}, defined as 3/2 of the smallest eigenvalue of the normalized momentum tensor for the jets (in the $\ell$+jets channels) or jets and leptons (in the dilepton channels). The aplanarity ${\cal A}$ is a measure of the deviation from flatness of the event, and \ttbar~ events tend to have larger values than background.} \item{ {\bf Sphericity $\boldsymbol{{\cal S}}$}, defined as 3/2 of the sum of the two smallest eigenvalues of the normalized momentum tensor for the jets (in the $\ell$+jets channels) or jets and leptons (in the dilepton channels). This variable is a measure of the isotropy of the energy flow in the event, and \ttbar~ events tend to have larger values than background.} \item{$\boldsymbol{H_T}$, introduced in Refs.~\cite{cdftopevidence} and~\cite{d0runItopsearch}, is defined as the scalar sum of the jets' $p_T$ values. Jets arising from gluon radiation often have lower $p_T$ than jets in \ttbar~ events, so background events tend to have smaller values of $H_T$ than signal.} \item{{\bf Centrality $\boldsymbol{{\cal C}}$}, defined as $\frac{H_T}{H_E}$ where $H_E$ is the sum of all jet energies.
The centrality ${\cal C}$ is similar to $H_T$ but normalized in a way that minimizes the dependence on the top quark mass.} \item{$\boldsymbol{{K_{T\text{\bf min}}^\prime}}$, defined as $\Delta R_{jj{\rm min}}\cdot\frac{E_{T{\rm min}}}{E_T^W}$, where $\Delta R_{jj{\rm min}}$ is the distance in $\eta-\phi$ space between the closest pair of jets, $E_{T{\rm min}}$ is the lowest jet $E_T$ value in the pair, and $E_T^W$ is the transverse energy of the leptonically-decaying $W$ boson (in the dilepton channels $E_T^W$ is the magnitude of the vector sum of the \met\ and leading lepton $p_T$). Only the four leading-$E_T$ jets are considered in computing this variable. Jets arising from gluon radiation (as is the case for most of the background) tend to have lower values of $K_{T{\rm min}}^\prime$.} \item{$\boldsymbol{{m_{jj{\text{min}}}}}$, defined as the smallest dijet mass of pairs of selected jets. This variable is sensitive to gluon radiation and tends to be smaller for background than for signal.} \item{$\boldsymbol{h}$, defined as the scalar sum of all the selected jet and lepton energies. Jets arising from gluon radiation often have lower energy than jets in \ttbar~ events, and leptons arising from the decay of heavy flavor jets often have lower energy than leptons from $W$ boson decay, so background events tend to have smaller values of $h$ than signal.} \item{{\bf $\boldsymbol{ \chi^2_k}$}, defined as the $\chi^2$ for a kinematic fit of $\ell+$jets final states to the \ttbar\ hypothesis. Signal events tend to have smaller $\chi^2$ values than background. This variable is not used for dilepton events, for which a kinematic fit is underconstrained.} \item{$\boldsymbol{\Delta\phi(\hbox{\bf lepton}, \met)}$, defined as the angle between the leading lepton and the \met. $W+$jets events with \met\ arising from mismeasured lepton $p_{T}$ tend to have $\Delta\phi(\hbox{lepton}, \met) \approx 0$ or $\pi$.} \item{{\bf $\boldsymbol{b}$ jet content of the event}. Due to the long lifetime of the $b$ quark, tracks within jets arising from $b$ quarks have different properties (such as larger impact parameters with respect to the PV and the presence of secondary decay vertices) than tracks within light-quark or gluon jets. The consistency of a given jet with the hypothesis that the jet was produced by a $b$ quark is quantified with a neural network (NN) that considers several properties of the tracks contained within the jet cone~\cite{bidNIM}. In the $\ell+$jets channels, we take the average of the NN values NN$_b$ of the two most $b$-like jets to form a variable called NN$_{b{\rm avg}}$, and in the dilepton channels we take the NN$_b$ values of the two most $b$-like jets as separate variables NN$_{b1}$ (the largest NN$_{b}$ value) and NN$_{b2}$ (the second-largest NN$_b$ value). For top quark events, these variables tend to be close to one, while for events containing only light jets they tend to be close to zero.} \item{$\boldsymbol {\met}$ {\bf or} $\boldsymbol{\chi^2_Z}$}. For the $e\mu$ and $ee$ channels only, \met\ is considered as a variable in the likelihood discriminant. In the $\mu\mu$ channel, where spurious \met\ can arise from mismeasurement of the muon momentum, we instead use $\chi^2_Z$, the $\chi^2$ of a kinematic fit to the $Z\rightarrow\mu\mu$ hypothesis. \item{\bf Dilepton mass }$\boldsymbol{m_{\ell\ell}}.$ Also for the dilepton channels only, the invariant mass of the lepton pair is considered as a variable in the likelihood discriminant.
The motivation is to discriminate against $Z$ boson production. \end{itemize} We consider all combinations of the above variables to select the optimal set to use for each \ttbar\ decay channel. For a given combination of variables, the likelihood ratio $L_t$ is defined as \begin{eqnarray} L_t = \frac{\exp\left\{\sum_{i=1}^{N_{\rm var}} [\ln(\frac{S}{B})_i^{\text{fit}}]\right\}} {\exp\left\{\sum_{i=1}^{N_{\rm var}} [\ln(\frac{S}{B})_i^{\text{fit}}]\right\}+ 1}, \label{eq:claslhood} \end{eqnarray} where $N_{\rm var}$ is the number of input variables used in the likelihood, and $(\frac{S}{B})_i^{\text{fit}}$ is the ratio of the parameterized signal and background probability density functions. We consider all possible subsets of the above variables for use in $L_t$ and scan across all potential selection criteria on $L_t$. For each $L_t$ definition and prospective selection criterion, we compute the following figure of merit (FOM): \begin{eqnarray} {\rm FOM} = \frac{N_S}{\sqrt{N_S + N_B + \sigma^2_{B}}}, \label{eq:FOM} \end{eqnarray} where $N_S$ and $N_B$ are the numbers of signal and background events expected to satisfy the $L_t$ selection.\\ The term $\sigma_{B}$ reflects the uncertainty in the background selection efficiency arising from any mis-modeling of the input variables in the MC. To assess $\sigma_{B}$, we compare each variable in data and MC in background-dominated samples. The background-dominated samples are created by forming a multivariate likelihood ratio (Eq.~\ref{eq:claslhood}) that does not use the variable under study, nor any variable that is strongly correlated with it (i.e., with a correlation coefficient outside the range $-$0.10 to 0.10). We select events that have low values of this likelihood, and are therefore unlikely to be \ttbar\ events, such that 95\% of MC \ttbar\ events are rejected. Because the \ttbar\ contribution to the selected data sample is negligible, we can directly compare the background model to data. The impact of any mis-modeling on the likelihood distribution is assessed by taking the ratio of the observed to the expected distributions as a function of each variable and fitting this ratio to a polynomial. The result is that for each variable $i$ we build a function $k_i$ that encodes the data/MC discrepancies in that variable. We then weight each simulated background event entering a given likelihood according to these data/MC differences: for a likelihood that uses $n$ of the possible variables, the event is given a weight \begin{eqnarray} w=\prod_{i=1}^{n} k_i(v_i). \end{eqnarray} The quantity $\sigma_{B}$ is the difference in the predicted background yield when the unweighted and weighted $L_t$ distributions are used for the background. This uncertainty is propagated through the analysis as one component of the total uncertainty in the background yield. \begin{table}[hhh] \caption{\label{tab:optimization} The set of variables chosen for use in $L_t$ for the $e$+jets and $\mu$+jets channels.
The numbers of background and $t\bar{t}$ events in the initially-selected data, as determined from a fit to the $L_t$ distribution, are also presented.} \begin{tabular}{lr@{$\,\pm \,$}lr@{$\,\pm \,$}l} \hline \hline & \multicolumn{2}{c}{$e+$jets} & \multicolumn{2}{c}{$\mu+$jets}\\ \hline Events passing initial selection & \multicolumn{2}{c}{1442} & \multicolumn{2}{c}{1250}\\ \hline Variables in best $L_t$ & \multicolumn{2}{c}{${\cal C}$} & \multicolumn{2}{c}{${\cal C}$}\\ & \multicolumn{2}{c}{${H_T}$} & \multicolumn{2}{c}{${H_T}$} \\ & \multicolumn{2}{c}{${K_{T\text{min}}^\prime}$} & \multicolumn{2}{c}{${K_{T\text{min}}^\prime}$} \\ & \multicolumn{2}{c}{NN$_{b{\rm avg}}$} & \multicolumn{2}{c}{NN$_{b{\rm avg}}$}\\ & \multicolumn{2}{c}{$\chi^2_k$} & \multicolumn{2}{c}{$h$} \\ & \multicolumn{2}{c}{${m_{jj{\text{min}}}}$} & \multicolumn{2}{c}{}\\ & \multicolumn{2}{c}{Aplanarity} & \multicolumn{2}{c}{} \\ \hline $N$ (\ttbar) & 592.6 & 31.8 & 612.7 & 31.0\\ $N$ ($W+$jets) & 690.2 & 21.8 & 579.8 & 18.6\\ $N$ (multijet) & 180.3 & 9.9 & 6.5 & 4.9\\ \hline \hline \end{tabular} \end{table} \begin{table*}[htbp] \caption{\label{tab:ll_ltfits}The set of variables chosen for use in $L_t$ for the dilepton channels. The numbers of background and $t\bar{t}$ events in the initially-selected data, as determined from a fit to the $L_t$ distribution, are also presented.} \begin{center} \begin{tabular}{lr@{$\,\pm \,$}llr@{$\,\pm \,$}llr@{$\,\pm \,$}l} \hline \hline & \multicolumn{2}{c}{$e\mu$} & & \multicolumn{2}{c}{$ee$} & &\multicolumn{2}{c}{$\mu\mu$} \\ \hline Events passing initial selection & \multicolumn{2}{c}{323} & & \multicolumn{2}{c}{3275} & & \multicolumn{2}{c}{5740} \\ \hline Variables in optimized $L_t$ & \multicolumn{2}{c}{${\cal A}$,${\cal S}$,$h$,$m_{jj\text{min}}$} & $\mbox{ }$ &\multicolumn{2}{c}{${\cal A}$,${\cal S}$,$m_{jj\text{min}}$} & $\mbox{ }$ & \multicolumn{2}{c}{${\cal A}$,${\cal S}$,$m_{jj\text{min}}$,$K_{T\text{min}}^\prime$}\\ & \multicolumn{2}{c}{$K_{T\text{min}}^\prime$,\met,NN$_{b1}$,$m_{\ell\ell}$} & &\multicolumn{2}{c}{\met,NN$_{b1}$,$m_{\ell\ell}$} & & \multicolumn{2}{c}{$\chi^2_Z$,NN$_{b1}$}\\ \hline $N$ (\ttbar) & 178.7 & 15.6 & & 74.9 & 10.7 & & 86.0 & 13.8 \\ $N$ (background) & 144.3 & 14.5 & & 3200 & 57 & & 5654 & 76 \\ \hline \hline \end{tabular} \end{center} \end{table*} The sets of variables and $L_t$ selection criteria that maximize the FOM defined in Eq.~\ref{eq:FOM} for each \ttbar\ final state are shown in Tables~\ref{tab:optimization} and~\ref{tab:ll_ltfits}. Figures~\ref{fig:input_ejets}--\ref{fig:mumu_apla_spher} show the distributions of the variables in the best likelihood discriminant $L_t$ for the events passing the preselection cuts, where the signal and background contributions are normalized as described below. In addition, we use $L_t$ to determine the signal and background content of the initially-selected sample by performing a binned Poisson maximum likelihood fit to the $L_t$ distribution, where the signal and total background normalizations are free parameters. The $W+$jets contribution is determined by the fit to the $L_t$ distribution, while the multijet component is constrained to be consistent with the value determined from Eqs.~\ref{eq:matrix1} and~\ref{eq:matrix2}. In the dilepton channels the relative contributions of the different background sources are fixed according to their expected yield, but the total background is allowed to float.
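The structure of this yield fit can be illustrated with a minimal sketch, assuming unit-normalized signal and background template histograms and omitting the multijet constraint and the per-source breakdown; the function names are ours.

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def fit_yields(data, sig_shape, bkg_shape):
    # Binned Poisson maximum-likelihood fit to the L_t distribution.
    #   data:      observed counts per L_t bin
    #   sig_shape: unit-normalized ttbar template
    #   bkg_shape: unit-normalized summed-background template
    def nll(params):
        n_sig, n_bkg = params
        mu = n_sig * sig_shape + n_bkg * bkg_shape  # expected counts
        mu = np.clip(mu, 1e-9, None)                # guard against log(0)
        return np.sum(mu - data * np.log(mu))       # -ln L up to a constant
    total = float(np.sum(data))
    result = minimize(nll, x0=[0.5 * total, 0.5 * total],
                      bounds=[(0.0, None), (0.0, None)],
                      method="L-BFGS-B")
    return result.x  # fitted (N_signal, N_background)
\end{verbatim}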
The signal and background yields in the initially-selected sample for the $\ell+$jets channels are listed in Table~\ref{tab:optimization}, and for the dilepton channels in Table~\ref{tab:ll_ltfits}. Figures~\ref{fig:BestLt} and~\ref{fig:BestLtll} show the distribution of the best likelihood discriminant for each channel, where the signal and background contributions are normalized according to the values returned by the fit. Tables~\ref{tab:data_selection} and~\ref{tab:llfinal} show the optimal $L_t$ cut value for each channel and the final number of events in data and the expected numbers of signal and background events after applying the $L_t$ requirement. \begin{figure*}[tbp] \includegraphics[scale=0.4]{ejets_presel_apla.eps} \includegraphics[scale=0.4]{ejets_presel_cent.eps} \includegraphics[scale=0.4]{ejets_presel_ht.eps} \includegraphics[scale=0.4]{ejets_presel_hitfitchisq.eps} \includegraphics[scale=0.4]{ejets_presel_dijetmass.eps} \includegraphics[scale=0.4]{ejets_presel_ktminp.eps} \includegraphics[scale=0.4]{ejets_presel_nnbvariable.eps} \caption{\label{fig:input_ejets} (Color online) Comparison between data and MC, for preselected events, of the variables chosen for the best likelihood discriminant $L_{t}$ in the $e+$jets channel: (a) ${\cal A}$, (b) ${\cal C}$, (c) $H_T$, (d) $\chi_k^2$, (e) $m_{jj\text{min}}$, (f) ${K_{T\text{min}}^\prime}$, and (g) NN$_{b\rm{avg}}$. The uncertainties on the data points are statistical only. } \end{figure*} \begin{figure*}[tbp] \includegraphics[scale=0.4]{mujets_presel_cent.eps} \includegraphics[scale=0.4]{mujets_presel_h.eps} \includegraphics[scale=0.4]{mujets_presel_ktminp.eps} \includegraphics[scale=0.4]{mujets_presel_nnbvariable.eps} \includegraphics[scale=0.4]{mujets_presel_ht.eps} \caption{\label{fig:input_mujets} (Color online) Comparison between data and MC, for preselected events, of the variables chosen for the best likelihood discriminant $L_{t}$ in the $\mu+$jets channel: (a) ${\cal C}$, (b) $h$, (c) ${K_{T\text{min}}^\prime}$, (d) NN$_{b{\rm avg}}$ and (e) $H_T$. The uncertainties on the data points are statistical only.} \end{figure*} \begin{figure*} \begin{center} \includegraphics[scale=0.4]{EMU_Checkvars-presel-apla.eps} \includegraphics[scale=0.4]{EMU_Checkvars-presel-spher.eps} \includegraphics[scale=0.4]{EMU_Checkvars-presel-dijetmass.eps} \includegraphics[scale=0.4]{EMU_Checkvars-presel-ktminp.eps} \includegraphics[scale=0.4]{EMU_Checkvars-presel-met.eps} \includegraphics[scale=0.4]{EMU_Checkvars-presel-maxNN.eps} \includegraphics[scale=0.4]{EMU_Checkvars-presel-h.eps} \includegraphics[scale=0.4]{EMU_Checkvars-presel-dilepmass.eps} \caption{(Color online) Comparison between data and MC, for preselected events, of the variables chosen for the best likelihood discriminant $L_{t}$ in the $e\mu$ channel: (a) ${\cal A}$, (b) ${\cal S}$, (c) $m_{jj\text{min}}$, (d) ${K_{T\text{min}}^\prime}$, (e) \met, (f) NN$_{b1}$, (g) $h$, and (h) $m_{\ell\ell}$.
The uncertainties on the data points are statistical only.} \label{fig:emu_cent_spher} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[scale=0.4]{EE_Checkvars-presel-apla.eps} \includegraphics[scale=0.4]{EE_Checkvars-presel-spher.eps} \includegraphics[scale=0.4]{EE_Checkvars-presel-dijetmass.eps} \includegraphics[scale=0.4]{EE_Checkvars-presel-met.eps} \includegraphics[scale=0.4]{EE_Checkvars-presel-maxNN.eps} \includegraphics[scale=0.4]{EE_Checkvars-presel-dilepmass.eps} \caption{(Color online) Comparison between data and MC, for preselected events, of the variables chosen for the best likelihood discriminant $L_{t}$ in the $ee$ channel: (a) ${\cal A}$, (b) ${\cal S}$, (c) $m_{jj\text{min}}$, (d) \met, (e) NN$_{b1}$, and (f) $m_{\ell\ell}$. The uncertainties on the data points are statistical only.} \label{fig:ee_apla_spher} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[scale=0.4]{MUMU_Checkvars-presel-apla.eps} \includegraphics[scale=0.4]{MUMU_Checkvars-presel-spher.eps} \includegraphics[scale=0.4]{MUMU_Checkvars-presel-ktminp.eps} \includegraphics[scale=0.4]{MUMU_Checkvars-presel-dijetmass.eps} \includegraphics[scale=0.4]{MUMU_Checkvars-presel-met.eps} \includegraphics[scale=0.4]{MUMU_Checkvars-presel-maxNN.eps} \caption{(Color online) Comparison between data and MC, for preselected events, of the variables chosen for the best likelihood discriminant $L_{t}$ in the $\mu\mu$ channel: (a) ${\cal A}$, (b) ${\cal S}$, (c) ${K_{T\text{min}}^\prime}$, (d) $m_{jj\text{min}}$, (e) $\chi^2_Z$, and (f) NN$_{b1}$. The uncertainties on the data points are statistical only.} \label{fig:mumu_apla_spher} \end{center} \end{figure*} \begin{figure} \includegraphics[scale=0.40]{BestLH_mujets.eps} \includegraphics[scale=0.40]{BestLH_ejets.eps} \caption{\label{fig:BestLt} (Color online) The best likelihood discriminant $L_t$ for the (a) $\mu+$jets and (b) $e+$jets channels. The normalization of the signal and background models is determined by the Poisson maximum likelihood fit to the $L_t$ distribution. The arrows mark the required $L_t$ values for events in each channel.} \end{figure} \begin{figure} \includegraphics[scale=0.40]{EMU_Lt.eps} \includegraphics[scale=0.40]{EE_Lt_Log.eps} \includegraphics[scale=0.40]{MUMU_Lt_Log.eps} \caption{\label{fig:BestLtll} (Color online) The best likelihood discriminant $L_t$ for the (a) $e\mu$, (b) $ee$ and (c) $\mu\mu$ decay channels. The normalization of the signal and background models is determined by the Poisson maximum likelihood fit to the $L_t$ distribution.
The arrows mark the required $L_t$ values for events in each channel.} \end{figure} \begin{table}[hhh] \caption{\label{tab:data_selection} Expected background and $t\bar{t}$ yields, and the number of events observed, after the selection on $L_t$ in the $\ell+$jets decay channels.} \begin{tabular}{lr@{$\,\pm \,$}lr@{$\,\pm \,$}l} \hline \hline & \multicolumn{2}{c}{$e+$jets} & \multicolumn{2}{c}{$\mu+$jets} \\ \hline Optimized $L_t$ requirement & \multicolumn{2}{c}{$>$ 0.58} & \multicolumn{2}{c}{$>$ 0.29} \\ \hline \ttbar & 484.4 & 41.4 & 567.2 & 47.3\\ $W+$jets & 111.7 & 12.6 & 227.7 & 19.2\\ Multijet & 58.1 & 3.9 & 4.0 & 3.1\\ \hline Total & 656.2 & 43.4 & 798.9 & 51.2\\ \hline Observed & \multicolumn{2}{c}{628} & \multicolumn{2}{c}{803} \\ \hline \hline \end{tabular} \end{table} \begin{center} \begin{table} \caption{\label{tab:llfinal} Expected background and $t\bar{t}$ yields, and the number of events observed, after the selection on $L_t$ in the dilepton decay channels.} \begin{tabular}{lr@{$\,\pm \,$}lr@{$\,\pm \,$}lr@{$\,\pm \,$}l} \hline \hline Source & \multicolumn{2}{c}{$e\mu$} & \multicolumn{2}{c}{$ee$} & \multicolumn{2}{c}{$\mu\mu$} \\ \hline Optimized $L_t$ requirement & \multicolumn{2}{c}{$>$ 0.28} & \multicolumn{2}{c}{$>$ 0.934} & \multicolumn{2}{c}{$> 0.972$} \\ \hline $t\bar{t}$ & 186.6 & 0.4 & 44.5 & 0.3 & 43.6 & 0.3 \\ $Z/\gamma^* \rightarrow \ell^+\ell^-$ & \multicolumn{2}{c}{N/A} & 7.4 & 1.0 & 19.1 & 1.3\\ $Z/\gamma^* \rightarrow \tau\tau$ & 11.2 & 3.7 & 0.8 & 0.3 & 0.35 & 0.05 \\ $WW$ & 5.6 & 1.4 & 0.3 & 0.1& 0.13 & 0.05\\ $WZ$ & 1.5 & 0.5 & 0.28 & 0.04 & 0.16 & 0.01\\ $ZZ$ & 1.0 & 0.5 & 0.34 & 0.04 & 0.57 & 0.04\\ Misidentified jets & 15.9 & 3.1 & 0.54 & 0.48 & 3.7 & 2.5\\ \hline Total & 221.7 & 5.1 & 54.2 & 1.2 & 67.7 & 3.9 \\ \hline Observed & \multicolumn{2}{c}{193} & \multicolumn{2}{c}{58} & \multicolumn{2}{c}{68} \\ \hline \hline \end{tabular} \end{table} \end{center} \section{\label{sec:template}Templates} After the final event selection, \coss\ is calculated for each event by using the reconstructed top quark and $W$ boson four-momenta. In the $\ell+$jets decay channel, the four-momenta are reconstructed using a kinematic fit with the constraints: (i) two of the jets should give the invariant mass of the $W$ boson (80.4 GeV/$c^2$), (ii) the invariant mass of the lepton and neutrino should be the $W$ boson mass, (iii) the masses of the reconstructed top and anti-top quarks should be 172.5 GeV/$c^2$, and (iv) the ${\vec p_T}$ of the $t\bar{t}$ system should be opposite that of the unclustered energy in the event. The four highest-$p_T$ jets in each event are used in the fit, and among the twelve possible permutations in the assignment of the jets to initial partons, the solution with the highest probability is chosen, considering both the NN$_b$ values of the four jets and $\chi^2_k$. This procedure selects the correct jet assignment in 59\% of MC $t\bar{t}$ events. With the jets assigned, the complete kinematics of the \ttbar\ decay products (i.e., including the neutrino) are determined, allowing us to boost to the rest frame of each $W$ boson in the event. We compute \coss\ for the $W$ boson that decays leptonically. The hadronic $W$ boson decay from the other top quark in the event also contains information about the helicity of that $W$ boson, but since we do not distinguish between jets formed from up-type and down-type quarks, we cannot identify the down-type fermion to calculate \coss.
We therefore calculate only $|\coss|$, which is identical for both jets in the rest frame of the hadronically decaying $W$ boson. Left-handed and right-handed $W$ bosons have identical $|\coss|$ distributions, but we can distinguish either of those states from longitudinal $W$ bosons, thereby improving the precision of the measurement. In the dilepton decay channel, the presence of two neutrinos prevents a constrained kinematic fit, but with the assumption that the top quark mass is 172.5~GeV/$c^2$, an algebraic solution for the neutrino momenta can be obtained (up to a two-fold ambiguity in pairing the jets and leptons, and a four-fold solution ambiguity). To account for the lepton and jet energy resolutions, the procedure described above is repeated 500 times with the energies fluctuated according to their uncertainties, and the average of all the solutions is used as the value of \coss\ for each top quark. As mentioned above, the extraction of both $f_0$ and $f_+$ requires comparing the data with MC models in which both of these values are varied. Since {\sc alpgen} can only produce linear combinations of $V-A$ and $V+A$ $tWb$ couplings, it is unable to produce non-SM $f_0$ values, and can produce $f_+$ values only in the range $[0, 0.30]$. We therefore start with {\sc alpgen} $V-A$ and $V+A$ samples, and divide the samples into bins of parton-level \coss. For each bin, we note the efficiency for the event to satisfy the event selection and the distribution of reconstructed \coss\ values. With this information we determine the expected distribution of reconstructed \coss\ values for any assumed $W$ helicity fractions, and in particular we choose to derive the distributions expected for purely left-handed, longitudinal, or right-handed $W$ bosons, as shown in Fig.~\ref{fig:pureTemplates}. The deficit of entries near $\coss = -1$ relative to the expectation from Eq.~\ref{eq:expcost} is due to the $p_T$ requirement imposed when selecting leptons. We verify the reweighting procedure by comparing the generated $V\pm A$ {\sc alpgen} samples with the combination of reweighted distributions expected for $V\pm A$ couplings, and find that these distributions agree within the MC statistics. The templates for the background samples are obtained directly from the relevant MC or data background samples, and are shown in Fig.~\ref{fig:bkgTemplates}. \begin{figure} \includegraphics[scale=0.45]{ljetsLepDataModelPure.eps} \includegraphics[scale=0.45]{ljetsHadDataModelPure.eps} \includegraphics[scale=0.45]{dilepDataModelPure.eps} \caption{\label{fig:pureTemplates} Distribution of \coss\ in \ttbar\ MC samples that were reweighted to derive the distributions for purely left-handed, longitudinal, or right-handed $W$ bosons. The distributions for leptonically- and hadronically-decaying $W$ bosons in $\ell+$jets events are shown in (a) and (b), respectively, and the distribution for dilepton events is shown in (c). For hadronically decaying $W$ bosons the \coss\ distributions for left- and right-handed $W$ bosons are identical. All of the distributions are normalized to unity. } \end{figure} \begin{figure} \includegraphics[scale=0.40]{ljetsBkgTemplates.eps} \includegraphics[scale=0.40]{ljetsBkgTemplateshad.eps} \includegraphics[scale=0.40]{llBkgTemplates.eps} \caption{\label{fig:bkgTemplates} (Color online) Distribution of \coss\ in background samples.
The distributions for leptonically and hadronically decaying $W$ bosons in $\ell+$jets events are shown in (a) and (b), respectively, and the distribution for dilepton events is shown in (c). All of the distributions are normalized to the expected yield for each source of background. } \end{figure} \section{\label{sec:2Dfit}Model-independent $W$ Helicity Fit} The $W$ boson helicity fractions are extracted with a binned Poisson likelihood $L(f_0,f_+)$ that compares the distribution of \coss\ in the data with the sum of signal and background templates. The likelihood is a function of the $W$ boson helicity fractions $f_0$ and $f_+$, defined as \begin{eqnarray} L(f_0,f_+) & = & \prod_{i=1}^{N_{\rm chan}}\prod_{j=1}^{N_{{\rm bkg},i}}e^{-(n_{b,ij}-\overline{n}_{b,ij})^2/2\sigma_{b,ij}^2} \times \nonumber \\ & & \prod_{k=1}^{N_{{\rm bins},i}} P(d_{ik};n_{ik}) \label{eq:lhood} \end{eqnarray} where $P(d_{ik};n_{ik})$ is the Poisson probability for observing $d_{ik}$ events given a mean expectation value $n_{ik}$, $N_{\rm chan}$ is the number of channels in the fit (a maximum of five in this analysis: $e+$jets, $\mu+$jets, $e\mu$, $ee$, and $\mu\mu$), $N_{{\rm bkg},i}$ is the number of background sources in the $i^{\rm th}$ channel, ${N_{{\rm bins},i}}$ is the number of bins in the \coss\ distribution for any given channel (plus the number of bins in the $|\coss|$ distribution for hadronic $W$ boson decays in the $\ell+$jets channels), $\overline{n}_{b,ij}$ is the nominal number of \coss\ measurements from the $j^{\rm th}$ background contributing to the $i^{\rm th}$ channel, $\sigma_{b,ij}$ is the uncertainty on $\overline{n}_{b,ij}$, ${n_{b,ij}}$ is the fitted number of events for this background, $d_{ik}$ is the number of data events in the $k^{\rm th}$ bin of \coss\ for the $i^{\rm th}$ channel, and $n_{ik}$ is the predicted sum of signal and background events in that bin. The $n_{ik}$ can be expressed as \begin{eqnarray} n_{ik} & = & n_{s,i}\frac{\varepsilon_0 f_0 p_{0,ik} + \varepsilon_+ f_+ p_{+,ik} + \varepsilon_-(1-f_0-f_+)p_{-,ik}}{f_- \varepsilon_- + f_0 \varepsilon_0 + f_+ \varepsilon_+} \nonumber \\ & & + \sum_{j=1}^{N_{\rm bkg}}n_{b,ij}p_{b,ijk} \end{eqnarray} where $n_{s,i}$ represents the number of \coss\ measurements from signal events in a given channel, the $p$ represent the probabilities for an event from some source to appear in bin $k$ for channel $i$ (as determined from the templates), the subscripts $0$, $+$, and $-$ refer to the templates for \ttbar\ events in which the $W$ bosons have zero, positive, or negative helicity, and the subscript $b,ij$ refers to the template for the $j^{\rm th}$ background source in the $i^{\rm th}$ channel. The efficiency for a \ttbar\ event to satisfy the selection criteria depends upon the helicity states of the two $W$ bosons in the event; the $\varepsilon$ are therefore necessary to translate the fractions of events with different helicity states in the selected sample to the fractions that were produced. The quantity $\varepsilon_\lambda$ is defined as \begin{eqnarray} \varepsilon_{\lambda} = \sum_{\lambda^\prime} f_{\lambda^\prime} \varepsilon_{\lambda \lambda^\prime} \end{eqnarray} where $ \varepsilon_{\lambda \lambda^\prime}$ is the relative efficiency for events with $W$ bosons in the $\lambda$ and $\lambda^\prime$ helicity states to satisfy the selection criteria. The values of $ \varepsilon_{\lambda \lambda^\prime}$ for each \ttbar\ decay channel are given in Table~\ref{tab:relEff}.
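To make the structure of Eq.~(\ref{eq:lhood}) explicit, a minimal single-channel sketch of the negative log-likelihood in Python is given below; all array names, shapes, and inputs are hypothetical, and the actual analysis uses its own fitting framework.
\begin{verbatim}
import numpy as np

def nll_one_channel(f0, fp, n_b, d, n_s, p0, pp, pm,
                    eps, nbar_b, sig_b, p_bkg):
    # -ln L for one channel, up to constant terms.
    # d: observed counts per bin; p0, pp, pm: unit-normalized signal
    # templates for zero/positive/negative helicity; eps = (e-, e0, e+):
    # relative selection efficiencies; p_bkg: (n_bkg, n_bins) background
    # templates; n_b: fitted background yields, constrained by
    # (nbar_b, sig_b).
    fm = 1.0 - f0 - fp
    num = eps[1] * f0 * p0 + eps[2] * fp * pp + eps[0] * fm * pm
    den = fm * eps[0] + f0 * eps[1] + fp * eps[2]
    n = n_s * num / den + n_b @ p_bkg            # expected counts per bin
    nll = np.sum(n - d * np.log(n))              # Poisson terms
    nll += 0.5 * np.sum((n_b - nbar_b) ** 2 / sig_b ** 2)  # constraints
    return nll
\end{verbatim}
Minimizing this function over $(f_0,f_+)$ and the background yields with a standard numerical minimizer reproduces the structure of the fit described below.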
While performing the fit, both $f_0$ and $f_+$ are allowed to float freely, and the measured $W$ helicity fractions correspond to those leading to the highest likelihood value. \begin{table} \caption{\label{tab:relEff} Efficiencies of different $W$ boson helicity configurations in \ttbar\ events to pass the selection criteria, relative to the efficiencies for a mixture of $V-A$ and $V+A$ events. The indices $-,0$ and $+$ correspond to the helicity states of the two $W$ bosons, and their order is leptonic $W$, hadronic $W$ for the \ljets\ channel, and arbitrary for dilepton channels (where there is no distinction between the two $W$ bosons in the event). Small differences in values in the dilepton channels under interchange of the indices are from variations in MC statistics.} \begin{tabular}{cccccc} \hline \hline & $e$+jets & $\mu$+jets & $e\mu$ & $ee$ & $\mu\mu$ \\ \hline $\varepsilon_{--}$ & 0.76 & 0.73 & 0.67 & 0.68 & 0.68 \\ $\varepsilon_{-0}$ & 0.87 & 0.83 & 0.84 & 0.86 & 0.85 \\ $\varepsilon_{-+}$ & 0.76 & 0.73 & 0.88 & 0.89 & 0.89 \\ $\varepsilon_{0-}$ & 0.94 & 0.95 & 0.85 & 0.86 & 0.87 \\ $\varepsilon_{00}$ & 1.08 & 1.09 & 1.06 & 1.05 & 1.05 \\ $\varepsilon_{0+}$ & 0.94 & 0.95 & 1.10 & 1.05 & 1.05 \\ $\varepsilon_{+-}$ &0.92 & 0.96 & 0.89 & 0.88 & 0.91 \\ $\varepsilon_{+0}$ & 1.06 & 1.11 & 1.12 & 1.03 & 1.07 \\ $\varepsilon_{++}$ & 0.92 & 0.96 & 1.15 & 0.99 & 1.03 \\ \hline \hline \end{tabular} \end{table} We check the performance of the fit using simulated ensembles of events, with all values of $f_0$ and $f_+$ from 0 through 1 as inputs in increments of 0.1, with the sum of $f_0$ and $f_+$ not exceeding unity. We simulate input data distributions for the various values by combining the pure left-handed, longitudinal, and right-handed templates in the assumed proportions. In these ensembles, we draw a random subset of the simulated events, with the number of events chosen in each channel fixed to the number observed in data. Within the constant total number of events, the numbers of signal and background events are fluctuated binomially around the expected values. Each of these sets of simulated events is passed through the maximum likelihood fit using the standard \coss\ templates. We find that the average fit output value is close to the input value across the entire range of possible values for the helicity fractions, with the small differences between the input and output values being consistent with statistical fluctuations in the ensembles. As an example, the set of $f_0$ and $f_+$ values obtained when $t\bar{t}$ events are drawn in the proportions expected in the SM is shown in Fig.~\ref{fig:ensemExample}. \begin{figure} \includegraphics[scale=0.4]{exampleEnsembleResult.eps} \caption{\label{fig:ensemExample} Fit values for $f_0$ and $f_+$ obtained with 1000 MC simulations of the $W$ boson helicity measurement. The SM helicity fractions, marked by the star, were taken as input to the simulations. The triangle corresponds to the physically allowed region where $f_0 + f_+ \le 1$.} \end{figure} \section{\label{sec:syst}Systematic Uncertainties} Systematic uncertainties are evaluated using simulated event ensembles in which both changes in the background yield and changes in the shape of the \coss\ templates in signal and background are considered. The simulated samples from which the events are drawn can be either the nominal samples or samples in which the systematic effect under study has been shifted away from the nominal value. 
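Both the ensemble tests above and the systematic-uncertainty evaluation that follows rely on drawing such simulated datasets. A minimal sketch of a single pseudo-experiment is given below; the binning is hypothetical, while the total count of 628 and the signal fraction of 0.74 correspond to the $e+$jets column of Table~\ref{tab:data_selection}.
\begin{verbatim}
import numpy as np

def draw_pseudo_experiment(rng, n_obs, frac_sig, p_sig, p_bkg):
    # One simulated dataset: the total count is fixed to the number
    # observed in data, the signal/background split is fluctuated
    # binomially, and bin contents are drawn from normalized templates.
    n_sig = rng.binomial(n_obs, frac_sig)
    return (rng.multinomial(n_sig, p_sig)
            + rng.multinomial(n_obs - n_sig, p_bkg))

rng = np.random.default_rng(1)
toy = draw_pseudo_experiment(rng, 628, 0.74,
                             np.full(25, 0.04), np.full(25, 0.04))
\end{verbatim}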
In general, the systematic uncertainties assigned to $f_0$ and $f_+$ are determined by taking an average of the absolute values of the differences in the average fit output values between the nominal and shifted $V-A$ and $V+A$ samples. The jet energy scale, jet energy resolution, and jet identification efficiency each have relatively small uncertainties that are difficult to observe above fluctuations in the MC samples. To make the effects more visible, we vary these quantities by $\pm$5 standard deviations, and then divide the resulting differences in the average fit output by 5. The top quark mass uncertainty corresponds to shifting $m_t$ by 1.4 GeV/$c^2$, which is the sum in quadrature of the uncertainty on the world average $m_t$ (1.1 GeV/$c^2$) and the difference between the world average value (173.3 GeV/$c^2$) and the value assumed in the analysis (172.5 GeV/$c^2$). We evaluate the contribution of template statistics to the uncertainty by repeating the fit to the data 1000 times, fluctuating the signal and background distributions according to their statistics in each fit. The uncertainties due to the modeling of \ttbar ~events are separated into several categories and evaluated using special-purpose MC samples. The uncertainty in the model of gluon radiation is assessed using {\sc pythia} MC samples in which the amount of gluon radiation is shifted upwards and downwards; the impact of NLO effects is assessed by comparing the default leading-order {\sc alpgen} generator with the NLO generator {\sc mc@nlo}~\cite{mcatnlo}; the uncertainty in the hadronic showering model is assessed by comparing {\sc alpgen} events showered with {\sc pythia} and with {\sc herwig}~\cite{herwig}; and lastly, the impact of color reconnection effects is assessed by comparing {\sc pythia} samples where the underlying event model does and does not include color reconnection. The uncertainty due to data and MC differences in the background \coss ~distribution is derived by taking the ratio of the data and the MC distribution for a background-enriched sample (defined by requiring that events have low values of $L_t$) and then using that ratio to re-weight the distribution of background MC events that satisfy the standard selection. The uncertainty in the heavy flavor content of the background is estimated by varying the fraction of background events with heavy flavor jets by $\pm 20$\%. Uncertainties due to the fragmentation of $b$ jets are evaluated by comparing the default fragmentation model, the Bowler scheme~\cite{bowler} tuned to data collected at the CERN LEP collider, with an alternate model tuned to data collected by the SLD collaboration~\cite{bfragtuning}. Uncertainties in the parton distribution functions (PDFs) are estimated using the set of $2\times20$ errors provided for the CTEQ6M~\cite{cteq6m} PDF. The analysis consistency uncertainty reflects the typical difference between the input helicity fractions and the average output values observed in fits to simulated event ensembles. Finally, we include an uncertainty corresponding to muon triggers and identification, as control samples indicate some substantial data/MC discrepancies for the loose selection we use. All the systematic uncertainties are summarized in Table~\ref{tab:2dsyst}. 
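The averaging procedure just described can be summarized in a short sketch; the interfaces are hypothetical, with {\tt fit} standing for a routine that returns the fitted $f_0$ or $f_+$ for one simulated ensemble.
\begin{verbatim}
import numpy as np

def shift_uncertainty(fit, nom_vma, nom_vpa,
                      shift_vma, shift_vpa, nsigma=1.0):
    # Systematic uncertainty from one source: average of the absolute
    # differences of the mean fit output between nominal and shifted
    # V-A and V+A ensembles, scaled back when the source was varied
    # by nsigma standard deviations (e.g. nsigma=5 for jet energy scale).
    d_vma = abs(np.mean([fit(e) for e in shift_vma])
                - np.mean([fit(e) for e in nom_vma]))
    d_vpa = abs(np.mean([fit(e) for e in shift_vpa])
                - np.mean([fit(e) for e in nom_vpa]))
    return 0.5 * (d_vma + d_vpa) / nsigma
\end{verbatim}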
\begin{table}[hhh] \caption{\label{tab:2dsyst} Summary of the absolute systematic uncertainties on $f_+$ and $f_{0}$.} \begin{tabular}{lcc} \hline \hline Source & Uncertainty ($f_+$) & Uncertainty ($f_0$) \\ \hline Jet energy scale & 0.007 & 0.009 \\ Jet energy resolution & 0.004 & 0.009 \\ Jet ID & 0.004 & 0.004 \\ Top quark mass & 0.011 & 0.009 \\ Template statistics & 0.012 & 0.023 \\ \ttbar ~model & 0.022 & 0.033 \\ Background model & 0.006 & 0.017 \\ Heavy flavor fraction & 0.011 & 0.026 \\ $b$ fragmentation & 0.000 & 0.001 \\ PDF & 0.000 & 0.000 \\ Analysis consistency & 0.004 & 0.006\\ Muon ID & 0.003 & 0.021 \\ Muon trigger & 0.004 & 0.020 \\ \hline Total & 0.032 & 0.060 \\ \hline \hline \end{tabular} \end{table} \section{\label{sec:p20}Result} Applying the model-independent fit to the Run IIb data, we find \begin{eqnarray} f_0 &=& 0.739 \pm 0.091 \hbox{ (stat.)} \pm 0.060 \hbox{ (syst.)} \\ \nonumber f_+ &=& -0.002 \pm 0.045 \hbox{ (stat.)} \pm 0.032 \hbox{ (syst.)}. \end{eqnarray} The comparison between the best-fit model and the data is shown in Fig.~\ref{fig:data2dmodel}, and the 68\% and 95\% C.L. contours in the $(f_+,f_0)$ plane are shown in Fig.~\ref{fig:data2dfit}(a). To account for systematic uncertainties, we perform an MC smearing of the $L$ distribution, where the width of the smearing in $f_0$ and $f_+$ is given by the systematic uncertainty on each helicity fraction, and the correlation coefficient of $-0.83$ between them is taken into account. To assess the consistency of the result with the SM, we note that the change in $-\ln L(f_0,f_+)$ (Eq.~\ref{eq:lhood}) between the best fit and the SM points is 0.24 considering only statistical uncertainties and 0.16 when systematic uncertainties are included. The probability of observing a greater deviation from the SM due to fluctuations in the data is 78\% when only the statistical uncertainty is considered and 85\% when both statistical and systematic uncertainties are considered. We have also split the data sample in various ways to check the internal consistency of the measurement. Using $\ell+$jets events only, we find \begin{eqnarray} f_0 &=& 0.767 \pm 0.117 \hbox{ (stat.)}, \\ \nonumber f_+ &=& 0.018 \pm 0.061 \hbox{ (stat.)}; \end{eqnarray} and when using only dilepton events we find \begin{eqnarray} f_0 &=& 0.677 \pm 0.144 \hbox{ (stat.)}, \\ \nonumber f_+ &=& -0.013 \pm 0.065 \hbox{ (stat.)}. \end{eqnarray} We also divide the sample into events with only electrons ($e+$jets and $ee$) and events with only muons ($\mu+$jets and $\mu\mu$). The results for electrons only are\\* \begin{eqnarray} f_0 &=& 0.816 \pm 0.142 \hbox{ (stat.)}, \\ \nonumber f_+ &=& -0.063 \pm 0.066 \hbox{ (stat.)}, \end{eqnarray} \\* and for muons only are \begin{eqnarray} f_0 &=& 0.618 \pm 0.150 \hbox{ (stat.)}, \\ \nonumber f_+ &=& 0.130 \pm 0.081 \hbox{ (stat.)}. \end{eqnarray} Finally, we perform fits in which one of the two helicity fractions is fixed to its SM value. Constraining $f_0$, we find \begin{eqnarray} f_+ = 0.014 \pm 0.025 \hbox{ (stat.)} \pm 0.028 \hbox{ (syst.)}. \end{eqnarray} We also constrain $f_+$ and measure $f_0$, finding \begin{eqnarray} f_0 = 0.735 \pm 0.051 \hbox{ (stat.)} \pm 0.051 \hbox{ (syst.)}.
\end{eqnarray} \begin{figure} \includegraphics[scale=0.45]{ljetsLepDataModel.eps} \\ \includegraphics[scale=0.45]{ljetsHadDataModel.eps}\\ \includegraphics[scale=0.45]{dilepDataModel.eps} \caption{\label{fig:data2dmodel} (Color online) Comparison of the \coss\ distribution in Run IIb data and the global best-fit model (solid line) and the SM (dashed line) for (a) leptonic $W$ boson decays in \ljets\ events, (b) hadronic $W$ boson decays in \ljets\ events, and (c) dilepton events.} \end{figure} \begin{figure*} \includegraphics[scale=0.40]{dataFit_allchan_runIIb_4fb_syst.eps} \includegraphics[scale=0.40]{dataFit_allchan_runIIcomb_5.4fb_syst.eps} \caption{\label{fig:data2dfit} Result of the model-independent $W$ boson helicity fit for (a) the Run IIb data sample and (b) the combined Run IIa and Run IIb data sample. In both plots, the ellipses indicate the 68\% and 95\% C.L. contours, the dot shows the best-fit value, the triangle corresponds to the physically allowed region where $f_0 + f_+ \le 1$, and the star marks the expectation from the SM.} \end{figure*} \section{\label{sec:p17p20}Combination with Our Previous Measurement} To combine this result with the previous measurement from Ref.~\cite{prevd0result}, we repeat the maximum likelihood fit with the earlier and current data samples and their respective MC models, treating them as separate channels in the fit. This is equivalent to multiplying the two-dimensional likelihood distributions in $f_0$ and $f_+$ corresponding to the two data sets. We determine the systematic uncertainty on the combined result by treating most uncertainties as correlated (the exception is template statistics) and propagating the uncertainties to the combined result. The results are presented in Table~\ref{tab:p17p20combsyst}. \begin{table}[hhh] \caption{\label{tab:p17p20combsyst} Summary of the combined systematic uncertainties on $f_+$ and $f_{0}$ for Run IIa and Run IIb.} \begin{tabular}{lcc} \hline \hline Source & Uncertainty ($f_+$) & Uncertainty ($f_0$) \\ \hline Jet energy scale & 0.009 & 0.010 \\ Jet energy resolution & 0.004 & 0.008 \\ Jet ID & 0.005 & 0.007 \\ Top quark mass & 0.012 & 0.009\\ Template statistics & 0.011 & 0.021 \\ \ttbar ~model & 0.024 & 0.039 \\ Background model & 0.008 & 0.023 \\ Heavy flavor fraction & 0.010 & 0.022 \\ $b$ fragmentation & 0.002 & 0.004 \\ PDF & 0.000 & 0.001 \\ Analysis consistency & 0.004 & 0.006 \\ Muon ID & 0.002 & 0.017 \\ Muon trigger & 0.003 & 0.024 \\ \hline Total & 0.034 & 0.065 \\ \hline \hline \end{tabular} \end{table} The combined result for the entire 5.4 fb$^{-1}$ sample is \begin{eqnarray} f_0 &=& 0.669 \pm 0.078 \hbox{ (stat.)} \pm 0.065 \hbox{ (syst.)}, \\ \nonumber f_+ &=& 0.023 \pm 0.041 \hbox{ (stat.)} \pm 0.034 \hbox{ (syst.)}. \end{eqnarray} The combined likelihood distribution is presented in Fig.~\ref{fig:data2dfit}(b). The probability of observing a greater deviation from the SM due to fluctuations in the data is 83\% when only statistical uncertainties are considered and 98\% when systematic uncertainties are included. Constraining $f_0$ to the SM value, we find \begin{eqnarray} f_+ = 0.010 \pm 0.022 \hbox{ (stat.)} \pm 0.030 \hbox{ (syst.)} \end{eqnarray} and constraining $f_+$ to the SM value gives \begin{eqnarray} f_0 = 0.708 \pm 0.044 \hbox{ (stat.)} \pm 0.048 \hbox{ (syst.)}.
\end{eqnarray} \section{\label{sec:summ}Conclusion} We have measured the helicity of $W$ bosons arising from top quark decay in \ttbar\ events, using both the $\ell+$jets and dilepton decay channels, and in a model-independent fit find \begin{align} f_0 = 0.669 & \pm 0.102 \\ \nonumber &[ \pm 0.078 \hbox{ (stat.)} \pm 0.065 \hbox{ (syst.)}], \\ \nonumber f_+ = 0.023 & \pm 0.053 \\ \nonumber &[ \pm 0.041 \hbox{ (stat.)} \pm 0.034 \hbox{ (syst.)}]. \end{align} The consistency of this measurement with the SM values $f_0 = 0.698$, $f_+=3.6\times10^{-4}$ is 98\%. We therefore report no evidence for new physics at the $tWb$ decay vertex. \section{Acknowledgement} \input acknowledgement.tex
\section{Introduction} Let $\gf$ be the field of order $2$, and let $\gf^n$ be the $n$-dimensional vector space over $\gf$. For $n\in \mathbb{N}$, we let $[n]=\{1,\ldots ,n \}$. A Boolean function $f\colon \gf^n \rightarrow \gf^m$ is said to be linear if there exists a Boolean $m\times n$ matrix $A$ such that $f(\mathbf{x})=A\mathbf{x}$ for every $\mathbf{x}\in \gf^n$. This is equivalent to saying that $f$ can be computed using only XOR gates. An \emph{XOR circuit} (or a \emph{linear circuit}) $C$ is a directed acyclic graph. There are $n$ nodes with in-degree $0$, called the \emph{inputs}. All other nodes have in-degree $2$ and are called \emph{gates}. There are $m$ nodes which are called the \emph{outputs}; these are labeled $y_1,\ldots,y_m$. The value of a gate is the sum of its two children (addition in $\gf$, denoted $\oplus$). The circuit $C$, with inputs $\mathbf{x}=(x_1,\ldots ,x_n)$, \emph{computes} the $m\times n$ matrix $A$ if the output vector computed by $C$, $\mathbf{y}=(y_1,\ldots , y_m)$, satisfies $\mathbf{y}=A\mathbf{x}$. In other words, output $y_i$ is defined by the $i$th row of the matrix. The \emph{size} of a circuit $C$ is the number of gates in $C$. The \emph{depth} is the number of gates on a longest directed path from an input to an output. For simplicity, we will let $m=n$ unless otherwise explicitly stated. For a matrix $A$, let $|A|$ be the number of nonzero entries in $A$. \paragraph{Our contributions:} In this paper we deal with a restriction of XOR circuits called \emph{cancellation-free} circuits, coined in \cite{boyarcombinationalappear}, where the authors noticed that many heuristics for finding small XOR circuits always produce cancellation-free XOR circuits. They asked the question of how large a separation there can be between these two models. Recently, Gashkov and Sergeev \cite{gashkov2011complexity} showed that the work of Grinchuk and Sergeev \cite{grinchukandsergeev} implied a separation of $\Omega\left(\frac{n}{\log^6 n\log\log n} \right)$. An improved separation of $\Omega\left(\frac{n}{\log^2 n} \right)$ follows from Lemma 4.1 and Lemma 4.2 in \cite{DBLP:journals/siamcomp/Jukna06}, although this implied separation was not published until recently \cite{juknasergeevSurvey}. We present an alternative proof of the same separation. Our proof is based on a different construction and uses communication complexity in a novel way that might have independent interest. Like the separation implied in the work \cite{juknasergeevSurvey}, but unlike the separations demonstrated in \cite{gashkov2011complexity,separatingnew}, our separation holds even in the case of constant depth circuits. We conclude that many heuristics for finding small XOR circuits cannot guarantee an approximation ratio better than $\Theta \left( \frac{n}{\log^{2}n} \right)$. We also study the complexity of computing the \emph{Sierpinski matrix} (described later), and show that the cancellation-free complexity is exactly $\frac{1}{2}n\log n$; furthermore, our proof holds for OR circuits, yielding a tight $\frac{1}{2}n\log n$ lower bound in that model as well. This result follows implicitly from the work of Kennes \cite{DBLP:journals/tsmc/Kennes92}; however, our proof is simpler and more direct. As a corollary we obtain an explicit matrix where the smallest OR circuit is a factor of $\Theta(\log n)$ larger than the smallest OR circuit for its complement.
Also we hope that our proof can be strengthened to give an $\omega (n)$ lower bound for XOR circuits for the Sierpinski matrix. A similar lower bound was shown independently by Selezneva in \cite{seleznevaProc,seleznevaArticle}. \section{Cancellation-Free XOR Circuits} \label{cancelfreelinear} For XOR circuits, the value computed by every gate is the parity of a subset of the $n$ variables. That is, the output of every gate $u$ can be considered as a vector $\kappa(u)$ in the vector space $\gf^n$, where $\kappa(u)_i=1$ if and only if $x_i$ is a term in the parity function computed by the gate $u$. We call $\kappa(u)$ the \emph{value vector} of $u$, and for input variables define $\kappa(x_i)=e^{(i)}$, the unit vector having the $i$th coordinate $1$ and all others $0$. It is clear by definition that if a gate $u$ has the two children $w,t$, then $\kappa(u)=\kappa(w)\oplus\kappa(t)$, where $\oplus$ denotes coordinate-wise addition in $\gf$. We say that an XOR circuit is \emph{cancellation-free} if for every pair of gates $u,w$ where $u$ is an ancestor of $w$, we have $\kappa(u)\geq \kappa(w)$, where $\geq$ denotes the usual coordinate-wise partial order. These are also called SUM circuits in \cite{separatingnew,juknasergeevSurvey}. If this is satisfied, the circuit never exploits the $\gf$-identity $a\oplus a=0$, so things do not ``cancel out'' in the circuit. Although it is not hard to see that cancellation-free circuits are equivalent to addition chains \cite{DBLP:conf/focs/Pippenger76,DBLP:journals/siamcomp/Pippenger80} and ``ensemble computations'' \cite{DBLP:books/fm/GareyJ79}, we stick to the term ``cancellation-free'', since we will think of it as a special case of XOR circuits. For a simple example demonstrating that cancellation-free circuits indeed are less powerful than general XOR circuits, consider the matrix \[ A = \begin{pmatrix} 1 & 1 & 0 &0 \\ 1 & 1 & 1 &0 \\ 1 & 1 & 1 &1 \\ 0 & 1 & 1 &1 \\ \end{pmatrix}. \] In Figure~\ref{fig:examplefig}, two circuits computing the matrix $A$ are shown; the circuit on the right uses cancellation, while the circuit on the left is cancellation-free and has one more gate. For this particular matrix, any cancellation-free circuit must use at least $5$ gates. \begin{figure} \begin{center} \scalebox{0.8}{ \input{twoExamplesCancel.tex} } \end{center} \caption{Two circuits computing the matrix $A$. The circuit on the left is cancellation-free, and has size $5$, one more than the circuit on the right.\label{fig:examplefig}} \end{figure} A different but closely related kind of circuit is the OR circuit. The definition is exactly the same as for XOR circuits, but with $\vee$ (logical OR) instead of $\oplus$; see \cite{nechiporuk1963rectifier,juknasergeevSurvey,DBLP:books/fm/GareyJ79}. Cancellation-free circuits are a special case of OR circuits, and every cancellation-free circuit can be interpreted as an OR circuit for the same matrix, as well as an XOR circuit. For a matrix $A$, we will let $C_\oplus(A)$, $C_{CF}(A)$, and $C_\vee(A)$ denote the sizes of the smallest XOR circuit, the smallest cancellation-free circuit, and the smallest OR circuit computing the matrix $A$, respectively. By the discussion above, the following is immediate: \begin{proposition} \label{orcancelremark} For every matrix $A$, $C_\vee(A)\leq C_{CF}(A)$. \end{proposition} This means in particular that any lower bound for OR circuits carries over to a lower bound for cancellation-free circuits. However, the converse does not hold in general \cite{separatingnew}.
A simple example showing this is the matrix \[ B = \begin{pmatrix} 0 & 0 & 1 & 1 & 0 & 0\\ 0 & 1 & 1 & 1 & 0 & 0\\ 1 & 1 & 1 & 1 & 0 & 0\\ 0 & 0 & 1 & 1 & 1 & 0\\ 0 & 0 & 1 & 1 & 1 & 1\\ 1 & 1 & 1 & 1 & 1 & 1\\ \end{pmatrix}. \] For this matrix, there exists an OR circuit with $6$ gates; however, any cancellation-free circuit must have at least $7$ gates. Every matrix admits a cancellation-free circuit of size at most $n(n-1)$. This can be obtained simply by computing each row independently. It was shown by Nechiporuk \cite{nechiporuk1963rectifier} and Pippenger \cite{DBLP:conf/focs/Pippenger76} (see also \cite{juknasergeevSurvey}) that this upper bound can be improved to $(1+o(1))\frac{n^2}{2\log n}$. A Shannon-style counting argument gives that this is tight up to low-order terms. A proof of this can be found in \cite{DBLP:conf/focs/Pippenger76}. Combining these results, we get that for most matrices, cancellation does not help much: \begin{theorem} For every $0<\epsilon<1$, for sufficiently large $n$, a random $n\times n$ matrix has $\frac{C_{CF}(A)}{C_\oplus(A)}\leq 1+\epsilon$ with probability at least $1-\epsilon$. \end{theorem} We also use the following upper bound, which holds for cancellation-free circuits, and hence also for OR circuits and XOR circuits. \begin{theorem}[Lupanov \cite{Lupanov1956}] \label{upperboundtranspo} Any $m\times n$ matrix admits a cancellation-free XOR circuit of size $O\left( \min\{ \frac{mn}{\log n}, \frac{mn}{\log m}\}+n+m\right)$. \end{theorem} The theorem follows directly from Lupanov's result and an application of the ``transposition principle'' (see e.g. \cite{juknabook}). A matrix $A$ is \emph{$k$-free} if it does not have an all-ones submatrix of size $(k+1)\times (k+1)$. The following lemma will be used later. According to Jukna and Sergeev \cite{juknasergeevSurvey}, it was independently obtained by Nechiporuk \cite{nechiporuktopologicalprinciples}, Mehlhorn \cite{mehlhorn1979some}, Pippenger \cite{DBLP:journals/tcs/Pippenger80}, and Wegener \cite{DBLP:journals/acta/Wegener80}. \begin{lemma}[Nechiporuk, Mehlhorn, Pippenger, Wegener] \label{freelemma} For $k$-free $A$, $C_\vee (A) \in \Omega\left(\frac{|A|}{k^2} \right)$. \end{lemma} \section[Relationship]{Relationship Between Cancellation-Free XOR Circuits and General XOR Circuits} In \cite{boyarcombinationalappear}, Boyar and Peralta exhibited an infinite family of matrices where the sizes of the cancellation-free circuits computing them are at least $\frac{3}{2}-o(1)$ times the sizes of the smallest XOR circuits for them. We call this ratio the \emph{cancellation ratio}, $\rho(n)$, defined as \[ \rho(n) = \max_{A\in \gf^{n\times n}} \frac{C_{CF}(A)}{C_\oplus(A)}. \] The following proposition on the Boolean Sylvester-Hadamard matrix was pointed out by Edward Hirsch and Olga Melanich \cite{edwardolga}. The $n\times n$ Boolean Sylvester-Hadamard matrix $H_n$ is defined recursively: \[ H_1 = (1), H_{2n}= \begin{pmatrix} H_{n} & H_n\\ H_{n} & \overline{H}_n \end{pmatrix}, \] where $\overline{A}$ denotes the Boolean complement of the matrix $A$. It is known that $C_{\oplus}(H_n)\in O(n)$, but that in depth $2$ it requires circuits of size $\Omega(n\log n)$ \cite{AlonKW90}. \begin{proposition} The $n\times n$ Boolean Sylvester-Hadamard matrix requires cancellation-free circuits of size $C_{CF}(H_n)\in \Omega(n\log n)$.
\end{proposition} Since $\log |\det(H_n)| \in \Omega(n\log n)$, this proposition follows from the following theorem due to Morgenstern (\cite{DBLP:journals/jacm/Morgenstern73}; see also \cite[Thm. 13.14]{DBLP:books/daglib/0090316}). \begin{theorem}[Morgenstern]\label{morgensternlb} For a Boolean matrix $M$, \[ C_{CF}(M)\in \Omega(\log |\det(M)|). \] \end{theorem} The statement holds more generally, namely for circuits with addition over the complex numbers and scalar multiplication by any constant $c\in \mathbb{C}$ with $|c|\leq 2$. Cancellation-free circuits can be seen as a special case of this. Using the recursive structure of $H_n$, it is not hard to show that $C_{\oplus}(H_n)\in O(n)$, so this demonstrates that $\rho(n)\in \Omega(\log n)$. It should be noted that no $n\times n$ Boolean matrix can have determinant larger than $n!$, so this technique cannot give a lower bound on $\rho(n)$ stronger than $O(\log n)$. As mentioned in the introduction, the ratio \[ \lambda (n) = \max_{A\in \gf^{n\times n}}\frac{C_\vee(A)}{C_{\oplus}(A)} \] has been studied (see \cite{gashkov2011complexity,juknasergeevSurvey}). Using the techniques of \cite{DBLP:journals/siamcomp/Jukna06}, it can be derived (as is done in \cite{juknasergeevSurvey}) that $\lambda(n)\in \Omega(n/\log^2 n)$. We present a different, and in some sense simpler, construction exhibiting the same gap, and our proof is quite different. More concretely, we use communication complexity to show that certain conditional random variables are almost uniformly distributed, in a way that might have independent interest. Also, our construction gives a similar separation for circuits of constant depth (see Section \ref{sec:constantdepth}). \begin{theorem}\label{separationthm} \(\lambda(n) \in \Omega\left(\frac{n}{\log ^{2} n} \right). \) \end{theorem} The proof uses the probabilistic method. We randomly construct two matrices and let $A$ be their product. In order to use Lemma \ref{freelemma} on $A$, we need to show that with high probability, the matrix $A$ will be $2\log n$-free. We do this via Lemma \ref{dependencylemma} by showing that the marginal distribution of any entry in a fixed $2\log n\times 2\log n$ submatrix is almost uniformly random. In the following, for a matrix $M$, we let $M_i$ ($M^i$) denote its $i$th row (column), and for $I\subseteq [n]$, we let $M_I$ ($M^I$) denote the submatrix consisting of the rows (columns) with indices in $I$. Lemma \ref{dependencylemma} might seem somewhat technical. However, there is a very simple intuition behind it: Suppose $M$ is obtained at random as in the statement of the lemma. Informally, we want to say that the entries do not ``depend'' too much on each other. More formally, we want to show that given all but one entry in $M$, it is not possible to guess the last entry with significant advantage over random guessing. The proof idea is to transform any good guess into a deterministic communication protocol for computation of the inner product, and to use a well-known limitation on how well this can be done \cite{DBLP:journals/siamcomp/ChorG88,DBLP:books/daglib/0011756}. We will say that two (partially) defined matrices are \emph{consistent} if they agree on all their defined entries. \begin{lemma} \label{dependencylemma} Let $M$ be an $m\times m$ partially defined matrix, where all entries except $M_p^q$ are defined.
Let $B,C$ be matrices over $\gf$ with dimensions $m\times 7m$ and $7m\times m$, respectively, chosen uniformly at random among all possible pairs $(B,C)$ such that $BC$ is consistent with $M$. Then for sufficiently large $m$, the conditional probability that $M_{p}^q$ is $1$, given all other entries, is contained in the interval $(\frac{1}{2}-\frac{1}{m},\frac{1}{2}+\frac{1}{m})$, where the probability is over the choices of $B$ and $C$. \end{lemma} Before proving the lemma, we will first recall a fact from communication complexity, due to Chor and Goldreich \cite{DBLP:conf/focs/ChorG85}, see also \cite{DBLP:books/daglib/0011756}. \begin{theorem}[Chor, Goldreich] \label{innerproductlb} Let $\mathbf{x}$ and $\mathbf{y}$ be independent and uniformly random vectors, each of $n$ bits. Suppose a deterministic communication protocol is used to compute the inner product of $\mathbf{x}$ and $\mathbf{y}$, and the protocol is correct with probability at least $\frac{1}{2}+p$. Then on some inputs, the protocol uses at least $\frac{n}{2}-\log( 1/p )$ bits of communication. \end{theorem} \begin{proof}[of Lemma \ref{dependencylemma}] Suppose for the sake of contradiction that there exists a partially defined matrix $M$, such that when all entries but one are revealed, the conditional probability of the last entry being $a$ is at least $\frac{1}{2}+\frac{1}{m}$ for some $a\in \{0,1\}$. Assuming this, we will first present a randomized communication protocol computing the inner product of two independent and uniformly random $7m$-bit vectors $\mathbf{x}$ and $\mathbf{y}$ that uses at most $m$ bits of communication and is correct with probability at least $\frac{1}{2}+\frac{2^{-2m}}{4m}$. We will then argue that this protocol can be completely derandomized. This results in a deterministic communication protocol that violates Theorem \ref{innerproductlb}. From this we conclude that such a partially defined matrix, with this large probability of the last entry being $a$, does not exist. Let Alice and Bob have as input vectors $\mathbf{x}$ and $\mathbf{y}$, respectively, each of length $7m$. Before getting their inputs, they use their shared random bits to agree on a random choice of the two matrices $B$ and $C$ distributed as stated in the lemma. To compute the inner product of $\mathbf{x}$ and $\mathbf{y}$, Alice replaces the row $B_p$ with $\mathbf{x}$ and Bob replaces the column $C^q$ with $\mathbf{y}$; let the resulting matrices be $B'$ and $C'$, and let $M'=B'C'$. Notice that $M$ and $M'$ are consistent, except possibly on row $p$ and column $q$. Alice can compute the entire $p$th row of $M'$ (except $(M')_p^q$). Similarly, Bob can compute the entire $q$th column (except $(M')_p^q$). The communication in the protocol consists of first letting Alice send the $m-1$ bits in the part of the $p$th row she can compute to Bob. Bob now knows all the entries in $M'$, except the entry $M_p^q$. In order for $M'$ and $M$ to be consistent, it is only necessary that the $m-1$ defined entries in row $p$ and the $m-1$ defined entries in column $q$ are equal in the two matrices, since $B'$ and $C'$ were defined such that all other entries were equal. This occurs with probability at least $2^{-2m-2}$. In this case, the value Alice and Bob want to compute is exactly the only unknown entry $M'^q_p$. By assumption, this last entry is $a$ with probability at least $\frac{1}{2}+\frac{1}{m}$, so Bob outputs $a$. If the known entries in $M'$ are not consistent with the known entries in $M$, Bob outputs a uniformly random bit.
This is correct with probability $\frac{1}{2}$. Thus, the probability of this protocol being correct is at least \begin{eqnarray*} && 2^{-2m-2}\left(\frac{1}{2}+\frac{1}{m}\right) + (1-2^{-2m-2})\frac{1}{2}\\ &=& \frac{1}{2}+\frac{2^{-2m}}{4m}. \end{eqnarray*} So when the inputs are uniformly distributed, the randomized protocol computes the inner product of two $7m$-bit vectors with $m$ bits of communication, and it is correct with probability at least $\frac{1}{2}+\frac{2^{-2m}}{4m}$. By an averaging argument it follows that there exists a deterministic communication protocol with the same success probability. According to Theorem \ref{innerproductlb}, any deterministic protocol for computing the inner product with this success probability must communicate at least \[ \frac{7m}{2}-\log(1/p)=\frac{7}{2}m-\log \left( \frac{4m}{2^{-2m}} \right) = \frac{3}{2}m -\log m - 2 \] bits, which is larger than $m$ for sufficiently large values of $m$ ($m\geq 16$ suffices), and we arrive at the desired contradiction. \qed \end{proof} We now use this to prove Theorem \ref{separationthm}. We will use the following result on the ``Zarankiewicz problem'' \cite{kovari1954problem}, see also \cite{juknacombinatorics}. \begin{theorem}[Kov{\'a}ri, S{\'o}s, Tur{\'a}n] \label{kovarisosturan} Let $M$ be an $(a-1)$-free $n\times n$ matrix. Then the number of ones in $M$ is at most $(a-1)^{1/a}n^{2-1/a}+(a-1)n$. \end{theorem} \begin{proof}[of Theorem \ref{separationthm}] We will probabilistically construct two matrices $B,C$ of dimensions $n\times 14\log n$ and $14\log n\times n$. Each entry in $B$ and $C$ will be chosen independently and uniformly at random from $\gf$. We let $A=BC$. First notice that it follows directly from Theorem \ref{upperboundtranspo} that $B$ and $C$ can be computed with XOR circuits, both of size $O(n)$. Now we can let the outputs of the circuit computing $C$ be the inputs of the circuit computing $B$. Notice that this composed circuit will have many cancellations. The resulting circuit computes the matrix $A$ and has size $O(n)$. We will argue that with probability $1-o(1)$ this matrix will not have a $2\log n\times 2\log n$ submatrix of all ones, while $|A|\in \Omega(n^2)$. By Lemma \ref{freelemma} the result follows. We show that for large enough $n$, with high probability neither of the following two events will happen: \begin{enumerate} \item \label{all1submatrix}$BC$ has a submatrix of dimension $2\log n\times 2\log n$ consisting of all ones or all zeros \item \label{matrixnorm} $|BC| \leq 0.29n^2$ \end{enumerate} \paragraph{\ref{all1submatrix})} Fix a submatrix $M$ of $BC$ with dimensions $2\log n\times 2\log n$; that is, fix a subset $I$ of the rows of $B$ and a subset $J$ of the columns of $C$, so that $M=B_IC^J$. We now want to show that the probability of this matrix having only ones (or only zeros) is so small that a union bound over all choices of $2\log n\times 2\log n$ submatrices gives that the probability that there exists such a submatrix goes to $0$. Notice that this would be easy if all the entries in $M$ were mutually independent and uniformly distributed. Although this is not the case, Lemma \ref{dependencylemma} for $m=2\log n$ states that this is almost the case. More precisely, the conditional probability that a given entry is $1$ (or $0$) is at most $\frac{1}{2}+\frac{1}{2\log n}$.
We can now use the union bound to estimate the probability that $A$ has a submatrix of dimension $2\log n\times 2\log n$ with all the entries being either $0$ or $1$: \begin{eqnarray*} 2{n\choose 2\log n}^2 \left( \frac{1}{2}+\frac{1}{2\log n} \right)^{4\log^2n} &\leq& 2\frac{n^{4\log n}}{(2\log n)!} \left( \frac{1+\frac{1}{\log n}}{2} \right)^{4\log^2n}\\ &\leq& 2\left(\frac{\left(1+\frac{1}{\log n} \right)^{4\log^2 n}}{ (2\log n)!}\right) \end{eqnarray*} This tends to $0$, so we arrive at the desired result. \paragraph{\ref{matrixnorm})} Note that if one wants to show that with positive probability the number of ones is $\Omega(n^2)$, a straightforward application of Markov's inequality suffices. Here we will show the stronger statement that with probability $1-o(1)$, the number of ones is at least $\left(1-\frac{1}{\sqrt{2}}\right)n^2-o(n^2)$. By the proof above, we may assume that the Boolean complement of $A$, $\bar{A}$, does not have a $2\log n\times 2\log n$ submatrix of all ones. By Theorem \ref{kovarisosturan}, the number of ones in $\bar{A}$ is at most \[ (2\log n-1)^{1/2\log n}n^{2-1/2\log n}+(2\log n-1)n. \] One can verify that \[ \lim_{n\rightarrow \infty}\frac{(2\log n-1)^{1/2\log n}n^{2-1/2\log n}+(2\log n-1)n}{n^2} =\frac{1}{\sqrt{2}}. \] So if there is no $2\log n\times 2\log n$ submatrix of all zeros in $A$, the number of zeros in $A$ is at most $\frac{n^2}{\sqrt{2}}+o(n^2)$, and hence the number of ones is at least $\left(1-\frac{1}{\sqrt{2}}\right)n^2-o(n^2)>0.29n^2$ for sufficiently large $n$. Hence the probability of $|A|$ being less than $0.29{n^2}$ tends to $0$. \qed \end{proof} \emph{Remark 1:} It has been pointed out by Avishay Tal that in order to show that the matrix is $O(\log n)$-free, a significantly simpler argument suffices. We present it here: Let $B,C$ be random matrices as in the construction of Theorem \ref{separationthm} but with dimensions $n\times 5\log n$ and $5\log n \times n$, respectively, and let $A=BC$. Now any $5\log n\times 5\log n$ submatrix of $A$ is a product of two $5\log n\times 5\log n$ dimensional matrices, one being a submatrix of $B$ and one being a submatrix of $C$. Now recall the following theorem from linear algebra: \begin{theorem}[Sylvester's Rank Inequality] For two $m\times m$ matrices $B,C$ \[ rank(BC) \geq rank(B)+rank(C)-m. \] \end{theorem} The probability that a random $k\times k$ matrix has rank less than $d$ is at most $2^{k-(k-d)^2}$ (see e.g. the proof of Lemma 5.4 in \cite{DBLP:conf/focs/KomargodskiRT13}). Now a union bound shows that the probability that there is a $5\log n\times 5\log n$ submatrix of $B$ or $C$ with rank smaller than $0.51 \cdot 5\log n$ tends to $0$. So for large enough $n$, with high probability, every $5\log n\times 5\log n$ submatrix of $A$ will have rank at least $0.02\cdot 5\log n$. A submatrix consisting of all ones or all zeros has rank $0$ or $1$, which is less than $0.1 \log n$ for large enough $n$. Thus, the probability of this occurring tends to zero. In the matrix constructed in \cite[Theorem 5.8]{juknasergeevSurvey}, the authors highlight the property that the matrix is $t$\emph{-Ramsey}, meaning that both the matrix and its complement are $(t-1)$-free, and it is a somewhat interesting fact that such matrices admit small XOR circuits. It follows immediately from the proof of Theorem \ref{separationthm} that this holds as well for the matrix constructed here, and we state this as a separate corollary. \begin{corollary} For large enough $n$, with high probability, the bipartite graph with adjacency matrix $A$ from Theorem~\ref{separationthm} is $t$-Ramsey for $t=2\log n$.
\end{corollary} Notice that by Theorem \ref{upperboundtranspo}, the obtained separation is at most a factor of $O(\log n)$ from being optimal. Also, except for lower bounds based on counting, all strong lower bounds we know of are essentially based on Lemma \ref{freelemma}. Following that line of thought, one might hope to improve the separation above by coming up with a better choice of $A$ that does not have an $O(\log^{1-\epsilon}n)\times O(\log^{1-\epsilon}n)$ all-ones submatrix, to get a stronger lower bound on $C_\vee(A)$, or perhaps hope that a tighter analysis than the above would give a stronger separation. However, this direction does not seem promising. To see this, it follows from Theorem \ref{kovarisosturan} that for a matrix without a $\log^{1-\epsilon} n\times \log^{1-\epsilon} n$ all-ones submatrix, the lower bound obtained using Lemma \ref{freelemma} would be of order $O\left( \frac{n^{2-\frac{1}{\log^{1-\epsilon}n}}}{(\log^{1-\epsilon}n)^2} \right)$, which is $o\left(\frac{n^2}{\log^2n} \right)$. \section{Smallest XOR Circuit Problem} As mentioned earlier, the notion of cancellation-freeness was introduced by Boyar and Peralta in \cite{boyarcombinationalappear}. That paper concerns shortest straight line programs for computing linear forms, which is equivalent to the model studied in this paper. In \cite{DBLP:books/fm/GareyJ79}, it is shown that the Ensemble Computation Problem (recall that this is equivalent to cancellation-free circuits) is $\mathbf{NP}$-complete. For general XOR circuits, the problem remains $\mathbf{NP}$-complete \cite{boyarcombinationalappear}. It was observed in \cite{boyarcombinationalappear} that several researchers have used heuristics that will always produce cancellation-free circuits, see \cite{canright2005very,paar95,SatohMTM01}. By definition, any heuristic which only produces cancellation-free circuits cannot achieve an approximation ratio better than $\rho(n)$. By Proposition~\ref{orcancelremark}, $\rho(n)\geq \lambda(n)$. Thus, Theorem~\ref{separationthm} implies that techniques which only produce cancellation-free circuits are not guaranteed to be very close to optimal. \begin{corollary} The algorithms in \cite{canright2005very,paar95,SatohMTM01} do not guarantee approximation ratios better than $\Theta\left( \frac{n}{\log ^{2}n}\right)$. \end{corollary} \section{Constant Depth} \label{sec:constantdepth} For unbounded depth, there is no family of (polynomial-time computable) matrices known to require XOR circuits of superlinear size. However, if one puts restrictions on the depth, superlinear lower bounds are known \cite{juknabook}. In this case, we allow each gate to have unbounded fan-in, and instead of counting the number of gates we count the number of wires in the circuit. See Figure~\ref{fig:depth2example} for an example of a depth two circuit. \begin{figure} \begin{center} \scalebox{0.8}{ \input{exampledepth2.tex} } \end{center} \caption{An example of a depth $2$ circuit, computing the same matrix as the circuits in Figure~\ref{fig:examplefig}. Notice that some gates have fan-in larger than $2$. This circuit has size $9$. \label{fig:depth2example} } \end{figure} In particular, the circuit model where the depth is bounded to be at most $2$ is well studied (see e.g. \cite{juknabook}). As before, an XOR circuit of depth $2$ is a circuit where each gate computes the XOR of its inputs. When considering matrices computed by XOR circuits, the general situation in the two circuit models is very similar.
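To make the wire-counting measure concrete, the following minimal Python sketch (with a hypothetical data layout of our own choosing) evaluates a depth-$2$ XOR circuit over $\gf$ and counts its wires:
\begin{verbatim}
import numpy as np

def eval_depth2(n, layer1, layer2):
    # layer1: list of input-index lists (the middle-layer gates);
    # layer2: one (input_indices, middle_gate_indices) pair per output.
    # Returns the matrix computed by the circuit and its number of wires.
    mid = np.zeros((len(layer1), n), dtype=np.int64)
    for g, ins in enumerate(layer1):
        for i in ins:
            mid[g, i] ^= 1               # value vector of middle gate g
    wires = sum(len(ins) for ins in layer1)
    rows = []
    for ins, gates in layer2:
        row = np.zeros(n, dtype=np.int64)
        for i in ins:
            row[i] ^= 1
        for g in gates:
            row ^= mid[g]                # XOR of value vectors over GF(2)
        rows.append(row)
        wires += len(ins) + len(gates)
    return np.array(rows), wires

# A middle gate computing x1 + x2, reused by both outputs: 5 wires.
M, w = eval_depth2(3, [[0, 1]], [([], [0]), ([2], [0])])
\end{verbatim}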
The following two results are due to Lupanov \cite{Lupanov1956} (see also \cite{juknabook}). \begin{theorem}[Lupanov] \label{lupanov2} For every $n\times n$ matrix $A$, there exists a depth 2 cancellation-free circuit with at most $O\left(\frac{n^2}{\log n} \right)$ wires computing $A$. Furthermore, almost every such matrix requires $\Omega\left(\frac{n^2}{\log n} \right)$ wires. \end{theorem} Let $\lambda^d(n)$ denote $\lambda(n)$ for circuits restricted to depth $d$ (recall that now size is defined as the number of wires). Neither the separation in \cite{gashkov2011complexity} nor that in \cite{separatingnew} seems to carry over to bounded depth circuits in any obvious way. The separation presented in \cite[Theorem 5.8]{juknasergeevSurvey} holds for any depth $d\geq 2$. By inspecting the proof of Theorem~\ref{separationthm}, the upper bound on the size of the XOR circuit worked as follows: First construct a circuit to compute $C$, and then construct a circuit for $B$ with the outputs of $C$ as inputs, that is, a circuit for $B$ that comes topologically after $C$. To get to an upper bound of $O(n)$ wires, we use Theorem \ref{upperboundtranspo}. By using Theorem~\ref{lupanov2} twice, we get a depth $4$ circuit of that size. For depths $d=2$ and $d=3$, one can use arguments similar to those given in the proof of \cite[Theorem 5.8]{juknasergeevSurvey} to show that the separation still holds in these two cases. We summarize this in the following theorem. \begin{theorem} \label{canceldepth4andup} Let $d\geq 2$. Then $\lambda^d(n)\in \Omega\left( \frac{n}{\log^{2}n}\right)$. \end{theorem} \section[Sierpinski]{ Computing the Sierpinski Matrix } In this section we prove that the $n\times n$ Sierpinski matrix, $S_n$, needs $\frac{1}{2}n\log n$ gates when computed by a cancellation-free circuit, and that this suffices. The proof strategy is surprisingly simple; it is essentially gate elimination where more than one gate is eliminated in each step. Neither Theorem \ref{morgensternlb} nor Lemma \ref{freelemma} gives anything nontrivial for this matrix. As mentioned previously, there is no known (polynomial-time computable) family of matrices requiring XOR circuits of superlinear size. However, there are simple matrices that are conjectured to require circuits of size $\Omega (n\log n)$. One such matrix is the Sierpinski matrix (Aaronson, personal communication, and \cite{cstheorystackexchange}). The $n\times n$ Sierpinski (also called \emph{set disjointness}) matrix, $S_n$, is defined inductively \[ S_2 = \begin{pmatrix} 1 & 0 \\ 1 & 1\end{pmatrix}, S_{2n} = \begin{pmatrix} S_n & 0 \\ S_n & S_n\end{pmatrix} \] Independently of this conjecture, Jukna and Sergeev~\cite[Problem 7.11]{juknasergeevSurvey} have very recently asked if the ``set intersection matrix'', $K_n$, has $C_\oplus (K_n)\in \omega(n)$. The motivation for this is that $C_\vee(K_n)\in O(n)$, so if true this would give a counterpart to Theorem \ref{separationthm}. If $n$ is a power of two, the $n\times n$ set intersection matrix $K_n$ can be defined by associating each row and column with a subset of $[\log n]$, and letting an entry be $1$ if and only if the corresponding row and column sets have non-empty intersection. One can also define $K_n$ inductively: \[ K_2 = \begin{pmatrix} 0 & 0 \\ 0 & 1\end{pmatrix}, K_{2n} = \begin{pmatrix} K_n & K_n \\ K_n & J\end{pmatrix}, \] where $J$ is the $n\times n$ matrix with $1$ in each entry.
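These recursive definitions translate directly into code. The following small Python sketch (for illustration only) constructs both matrices and verifies, for a small case, the row relation stated next:
\begin{verbatim}
import numpy as np

def sierpinski(n):
    # S_2 = [[1,0],[1,1]],  S_{2n} = [[S_n, 0], [S_n, S_n]]
    S = np.array([[1, 0], [1, 1]], dtype=np.int64)
    while S.shape[0] < n:
        S = np.block([[S, np.zeros_like(S)], [S, S]])
    return S

def set_intersection(n):
    # K_2 = [[0,0],[0,1]],  K_{2n} = [[K_n, K_n], [K_n, J]]
    K = np.array([[0, 0], [0, 1]], dtype=np.int64)
    while K.shape[0] < n:
        K = np.block([[K, K], [K, np.ones_like(K)]])
    return K

# The complement of K_n consists of exactly the same rows as S_n:
n = 8
assert sorted(map(tuple, 1 - set_intersection(n))) == \
       sorted(map(tuple, sierpinski(n)))
\end{verbatim}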
It is easy to see that up to a reordering of the columns, the complement of $K_n$ contains exactly the same rows as $S_n$. Thus, $C_\oplus(K_n)$ is superlinear if and only if $C_\oplus(S_n)$ is, since either matrix can be computed from the other with at most $2n-1$ extra XOR gates, using cancellation heavily. To see that the set intersection matrix can be computed with OR circuits of linear size, observe that over the Boolean semiring, $K_n$ decomposes into $K_n = B\cdot B^T$, where the $i$th row in $B$ is the binary representation of $i$. Now apply Theorem \ref{upperboundtranspo} to the $n\times \log n$ matrix $B$ and its transpose and perform the composition. Any lower bound against XOR circuits must hold for cancellation-free circuits, so a first step in proving superlinear lower bounds for the set intersection matrix is to prove superlinear cancellation-free lower bounds for the Sierpinski matrix. Below we show that $C_{CF}(S_n)= \frac{1}{2}n\log n$. Our technique also holds for OR circuits. This provides a simple example of a matrix family where the complements are significantly easier to compute with OR circuits than the matrices themselves. \paragraph{Gate Elimination} Suppose some subset of the input variables is restricted to the value $0$, and look at the resulting circuit. Some of the gates will now compute the value $z=0\oplus w$. In this case, we say that the gate is eliminated since it no longer does any computation. The situation can be more extreme; some gate might ``compute'' $z=0\oplus 0$. In both cases, we can remove the gate from the circuit, and forward the input if necessary (if $z$ is an output gate, $w$ now outputs the result). In the second case, the parent of $z$ will get eliminated, so the effect might cascade. For any subset of the variables, there is a unique set of gates that become eliminated when setting these variables to $0$. In all of the following let $n$ be a power of $2$, and let $S_n$ be the $n\times n$ Sierpinski matrix. The following proposition is easily established. \begin{proposition} For every $n$, the Sierpinski matrix $S_n$ has full rank, over both $\mathbb{R}$ and $\mathbb{F}_2$. \end{proposition} We now proceed to the proof of the lower bound for the Sierpinski matrix for cancellation-free circuits. It is our hope that this might be a step towards proving an $\omega(n)$ lower bound for XOR circuits. \begin{theorem} \label{sierpinskilower} For every $n\geq 2$, any cancellation-free circuit that computes the $n\times n$ Sierpinski matrix has size at least $\frac{1}{2}n\log n$. \end{theorem} \begin{proof} The proof is by induction on $n$. For the base case, look at the $2\times 2$ matrix $S_2$. This clearly needs at least $\frac{1}{2}2\log 2=1$ gate. \begin{figure} \centering \input{sierpinskisituationcodeonly.tex} \caption{Figure illustrating the inductive step. Due to monotonicity there is no wire crossing from right to left. The gates on the left hand side are in $C_1$. Notice that wires crossing the cut are red, and that these wires become constant when $x_1,\ldots ,x_n$ are set to $0$, so the gates with one such input wire are in $C_3$. The rest are in $C_2$.} \label{fig:sierpinskisit} \end{figure} Suppose the statement is true for some $n$ and consider the $2n\times 2n$ matrix $S_{2n}$. Denote the output gates $y_1,\ldots,y_{2n}$ and the inputs $x_1,\ldots,x_{2n}$.
Partition the gates of $C$ into three disjoint sets, $C_1,C_2$ and $C_3$ (Figure \ref{fig:sierpinskisit} illustrates the situation), defined as follows: \begin{itemize} \item $C_1$: The gates having only inputs from $x_1,\ldots,x_n$ and $C_1$; equivalently, the gates not reachable from the inputs $x_{n+1},\ldots,x_{2n}$. \item $C_2$: The gates in $C-C_1$ that are not eliminated when inputs $x_1,\ldots,x_n$ are set to $0$. \item $C_3$: $C-(C_1\cup C_2)$. That is, the gates in $C-C_1$ that do become eliminated when inputs $x_1,\ldots,x_n$ are set to $0$. \end{itemize} Obviously $|C|=|C_1|+|C_2|+|C_3|$. We will now give lower bounds on the sizes of $C_1$, $C_2$, and $C_3$. \paragraph{$C_1$:} Since the circuit is cancellation-free, the outputs $y_1,\ldots,y_n$ and all their predecessors are in $C_1$. By the induction hypothesis, $|C_1|\geq \frac{1}{2}n\log n$. \paragraph{$C_2$:} Since the gates in $C_2$ are not eliminated, they compute $S_n$ on the inputs $x_{n+1},\ldots,x_{2n}$. By the induction hypothesis, $|C_2|\geq \frac{1}{2}n\log n$. \paragraph{$C_3$:} The goal is to prove that this set has size at least $n$. Let $\delta(C_1)$ be the set of wires from $C_1\cup \{x_1,\ldots,x_n\}$ to $C_2\cup C_3$. We first prove that $|C_3|\geq |\delta(C_1)|$. By definition, all gates in $C_1$ attain the value $0$ when $x_1,\ldots,x_n$ are set to $0$. Let $(v,w)\in \delta(C_1)$ be arbitrary. Since $v\in C_1\cup \{x_1,\ldots , x_n\}$, $w$ becomes eliminated, so $w\in C_3$. By definition, every $u\in C_3$ can only have one child in $C_1$. So $|C_3|\geq |\delta(C_1)|$. We now show that $|\delta(C_1)|\geq n$. Let the endpoints of $\delta(C_1)$ in $C_1$ be $e_1,\ldots , e_p$ and let their corresponding value vectors be $v_1,\ldots , v_p$. The circuit is cancellation-free, so coordinate-wise addition corresponds to addition in $\mathbb{R}$. Now look at the value vectors of the output gates $y_{n+1},\ldots,y_{2n}$. For each of these, the vector consisting of the first $n$ coordinates must be in $span_{\mathbb{R}}(v_1,\ldots ,v_p)$, but the rank of $S_n$ is $n$, so $p\geq n$. We have that $|C_3|\geq |\delta(C_1)|\geq n$, so \[ |C|=|C_1|+|C_2|+|C_3| \geq \frac{1}{2}n\log n + \frac{1}{2}n\log n + n = \frac{1}{2}(2n)\log(2n). \] \qed \end{proof} This is tight: \begin{proposition} The Sierpinski matrix can be computed by a cancellation-free circuit using $\frac{1}{2}n\log n$ gates. \label{constructiveSierpinskiUpperBound} \end{proposition} \begin{proof} This is clearly true for $S_2$. Assume that $S_n$ can be computed using $\frac{1}{2}n\log n$ gates, and consider the matrix $S_{2n}$. Construct the circuit in a divide-and-conquer manner by recursing on the variables $x_1,\ldots,x_n$ and on $x_{n+1},\ldots, x_{2n}$; the first recursion directly gives the outputs $y_1,\ldots,y_n$. After this, use $n$ additional gates to finish the outputs $y_{n+1},\ldots, y_{2n}$. This adds up to exactly $\frac{1}{2}(2n)\log (2n)$ gates. \qed \end{proof} \paragraph{Circuits With Cancellation} In the proof of Theorem \ref{sierpinskilower}, we used the cancellation-free property when estimating the sizes of both $C_1$ and $C_3$. However, since $S_n$ has full rank over $\gf$, a similar dimensionality argument to that used when estimating $C_3$ holds even if the circuit uses cancellation. Therefore we might replace the cancellation-free assumption with the assumption that for the $2n\times 2n$ Sierpinski matrix, there is no path from $x_{n+i}$ to $y_j$ for $i\geq 1$, $j\leq n$.
We have not been able to show whether or not this is the case for minimum-sized circuits, although we have experimentally verified that even for circuits where cancellation is allowed, the matrices $S_2,S_4,S_8$ do not admit circuits smaller than the lower bound from Theorem \ref{sierpinskilower}. \paragraph{OR circuits} In the proof of Theorem \ref{sierpinskilower}, the estimates for $C_1$ and $C_2$ hold for OR circuits too, but when estimating $C_3$, it does not suffice to appeal to rank over $\gf$ or $\mathbb{R}$. However, it is not hard to see that any set of row vectors that ``spans'' $S_n$ (with the operation being coordinate-wise OR) must have size at least $n$. \begin{theorem} Theorem \ref{sierpinskilower} holds for OR circuits as well. \end{theorem} This proof strategy for Theorem~\ref{sierpinskilower} has recently been used by Sergeev to prove similar lower bounds for another family of Boolean matrices in the OR model \cite{sergeevadditivecompl}. As mentioned in the introduction, Theorem~\ref{sierpinskilower} can be shown using another strategy. In \cite{DBLP:journals/tsmc/Kennes92}, Kennes gives a lower bound on the additive complexity of computing the M\"{o}bius transformation of a Boolean lattice. It is not hard to verify that the Sierpinski matrix corresponds to the M\"{o}bius transformation induced by the subset lattice. Combining this observation with Kennes' result gives the same lower bound. Since $C_{\vee}(K_n)\in O(n)$ and $K_n$ contains the same rows as $\bar{S}_n$, the complement of $S_n$, the Sierpinski matrix is harder to compute than its complement. \begin{corollary} $C_{\vee}(S_n) = \Theta (\log n)\, C_\vee(\bar{S}_n)$. \end{corollary} Until very recently, this was the largest known gap between the OR complexity of $A$ and $\bar{A}$ for an explicit matrix; see \cite{igorimproves} for a very recent manuscript describing a construction greatly improving on this. \section{Conclusions and Open Problems} We show the existence of matrices for which OR circuits and cancellation-free XOR circuits are both a factor of $\Omega\left( \frac{n}{\log^{2} n}\right)$ larger than the smallest XOR circuit. This separation holds in unbounded depth and in any constant depth of at least $2$. This means that when designing XOR (sub)circuits, it can be important that the methods employed can produce circuits which have cancellation. If a cancellation-free or an OR circuit computes the Sierpinski matrix correctly, it has size at least $\frac{1}{2}n\log n$. For this particular family of matrices, it is not obvious to what extent cancellation can help. It would be very interesting to determine this, since it would automatically provide a converse to Theorem~\ref{separationthm}. \section*{Acknowledgments} The authors would like to thank Elad Verbin for an idea which eventually led to the proof of Theorem~\ref{separationthm}, Igor Sergeev and Stasys Jukna for references to related papers, Janne Korhonen for pointing out the result of Kennes, Avishay Tal for pointing out an alternative proof of a slightly weaker version of Theorem~\ref{separationthm}, and Mika G\"{o}\"{o}s for helpful discussions. We would also like to thank the anonymous referees for many valuable suggestions. \bibliographystyle{model1-num-names.bst}
\section{Introduction} Construction of novel advanced calculation methodologies with high accuracy is always one of the most important themes in theoretical materials science. One of the most successful theories in this context is density-functional theory (DFT) \cite{HK,KS}. DFT has been applied to a wide variety of systems, from finite to extended ones, and enables us to reproduce structural parameters such as lattice constants within a few percent error and even to predict material properties at relatively cheap computational cost. According to Janak's theorem \cite{janak1978} with the Kohn-Sham (KS) equation in the DFT framework, we can draw one-electron energy levels for finite systems and electronic band structures for periodic systems. This is another noteworthy property of DFT because electronic energy levels and band structures can be experimentally observed through X-ray photoelectron spectroscopy (XPS) and angle-resolved photoemission spectroscopy (ARPES) \cite{damascelli2003}. Comparison of the band structures obtained from ARPES measurements and DFT calculations helps us deepen our understanding of the electronic properties of materials. Behind the great successes of DFT, we have also started to notice some of its drawbacks. Well-known examples are that DFT cannot reproduce the van der Waals interaction, satellite peaks, and Mott gaps. Many efforts have been made so far to solve such difficulties, including the self-interaction correction (SIC) method \cite{SICDFT}, LDA+U \cite{LDAU}, hybrid functionals \cite{hybrid1,hybrid2,hybrid3}, LDA+DMFT \cite{DMFT1,DMFT2,DMFT3}, GW \cite{GW1,GW2,GW3}, GW+cumulant expansion \cite{GWC1,GWC2}, van der Waals DFT \cite{DFTvdW1,DFTvdW2}, RDMFT \cite{RDMFT1,RDMFT2,Sangeeta}, etc. From the viewpoint of methodological development, wave function theory (WFT) has a great advantage in comparison with DFT: one can improve the accuracy relatively easily within WFT, while this is difficult in DFT. However, the application of WFT has mostly been limited to finite systems so far due to the huge calculation cost. Very recently, owing to the dramatic development of supercomputers, some groups have successfully demonstrated the application of WFTs to periodic systems. For example, the density-matrix renormalization group (DMRG) \cite{DMRG1,DMRG2}, the transcorrelated method \cite{TC1,TC2,TC3,TC4,TC5,TC6}, and Monte-Carlo configuration interaction \cite{FCIQMC,NiOCCSD} have been applied to periodic systems. Most previous WFT studies, however, focused only on ground-state energies, with the exception of the transcorrelated method. For the most standard WFTs, drawing electronic band structures is not trivial. Among WFTs, coupled-cluster (CC) theory \cite{Monkhorst1977,Stanton1993,Bartlett2007} is known to be a highly successful scheme that is capable of efficiently incorporating electronic correlations. The coupled-cluster singles and doubles (CCSD) method, which expands the reference state using single and double excitation operators, is the most popular implementation due to its high accuracy and computational feasibility. The CCSD method has been applied to strongly correlated periodic systems such as NiO \cite{NiOCCSD}. However, like other WFTs, the CCSD method cannot draw the single-electron energy spectrum in its standard form.
Electronic excited states can also be calculated in CC theory by using the equation-of-motion CC (EOM-CC) \cite{Monkhorst1977,Stanton1993} or the symmetry-adapted cluster/configuration interaction (SAC-CI) \cite{SACCIPk} method. EOM-CCSD has already been used for the silicon crystal \cite{mcclain2017}. A method to obtain one-body Green's functions based on CC theory (GFCC) has also been proposed \cite{Nooijen92,Nooijen93,Nooijen95}, with which one can obtain the one-electron energy spectrum of materials. It has, however, only been applied to a limited number of systems. In particular, no periodic system has been treated by the GFCCSD method except for the homogeneous electron gas \cite{mcclain2017}. In this work, we have calculated band structures of several kinds of materials, ranging from ionic to covalent and van der Waals systems, through the GFCCSD method. By demonstrating these results, we show that the GFCCSD method is a powerful tool that yields both the electronic band structure and the total energy at one time. We present the calculated results for periodic systems, namely the one-dimensional LiH chain, C chain, and Be chain. We also show the band structures obtained from GFCCSD calculations for the first time, in which we see the emergence of satellite peaks. We also discuss how the calculations are affected by the reduction of the active space, which is an important factor in reducing the computational cost. \section{Green's function from the coupled-cluster calculations} \label{sec:method} The present study is restricted to the non-relativistic Hamiltonian, $\hamil$. In coupled-cluster theory, the ground state wave function $\ket{\Psi_\mathrm{CC}}$ is written as \begin{equation} \ket{\Psi_\mathrm{CC}} = e^{\Top} \ket{\Psi_0}, \label{wfn_CCSD} \end{equation} where $\ket{\Psi_0}$ is a so-called reference state, for which the Hartree--Fock wave function is usually adopted. The operator $\Top$ is a sum of $p$-electron excitation operators $\Top_p$, defined as \begin{eqnarray} \Top_p = \frac{1}{(p!)^2} \sum_{ \substack{i,j,k,\dots, \bm{k}_i\bm{k}_j\bm{k}_k\dots \\ a,b,c,\dots, \bm{k}_a\bm{k}_b\bm{k}_c\dots } } \tamp{i\bm{k}_i j\bm{k}_j k\bm{k}_k \dots}{a\bm{k}_a b\bm{k}_b c\bm{k}_c \dots} \cdot \\ \cre{a\bm{k}_a} \cre{b\bm{k}_b} \cre{c\bm{k}_c} \cdots \ani{k\bm{k}_k} \ani{j\bm{k}_j} \ani{i\bm{k}_i} \end{eqnarray} where $\cre{p \bm{k}_p}$ and $\ani{p \bm{k}_p}$ are creation and annihilation operators of an electron with momentum $\bm{k}_p$ at state $p$, respectively. The indices $i,j,\cdots$ ($a,b,\cdots$) represent occupied (unoccupied) states, whereas $p,q,\cdots$ are used for any states, regardless of whether they are occupied or unoccupied. The coefficients in $\Top$, $\tamp{i\bm{k}_i j\bm{k}_j k\bm{k}_k \dots}{a\bm{k}_a b\bm{k}_b c\bm{k}_c \dots}$, are determined from the amplitude equations, which are deduced by projecting the excited states $\bra{\Psi_{i\bm{k}_ij\bm{k}_j\cdots}^{a\bm{k}_ab\bm{k}_b\cdots}}$ onto the Schr\"odinger equation $\hamil \ket{\Psi} = E\ket{\Psi}$, in which a similarity-transformed Hamiltonian $\hamilbar = e^{-\Top} \hamil e^{\Top}$ appears: \begin{equation} \mel{ \Psi_{i\bm{k}_ij\bm{k}_j\cdots}^{a\bm{k}_ab\bm{k}_b\cdots} }{ \hamilbar }{\Psi_0} =0 \ . \end{equation} After determining the coefficients in $\Top$, the total energy $ E_\mathrm{CCSD}$ can be calculated by projecting onto $\bra{\Psi_0}$: \begin{equation} E_\mathrm{CCSD} = \mel{\Psi_0}{e^{-\Top} \hamil e^{\Top}}{\Psi_0} .
\end{equation} The one-particle Green's function in the frequency representation at zero temperature is written as \begin{equation} \begin{split} G_{p\bm{k}_pq\bm{k}_q}(\omega) = &G_{p\bm{k}_pq\bm{k}_q}^{(h)}(\omega) + G_{p\bm{k}_pq\bm{k}_q}^{(e)}(\omega) \\ = &\mel{\Psi}{ \cre{q\bm{k}_q} \frac{1}{\omega+\hamil_{N}} \ani{p\bm{k}_p} }{\Psi}\\ &+\mel{\Psi}{ \ani{q\bm{k}_q} \frac{1}{\omega-\hamil_{N}} \cre{p\bm{k}_p} }{\Psi}, \label{def_green} \end{split} \end{equation} in which the Green's function is separated into the electron-removal and electron-attachment parts (partial Green's functions). Here $\hamil_{N}$ is defined as $\hamil_{N} = \hamil - E_0$, where $E_0$ is the total energy of the exact ground state $\ket{\Psi}$. We adopt the CCSD wave function in place of the exact wave function, $\ket{\Psi}=\ket{\Psi_\mathrm{CC}}$. Using the similarity-transformed Hamiltonian $\hamilbar_{N} = e^{-\Top} \hamil e^{\Top} - E_0$ and the transformed creation and annihilation operators $\crebar{q\bm{k}_q} = e^{-\Top} \cre{q\bm{k}_q} e^{\Top}$ and $\anibar{p\bm{k}_p} = e^{-\Top} \ani{p\bm{k}_p} e^{\Top}$ , we can rewrite the partial Green's functions as \begin{equation} G_{p\bm{k}_pq\bm{k}_q}^{(h)}(\omega) = \mel{\Psi_0}{ (1+\Lop) \crebar{p\bm{k}_p} \frac{1}{\omega+\hamilbar_N} \anibar{q\bm{k}_q} }{\Psi_0}, \end{equation} \begin{equation} G_{p\bm{k}_pq\bm{k}_q}^{(e)}(\omega) = \mel{\Psi_0}{ (1+\Lop) \anibar{p\bm{k}_p} \frac{1}{\omega-\hamilbar_N} \crebar{q\bm{k}_q}}{\Psi_0}. \end{equation} Note that the transformed Hamiltonian $\hamilbar_{N}$ is not Hermitian and that the Green's function is constructed using the bi-variational method \cite{Arponen83,Stanton1993,Bi-vari2}. The operator $\Lop$ is a de-excitation operator which is determined by solving \begin{equation} \mel{ \Psi_{i\bm{k}_ij\bm{k}_j\cdots}^{a\bm{k}_ab\bm{k}_b\cdots} } { (1+\Lop) e^{-\Top} \hamil e^{\Top} } {\Psi_0} = 0. \end{equation} In order to avoid the computational difficulty of treating the inverse matrix $(\omega \pm \hamilbar_N)^{-1}$, $\Xop_{q\bm{k}_q}(\omega)$ and $\Yop_{q\bm{k}_q}(\omega)$ are introduced as follows: \begin{equation} (\omega+\hamilbar_N) \Xop_{q\bm{k}_q} (\omega) \ket{\Psi_0} = \ani{q\bm{k}_q} \ket{\Psi_0}, \label{eq:ipgf} \end{equation} \begin{equation} (\omega-\hamilbar_N)\Yop_{q\bm{k}_q}(\omega)\ket{\Psi_0} = \cre{q\bm{k}_q} \ket{\Psi_0}. \label{eq:eagf} \end{equation} Once we solve Eqs.~(\ref{eq:ipgf}) and (\ref{eq:eagf}), we obtain the information on the (N$-1$)- and (N$+1$)-electron states involved in the Green's function, respectively. Note that these two linear equations are equivalent to those of EOM-CC theory: Eq.~(\ref{eq:ipgf}) corresponds to the (N$-1$)-electron states yielding ionization potentials (IP-EOM-CC) and Eq.~(\ref{eq:eagf}) corresponds to the (N$+1$)-electron states (EA-EOM-CC). With $\Xop_{q\bm{k}_q} (\omega)$ and $\Yop_{q\bm{k}_q} (\omega)$, the Green's function is finally expressed as \cite{Kowalski2014,Kowalski2016} \begin{equation} G_{p\bm{k}_pq\bm{k}_q}^{(h)}(\omega) = \mel{\Psi_0}{ (1+\Lop) \crebar{p\bm{k}_p} \Xop_{q\bm{k}_q} (\omega) }{\Psi_0}, \end{equation} \begin{equation} G_{p\bm{k}_pq\bm{k}_q}^{(e)}(\omega) = \mel{\Psi_0}{ (1+\Lop) \anibar{p\bm{k}_p} \Yop_{q\bm{k}_q} (\omega) }{\Psi_0}. \end{equation} We can calculate the single-electron spectrum $A(\omega)$ using the Green's function: \begin{equation} A(\omega) = - \frac{1}{\pi} \Im \left[ \tr \left( G(\omega+i\delta) \right) \right].
\label{spectr} \end{equation} The band structure is obtained simply by decomposing $A(\omega)$ into the contributions from each $k$-point, $A_{\bm{k}}(\omega)$: \begin{equation} A_{\bm{k}}(\omega) = - \frac{1}{\pi} \Im \left[ \sum_p G_{p\bm{k}_pp\bm{k}_p}(\omega+i\delta) \right] \end{equation} In this study, we truncate the excitation operator $\Top$ up to singles and doubles (CCSD) as follows: \begin{eqnarray} \Top &\simeq& \sum_{i\bm{k}_ia\bm{k}_a} \tamp{i\bm{k}_i}{a\bm{k}_a} \cre{a\bm{k}_a} \ani{i\bm{k}_i} \nonumber \\ &+& \frac{1}{4} \sum_{i\bm{k}_ij\bm{k}_ja\bm{k}_ab\bm{k}_b} \tamp{i\bm{k}_ij\bm{k}_j}{a\bm{k}_ab\bm{k}_b} \cre{a\bm{k}_a} \cre{b\bm{k}_b} \ani{j\bm{k}_j} \ani{i\bm{k}_i}. \end{eqnarray} By introducing this truncation in the $\Top$ operator, we derive the following forms of the $\Lop$, $\Xop_{q\bm{k}_q}$, and $\Yop_{q\bm{k}_q}$ operators, maintaining the same accuracy as CCSD: \begin{eqnarray} \Lop &\simeq& \sum_{i\bm{k}_ia\bm{k}_a} \lamp{i\bm{k}_i}{a\bm{k}_a} \cre{i\bm{k}_i} \ani{a\bm{k}_a} \nonumber \\ &+& \frac{1}{4} \sum_{i\bm{k}_ij\bm{k}_ja\bm{k}_ab\bm{k}_b} \lamp{i\bm{k}_ij\bm{k}_j}{a\bm{k}_ab\bm{k}_b} \cre{i\bm{k}_i} \cre{j\bm{k}_j} \ani{b\bm{k}_b} \ani{a\bm{k}_a} \end{eqnarray} \begin{eqnarray} \Xop_{q\bm{k}_q}(\omega) &\simeq& \sum_{i\bm{k}_i} \xamp{i(q\bm{k}_q)}{ } (\omega) \ani{i\bm{k}_i} \nonumber \\ &+& \frac{1}{2} \sum_{i\bm{k}_ij\bm{k}_ja\bm{k}_a} \xamp{i\bm{k}_ij\bm{k}_j(q\bm{k}_q)}{a\bm{k}_a} (\omega) \cre{a\bm{k}_a} \ani{j\bm{k}_j} \ani{i\bm{k}_i} \end{eqnarray} \begin{eqnarray} \Yop_{q\bm{k}_q}(\omega) &\simeq& \sum_{a\bm{k}_a}y_{a\bm{k}_a(q\bm{k}_q)}(\omega)\cre{a\bm{k}_a} \nonumber \\ &+& \frac{1}{2} \sum_{i\bm{k}_ia\bm{k}_ab\bm{k}_b} \yamp{i\bm{k}_i(q\bm{k}_q)}{a\bm{k}_ab\bm{k}_b} (\omega) \cre{a\bm{k}_a} \cre{b\bm{k}_b} \ani{i\bm{k}_i}. \end{eqnarray} In particular, the $\Xop_{q\bm{k}_q}$ operators are truncated up to the $1h$ (first term of the right-hand side (r.h.s.)) and $2h1p$ terms (second term of the r.h.s.), and the $\Yop_{q\bm{k}_q}$ operators are similarly truncated up to the $1p$ (first term of the r.h.s.) and $2p1h$ terms (second term of the r.h.s.). These truncations for $\Xop_{q\bm{k}_q}$ and $\Yop_{q\bm{k}_q}$ lead to the following expressions for the wave functions after electron removal/attachment: \begin{equation} \begin{split} \ket{\Psi^{N-1}_{q\bm{k}_q}} = &e^{\Top} \sum_{i\bm{k}_i} \xamp{{i\bm{k}_i}(q\bm{k}_q)}{} (\omega) \ani{i\bm{k}_i} \ket{\Psi_0} \\ &+ e^{\Top} \sum_{i\bm{k}_ij\bm{k}_ja\bm{k}_a} \xamp{i\bm{k}_ij\bm{k}_j(q\bm{k}_q)}{a\bm{k}_a} (\omega) \cre{a\bm{k}_a} \ani{j\bm{k}_j} \ani{i\bm{k}_i} \ket{\Psi_0} \\ \equiv &e^{\Top} \sum_{1h} \ket{1h} + e^{\Top} \sum_{2h1p} \ket{2h1p} \label{def_Psi_N-1} \end{split} \end{equation} \begin{equation} \begin{split} \ket{\Psi^{N+1}_{q\bm{k}_q}} = &e^{\Top} \sum_{a\bm{k}_a} \yamp{a\bm{k}_a(q\bm{k}_q)}{} (\omega) \cre{a\bm{k}_a} \ket{\Psi_0}\\ &+ e^{\Top} \sum_{i\bm{k}_ia\bm{k}_ab\bm{k}_b} \yamp{i\bm{k}_i(q\bm{k}_q)}{a\bm{k}_ab\bm{k}_b} (\omega) \cre{a\bm{k}_a} \cre{b\bm{k}_b} \ani{i\bm{k}_i} \ket{\Psi_0} \\ \equiv &e^{\Top} \sum_{1p} \ket{1p} + e^{\Top} \sum_{2p1h} \ket{2p1h}, \label{def_Psi_N+1} \end{split} \end{equation} where we introduced the notations $\ket{1h}$, $\ket{2h1p}$, $\ket{1p}$, and $\ket{2p1h}$ for subspaces of the Hilbert space, representing one electron annihilated, one electron annihilated plus one electron excited, one electron created, and one electron created plus one electron excited from the HF electron configuration, respectively.
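To make the procedure above concrete, the following minimal numerical sketch (not the production implementation; the matrix, right-hand-side vector, and left vector are random stand-ins for the actual CCSD quantities in the $1h+2h1p$ space) solves the linear equation (\ref{eq:ipgf}) on a frequency grid and assembles the hole part of the spectral function:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
dim = 50                                    # size of the model 1h+2h1p space
Hbar = 0.05 * rng.normal(size=(dim, dim))   # non-Hermitian in general
Hbar += np.diag(np.linspace(0.3, 3.0, dim)) # model removal energies (Hartree)

b = rng.normal(size=dim)    # stand-in for  a_q |Psi_0>
l = rng.normal(size=dim)    # stand-in for  <Psi_0|(1 + Lambda) abar_p

delta = 0.005               # broadening (Hartree)
omegas = np.linspace(-3.5, 0.0, 400)
A = np.empty_like(omegas)

for i, w in enumerate(omegas):
    # (omega + i*delta + Hbar_N) X(omega) = a_q |Psi_0>  (IP equation)
    X = np.linalg.solve((w + 1j * delta) * np.eye(dim) + Hbar, b)
    G = l @ X                       # hole part of the Green's function
    A[i] = -G.imag / np.pi          # spectral function A(omega)
\end{verbatim}
In the actual calculation, the same frequency-by-frequency linear solve is repeated for the electron-attachment equation (\ref{eq:eagf}) and for every orbital and $k$-point index.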
The computational cost for CCSD and $\Lambda$-CCSD is $\order{N^6 N_k ^4}$, where $N_k$ is the number of sampled $k$-points in the Brillouin zone. Solving the IP/EA-EOM-CCSD linear equations is computationally demanding. We use the LU-decomposition method, which costs $\order{N^9 N_k ^6 N_{\omega}}$, where $N_{\omega}$ is the number of $\omega$ mesh points. \section{Results} \label{sec:periodic_1d} \subsection{One-dimensional LiH chain}\label{results_LiH} We first show the calculated results for the one-dimensional LiH chain. We consider a system in which Li and H atoms are aligned alternately and the Li-H bond lengths are the same everywhere. \begin{figure}[tb] \begin{center} \includegraphics[width=0.7\linewidth]{LiH-Etot-lattice.jpg} \caption{Dependence of the total energy of the LiH chain on the lattice constant. Red and blue lines represent the total energy obtained from HF and CCSD calculations, respectively. \label{img:LiH-Etot-lattice}} \end{center} \end{figure} We first optimized the lattice constant based on HF and CCSD. The reference state in Eq.~(\ref{wfn_CCSD}) has been obtained by the restricted Hartree--Fock (RHF) method with the STO-3G basis set, i.e., H-$1s$, Li-$1s$, Li-$2s$, and Li-$2p$ orbitals. The number of sampled $k$-points, which we shall refer to as $N_k$ throughout this paper, is set to 8 for this examination. In Fig.~\ref{img:LiH-Etot-lattice}, we show the total energy from HF calculations, $E_\mathrm{HF}$, and that from CCSD calculations, $E_\mathrm{CCSD}$. We find that the total energy is minimized at $6.21$ {\AA} in HF and $6.24$ {\AA} in CCSD calculations. Here we note that both the lattice constant and the minimized total energy in the HF scheme are comparable to those calculated in past studies \cite{shukla1998, delhalle1980}. We adopt the latter value (the CCSD one) as the lattice constant of the LiH chain throughout this paper. Next we determined $N_k$ by checking the dependence of the lattice constant and the band structures on $N_k$. We have compared the optimized lattice constant with $N_k=8$ and that with $N_k=16$. We have found that the two lattice constants differ by less than $0.01$ {\AA}. This shows that $N_k=8$ is large enough for the calculation of the lattice constant. Therefore, we adopted $N_k=8$ in the subsequent calculations. \begin{figure} \includegraphics[width=0.9\linewidth]{LiH-spectrum-all.jpg} \caption{Band structure of the LiH chain from HF (a) and GFCCSD (b) calculated with $N_k=8$. $a$ on the horizontal axis represents the lattice constant.} \label{LiH-spectrum-all} \end{figure} Fig.~\ref{LiH-spectrum-all}(a) shows the band structure calculated by the HF method. Two valence and four conduction bands appear, all of which are spin-degenerate. The system is a typical ionic one, and an electron is thought to be transferred from the Li to the H atom. By checking the wave function character of each band, we have confirmed that the lowest band at $-2.3$ Hartree is mainly attributed to the Li-$1s$ orbital, while the second lowest one is attributed to the H-$1s$ orbital. The lower two conduction bands are made up of Li-$2p$ orbitals whose directions are orthogonal to the Li-H bonds. The third lowest conduction band has Li-$2s$ character, while the highest-energy band has Li-$2p$ character. The calculated band gap at the $\Gamma$ point is $0.49$ Hartree. We present the band structure in the GFCCSD scheme in Fig.~\ref{LiH-spectrum-all}(b) with $\delta=0.005$ Hartree.
Compared with the band structure in the HF scheme, the quasiparticle bands have become broad, especially the conduction bands, reflecting the finite lifetime of the quasiparticles. The calculated band gap is $0.45$ Hartree in GFCCSD, which is narrower than that in HF. This fact agrees with the empirical rule that the correlation effect narrows band gaps. Another striking feature is the emergence of satellite bands at $-1.04$ Hartree. \begin{figure} \begin{minipage}{1\linewidth} \centering \subcaption{Overall peaks} \includegraphics[width=0.7\linewidth]{LiH-DOS-k8.jpg} \end{minipage}\\ ~\\~\\~ \begin{minipage}{1\linewidth} \centering \subcaption{Satellite peaks} \includegraphics[width=0.7\linewidth]{LiH-DOS-k8-sat.jpg} \end{minipage} \caption{The density of states (DOS) of the LiH chain calculated from GFCCSD with $N_k=8$. The positions of the green sticks in (a) represent the eigenvalues obtained in HF calculations. In (a), the overall shape is shown. The regions where satellite peaks emerge are enlarged in (b). \label{img:LiH-DOS-k8}} \end{figure} Fig.~\ref{img:LiH-DOS-k8} shows the density of states (DOS) of the LiH chain. It should be noted that the DOS in Fig.~\ref{img:LiH-DOS-k8} shows very spiky peaks due to the limitation of the $k$-point sampling. It is expected, therefore, that in the $N_k \to \infty$ limit the gaps between the spiky peaks close up into a continuous spectrum. In the plot, we observe two sharp peaks at $-2.30$ and $-0.34$ Hartree, broader peaks near $0.1$ Hartree, and a hump-like one located at around $0.6$ Hartree. We identify all of these as quasiparticle peaks that correspond to certain energy bands. We can identify the characters of these peaks by checking the wave function at each peak. The two sharp peaks correspond to the lowest and the second lowest bands. The group of peaks derives from the three conduction bands that are located in the range between $0$ and $0.3$ Hartree in Fig.~\ref{LiH-spectrum-all}. The hump-like peak corresponds to the highest conduction band. We also confirm the shift of these peaks from the HF results, which are indicated by green sticks in Fig.~\ref{img:LiH-DOS-k8}. The lowest and second lowest quasiparticle peaks in GFCCSD are about $0.07$ and $0.03$ Hartree higher than in the HF scheme, respectively, while the conduction-band minimum is lower by $0.01$ Hartree. The satellite peaks we observed are located at $-2.93$, $-2.79$, and $-1.04$ Hartree. Above 0 Hartree, in contrast, we find no clear satellite peaks. By integrating the satellite peaks between the first and second quasiparticle peaks, the weight is calculated to be 0.14. \subsubsection{Restricting the active space in LiH chain} \label{subsec:active-space} Since the calculation cost of GFCCSD is huge, at least $\order{N^6 N_k ^4}$ for periodic systems, the number of orbitals taken into account should be reduced, in other words, the size of the active space should be minimized, as long as the accuracy of the calculation is maintained. One idea is to exclude some orbitals that are unlikely to improve the reference wave functions, such as deep levels or unoccupied orbitals that are far from the Fermi level. This consideration is what is called the restriction of the active space in quantum chemistry. This has to be done carefully by considering the physical meaning of each orbital in the material.
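As a minimal illustration of this idea (the orbital energies below echo the LiH bands discussed above, but the energy window is a hypothetical choice, not the criterion used in our calculations), one can select an active space by an energy window around the Fermi level:
\begin{verbatim}
# Energy-window selection of an active space (illustrative sketch).
eps = [-2.30, -0.34, 0.08, 0.12, 0.25, 0.61]   # HF orbital energies (Hartree)
n_occ = 2                                      # number of occupied orbitals
e_fermi = 0.5 * (eps[n_occ - 1] + eps[n_occ])  # midgap estimate

window = 1.5  # Hartree; to be validated by re-converging the DOS
active = [p for p, e in enumerate(eps) if abs(e - e_fermi) < window]
frozen = [p for p in range(len(eps)) if p not in active]
print("active:", active, "frozen:", frozen)    # freezes the deep Li-1s level
\end{verbatim}
Whatever selection rule is used, the resulting DOS must be compared against the full-orbital result near the gap, as done below.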
\begin{figure}[tb] \begin{center} \includegraphics[width=0.7\linewidth]{LiH-DOS-k8-comparison.jpg} \caption{Comparison of all-electron (AE) and valence-electron (VE) calculations of the LiH chain for the DOS. \label{img:LiH-AE-VE-comparison}} \end{center} \end{figure} To check the validity of the choice of the active space in the LiH chain case, we first examined the DOS, changing only the active space from that used in subsec.~\ref{results_LiH}. As shown in Fig.~\ref{LiH-spectrum-all}(a), of the two valence bands, the lower one is energetically far from the Fermi energy, implying that its contribution to the correlation energy might be negligible. Therefore, we performed GFCCSD calculations neglecting the lowest band. We compared the DOS of subsec.~\ref{results_LiH} with that obtained with the smaller active space, as presented in Fig.~\ref{img:LiH-AE-VE-comparison}. We find that the peak positions are identical to each other above $-1.5$ Hartree. The VE plot shown as a blue line has no peak below $-2.0$ Hartree because of the lack of the lowest band in its active space. This demonstrates that by choosing a proper active space, we can reduce the calculation cost without reducing the calculation accuracy. \subsection{One-dimensional C chain} The unit cell of a C chain contains two inequivalent C atoms that form periodically arranged dimers. The geometric structure of the C chain has been determined as the one that minimizes the CCSD total energy. We have relaxed both the lattice constant and the C-C bond length at the same time with the STO-3G basis set, which includes the C-$1s$, $2s$, and $2p$ orbitals. The energy surface is shown in Fig.~\ref{img:C-Etot}. The optimized lattice constant and the C-C bond length have been found to be $5.0$ and $2.29$ Bohr ($2.65$ and $1.21$ {\AA}), respectively. These values are 5\% larger and 1\% smaller than the experimental ones, $4.76$ and $2.32$ Bohr ($2.52$ and $1.23$ {\AA}) \cite{shi2016}. \begin{figure}[tb] \begin{minipage}{1\linewidth} \centering \subcaption{Hartree--Fock} \includegraphics[width=0.7\linewidth]{C-EHF.jpg} \end{minipage}\\ ~\\~\\~ \begin{minipage}{1\linewidth} \centering \subcaption{CCSD} \includegraphics[width=0.7\linewidth]{C-ECCSD.jpg} \end{minipage} \caption{Dependence of the total energy of the C chain on the lattice constant (horizontal axis) and the C-C distance within a unit cell (vertical axis) calculated with $N_k=4$. The energy is shown in units of Hartree. \label{img:C-Etot}} \end{figure} \begin{figure} \includegraphics[width=0.9\linewidth]{C-spectrum-all.jpg} \caption{Band structure of the C chain from (a) HF and (b) GFCCSD calculated with $N_k=6$. $a$ on the horizontal axis represents the lattice constant.} \label{img:C-spectrum-all} \end{figure} \begin{figure} \includegraphics[width=0.5\linewidth]{C-spectrum-k8-cut.jpg} \caption{Band structure of the C chain from GFCCSD calculated with $N_k=8$. $a$ on the horizontal axis represents the lattice constant.} \label{img:C-spectrum-k8} \end{figure} We first examined the HF band structure of the C chain. The band structure is shown in Fig.~\ref{img:C-spectrum-all}(a). There are doubly degenerate bands at $-11$ Hartree. They are found to derive from the $1s$ orbitals of the C atoms. The character of the valence band, which is doubly degenerate, is a hybridization of two carbon $2p$ orbitals perpendicular to the C-C direction. The C chain is a typical covalent material. Next we explored the possibility of reducing the active space following subsec.~\ref{subsec:active-space}, adopting $N_k=4$.
The doubly degenerate bands at $-11$ Hartree are expected to make little contribution to the chemical bonding of the system. Therefore, it is reasonable to exclude these two bands from the active space. This notion has been found to be valid by confirming that the DOS obtained from the all-electron calculation and the one obtained without the deep-level bands coincide with each other near the gap. Fig.~\ref{img:C-spectrum-all}(b) shows the band structure in GFCCSD with the optimized parameters stated above. The band gap, which is calculated from the peak positions at the Brillouin zone edge $\pi/a$, is 0.50 Hartree, while in HF it is 0.55 Hartree, indicating the correction of the band structure by the incorporation of the correlation effect. Also, in this system, we observe satellite peaks below the quasiparticle peak located at $-1$ Hartree. However, one can see a clear difference from the LiH chain: the satellite peaks are much broader. To take a closer look at the satellite peaks, we show the results for $N_k=8$ in Fig.~\ref{img:C-spectrum-k8}. The DOS calculated in GFCCSD is shown in Fig.~\ref{img:C-DOS-k6-cut}. Sharp peaks between $-0.96$ and $-0.8$ Hartree and those between $-0.8$ and $-0.5$ come from the second and the third lowest band, respectively, both of which are $sp$-hybridized orbitals that form $\sigma$ bonds with neighboring atoms. Those between $-0.5$ and $-0.2$, on the other hand, correspond to two degenerate $2p$ orbitals that are orthogonal to the bonding direction and form $\pi$ bonds. One distinct feature in the plot is the emergence of broad satellite peaks just below the lowest quasiparticle peak at $-0.8$ Hartree. The integrated value of the satellite peaks, which are located below $-1$ Hartree, is $0.96$. Considering that this system is spin-degenerate and thus every spatial orbital is occupied by two electrons, this implies that some quasiparticle peaks between $-1$ and $0$ Hartree consist of less than two electrons. Examining the valence quasiparticle peaks, all of which correspond to a certain mean-field energy band, we have found that the integrals of the lowest and the second lowest quasiparticle peaks both yield 1.5, while those of the other peaks are close to 2. This indicates that the satellite peaks derive from the lowest and the second-lowest quasiparticle peaks. \begin{figure}[tb] \begin{minipage}[b]{1\linewidth} \centering \subcaption{Total DOS} \includegraphics[width=0.8\linewidth]{C-DOS-k6-cut.jpg} \end{minipage} \\ \begin{minipage}[b]{1\linewidth} \centering \subcaption{Satellite peaks} \includegraphics[width=0.8\linewidth]{C-DOS-k6-cut-sat.jpg} \end{minipage} \caption{DOS of the C chain calculated with $N_k=6$. The positions of the green sticks in (a) represent the eigenvalues obtained in HF calculations. \label{img:C-DOS-k6-cut}} \end{figure} \subsection{One-dimensional Be chain} \begin{figure} \includegraphics[width=0.9\linewidth]{Be-spectrum-all.jpg} \caption{Band structure of the Be chain calculated from (a) HF and (b) GFCCSD with $N_k=14$. $a$ on the horizontal axis represents the lattice constant.} \label{img:Be-spectrum-all} \end{figure} \begin{figure}[tb] \begin{minipage}[b]{0.45\linewidth} \centering \subcaption{Deep level} \includegraphics[width=1\linewidth]{Be-spectrum-k14-1s.jpg} \end{minipage} \begin{minipage}[b]{0.45\linewidth} \centering \subcaption{Near the gap} \includegraphics[width=1\linewidth]{Be-spectrum-k14-gap.jpg} \end{minipage} \caption{Enlarged illustration of Fig.
\ref{img:Be-spectrum-all}: (a) the lowest valence states and (b) those near the gap. \label{Be-band-fine}} \end{figure} \begin{figure}[tb] \includegraphics[width=0.7\linewidth]{Be-DOS-k14-all.jpg} \caption{Density of states (DOS) of the Be chain calculated from GFCCSD with $N_k=14$. \label{img:Be-DOS-k14-all}} \end{figure} As an example of van der Waals materials, we chose a one-dimensional Be chain as our target. The Be atom has closed shells up to the $2s$ orbital. Therefore, the force binding the Be atoms together is the van der Waals interaction. We used the lattice constant of 3.0 {\AA} determined in Ref.~\cite{hirata2004}. The number of $k$-points is determined to be 14 by checking that the total energy is converged within an error of 10 meV/cell. The band structure calculated with HF is shown in Fig.~\ref{img:Be-spectrum-all}(a). The lowest band, located at $-4.5$ Hartree, has Be-$1s$ character. The second lowest is made up of the Be-$2s$ orbital, with some dispersion resulting from the interaction between adjacent Be atoms. In contrast, the lowest conduction band at the $\Gamma$ point is doubly degenerate, consisting of the Be-$2p$ orbitals perpendicular to the Be-Be direction. The highest band is the Be-$2p$ orbital pointing along the Be-Be direction. We applied the GFCCSD method to this system, whose result is shown in Fig.~\ref{img:Be-spectrum-all}(b). The overall features of the quasiparticle peaks are understood by comparing with the HF results. One of the most interesting points is the appearance of two different kinds of satellite peaks. One can see discrete and almost flat satellite peaks (see also Fig.~\ref{Be-band-fine}(a), in which the satellite peaks around the $1s$ level are shown in the enlarged picture). We have checked the dependence of the number of satellite peaks on $N_k$ by changing $N_k$ from 10 to 14. We did not find, however, any differences in the number of satellite peaks. Therefore we conclude that the number of satellite peaks is insensitive to the number of $k$-points. Similar satellite peaks are also seen on the unoccupied side above $0$ Hartree. The doubly degenerate conduction band has many duplicates above the quasiparticle bands. The other type of satellite peaks is observed below the highest valence band in the energy region between $0$ and $-0.7$ Hartree. For this type of satellite peak, we do not see duplicate bands, unlike the first type. By increasing $N_k$, we can see a band structure of satellite peaks below the highest valence band with a dispersion different from that of the valence band. The calculated DOS is also shown in Fig.~\ref{img:Be-DOS-k14-all}. We can also see the differences between the two kinds of satellite peaks in the figure: one appearing as a spiky structure, and the other as a broad bump structure. \section{Conclusion} \label{conclusion} We have calculated the band structures through the GFCCSD method for various kinds of systems, from ionic to covalent and van der Waals systems, for the first time: the one-dimensional LiH chain, the one-dimensional C chain, and the one-dimensional Be chain. We have found that the band gap becomes narrower than in HF due to the correlation effect. We have also shown the band structures obtained from GFCCSD, which include both quasiparticle and satellite peaks.
Also, taking the one-dimensional LiH chain as an example, we have discussed the validity of restricting the active space to suppress the computational cost of GFCCSD while keeping the accuracy, and found that the results calculated without the bands that do not contribute to the chemical bonding were in good agreement with the full-band calculations. With the GFCCSD method, we can calculate the total energy and the band structure within the framework of CCSD with great accuracy. \begin{acknowledgments} This research was supported by MEXT as ``Exploratory Challenge on Post-K computer'' (Frontiers of Basic Science: Challenging the Limits). This research used computational resources of the K computer provided by the RIKEN Advanced Institute for Computational Science through the HPCI System Research project (Project ID: hp170261). Y.M. acknowledges the support from JSPS Grant-in-Aid for Young Scientists (B) (Grant No. 16K18075). \end{acknowledgments} \bibliographystyle{apsrev4-1}
\section{Introduction} Blockchain has enabled computer systems to be more secure using a distributed network~\cite{zhang2022sok,zhang2022,zhang2022BNS,ao2022}. However, the current blockchain design suffers from fairness issues in transaction ordering~\cite{kelkar2022order}. Miner extractable value (MEV), first coined by Daian et al. in 2020~\cite{daian_flash_2019-1}, refers to the value that miners can extract by reordering the transactions on the blockchain. For example, on the Proof-of-Work (PoW) Ethereum~\cite{ethereum.org}, miners can order, include, and exclude transactions in the mem-pool, a pool where transactions are stored and sorted temporarily before being added to new blocks. Researchers~\cite{torres2021frontrunner} found that from 2015 to 2020, 199724 frontrunners gained cumulative profits of more than 18.4 billion USD. Since the transition of Ethereum from PoW to Proof-of-Stake (PoS), miners no longer have a role in the blockchain protocol. Instead, validators take charge of validating transactions on the blockchain. However, the method of extracting value by manipulating the transaction order still exists. Therefore, people now use MEV as an abbreviation for the maximum extractable value in PoS Ethereum, the so-called Ethereum 2.0. Existing research recognizes MEV as a severe security issue and proposes potential solutions~\cite{Chainlink,yang_2022_sok}, including the prominent Flashbots~\cite{weintraub2022flash}. However, previous studies have mostly analyzed blockchain data, which might not capture the impacts of MEV in a much broader AI society. Therefore, we extend the study of MEV from blockchain data to a broader community on social media platforms. Specifically, our study targets two research questions (RQs): \begin{enumerate} \item\textbf{RQ1}: What are the main keywords and topics being discussed in tweets with \#MEV and \#flashbots hashtags, and what are the connections between those keywords? \item\textbf{RQ2}: What are the connections between the MEV activities on blockchain and discussions on social media platforms? \end{enumerate} In this study, we applied natural language processing (NLP) methods to comprehensively analyze topics in tweets on MEV. We queried more than 20000 tweets with the \#MEV and \#Flashbots hashtags from 2019 to October 2022. We also included the corresponding Google Trends data for the same period for reference and comparison. To explore the connections between the MEV activities on blockchain and discussions on social media platforms, we collected the gross profit data of MEV from Flashbots. Our results show that the tweets discussed profound topics of ethical concern, including security, equity, emotional sentiments, and the desire for solutions to MEV. According to the keyword statistics, the discussion about MEV is highly concentrated on the Ethereum blockchain. The results also indicate that the MEV problem is one of the most urgent problems on the Ethereum blockchain, and practical solutions are highly demanded. In addition to Flashbots, the topics mention several alternative solutions to MEV, but Flashbots appears to be the most promising one. Some potential nontraditional solutions are also mentioned, such as machine learning. Moreover, other nontechnical keywords indicate that people generally express negative emotions toward MEV, e.g., a feeling of unfairness. We also identify the co-movements of MEV activities on blockchain and social media platforms.
Our study contributes to the literature at the interface of blockchain security, MEV solutions, and AI ethics. In Section 2, we discuss the related literature and background. Section 3 introduces the data and methodology. Section 4 presents the results for the two research questions. Section 5 discusses and concludes. We provide a glossary Table~\ref{tab:Glossary Table} in the Appendix. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/Venn_diagram.png} \caption{Common topics between MEV and flashbots, and unique topics in the two hashtags} \label{fig:blockchain-security} \end{figure} \section{Related Literature and Background} Our research contributes to three lines of literature: blockchain security, MEV solutions, and AI ethics. \subsection{Blockchain Security: MEV issues} The Ethereum blockchain facilitates transactions with the use of smart contracts. In Ethereum, nodes collect transaction information from the network, and miners record the transactions into blocks. Before being added to the blocks, transactions are temporarily stored and sorted in the mem-pool. Miners select transactions in the mem-pool and execute Proof of Work. Whichever miner wins the PoW race can add the block to the network~\cite{ethereum.org}. The order of transactions is predetermined: the outcome of executing a transaction depends on the transactions executed before it, whether in preceding blocks or earlier in the same block. However, as Daian et al.~\cite{daian_flash_2019} showed when introducing frontrunning on cryptocurrency decentralized exchanges (DEXs), miners can change the order of the transactions. In general, MEV is an activity in which attackers (or profit seekers) discover certain instabilities and look for extractable values~\cite{sandwichattack}. MEV has different strategies to obtain profits. One of the most common strategies is called a sandwich attack. For example, if A is submitting a transaction to purchase a token, the attackers who discover A's attempt can buy this token ahead of A, driving its price up, and then sell the token after A's purchase at the higher price. The attackers thus manage to extract a profit from the series of transactions. Another commonly used strategy is called the arbitrage attack. In arbitrage, the same good is purchased and sold simultaneously in different markets, and profits are made from the price difference between the markets. In Ethereum, if two or more DEXs offer the same token at different prices simultaneously, one can buy the cheaper token and sell it at a higher price. Our research contributes to the literature by analyzing concerns about blockchain security discussed on social media platforms. \subsection{MEV solutions: Flashbots and alternatives} Methods to mitigate MEV problems are divided into two main categories: democratization and minimization. MEV minimization sets up rules that make MEV impossible or raise the risk above the MEV benefits. For example, Ethereum 2.0 upgrades from proof-of-work (PoW) to proof-of-stake (PoS) and introduces slashing~\cite{piet_extracting_2022} to punish misbehavior regarding MEV. Third-party researchers propose minimization solutions such as fair sequencing services~\cite{Chainlink} and Conveyor~\cite{network_whats_2021}. Alternatively, MEV democratization tries not to eliminate but to democratize MEV behavior so that everyone has access to information that is available to miners.
The most popular approaches, such as Flashbots~\cite{weintraub2022flash}, direct transactions to private relays to reduce the mem-pool bidding war~\cite{noauthor_flashbots_nodate}.\footnote{There are also similar solutions like Eden Network~\cite{piatt_eden_2021} and CoW Protocol~\cite{noauthor_cow_nodate}, etc.} Flashbots aims to mitigate the negative externalities of the current MEV by establishing a fair, transparent, and permissionless ecosystem. Two initiatives mainly support it: \textit{MEV-geth} and \textit{MEV-inspect}~\cite{daian_flash_2019}. Specifically, Flashbots provides a new auction ecosystem with three primary roles: searchers, relays, and miners. Searchers seek MEV opportunities. Once they find potential transactions promoted by peers, they create a bundle that contains a set of transactions. The bundle includes the fee paid to the miners and to the searchers themselves. Searchers then send the bundles to the relays instead of the mem-pool. Relays collect the bundles and send them to the miners. Since the bundles are sent through Flashbots, the miners are exclusively Flashbots miners. Miners then collect bundles and select the most profitable ones, and only one bundle can be included in each block. Miners can determine which transactions to mine based on MEV-geth, a forked version of the Go-Ethereum client~\cite{weintraub2022flash}. Our research contributes to the literature by evaluating MEV solution discussions on social media platforms and their connections to MEV activities on the blockchain. \subsection{AI Ethics} Researchers have expressed concern about the ethical aspects of security issues related to blockchain technologies. Bertino et al.~\cite{bertino2019data} noted that if data were gathered and used based on ethical principles of data transparency, it would provide a novel way for policy-makers to assess the mechanism of blockchain transactions. Another group of researchers proposed a list of ethical concerns about blockchain technology categorized into four areas: technology stack, cryptocurrencies, smart contracts, and decentralization. One of the major concerns in the cryptocurrency area is whether the coin mining mechanism is ethically sustainable and fair~\cite{tang2019ethics}. Regarding this question, Weintraub and colleagues assessed Flashbots. They argued that in the new auction mechanism of Flashbots (as introduced previously), only some users can receive fair profits, and miners benefit more than searchers~\cite{weintraub2022flash}. However, although the researchers provided comprehensive insights into the ethical discussion, the existing research needs to better evaluate the ethical issues of blockchain based on the real-life reactions of blockchain users. In this article, we measure and evaluate people's reactions and feedback toward blockchain security issues on social media platforms. \section{Data and Methodology} The data and code in this project are open-sourced and can be accessed at \url{https://github.com/SciEcon/blockchain-ethics} \begin{table}[!htbp] \centering \begin{tabular}{|>{\hspace{0pt}}m{0.069\linewidth}|>{\hspace{0pt}}m{0.298\linewidth}|>{\hspace{0pt}}m{0.571\linewidth}|} \hline \textbf{Index} & \textbf{Date} & \textbf{Tweets} \\ \hline 0 & 2021-12-31 15:53:38+00:00 & @willwarren No worries @foldfinance solves thi... \\ \hline 1 & 2021-12-31 05:56:23+00:00 & Vshadow \textbackslash{}n\#Imgoodgirl\textbackslash{}n... \\ \hline 2 & 2021-12-31 05:49:28+00:00 & This is what a sandwich attack looks like. The...
\\ \hline 3 & 2021-12-29 21:11:16+00:00 & \#Memoria...líderes de jxc apoyaron públicament... \\ \hline 4 & 2021-12-29 17:34:14+00:00 & even if you don’t have that much money to clai... \\ \hline \end{tabular} \caption{Sample Tweets Data} \label{tab:sample-tweets-data} \end{table} \subsection{Data} We collected three kinds of data: tweets, Google Trends data, and on-chain MEV records. For tweets, we used \texttt{snscrape}\footnote{\url{https://github.com/JustAnotherArchivist/snscrape}} to query the primary data for our research. \texttt{snscrape} is a Python library to scrape posts on a variety of social networks with specific topics or hashtags. We queried two datasets, one with the hashtag \#mev and the other with the hashtag \#flashbots. We queried Twitter data from 2019-01-01 to 2022-10-01. In total, we found 20574 tweets with the hashtag \#mev and 852 tweets with the hashtag \#flashbots. The queried data include two columns: date and content. Table \ref{tab:sample-tweets-data} shows examples of the downloaded data. Next, we use the Python library \texttt{pytrends}\footnote{\url{https://github.com/GeneralMills/pytrends}} to query Google Trends data for two topics, ``MEV'' and ``flashbots''. \texttt{pytrends} provides an API to automatically download reports from Google Trends\footnote{\url{https://trends.google.com/trends}}. Then, we merge the Google Trends data with the tweets by date as in Table~\ref{tab:sample-merge-data}. In addition, we also queried Ethereum MEV records from the Flashbots MEV-Explore dashboard\footnote{The Flashbots MEV-Explore public dashboard, \url{https://explore.flashbots.net/}, consists of various historical statistics of MEV activities on the Ethereum blockchain.} to compare on-chain and social media activities. \begin{table}[!htbp] \centering \begin{tabular}{|l|l|l|p{.2\linewidth}|l|} \hline & \textbf{date} & \textbf{google trend} & \textbf{tweet\_volume} & \textbf{tweet\_len} \\ \hline \textbf{0} & 2021-04-11 & 23 & 3 & 20.666667 \\ \hline \textbf{1} & 2021-04-18 & 23 & 2 & 37.500000 \\ \hline \textbf{2} & 2021-04-25 & 0 & 1 & 42.000000 \\ \hline \textbf{3} & 2021-05-02 & 0 & 1 & 34.000000 \\ \hline \textbf{4} & 2021-05-09 & 24 & 1 & 19.000000 \\ \hline \end{tabular} \caption{Sample Merged Data: column \textbf{date} shows the date in YYYY-MM-DD format; column \textbf{google trend} shows the Google Trends index; column \textbf{tweet\_volume} is the count of tweets with a specific topic (``MEV'' for example); column \textbf{tweet\_len} is the sum of the length of the tweets in one day.} \label{tab:sample-merge-data} \end{table} \subsection{Methodology} Our NLP methods include keyword analysis and Latent Dirichlet Allocation (LDA), similar to the quantitative methods in~\cite{tong_what_2022-1}. \subsubsection{Keywords Analysis Methods} Analyzing the trend of discussion on a topic on social media, and what is most relevant to it, helps us understand the topic's history, development, and likely future trends. In this study, we trace and then quantify the activity of the hashtags \#mev and \#flashbots on Twitter and compare them with their Google Trends profiles. We first conduct the Spearman correlation test between tweet volume and Google Trends data and then plot the time series for each hashtag to reveal the correlation between their activity on Twitter and Google.
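A minimal sketch of this step (the file name follows our merged table above but is otherwise hypothetical, as are the exact column names) is:
\begin{verbatim}
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("merged_mev.csv", parse_dates=["date"])

# Rank correlation between the daily Google Trends index and tweet volume
rho, p = spearmanr(df["google trend"], df["tweet_volume"])
print(f"Spearman rho = {rho:.3f}, p = {p:.3g}")

# Time series of the two activity measures for visual comparison
ax = df.set_index("date")[["google trend", "tweet_volume"]].plot()
ax.figure.savefig("trend_vs_volume.png")
\end{verbatim}
We use the Spearman (rank) correlation rather than Pearson because both series are heavy-tailed and we only need monotone consistency between them.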
Next, we count and sort the keywords' appearances in tweets (irrelevant words such as emojis and common words are excluded) and draw a word cloud that shows the topics most relevant to the two hashtags \#mev and \#flashbots discussed on social media. After that, we use the Python library \texttt{NetworkX}\footnote{\url{https://networkx.org/}} to draw a network of keywords. An edge in the network indicates that two keywords occur together, and the thickness of the edge is proportional to the frequency of co-occurrence. \subsubsection{Latent Dirichlet Allocation for Topic Analysis} We utilize Latent Dirichlet Allocation (LDA)~\cite{blei_latent_2003} to reveal the topic tendency of the collected tweets on \#mev and \#flashbots. LDA is a statistical model that groups data and explains why some groups of data are similar. The LDA model is widely used in natural language processing. Our research utilizes an LDA model implemented in the Python library \texttt{gensim}~\cite{rehurek_lrec}. The \texttt{gensim} implementation of LDA has three hyperparameters: (1) an integer $K$, the number of topics; (2) a real number $\alpha$ between 0 and 1 that controls the per-document topic distribution, where a higher $\alpha$ results in more topics per document; (3) a real number $\beta$ between 0 and 1 that controls the per-topic word distribution, where a higher $\beta$ results in more keywords per topic. The LDA model assigns to the corpus the probability shown in equation \eqref{eq:lda}. \begin{equation} \begin{aligned} &p(\mathcal{D}|\alpha, \beta)\\ &=\prod_{d=1}^{M}{ \int{p(\theta_d|\alpha)}\left( \prod_{n=1}^{N_d}{ \sum_{z_{d_n}}{ p(z_{d_n} | \theta_d)p(w_{d_n}|z_{d_n},\beta) } } \right)d\theta_d } \label{eq:lda} \end{aligned} \end{equation} In equation \eqref{eq:lda}, $\mathcal{D}$ is a corpus of $M$ documents, $\theta_d$ is the topic mixture of document $d$, $z_{d_n}$ is the topic assignment of the $n$-th word $w_{d_n}$, and $N_d$ is the number of words in document $d$. The model is trained with various $\alpha$, $\beta$, and $K$. We use a coherence score for parameter optimization. A high coherence score for a topic indicates a higher semantic similarity among the keywords in that topic. We manually tried out $K\in\{1,5,10,15,20,25,30\}$ and then adopted the SA-LDA~\cite{pathik_simulated_2020} algorithm to optimize $\alpha$ and $\beta$. Ultimately, we achieved $K=20$, $\alpha=0.31$ and $\beta=0.61$ with a coherence score of 0.4296 for hashtag \#flashbots and $K=5$, $\alpha=0.25$ and $\beta=0.91$ with a coherence score of 0.4687 for hashtag \#mev. \section{Results} \subsection{Answers to RQ1} This section answers RQ1 based on the four analyses below: \begin{enumerate} \item We calculate and rank the frequent keywords among the tweets under the hashtags \#MEV and \#Flashbots. \item We use LDA analysis to analyze people's reactions and emotions behind the potential MEV security issue. \item We use network analysis (NA) to seek the intrinsic relationships between keywords under each Twitter hashtag. \item We compare the Google Trends data with real-world events.
\end{enumerate} \subsubsection{Keywords} \begin{table}[!htbp] \begin{minipage}[c]{0.5\textwidth} \begin{tabular}{|l|l|} \hline Keywords & Frequency \\ \hline \#flashbots & 513 \\ \hline \#Flashbots & 340 \\ \hline MEV & 219 \\ \hline \#MEV & 152 \\ \hline ETH & 85 \\ \hline mist & 84 \\ \hline Flashbots & 76 \\ \hline opensea & 72 \\ \hline \#Ethereum & 67 \\ \hline Support & 65 \\ \hline artist & 65 \\ \hline grow & 65 \\ \hline Shill & 65 \\ \hline Shizzlebotz & 65 \\ \hline \#nftshill & 65 \\ \hline \#PolygonNFT & 65 \\ \hline \#openseaNFT & 65 \\ \hline \#mev & 49 \\ \hline thegostep & 49 \\ \hline transactions & 47 \\ \hline gas & 46 \\ \hline miners & 41 \\ \hline MIST & 41 \\ \hline \#DeFi & 40 \\ \hline Ethereum & 39 \\ \hline team & 39 \\ \hline bertcmiller & 31 \\ \hline transaction & 31 \\ \hline \#FlashBots & 29 \\ \hline \#mistX & 28 \\ \hline NFT & 26 \\ \hline front & 25 \\ \hline \#riverfrontrocks & 25 \\ \hline \#ethereum & 24 \\ \hline EST & 24 \\ \hline \end{tabular} \caption{Wordcount for \#flashbots} \label{wordcount_flashbot} \end{minipage} \begin{minipage}[c]{0.5\textwidth} \begin{tabular}{|l|l|} \hline Keywords & Frequency \\ \hline \#MEV & 19928 \\ \hline \#arbitrage & 7511 \\ \hline Ecocent & 5966 \\ \hline MEV & 5957 \\ \hline WETH & 5339 \\ \hline video & 4398 \\ \hline simply & 4357 \\ \hline explained & 4352 \\ \hline System & 4351 \\ \hline \#HotWater & 4349 \\ \hline info & 3930 \\ \hline USDT & 3318 \\ \hline USDC & 3290 \\ \hline ROI & 2908 \\ \hline ESP & 2749 \\ \hline view & 2578 \\ \hline WBNB & 2382 \\ \hline triangular & 2102 \\ \hline sandwich & 2078 \\ \hline spatial & 1570 \\ \hline profit & 1267 \\ \hline DAI & 1225 \\ \hline days & 1175 \\ \hline contract & 1075 \\ \hline eye & 1016 \\ \hline \#SandwichAttacker & 1009 \\ \hline SAP & 910 \\ \hline EigenPhi & 877 \\ \hline \#mev & 816 \\ \hline pass & 809 \\ \hline WBTC & 768 \\ \hline BUSD & 754 \\ \hline \end{tabular} \caption{Wordcount for \#MEV} \label{wordcount_MEV} \end{minipage} \end{table} We rank the frequency of keywords for both hashtags in Table~\ref{wordcount_flashbot} and Table~\ref{wordcount_MEV}. By calculating the word counts of the keywords under each hashtag, we found similarities between the two hashtags. First, both feature keywords with related semantic meanings. For example, blockchain terminologies such as \textit{Ethereum}, \textit{miner}, and \textit{crypto} are the most salient keywords apart from \#MEV and \#Flashbots themselves, appearing more than 300 times. Also, we observed that each of \#MEV and \#Flashbots is frequently mentioned under the other hashtag, which reflects the close relationship between the two. As we can see from Tables~\ref{wordcount_MEV} and \ref{wordcount_flashbot}, Flashbots was mentioned under the topic of MEV nearly 400 times, and MEV was likewise mentioned nearly 400 times under the topic of Flashbots. After investigating the word frequencies, we also explored the keyword connections by building up the word bigrams. As we can see in Figure~\ref{Bigram}, after filtering for semantic meaning, the top frequent bigrams for \#flashbots are ``mev'' \& ``flashbots'', ``crucible'' \& ``copper'', and ``flashbots'' \& ``mist''. The top frequent bigrams for \#MEV are ``extractable'' \& ``value'', ``mev'' \& ``flashbots'', ``macri'' \& ``macri'', ``miner'' \& ``extractable'', ``front'' \& ``running'', ``defi'' \& ``mev'', and ``cpb'' \& ``mev''.
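The bigram counts themselves are obtained by a straightforward sliding-window count over the token stream; a minimal sketch (the tweet list and stopword set below are hypothetical stand-ins for the queried data and the excluded words) is:
\begin{verbatim}
import re
from collections import Counter

tweets = [
    "mev flashbots mitigate frontrunning on ethereum",
    "sandwich attack extracts value from the mempool",
]
stopwords = {"on", "the", "from", "a", "an", "of"}

bigrams = Counter()
for text in tweets:
    tokens = [t for t in re.findall(r"[#\w]+", text.lower())
              if t not in stopwords]
    bigrams.update(zip(tokens, tokens[1:]))  # adjacent keyword pairs

print(bigrams.most_common(10))
\end{verbatim}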
\begin{figure}[!htbp] \centering \includegraphics[width=0.8\linewidth]{figures/Flashbots_MEV_Network.png} \caption{Bigrams of keywords, illustrating the most frequent keyword pairs mentioned in tweets under the hashtags \#MEV and \#Flashbots.} \label{Bigram} \end{figure} \subsubsection{LDA analysis} We used Gensim, an open-source library for unsupervised topic modeling, to implement the text analysis. We selected 1, 3, 5, 10, 15, 20, 25 as the targeted numbers of topics to calculate the corresponding coherence scores. We found that when k = 1 (the number of topics), the coherence score of \#flashbots (0.482) is the highest. When k = 3, the coherence score of \#MEV, which equals 0.47, is the highest outcome in our model. This indicates that when the number of topics for \#flashbots is 1, and the number of topics for \#MEV is 3, the words in the corpus are relatively more semantically interpretable to humans. Figure \ref{Coherence score} plots the coherence scores under different numbers of topics for the two hashtags. Since the coherence score also depends on the two hyperparameters, we adopted Pathik's SA-LDA algorithm to find an approximately suitable pair of $\alpha$ and $\beta$. We found that when $\alpha$ = 0.31, $\beta$ = 0.61, and the number of topics = 20, the coherence value of \#flashbots equals 0.4296, which is the highest among all the outcomes of the tested combinations. In the same way, when $\alpha$ = 0.25, $\beta$ = 0.91, and the number of topics = 5, the approximately highest coherence value of \#MEV equals 0.4687. After excluding some words without semantic meaning, for example, emojis and persons' names, we identify the sets of words categorized by different topics in Table~\ref{Semantic}. We summarize our main findings in Figure~\ref{fig:blockchain-security}. Our results show that the tweets discussed profound topics of ethical concern, including security, equity, emotional sentiments, and the desire for solutions to MEV. Table \ref{Salient terms} illustrates the top 30 salient terms generated by the LDA model with the selected parameters for \#MEV and \#flashbots. Combined with the LDA analysis, we find that the top 30 salient terms, including some frequently mentioned keywords such as \texttt{ethereum}, \texttt{smart contract}, and \texttt{solution}, are not categorized by the topics. That is to say, the LDA model did not successfully recognize some of their semantic meanings. \begin{figure}[!htbp] \centering \includegraphics[width=\linewidth]{figures/Coherence_scoreofMEVflashbots.png} \caption{Coherence scores for \#MEV and \#flashbots} \label{Coherence score} \end{figure} \begin{table}[!htbp] \centering \begin{tabular}{ |p{3cm}||p{3cm}|} \hline \multicolumn{2}{|c|}{Top 30 Most Relevant Terms (Overall term frequency)} \\ \hline \#Flashbots& \#MEV\\ \hline mist&macri\\ mistx&mev\\ flashbots&extractable\\ crucible&ethereum\\ gt&value\\ gas&see\\ bundles&op\\ copper&chain\\ team&mist\\ poap&inflate\\ alchemist&si\\ eth&aiz\\ samiches&eth\\ mev&flashbots\\ going&video\\ pm&door\\ bertcmiller&mehwishhayat\\ good&chainlink\\ future&energy\\ week&mempool\\ thanks&front\\ see&capital\\ ethereum&love\\ thegostep&miner\\ crypto&us\\ leaksblockchain&worldpastsaday\\ new&look\\ block&people\\ via&makes\\ miners&fair\\ \hline \end{tabular} \caption{Top 30 most salient terms of the two hashtags} \label{Salient terms} \end{table} \subsubsection{Keywords Network} We used NetworkX in Python to visualize the keyword networks shown in Figures~\ref{Network1} and~\ref{Network2}; a minimal sketch of the network construction is given below.
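In the sketch (the keyword lists are hypothetical stand-ins for the keywords extracted from the tweets), an edge is added for every pair of keywords that co-occur in a tweet, and its weight is incremented on each further co-occurrence:
\begin{verbatim}
import itertools
import networkx as nx

tweet_keywords = [
    ["mev", "flashbots", "ethereum"],
    ["mev", "sandwich", "miner"],
    ["flashbots", "ethereum", "solution"],
]

G = nx.Graph()
for kws in tweet_keywords:
    for u, v in itertools.combinations(sorted(set(kws)), 2):
        if G.has_edge(u, v):
            G[u][v]["weight"] += 1          # co-occurrence count
        else:
            G.add_edge(u, v, weight=1)

# Edge thickness proportional to co-occurrence frequency
widths = [d["weight"] for _, _, d in G.edges(data=True)]
nx.draw_networkx(G, width=widths)
\end{verbatim}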
Each node represents a commonly used keyword extracted from tweets of \#MEV and \#flashbots, and each edge represents a connection between two keywords. The number of edges at each node reflects how many other keywords it connects to. By this analysis, we can identify the most commonly mentioned keywords and the keywords they co-occur with. We can see in the networks that the most frequent keywords appearing together with \#flashbots are \texttt{ethereum}, \texttt{smartcontract}, \texttt{solution}, etc. The most frequent keywords connected with \#MEV are \texttt{flashbots}, \texttt{sandwich}, \texttt{miner}, etc. \begin{figure}[!htbp] \centering \begin{subfigure}{\linewidth} \centering \includegraphics[width=0.7\linewidth]{figures/mevnetworks.png} \caption{Network of keywords for \#MEV} \label{Network1} \end{subfigure} \par\bigskip \begin{subfigure}{\linewidth} \centering \includegraphics[width=0.7\linewidth]{figures/flashbotsnetwork.png} \caption{Network of keywords for \#flashbots} \label{Network2} \end{subfigure} \end{figure} \subsubsection{Google Trends and Twitter Data} We also examined the Google Trends data of the two keywords (namely, \#MEV and \#flashbots) to compare their Twitter volume and offline activities. We used the Google Trends API \texttt{pytrends} to query the Google Trends data. Google Trends reveals the popularity of top search queries in Google Search across various regions and languages. In general, the Google Trends of \#MEV and \#flashbots show moderate consistency with the respective Twitter volumes. We ran the Spearman correlation test between Twitter volume and Google Trends, and the results showed that \#MEV had a stronger consistency (coefficient = 0.45) than \#flashbots (coefficient = 0.202). Furthermore, we observed that each peak of the Flashbots Google Trends series follows the corresponding peak of the Flashbots Twitter volume after a certain time interval. Figure~\ref{Timeseries_flashbots} illustrates the staggered peaks of Google Trends and Twitter volume for \#Flashbots. Also, we found that every peak of Google Trends or Twitter volume for \#Flashbots matches up with a major offline event of Flashbots. For example, in January 2021, Flashbots Auction Alpha (v0.1) was made available for miners and searchers to adopt. In May, August, and September of the same year, Flashbots published updated versions of the Auction, corresponding to the peaks shown in Figure~\ref{Timeseries_MEV}. This will be further discussed in the discussion part. However, the situation of \#MEV was slightly different. Although the Spearman correlation test indicates a higher correlation between Google Trends and Twitter volume, those two lines do not show much consistency in peaks. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/Google_Trend_Tweets_Volume_for_Flashbots.png} \caption{Time series of Google Trends and Twitter volume for \#Flashbots} \label{Timeseries_flashbots} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{figures/MEV_trend_volume.png} \caption{Time series of Google Trends and Twitter volume for \#MEV} \label{Timeseries_MEV} \end{figure} \subsection{Answers to RQ2} To respond to this research question, we query the gross profit data of MEV from 2019 to 2022 using the Flashbots API. Gross profit here refers to the amount that attackers acquire from MEV arbitrages. As presented in Figure~\ref{Grossprofit}, we observe that there exist three spikes between late 2020 and July 2021.
\subsection{Answers to RQ2}
To respond to this research question, we queried the gross profit data of MEV from 2019 to 2022 using the Flashbots API. Gross profit here refers to the amount that attackers acquire from MEV arbitrage. As presented in Figure~\ref{Grossprofit}, we observe three spikes between late 2020 and July 2021. At the same time, we compared the gross profit data with the Twitter volume data. Interestingly, we find that each spike in the gross profit data corresponds to a spike in the Twitter volume data. We also find that after July 2021, the gross profit of MEV remained relatively stable and low while the Twitter volume stayed at a high level. Thus, the high Twitter volume cannot be explained by the gross profit of MEV. Instead, we discovered some potential causes among offline events. In the third quarter of 2021, EIP-1559~\cite{liu2022}, a proposal to reform the Ethereum fee market, and a new version of Zero-Knowledge Rollups (ZK-rollups)~\cite{ZK-rollups}, regarded as one of the most complete solutions for preventing MEV problems, were released. We calculated the keyword frequencies before and after July 2021 separately and found that the frequencies of the keywords ``solution'', ``prevent'', and ``attack'' were higher after July 2021 than before; a sketch of this counting step is given below. Thus, the release of ZK-rollups and the new Ethereum transaction mechanism might be the driving forces behind the high social media presence of MEV topics.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/Grossprofit_MEV.PNG}
\caption{MEV gross profit, tweet volume, and Google Trends}
\label{Grossprofit}
\end{figure}
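The before/after keyword count can be reproduced along the following lines, assuming the tweets are available as (timestamp, text) pairs; the toy examples and names are illustrative only.
\begin{verbatim}
# Minimal sketch: count selected keywords in tweets posted before
# and after 1 July 2021.
from collections import Counter
from datetime import datetime

KEYWORDS = {"solution", "prevent", "attack"}
CUTOFF = datetime(2021, 7, 1)

def keyword_counts(tweets, after):
    counts = Counter()
    for created_at, text in tweets:
        if (created_at >= CUTOFF) == after:
            counts.update(t for t in text.lower().split()
                          if t in KEYWORDS)
    return counts

tweets = [(datetime(2021, 3, 5), "MEV attack on ethereum"),
          (datetime(2021, 9, 12), "a solution to prevent MEV")]
print(keyword_counts(tweets, after=False),
      keyword_counts(tweets, after=True))
\end{verbatim}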
\section{Conclusion and Discussion}
In conclusion, we applied Natural Language Processing (NLP) methods to comprehensively analyze the topics in tweets about MEV. Our results show that the tweets discussed profound ethical concerns, including security, equity, emotional sentiments, and the craving for MEV solutions. We also identified co-movements of MEV activities on the blockchain and on social media platforms. Our study contributes to the literature at the interface of blockchain security, MEV solutions, and AI ethics. Regarding the different peak times of Google Trends and Twitter volume for \#Flashbots, we identify connections between the peaks and new version releases in Figure~\ref{timeseries_withlabels}. We observed that announcements of each new Flashbots version are released through Twitter; therefore, the peaks in Google Trends tend to appear right after the peaks in Twitter volume. The increases in Google and Twitter trends around the new Flashbots version releases are likely to come from the core team of Flashbots. Future research could further explore how the discussions of \#mev and \#flashbots differ between users of different backgrounds. For example, how do the topics differ between the core developer team and the broader general public?
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/Google_trend_Tweetvolume_with_events.png}
\caption{\#Flashbots time-series analysis with landmark events}
\label{timeseries_withlabels}
\end{figure}
In the bigrams and the networks in Figure~\ref{Bigram}, for \#MEV, the keywords that appear together with MEV (or mev), such as \texttt{ethereum}, \texttt{machinelearning}, and \texttt{solution}, indicate that MEV happens most often on the Ethereum blockchain; for \#Flashbots, frequent keywords such as \texttt{frontrunning} and \texttt{sandwich} point to the core problems addressed by Flashbots, namely sandwich attacks. Further research could study how the topics differ for alternative MEV solutions other than Flashbots. Table~\ref{Semantic} shows the topics successfully categorized by LDA. Under \#flashbots, the most frequently discussed topics concern blockchain terminology and DEX platforms. Another salient topic under this hashtag expresses people's emotional and ethical sentiments, such as fairness, trust, gratitude, and expectation. In contrast, the topics under the hashtag \#MEV show people's negative concerns about blockchain security issues such as inflation and unfairness. However, existing research points out that the LDA model performs poorly at sentiment analysis on short texts~\cite{wu2021sentiment}. Therefore, we will consider improving the model in future studies.
\begin{table}[!htbp]
\centering
\begin{tabular}{@{}lll@{}}
\toprule
 & \#MEV & \#Flashbots \\
\midrule
1 & Terminologies & Terminologies \\
2 & Platforms, companies & Platforms, companies \\
3 & Concerns, Worries & Mechanisms, auction, etc. \\
4 & Fairness & Trust, grateful, etc. \\
5 & Complaints & Future, wishes \\
\bottomrule
\end{tabular}
\caption{Semantic category of the most salient keywords}
\label{Semantic}
\end{table}
\bibliographystyle{spmpsci}
\section{Introduction} The analysis of jet production in various high energy processes has become a major field for testing perturbative QCD. Recently, jet production in $\gamma\gamma$ processes, where the two photons of very small virtuality are produced in the collision of electrons and positrons, has come into focus after data have been collected at the TRISTAN \cite{x1} and LEP \cite{x2} colliders. In addition, jet production in deep inelastic (high $Q^2$) \cite{x3} and low $Q^2$ $ep$ scattering (equivalent to photoproduction) has been measured at the two HERA experiments H1 \cite{x4} and ZEUS \cite{x5}. Jet production in $\gamma\gamma$ and $\gamma p$ collisions has several similarities. At very small $Q^2$, where $q$ ($Q^2=-q^2$) is the four-momentum transfer of the electron (positron) producing the virtual photons in the $\gamma\gamma$ or $\gamma p$ initial state, the emission of the photon can be described in the Equivalent Photon Approximation. The spectrum of the virtual photons is approximated by the Weizs\"acker-Williams (WWA) formula, which depends only on $y=E_{\gamma}/E_e$, the fraction of the initial electron (positron) energy $E_e$ transferred to the photon with energy $E_{\gamma}$, and on $Q_{\max}^2$ (or $\theta_{\max}$), which is the maximal virtuality (or the maximal electron scattering angle) allowed in the experimental set-up. Concerning the hard scattering reactions, both processes have so-called direct and resolved components. Thus, in leading order (LO) QCD the cross section $\sigma(\gamma\gamma\rightarrow \mbox{jets})$ receives contributions from three distinct parts: (i) the direct contribution (DD), in which the two photons couple directly to quarks, (ii) the single-resolved contribution (DR), where one of the photons interacts with the partonic constituents of the other photon, (iii) the double-resolved contribution (RR), where both photons are resolved into partonic constituents before the hard scattering subprocess takes place. In the DD component (in LO) we have only two high-$p_T$ jets in the final state and no additional spectator jets. In the DR contribution, one spectator jet originating from low transverse momentum fragments of one of the photons is present, and in the RR component we have two such spectator or photon remnant jets. In the case of $\gamma p$ collisions, one of the photons is replaced by the proton, which has no direct interaction. Then we have only the DR and RR components, which are usually referred to as the direct and the resolved contribution. This means that the DR component for $\gamma\gamma\rightarrow \mbox{jets}$ has the same structure as the direct contribution of $\gamma p\rightarrow \mbox{jets}$, and the RR part for $\gamma\gamma\rightarrow \mbox{jets}$ is calculated in the same way as the resolved cross section for $\gamma p\rightarrow \mbox{jets}$. Of course, to calculate the resolved cross sections we need a description of the partonic constituents of the photon. These parton distributions of the photon are partly perturbative and partly non-perturbative quantities. To fix the non-perturbative part, one needs information from experimental measurements to determine the fractional momentum dependence at a reference scale $M_0^2$. The change with the factorization scale $M^2 \stackrel{>}{\scriptstyle <} M_0^2$ is obtained from perturbative evolution equations.
Most of the information on the parton distribution functions (PDFs) of the photon comes from photon structure function ($F_2^{\gamma}$) measurements in deep inelastic $e\gamma$ scattering, where, however, mainly the quark distribution function can be determined. In $\gamma\gamma$ and $\gamma p$ high-$p_T$ jet production, the gluon distribution of the photon also enters and can thus be constrained by these processes. When we want to proceed to next-to-leading order (NLO) QCD, the following steps must be taken: (i) The hard scattering cross sections for the direct and resolved photon processes are calculated up to NLO. (ii) NLO constructions for the PDFs of the proton and the photon are used and are evolved in NLO up to the chosen factorization scale via the Altarelli-Parisi equations and convoluted with the NLO hard-scattering cross sections. (iii) To calculate jet cross sections, we must choose a jet definition, which may be either a cluster or a cone algorithm, in accordance with the choice made in the experimental analysis. There exist several methods for calculating NLO corrections of jet cross sections in high energy reactions \cite{x6}. As in our previous work, we apply the phase space slicing method with invariant mass slicing to separate infrared and collinear singular phase space regions. In this approach, the contributions from the singular regions are calculated analytically with the approximation that terms of the order of the slicing cut are neglected. In the non-singular phase space regions, the cross section is obtained numerically. This method allows the application of different clustering procedures and the use of different variables for describing the final state, together with cuts on these variables, as given by the measurement of the jet cross sections. This method has been used for the calculation of the NLO DD and DR cross sections in the case of $\gamma\gamma\rightarrow \mbox{jets}$ \cite{x7} and for the calculation of the direct cross section in the case of $\gamma p\rightarrow \mbox{jets}$ \cite{x8}. It is obvious that the NLO calculation of the DR cross section for the $\gamma\gamma$ case is the same as that of the NLO direct cross section for the $\gamma p$ case. In the calculation of the $\gamma\gamma$ DD and DR or $\gamma p$ direct cross sections, one encounters the photon-quark collinear singularity. This is subtracted and absorbed into the quark distribution of the photon in accord with the factorization theorem. This subtraction at the factorization scale $M^2$ produces an interdependence of the three components in the $\gamma\gamma$ and the two components in the $\gamma p$ reaction, so that a unique separation into DD, DR, and RR (for $\gamma\gamma$) and into direct and resolved (for $\gamma p$) contributions is valid only in LO. The calculation of these subtraction terms and of the contributions from the other singular regions has been presented in sufficient detail in \cite{x7, x8}. The phase space slicing method was also applied to the calculation of the NLO correction of the $\gamma\gamma$ RR contribution \cite{x9} and the resolved $\gamma p$ cross section \cite{x10}. Results for the inclusive two-jet cross sections, together with a comparison to recent experimental data, were presented for $\gamma\gamma\rightarrow \mbox{jets}$ in \cite{x11} and for $\gamma p\rightarrow \mbox{jets}$ in \cite{x12}, respectively.
In these two papers, the specific calculations of the corresponding resolved cross sections in NLO were not explicitly outlined; in particular, it was not shown how the different infrared and collinear singularities cancel between virtual and real contributions and after the subtraction of collinear initial state singularities. In this work, we want to fill this gap. We describe the analytic calculation of the various resolved terms in the specific singular regions needed for the NLO corrections. Furthermore, we check the analytical results by comparing them with results obtained with other methods. In order to have a complete presentation, we also include the calculation of the NLO correction to the $\gamma\gamma$ DD cross section already presented in \cite{x7}. We also include the details of the DR contributions in the form of the direct $\gamma p$ cross section taken from \cite{x8}. Details of other material, such as the various jet definitions, which were only mentioned in our previous work and which are becoming more and more relevant in the analysis of the experimental data, will also be presented. Since the calculation of the RR (DR) cross section for $\gamma\gamma\rightarrow\mbox{jets}$ is the same as the calculation of the $\gamma p\rightarrow\mbox{jets}$ resolved (direct) cross section, we concentrate on the $\gamma p$ case when we present the details of the calculations. For the $\gamma\gamma$ case, we therefore present only the DD contribution, which has no analogue in the $\gamma p$ case. We come back to the $\gamma\gamma$ case when we show the numerical results for specific cases including all three components. We organize this work in seven main sections. After this introduction, we relate experimental $ep$ scattering to photon-parton scattering in section 2. We describe the Weizs\"acker-Williams approximation, discuss the proton and photon PDFs, and explain the experimental and theoretical properties of various jet definitions. Furthermore, section 2 contains the master formul{\ae} for one- and two-jet cross sections. These will be calculated in section 3 in LO and in section 4 in NLO. In both sections, we calculate the relevant phase space for $2\rightarrow 2$ and $2\rightarrow 3$ scattering, respectively. The Born matrix elements are contained in section 3. In section 4, we present the virtual one-loop matrix elements and the tree-level $2\rightarrow 3$ matrix elements, which are then integrated over singular regions of phase space. Next, we demonstrate how all ultraviolet and infrared poles in the NLO calculation cancel or are removed through renormalization and factorization into PDFs. A detailed numerical evaluation of jet cross sections in $\gamma p$ scattering, with the purpose of comparing with results of other work and making consistency checks, is contained in section 5. We study the renormalization and factorization scale dependence of one- and two-jet cross sections, including the direct and resolved components. We also show results for some specific inclusive one- and two-jet cross sections and compare them with experimental data, where available, to demonstrate the usefulness of our methods. Section 6 contains the corresponding numerical studies and comparisons to data for $\gamma\gamma$ scattering. The final conclusions are left for section 7. \setcounter{equation}{0} \section{Photoproduction of Jets at HERA} In this section, we set up the general framework for theoretical predictions of the photoproduction of jets in electron-proton scattering.
This includes the separation of the perturbatively calculable parts from the non-perturbative parts of the cross section and the link between electron-proton and photon-proton scattering. This link will be discussed first in section 2.1 and consists of the Weizs\"acker-Williams or Equivalent Photon Approximation. The framework of the QCD improved parton model for protons will be discussed briefly in section 2.2. This is necessary since protons are not pointlike but composed of three valence quarks as well as sea quarks and gluons. Perturbative QCD is not applicable at distances comparable to the size of the proton, but only at short distances due to the asymptotic freedom of QCD. Therefore, the parton content of the proton is the domain of non-perturbative QCD and has to be described with universal distribution functions. The scale dependence of these functions is governed by the Altarelli-Parisi equations. A similar concept applies to resolved photons, which are discussed in section 2.3. In contrast to direct photons, which are obviously pointlike, resolved photons can be considered to have a complicated hadronic structure like protons. However, they have a different valence structure than protons. Furthermore, a complex relationship between direct and resolved photons will show up in next-to-leading order of QCD. Having related the initial state particles electron and proton in the experiment to the photons and partons in perturbative QCD, we can turn our attention to the final state particles. Section 2.4 describes how the interpretation of partons as jets changes from leading to next-to-leading order of QCD. Several jet definition schemes are discussed with respect to their theoretical and experimental behavior. The last section 2.5 summarizes the different ingredients of the calculation and contains the master formul{\ae} for one- and two-jet photoproduction. Also, the numerous analytical contributions in leading and next-to-leading order of QCD are organized in tabular form for transparency. \subsection{Photon Spectrum in the Electron} In photoproduction, one would like to study the hard scattering of real photon beams off nuclear targets. This has been done in fixed target experiments, like NA14 at CERN \cite{Aug86} or E683 at Fermilab \cite{Ada94}, where real photons are produced e.g.~in pion decay with energies of up to 400 GeV \cite{Pau92}. If higher energies are required, one must resort to spacelike, almost real photons radiated from electrons. This method is employed at the electron-proton collider HERA. There, electrons of energy $E_e = 26.7$ GeV and recently positrons of energy $E_e = 27.5$ GeV produce photons with small virtuality $Q^2$, which then collide with a proton beam of energy $E_p = 820$ GeV. This corresponds to photon energies of up to $50$ TeV in fixed target experiments. On the theoretical side, the calculation of the electron-proton cross section can be considerably simplified by using the Weizs\"acker-Williams or Equivalent Photon Approximation. Here, one uses current conservation and the small photon virtuality to factorize the electron-proton cross section into a broad-band photon spectrum in the electron and the hard photon-proton cross section. Already in 1924, Fermi discovered the equivalence between the perturbation of distant atoms by the field of charged particles flying by and that due to incident electromagnetic radiation \cite{Fer24}.
His semi-classical treatment was then extended to high-energy electrodynamics in 1933-1935 by Weizs\"acker \cite{Wei34} and Williams \cite{Wil34} independently, who used a Fourier analysis to unravel the predominance of transverse over longitudinal photons radiated from a relativistic charged particle. In the fifties, Curtis \cite{Cur56} and Dalitz and Yennie \cite{Dal57} gave the first field-theoretical derivations and applied the approximation to meson production in electron-nucleon collisions. Chen and Zerwas used infinite-momentum-frame techniques for an extension to photon bremsstrahlung and photon splitting processes \cite{Che75}. For a recent review of the various approximations, see \cite{x13}; for the application to $\gamma\gamma$ processes, see \cite{x14}. Let us consider the electroproduction process \begin{equation} \mbox{electron}(k) + \mbox{proton}(p) \rightarrow \mbox{electron}(k') + X \end{equation} as shown in figure \ref{fig9}, where $k$, $k'$, and $p$ are the four-momenta \input{fig9.tex} of the incoming and outgoing electron and the proton, respectively. $X$ denotes a generic hadronic system not specified here. $q = k-k'$ is the momentum transfer of the electron to the photon with virtuality $Q^2 = -q^2 \simeq 0$, and the center-of-mass energy of the process is $\sqrt{s_H} = \sqrt{(k+p)^2} = 295.9$ GeV ($= 300.3$ GeV for the positron beam). We restrict ourselves to one-photon exchange without electroweak $Z^0$ admixture. Two-photon exchange is suppressed by an additional order of $\alpha$, and $Z^0$ exchange is suppressed by the $Z^0$ propagator factor $1/m_{Z^0}^2$. The cross section of this process is \begin{equation} \mbox{d}\sigma_{ep} (ep\rightarrow eX) = \int\frac{1}{8k.p}\frac{e^2W^{\mu\nu}T_{\mu\nu}} {Q^4}\frac{\mbox{d}^3k'}{(2\pi)^32E_e'}, \end{equation} where $W^{\mu\nu}$ and $T_{\mu\nu}$ are the usual hadron and lepton tensors. Exploiting current conservation and the small photon virtuality, we find \begin{equation} W^{\mu\nu}T_{\mu\nu} = 4W_1(Q^2=0,q.p) \left[ Q^2\frac{1+(1-x_a)^2} {x_a^2}-2m_e^2\right] , \label{eq1} \end{equation} where \begin{equation} x_a = \frac{q.p}{k.p} = 1-\frac{k'.p}{k.p} \in [0,1] \end{equation} is the fraction of longitudinal momentum carried by the photon. Next, we have to calculate the phase space for the scattered electron. Using \begin{eqnarray} k' = (E_e',0,E_e'\beta'\sin\theta,E_e'\beta'\cos\theta) &,& k = (E_e,0,0,E_e\beta),\\ \beta' = \sqrt{1-\frac{m_e^2}{E_e'^2}} &,& \beta = \sqrt{1-\frac{m_e^2}{E_e^2}}, \end{eqnarray} and integrating over the azimuthal angle, we find \begin{eqnarray} \frac{\mbox{d}^3k'}{E_e'} &=& 2\pi\beta'^2E_e'\mbox{d}k'\mbox{d}\cos\theta \nonumber \\ &=& \pi\mbox{d}q^2\mbox{d}x_a. \label{eq2} \end{eqnarray} Combining eqs.~(\ref{eq1}) and (\ref{eq2}) and integrating over the photon virtuality, we can factorize the electron-proton cross section into \begin{equation} \mbox{d}\sigma_{ep} (ep\rightarrow eX) = \int\limits_0^{1}\mbox{d}x_a F_{\gamma/e}(x_a) \mbox{d}\sigma_{\gamma p}(\gamma p\rightarrow X), \label{eq12} \end{equation} where \begin{equation} F_{\gamma/e}(x_a) = \frac{\alpha}{2\pi} \left[ \frac{1+(1-x_a)^2}{x_a} \ln\frac{Q_{\max}^2}{Q_{\min}^2}+2m_e^2x_a\left( \frac{1}{Q_{\min}^2} -\frac{1}{Q_{\max}^2}\right) \right] \label{eq3} \end{equation} is the renowned Weizs\"acker-Williams approximation and where \begin{equation} \sigma_{\gamma p}(\gamma p\rightarrow X) = -\frac{g_{\mu\nu}W^{\mu\nu}}{8q.p} = \frac{W_1(Q^2=0,q.p)}{4q.p} \end{equation} is the photon-proton cross section.
We will only use the leading logarithmic contribution and neglect the second term in eq.~(\ref{eq3}), also calculated by Frixione et al. \cite{Fri93}. There exist two fundamentally different experimental situations. At HERA, the scattered electron is anti-tagged and must disappear into the beam pipe. For such small scattering angles of the electron, the integration bounds $Q_{\min}$ and $Q_{\max}$ can be calculated from the equation \begin{equation} Q^2 = \frac{m_e^2x_a^2}{1-x_a}+\frac{E_e(1+\beta)(A^2-m_e^2)^2}{4A^3}\theta^2 +{\cal O} (\theta^4), \end{equation} where $A=E_e(1+\beta)(1-x_a)$. Using the minimum scattering angle $\theta=0$, we obtain \begin{equation} Q_{\min}^2 = \frac{m_e^2x_a^2}{1-x_a}, \end{equation} which leads us to the final form of the Weizs\"acker-Williams approximation used here: \begin{equation} F_{\gamma/e}(x_a) = \frac{\alpha}{2\pi}\frac{1+(1-x_a)^2}{x_a}\ln \left(\frac{Q_{\max}^2 (1-x_a)}{m_e^2~x_a^2}\right). \label{eq45} \end{equation} At H1 and ZEUS, the maximum virtualities of the photon are given directly as $Q_{\max}^2 = 0.01~\mbox{GeV}^2$ and $4~\mbox{GeV}^2$, respectively. If only the maximum scattering angle of the electron is known, we obtain \begin{equation} Q_{\max}^2 = E_e^2(1-x_a)\theta_{\max}^2 \end{equation} with $\theta_{\max}$ being of the order of $5^\circ$. This form of $Q_{\max}^2$ will be used for the calculation of the $\gamma\gamma$ cross sections in accordance with experimental choices of $\theta_{\max}$ at LEP. Except in equation (\ref{eq45}), we will assume all particles to be massless in this paper. Therefore, all results are only valid in the high-energy limit. When no information about the scattered electron is available, one has to integrate over the whole phase space, thus allowing large transverse momenta and endangering the factorization property of the cross section. Then, one only knows that the invariant mass of the produced hadronic system $X$ has to be bounded from below, e.g. by minimal transverse momenta or heavy quark mass thresholds, which constrains the maximum virtuality of the photon only weakly. Possible choices are \begin{equation} Q_{\max} = \frac{\sqrt{s_H}}{2},\frac{\sqrt{x_as_H}}{2},E_e, \cdots . \end{equation} At $e^+e^-$ colliders, bremsstrahlung of electrons is not the only source of almost real photons. The particles in one bunch experience rapid acceleration when they enter the electromagnetic field of the opposite bunch, producing beamstrahlung, which depends sensitively on the machine parameters. Even higher luminosities can be achieved by colliding the electron beam at some distance from the interaction point with a dense laser beam \cite{Aur95}.
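For illustration, the flux (\ref{eq45}) is easily evaluated numerically. The following minimal sketch (the function name and the numerical values of $\alpha$ and $m_e$ are our own choices, not part of the calculation) prints the flux at $x_a = 0.5$ for the H1 anti-tagging cut $Q_{\max}^2 = 0.01~\mbox{GeV}^2$:
\begin{verbatim}
# Minimal sketch of the anti-tagged Weizsaecker-Williams flux:
# leading-log spectrum with Q2_min = m_e^2 x^2 / (1 - x).
import math

ALPHA = 1.0 / 137.036        # fine-structure constant
ME2 = (0.511e-3) ** 2        # electron mass squared in GeV^2

def f_gamma_e(x, q2_max=0.01):
    """Photon flux F_{gamma/e}(x) for virtualities up to q2_max (GeV^2)."""
    return (ALPHA / (2.0 * math.pi) * (1.0 + (1.0 - x) ** 2) / x
            * math.log(q2_max * (1.0 - x) / (ME2 * x ** 2)))

print(f_gamma_e(0.5))
\end{verbatim}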
\subsection{Parton Distributions in the Proton} The first evidence that the proton has a substructure came from deep inelastic electron-proton scattering $ep\rightarrow eX$. Together with muon- and neutrino-scattering, this process provides us with information on the distribution of partons (quarks and gluons) in the proton. In the parton model, scattering off hadrons is interpreted as an incoherent superposition of scattering off massless and pointlike partons. This is shown in figure \ref{fig10}, \input{fig10} where $p_b$ is the four-momentum of the scattered parton $b$. If the scale $M_b$, at which the proton is probed, is large enough, the transverse momentum of the parton will be small and its phase space can be described by a single variable \begin{equation} x_b = \frac{qp_b}{qp} \in [0,1], \end{equation} the longitudinal momentum fraction of the proton carried by the parton. The other partons in the proton will then not notice the interaction but form a so-called remnant or spectator jet. Multiple scattering is suppressed by factors of $1/M_b^2$, and only the leading twist contributes for large $M_b^2$. The proton remnant carries the four-momentum $p_r$, which again depends only on $x_b$ and the proton momentum $p$. The proton remnant will not be counted as a jet in the following. We can now factorize the photon-proton cross section into \begin{equation} \mbox{d}\sigma_{\gamma p} (\gamma p \rightarrow \mbox{jets + remnant}) = \sum_b \int\limits_0^1 \mbox{d}x_b F_{b/p} (x_b, M_b^2) \mbox{d}\sigma_{\gamma b} (\gamma b \rightarrow \mbox{jets}), \label{eq13} \end{equation} where $F_{b/p} (x_b, M_b^2)$ is the probability of finding a parton $b$ (where $b$ may be a quark, an antiquark, or a gluon) within the proton carrying a fraction $x_b$ of its momentum when probed at the hard scale $M_b$. The usual choice for deep inelastic scattering, $M_b=Q$, is not possible for photoproduction, as $Q^2\simeq 0$ here. Instead, one takes $M_b = \xi E_T$ with $E_T$ being the transverse energy of the outgoing parton or observed jet and $\xi$ being of order 1. Contrary to the hard photon-parton scattering cross section d$\sigma_{\gamma b}$, the parton densities $F_{b/p} (x_b, M_b^2)$ are not calculable in perturbative QCD and have to be taken from experiment. In the leading twist approximation, the parton densities extracted from deep inelastic scattering can be used for any other process with incoming nucleons like photoproduction or proton-antiproton scattering -- they are universal. As stated above, they have to be determined by experiment at some scale $Q^2=Q_0^2$, where one parametrizes the $x$ dependence before evolving up to any value of $Q^2$ according to the Altarelli-Parisi (AP) equations \cite{Alt77}. The input parameters are then determined by a global fit to the data. The $u$, $d$, and $s$ quark densities are rather well known today from muon scattering (BCDMS, NMC, E665) and neutrino scattering (CCFR) experiments, and the $c$ quark is constrained by open charm production at EMC. The small-$x$ region, where the gluon and the sea quarks become important, could, however, not be studied by these experiments. This is mainly the domain of the HERA experiments H1 and ZEUS in DIS as well as in photoproduction. There exist many different sets of parton density functions. Most of them are easily accessible in the CERN library PDFLIB kept up to date by Plothow-Besch \cite{Plo95}. We will now briefly compare the three parametrizations mainly in use today: CTEQ \cite{Lai95}, MRS \cite{Mar94}, and GRV \cite{Glu95}. The first two are very similar to each other, and both take $Q_0^2 = 4~\mbox{GeV}^2$. Very recently, CTEQ and MRS presented new parametrizations (set 4 and set R, respectively) including new HERA and TEVATRON inclusive jet data \cite{Lai96,Mar96a}. The GRV approach is rather different from the two mentioned above.
In order to avoid any additional free parameters, their original version was based on the assumption that all gluon and sea distributions are generated dynamically from measured valence quark densities at a very low scale $Q_0 \simeq {\cal O} (\Lambda )$. The QCD coupling scale $\Lambda$ is fitted to the data and ranges from 200 to 344 MeV for the parametrizations considered here and for four quark flavors. \subsection{Parton Distributions in the Photon} Although the photon is the fundamental gauge boson of quantum electrodynamics (QED), which is the most accurately tested field theory, many reactions involving photons are much less well understood. This is due to the fact that the photon can fluctuate into $q\overline{q}$ pairs which in turn evolve into a complicated hadronic structure. At HERA, the photon radiated from the electron can thus interact either directly with a parton in the proton (direct component) or act as a hadronic source of partons which collide with the partons in the proton (resolved component, see figure \ref{fig12}). In the latter case, one does not test the proton structure alone but also the photon structure. \input{fig12} Using the factorization theorem, the cross section for photon-parton scattering can be written as \begin{equation} \mbox{d}\sigma_{\gamma b} (\gamma b \rightarrow \mbox{jets}) = \sum_a \int\limits_0^1 \mbox{d}y_a F_{a/\gamma} (y_a, M_a^2) \mbox{d}\sigma_{ab} (ab \rightarrow \mbox{jets}). \label{eq14} \end{equation} Here, $F_{a/\gamma}(y_a, M_a^2)$ stands for the probability of finding a parton $a$ with momentum fraction $y_a$ in the photon, which has to be universal in all processes. The scale $M_a$ is a measure for the hardness of the parton-parton cross section d$\sigma_{ab}$ calculable in perturbative QCD and is again taken to be of ${\cal O} (E_T)$. The particle $a$ can also be a direct photon. Then, $F_{\gamma/\gamma}(y_a, M_a^2)$ is simply given by the $\delta$-function $\delta(1-y_a)$ and does not depend on the hard scale $M_a$. Before HERA started taking data, information on the hadronic structure of the photon came almost exclusively from deep inelastic $\gamma^{\ast}\gamma$ scattering at $e^+e^-$ colliders. Similarly to deep inelastic $ep$ scattering, this is a totally inclusive process well suited to define a photon structure function. Using $y=Q^2/(x_Bs_H)$, where $Q^2$ denotes the virtuality of the probing photon $\gamma^{\ast}$, and replacing $F_1$ by the longitudinal structure function $F_L(x_B,Q^2) = F_2(x_B,Q^2)-2x_BF_1(x_B,Q^2)$, we can write the deep inelastic scattering (DIS) cross section in the following form \begin{equation} \frac{\mbox{d}^2\sigma}{\mbox{d}x_B\mbox{d}y} = \frac{2\pi\alpha^2s_H}{Q^4}\left\{ \left[ 1+(1-y)^2 \right] F_2^{\gamma}(x_B,Q^2) -y^2F_L^{\gamma}(x_B,Q^2)\right\} , \label{eq9} \end{equation} where in LO $F_2^{\gamma}(x_B,Q^2)$ is related to the singlet quark parton density in the photon similarly as in deep inelastic $ep$ scattering \begin{equation} F_2^{\gamma}(x_B,Q^2) = \sum_q x_B e_q^2 (F_{q/\gamma}(x_B,Q^2)+F_{\overline{q}/\gamma}(x_B,Q^2)). \end{equation} Fitting the input parameters to data is not as easy as in the proton case. First, no momentum sum rule applies for the photonic parton densities as they are all of LO in QED. This and the subleading nature of the gluonic process make a determination of the gluon from $F_2^{\gamma}$ very difficult. However, a momentum sum rule exists for mesons so that the VMD part of the gluon is constrained.
Second, the cross section for $F_2^{\gamma}$ is quite small and falls rapidly with increasing $Q^2$ (see eq.~(\ref{eq9})), leading to large statistical errors. Presently, over 20 different parton distributions exist for the photon, and most of them are available in the PDFLIB \cite{Plo95}. The VMD input is insufficient to fit the data at higher $Q^2$, offering two alternatives: GRV \cite{Glu92}, AFG \cite{Aur92}, and SaS \cite{Sch95} follow the same ``dynamical'' philosophy as in the GRV proton case, starting from a simple valence-like input at a low scale $Q_0^2 = 0.25~...~0.36~\mbox{GeV}^2$. On the other hand, LAC \cite{Abr91} and GS \cite{Gor92} take a larger value for $Q_0^2 = 4~...~5.3~\mbox{GeV}^2$, assume a more complicated ansatz there, and fit the free parameters to $F_2^{\gamma}$ data. LAC intended to demonstrate the poor constraints on the gluon, assuming a very soft gluon (fits 1 and 2) as well as a very hard one (fit 3). However, LAC 3 is ruled out now by recent $e^+e^-$ and HERA data. In their latest parametrization, GS lowered their input scale to $Q_0^2 = 3~\mbox{GeV}^2$, included all available data on $F_2^{\gamma}$, and constrained the gluon from jet production at TRISTAN \cite{Gor96}. The distinction between direct and resolved photon is only meaningful in LO of perturbation theory. In NLO, collinear singularities arise from the photon initial state that have to be absorbed into the photon structure function (cf.~sections 4.2.5 and 4.2.7) and produce a factorization scheme dependence as in the proton case. If one requires approximately the same $F_2^{\gamma}$ in LO and NLO, the quark distributions in the $\overline{\mbox{MS}}$-scheme have quite different shapes in LO and NLO. This is not the case if the $\mbox{DIS}_{\gamma}$-scheme of GRV is used, where the direct-photon contribution to $F_2^{\gamma}$ is absorbed into the photonic quark distributions. This allows for perturbative stability between LO and NLO results. Therefore, the separation between the direct and resolved processes is an artifact of finite order perturbation theory and depends on the factorization scheme and scale $M_a$. Experimentally, one tries to get a handle on this by introducing kinematical cuts, e.g.~on the photon energy fraction taking part in the hard cross section. \subsection{Jet Definitions} Due to the confinement of color charge, neither incoming nor outgoing partons can be observed directly in experiment, but rather transform into colorless hadrons. This transformation is a long-distance process and is not calculable in perturbative QCD. For the incoming particles, we have already described how the ignorance of universal parton distributions in hadrons is parametrized. A similar method can be used for the final state partons, employing so-called fragmentation functions for the inclusive production of single hadrons \cite{Bin95}. The transformation of partons into individual hadrons forming part of the total final state was first studied in \cite{Fie78}. Alternatively, one can observe beams of many hadrons going approximately into the same direction without the need to specify individual hadrons. The hadrons are then combined into so-called jets by cluster algorithms, where one starts from an initial cluster in phase space and ends at stable fixed points for the jet coordinates. These jet definitions should fulfill a number of important properties.
They should \cite{Ell89a} \begin{itemize} \item be simple to implement in the experimental analysis, \item be simple to implement in the theoretical calculation, \item be defined at any order of perturbation theory, \item yield a finite cross section at any order of perturbation theory, \item yield a cross section that is relatively insensitive to hadronization. \end{itemize} Although, in principle, hadronization of the final state should be factorizable from the hard cross section and the initial state, jets do indeed look quite different in $e^+e^-$ collisions and in at least partly hadronic collisions like $ep$ scattering. This can be attributed to the ``underlying event'' of remnant jet production from the initial state, which can interfere with the hard jets in the final state. The main problem here is the determination of the true jet energy and the subtraction of the remnant pedestal. In LO QCD, there is a one-to-one correspondence between partons and jets. This results in a complete insensitivity of theory to the experimentally used algorithm or to the resolution parameters. The experimental results depend, however, on these parameters as well as on detector properties and have to be corrected stepwise from detector level to hadron level to parton level. The situation can only be improved by going to NLO in perturbation theory. Here, the emission of one additional soft or collinear parton is calculable with correct treatment of the occurring singularities. A hadron jet can then consist not only of one, but also of two partons. It acquires a certain substructure and will depend on the experimental algorithm and resolution. Historically, the first resolution criterion was proposed by Sterman and Weinberg \cite{Ste77}. It adds a particle to a jet if its energy is smaller than $\varepsilon M$ or its angle with the jet is less than $2\delta$, which provides a close link to the radiation of secondary partons. The energy cut handles the soft divergences and the angular cut the collinear divergences. There is some freedom in the choice for $M$. Usually, one takes the hard scale of the process, e.g.~$Q$ in $e^+e^-$ annihilation. The PETRA experiments used this $(\epsilon,\delta)$ algorithm and the so-called JADE cluster algorithm \cite{Bar86}, which has the advantage of being Lorentz invariant. Two particles are combined into a cluster if their invariant mass $s_{ij}=(p_i+p_j)^2$ is smaller than $y M^2$ \cite{Kra84}, where $M^2$ is again a typical scale and $y$ is of ${\cal O} (10^{-2})$. In this way, the soft and collinear divergences can be described by a single cut-off $y$. There exist a number of different schemes for this algorithm regarding the invariant mass of the combination of the two particles and the combination of the particle four-momenta (JADE-, $E$-, $E0$-, $P$-, and $P0$-scheme). At hadron colliders, cluster algorithms tend to include hadrons from the remnant jet, which is not present in $e^+e^-$ collisions, into the current jet. Therefore, algorithms are preferred here which use a cone in rapidity-azimuth ($\eta-\phi$) space, quite similar to the $\delta$-condition of Sterman and Weinberg. The (pseudo-)rapidity $\eta=-\ln [\tan(\theta/2)]$ parametrizes the polar angle $\theta$ between the hard jet and the beam axis, and $\phi$ is the azimuthal angle of the jet around the beam axis.
In the case of cone algorithms, only inclusive cross sections are infrared safe, where an arbitrary number of particles outside the jet cone can be radiated as long as they are softer than the observed jets. According to the standardization of the Snowmass meeting in 1990, calorimeter cells or partons $i$ may have a distance $R_i$ from the jet center provided that \begin{equation} R_i = \sqrt{(\eta_i-\eta_J)^2+(\phi_i-\phi_J)^2} \leq R, \label{eq10} \end{equation} where $\eta_i$ and $\phi_i$ are the coordinates of the parton or the center of the calorimeter cell \cite{Hut92}. Typical values for the resolution parameter $R$ range from 0.7 to 1, where the effects of hadronization and the underlying event are minimized. The transverse energy of the combined jet $E_{T_J}$ is calculated from the sum of the particle $E_{T_i}$ \begin{equation} E_{T_J} = \sum_{R_i\leq R} E_{T_i}, \end{equation} and the jet axis is defined by the weighted averages \begin{eqnarray} \eta_J &=& \frac{1}{E_{T_J}}\sum_{R_i\leq R} E_{T_i}\eta_i, \\ \phi_J &=& \frac{1}{E_{T_J}}\sum_{R_i\leq R} E_{T_i}\phi_i. \end{eqnarray} In perturbative QCD, the final state consists of a limited number of partons. For a single isolated parton $i$, the partonic and jet parameters agree ($(E_{T_i},\eta_i,\phi_i) = (E_{T_J},\eta_J,\phi_J)$) as shown in figure \ref{plot6}a), \begin{figure}[htbp] \begin{center} {\unitlength1cm \begin{picture}(15,5) \epsfig{file=plot6.ps,bbllx=44pt,bblly=308pt,bburx=566pt,bbury=486pt,% height=5cm,clip=} \end{picture}} \caption[Jet Cone Definition According to the Snowmass Convention] {\label{plot6}{\it Jet cone definition according to the Snowmass convention for a) a single parton, b) two combined partons with distance $R$ from the jet axis, and c) two single partons.}} \end{center} \end{figure} whereas two neighboring partons $i$, $j$ will form a combined jet as shown in figure \ref{plot6}b). Equation (\ref{eq10}) only defines the distances $R_i$, $R_j$ of each parton from the jet axis, so that the two partons may be separated from each other by \begin{equation} R_{ij} = \sqrt{(\eta_i-\eta_j)^2+(\phi_i-\phi_j)^2} \leq \frac{E_{T_i}+E_{T_j}}{\max (E_{T_i},E_{T_j})} R. \end{equation} If both partons have equal transverse energy, they may then be separated by as much as $2R$. As parton $j$ does not lie inside a cone of radius $R$ around parton $i$ and vice versa, one might with some justification also count the two partons separately as shown in figure \ref{plot6}c). If one wants to study only the highest-$E_T$ jet, this ``double counting'' must be excluded. The selection of the initiating cluster, before a cone is introduced (``seed-finding''), is not fixed by the Snowmass convention, and different approaches are possible. The ZEUS collaboration at HERA uses two different cone algorithms: EUCELL takes the calorimeter cells in a window in $\eta-\phi$ space as seeds to find a cone with the highest $E_T$. The cells in this cone are then removed, and the algorithm continues. On the other hand, PUCELL was adapted from CDF and starts with single calorimeter cells. It then iterates cones around each of them until the set of enclosed cells is stable. In this case it may happen that two stable jets overlap. If the overlapping transverse energy amounts to a large fraction of the jets, they are merged; otherwise, the overlapping energy is split. Alternatively, the overlap could be attributed to the nearest, to the largest, or to both jets.
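The Snowmass recombination formul{\ae} above are straightforward to implement. The following minimal sketch (not the experiments' code; the parton list, seed choice, and all names are illustrative) iterates a cone around a seed until the jet axis is stable, in the spirit of the PUCELL procedure just described:
\begin{verbatim}
# Minimal sketch of the Snowmass recombination: partons are
# (E_T, eta, phi) triples; azimuthal wrap-around is ignored.
import math

def snowmass_jet(partons, seed, R=0.7, iterations=10):
    """Iterate a cone of radius R around seed = (eta, phi)."""
    eta_j, phi_j = seed
    for _ in range(iterations):
        members = [(et, eta, phi) for (et, eta, phi) in partons
                   if math.hypot(eta - eta_j, phi - phi_j) <= R]
        if not members:
            return None
        et_j = sum(et for et, _, _ in members)
        eta_j = sum(et * eta for et, eta, _ in members) / et_j
        phi_j = sum(et * phi for et, _, phi in members) / et_j
    return et_j, eta_j, phi_j

# Two nearby partons of equal E_T merge into one jet between them:
print(snowmass_jet([(10.0, 0.3, 0.0), (10.0, -0.3, 0.0)],
                   seed=(0.3, 0.0), R=0.7))
\end{verbatim}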
The question of overlapping jets cannot be addressed in a next-to-leading order calculation of photoproduction. There, we only have up to three partons in the final state, which can form at most one recombined jet and no overlap. Experimentally, jets of type b) in figure \ref{plot6} are hard to find because of the missing seed in the jet center. This is a problem in particular with the PUCELL algorithm, which relies on initial clusters and does indeed find smaller cross sections than the less affected EUCELL algorithm \cite{But96}. One possibility to model this theoretically is to introduce an additional parameter $R_{\rm sep}$ \cite{y7} to restrict the distance of two partons from each other: \begin{equation} R_{ij} \leq \min\left[ \frac{E_{T_i}+E_{T_j}}{\max (E_{T_i},E_{T_j})} R, R_{\rm sep}\right] . \end{equation} $R_{\rm sep} = 2R$ means no restriction. In figure \ref{plot7}, we can see that for two partons \begin{figure}[htbp] \begin{center} {\unitlength1cm \begin{picture}(9,5) \epsfig{file=plot7.ps,bbllx=60pt,bblly=272pt,bburx=550pt,bbury=524pt,% height=5cm,clip=} \end{picture}} \caption[Jet Cone Definition with an Additional Parameter $R_{\rm sep}$] {\label{plot7}{\it Jet cone definition with an additional parameter for parton-parton separation $R_{\rm sep}$: a) two partons with similar or equal transverse energies $E_T$, b) two partons with large $E_T$ imbalance.}} \end{center} \end{figure} of similar or equal transverse energies $E_T$, $R_{\rm sep}$ is the limiting parameter, whereas it is the parton-jet distance $R$ for two partons with large $E_T$ imbalance. Numerical studies of the $R$ and $R_{\rm sep}$ dependences will be given in sections 5.3 and 5.5 \cite{But96}. The JADE algorithm clusters soft particles regardless of how far apart in angle they are, because only the invariant mass of the clusters is used. This is improved in the $k_T$ or Durham clustering algorithm of Catani et al. \cite{Cat91}, where one defines the closeness of two particles by \begin{equation} d_{ij} = \min (E_{T_i},E_{T_j})^2 R_{ij}^2. \label{eq11} \end{equation} $R_{ij}$ is again defined in $(\eta-\phi)$ space. For small opening angles, $R_{ij} \ll 1$, eq.~(\ref{eq11}) reduces to \begin{equation} \min (E_{T_i},E_{T_j})^2R_{ij}^2 \simeq \min (E_i,E_j)^2\Delta \theta^2 \simeq k_T^2, \end{equation} the relative transverse momentum squared of the two particles in the jet. Similarly, one can define a closeness to the remnant jet particles $b$, if they are present: \begin{equation} d_{ib} = E_{T_i}^2 R_{ib}^2. \end{equation} Particles are clustered into jets as long as they are closer than \begin{equation} d_{\{ij,ib\}} \leq d_i^2 = E_{T_i}^2 R^2, \end{equation} where $R$ is an adjustable parameter of ${\cal O} (1)$ analogous to the cone size parameter in the cone algorithm. Also, one chooses the same recombination scheme. Consequently, the $k_T$ algorithm produces jets that are very similar to those produced by the cone algorithm with $R = R_{\rm sep}$.
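These distance measures are compact enough to state in code. The following is a minimal sketch (our own function names; the azimuthal wrap-around is again ignored for brevity):
\begin{verbatim}
# Minimal sketch of the k_T distance measures: partons are
# (E_T, eta, phi) triples, R plays the role of the cone size.
def dist2(p1, p2):
    """Squared distance in (eta, phi) space."""
    return (p1[1] - p2[1]) ** 2 + (p1[2] - p2[2]) ** 2

def d_ij(p1, p2):
    """Closeness of two partons, eq. (eq11)."""
    return min(p1[0], p2[0]) ** 2 * dist2(p1, p2)

def d_i(p, R=1.0):
    """Clustering threshold E_T^2 R^2 for a parton."""
    return (p[0] * R) ** 2

# Two partons are clustered if d_ij falls below the threshold:
p1, p2 = (12.0, 0.5, 0.1), (8.0, 0.9, -0.2)
print(d_ij(p1, p2) <= d_i(p2))
\end{verbatim}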
\subsection{Hard Photoproduction Cross Sections} We have now established the links between the experimentally observed initial and final states at HERA (electron, proton, and jets) and the partonic subprocess calculable in perturbative QCD. Formally, we can combine eqs.~(\ref{eq12}), (\ref{eq13}), and (\ref{eq14}) into \begin{eqnarray} && \mbox{d}\sigma_{ep}(ep\rightarrow e~+~\mbox{jets + remnants}) = \label{eq15} \\ && = \sum_{a,b}\int\limits_0^1\mbox{d}x_aF_{\gamma/e}(x_a) \int\limits_0^1\mbox{d}y_aF_{a/\gamma}(y_a,M_a^2) \int\limits_0^1\mbox{d}x_bF_{b/p}(x_b,M_b^2) \mbox{d}\sigma_{ab}^{(n)}(ab\rightarrow \mbox{jets}), \nonumber \end{eqnarray} where $x_a$, $y_a$, and $x_b$ denote the longitudinal momentum fractions of the photon in the electron, the parton in the photon, and the parton in the proton, respectively. From now on, we will use the variable $x_a$ as the variable for the {\em parton in the electron} with the consequence that the incoming partons have momenta $p_a=x_ak$ and $p_b=x_bp$ and eq.~(\ref{eq15}) becomes \begin{eqnarray} && \mbox{d}\sigma_{ep}(ep\rightarrow e~+~\mbox{jets + remnants}) = \\ && = \sum_{a,b}\int\limits_0^1\mbox{d}x_aF_{\gamma/e}(\frac{x_a} {y_a}) \int\limits_{x_a}^1\frac{\mbox{d}y_a}{y_a}F_{a/\gamma}(y_a,M_a^2) \int\limits_0^1\mbox{d}x_bF_{b/p}(x_b,M_b^2) \mbox{d}\sigma_{ab}^{(n)}(ab\rightarrow \mbox{jets}). \nonumber \end{eqnarray} Next, one has to fix the kinematics for the photoproduction of jets. All particles are considered to be massless. In the HERA laboratory system, the positive $z$-axis is taken along the proton direction such that $k = E_e (1,0,0,-1)$ and $p = E_p (1,0,0,1)$. For the hard jets, we choose the decomposition of four-momenta into transverse energies, rapidities, and azimuthal angles $p_i = E_{T_i} (\cosh\eta_i, \cos\phi_i,\sin\phi_i,\sinh\eta_i)$. The boost from the HERA laboratory system into the $ep$ center-of-mass system is then simply mediated by a shift in rapidity $\eta_{\rm boost} = 1/2\ln(E_e/E_p)$. Through energy and momentum conservation $p_a + p_b = \sum_i p_i$, the final partons are related kinematically to the invariant scaling variables $x_a$ and $x_b$ \begin{eqnarray} x_a &=& \frac{1}{2E_e}\sum_i E_{T_i}e^{-\eta_i} , \label{eq22}\\ x_b &=& \frac{1}{2E_p}\sum_i E_{T_i}e^{ \eta_i} . \label{eq23} \end{eqnarray} The cross section for the production of an $n$-parton final state from two initial partons $a$, $b$, \begin{eqnarray} \mbox{d}\sigma_{ab}^{(n)} (ab \rightarrow \mbox{jets}) & = & \frac{1}{2s} \overline{|{\cal M}|^2} \mbox{dPS}^{(n)}, \end{eqnarray} depends on the flux factor $1/(2s)$, where $s = x_ax_bs_H$ is the partonic center-of-mass energy, the $n$-particle phase space \begin{eqnarray} \mbox{dPS}^{(n)} & = & \int (2\pi )^d \prod_{i=1}^{n} \frac{\mbox{d}^dp_i \delta (p_i^2)} {(2\pi )^{d-1}} \delta^d \left( p_a+p_b-\sum_{j=1}^n p_j \right), \end{eqnarray} where $d=4-2\varepsilon$ is the space-time dimension, and on the squared matrix element $\overline{|{\cal M}|^2}$. The latter is averaged over initial and summed over final spin and color states. Since we study almost-real photons with $Q^2\simeq 0$ and treat quarks and gluons as massless, only transverse polarizations are possible, giving spin average factors of $1/2$ for photons and quarks and $1/(2(1-\varepsilon))$ for gluons in dimensional regularization. The quarks form the fundamental representation of the SU(3) color symmetry group and can therefore carry $N_C = 3$ different colors. The gluons belong to the adjoint representation of dimension $2N_CC_F=N_C^2-1=8$. This is the defining equation for the so-called color factor $C_F$. We therefore average the initial colors with factors of $1/N_C$ for quarks and $1/(2N_CC_F)$ for gluons.
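As a numerical illustration of the kinematic relations (\ref{eq22}) and (\ref{eq23}), the following minimal sketch reconstructs $x_a$ and $x_b$ from a list of observed jets; the beam energies and all names are our own choices:
\begin{verbatim}
# Minimal sketch: momentum fractions from jet (E_T, eta) pairs
# in the HERA laboratory frame.
import math

E_E, E_P = 27.5, 820.0  # positron and proton beam energies in GeV

def momentum_fractions(jets):
    """jets is a list of (E_T, eta) pairs."""
    x_a = sum(et * math.exp(-eta) for et, eta in jets) / (2.0 * E_E)
    x_b = sum(et * math.exp(+eta) for et, eta in jets) / (2.0 * E_P)
    return x_a, x_b

# A symmetric two-jet configuration with E_T = 15 GeV:
print(momentum_fractions([(15.0, 1.0), (15.0, -1.0)]))
\end{verbatim}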
In leading order QCD, the phase space for two partons $\mbox{dPS}^{(2)}$ in the final state and the Born matrix elements $T$ have to be computed as displayed in the first line of table \ref{tab1}. In next-to-leading order QCD, new contributions arise. Whereas the virtual corrections $V$ also have a two-particle phase space (second line in table \ref{tab1}), the real corrections of the final state $F$, of the photon initial state $I$, and of the proton initial state $J$ require the inclusion of a third outgoing parton into the phase space $\mbox{dPS}^{(3)}$ (lines three to five in table \ref{tab1}). We will distinguish between direct and resolved photoproduction in the subsequent chapters and treat the two contributions in a completely parallel way. \begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c|c|} \hline Order & Phase Space $\mbox{dPS}^{(n)}$ & Direct Matrix Element $\overline{|{\cal M}|^2}$ & Resolved Matrix Element $\overline{|{\cal M}|^2}$ \\ \hline \hline LO & & $T_{\gamma b\rightarrow 12}$, sect.~3.2 & $T_{ab\rightarrow 12}$, sect.~3.3 \\ \cline{1-1}\cline{3-4} & \raisebox{1.5ex}[-1.5ex]{$\mbox{dPS}^{(2)}$, sect.~3.1} & $V_{\gamma b\rightarrow 12}$, sect.~4.1.1 & $V_{ab\rightarrow 12}$, sect.~4.1.2 \\ \cline{2-4} & $\mbox{dPS}^{(3)}$, sect.~4.2.1 & $F_{\gamma b\rightarrow 123}$, sect.~4.2.2 & $F_{ab\rightarrow 123}$, sect.~4.2.3 \\ \cline{2-4} \raisebox{1.5ex}[-1.5ex]{NLO} & & $I_{\gamma b\rightarrow 123}$, sect.~4.2.5 & $I_{ab\rightarrow 123}$, sect.~4.2.7 \\ \cline{3-4} & \raisebox{1.5ex}[-1.5ex]{$\mbox{dPS}^{(3)}$, sect.~4.2.4} & $J_{\gamma b\rightarrow 123}$, sect.~4.2.6 & $J_{ab\rightarrow 123}$, sect.~4.2.8 \\ \hline \end{tabular} \end{center} \caption[Overview of Phase Space Parametrizations and Matrix Elements] {\label{tab1}{\it Summary of phase space parametrizations and matrix elements needed for this NLO calculation of direct and resolved photoproduction.}} \end{table} The two final state partons produced in a LO process have to balance their transverse energies $E_{T_1}=E_{T_2}=E_T$, so that relations (\ref{eq22}) and (\ref{eq23}) simplify to \begin{eqnarray} x_a &=& \frac{E_T}{2E_e} \left( e^{-\eta_1} + e^{-\eta_2} \right) , \label{eq16}\\ x_b &=& \frac{E_T}{2E_p} \left( e^{ \eta_1} + e^{ \eta_2} \right) . \label{eq17} \end{eqnarray} The rapidity $\eta_2$ of the second jet is kinematically fixed by $E_T$, $\eta_1$, and $x_a$ through the relation \begin{equation} \eta_2 = -\ln\left( \frac{2x_aE_e}{E_T}-e^{-\eta_1}\right) . \label{eq18} \end{equation} In the HERA experiments, $x_a$ is restricted to a fixed interval $x_{a,\min} \leq x_a \leq x_{a,\max} < 1$. We shall disregard this constraint and allow $x_a$ to vary in the kinematically allowed range $x_{a,\min} \leq x_a \leq 1$, where \begin{equation} x_{a,\min} = \frac{E_pE_Te^{-\eta_1}}{2E_eE_p-E_eE_Te^{\eta_1}} \end{equation} except when we compare to experimental data. There we shall include the correct limits on $x_a$ dictated by the experimental analysis. From eqs.~(\ref{eq16}) and (\ref{eq17}), we can express $x_b$ as a function of $E_T$, $\eta_1$, and $x_a$: \begin{equation} x_b = \frac{x_aE_eE_Te^{\eta_1}}{2x_aE_eE_p-E_pE_Te^{-\eta_1}}. 
\label{eq20} \end{equation} The inclusive two-jet cross section for $ep\rightarrow e~+~\mbox{jet}_1~+~ \mbox{jet}_2~+~X$ is obtained from \begin{equation} \frac{\mbox{d}^3\sigma}{\mbox{d}E_T^2\mbox{d}\eta_1\mbox{d}\eta_2} = \sum_{a,b} x_a F_{a/e}(x_a,M_a^2) x_b F_{b/p}(x_b,M_b^2) \frac{\mbox{d}\sigma}{\mbox{d}t}(ab \rightarrow \mbox{jets}), \label{eq19} \end{equation} where \begin{equation} F_{a/e}(x_a,M_a^2) = \int\limits_{x_a}^1\frac{\mbox{d}y_a}{y_a} F_{a/\gamma}(y_a,M_a^2)F_{\gamma/e}\left( \frac{x_a}{y_a}\right) \end{equation} defines the parton content in the electron. d$\sigma$/d$t$ stands for the differential cross section of the partonic subprocess $ab \rightarrow \mbox{jets}$. The invariants of this process are $s = (p_a+p_b)^2,~t = (p_a-p_1)^2$, and $u = (p_a-p_2)^2$. They can be expressed by the final state variables $E_T$, $\eta_1$, and $\eta_2$ and the initial state momentum fractions $x_a$ and $x_b$: \begin{eqnarray} s &=& 4 x_a x_b E_e E_p, \\ t &=& -2 x_a E_e E_T e^{\eta_1} = -2 x_b E_p E_T e^{-\eta_2}, \\ u &=& -2 x_a E_e E_T e^{\eta_2} = -2 x_b E_p E_T e^{-\eta_1}. \end{eqnarray} For the inclusive one-jet cross section, we must integrate over one of the rapidities in (\ref{eq19}). We integrate over $\eta_2$ and transform to the variable $x_a$ using (\ref{eq16}). The result is the cross section for $ep \rightarrow e+\mbox{jet}+X$, which depends on $E_T$ and $\eta$: \begin{equation} \frac{\mbox{d}^2\sigma}{\mbox{d}E_T\mbox{d}\eta} = \sum_{a,b} \int_{x_{a,\min}}^1 \mbox{d}x_a x_a F_{a/e}(x_a,M_a^2) x_b F_{b/p}(x_b,M_b^2) \frac{4E_eE_T}{2x_aE_e-E_Te^{-\eta}} \frac{\mbox{d}\sigma}{\mbox{d}t}(ab\rightarrow \mbox{jet}). \label{eq24} \end{equation} Here, $x_b$ is given by (\ref{eq20}) with $\eta_1 = \eta$. \setcounter{equation}{0} \section{Leading Order Cross Sections} In this section, we consider the leading order contributions to the photoproduction cross section d$\sigma$/d$t$. The leading order direct hard scattering process is of ${\cal O} (\alpha\alpha_s)$, and the leading order resolved hard scattering process is of ${\cal O} (\alpha_s^2)$. Since the latter has to be multiplied with the photon structure function of ${\cal O} (\alpha/\alpha_s)$, both contributions are of the same order ${\cal O} (\alpha\alpha_s)$. To this order, it is necessary to calculate the phase space of two-particle final states, the tree-level matrix elements for direct photons $\gamma b\rightarrow 12$, and those for resolved photons $ab\rightarrow 12$. The two particles produced in the hard scattering then correspond directly to the two jets that can at most be observed. Either particle can be in jet one or jet two, so that the cross sections d$\sigma$/d$t$ have to be considered in their given form and with $(t\leftrightarrow u)$ to give the complete cross section d$\sigma$/d$t(s,t,u)$+d$\sigma$/d$t(s,u,t)$. Furthermore, one has to add symmetry factors of $1/n!$ for $n$ identical particles in the final state. As we will later go from four to $d=4-2\varepsilon$ dimensions to regularize the singularities showing up in next-to-leading order, we already give the results in this section in $d$ dimensions. If one is only interested in leading order results, one can always set $\varepsilon$ to zero in this section. Section 3.1 contains the phase space for $2\rightarrow 2$ scattering in $d$ dimensions. It is calculated in the center-of-mass system of the incoming or equivalently outgoing particles. The matrix elements for direct photons are presented in section 3.2. 
Only one master diagram for the so-called QCD Compton scattering process $\gamma q\rightarrow gq$ contributes. The gluon initiated process is obtained by crossing. For resolved photons, we have four parton-parton master diagrams. The corresponding matrix elements are given in section 3.3. In addition, we show a table from which the matrix elements for all other processes can be deduced. Section 3.4 contains the direct $\gamma\gamma$ Born matrix element. \subsection{Phase Space for Two-Particle Final States} We give here a brief sketch of the phase space calculation for the scattering of two initial into two final particles in $d=4-2\varepsilon$ dimensions. We start from the general expression \cite{Byc73} \begin{equation} \mbox{dPS}^{(2)} = \int (2\pi )^d \prod_{i=1}^{2} \frac{\mbox{d}^dp_i \delta (p_i^2)} {(2\pi )^{d-1}} \delta^d \left( p_a+p_b-\sum_{j=1}^2 p_j \right). \end{equation} The dijet and single-jet cross sections in eqs.~(\ref{eq19}) and (\ref{eq24}) require partonic cross sections differential in the Mandelstam variable $t$. We therefore insert an additional $\delta$-function with respect to $t$ \begin{equation} \frac{\mbox{dPS}^{(2)}}{\mbox{d}t} = \int (2\pi )^d \prod_{i=1}^{2} \frac{\mbox{d}^dp_i \delta (p_i^2)} {(2\pi )^{d-1}} \delta^d \left( p_a+p_b-\sum_{j=1}^2 p_j \right) \delta (t+2p_ap_1), \end{equation} before we integrate over the $d$-dimensional $\delta$-function leading to \begin{equation} \frac{\mbox{dPS}^{(2)}}{\mbox{d}t} = \int \frac{\mbox{d}^dp_1\delta (p_1^2)}{(2\pi)^{d-2}} \delta ((p_a+p_b-p_1)^2) \delta (t+2p_ap_1). \end{equation} Next, we choose the center-of-mass system of the incoming partons $a$, $b$ as shown in figure \ref{fig15}, \input{fig15} where the outgoing partons are of the same energy and can be described by a single polar angle $\theta$ between $p_1$ and $p_a$. We integrate over the azimuthal angle $\phi$ \begin{equation} \frac{\mbox{dPS}^{(2)}}{\mbox{d}t} = \int \frac{\mbox{d}E_1 E_1^{d-3} \mbox{d}\cos\theta (1-\cos^2\theta )^{\frac{d-4}{2}}} {2^{d-2}\pi^{\frac{d-2}{2}}\Gamma \left( \frac{d-2}{2} \right) } \delta (s-2\sqrt{s}E_1 )\delta (t+\sqrt{s}E_1 (1-\cos\theta )) \end{equation} and over the remaining $\delta$-functions. The final result is \cite{Aur87,Arn89,Gra90} \begin{equation} \frac{\mbox{dPS}^{(2)}}{\mbox{d}t} = \frac{1}{\Gamma(1-\varepsilon)} \left( \frac{4\pi s}{tu} \right) ^\varepsilon \frac{1}{8\pi s}. \end{equation} \subsection{Born Matrix Elements for Direct Photons} For direct photoproduction of two partons, there is only one generic Feynman diagram $\gamma q\rightarrow gq$ as displayed in figure \ref{fig1}. \input{fig1.tex} Unlike deep inelastic scattering, the leading order process already includes the production of a hard gluon, which balances the transverse momentum of the scattered quark. This can either happen in the initial state (left diagram in figure \ref{fig3}) or in the final state (right diagram in figure \ref{fig3}) of this so-called ``QCD Compton Scattering''. \input{fig3.tex} Both diagrams have to be added and squared to give the full matrix element squared. In $d$ dimensions, the result is \begin{equation} |{\cal M}|^2_{\gamma q\rightarrow gq}(s,t,u) = e^2e_q^2g^2\mu^{4\varepsilon}T_{\gamma q\rightarrow gq}(s,t,u), \label{eq25} \end{equation} where the renormalization scale $\mu$ keeps the couplings dimensionless and where \begin{equation} T_{\gamma q\rightarrow gq}(s,t,u) = 8N_CC_F(1-\varepsilon)\left[ (1-\varepsilon) \left( -\frac{u}{s}- \frac{s}{u}\right) +2\varepsilon\right] . 
\end{equation} Note that the DIS interference term of the two diagrams is missing as it is proportional to $Q^2\simeq 0$. As discussed in section 2.2, the proton not only consists of quarks but also of gluons giving rise to a ``Boson Gluon Fusion'' process $\gamma g\rightarrow q\bar{q}$ not shown here. The corresponding matrix element is obtained from eq.~(\ref{eq25}) by simply crossing the initial quark with the final gluon, or $(s\leftrightarrow t)$, and multiplying by $(-1)$ for crossing a fermion line. Similarly, the contribution for incoming anti-quarks $\gamma\bar{q}\rightarrow g\bar{q}$ can be calculated by crossing $(s\leftrightarrow u)$. The LO direct matrix elements have been known for quite some time \cite{Aur87,Bae89a,Bod92} and are summarized in table \ref{tab2}. \begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|} \hline Process & Matrix Element $\overline{|{\cal M}|^2}$ \\ \hline \hline $\gamma q\rightarrow gq$ & $[|{\cal M}|^2_{\gamma q\rightarrow gq}(s,t,u)]/[4N_C]$ \\ \hline $\gamma\bar{q}\rightarrow g\bar{q}$ & $[|{\cal M}|^2_{\gamma q\rightarrow gq}(u,t,s)]/[4N_C]$ \\ \hline $\gamma g\rightarrow q\bar{q}$ & $-[|{\cal M}|^2_{\gamma q\rightarrow gq}(t,s,u)]/[8(1-\varepsilon)N_CC_F]$ \\ \hline \end{tabular} \end{center} \caption[Squared $2\rightarrow 2$ Matrix Elements for Direct Photoproduction] {\label{tab2}{\it Summary of $2\rightarrow 2$ squared matrix elements for direct photoproduction.}} \end{table} \subsection{Born Matrix Elements for Resolved Photons} For resolved photoproduction of two partons, we have to calculate the four generic parton-parton scattering processes $qq'\rightarrow qq'$, $qq\rightarrow qq$, $q\bar{q}\rightarrow gg$, and $gg\rightarrow gg$. The corresponding Feynman diagrams are displayed in figure \ref{fig2}. \input{fig2.tex} The first process for the scattering of two non-identical quarks $q$ and $q'$ has already been calculated in 1978 by Cutler and Sivers \cite{Cut78}, shortly prior to the other processes \cite{Com77}, and was compared to inclusive jet and hadron production at ISR energies. \input{fig4.tex} Figure \ref{fig4}a) shows that for $qq'\rightarrow qq'$ only the one-gluon exchange between the different quarks in the $t$-channel contributes. The matrix element squared is given by \begin{equation} |{\cal M}|^2_{qq'\rightarrow qq'}(s,t,u) = g^4\mu^{4\varepsilon}T_{qq'\rightarrow qq'}(s,t,u) \label{eq46} \end{equation} with \begin{equation} T_{qq'\rightarrow qq'}(s,t,u) = 4N_CC_F\left( \frac{s^2+u^2}{t^2}-\varepsilon\right) . \end{equation} We will split off the coupling constants for the other Born matrix elements $|{\cal M}|^2$ as in eq.~(\ref{eq46}). Equal quark flavors cannot be distinguished in the final state so that both diagrams in figure \ref{fig4}b) have to be added. In addition to the squares of the individual diagrams already present in the case of different quark flavors, the interference term \begin{equation} T_{qq\rightarrow qq}(s,t,u) = -8C_F(1-\varepsilon)\left( \frac{s^2}{ut}+\varepsilon\right) \end{equation} also contributes to the process $qq\rightarrow qq$. Final gluons are also undistinguishable and can even couple in a non-abelian way as shown in figure \ref{fig4}c). The resulting matrix element for the process $q\bar{q}\rightarrow gg$ is \begin{equation} T_{q\bar{q}\rightarrow gg}(s,t,u) = 4C_F(1-\varepsilon)\left( \frac{2N_CC_F}{ut}-\frac{2N_C^2} {s^2}\right) (t^2+u^2-\varepsilon s^2). 
\end{equation} The diagrams for the process $gg\rightarrow gg$ in figure \ref{fig4}d) give \begin{equation} T_{gg\rightarrow gg}(s,t,u) = 32N_C^3C_F(1-\varepsilon)^2\left( 3-\frac{ut}{s^2}- \frac{us}{t^2}-\frac{st}{u^2}\right) . \end{equation} All other diagrams can be obtained from the above by simple crossing relations. The complete result in $d$ dimensions can be found in \cite{Ell86} and is summarized in table \ref{tab3}. \begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|} \hline Process & Matrix Element $\overline{|{\cal M}|^2}$ \\ \hline \hline $qq'\rightarrow qq'$ & $[|{\cal M}|^2_{qq'\rightarrow qq'}(s,t,u)]/[4N_C^2]$ \\ \hline $q\bar{q}'\rightarrow q\bar{q}'$ & $[|{\cal M}|^2_{qq'\rightarrow qq'}(u,t,s)]/[4N_C^2]$ \\ \hline $q\bar{q}\rightarrow \bar{q}'q'$ & $[|{\cal M}|^2_{qq'\rightarrow qq'}(t,s,u)]/[4N_C^2]$ \\ \hline $qq\rightarrow qq$ & $[|{\cal M}|^2_{qq'\rightarrow qq'}(s,t,u)+|{\cal M}|^2_{qq'\rightarrow qq'}(s,u,t)+ |{\cal M}|^2_{qq\rightarrow qq}(s,t,u)]/[4N_C^2]/2!$ \\ \hline $q\bar{q}\rightarrow q\bar{q}$ & $[|{\cal M}|^2_{qq'\rightarrow qq'}(u,t,s)+|{\cal M}|^2_{qq'\rightarrow qq'}(u,s,t)+ |{\cal M}|^2_{qq\rightarrow qq}(u,t,s)]/[4N_C^2]$ \\ \hline $q\bar{q}\rightarrow gg$ & $[|{\cal M}|^2_{q\bar{q}\rightarrow gg}(s,t,u)]/[4N_C^2]/2!$ \\ \hline $qg \rightarrow qg$ & $-[|{\cal M}|^2_{q\bar{q}\rightarrow gg}(t,s,u)]/[8(1-\varepsilon)N_C^2C_F]$ \\ \hline $\bar{q}g\rightarrow \bar{q}g$ & $-[|{\cal M}|^2_{q\bar{q}\rightarrow gg}(t,u,s)]/[8(1-\varepsilon)N_C^2C_F]$ \\ \hline $gg\rightarrow q\bar{q}$ & $[|{\cal M}|^2_{q\bar{q}\rightarrow gg}(s,t,u)]/[16(1-\varepsilon)^2N_C^2C_F^2]$ \\ \hline $gg\rightarrow gg$ & $[|{\cal M}|^2_{gg\rightarrow gg}(s,t,u)]/[16(1-\varepsilon)^2N_C^2C_F^2]/2!$ \\ \hline \end{tabular} \end{center} \caption[Squared $2\rightarrow 2$ Matrix Elements for Resolved Photoproduction] {\label{tab3}{\it Summary of $2\rightarrow 2$ squared matrix elements for resolved photoproduction.}} \end{table} \subsection{Born Matrix Element for Direct $\gamma\gamma$ Scattering} Since the gluon has no electromagnetic charge, direct $\gamma\gamma$ scattering can only proceed through the process $\gamma\gamma\rightarrow q\bar{q}$ as shown in figure \ref{kkkfig1}. \input{kkkfig1.tex} The corresponding matrix element is given by \begin{equation} |{\cal M}|^2_{\gamma\gamma\rightarrow q\bar{q}}(s,t,u) = e^4e_q^4\mu^{4\varepsilon}T_{\gamma\gamma\rightarrow q\bar{q}}(s,t,u), \label{kkkeq1} \end{equation} where \begin{equation} T_{\gamma\gamma\rightarrow q\bar{q}}(s,t,u) = 8N_C(1-\varepsilon)\left[ (1-\varepsilon) \left( \frac{u}{t}+ \frac{t}{u}\right) -2\varepsilon\right] . \end{equation} It can be obtained from the direct photoproduction matrix element for $\gamma q\rightarrow gq$ in eq.~(\ref{eq25}) through crossing $(s\leftrightarrow t)$, multiplication by $(-1)$ for the crossing of a fermion line, and replacing the strong coupling constant $g^2$ by $e^2e_q^2$ and the color factor $C_F$ by 1. \setcounter{equation}{0} \section{Next-To-Leading Order Cross Sections} As in any calculation of next-to-leading order cross sections, we have to calculate two types of corrections in photoproduction: virtual and real corrections. For the direct case, these corrections are of ${\cal O} (\alpha\alpha_s^2)$, and for the resolved case, they are of ${\cal O} (\alpha_s^3)$. Throughout this section, direct and resolved photon results will be presented separately, but in a completely parallel way.
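Before proceeding, the crossing bookkeeping of tables \ref{tab2} and \ref{tab3} can be made concrete in a short Python sketch, here at $\varepsilon=0$ and with the couplings $e^2e_q^2g^2\mu^{4\varepsilon}$ stripped off; the function names are ours, and the resolved table works analogously: \begin{verbatim}
NC, CF = 3.0, 4.0/3.0

def T_gamma_q_gq(s, t, u):
    # T_{gamma q -> g q}(s,t,u) of eq. (25) at eps = 0
    return 8.0*NC*CF*(-u/s - s/u)

def M2bar(process, s, t, u):
    # averaged |M|^2 entries of table 2 (couplings stripped off)
    if process == 'gamma q -> g q':
        return T_gamma_q_gq(s, t, u)/(4.0*NC)
    if process == 'gamma qbar -> g qbar':   # cross (s <-> u)
        return T_gamma_q_gq(u, t, s)/(4.0*NC)
    if process == 'gamma g -> q qbar':      # cross (s <-> t), factor (-1)
        return -T_gamma_q_gq(t, s, u)/(8.0*NC*CF)
    raise ValueError(process)

s, t = 100.0, -30.0
u = -s - t
for p in ('gamma q -> g q', 'gamma qbar -> g qbar', 'gamma g -> q qbar'):
    print(p, M2bar(p, s, t, u))
\end{verbatim}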
Whereas the direct contributions have already been published \cite{Kla93,x8}, the resolved contributions are presented here for the first time. The virtual corrections consist of the interference terms of one-loop graphs with the Born graphs calculated in section 3 and will be given in section 4.1. We can still use the same phase space for two final partons as in the last section. The direct matrix elements are contained in section 4.1.1, those for resolved photons in section 4.1.2. Again, we display only the relevant master diagrams, from which all subprocesses can be deduced through crossing. Section 4.1.3 contains the virtual corrections for direct $\gamma\gamma$ scattering. The real corrections are derived from the integration of diagrams with a third parton in the final state over regions of phase space, where this third parton causes singularities in the matrix elements. Different methods can be employed here. We choose the phase space slicing method with an invariant mass cut \cite{Kra84} in section 4.2. Alternatively, the subtraction method could be used \cite{Ell81,Ell89,x6}. We calculate the three particle phase space in two different versions for final state singularities (section 4.2.1) and initial state singularities (section 4.2.4). These sections are followed by the calculation of the matrix elements for final state (sections 4.2.2 through 4.2.3) and initial state corrections (sections 4.2.5 through 4.2.8). The real corrections for direct $\gamma\gamma$ scattering are presented in section 4.2.9. Finally, we demonstrate the cancellation of the infrared and collinear singularities, which show up in the virtual and real corrections separately, in section 4.3. The ultraviolet singularities in the virtual corrections are removed by counter terms in the renormalization procedure. Remaining collinear singularities in the initial states can be absorbed into the photon and proton structure functions. \subsection{Virtual Corrections} The central blobs in figures \ref{fig1} and \ref{fig2} can also contain loop corrections to the leading order matrix elements in figures \ref{fig3} and \ref{fig4}. Up to ${\cal O} (\alpha\alpha_s^2)$, the interference terms between the direct Born diagrams $T$ and the direct one-loop diagrams $V$ have to be taken into account according to \begin{eqnarray} |{\cal M}|^2({\cal O}(\alpha\alpha_s^2)) &=& |\sqrt{\alpha\alpha_s T}+\sqrt{\alpha\alpha_s^3 V}|^2 \nonumber \\ &=& \alpha\alpha_sT+2\alpha\alpha_s^2\sqrt{V^\ast T}+{\cal O} (\alpha\alpha_s^3). \end{eqnarray} Similarly, the interference terms between the resolved Born diagrams $T$ and the resolved one-loop diagrams $V$ have to be taken into account up to ${\cal O} (\alpha_s^3)$. The inner loop momenta are unconstrained by the outer $2\rightarrow 2$ scattering process. We can therefore still make use of the phase space calculated in section 3.1, but have to integrate over the inner degrees of freedom. At the lower and upper integration bounds, we encounter infrared (IR) and ultraviolet (UV) divergencies. Using the dimensional regularization scheme of 't~Hooft and Veltman \cite{tHo72}, we integrate in $d=4-2\varepsilon$ dimensions, thus rendering the integrals finite and keeping the theory Lorentz and gauge invariant. The IR and UV poles show up as terms $\propto 1/\varepsilon^2$ and $1/\varepsilon$. As a consequence, the complete higher order cross sections have to be calculated in $d$ dimensions up to ${\cal O} (\varepsilon^2)$ to ensure that no finite terms are missing.
The ultraviolet divergencies coming from infinite loop momenta can be removed by renormalizing the fields, couplings, gauge parameters, and masses in the Lagrangian, since QCD is a renormalizable field theory. This is done by multiplying the divergent parameters in the Lagrangian with renormalization constants $Z_i$ and expanding up to the required order in the coupling constant $g$. The resulting counter terms then render the physical Green's functions finite. In addition to the $1/\varepsilon$ poles, universal finite contributions \begin{equation} \frac{1}{\varepsilon}-\gamma_E+\ln (4\pi) \label{eq26} \end{equation} are included into the counter terms according to the modified minimal subtraction ($\overline{\mbox{MS}}$) scheme \cite{tHo73}, which we employ here. The coupling constant $g$ is kept dimensionless by multiplying it with the renormalization scale $\mu$ \begin{equation} g \rightarrow g\mu^{\varepsilon}. \label{eq27} \end{equation} If eq.~(\ref{eq27}) is expanded in powers of $\varepsilon$ and combined with single poles of the type in eq.~(\ref{eq26}), the renormalization procedure will produce an explicit logarithmic dependence on the renormalization scale $\mu$ \begin{equation} \frac{1}{\varepsilon} \left( \frac{4\pi\mu^2}{s} \right) ^\varepsilon \doteq \frac{1}{\varepsilon} + \ln \frac{4\pi\mu^2}{s}, \end{equation} that cancels the first term of the expansion of the running coupling constant \begin{equation} \alpha_s(\mu^2) = \frac{12\pi}{(33-2N_f)\ln \frac{\mu^2} {\Lambda^2}} \left( 1-\frac{6(153-19N_f)}{(33-2N_f)^2} \frac {\ln (\ln \frac{\mu^2}{\Lambda^2} )} {\ln \frac{\mu^2}{\Lambda^2}} \right) + \cdots. \end{equation} $\alpha_s$ is given here in two-loop approximation as appropriate in NLO QCD, and $N_f$ is the number of flavors in the $q\bar{q}$ loops. We choose $\mu = {\cal O} (E_T)$ as for the factorization scales. Infrared divergencies arise from small loop momenta in the virtual corrections. They have to cancel eventually against the divergencies from the emission of real soft and collinear partons according to the Kinoshita-Lee-Nauenberg theorem \cite{Kin62}. Finally, one obtains finite physical results in the limit $d\rightarrow 4$. \subsubsection{Virtual Corrections for Direct Photons} The one-loop corrections to the left diagram of figure \ref{fig3} are shown in figure \ref{fig5} \input{fig5.tex} and can be classified into a) self-energy diagrams, b) propagator corrections and box diagrams, and c) vertex corrections. They contain an additional virtual gluon, which leads to an extra factor $\alpha_s$. Similar diagrams are obtained for the right diagram of figure \ref{fig3}. As in leading order, the diagrams for the process $\gamma g\rightarrow q\bar{q}$ can be obtained from figure \ref{fig5} by crossing the initial quark and the final gluon or equivalently $(s\leftrightarrow t)$ and multiplying by $(-1)$ for the crossing of a fermion line. However, the virtual corrections can in general contain logarithms $\ln(x/s-i\eta)$ and dilogarithms $\mbox{Li}_2 (x/s)$, where $x$ denotes different Mandelstam variables before and after crossing. The $i\eta$-prescription for Feynman propagators then takes care of additional terms of $\pi^2$, which arise in the quadratic logarithms with negative argument. The virtual contributions have been well known for many years from $e^+ e^- \rightarrow q\bar{q}g$ higher order QCD calculations \cite{Ell81,Fab82}. For the corresponding photoproduction cross section, one substitutes $Q^2 = 0$ and performs the necessary crossings.
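As a numerical aside, the size of the two-loop coupling quoted above is easily evaluated; in this sketch the values of $\Lambda$ and $N_f$ are illustrative, not taken from the text: \begin{verbatim}
import math

def alpha_s(mu2, Lambda2=0.2**2, Nf=5):
    # two-loop running coupling as quoted above;
    # Lambda = 200 MeV and Nf = 5 are illustrative values
    b = 33.0 - 2.0*Nf
    L = math.log(mu2/Lambda2)
    return 12.0*math.pi/(b*L)*(1.0 - 6.0*(153.0 - 19.0*Nf)/b**2*math.log(L)/L)

print(alpha_s(20.0**2))   # mu = O(E_T) = 20 GeV gives roughly 0.15
\end{verbatim}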
The result can be found in \cite{Aur87,Bod92,Kla93}. We have also compared with the results in \cite{Gra90} for deep inelastic scattering $eq \rightarrow e'gq$ and $e g \rightarrow e' q\bar{q}$, which can be expressed by the invariants $s$, $t$, and $u$ after setting $Q^2 = 0$. For completeness and for later use, we write the final result in the form \begin{equation} |{\cal M}|^2_{\gamma b\rightarrow 12} (s,t,u) = e^2e_q^2g^2 \mu^{4\varepsilon} \frac{\alpha_s}{2\pi} \left( \frac{4\pi\mu^2}{s} \right) ^\varepsilon \frac{\Gamma(1-\varepsilon)}{\Gamma(1-2\varepsilon)} V_{\gamma b \rightarrow 12}(s,t,u) \end{equation} The expression $V_{\gamma q\rightarrow gq}(s,t,u)$ is given by \begin{eqnarray} V_{\gamma q\rightarrow gq}(s,t,u) & = & \left[ C_F\left( -\frac{2}{\varepsilon^2} +\frac{1} {\varepsilon} \left( 2\ln \frac{-t}{s} - 3 \right) +\frac{2\pi^2}{3} -7 +\ln^2\frac{t}{u} \right) \right. \label{eq28}\\ && -\frac{N_C}{2}\left( \frac{2}{\varepsilon^2}+\frac{1}{\varepsilon}\left( \frac{11}{3}+2\ln\frac{-t}{s}-2\ln\frac{-u}{s} \right) +\frac{\pi^2}{3}+\ln^2\frac{t}{u} +\frac{11}{3}\ln\frac{s}{\mu^2}\right) \nonumber\\ && \left. +\frac{N_f}{3}\left( \frac{1}{\varepsilon}+\ln\frac{s}{\mu^2}\right) \right] T_{\gamma q\rightarrow gq}(s,t,u) \nonumber\\ && +8N_CC_F^2\left( -2\ln\frac{-u}{s}+4\ln\frac{-t}{s}-3\frac{s}{u}\ln\frac{-u}{s}-\left( 2+\frac{u}{s} \right) \left( \pi^2+\ln^2\frac{t}{u} \right) \right. \nonumber\\ && \left. -\left( 2+\frac{s}{u} \right) \ln^2\frac{-t}{s} \right) \nonumber\\ && -4N_C^2C_F\left( 4\ln\frac{-t}{s}-2\ln\frac{-u}{s}-\left( 2+\frac{u}{s} \right) \left( \pi^2+\ln^2\frac{t}{u} \right) -\left( 2+\frac{s}{u} \right) \ln^2\frac{-t}{s}\right) .\nonumber \end{eqnarray} The contributions for incoming anti-quarks and gluons $\gamma\bar{q}\rightarrow g\bar{q}$ and $\gamma g\rightarrow q\bar{q}$ could be obtained according to table \ref{tab2}, if the imaginary parts were included above. The result for anti-quarks turns out to be identical to eq.~(\ref{eq28}), and for the gluon initiated process one obtains \begin{eqnarray} V_{\gamma g\rightarrow q\bar{q}}(s,t,u) & = & \left[ C_F\left( -\frac{2}{\varepsilon^2}-\frac{3} {\varepsilon} +\frac{2\pi^2}{3}-7+\ln^2\frac{-t}{s}+\ln^2\frac{-u}{s}\right) \right. \label{eq29}\\ && -\frac{N_C}{2}\left( \frac{2}{\varepsilon^2}+\frac{1}{\varepsilon}\left( \frac{11}{3}-2\ln\frac{-t}{s}-2\ln\frac{-u}{s} \right) +\frac{\pi^2}{3}+\ln^2\frac{tu}{s^2} +\frac{11}{3}\ln\frac{s}{\mu^2}\right) \nonumber\\ && \left. +\frac{N_f}{3}\left( \frac{1}{\varepsilon}+\ln\frac{s}{\mu^2}\right) \right] T_{\gamma g\rightarrow q\bar{q}}(s,t,u) \nonumber\\ && +8N_CC_F^2\left( 2\ln\frac{-t}{s}+2\ln\frac{-u}{s}+3\frac{u}{t}\ln\frac{-t}{s}+3\frac{t}{u}\ln\frac{-u}{s} +\left( 2+\frac{u}{t} \right) \ln^2\frac{-u}{s}\right. \nonumber\\ && \left. +\left( 2+\frac{t}{u}\right) \ln^2\frac{-t}{s}\right) \nonumber\\ && -4N_C^2C_F\left( 2\ln\frac{-t}{s}+2\ln\frac{-u}{s} +\left( 2+\frac{u}{t}\right) \ln^2\frac{-u}{s} +\left( 2+\frac{t}{u} \right) \ln^2\frac{-t}{s}\right) .\nonumber \end{eqnarray} All UV poles have been canceled by counter terms through the renormalization procedure, and the remaining IR poles are contained in the first three lines of eqs.~(\ref{eq28}) and (\ref{eq29}). Contrary to some of the finite terms, they are proportional to the LO Born matrix elements $T_{\gamma q\rightarrow gq}$ and $T_{\gamma g\rightarrow q\bar{q}}$. 
The explicit dependence of the virtual corrections on the renormalization scale $\mu$ is contained in the logarithms proportional to the Born matrix elements and $N_C$ and $N_f$, respectively. All terms of ${\cal O} (\varepsilon)$ and higher have been omitted since they do not contribute in the physical limit $d\rightarrow 4$. \subsubsection{Virtual Corrections for Resolved Photons} For the resolved virtual corrections, we show in figure \ref{fig6} only \input{fig6.tex} the one-loop diagrams for $qq'\rightarrow qq'$ and classify them again into a) self-energy diagrams, b) propagator corrections and box diagrams, and c) vertex corrections. Due to the additional gluon, these contributions are again one order higher in $\alpha_s$ than the Born terms as in the direct case. The complete set of diagrams can be found in \cite{Ell86} as well as the results \begin{equation} |{\cal M}|^2_{ab\rightarrow 12} (s,t,u) = g^4 \mu^{4\varepsilon} \frac{\alpha_s}{2\pi} \left( \frac{4\pi\mu^2}{Q^2} \right) ^\varepsilon \frac{\Gamma(1-\varepsilon)}{\Gamma(1-2\varepsilon)} V_{ab\rightarrow 12} (s,t,u), \end{equation} where $Q^2$ denotes now an arbitrary scale. For the diagrams in figure \ref{fig6}, we obtain \begin{eqnarray} V_{qq'\rightarrow qq'}(s,t,u) & = & \left[ C_F\left( -\frac{4}{\varepsilon^2} -\frac{1}{\varepsilon} (6+8l(s)-8l(u)-4l(t))\right. \rp\\ && -\frac{2\pi^2}{3}-16-2l^2(t)+l(t)(6+8l(s)-8l(u))\nonumber\\ && -2\frac{s^2-u^2}{s^2+u^2}(2\pi^2+(l(t)-l(s))^2+(l(t)-l(u))^2)\nonumber\\ && \left. +2\frac{s+u}{s^2+u^2}((s+u)(l(u)-l(s))+(u-s)(2l(t)-l(s)-l(u)))\right) \nonumber\\ && +N_C\left( \frac{1}{\varepsilon}\left( 4l(s)-2l(u)-2l(t)\right) +\frac{85}{9}+\pi^2+2l(t)(l(t)+l(u)-2l(s))\right. \nonumber\\ && +\frac{s^2-u^2}{2(s^2+u^2)}(3\pi^2+2(l(t)-l(s))^2+(l(t)-l(u))^2) \nonumber\\ && \left. -\frac{st}{s^2+u^2}(l(t)-l(u)) +\frac{2ut}{s^2+u^2}(l(t)-l(s)) +\frac{11}{3}(l(-\mu^2)-l(t))\right) \nonumber \\ && \left. +\frac{N_f}{2}\left( \frac{4}{3}(l(t)-l(-\mu^2))-\frac{20}{9}\right) \right] T_{qq'\rightarrow qq'}(s,t,u). \nonumber \end{eqnarray} The contributions for incoming quarks and anti-quarks of different flavor can be obtained from table \ref{tab3} as well as the contribution for identical quark flavors. There one also has to include the interference term \begin{eqnarray} V_{qq\rightarrow qq}(s,t,u) & = & \left[ C_F\left( -\frac{4}{\varepsilon^2} -\frac{1}{\varepsilon} (6+4l(s)-4l(t)-4l(u))-\frac{7\pi^2}{6}-16\right. \rp\\ && \left. -\frac{3}{2}(l(t)+l(u))^2+2l(s)(l(t)+l(u))+2(l(t)+l(u))\right) \nonumber\\ && +N_C\left( \frac{1}{\varepsilon}(4l(s)-2l(t)-2l(u))+\frac{85}{9} +\frac{5}{4}(l(t)+l(u))^2\right. \nonumber\\ && \left. -l(t)l(u)-2l(s)(l(t)+l(u)) -\frac{4}{3}(l(t)+l(u))+\frac{5}{4}\pi^2+\frac{11}{3}l(-\mu^2)\right) \nonumber \\ && +\frac{N_f}{2}\left( -\frac{20}{9}+\frac{2}{3}(l(t)+l(u)-2l(-\mu^2))\right) \nonumber \\ && \left. +\frac{1}{N_C}\left( (\pi^2+(l(t)-l(u))^2)\frac{ut}{2s^2}+\frac{u}{s} l(t)+\frac{t}{s}l(u)\right) \right] T_{qq\rightarrow qq}(s,t,u). \nonumber \end{eqnarray} For incoming antiquarks, the Mandelstam variables $(s\leftrightarrow u)$ have to be crossed. The first lines proportional to the color factors $C_F$ and $N_C$ contain the IR singular terms in the two processes discussed above. Furthermore, the complete virtual corrections are proportional to the Born matrix elements. 
This is not true for the case $q\bar{q}\rightarrow gg$, where the virtual corrections are given by \begin{eqnarray} V_{q\bar{q}\rightarrow gg}(s,t,u) & = & \left[ C_F\left( -\frac{2}{\varepsilon^2}-\frac{3} {\varepsilon}-7 -\frac{\pi^2}{3}\right) \right. \\ && +N_C\left( -\frac{2}{\varepsilon^2}-\frac{11}{3\varepsilon}+\frac{11}{3} l(-\mu^2) -\frac{\pi^2}{3}\right) \nonumber\\ && \left. +\frac{N_f}{2}\left( \frac{4}{3\varepsilon}-\frac{4}{3}l(-\mu^2)\right) \right] T_{q\bar{q}\rightarrow gg}(s,t,u) \nonumber \\ && +\frac{l(s)}{\varepsilon}\left( \lr 4N_C^3C_F+\frac{4C_F}{N_C}\right) \frac{t^2+u^2} {ut}-16N_C^2C_F^2\frac{t^2+u^2}{s^2}\right) \nonumber\\ && +\frac{8N_C^3C_F}{\varepsilon}\left( l(t)\left( \frac{u}{t}-\frac{2u^2}{s^2}\right) +l(u) \left( \frac{t}{u}-\frac{2t^2}{s^2}\right) \rr\nonumber\\ && -\frac{8N_CC_F}{\varepsilon}\left( \frac{u}{t}+\frac{t}{u}\right) (l(t)+l(u))\nonumber\\ && +f^c(s,t,u)+f^c(s,u,t). \nonumber \end{eqnarray} Here, only parts of the IR singular terms are proportional to the Born matrix element $T_{q\bar{q}\rightarrow gg}$. The finite contributions have been put into the function \begin{eqnarray} f^c(s,t,u) & = & 8N_C^2C_F\left[ \frac{l(t)l(u)}{N_C}\frac{t^2+u^2}{2tu}\right. \\ && +l^2(s)\left( \frac{1}{4N_C^3}\frac{s^2}{tu}+\frac{1}{4N_C}\left( \frac{1}{2} +\frac{t^2+u^2}{tu}-\frac{t^2+u^2}{s^2}\right) -\frac{N_C}{4}\frac{t^2+u^2} {s^2}\right) \nonumber\\ && +l(s)\left( \lr\frac{5C_F}{4}-\frac{1}{2N_C}-\frac{1}{N_C^3}\right) -\left( N_C+\frac{1}{N_C^3}\right) \frac{t^2+u^2}{2tu} -\frac{C_F}{2}\frac{t^2+u^2}{s^2}\right) \nonumber\\ && +\pi^2\left( \frac{1}{8N_C}+\frac{1}{N_C^3}\left( \frac{3(t^2+u^2)}{8tu} +\frac{1}{2}\right) +N_C\left( \frac{t^2+u^2}{8tu}-\frac{t^2+u^2}{2s^2}\right) \rr \nonumber\\ && +\left( N_C+\frac{1}{N_C}\right) \left( \frac{1}{8}-\frac{t^2+u^2}{4s^2}\right) \nonumber\\ && +l^2(t)\left( N_C\left( \frac{s}{4t}-\frac{u}{s}-\frac{1}{4}\right) +\frac{1}{N_C} \left( \frac{t}{2u}-\frac{u}{4s}\right) +\frac{1}{N_C^3}\left( \frac{u}{4t} -\frac{s}{2u}\right) \rr\nonumber\\ && +l(t)\left( N_C\left( \frac{t^2+u^2}{s^2}+\frac{3t}{4s}-\frac{5u}{4t} -\frac{1}{4}\right) -\frac{1}{N_C}\left( \frac{u}{4s}+\frac{2s}{u}+\frac{s}{2t}\right) -\frac{1}{N_C^3}\left( \frac{3s}{4t}+\frac{1}{4}\right) \rr\nonumber\\ && \left. +l(s)l(t)\left( N_C\left( \frac{t^2+u^2}{s^2}-\frac{u}{2t}\right) +\frac{1}{N_C}\left( \frac{u}{2s}-\frac{t}{u}\right) +\frac{1}{N_C^3}\left( \frac{s}{u}-\frac{u}{2t}\right) \rr \right] \nonumber \end{eqnarray} Finally, the process $gg\rightarrow gg$ gives \begin{eqnarray} V_{gg\rightarrow gg}(s,t,u) & = & \left[ N_C\left( -\frac{4}{\varepsilon^2}-\frac{22} {3\varepsilon} -\frac{67}{9}+\frac{11}{3}l(-\mu^2)+\frac{\pi^2}{3}\right) \right. \\ && \left. 
+\frac{N_f}{2}\left( \frac{8}{3\varepsilon}+\frac{20}{9}-\frac{4}{3} l(-\mu^2)\right) \right] T_{gg\rightarrow gg}(s,t,u) \nonumber \\ && +\frac{32N_C^4C_F}{\varepsilon}l(s)\left( 3-\frac{2tu}{s^2}+\frac{t^4+u^4} {t^2u^2}\right) \nonumber\\ && +\frac{32N_C^4C_F}{\varepsilon}l(t)\left( 3-\frac{2us}{t^2}+\frac{u^4+s^4} {u^2s^2}\right) \nonumber\\ && +\frac{32N_C^4C_F}{\varepsilon}l(u)\left( 3-\frac{2st}{u^2}+\frac{s^4+t^4} {s^2t^2}\right) \nonumber\\ && +8N_C^3C_F(f^d(s,t,u)+f^d(t,u,s)+f^d(u,s,t)), \nonumber \end{eqnarray} where, once more, not all IR singularities are proportional to the complete Born matrix element and the finite terms are given by the function \begin{eqnarray} f^d(s,t,u) & = & N_C\left[ \left( \frac{2(t^2+u^2)}{tu}\right) l^2(s) +\left( \frac{4s(t^3+u^3)}{t^2u^2}-6\right) l(t)l(u)\right. \\ && \left. +\left( \frac{4}{3}\frac{tu}{s^2}-\frac{14}{3}\frac{t^2+u^2}{tu}-14 -8\left( \frac{t^2}{u^2}+\frac{u^2}{t^2}\right) \rr l(s)-1-\pi^2\right] \nonumber\\ && +\frac{N_f}{2}\left[ \left( \frac{10}{3}\frac{t^2+u^2}{tu}+\frac{16}{3} \frac{tu}{s^2}-2\right) l(s)-\frac{s^2+tu}{tu}l^2(s)\right. \nonumber\\ && \left. -2\frac{t^2+u^2}{tu}l(t)l(u)+2-\pi^2\right] .\nonumber \end{eqnarray} All UV singularities have already been absorbed into the Lagrangian, and terms of ${\cal O} (\varepsilon)$ and higher have been neglected. As mentioned previously, the crossing of Mandelstam variables according to table \ref{tab3} may change the signs of the arguments in the logarithms. This is accounted for in the definition of \begin{equation} l(x) = \ln\left| \frac{x}{Q^2}\right| , \end{equation} where $x$ may be any of the variables $s$, $t$, or $u$ and is normalized to $Q^2=\max(s,t,u)$. Due to the $i\eta$-prescription in the propagators, terms bilinear in the logarithms may give an extra $\pi^2$: \begin{eqnarray} l^2(x) &=& \ln^2\left( \frac{x}{Q^2}\right) -\pi^2,~\mbox{if}~x>0, \\ l^2(x) &=& \ln^2\left( -\frac{x}{Q^2}\right) ,\hspace{0.65cm}\mbox{if}~x<0. \end{eqnarray} \subsubsection{Virtual Corrections for Direct $\gamma\gamma$ Scattering} The virtual diagrams contributing to direct $\gamma\gamma$ scattering are shown in figure \ref{kkkfig2}. \input{kkkfig2.tex} As has already been seen in the Born case, direct $\gamma\gamma\rightarrow q\bar{q}$ scattering is intimately related to $\gamma g\rightarrow q\bar{q}$ scattering. Again, we have to replace the strong coupling constant by its electromagnetic counterpart resulting in \begin{equation} |{\cal M}|^2_{\gamma\gamma\rightarrow 12} (s,t,u) = e^4e_q^4 \mu^{4\varepsilon} \frac{\alpha_s}{2\pi} \left( \frac{4\pi\mu^2}{s} \right) ^\varepsilon \frac{\Gamma(1-\varepsilon)}{\Gamma(1-2\varepsilon)} V_{\gamma \gamma \rightarrow 12}(s,t,u). \end{equation} The expression $V_{\gamma\gamma\rightarrow q\bar{q}}(s,t,u)$ is given by \begin{eqnarray} V_{\gamma\gamma\rightarrow q\bar{q}}(s,t,u) & = & C_F\left( -\frac{2}{\varepsilon^2}-\frac{3} {\varepsilon} +\frac{2\pi^2}{3}-7+\ln^2\frac{-t}{s}+\ln^2\frac{-u}{s}\right) T_{\gamma\gamma\rightarrow q\bar{q}}(s,t,u) \label{kkkeq2}\\ && +8N_CC_F\left( 2\ln\frac{-t}{s}+2\ln\frac{-u}{s}+3\frac{u}{t}\ln\frac{-t}{s}+3\frac{t}{u}\ln\frac{-u}{s} +\left( 2+\frac{u}{t} \right) \ln^2\frac{-u}{s}\right. \nonumber\\ && \left. +\left( 2+\frac{t}{u}\right) \ln^2\frac{-t}{s}\right) . \nonumber \end{eqnarray} Note that due to the abelian structure of QED, the color factor $N_C$, which is present in eq.~(\ref{eq29}) and arises there from the triple-gluon vertex, does not appear here.
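Before turning to the real corrections, the logarithm bookkeeping defined above can be summarized in a few lines of Python (a minimal sketch; the function and variable names are ours): \begin{verbatim}
import math

def l(x, Q2):
    # l(x) = ln|x/Q^2| with Q^2 = max(s,t,u)
    return math.log(abs(x)/Q2)

def l2(x, Q2):
    # l^2(x): the i*eta prescription yields an extra -pi^2 for x > 0
    if x > 0:
        return math.log(x/Q2)**2 - math.pi**2
    return math.log(-x/Q2)**2

s, t = 100.0, -30.0
u = -s - t
Q2 = max(s, t, u)
print(l(t, Q2), l2(s, Q2), l2(t, Q2))
\end{verbatim} In the real corrections below, the $\pi^2$ shift is absent, so there $l^2(x)=\ln^2|x/Q^2|$ for both signs of $x$ (see section 4.2.3).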
\subsection{Real Corrections} For the calculation of the hard scattering cross section in next-to-leading order, we must include all diagrams with an additional parton in the final state. The four-vectors of these subprocesses will be labeled by $p_ap_b\rightarrow p_1p_2p_3$, where $p_a$ is the momentum of the incoming photon or parton in the photon and $p_b$ is the momentum of the incoming parton in the proton. The invariants will be denoted by $s_{ij}= (p_i+p_j)^2$ and the previously defined Mandelstam variables $s$, $t$, and $u$. For massless partons, the $2 \rightarrow 3$ contributions can contain singularities at $s_{ij}=0$. They can be extracted with the dimensional regularization method and canceled against those associated with the one-loop contributions. In order to achieve this, we go through the following steps. First, we calculate the phase space for $2\rightarrow 3$ scattering in $d$ dimensions and factorize it into a regular part corresponding to $2\rightarrow 2$ scattering and a singular part corresponding to the unresolved two-parton subsystem. Next, the matrix elements for the $2 \rightarrow 3$ subprocesses are calculated in $d$ dimensions as well. They are squared and averaged/summed over initial/final state spins and colors and can be used for incoming quarks, antiquarks, and gluons with the help of crossing. One can distinguish three classes of singularities in photoproduction $(Q^2\simeq 0)$ depending on which propagators in the squared matrix elements vanish. Examples of these three classes are shown in figure \ref{fig17}. \input{fig17} The X marks the propagator leading to the divergence. In the first graph, it is the invariant $s_{23}$, formed from final state momenta, that causes the singularity. Therefore, this divergence will be called a final state singularity. The second graph becomes singular for $-t_{a1}=s_{a1}=(p_a+p_1)^2=0$, when the photon and final quark momenta are parallel. This is the class of photon initial state singularities. In the third graph, the singularity occurs at $-t_{b3}=s_{b3}=(p_b+p_3)^2=0$, where $p_b$ is the initial parton momentum in the proton. This is the class of proton initial state singularities. Since resolved photons behave like hadrons, they produce similar initial state divergencies. The first class is familiar from calculations for jet production in $e^+e^-$ annihilation \cite{Fab82}, the third class from jet production in deep inelastic $ep$ scattering ($Q^2\neq 0$) \cite{Gra90}. The second class occurs only for direct photoproduction \cite{Bod92}. When squaring the sum of all relevant $2\rightarrow 3$ matrix elements, we encounter terms where more than one invariant becomes singular, e.g.~when a gluon momentum $p_3 \rightarrow 0$ so that $s_{23}=0$ and $-t_{b3}=s_{b3}=0$. These infrared singularities are disentangled by a partial fractioning decomposition, so that every term has only one vanishing denominator. This also allows the separation of the different classes of singularities in figure \ref{fig17}. It turns out that the results for direct photoproduction are always proportional to the LO cross sections involved in the hard scattering \begin{equation} F,I,J_{\gamma b\rightarrow 123} = KT_{\gamma b\rightarrow 12}, \end{equation} where $F$, $I$, and $J$ denote final state, photon initial state and proton initial state contributions.
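To illustrate the partial fractioning step schematically (this reduction to the two invariants of the soft-gluon example above is ours), a doubly singular denominator is split according to \begin{equation} \frac{1}{s_{23}\,s_{b3}} = \frac{1}{s_{23}+s_{b3}} \left( \frac{1}{s_{23}}+\frac{1}{s_{b3}} \right) , \end{equation} so that each resulting term exhibits only one of the two poles explicitly and can be integrated with the phase space parametrization adapted to its own invariant. Note also that the factorization $F,I,J_{\gamma b\rightarrow 123} = KT_{\gamma b\rightarrow 12}$ just quoted is a property of the direct case.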
For resolved photoproduction, this is no longer true in general but only for the quark-quark scattering subprocesses with a less complicated color structure than those involving real gluons. For gluonic processes, only parts of the leading order cross sections can be factorized \begin{equation} F,I,J_{ab\rightarrow 123} = \sum_i K^{(i)}T^{(i)}_{ab\rightarrow 12}, \end{equation} where \begin{equation} T_{ab\rightarrow 12} = \sum_i T^{(i)}_{ab\rightarrow 12} \end{equation} is again the full Born matrix element. As the last step, the decomposed matrix elements have to be integrated up to $s_{ij} \leq y s$. $y$ characterizes the region where the two partons $i$ and $j$ cannot be resolved. Then, the singular kernels $K$ and $K^{(i)}$ produce terms $\propto 1/\varepsilon^2$ and $1/\varepsilon$, which will cancel against those in the virtual diagrams or be absorbed into structure functions, and finite corrections proportional to $\ln^2 y$, $\ln y$, and $y^0$. For small values of $y$, terms of ${\cal O} (y)$ can be neglected. In the following, we shall give the results for the different classes of singularities separately. \subsubsection{Phase Space for Three-Particle Final States} For the real corrections, we have to consider all subprocesses which have an additional third parton in the final state attached to the $2\rightarrow 2$ scattering process. We therefore calculate the phase space for $2\rightarrow 3$ scattering in this section and choose as the coordinate system the center-of-mass system of partons $1$ and $3$ as shown in figure \ref{fig16}. \input{fig16} The angle between $p_a$ and $p_1$ is called $\theta$, the azimuthal angle between the planes defined by $p_a$ and $p_1$ and $p_a$ and $p_2$ is called $\phi$, and the angle $\chi^\ast$ between $p_a$ and $p_2$ is defined in the overall center-of-mass system of the incoming partons $a$ and $b$, denoted by an asterisk $(^\ast)$. As we have to accommodate a third final particle in the scattering process, the Mandelstam variables in this section \begin{eqnarray} s &=& (p_a+p_b)^2, \\ t &=& (p_a-p_1-p_3)^2-2p_1p_3, \\ u &=& (p_a-p_2)^2 \end{eqnarray} differ slightly from the previously used ones, but still satisfy the relation $s+t+u=0$ for massless partons. In the limit of soft ($p_3 = 0$) or collinear ($p_3 \parallel p_1$) particle emission, they can, however, be written in a form similar to $2\rightarrow 2$ scattering as $t=(p_a-\overline{p}_1)^2$ and $u=(p_a-p_2)^2$. Here $\overline{p}_1=p_1+p_3$ represents the four-momentum of the recombined jet, and $p_2$ is the four-momentum of the second jet. We start again from the general expression \cite{Byc73} \begin{equation} \mbox{dPS}^{(3)} = \int (2\pi )^d \prod_{i=1}^{3} \frac{\mbox{d}^dp_i \delta (p_i^2)} {(2\pi )^{d-1}} \delta^d \left( p_a+p_b-\sum_{j=1}^3 p_j \right) \end{equation} and describe the final state singularity with the new variable \begin{equation} z' = \frac{p_1p_3}{p_ap_b}. \end{equation} We can then insert two additional $\delta$-functions with respect to $t$ and $z'$ \begin{equation} \frac{\mbox{dPS}^{(3)}}{\mbox{d}t\mbox{d}z'} = \int (2\pi )^d \prod_{i=1}^{3} \frac{\mbox{d}^dp_i \delta (p_i^2)} {(2\pi )^{d-1}} \delta^d \left( p_a+p_b-\sum_{j=1}^3 p_j \right) \delta (t+s+(p_a-p_2)^2)\delta \left( z'-\frac{2p_1p_3}{s}\right) , \end{equation} before we integrate over the $\delta(p_i^2)$ and the space-like components of the $d$-dimensional $\delta$-function to eliminate $p_3$.
The resulting expression is \begin{equation} \frac{\mbox{dPS}^{(3)}}{\mbox{d}t\mbox{d}z'} = \int \frac{\mbox{d}^{d-1}p_1\mbox{d}^{d-1}p_2}{(2\pi)^{2d-3}2E_12E_22E_3} \delta\left( E_a+E_b-\sum_{j=1}^3E_j\right) \delta(t+s+(p_a-p_2)^2)\delta \left( z' -\frac{2p_1p_3}{s}\right) . \end{equation} We now decompose $p_1$ into its energy $E_1$ and angular components $\theta$ and $\phi$ in the center-of-mass system of partons $1$ and $3$, and $p_2$ into its components $E_2^{\ast}$, $\chi^{\ast}$, and $\phi_2^{\ast}$ in the overall center-of-mass system with the result \begin{eqnarray} \frac{\mbox{dPS}^{(3)}}{\mbox{d}t\mbox{d}z'} &=& \int \frac{1}{(2\pi)^{2d-3}8E_3}\frac{2\pi^{\frac{d-3}{2}}}{\Gamma\left( \frac{d-3}{2} \right) }E_1^{d-3}\mbox{d}E_1\sin^{d-3}\theta\mbox{d}\theta\sin^{d-4}\phi \mbox{d}\phi\frac{\pi^{\frac{d-4}{2}}}{\Gamma\left( \frac{d-2}{2}\right) } E_2^{\ast^{d-3}}\mbox{d}E_2^\ast\\ && \sin^{d-3}\chi^\ast\mbox{d}\chi^\ast\mbox{d}\phi_2^\ast \delta\left( E_a+E_b-\sum_{j=1}^3E_j\right) \delta(t+s-2E_a^\ast E_2^\ast (1-\cos\chi^\ast)) \delta \left( z'-\frac{4E_1^2}{s}\right) .\nonumber \end{eqnarray} Integrating over the remaining $\delta$-functions and the trivial azimuthal angle $\phi_2^\ast$ up to $2\pi$, we arrive at \begin{equation} \frac{\mbox{dPS}^{(3)}}{\mbox{d}t\mbox{d}z'} = \int \frac{(16\pi^2)^\varepsilon}{128\pi^3\Gamma(2-2\varepsilon)}z'^{-\varepsilon} u^{-\varepsilon}(t+sz')^{-\varepsilon}(b(1-b))^{-\varepsilon} \frac{\mbox{d}b}{N_b}\sin^{-2\varepsilon}\phi \frac{\mbox{d}\phi}{N_\phi}, \end{equation} where we have substituted the polar angle $\theta$ by \begin{equation} b=\frac{1}{2}(1-\cos\theta), \end{equation} and $N_b$ and $N_\phi$ are the normalization factors \begin{eqnarray} N_b & = & \int_0^1 \mbox{d}b (b(1-b))^{-\varepsilon} = \frac{\Gamma ^2 (1-\varepsilon)}{\Gamma (2-2\varepsilon)}, \\ N_\phi & = &\int_0^\pi \sin^{-2\varepsilon} \phi \mbox{d}\phi = \frac{\pi 4^\varepsilon \Gamma (1-2\varepsilon)}{\Gamma^2(1-\varepsilon)}. \label{eq30} \end{eqnarray} Finally, we can factorize this three particle phase space into \begin{equation} \mbox{dPS}^{(3)} = \mbox{dPS}^{(2)} \mbox{dPS}^{(r)}, \end{equation} where \begin{equation} \frac{\mbox{dPS}^{(2)}}{\mbox{d}t} = \frac{1}{\Gamma(1-\varepsilon)} \left( \frac{4\pi s}{tu} \right) ^\varepsilon \frac{1}{8\pi s} \end{equation} is the phase space for the two observed jets with momenta $\overline{p}_1$ and $p_2$ already calculated in section 3.1 and \begin{equation} \mbox{dPS}^{(r)} = \left( \frac{4\pi}{s} \right) ^\varepsilon \frac{\Gamma (1-\varepsilon)} {\Gamma (1-2\varepsilon)} \frac{s}{16 \pi ^2} \frac{1}{1-2\varepsilon} \mbox{d}\mu_F \end{equation} is the phase space of the unresolved subsystem of partons $1$ and $3$. The integration measure is \begin{equation} \mbox{d}\mu_F = \mbox{d}z' z'^{-\varepsilon} \left( 1+\frac{z's}{t} \right) ^{-\varepsilon} \frac{\mbox{d}b}{N_b} b^{-\varepsilon} (1-b)^{-\varepsilon} \frac{\mbox{d}\phi}{N_\phi} \sin^{-2\varepsilon} \phi. \end{equation} The full range of integration in $\mbox{dPS}^{(r)}$ is given by $z' \in [0,-t/s]$, $b \in [0,1]$, and $\phi \in [0,\pi ]$. The singular region is defined by the requirement that partons $p_1$ and $p_3$ are recombined, i.e.~$p_3 = 0$ or $p_3$ parallel to $p_1$, so that $s_{13}=(p_1+p_3)^2=0$. We integrate over this region up to $s_{13} \leq y s$, which restricts the range of $z'$ to $0 \leq z' \leq \min\{ -t/s, y \} \equiv y_F$. 
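The $\ln y_F$ and $\ln^2 y_F$ terms announced above originate from the prototype integral over this singular region; its $\varepsilon$-expansion can be checked with a few lines of sympy (an illustration only, not the full integration): \begin{verbatim}
import sympy as sp

eps, yF = sp.symbols('epsilon y_F', positive=True)
# prototype slicing integral int_0^{y_F} dz' z'**(-1-eps) = -y_F**(-eps)/eps,
# defined for eps < 0 and continued dimensionally
I = -yF**(-eps)/eps
print(sp.series(I, eps, 0, 2))
# -1/epsilon + log(y_F) - epsilon*log(y_F)**2/2 + O(epsilon**2)
\end{verbatim} The single pole and the logarithms of the slicing parameter appear exactly as in the integrated results below.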
\subsubsection{Final State Corrections for Direct Photons} The real corrections to the QCD Compton Scattering process of figure \ref{fig1} arise from two different mechanisms. Either an additional gluon can be radiated from the quark or the gluon leading to the left diagram in figure \ref{fig7}, \input{fig7.tex} or a quark-antiquark pair is emitted by a gluon as shown in the right diagram of figure \ref{fig7}. Both cases lead to an extra factor of $\alpha_s$, when the matrix elements are squared, so that all real corrections discussed in the following are of ${\cal O} (\alpha\alpha_s^2)$. The outgoing quark-antiquark pair can be of the same or different flavor than the incoming quark. The diagrams for incoming anti-quarks or gluons can be obtained from figure \ref{fig7} by crossing a final quark or gluon line with the incoming quark line. The corresponding matrix elements can then be obtained from table \ref{tab4}. \begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|} \hline Process & Matrix Element $\overline{|{\cal M}|^2}$ \\ \hline \hline $\gamma q\rightarrow qgg$ & $[|{\cal M}|^2_{\gamma q\rightarrow qgg}(s,t,u)]/[4N_C]/2!$ \\ \hline $\gamma \bar{q}\rightarrow \bar{q}gg$ & $[|{\cal M}|^2_{\gamma q\rightarrow qgg}(s,t,u)]/[4N_C]/2!$ \\ \hline $\gamma q\rightarrow qq\bar{q}$ & $[|{\cal M}|^2_{\gamma q\rightarrow qq\bar{q}}(s,t,u)]/[4N_C]$ \\ \hline $\gamma g\rightarrow gq\bar{q}$ & $[|{\cal M}|^2_{\gamma g\rightarrow gq\bar{q}}(s,t,u)]/[8(1-\varepsilon)N_CC_F]$ \\ \hline \end{tabular} \end{center} \caption[Squared $2\rightarrow 3$ Matrix Elements for Direct Photoproduction] {\label{tab4}{\it Summary of $2\rightarrow 3$ squared matrix elements for direct photoproduction.}} \end{table} All possible topologies and orders of outgoing particles have to be considered for the full matrix elements of the processes, although they are not shown here explicitly. The complete result in $d$ dimensions can be found in \cite{Aur87} in a very compact notation not suitable to isolate the singularities. We therefore re-calculate the matrix elements with the help of REDUCE \cite{Hea85}, check their sums with the results in \cite{Aur87}, and keep only those that have singularities in a final state invariant $s_{ij}=(p_i+p_j)^2$. This can either be a soft or collinear gluon leading to a quadratic pole (left diagram of figure \ref{fig7}) or a collinear quark-antiquark pair leading only to a single pole (right diagram of figure \ref{fig7}). 
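The denominators appearing in tables \ref{tab2} and \ref{tab4} are simply the spin and color averages over the initial state, with $2(1-\varepsilon)$ gluon polarizations in $d$ dimensions; the additional $1/2!$ factors are final state symmetry factors. A small symbolic sketch (our notation) reproduces them: \begin{verbatim}
import sympy as sp

eps, NC = sp.symbols('epsilon N_C')
CF = (NC**2 - 1)/(2*NC)

photon = 2                         # real photon polarizations
quark = 2*NC                       # 2 spins times N_C colors
gluon = 2*(1 - eps)*(NC**2 - 1)    # 2(1-eps) polarizations, N_C^2-1 colors

print(sp.simplify(photon*quark))                      # 4*N_C
print(sp.simplify(photon*gluon - 8*(1 - eps)*NC*CF))  # 0
\end{verbatim}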
With the help of the singular invariant \begin{equation} z' = \frac{p_1p_3}{p_ap_b}, \end{equation} we can approximate the nine other invariants describing the $2\rightarrow 3$ scattering process and express them through the $2\rightarrow 2$ Mandelstam variables $s$, $t$, and $u$ and the variable $b=1/2(1-\cos\theta)$ as defined in section 4.2.1: \begin{eqnarray} p_ap_b &=& \frac{s}{2} \\ p_ap_1 &=& \frac{s}{2}\frac{-t}{s}b \\ p_ap_2 &=& \frac{s}{2}\frac{-u}{s} \\ p_ap_3 &=& \frac{s}{2}\frac{-t}{s}(1-b) \\ p_bp_1 &=& \frac{s}{2}\frac{-u}{s}b \\ p_bp_2 &=& \frac{s}{2}\frac{-t}{s} \\ p_bp_3 &=& \frac{s}{2}\frac{-u}{s}(1-b) \\ p_1p_2 &=& \frac{s}{2}b \\ p_1p_3 &=& \frac{s}{2}z' \\ p_2p_3 &=& \frac{s}{2}(1-b) \end{eqnarray} For the subprocess $\gamma q\rightarrow qgg$, where a gluon becomes soft or collinear to the other outgoing gluon or the quark, the approximated matrix element takes the form \begin{eqnarray} |{\cal M}|^2_{\gamma q\rightarrow qgg}(s,t,u) &=& e^2e_q^2g^4\mu^{6\varepsilon} \frac{1}{sz'}\left[ 4C_F\left( (1-b) (1-\varepsilon)-2+\frac{-2t/s}{z'-t/s(1-b)}\right) \right. \\ && \left. -4N_C\left( \frac{-t/s}{z'-t/s(1-b)}-\frac{-u/s}{z'-u/s(1-b)} -\frac{2}{z'+(1-b)}+2-b+b^2\right) \right] \nonumber \\ && T_{\gamma q\rightarrow gq}(s,t,u) \nonumber \end{eqnarray} with an identical result for incoming anti-quarks. The four-fermion diagram only produces a divergence from the collinear quark-antiquark pair, which can have $N_f$ flavors \begin{equation} |{\cal M}|^2_{\gamma q\rightarrow qq\bar{q}}(s,t,u) = e^2e_q^2g^4\mu^{6\varepsilon} \frac{1}{sz'}N_f [1-2b(1-b) (1+\varepsilon)]T_{\gamma q\rightarrow gq}(s,t,u), \end{equation} and the gluon-initiated process $\gamma g\rightarrow gq\bar{q}$ can have a gluon that becomes soft or collinear to the quark or anti-quark \begin{eqnarray} |{\cal M}|^2_{\gamma g\rightarrow gq\bar{q}}(s,t,u) &=& e^2e_q^2g^4\mu^{6\varepsilon} \frac{1}{sz'}\left[ 4C_F\left( (1-b) (1-\varepsilon)-2+\frac{2}{z'+(1-b)}\right) \right. \\ && \left. -2N_C\left( \frac{2}{z'+(1-b)}-\frac{-t/s}{z'-t/s(1-b)} -\frac{-u/s}{z'-u/s(1-b)}\right) \right] T_{\gamma g\rightarrow q\bar{q}}(s,t,u). \nonumber \end{eqnarray} Finally, we integrate the above matrix elements over the singular phase space dPS$^{(r)}$ given in section 4.2.1. The necessary integrals are given in appendix A. The integration over $\phi$ in dPS$^{(r)}$ is trivial, as the matrix elements do not depend on this variable. This is true for all direct and resolved photoproduction processes and for all final and initial state contributions, contrary to the case of deep-inelastic scattering. The result is \begin{equation} \int\mbox{dPS}^{(r)} |{\cal M}|^2_{\gamma b\rightarrow 123} (s,t,u) = e^2e_q^2g^2 \mu^{4\varepsilon} \frac{\alpha_s}{2\pi} \left( \frac{4\pi\mu^2}{s} \right) ^\varepsilon \frac{\Gamma(1-\varepsilon)}{\Gamma(1-2\varepsilon)} F_{\gamma b \rightarrow 123}(s,t,u). \end{equation} The functions $F_{\gamma b\rightarrow 123}$ are given by \begin{eqnarray} F_{\gamma q\rightarrow qgg} (s,t,u) &=& \left[ 2C_F\left( \frac{1}{\varepsilon^2}-\frac{1}{2\varepsilon}\left( 2\ln\frac{-t}{s}-3\right) \right. \rp\nonumber \\ && \left. -\frac{\pi^2}{3}+\frac{7}{2}-\frac{1}{2}\ln^2\frac{-t}{s}+2\ln y_F\ln\frac{-t}{s} -2\mbox{Li}_2\left( \frac{ y_F s}{t}\right) -\ln^2 y_F-\frac{3}{2}\ln y_F \right) \nonumber \\ && -N_C\left( -\frac{2}{\varepsilon^2}-\frac{1}{\varepsilon}\left( \frac{11}{3}+\ln\frac{-t}{s} -\ln\frac{-u}{s}\right) \right. 
\nonumber \\ && -\frac{1}{2}\ln^2\frac{-t}{s}+2\ln y_F\ln\frac{-t}{s}-2\mbox{Li}_2\left( \frac{y_Fs} {t}\right) -\frac{67}{9}+\frac{11}{3}\ln y_F+\ln^2 y_F\nonumber \\ && \left. \lp+\ln^2\frac{ y_F s}{-u}-\frac{1}{2}\ln^2 \frac{-u}{s}+ \frac{2\pi^2}{3}+ 2\mbox{Li}_2\left( \frac{ y_F s}{u} \right) \right) \right] T_{\gamma q\rightarrow gq} (s,t,u), \\ F_{\gamma q\rightarrow qq\bar{q}} (s,t,u) &=& N_f \left[ -\frac{1}{3\varepsilon}+\frac{1}{3}\ln y_F -\frac{5}{9}\right] T_{\gamma q\rightarrow gq} (s,t,u),\\ F_{\gamma g\rightarrow gq\bar{q}} (s,t,u) & = & \left[ C_F\left( \frac{2}{\varepsilon^2} +\frac{3}{\varepsilon}-\frac{2\pi^2}{3}+7-2\ln^2 y_F-3\ln y_F \right) \right. \nonumber \\ && -\frac{N_C}{2}\left( \frac{1}{\varepsilon}\left( \ln\frac{-t}{s}+\ln\frac{-u}{s}\right) +\ln^2\frac{ y_F s}{-t} +\ln^2\frac{ y_F s}{-u}-2\ln^2 y_F -\frac{1}{2}\ln^2\frac{-t}{s}\right. \nonumber \\ && \left. \lp-\frac{1}{2}\ln^2\frac{-u}{s}+2\mbox{Li}_2\left( \frac{y_Fs}{t}\right) +2\mbox{Li}_2\left( \frac{y_Fs}{u}\right) \rr\right] T_{\gamma g\rightarrow q\bar{q}} (s,t,u). \end{eqnarray} Terms of ${\cal O} (\varepsilon)$ and of ${\cal O} (y_F)$ have been neglected. The Spence functions or dilogarithms Li$_2$ resulting from the calculation are written down explicitly even though they are also of ${\cal O} (y_F)$. The complete approximated $2\rightarrow 3$ matrix elements are proportional to the corresponding Born matrix elements $T_{\gamma b\rightarrow 12}$. This is also true for the exact matrix elements in four dimensions as calculated by Berends et al.~\cite{Ber81}, but not for the exact $d$-dimensional matrix elements \cite{Aur87}. \subsubsection{Final State Corrections for Resolved Photons} For the four different generic Born diagrams of resolved photoproduction in figure \ref{fig2}, the real corrections all arise from an additional gluon in the final state as shown in figure \ref{fig8}. \input{fig8.tex} They are of ${\cal O} (\alpha_s^3)$. All other diagrams can be obtained from figure \ref{fig8} by crossing one or two outgoing gluon lines into the initial state, which leads to the gluon-initiated processes with quarks in the final state, or by crossing an incoming and an outgoing quark line, which leads to the processes with incoming anti-quarks. Process c) is symmetric under the interchange of the three gluons and under the interchange of the quark and anti-quark. Process d) is completely symmetric under the interchange of any of the five gluons. The complete list of matrix elements is written down in table \ref{tab5}.
\begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|} \hline Process & Matrix Element $\overline{|{\cal M}|^2}$ \\ \hline \hline $qq'\rightarrow qq'g$ & $[|{\cal M}|^2_{qq'\rightarrow qq'g}(s,t,u)]/[4N_C^2]$ \\ \hline $q\bar{q}'\rightarrow q\bar{q}'g$ & $[|{\cal M}|^2_{qq'\rightarrow qq'g}(u,t,s)]/[4N_C^2]$ \\ \hline $q\bar{q}\rightarrow \bar{q}'q'g$ & $[|{\cal M}|^2_{qq'\rightarrow qq'g}(t,s,u) +|{\cal M}|^2_{q\bar{q}\rightarrow q\bar{q}g}(s,t,u)]/[4N_C^2]$ \\ \hline $qg\rightarrow qq'\bar{q}'$ & $-[|{\cal M}|^2_{q\bar{q}\rightarrow q\bar{q}g}(t,s,u)]/[8(1-\varepsilon)N_C^2C_F]$ \\ \hline $qq\rightarrow qqg$ & $[|{\cal M}|^2_{qq'\rightarrow qq'g}(s,t,u)+|{\cal M}|^2_{qq'\rightarrow qq'g}(s,u,t)+ |{\cal M}|^2_{qq\rightarrow qqg}(s,t,u)]/[4N_C^2]/2!$ \\ \hline $q\bar{q}\rightarrow q\bar{q}g$ & $[|{\cal M}|^2_{qq'\rightarrow qq'g}(u,t,s)+|{\cal M}|^2_{qq'\rightarrow qq'g}(u,s,t)+ |{\cal M}|^2_{qq\rightarrow qqg}(u,t,s)+|{\cal M}|^2_{q\bar{q}\rightarrow q\bar{q}g}(s,t,u)] /[4N_C^2]$ \\ \hline $qg\rightarrow qq\bar{q}$ & $-[2|{\cal M}|^2_{q\bar{q}\rightarrow q\bar{q}g}(t,s,u)]/[8(1-\varepsilon)N_C^2C_F]/2!$ \\ \hline $q\bar{q}\rightarrow ggg$ & $[|{\cal M}|^2_{q\bar{q}\rightarrow ggg}(s,t,u)]/[4N_C^2]/3!$ \\ \hline $qg\rightarrow qgg$ & $[-|{\cal M}|^2_{q\bar{q}\rightarrow ggg}(t,s,u)/3+|{\cal M}|^2_{qg\rightarrow qgg}(s,t,u)] /[8(1-\varepsilon)N_C^2C_F]/2!$ \\ \hline $\bar{q}g\rightarrow\bar{q}gg$ & $[-|{\cal M}|^2_{q\bar{q}\rightarrow ggg}(t,u,s)/3+|{\cal M}|^2_{qg\rightarrow qgg}(u,t,s)] /[8(1-\varepsilon)N_C^2C_F]/2!$ \\ \hline $gg\rightarrow q\bar{q}g$ & $[-|{\cal M}|^2_{qg\rightarrow qgg}(t,s,u)+|{\cal M}|^2_{gg\rightarrow q\bar{q}g}(s,t,u)] /[16(1-\varepsilon)^2N_C^2C_F^2]$ \\ \hline $gg\rightarrow ggg$ & $[|{\cal M}|^2_{gg\rightarrow ggg}(s,t,u)]/[16(1-\varepsilon)^2N_C^2C_F^2]/ 3!$\\ \hline \end{tabular} \end{center} \caption[Final State $2\rightarrow 3$ Matrix Elements for Resolved Photoproduction] {\label{tab5}{\it Summary of $2\rightarrow 3$ squared matrix elements for resolved photoproduction.}} \end{table} In four dimensions, these matrix elements were first calculated by Gottschalk and Sivers \cite{Got80} and by Kunszt and Pietarinen \cite{Kun80}. The result in $d$ dimensions was given in a compact form by Ellis and Sexton \cite{Ell86}. We calculate the matrix elements again with the help of REDUCE \cite{Hea85}, check with the result in \cite{Ell86}, and keep only terms singular in the final state invariant $z'$. The result can be expressed through this variable $z'$, $b$, and the Mandelstam variables $s$, $t$, and $u$. For quark-quark scattering of different flavors in diagram \ref{fig8}a), we can have a soft or collinear gluon in the final state \begin{eqnarray} |{\cal M}|^2_{qq'\rightarrow qq'g}(s,t,u) &=& g^6\mu^{6\varepsilon} \frac{4}{sz'}\left[ (2C_F-N_C)\left( \frac{-2}{z'+(1-b)}+\frac{-t/s}{z'-t/s(1-b)} +\frac{-u/s}{z'-u/s(1-b)}\right) \right. \nonumber \\ && \left. +C_F\left( (1-b)(1-\varepsilon)-2+\frac{-2u/s}{z'-u/s(1-b)}\right) \right] T_{qq'\rightarrow qq'}(s,t,u). \end{eqnarray} For equal flavors, there is additionally the interference contribution \begin{eqnarray} |{\cal M}|^2_{qq\rightarrow qqg}(s,t,u) &=& g^6\mu^{6\varepsilon} \frac{4}{sz'}\left[ (2C_F-N_C)\left( \frac{-2}{z'+(1-b)}+\frac{-t/s}{z'-t/s(1-b)} +\frac{-u/s}{z'-u/s(1-b)}\right) \right. \nonumber \\ && \left. 
+C_F\left( (1-b)(1-\varepsilon)-2+\frac{2}{z'+(1-b)}\right) \right] T_{qq\rightarrow qq}(s,t,u) \end{eqnarray} as the final quark lines in diagram \ref{fig8}b) are undistinguishable and can be interchanged. The crossed diagram of quark-antiquark annihilation has an additional singularity, if the final quark-antiquark pair with $N_f$ possible flavors becomes collinear \begin{equation} |{\cal M}|^2_{q\bar{q}\rightarrow q\bar{q}g}(s,t,u) = g^6\mu^{6\varepsilon} \frac{1}{sz'}N_f\left[ 1-2b(1-b)(1+\varepsilon)\right] T_{q\bar{q}\rightarrow gg}(s,t,u). \end{equation} The complete matrix elements given above are proportional to the corresponding Born matrix elements $T_{ab\rightarrow 12}$. Quarks and anti-quarks can, of course, also annihilate into three final gluons as shown in figure \ref{fig8}c), where any pair of two gluons can produce a soft or collinear singularity: \begin{eqnarray} |{\cal M}|^2_{q\bar{q}\rightarrow ggg}(s,t,u) &=& g^6\mu^{6\varepsilon} \frac{12}{sz'}\left[ -N_C(b^2-b+2) T_{q\bar{q}\rightarrow gg}(s,t,u) \vphantom{\left( \frac{t^2}{tu}\right) }\right. \\ && +4N_CC_F \nonumber \\ && \left( \frac{-u/s}{z'-u/s(1-b)}(1-\varepsilon)\left( \frac{t^2+u^2 -\varepsilon s^2}{tu}\right) \left( 2N_CC_F-2N_C^2\frac{tu}{s^2}-N_C^2\frac{u^2} {s^2}\right) \right. \nonumber \\ && +\frac{1}{z'+(1-b)}(1-\varepsilon)\left( \frac{t^2+u^2 -\varepsilon s^2}{tu}\right) \left( N_C^2\frac{t^2+u^2}{s^2}\right) \nonumber \\ && \left. \lp+\frac{-t/s}{z'-t/s(1-b)}(1-\varepsilon)\left( \frac{t^2+u^2 -\varepsilon s^2}{tu}\right) \left( 2N_CC_F-2N_C^2\frac{tu}{s^2}-N_C^2\frac{t^2} {s^2}\right) \rr\right] . \nonumber \end{eqnarray} Here, only parts of the Born matrix element $T_{q\bar{q}\rightarrow gg}$ can be factorized. The process $qg\rightarrow qgg$ can be obtained by crossing a final gluon with the incoming anti-quark or $(s\leftrightarrow t)$. In this case, either of the outgoing gluons can also be radiated from the outgoing quark leading to \begin{eqnarray} |{\cal M}|^2_{qg\rightarrow qgg}(s,t,u) &=& g^6\mu^{6\varepsilon} \frac{4}{sz'}\left[ C_F\left( (1-b)(1-\varepsilon)-2\right) T_{qg \rightarrow qg}(s,t,u) \vphantom{\left( \frac{t^2}{tu}\right) }\right. \\ && -4N_CC_F\nonumber \\ && \left( \frac{-t/s}{z'-t/s(1-b)}(1-\varepsilon)\left( \frac{s^2+u^2 -\varepsilon t^2}{su}\right) \left( \lr 2N_CC_F-2N_C^2\frac{su}{t^2}\right) \left( -2 +2\frac{C_F}{N_C}\right) \right. \rp\nonumber \\ && \left. +N_C^2\frac{s^2+u^2}{t^2}\right) \nonumber \\ && +\frac{-u/s}{z'-u/s(1-b)}(1-\varepsilon)\left( \frac{s^2+u^2-\varepsilon t^2} {su}\right) \left( 2N_CC_F-2N_C^2\frac{su}{t^2}-N_C^2\frac{u^2}{t^2}\right) \nonumber \\ && \left. \lp+\frac{1}{z'+(1-b)}(1-\varepsilon)\left( \frac{s^2+u^2-\varepsilon t^2} {su}\right) \left( 2N_CC_F-2N_C^2\frac{su}{t^2}-N_C^2\frac{s^2}{t^2}\right) \rr\right] , \nonumber \end{eqnarray} where again only parts of the Born matrix element $T_{qg \rightarrow qg}$ can be factorized. If also the incoming quark is crossed into the final state, the resulting quark-antiquark pair in the final state with $N_f$ flavors can produce the collinear singularity \begin{equation} |{\cal M}|^2_{gg\rightarrow q\bar{q}g}(s,t,u) = g^6\mu^{6\varepsilon} \frac{1}{sz'}N_f\left[ 1-2b(1-b)(1+\varepsilon)\right] T_{gg\rightarrow gg}(s,t,u) \end{equation} proportional to the full Born matrix element. 
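The way in which the full Born structure is recovered can be checked explicitly for $|{\cal M}|^2_{q\bar{q}\rightarrow ggg}$ above: at $\varepsilon=0$ the three color coefficients multiplying the soft/collinear kernels sum to $T_{q\bar{q}\rightarrow gg}/(2C_F)$, i.e.~only the sum over the unresolved pairs is proportional to the complete Born matrix element. A minimal sympy verification of this identity (symbol names are ours): \begin{verbatim}
import sympy as sp

s, t, u, NC, CF = sp.symbols('s t u N_C C_F')
X = (t**2 + u**2)/(t*u)          # common kernel factor at eps = 0

c_u = X*(2*NC*CF - 2*NC**2*t*u/s**2 - NC**2*u**2/s**2)
c_s = X*(NC**2*(t**2 + u**2)/s**2)
c_t = X*(2*NC*CF - 2*NC**2*t*u/s**2 - NC**2*t**2/s**2)

T_Born = 4*CF*(2*NC*CF/(u*t) - 2*NC**2/s**2)*(t**2 + u**2)   # eps = 0
print(sp.simplify(c_u + c_s + c_t - T_Born/(2*CF)))          # -> 0
\end{verbatim}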
Finally, the five-gluon process in figure \ref{fig8}d) can have three different pairs of soft or collinear gluons in the final state with the result \begin{eqnarray} |{\cal M}|^2_{gg\rightarrow ggg}(s,t,u) &=& g^6\mu^{6\varepsilon} \frac{12}{sz'}\left[ -N_C(b^2-b+2) T_{gg\rightarrow gg}(s,t,u) \right. \nonumber \\ && +16N_C^4C_F\nonumber \\ && \left( \frac{-t/s}{z'-t/s(1-b)}(1-\varepsilon)^2\left( 3-\frac{2su} {t^2}+\frac{s^4+u^4}{s^2u^2}\right) \right. \nonumber \\ && +\frac{-u/s}{z'-u/s(1-b)}(1-\varepsilon)^2\left( 3-\frac{2st} {u^2}+\frac{s^4+t^4}{s^2t^2}\right) \nonumber \\ && \left. \lp+\frac{1}{z'+(1-b)}(1-\varepsilon)^2\left( 3-\frac{2tu} {s^2}+\frac{t^4+u^4}{t^2u^2}\right) \rr\right] . \end{eqnarray} Only parts of the leading-order matrix element $T_{gg\rightarrow gg}$ can be factorized. Next, the approximated matrix elements have to be integrated over the final state singularity $z'$ in phase space up to the invariant mass cut $y_F$. Terms of order ${\cal O} (y_F)$ can be neglected if $y_F$ is chosen sufficiently small. We use the phase space for the unobserved subsystem dPS$^{(r)}$ from section 4.2.1 and find \begin{equation} \int\mbox{dPS}^{(r)} |{\cal M}|^2_{ab\rightarrow 123} (s,t,u) = g^4 \mu^{4\varepsilon} \frac{\alpha_s}{2\pi} \left( \frac{4\pi\mu^2}{Q^2} \right) ^\varepsilon \frac{\Gamma(1-\varepsilon)}{\Gamma(1-2\varepsilon)} F_{ab \rightarrow 123}(s,t,u) \end{equation} similar to the direct case. The functions $F_{ab\rightarrow 123}$ are given by \begin{eqnarray} F_{qq'\rightarrow qq'g}(s,t,u) & = & \left[ 2(2C_F-N_C)\left( -\frac{1}{2\varepsilon} (-2l(s)+l(t)+l(u))-\frac{1}{2}\left( -2l^2\left( \frac{s}{y_F}\right) \right. \rp\right. \nonumber \\ && \left. +l^2\left( \frac{t}{y_F}\right) +l^2\left( \frac{u}{y_F}\right) \rr+\frac{1}{4}(-2l^2(s)+l^2(t)+l^2(u)) +2\mbox{Li}_2\left( -\left| \frac{y_FQ^2}{s}\right| \right) \nonumber \\ && \left. -\mbox{Li}_2\left( -\left| \frac{y_FQ^2}{t}\right| \right) -\mbox{Li}_2\left( -\left| \frac{y_FQ^2}{u}\right| \right) \rr \nonumber \\ && +2C_F\left( \frac{1}{\varepsilon^2}+\frac{1}{2\varepsilon}(3-2l(u)) +\frac{1}{2}l^2(u)+\frac{7}{2}-l^2\left( \frac{u}{y_F}\right) -\frac{3}{2}\ln y_F\right. \nonumber \\ && \left. \lp-\frac{\pi^2}{3}-2\mbox{Li}_2\left( -\left| \frac{y_FQ^2}{u}\right| \right) \rr\right] T_{qq'\rightarrow qq'}(s,t,u), \\ F_{qq\rightarrow qqg}(s,t,u) & = & \left[ 2(2C_F-N_C)\left( -\frac{1}{2\varepsilon} (-2l(s)+l(t)+l(u))-\frac{1}{2}\left( -2l^2\left( \frac{s}{y_F}\right) \right. \rp\right. \nonumber \\ && \left. +l^2\left( \frac{t}{y_F}\right) +l^2\left( \frac{u}{y_F}\right) \rr+\frac{1}{4}(-2l^2(s)+l^2(t)+l^2(u)) +2\mbox{Li}_2\left( -\left| \frac{y_FQ^2}{s}\right| \right) \nonumber \\ && \left. -\mbox{Li}_2\left( -\left| \frac{y_FQ^2}{t}\right| \right) -\mbox{Li}_2\left( -\left| \frac{y_FQ^2}{u}\right| \right) \rr \nonumber \\ && +2C_F\left( \frac{1}{\varepsilon^2}+\frac{1}{2\varepsilon}(3-2l(s)) +\frac{1}{2}l^2(s)+\frac{7}{2}-l^2\left( \frac{s}{y_F}\right) -\frac{3}{2}\ln y_F\right. \nonumber \\ && \left. 
\lp-\frac{\pi^2}{3}-2\mbox{Li}_2\left( -\left| \frac{y_FQ^2}{s}\right| \right) \rr\right] T_{qq\rightarrow qq}(s,t,u), \\ F_{q\bar{q}\rightarrow q\bar{q}g}(s,t,u) & = & N_f\left[ -\frac{1}{3\varepsilon}+\frac{1}{3}\ln y_F -\frac{5}{9}\right] T_{q\bar{q}\rightarrow gg}(s,t,u),\\ F_{q\bar{q}\rightarrow ggg}(s,t,u) & = & \left[ 3N_C\left( \frac{2}{\varepsilon^2}+\frac{11} {3\varepsilon} -\frac{11}{3}\ln y_F+\frac{67}{9}-\frac{2\pi^2}{3}\right) \right] T_{q\bar{q}\rightarrow gg}(s,t,u)\nonumber \\ && +6N_CC_F\left[ \left( -\frac{2}{\varepsilon}l(u)+2l(u)-2l^2\left( \frac{u}{y_F}\right) +l^2(u)-4\mbox{Li}_2\left( -\left| \frac{y_FQ^2}{u}\right| \right) \rr\right. \nonumber \\ && \left( N_C^2\left( \frac{t}{u}-\frac{2t^2}{s^2}\right) -\frac{u}{t}-\frac{t}{u}\right) \nonumber \\ && +\left( -\frac{2}{\varepsilon}l(s)+2l(s)-2l^2\left( \frac{s}{y_F}\right) +l^2(s)-4\mbox{Li}_2\left( -\left| \frac{y_FQ^2}{s}\right| \right) \rr\nonumber \\ && \left( N_C^2\frac{(t^2+u^2)^2}{uts^2}\right) \nonumber \\ && +\left( -\frac{2}{\varepsilon}l(t)+2l(t)-2l^2\left( \frac{t}{y_F}\right) +l^2(t)-4\mbox{Li}_2\left( -\left| \frac{y_FQ^2}{t}\right| \right) \rr\nonumber \\ && \left( N_C^2\left( \frac{u}{t}-\frac{2u^2}{s^2}\right) -\frac{u}{t}-\frac{t}{u}\right) \nonumber \\ && +\left( 2l(u)\left( N_C^2\frac{t}{u}-\frac{s^2}{tu}\right) +2l(s)N_C^2\left( \frac{t}{u}+\frac{u}{t}\right) \right. \nonumber \\ && \left. \lp+2l(t)\left( N_C^2\frac{u}{t}-\frac{s^2}{tu}\right) \rr\right] , \\ F_{qg\rightarrow qgg}(s,t,u) & = & \left[ C_F\left( \frac{1}{\varepsilon^2}+\frac{3} {2\varepsilon} -\frac{3}{2}\ln y_F+\frac{7}{2}-\frac{\pi^2}{3}\right) \right] T_{qg \rightarrow qg}(s,t,u)\nonumber \\ && -N_CC_F\left[ \left( -\frac{2}{\varepsilon}l(u)+2l(u)-2l^2\left( \frac{u}{y_F}\right) +l^2(u)-4\mbox{Li}_2\left( -\left| \frac{y_FQ^2}{u}\right| \right) \rr\right. \nonumber \\ && \left( N_C^2\left( \frac{s}{u}-\frac{2s^2}{t^2}\right) -\frac{u}{s}-\frac{s}{u}\right) \nonumber \\ && +\left( -\frac{2}{\varepsilon}l(s)+2l(s)-2l^2\left( \frac{s}{y_F}\right) +l^2(s)-4\mbox{Li}_2\left( -\left| \frac{y_FQ^2}{s}\right| \right) \rr\nonumber \\ && \left( N_C^2\left( \frac{u}{s}-\frac{2u^2}{t^2}\right) -\frac{u}{s}-\frac{s}{u}\right) \nonumber \\ && +\left( -\frac{2}{\varepsilon}l(t)+2l(t)-2l^2\left( \frac{t}{y_F}\right) +l^2(t)-4\mbox{Li}_2\left( -\left| \frac{y_FQ^2}{t}\right| \right) \rr\nonumber \\ && \left( 2\frac{s^2+u^2}{t^2}+\frac{1}{N_C^2}\left( \frac{u}{s}+\frac{s}{u}\right) \rr\nonumber \\ && +\left( 2l(u)\left( N_C^2\frac{s}{u}-\frac{t^2}{su}\right) +2l(s)\left( N_C^2\frac{u}{s}-\frac{t^2} {su}\right) \right. \nonumber \\ && \left. \lp+2l(t)\left( 2+\frac{1}{N_C^2}\frac{t^2}{su}\right) \rr\right] , \\ F_{gg\rightarrow q\bar{q}g}(s,t,u) & = & N_f\left[ -\frac{1}{3\varepsilon}+\frac{1}{3}\ln y_F -\frac{5}{9}\right] T_{gg\rightarrow gg}(s,t,u),\\ F_{gg\rightarrow ggg}(s,t,u) & = & \left[ 3N_C\left( \frac{2}{\varepsilon^2}+\frac{11} {3\varepsilon} -\frac{11}{3}\ln y_F+\frac{67}{9}-\frac{2\pi^2}{3}\right) \right] T_{gg\rightarrow gg}(s,t,u)\nonumber \\ && +48N_C^4C_F\left[ \left( -\frac{1}{\varepsilon}l(t)+2l(t)-l^2\left( \frac{t}{y_F}\right) +\frac{1}{2}l^2(t)-2\mbox{Li}_2\left( -\left| \frac{y_FQ^2}{t}\right| \right) \rr\right. 
\nonumber \\ && \left( 3-\frac{2us}{t^2}+\frac{u^4+s^4}{u^2s^2}\right) \nonumber \\ && +\left( -\frac{1}{\varepsilon}l(u)+2l(u)-l^2\left( \frac{u}{y_F}\right) +\frac{1}{2}l^2(u)-2\mbox{Li}_2\left( -\left| \frac{y_FQ^2}{u}\right| \right) \rr\nonumber \\ && \left( 3-\frac{2ts}{u^2}+\frac{t^4+s^4}{t^2s^2}\right) \nonumber \\ && +\left( -\frac{1}{\varepsilon}l(s)+2l(s)-l^2\left( \frac{s}{y_F}\right) +\frac{1}{2}l^2(s)-2\mbox{Li}_2\left( -\left| \frac{y_FQ^2}{s}\right| \right) \rr\nonumber \\ && \left. \left( 3-\frac{2tu}{s^2}+\frac{t^4+u^4}{t^2u^2}\right) \right] . \end{eqnarray} All terms of ${\cal O} (\varepsilon)$ have been omitted since they do not contribute in the physical limit $d\rightarrow 4$. As in the virtual corrections, we account for sign-changing arguments in the logarithms before and after crossing with the definition of \begin{equation} l(x) = \ln\left| \frac{x}{Q^2}\right| , \end{equation} where $x$ is any of the Mandelstam variables $s$, $t$, and $u$ and $Q^2$ is an arbitrary scale chosen to be $Q^2=\max(s,t,u)$. However, there are no additional terms of $\pi^2$ in the real corrections so that \begin{equation} l^2(x) = \ln^2\left| \frac{x}{Q^2}\right| \end{equation} for $x>0$ as well as for $x<0$. \subsubsection{Phase Space for Three-Particle Final States Revisited} Having calculated all real corrections coming from final state singularities, where the singular variable was $s_{13}=(p_1+p_3)^2$, we now turn to the singularities that arise in the photon and proton initial states in the variables $-t_{a3}=s_{a3}=(p_a+p_3)^2$ and $-t_{b3}=s_{b3}=(p_b+p_3)^2$. The singular variables are then defined as \begin{equation} z'' = \frac{p_ap_3}{p_ap_b}~\mbox{and}~z''' = \frac{p_bp_3}{p_ap_b}, \end{equation} respectively. It is convenient to calculate the three-body phase space in the same center-of-mass system of the two final state particles {\em now called} $1$ {\em and} $2$ as in the case of final state corrections (see figure \ref{fig16}), where now $p_3$ is defined in the overall center-of-mass system of partons $a$ and $b$. However, we will not choose the angle $\theta$ between the hard jet and the incoming photon or the related variable $b=1/2(1-\cos \theta)$ as the second independent variable. Instead it is more convenient to choose the fraction of the center-of-mass energy going into the hard subprocess $ab\rightarrow 12$ \begin{equation} z_a = \frac{p_1p_2}{p_ap_b} \in [X_a,1]~\mbox{and}~ z_b = \frac{p_1p_2}{p_ap_b} \in [X_b,1], \end{equation} respectively. In this way, the variables $z_a$ and $z_b$ describe the momentum of the third unobserved particle $p_3=(1-z_a)p_a$ and $p_3=(1-z_b)p_b$. They are bounded from below through the fractions of the initial electron and proton energies transferred to the partons with momenta $z_ap_a$ and $z_bp_b$ in the hard scattering \begin{equation} X_a = \frac{p_1p_2}{kp_b}~\mbox{and}~ X_b = \frac{p_1p_2}{p_ap}. \end{equation} As the three-particle phase space can be calculated for photon and proton initial state singularities in exactly the same manner, we only consider the case of the photon initial state in the following. The Mandelstam variables differ slightly from those used in $2\rightarrow 2$ scattering and final state corrections in order to accommodate the third unobserved particle radiated from the initial state \begin{eqnarray} s &=& (z_ap_a+p_b)^2 = (p_1+p_2)^2, \\ t &=& (z_ap_a-p_1)^2, \\ u &=& (z_ap_a-p_2)^2. 
\end{eqnarray} In the limit of soft ($p_3 = 0$) or collinear ($p_3 \parallel p_a$) particle emission, they satisfy the relation $s+t+u=sz''\rightarrow 0$ for massless partons. The calculation proceeds as follows: We insert three additional $\delta$-functions with respect to $t$, $z''$, and $z_a$ into the general expression \begin{equation} \mbox{dPS}^{(3)} = \int (2\pi )^d \prod_{i=1}^{3} \frac{\mbox{d}^dp_i \delta (p_i^2)} {(2\pi )^{d-1}} \delta^d \left( p_a+p_b-\sum_{j=1}^3 p_j \right) \end{equation} giving \begin{eqnarray} \frac{\mbox{dPS}^{(3)}}{\mbox{d}t\mbox{d}z''\mbox{d}z_a} &=& \int (2\pi )^d \prod_{i=1}^{3} \frac{\mbox{d}^dp_i \delta (p_i^2)} {(2\pi )^{d-1}} \delta^d \left( p_a+p_b-\sum_{j=1}^3 p_j \right)\nonumber \\ && \delta (t-(z_ap_a-p_1)^2)\delta\left( z''-\frac{p_ap_3}{p_ap_b}\right) \delta\left( z_a-\frac{p_1p_2}{p_ap_b}\right) . \end{eqnarray} Next, we integrate over the $\delta(p_i^2)$ and the space-like components of the $d$-dimensional $\delta$-function to eliminate $p_2$. In the resulting expression \begin{eqnarray} \frac{\mbox{dPS}^{(3)}}{\mbox{d}t\mbox{d}z''\mbox{d}z_a} &=& \int \frac{\mbox{d}^{d-1}p_1\mbox{d}^{d-1}p_3}{(2\pi)^{2d-3}2E_12E_22E_3} \delta\left( E_a+E_b-\sum_{j=1}^3E_j\right) \nonumber \\ && \delta(t-(z_ap_a-p_1)^2)\delta \left( z'' -\frac{p_ap_3}{p_ap_b}\right) \delta \left( z_a-\frac{p_1p_2}{p_ap_b}\right) , \end{eqnarray} we now decompose $p_1$ into its energy and angular components in the center-of-mass system of partons $1$ and $2$ and $p_3$ in the overall center-of-mass system \begin{eqnarray} \frac{\mbox{dPS}^{(3)}}{\mbox{d}t\mbox{d}z''\mbox{d}z_a} &=& \int \frac{1}{(2\pi)^{2d-3}8E_2}\frac{2\pi^{\frac{d-3}{2}}}{\Gamma\left( \frac{d-3}{2} \right) }E_1^{d-3}\mbox{d}E_1\sin^{d-3}\theta\mbox{d}\theta\sin^{d-4}\phi \mbox{d}\phi\frac{\pi^{\frac{d-4}{2}}}{\Gamma\left( \frac{d-2}{2}\right) } E_3^{\ast^{d-3}}\mbox{d}E_3^\ast\nonumber \\ && \sin^{d-3}\chi^\ast\mbox{d}\chi^\ast\mbox{d}\phi_3^\ast \delta\left( E_a+E_b-\sum_{j=1}^3E_j\right) \delta(t+2z_aE_aE_1 (1-\cos\theta))\nonumber \\ && \delta \left( z''-\frac{2z_aE_a^\ast E_3^\ast}{s}(1-\cos\chi^\ast)\right) \delta \left( z_a-\frac{4z_aE_1^2}{s}\right) . \end{eqnarray} Integrating over the remaining $\delta$-functions and the trivial azimuthal angle $\phi_2^\ast$ up to $2\pi$, we arrive at \begin{equation} \frac{\mbox{dPS}^{(3)}}{\mbox{d}t\mbox{d}z''\mbox{d}z_a} = \int \frac{(16\pi^2)^\varepsilon}{128\pi^3\Gamma^2(1-\varepsilon)} z''^{-\varepsilon}(ut)^{-\varepsilon}(1-z'')^{-1+2\varepsilon}(1-z_a-z'')^ {-\varepsilon}z_a^{-1+\varepsilon}\sin^{-2\varepsilon}\phi\frac{\mbox{d}\phi} {N_\phi}, \end{equation} where $N_\phi$ is the normalization factor given in eq.~(\ref{eq30}). Finally, we can factorize this three particle phase space into \begin{equation} \mbox{dPS}^{(3)} = \mbox{dPS}^{(2)} \mbox{dPS}^{(r)}, \end{equation} where dPS$^{(2)}$ is the usual phase space for the two observed jets $1$ and $2$ from section 3.1 and \begin{equation} \mbox{dPS}^{(r)} = \left( \frac{4\pi}{s} \right) ^\varepsilon \frac{\Gamma (1-\varepsilon)} {\Gamma (1-2\varepsilon)} \frac{s}{16 \pi ^2}H(z'') \mbox{d}\mu_I \end{equation} is the phase space of the unresolved subsystem of partons $a$ and $3$. 
The integration measure is \begin{equation} \mbox{d}\mu_I = \mbox{d}z'' z''^{-\varepsilon} \frac{\mbox{d}z_a}{z_a}\left( \frac{z_a}{1-z_a}\right) ^\varepsilon \frac{\mbox{d}\phi}{N_\phi} \sin^{-2\varepsilon} \phi \frac{\Gamma(1-2\varepsilon)}{\Gamma^2(1-\varepsilon)}, \end{equation} and the function \begin{equation} H(z'') = (1-z'')^{-1+2\varepsilon}\left( 1-\frac{z''}{1-z_a}\right) ^{-\varepsilon} = 1+{\cal O} (z'') \end{equation} can be approximated by 1 as it leads only to negligible terms of ${\cal O} (y)$. The integration of $\mbox{dPS}^{(r)}$ over $z''\in [0,-u/s]$, $z_a\in[X_a,1]$, and $\phi\in [0,\pi ]$ is restricted to the singular region of $z''$, when partons $p_a$ and $p_3$ are recombined, such that $0 \leq z'' \leq \min\{ -u/s, y \} \equiv y_I$. \subsubsection{Photon Initial State Corrections for Direct Photons} Singularities from the initial direct photon arise from diagrams of type b) in figure \ref{fig17}. The photon here splits up into a quark-antiquark pair, and one of the two becomes a part of the photon remnant. If the (anti-)quark is collinear to the incoming photon, a simple $1/\varepsilon$ pole is produced. Quadratic $1/\varepsilon^2$ poles corresponding to soft {\em and} collinear singularities do not exist, since there is no direct coupling to a gluon line. In this specific case, it is therefore not necessary to perform a partial fractioning decomposition. The matrix elements for the $2\rightarrow 3$ processes $\gamma q\rightarrow qgg$ and $\gamma q\rightarrow qq\bar{q}$ are computed from the generic diagrams in figure \ref{fig7} as in the case of final state corrections in section 4.2.2. Those for the processes with incoming anti-quarks and gluons are obtained by simple crossing of the diagrams or Mandelstam variables according to table \ref{tab4}. We keep only terms that are singular in the variable \begin{equation} z'' = \frac{p_ap_3}{p_ap_b} \end{equation} and express the ten invariants for $2\rightarrow 3$ scattering through $z''$, the longitudinal momentum fraction $z_a$ transferred from the photon to the parton entering the hard subprocess, and the variables $s$, $t$, and $u$ defined in section 4.2.4: \begin{eqnarray} p_ap_b &=& \frac{s}{2z_a} \label{eq37}\\ p_ap_1 &=& \frac{s}{2z_a}\frac{-t}{s} \\ p_ap_2 &=& \frac{s}{2z_a}\frac{-u}{s} \\ p_ap_3 &=& \frac{s}{2z_a} z'' \\ p_bp_1 &=& \frac{s}{2z_a} \frac{-u}{s}z_a \\ p_bp_2 &=& \frac{s}{2z_a} \frac{-t}{s}z_a \\ p_bp_3 &=& \frac{s}{2z_a} (1-z_a) \\ p_1p_2 &=& \frac{s}{2z_a} z_a \\ p_1p_3 &=& \frac{s}{2z_a} \frac{-t}{s}(1-z_a) \\ p_2p_3 &=& \frac{s}{2z_a} \frac{-u}{s}(1-z_a) \label{eq38} \end{eqnarray} Under these approximations, a singular kernel can be factorized out, which is universal for all processes of the type $\gamma b\rightarrow 123$ and describes the splitting of a boson (in this case the photon) into two fermions (i.e.~the quark-antiquark pair). Consequently, a parton from the photon scatters now off the parton in the proton and the relevant parton-parton Born matrix elements show up. In the case of \begin{eqnarray} |{\cal M}|^2_{\gamma q\rightarrow qgg}(s,t,u) &=& e^2e_q^2g^4\mu^{6\varepsilon} \frac{1}{sz''}\left[ z_a^2+(1-z_a)^2-\varepsilon\right] T_{q\bar{q}\rightarrow gg}(s,t,u), \label{eq39} \end{eqnarray} it is the quark-antiquark annihilation process into two gluons. 
For \begin{eqnarray} |{\cal M}|^2_{\gamma q\rightarrow qq\bar{q}}(s,t,u) &=& e^2e_q^2g^4\mu^{6\varepsilon} \frac{1}{sz''}\left[ z_a^2+(1-z_a)^2-\varepsilon\right] T_{q\bar{q}\rightarrow q\bar{q}}(s,t,u), \end{eqnarray} it is the process $q\bar{q}\rightarrow q\bar{q}$. The result for \begin{eqnarray} |{\cal M}|^2_{\gamma g\rightarrow gq\bar{q}}(s,t,u) &=& e^2e_q^2g^4\mu^{6\varepsilon} \frac{1}{sz''}\left[ z_a^2+(1-z_a)^2-\varepsilon\right] T_{qg \rightarrow qg}(s,t,u) \label{eq40} \end{eqnarray} is easily obtained by crossing $(s\leftrightarrow t)$ in the first process and multiplying by $(-1)$. As the matrix elements do not depend on the variable $\phi$, the integration over $\phi$ in dPS$^{(r)}$ is as trivial as before. In addition, the integration over the simple pole in $z''$ can also be carried out easily. However, the integration over the momentum fraction $z_a$ involves a convolution with the photon spectrum in the electron. Since these exist only in a parametrized form too complicated to integrate, the $z_a$-integration has to be done numerically and is written down explicitly in the final result \begin{equation} \int\mbox{dPS}^{(r)} |{\cal M}|^2_{\gamma b\rightarrow 123} (s,t,u) = \int\limits_{X_a}^1\frac{\mbox{d}z_a}{z_a} e^2g^2 \mu^{4\varepsilon} \frac{\alpha_s}{2\pi} \left( \frac{4\pi\mu^2}{s} \right) ^\varepsilon \frac{\Gamma(1-\varepsilon)}{\Gamma(1-2\varepsilon)} I_{\gamma b \rightarrow 123}(s,t,u). \label{eq31} \end{equation} The functions $I_{\gamma b \rightarrow 123}$ are given by \begin{eqnarray} I_{\gamma q\rightarrow qgg} (s,t,u) &=& \left[ -\frac{1}{\varepsilon}\frac{1}{2N_C}P_{q\leftarrow \gamma}(z_a) +\frac{1}{2N_C}P_{q\leftarrow \gamma}(z_a)\ln\left( y_I\frac{1-z_a}{z_a}\right) +\frac{e_q^2}{2}\right] T_{q\bar{q}\rightarrow gg} (s,t,u),\hspace{7mm} \\ I_{\gamma q\rightarrow qq\bar{q}} (s,t,u) &=& \left[ -\frac{1}{\varepsilon}\frac{1}{2N_C}P_{q\leftarrow \gamma}(z_a) +\frac{1}{2N_C}P_{q\leftarrow \gamma}(z_a)\ln\left( y_I\frac{1-z_a}{z_a}\right) +\frac{e_q^2}{2}\right] T_{q\bar{q}\rightarrow q\bar{q}} (s,t,u),\hspace{7mm} \\ I_{\gamma g\rightarrow gq\bar{q}} (s,t,u) &=& \left[ -\frac{1}{\varepsilon}\frac{1}{2N_C}P_{q\leftarrow \gamma}(z_a) +\frac{1}{2N_C}P_{q\leftarrow \gamma}(z_a)\ln\left( y_I\frac{1-z_a}{z_a}\right) +\frac{e_q^2}{2}\right] T_{qg \rightarrow qg} (s,t,u).\hspace{7mm} \end{eqnarray} Terms of ${\cal O} (\varepsilon)$ and ${\cal O} (y)$ have been omitted as before. The collinear $1/\varepsilon$ poles are proportional to the splitting function \begin{equation} P_{q\leftarrow \gamma} (z_a) = 2N_Ce_q^2P_{q\leftarrow g}(z_a) \end{equation} for photons into quarks, where $e_q$ is the fractional charge of the quark coupling to the photon and \begin{equation} P_{q\leftarrow g}(z_a)=\frac{1}{2}\left[ z_a^2+(1-z_a)^2\right] \end{equation} is the Altarelli-Parisi splitting function for gluons into quarks. This function appears in the evolution equation of the photon structure function as an inhomogeneous or so-called point-like term (see section 2.3). Therefore, the photon initial state singularities can be absorbed into the photon structure function. The necessary steps are well known \cite{Bod92,Ell80}. 
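The numerical $z_a$-integration mentioned above is straightforward to organize with standard quadrature. The following Python sketch is an illustration only and not part of the original calculation: the leading-logarithmic Weizs\"acker-Williams form stands in for the full parametrized photon spectrum, and \texttt{I\_toy} is a hypothetical placeholder for the finite part of one of the $I_{\gamma b\rightarrow 123}$ functions.
\begin{verbatim}
# Illustrative sketch only: numerical z_a convolution with the photon
# spectrum, cf. the final result above.  The Weizsacker-Williams flux
# below is a leading-log stand-in for the parametrized spectrum, and
# I_toy is a placeholder for the finite part of I_{gamma b -> 123}.
import numpy as np

ALPHA = 1.0 / 137.036

def f_gamma_e(x, q2_max=1.0, q2_min=1e-8):
    """Stand-in photon flux in the electron (leading-log WW form)."""
    return ALPHA / (2.0 * np.pi) * (1.0 + (1.0 - x) ** 2) / x \
           * np.log(q2_max / q2_min)

def convolve(I_finite, X_a, n=64):
    """Gauss-Legendre estimate of int_{X_a}^1 dz/z f(X_a/z) I(z)."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    z = 0.5 * (1.0 - X_a) * nodes + 0.5 * (1.0 + X_a)  # [-1,1] -> [X_a,1]
    w = 0.5 * (1.0 - X_a) * weights
    return np.sum(w / z * f_gamma_e(X_a / z) * I_finite(z))

# toy finite part, e.g. the P_{q<-gamma}(z) ln(y_I (1-z)/z) piece
I_toy = lambda z: (z**2 + (1.0 - z)**2) * np.log(1e-3 * (1.0 - z) / z)
print(convolve(I_toy, X_a=0.1))
\end{verbatim}
Gauss-Legendre quadrature is adequate here, since the integrable logarithmic growth of the integrand at $z_a\rightarrow 1$ is never sampled exactly at the endpoint.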
We define the renormalized distribution function of a parton $a$ in the electron $F_{a/e}(X_a,M_a^2)$ as \begin{equation} F_{a/e} (X_a,M_a^2) = \int_{X_a}^1 \frac{\mbox{d}z_a}{z_a} \left[ \delta_{a\gamma } \delta (1-z_a) + \frac{\alpha}{2\pi} R_{q\leftarrow \gamma }(z_a, M_a^2)\right] F_{\gamma /e} \left( \frac{X_a}{z_a}\right) , \end{equation} where $R$ has the general form \begin{equation} R_{a \leftarrow \gamma } (z_a, M_a^2) = -\frac{1}{\varepsilon}P_{q\leftarrow \gamma }(z_a)\frac{\Gamma (1-\varepsilon)} {\Gamma (1-2\varepsilon)} \left( \frac{4\pi\mu^2}{M_a^2} \right) ^\varepsilon + C_{q\leftarrow \gamma} (z_a) \label{eq32} \end{equation} with $C = 0$ in the $\overline{\mbox{MS}}$ scheme. The renormalized partonic cross section for $\gamma b \rightarrow \mbox{jets}$ is then calculated from \begin{equation} \mbox{d}\sigma(\gamma b\rightarrow \mbox{jets}) = \mbox{d}\bar{\sigma} (\gamma b\rightarrow \mbox{jets}) - \frac{\alpha}{2\pi} \int \mbox{d}z_a R_{q\leftarrow \gamma }(z_a,M_a^2) \mbox{d}\sigma (ab\rightarrow \mbox{jets}) . \label{eq33} \end{equation} d$\bar{\sigma} (\gamma b\rightarrow \mbox{jets})$ is the higher order cross section before the subtraction procedure, and d$\sigma (ab\rightarrow \mbox{jets})$ contains the LO parton-parton scattering matrix elements $T_{ab\rightarrow 12}(s,t,u)$. The factor $4\pi\mu^2/M_a^2$ in eq.~(\ref{eq32}) is combined with the factor $4\pi\mu^2/s$ in eq.~(\ref{eq31}) and leads to $M_a^2$ dependent terms of the form \begin{equation} -\frac{1}{\varepsilon}P_{q\leftarrow \gamma }(z_a)\left[ \left( \frac{4\pi\mu^2}{s}\right) ^\varepsilon -\left( \frac{4\pi\mu^2}{M_a^2}\right) ^\varepsilon\right] = -P_{q\leftarrow \gamma }(z_a) \ln\left( \frac{M_a^2}{s}\right) , \end{equation} so that the cross section after subtraction in eq.~(\ref{eq33}) will depend on the factorization scale $M_a^2$. \subsubsection{Proton Initial State Corrections for Direct Photons} Initial state singularities cannot only show up on the direct photon side, but also on the proton side. The parton from the proton, which undergoes the hard scattering, will then radiate a soft or collinear secondary parton. A quark can, e.g., radiate a gluon as in figure \ref{fig17}c), which will then not be observed but contributes to the proton remnant. As we can now have soft gluons, we find not only single but also quadratic poles in $\varepsilon$. After singling out the matrix elements for the diagrams in figure \ref{fig7} singular in the variable \begin{equation} z''' = \frac{p_bp_3}{p_ap_b}, \end{equation} we therefore have to decompose them with partial fractioning using REDUCE \cite{Hea85}. The invariants are approximated quite similarly to the photon initial state corrections and are given by \begin{eqnarray} p_ap_b &=& \frac{s}{2z_b} \\ p_ap_1 &=& \frac{s}{2z_b}\frac{-t}{s}z_b \\ p_ap_2 &=& \frac{s}{2z_b}\frac{-u}{s}z_b \\ p_ap_3 &=& \frac{s}{2z_b} (1-z_b) \\ p_bp_1 &=& \frac{s}{2z_b} \frac{-u}{s} \\ p_bp_2 &=& \frac{s}{2z_b} \frac{-t}{s} \\ p_bp_3 &=& \frac{s}{2z_b} z''' \\ p_1p_2 &=& \frac{s}{2z_b} z_b \\ p_1p_3 &=& \frac{s}{2z_b} \frac{-u}{s}(1-z_b) \\ p_2p_3 &=& \frac{s}{2z_b} \frac{-t}{s}(1-z_b) \end{eqnarray} The result for the first subprocess $\gamma q\rightarrow qgg$ turns out to be \begin{eqnarray} |{\cal M}|^2_{\gamma q\rightarrow qgg}(s,t,u) &=& e^2e_q^2g^4\mu^{6\varepsilon}\frac{1}{sz'''} \left[ 4C_F\left( (1-z_b)(1-\varepsilon)-2+\frac{-2t/s}{z'''-t/s(1-z_b)}\right) \right. \nonumber \\ && \left. 
-4N_C\left( \frac{-t/s}{z'''-t/s(1-z_b)}-\frac{-u/s}{z'''-u/s(1-z_b)}\right) \right] T_{\gamma q\rightarrow gq}(s,t,u). \end{eqnarray} Since the initial quark has to couple to the photon, it cannot vanish in the beam pipe. The singular kernel only describes the radiation of a gluon from the quark, factorizing the leading-order QCD Compton process $\gamma q\rightarrow gq$. According to table \ref{tab4}, we find the same result for incoming anti-quarks. In the four-fermion process \begin{equation} |{\cal M}|^2_{\gamma q\rightarrow qq\bar{q}}(s,t,u) = e^2e_q^2g^4\mu^{6\varepsilon} \frac{1}{sz'''}\left[ \frac{1+(1-z_b)^2}{z_b}-\varepsilon z_b\right] T_{\gamma g\rightarrow q\bar{q}}(s,t,u), \end{equation} the initial quark necessarily produces a collinear final state quark. Thus, we have a non-diagonal transition of the quark-initiated process $\gamma q\rightarrow qq\bar{q}$ into the gluon-initiated process $\gamma g\rightarrow q\bar{q}$. The other non-diagonal transition shows up in \begin{equation} |{\cal M}|^2_{\gamma g\rightarrow gq\bar{q},1}(s,t,u) = e^2e_q^2g^4\mu^{6\varepsilon} \frac{2}{sz'''}C_F\left[ (z_b^2+(1-z_b)^2)(1+\varepsilon)-\varepsilon\right] T_{\gamma q\rightarrow gq}(s,t,u), \end{equation} where the gluon-initiated process $\gamma g\rightarrow gq\bar{q}$ is transformed into the quark-initiated process $\gamma q\rightarrow gq$. The splitting of a gluon into a collinear quark-antiquark pair is analogous to the splitting of the direct photon in the last section. However, the gluon also possesses a non-abelian coupling to other gluons, which can become soft or collinear in \begin{eqnarray} |{\cal M}|^2_{\gamma g\rightarrow gq\bar{q},2}(s,t,u) &=& e^2e_q^2g^4\mu^{6\varepsilon} \frac{2}{sz'''}N_C\left[ 2\left( \frac{1}{z_b}+z_b(1-z_b)-2 \right) \right. \nonumber \\ && \left. +\frac{-t/s}{z'''-t/s(1-z_b)}-\frac{-u/s}{z'''-u/s(1-z_b)} \right] T_{\gamma g\rightarrow q\bar{q}}(s,t,u) \end{eqnarray} and factorizes the photon-gluon fusion process $\gamma g\rightarrow q\bar{q}$. The complete list of proton initial state corrections given above has to be integrated over the phase space region, where parton $3$ is an unobserved part of the proton remnant. The phase space for three-particle final states is taken from section 4.2.4, where $(z''\leftrightarrow z''')$ and $(z_a\leftrightarrow z_b)$ have to be interchanged. The relevant integrals are calculated in appendix B. The result is \begin{equation} \int\mbox{dPS}^{(r)} |{\cal M}|^2_{\gamma b\rightarrow 123} (s,t,u) = \int\limits_{X_b}^1\frac{\mbox{d}z_b}{z_b} e^2e_q^2g^2 \mu^{4\varepsilon} \frac{\alpha_s}{2\pi} \left( \frac{4\pi\mu^2}{s} \right) ^\varepsilon \frac{\Gamma(1-\varepsilon)}{\Gamma(1-2\varepsilon)} J_{\gamma b \rightarrow 123}(s,t,u), \label{eq34} \end{equation} where the functions $J_{\gamma b\rightarrow 123}$ are given by \begin{eqnarray} J_{\gamma q\rightarrow qgg} (s,t,u) &=& \left[ 2C_F\left( -\frac{1}{\varepsilon}\frac{1}{C_F} P_{q\leftarrow q}(z_b) +\delta (1-z_b) \left( \frac{1}{\varepsilon^2} +\frac{1}{2\varepsilon} \left( 3-2\ln \frac{-t}{s}\right) +\frac{1}{2}\ln^2\frac{-t}{s}+\pi^2\right) \right. \rp \nonumber \\ && +1-z_b+(1-z_b) \ln \left( y_J\frac{1-z_b}{z_b}\right) +2R_+\left( \frac{-t}{s}\right) -2\ln\left( \frac{-t}{s}\left( \frac{1-z_b}{z_b}\right) ^2\right) \nonumber \\ && \left. 
-2\frac{z_b}{1-z_b}\ln\left( 1+\frac{-t}{y_Js} \frac{1-z_b}{z_b}\right) \rr \nonumber \\ && -N_C\left( \delta (1-z_b) \left( \frac{1}{\varepsilon} \ln \frac{u}{t} +\frac{1}{2}\ln^2\frac{-t}{s}-\frac{1}{2}\ln^2\frac{-u}{s}\right) +2R_+\left( \frac{-t}{s}\right) -2R_+\left( \frac{-u}{s}\right) \right. \nonumber \\ && -2\ln\left( \frac{-t}{s}\left( \frac{1-z_b}{z_b}\right) ^2\right) +2\ln\left( \frac{-u}{s}\left( \frac{1-z_b}{z_b}\right) ^2\right) \nonumber \\ && \left. \lp-2\frac{z_b}{1-z_b}\ln\left( 1+\frac{-t}{y_Js}\frac{1-z_b}{z_b}\right) +2\frac{z_b}{1-z_b}\ln\left( 1+\frac{-u}{y_Js}\frac{1-z_b}{z_b}\right) \rr\right] \nonumber \\ && T_{\gamma q\rightarrow gq}(s,t,u), \\ J_{\gamma q\rightarrow qq\bar{q}} (s,t,u) &=& \frac{1}{2}\left[ -\frac{1}{\varepsilon}\frac{1}{C_F} P_{g\leftarrow q}(z_b) +\frac{1}{C_F} P_{g\leftarrow q}(z_b)\left( \ln\left( y_J\frac{1-z_b}{z_b}\right) +1\right) \right. \nonumber \\ && \left. -2\frac{1-z_b}{z_b}\right] T_{\gamma g\rightarrow q\bar{q}} (s,t,u), \\ J_{\gamma g\rightarrow gq\bar{q},1} (s,t,u) &=& 2C_F\left[ -\frac{1}{\varepsilon} P_{q\leftarrow g}(z_b) +P_{q\leftarrow g}(z_b)\left( \ln\left( y_J\frac{1-z_b}{z_b}\right) -1\right) +\frac{1}{2} \right] \nonumber \\ && T_{\gamma q\rightarrow gq} (s,t,u), \\ J_{\gamma g\rightarrow gq\bar{q},2} (s,t,u) &=& N_C\left[ -\frac{1}{\varepsilon}\frac{1}{N_C}P_{g\leftarrow g}(z_b) +\delta (1-z_b)\left( \frac{1}{\varepsilon^2}+\frac{1}{\varepsilon}\frac{1}{N_C} \left( \frac{11}{6}N_C -\frac{1}{3}N_f\right) -\frac{1}{2\varepsilon}\ln\frac{tu}{s^2}\right. \rp\nonumber \\ && \left. +\frac{1}{4}\ln^2\frac{-t}{s} +\frac{1}{4}\ln^2\frac{-u}{s}+\pi^2\right) -2R_+\left( \frac{-t}{s}\right) -2R_+\left( \frac{-u}{s}\right) \nonumber \\ && +2\ln\left( \frac{-t}{s}\left( \frac{1-z_b}{z_b}\right) ^2\right) +2\ln\left( \frac{-u}{s}\left( \frac{1-z_b}{z_b}\right) ^2\right) \nonumber \\ && +2\frac{z_b}{1-z_b}\ln\left( 1+\frac{-t}{y_Js} \frac{1-z_b}{z_b}\right) +2\frac{z_b}{1-z_b}\ln\left( 1+\frac{-u}{y_Js}\frac{1-z_b}{z_b}\right) \nonumber \\ && \left. \lp-4\left( \frac{1-z_b}{z_b}+z_b(1-z_b)\right) \ln\left( y_J\frac{1-z_b}{z_b}\right) \right] \left( -\frac{N_C}{4}\right) \right] T_{\gamma g\rightarrow q\bar{q}} (s,t,u). \end{eqnarray} Here, we have used the function \begin{eqnarray} R_+(x) & = & \left( \frac{\ln \left( x \left( \frac{1-z_b}{z_b}\right)^2\right)}{1-z_b}\right)_+ \end{eqnarray} as an abbreviation. As the integration over $z_b$ in eq.~(\ref{eq34}) runs from $X_b$ to 1, the $+$-distributions \cite{Alt78} are defined as \begin{equation} D_+[g] = \int_{X_b}^1 \mbox{d}z_b D(z_b) g(z_b) -\int_0^1 \mbox{d}z_b D(z_b) g(1) , \label{eq44} \end{equation} where \begin{equation} g(z_b) = \frac{1}{z_b} F_{b'/p}\left( \frac{X_b}{z_b}\right) h(z_b), \end{equation} and $h(z_b)$ is a regular function of $z_b$ \cite{Fur82}. This leads to additional terms not given here explicitly when eq.~(\ref{eq44}) is transformed so that both integrals are calculated in the range $[X_b,1]$. Some of the $J_{\gamma b\rightarrow 123}$ contain infrared singularities $\propto 1/\varepsilon^2$, which must cancel against the corresponding singularities in the virtual contributions. 
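For the numerical evaluation, it is instructive to make the transformation of eq.~(\ref{eq44}) explicit. The sketch below is a minimal illustration, not the code used for any results: it implements $D(z_b)=1/(1-z_b)_+$, for which the additional term is $g(1)\ln (1-X_b)$, and uses the quark momentum sum rule $\int_0^1 \mbox{d}z\,z\left[ P_{q\leftarrow q}(z)+P_{g\leftarrow q}(z)\right] =0$ for the Altarelli-Parisi kernels quoted below as a numerical check.
\begin{verbatim}
# Illustrative sketch only: the +-distribution rewritten so that both
# integrals run over [X_b, 1],
#   D_+[g] = int_{X_b}^1 dz D(z)(g(z)-g(1)) - g(1) int_0^{X_b} dz D(z);
# for D(z) = 1/(1-z)_+ the second term evaluates to g(1) ln(1-X_b).
# As a check, the quark momentum sum rule
#   int_0^1 dz z [P_{q<-q}(z) + P_{g<-q}(z)] = 0
# is verified for the kernels quoted below (C_F = 4/3).
import numpy as np

CF = 4.0 / 3.0

def plus_convolve(g, Xb, n=200):
    """D_+[g] for D(z) = 1/(1-z)_+ via Gauss-Legendre on [Xb, 1]."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    z = 0.5 * (1.0 - Xb) * nodes + 0.5 * (1.0 + Xb)
    w = 0.5 * (1.0 - Xb) * weights
    regular = np.sum(w * (g(z) - g(1.0)) / (1.0 - z))
    endpoint = g(1.0) * np.log(1.0 - Xb) if Xb > 0.0 else 0.0
    return regular + endpoint

# z P_{q<-q}(z): plus-piece C_F (1+z^2) z and delta-piece (3/2) C_F
mom_qq = plus_convolve(lambda z: CF * (1.0 + z**2) * z, Xb=0.0) + 1.5 * CF
# z P_{g<-q}(z) = C_F (1+(1-z)^2) is regular
nodes, weights = np.polynomial.legendre.leggauss(200)
z = 0.5 * nodes + 0.5
mom_gq = np.sum(0.5 * weights * CF * (1.0 + (1.0 - z)**2))

print(mom_qq + mom_gq)   # ~ 0: momentum conservation in q -> qg
\end{verbatim}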
The singular parts are decomposed in such a way that the four-dimensional Altarelli-Parisi kernels \begin{eqnarray} P_{q\leftarrow q} (z_b) & = & C_F \left[ \frac{1+z_b^2}{(1-z_b)_+} + \frac{3}{2} \delta (1-z_b) \right] , \\ P_{g\leftarrow q} (z_b) & = & C_F \left[ \frac{1+(1-z_b)^2}{z_b} \right] , \\ P_{g\leftarrow g} (z_b) & = & 2 N_C \left[ \frac{1}{(1-z_b)_+}+\frac{1}{z_b}+z_b(1-z_b)-2 \right] + \left[ \frac{11}{6}N_C-\frac{1}{3}N_f\right] \delta (1-z_b),\\ P_{q\leftarrow g} (z_b) & = & \frac{1}{2} \left[ z_b^2+(1-z_b)^2 \right] \end{eqnarray} are split off as coefficients of the $1/\varepsilon$ poles. They also appear in the evolution equations for the parton distribution functions in the proton in section 2.2. The singular terms proportional to these kernels are absorbed as usual into the scale dependent structure functions \begin{equation} F_{b/p} (X_b,M_b^2) = \int_{X_b}^1 \frac{\mbox{d}z_b}{z_b} \left[ \delta_{bb'} \delta (1-z_b) + \frac{\alpha_s}{2\pi} R'_{b\leftarrow b'} (z_b, M_b^2) \right] F_{b'/p} \left( \frac{X_b}{z_b}\right) , \end{equation} where $F_{b'/p}(X_b/z_b)$ is the LO structure function before absorption of the collinear singularities and \begin{equation} R'_{b \leftarrow b'} (z_b, M_b^2) = -\frac{1}{\varepsilon} P_{b\leftarrow b'} (z_b) \frac{\Gamma (1-\varepsilon)} {\Gamma (1-2\varepsilon)} \left( \frac{4\pi\mu^2}{M_b^2} \right) ^\varepsilon + C'_{b\leftarrow b'} (z_b) \label{eq42} \end{equation} with $C' = 0$ in the $\overline{\mbox{MS}}$ scheme. Then, the renormalized higher order hard scattering cross section d$\sigma (\gamma b \rightarrow $jets) is calculated from \begin{equation} \mbox{d}\sigma(\gamma b\rightarrow \mbox{jets}) = \mbox{d}\bar{\sigma} (\gamma b\rightarrow \mbox{jets}) - \frac{\alpha_s}{2\pi} \int \mbox{d}z_b R'_{b\leftarrow b'}(z_b,M_b^2) \mbox{d}\sigma (\gamma b'\rightarrow \mbox{jets}) \label{eq43} . \end{equation} d$\bar{\sigma} (\gamma b\rightarrow \mbox{jets})$ is the higher order cross section before the subtraction procedure, and d$\sigma (\gamma b'\rightarrow \mbox{jets})$ contains the lowest order matrix elements $T_{\gamma q\rightarrow gq}(s,t,u)$ and $T_{\gamma g\rightarrow q\bar{q}}(s,t,u)$ in $d$ dimensions. This well-known factorization prescription \cite{Ell80,Bur80,Arn89} finally removes all remaining collinear singularities. It is universal and leads to the same definition of the structure functions for all processes, provided the choice of the regular function $C'$ in (\ref{eq42}) is kept fixed. Similar to the case of photon initial state singularities, the higher order cross sections in (\ref{eq43}) will depend on the factorization scale $M_b$ due to terms of the form $P_{b\leftarrow b'}(z_b) \ln (M_b^2/s)$. \subsubsection{Photon Initial State Corrections for Resolved Photons} We now turn back to the case of resolved photons and consider their initial state corrections. We start with the singularities on the photonic side of the hard scattering cross section. For direct photons, there was only one possible singularity coming from the splitting of the photon into a quark-antiquark pair as shown in figure \ref{fig17}b). However, resolved photons contribute to the hard scattering like hadrons through their partonic structure. Therefore they produce poles similar to the proton initial state singularities in figure \ref{fig17}c) and in section 4.2.6. These poles can be quadratic in $1/\varepsilon$, owing to the radiation of soft and collinear gluons off the quarks in the photon.
We use the same ${\cal O} (\alpha_s^3)$ matrix elements that were already calculated for the final state corrections of resolved photons in section 4.2.3. The relevant diagrams are those for parton-parton scattering in figure \ref{fig8}. As stated before, these diagrams show only the generic types and have also to be used in their crossed forms for the complete set of diagrams. All possible processes are listed in table \ref{tab6} together with their matrix elements, initial spin and color averages, and statistical factors. Only the matrix elements singular in the variable $z_a''=p_ap_3/p_ap_b$ are kept, where $p_3$ is the momentum of the parton soft or collinear to the original parton in the photon with momentum $p_a$. Parton $3$ will then be a part of the photon remnant. The matrix elements are expressed through the ten approximated invariants given in eqs.~(\ref{eq37})-(\ref{eq38}) and decomposed with the help of partial fractioning. It is then possible to factorize them into singular kernels and parts of the leading-order parton-parton matrix elements. \begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|} \hline Process & Matrix Element $\overline{|{\cal M}|^2}$ \\ \hline \hline $qq'\rightarrow qq'g$ & $[|{\cal M}|^2_{qq'\rightarrow qq'g,1}(s,t,u)+|{\cal M}|^2_{qq'\rightarrow qq'g,2}(s,t,u)] /[4N_C^2]$ \\ \hline $q\bar{q}'\rightarrow q\bar{q}'g$ & $[|{\cal M}|^2_{qq'\rightarrow qq'g,1}(u,t,s)+|{\cal M}|^2_{qq'\rightarrow qq'g,2}(u,t,s)] /[4N_C^2]$ \\ \hline $q\bar{q}\rightarrow \bar{q}'q'g$ & $[|{\cal M}|^2_{qq'\rightarrow qq'g,1}(t,s,u)]/[4N_C^2]$ \\ \hline $qg\rightarrow qq'\bar{q}'$ & $[ |{\cal M}|^2_{qg\rightarrow qq'\bar{q}'}(s,t,u) +|{\cal M}|^2_{qg\rightarrow qq'\bar{q}'}(u,t,s)+|{\cal M}|^2_{qg\rightarrow qq'\bar{q}'}(t,s,u)$\\ & $ -|{\cal M}|^2_{qq'\rightarrow qq'g,2}(t,s,u)]/[8(1-\varepsilon)N_C^2C_F]$ \\ \hline $qq\rightarrow qqg$ & $[|{\cal M}|^2_{qq'\rightarrow qq'g,1}(s,t,u)+|{\cal M}|^2_{qq'\rightarrow qq'g,1}(s,u,t)+ |{\cal M}|^2_{qq\rightarrow qqg}(s,t,u) $ \\ & $ +2|{\cal M}|^2_{qq'\rightarrow qq'g,2}(s,t,u)]/[4N_C^2]/2!$ \\ \hline $q\bar{q}\rightarrow q\bar{q}g$ & $[|{\cal M}|^2_{qq'\rightarrow qq'g,1}(u,t,s)+|{\cal M}|^2_{qq'\rightarrow qq'g,1}(u,s,t)+ |{\cal M}|^2_{qq\rightarrow qqg}(u,t,s) $ \\ & $ +|{\cal M}|^2_{qq'\rightarrow qq'g,2}(u,t,s)]/[4N_C^2]$ \\ \hline $qg\rightarrow qq\bar{q}$ & $[ |{\cal M}|^2_{qg\rightarrow qq'\bar{q}'}(s,t,u)+|{\cal M}|^2_{qg\rightarrow qq'\bar{q}'}(s,u,t) +|{\cal M}|^2_{qg\rightarrow qq\bar{q}}(s,t,u)$\\ & $ +|{\cal M}|^2_{qg\rightarrow qq'\bar{q}'}(u,t,s)+|{\cal M}|^2_{qg\rightarrow qq'\bar{q}'}(t,u,s) +|{\cal M}|^2_{qg\rightarrow qq\bar{q}}(u,t,s)$\\ & $ +|{\cal M}|^2_{qg\rightarrow qq'\bar{q}'}(t,s,u)+|{\cal M}|^2_{qg\rightarrow qq'\bar{q}'}(u,s,t) +|{\cal M}|^2_{qg\rightarrow qq\bar{q}}(t,s,u) $ \\ & $ -2|{\cal M}|^2_{qq'\rightarrow qq'g,2}(t,s,u)]/[8(1-\varepsilon)N_C^2C_F]/2!$ \\ \hline $q\bar{q}\rightarrow ggg$ & $[|{\cal M}|^2_{q\bar{q}\rightarrow ggg}(s,t,u)]/[4N_C^2]/3!$ \\ \hline $qg\rightarrow qgg$ & $[-|{\cal M}|^2_{q\bar{q}\rightarrow ggg}(t,s,u)/3+|{\cal M}|^2_{qg\rightarrow qgg,1}(s,t,u) +|{\cal M}|^2_{qg\rightarrow qgg,2}(s,t,u)]/[8(1-\varepsilon)N_C^2C_F]/2!$ \\ \hline $\bar{q}g\rightarrow\bar{q}gg$ & $[-|{\cal M}|^2_{q\bar{q}\rightarrow ggg}(t,u,s)/3+|{\cal M}|^2_{qg\rightarrow qgg,1}(u,t,s) +|{\cal M}|^2_{qg\rightarrow qgg,2}(u,t,s)]/[8(1-\varepsilon)N_C^2C_F]/2!$ \\ \hline $gg\rightarrow q\bar{q}g$ & $[-|{\cal M}|^2_{qg\rightarrow qgg,1}(t,s,u)+|{\cal M}|^2_{gg\rightarrow q\bar{q}g}(s,t,u)] 
/[16(1-\varepsilon)^2N_C^2C_F^2]$ \\ \hline $gg\rightarrow ggg$ & $[|{\cal M}|^2_{gg\rightarrow ggg}(s,t,u)]/[16(1-\varepsilon)^2N_C^2C_F^2]/ 3!$\\ \hline \end{tabular} \end{center} \caption[Initial State $2\rightarrow 3$ Matrix Elements for Resolved Photoproduction] {\label{tab6}{\it Summary of $2\rightarrow 3$ squared matrix elements for resolved photoproduction.}} \end{table} Let us start with the process $qq'\rightarrow qq'g$ in figure \ref{fig8}a). Here, the final gluon can be soft or collinear to the incoming quark $a$ \begin{eqnarray} |{\cal M}|^2_{qq'\rightarrow qq'g,1}(s,t,u) &=& g^6\mu^{6\varepsilon}\frac{2}{sz''}\nonumber \\ && \left[ (2C_F-N_C)\left( \frac{-2}{z''+(1-z_a)} +\frac{-t/s}{z''-t/s(1-z_a)}+\frac{-u/s}{z''-u/s(1-z_a)}\right) \right. \nonumber \\ && \left. +C_F\left( (1-z_a)(1-\varepsilon)-2+\frac{-2u/s}{z''-u/s(1-z_a)}\right) \right] T_{qq'\rightarrow qq'}(s,t,u), \label{eq41} \end{eqnarray} so that the original quark will also participate in the Born process $qq'\rightarrow qq'$. However, the quark can also go into the photon remnant. It will then radiate a gluon that scatters from the quark $b$ with different flavor \begin{equation} |{\cal M}|^2_{qq'\rightarrow qq'g,2}(s,t,u) = g^6\mu^{6\varepsilon}\frac{1}{sz''}\left[ \frac{1+(1-z_a)^2}{z_a}-\varepsilon z_a \right] T_{qg \rightarrow qg}(s,t,u). \end{equation} For equal flavors in figure \ref{fig8}b), the interference contribution \begin{eqnarray} |{\cal M}|^2_{qq\rightarrow qqg}(s,t,u) &=& g^6\mu^{6\varepsilon}\frac{2}{sz''}\nonumber \\ && \left[ (2C_F-N_C)\left( \frac{-2}{z''+(1-z_a)} +\frac{-t/s}{z''-t/s(1-z_a)}+\frac{-u/s}{z''-u/s(1-z_a)}\right) \right. \nonumber \\ && \left. +C_F\left( (1-z_a)(1-\varepsilon)-2+\frac{2}{z''+(1-z_a)}\right) \right] T_{qq\rightarrow qq}(s,t,u) \end{eqnarray} has the same kernel for the color factor $(2C_F-N_C)$, but a crossed version for the color factor $C_F$, and is proportional to the Born interference contribution $T_{qq\rightarrow qq}$. If the final gluon is crossed into the initial state for the processes $qg\rightarrow qq'\bar{q}'$ and $qg\rightarrow qq\bar{q}$ with unlike and like quark flavors, it can split up into a collinear quark-antiquark pair quite similar to the splitting of direct photons in section 4.2.5. Consequently, the singular kernels in \begin{equation} |{\cal M}|^2_{qg\rightarrow qq'\bar{q}'}(s,t,u) = g^6\mu^{6\varepsilon}\frac{2}{sz''}C_F\left[ (z_a^2+(1-z_a)^2)(1+\varepsilon)-\varepsilon\right] T_{qq'\rightarrow qq'}(s,t,u) \end{equation} and \begin{equation} |{\cal M}|^2_{qg\rightarrow qq\bar{q}}(s,t,u) = g^6\mu^{6\varepsilon}\frac{4}{sz''}C_F\left[ (z_a^2+(1-z_a)^2)(1+\varepsilon)-\varepsilon\right] T_{qq\rightarrow qq}(s,t,u) \end{equation} are the same as in eqs.~(\ref{eq39})-(\ref{eq40}), and we have the quark-quark scattering processes $qq'\rightarrow qq'$ and $qq\rightarrow qq$ on the tree level. The additional factor of $(1+\varepsilon)$ is due to the averaging of initial gluon spins with $1/(2(1-\varepsilon))\simeq 1/2(1+\varepsilon)$, whereas the photons were averaged just by $1/2$. The third process to be considered is taken from figure \ref{fig8}c). Obviously, only a gluon can go into the photon remnant so that in \begin{eqnarray} |{\cal M}|^2_{q\bar{q}\rightarrow ggg}(s,t,u) &=& g^6\mu^{6\varepsilon} \frac{6}{sz''}\left[ C_F\left( (1-z_a)(1-\varepsilon)-2\right) T_{q\bar{q}\rightarrow gg}(s,t,u) \right. 
\\ && +4N_CC_F \nonumber \\ && \left( \frac{-u/s}{z''-u/s(1-z_a)}(1-\varepsilon)\left( \frac{t^2+u^2 -\varepsilon s^2}{tu}\right) \left( 2N_CC_F-2N_C^2\frac{tu}{s^2}-N_C^2\frac{u^2} {s^2}\right) \right. \nonumber \\ && +\frac{1}{z''+(1-z_a)}(1-\varepsilon)\left( \frac{t^2+u^2 -\varepsilon s^2}{tu}\right) \left( \lr 2N_CC_F-2N_C^2\frac{ut}{s^2}\right) \left( -2+2\frac{C_F}{N_C} \right) \right. \nonumber \\ && \left. +N_C^2\frac{t^2+u^2}{s^2}\right) \nonumber \\ && \left. \lp+\frac{-t/s}{z''-t/s(1-z_a)}(1-\varepsilon)\left( \frac{t^2+u^2 -\varepsilon s^2}{tu}\right) \left( 2N_CC_F-2N_C^2\frac{tu}{s^2}-N_C^2\frac{t^2} {s^2}\right) \rr\right] \nonumber \end{eqnarray} we find the same simple pole for the color factor $C_F$ as in eq.~(\ref{eq41}) and still have quark-antiquark annihilation into gluons in the Born process. The double poles are, however, more complicated and factorize only parts of the Born matrix elements. If a final gluon is crossed into the initial state, a soft or collinear gluon can also be radiated from the initial gluon \begin{eqnarray} |{\cal M}|^2_{qg\rightarrow qgg,1}(s,t,u) &=& g^6\mu^{6\varepsilon} \frac{8}{sz''}\left[ N_C\left( \frac{1}{z_a}+z_a(1-z_a)-2\right) T_{qg \rightarrow qg}(s,t,u) \right. \nonumber \\ && -2N_CC_F \nonumber \\ && \left( \frac{-u/s}{z''-u/s(1-z_a)}(1-\varepsilon)\left( \frac{s^2+u^2 -\varepsilon t^2}{su}\right) \left( 2N_CC_F-2N_C^2\frac{su}{t^2}-N_C^2\frac{u^2} {t^2}\right) \right. \nonumber \\ && +\frac{1}{z''+(1-z_a)}(1-\varepsilon)\left( \frac{s^2+u^2 -\varepsilon t^2}{su}\right) \left( 2N_CC_F-2N_C^2\frac{us}{t^2}-N_C^2\frac{s^2}{t^2}\right) \nonumber \\ && \left. \lp+\frac{-t/s}{z''-t/s(1-z_a)}(1-\varepsilon)\left( \frac{s^2+u^2 -\varepsilon t^2}{su}\right) \left( N_C^2\frac{s^2+u^2}{t^2}\right) \rr\right] , \end{eqnarray} leaving a new kernel and a partly factorizable $qg \rightarrow qg$ scattering process behind. Alternatively, the quark can go into the photon remnant \begin{equation} |{\cal M}|^2_{qg\rightarrow qgg,2}(s,t,u) = g^6\mu^{6\varepsilon}\frac{1}{sz''}\left[ \frac{1+(1-z_a)^2}{z_a}-\varepsilon z_a \right] T_{gg\rightarrow gg}(s,t,u), \end{equation} which leads to a $gg\rightarrow gg$ Born process. For collinear quarks, only a single divergence is possible. The next kernel for $gg\rightarrow q\bar{q}g$ is already known from $qg\rightarrow qq'\bar{q}'$ and $qg\rightarrow qq\bar{q}$ \begin{equation} |{\cal M}|^2_{gg\rightarrow q\bar{q}g}(s,t,u) = g^6\mu^{6\varepsilon}\frac{2}{sz''}C_F\left[ (z_a^2+(1-z_a)^2)(1+\varepsilon)-\varepsilon\right] T_{qg \rightarrow qg}(s,t,u), \end{equation} where a gluon splits into a quark-antiquark pair, but now with a different leading-order matrix element $qg \rightarrow qg$. Finally in \begin{eqnarray} |{\cal M}|^2_{gg\rightarrow ggg}(s,t,u) &=& g^6\mu^{6\varepsilon} \frac{12}{sz''}\left[ N_C\left( \frac{1}{z_a}+z_a(1-z_a)-2\right) T_{gg\rightarrow gg}(s,t,u) \right. \nonumber \\ && +8N_C^4C_F \nonumber \\ && \left( \frac{-t/s}{z''-t/s(1-z_a)}(1-\varepsilon)^2\left( 3-\frac{2su} {t^2}+\frac{s^4+u^4}{s^2u^2}\right) \right. \nonumber \\ && +\frac{-u/s}{z''-u/s(1-z_a)}(1-\varepsilon)^2\left( 3-\frac{2st} {u^2}+\frac{s^4+t^4}{s^2t^2}\right) \nonumber \\ && \left. \lp+\frac{1}{z''+(1-z_a)}(1-\varepsilon)^2\left( 3-\frac{2tu} {s^2}+\frac{t^4+u^4}{t^2u^2}\right) \rr\right] , \end{eqnarray} it is clear that the Born process must also be completely gluonic and we can only have the gluon splitting into two gluons as in $qg\rightarrow qgg$. 
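All of the singular kernels above rely on the approximated invariants of eqs.~(\ref{eq37})-(\ref{eq38}). As a simple consistency check, contracting momentum conservation $p_a+p_b=p_1+p_2+p_3$ with $p_a$ reproduces $p_ap_b$ exactly once $s+t+u=sz''$ is used, whereas the contraction with $p_b$ is violated only at ${\cal O}(z'')$, i.e.~precisely in the soft and collinear limit. The following symbolic sketch, included purely as an illustration, makes this explicit.
\begin{verbatim}
# Illustrative consistency check of the approximated invariants:
# p_a + p_b = p_1 + p_2 + p_3 contracted with p_a holds exactly once
# s + t + u = s z'' is used, while the contraction with p_b is
# violated only at O(z''), i.e. in the soft/collinear limit.
import sympy as sp

s, t, u, za, zpp = sp.symbols("s t u z_a zpp")   # zpp stands for z''
pre = s / (2 * za)                               # common factor s/(2 z_a)

papb = pre
pap1, pap2, pap3 = pre * (-t / s), pre * (-u / s), pre * zpp
pbp1, pbp2, pbp3 = pre * (-u / s) * za, pre * (-t / s) * za, pre * (1 - za)

subst = {u: s * zpp - s - t}                     # s + t + u = s z''

check_a = sp.simplify((pap1 + pap2 + pap3 - papb).subs(subst))
check_b = sp.simplify((pbp1 + pbp2 + pbp3 - papb).subs(subst))

print(check_a)   # 0        : exact
print(check_b)   # -s*zpp/2 : vanishes for z'' -> 0
\end{verbatim}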
All these contributions have to be considered several times and also in crossed forms according to table \ref{tab6}. What remains to be done is the integration over the phase space of particle $3$ in a region, where it can be considered a part of the photon remnant. We do so with the help of the phase space dPS$^{(r)}$ calculated in section 4.2.4 and leave the $z_a$-integration for numerics. Again, the reason is the analytically not integrable form of the photonic parton densities that have to be convoluted with the matrix elements. The result is \begin{equation} \int\mbox{dPS}^{(r)} |{\cal M}|^2_{ab\rightarrow 123} (s,t,u) = \int\limits_{X_a}^1\frac{\mbox{d}z_a}{z_a} g^4 \mu^{4\varepsilon} \frac{\alpha_s}{2\pi} \left( \frac{4\pi\mu^2}{Q^2} \right) ^\varepsilon \frac{\Gamma(1-\varepsilon)}{\Gamma(1-2\varepsilon)} I_{ab\rightarrow 123}(s,t,u). \end{equation} The integrated matrix elements are put into the functions $I_{ab \rightarrow 123}$: \begin{eqnarray} I_{qq'\rightarrow qq'g,1}(s,t,u) & = & \left[ (2C_F-N_C) \left( \delta (1-z_a) \left( -\frac{1}{2\varepsilon }(-2l(s)+l(t)+l(u))\right. \rp\right. \nonumber \\ && \left. +\frac{1}{4}(-2l^2(s)+l^2(t)+l^2(u))\right) +\frac{z_a}{(1-z_a)}_+ (-2l(s)+l(t)+l(u))\nonumber \\ && +2\frac{z_a}{1-z_a}\ln\left( 1+\frac{|s|}{y_IQ^2}\frac{1-z_a}{z_a}\right) -\frac{z_a}{1-z_a}\ln\left( 1+\frac{|t|}{y_IQ^2} \frac{1-z_a}{z_a}\right) \nonumber \\ && \left. -\frac{z_a}{1-z_a}\ln\left( 1+\frac{|u|}{y_IQ^2}\frac{1-z_a}{z_a}\right) \rr\nonumber \\ && +C_F\left( -\frac{1}{\varepsilon}\frac{1}{C_F}P_{q\leftarrow q}(z_a) +\delta (1-z_a)\left( \frac{1}{\varepsilon^2}+\frac{1}{2\varepsilon}(3-2l(u)) \right. \rp\nonumber \\ && \left. +\frac{1}{2}l^2(u)+\pi^2\right) +1-z_a+(1-z_a)\ln\left( y_I\frac{1-z_a}{z_a}\right) +2R_+\left( \left| \frac{u}{Q^2}\right| \right) \nonumber \\ && \left. \lp-2l\left( u\left( \frac{1-z_a}{z_a}\right) ^2\right) -2\frac{z_a}{1-z_a}\ln\left( 1+\frac{|u|}{y_IQ^2}\frac{1-z_a}{z_a}\right) \right) \right] \nonumber \\ && T_{qq'\rightarrow qq'}(s,t,u), \\ I_{qq'\rightarrow qq'g,2}(s,t,u) & = & \frac{1}{2}\left[ -\frac{1}{\varepsilon}\frac{1}{C_F} P_{g\leftarrow q}(z_a) +\frac{1}{C_F} P_{g\leftarrow q}(z_a)\left( \ln\left( y_I\frac{1-z_a}{z_a}\right) +1\right) \right. \nonumber \\ && \left. -2\frac{1-z_a}{z_a}\right] T_{qg \rightarrow qg}(s,t,u), \\ I_{qq\rightarrow qqg}(s,t,u) & = & \left[ (2C_F-N_C) \left( \delta (1-z_a) \left( -\frac{1}{2\varepsilon }(-2l(s)+l(t)+l(u))\right. \rp\right. \nonumber \\ && \left. +\frac{1}{4}(-2l^2(s)+l^2(t)+l^2(u))\right) +\frac{z_a}{(1-z_a)}_+ (-2l(s)+l(t)+l(u))\nonumber \\ && +2\frac{z_a}{1-z_a}\ln\left( 1+\frac{|s|}{y_IQ^2}\frac{1-z_a}{z_a}\right) -\frac{z_a}{1-z_a}\ln\left( 1+\frac{|t|}{y_IQ^2} \frac{1-z_a}{z_a}\right) \nonumber \\ && \left. -\frac{z_a}{1-z_a}\ln\left( 1+\frac{|u|}{y_IQ^2}\frac{1-z_a}{z_a}\right) \rr\nonumber \\ && +C_F\left( -\frac{1}{\varepsilon}\frac{1}{C_F}P_{q\leftarrow q}(z_a) +\delta (1-z_a)\left( \frac{1}{\varepsilon^2}+\frac{1}{2\varepsilon}(3-2l(s)) \right. \rp\nonumber \\ && \left. +\frac{1}{2}l^2(s)+\pi^2\right) +1-z_a+(1-z_a)\ln\left( y_I\frac{1-z_a}{z_a}\right) +2R_+\left( \left| \frac{s}{Q^2}\right| \right) \nonumber \\ && \left. 
\lp-2l\left( s\left( \frac{1-z_a}{z_a}\right) ^2\right) -2\frac{z_a}{1-z_a}\ln\left( 1+\frac{|s|}{y_IQ^2}\frac{1-z_a}{z_a}\right) \right) \right] \nonumber \\ && T_{qq\rightarrow qq}(s,t,u), \\ I_{qg\rightarrow qq'\bar{q}'}(s,t,u) & = & 2C_F\left[ -\frac{1}{\varepsilon} P_{q\leftarrow g}(z_a) +P_{q\leftarrow g}(z_a)\left( \ln\left( y_I\frac{1-z_a}{z_a}\right) -1\right) +\frac{1}{2}\right] \nonumber \\ && T_{qq'\rightarrow qq'}(s,t,u), \\ I_{qg\rightarrow qq\bar{q}}(s,t,u) & = & 4C_F\left[ -\frac{1}{\varepsilon} P_{q\leftarrow g}(z_a) +P_{q\leftarrow g}(z_a)\left( \ln\left( y_I\frac{1-z_a}{z_a}\right) -1\right) +\frac{1}{2}\right] \nonumber \\ && T_{qq\rightarrow qq}(s,t,u), \\ I_{q\bar{q}\rightarrow ggg}(s,t,u) & = & \left[ 3C_F\left( -\frac{1}{\varepsilon}\frac{1}{C_F} P_{q\leftarrow q}(z_a) +\delta (1-z_a) \left( \frac{1}{\varepsilon^2}+\frac{3}{2\varepsilon}+\pi^2\right) \right. \rp\nonumber \\ && \left. \lp +1-z_a+(1-z_a)\ln\left( y_I\frac{1-z_a}{z_a}\right) \rr\right] T_{q\bar{q}\rightarrow gg}(s,t,u)\nonumber \\ && +3N_CC_F\left[ \left( \delta (1-z_a)\left( -\frac{2}{\varepsilon}l(t)+2l(t)+l^2(t)\right) +4R_+\left( \left| \frac{t}{Q^2}\right| \right) \right. \rp\nonumber \\ && \left. -4l\left( t\left( \frac{1-z_a}{z_a}\right) ^2\right) -4\frac{z_a}{1-z_a}\ln\left( 1+ \frac{|t|}{y_IQ^2}\frac{1-z_a}{z_a}\right) \rr\nonumber \\ && \left( N_C^2\left( \frac{u}{t}-\frac{2u^2}{s^2}\right) -\frac{u}{t}-\frac{t}{u}\right) \nonumber \\ && +\left( \delta (1-z_a)\left( -\frac{2}{\varepsilon}l(s)+2l(s)+l^2(s)\right) +4R_+\left( \left| \frac{s}{Q^2}\right| \right) \right. \nonumber \\ && \left. -4l\left( s\left( \frac{1-z_a}{z_a}\right) ^2\right) -4\frac{z_a}{1-z_a}\ln\left( 1+ \frac{|s|}{y_IQ^2}\frac{1-z_a}{z_a}\right) \rr\nonumber \\ && \left( 2\frac{t^2+u^2}{s^2}+\frac{1}{N_C^2}\left( \frac{u}{t}+\frac{t}{u}\right) \rr\nonumber \\ && +\left( \delta (1-z_a)\left( -\frac{2}{\varepsilon}l(u)+2l(u)+l^2(u)\right) +4R_+\left( \left| \frac{u}{Q^2}\right| \right) \right. \nonumber \\ && \left. -4l\left( u\left( \frac{1-z_a}{z_a}\right) ^2\right) -4\frac{z_a}{1-z_a}\ln\left( 1+ \frac{|u|}{y_IQ^2}\frac{1-z_a}{z_a}\right) \rr\nonumber \\ && \left( N_C^2\left( \frac{t}{u}-\frac{2t^2}{s^2}\right) -\frac{u}{t}-\frac{t}{u}\right) \nonumber \\ && +\delta (1-z_a)\left( 2l(t)\left( N_C^2 \frac{u}{t}-\frac{s^2}{tu}\right) +2l(s)\left( 2+\frac{1}{N_C^2}\frac{s^2}{tu}\right) \right. \nonumber \\ && \left. \lp+2l(u)\left( N_C^2 \frac{t}{u}-\frac{s^2}{tu}\right) \rr\right] ,\\ I_{qg\rightarrow qgg,1}(s,t,u) & = & \left[ 2N_C\left( -\frac{1}{\varepsilon}\frac{1}{N_C} P_{g\leftarrow g}(z_a) \right. \rp\nonumber \\ && +\delta (1-z_a)\left( \frac{1}{\varepsilon^2}+\frac{1}{\varepsilon}\frac{1}{N_C} \left( \frac{11}{6}N_C -\frac{1}{3}N_f\right) +\pi^2\right) \nonumber \\ && \left. \lp +2\ln\left( y_I\frac{1-z_a}{z_a}\right) \left( \frac{1}{z_a}+z_a(1-z_a)-1\right) \right) \right] T_{qg \rightarrow qg}(s,t,u)\nonumber \\ && -2N_CC_F\left[ \left( \delta (1-z_a)\left( -\frac{2}{\varepsilon}l(t)+2l(t)+l^2(t)\right) +4R_+\left( \left| \frac{t}{Q^2}\right| \right) \right. \rp\nonumber \\ && \left. -4l\left( t\left( \frac{1-z_a}{z_a}\right) ^2\right) -4\frac{z_a}{1-z_a}\ln\left( 1+ \frac{|t|}{y_IQ^2}\frac{1-z_a}{z_a}\right) \rr\nonumber \\ && \left( N_C^2\frac{(s^2+u^2)^2}{ust^2} \right) \nonumber \\ && +\left( \delta (1-z_a)\left( -\frac{2}{\varepsilon}l(s)+2l(s)+l^2(s)\right) +4R_+\left( \left| \frac{s}{Q^2}\right| \right) \right. \nonumber \\ && \left. 
-4l\left( s\left( \frac{1-z_a}{z_a}\right) ^2\right) -4\frac{z_a}{1-z_a}\ln\left( 1+ \frac{|s|}{y_IQ^2}\frac{1-z_a}{z_a}\right) \rr\nonumber \\ && \left( N_C^2\left( \frac{u}{s}-\frac{2u^2}{t^2}\right) -\frac{u}{s}-\frac{s}{u}\right) \nonumber \\ && +\left( \delta (1-z_a)\left( -\frac{2}{\varepsilon}l(u)+2l(u)+l^2(u)\right) +4R_+\left( \left| \frac{u}{Q^2}\right| \right) \right. \nonumber \\ && \left. -4l\left( u\left( \frac{1-z_a}{z_a}\right) ^2\right) -4\frac{z_a}{1-z_a}\ln\left( 1+ \frac{|u|}{y_IQ^2}\frac{1-z_a}{z_a}\right) \rr\nonumber \\ && \left( N_C^2\left( \frac{s}{u}-\frac{2s^2}{t^2}\right) -\frac{u}{s}-\frac{s}{u}\right) \nonumber \\ && +\delta (1-z_a)\left( 2l(t) N_C^2 \left( \frac{u}{s}+\frac{s}{u}\right) +2l(s)\left( N_C^2 \frac{u}{s}-\frac{t^2}{su}\right) \right. \nonumber \\ && \left. \lp+2l(u)\left( N_C^2 \frac{s}{u}-\frac{t^2}{su}\right) \rr\right] ,\\ I_{qg\rightarrow qgg,2}(s,t,u) & = & \frac{1}{2}\left[ -\frac{1}{\varepsilon}\frac{1}{C_F} P_{g\leftarrow q}(z_a) +\frac{1}{C_F} P_{g\leftarrow q}(z_a)\left( \ln\left( y_I\frac{1-z_a}{z_a}\right) +1\right) \right. \nonumber \\ && \left. -2\frac{1-z_a}{z_a}\right] T_{gg\rightarrow gg}(s,t,u), \\ I_{gg\rightarrow q\bar{q}g}(s,t,u) & = & 2C_F\left[ -\frac{1}{\varepsilon} P_{q\leftarrow g}(z_a) +P_{q\leftarrow g}(z_a)\left( \ln\left( y_I\frac{1-z_a}{z_a}\right) -1\right) +\frac{1}{2}\right] \nonumber \\ && T_{qg \rightarrow qg}(s,t,u), \\ I_{gg\rightarrow ggg}(s,t,u) & = & \left[ 3N_C\left( -\frac{1}{\varepsilon}\frac{1}{N_C} P_{g\leftarrow g}(z_a)\right. \rp \nonumber \\ && +\delta (1-z_a)\left( \frac{1}{\varepsilon^2}+\frac{1}{\varepsilon}\frac{1}{N_C} \left( \frac{11}{6}N_C -\frac{1}{3}N_f\right) +\pi^2\right) \nonumber \\ && \left. \lp +2\ln\left( y_I\frac{1-z_a}{z_a}\right) \left( \frac{1}{z_a}+z_a(1-z_a)-1\right) \right) \right] T_{gg\rightarrow gg}(s,t,u)\nonumber \\ && +12N_C^4C_F\left[ \left( \delta (1-z_a)\left( -\frac{2}{\varepsilon}l(t)+4l(t)+l^2(t)\right) +4R_+\left( \left| \frac{t}{Q^2}\right| \right) \right. \rp\nonumber \\ && \left. -4l\left( t\left( \frac{1-z_a}{z_a}\right) ^2\right) -4\frac{z_a}{1-z_a}\ln\left( 1+ \frac{|t|}{y_IQ^2}\frac{1-z_a}{z_a}\right) \rr\nonumber \\ && \left( 3-\frac{2us}{t^2}+\frac{u^4+s^4}{u^2s^2}\right) \nonumber \\ && +\left( \delta (1-z_a)\left( -\frac{2}{\varepsilon}l(u)+4l(u)+l^2(u)\right) +4R_+\left( \left| \frac{u}{Q^2}\right| \right) \right. \nonumber \\ && \left. -4l\left( u\left( \frac{1-z_a}{z_a}\right) ^2\right) -4\frac{z_a}{1-z_a}\ln\left( 1+ \frac{|u|}{y_IQ^2}\frac{1-z_a}{z_a}\right) \rr\nonumber \\ && \left( 3-\frac{2ts}{u^2}+\frac{t^4+s^4}{t^2s^2}\right) \nonumber \\ && +\left( \delta (1-z_a)\left( -\frac{2}{\varepsilon}l(s)+4l(s)+l^2(s)\right) +4R_+\left( \left| \frac{s}{Q^2}\right| \right) \right. \nonumber \\ && \left. -4l\left( s\left( \frac{1-z_a}{z_a}\right) ^2\right) -4\frac{z_a}{1-z_a}\ln\left( 1+ \frac{|s|}{y_IQ^2}\frac{1-z_a}{z_a}\right) \rr\nonumber \\ && \left. \left( 3-\frac{2tu}{s^2}+\frac{t^4+u^4}{t^2u^2}\right) \right] . \end{eqnarray} The absorption of the collinear poles $1/\varepsilon$ proportional to the different Altarelli-Parisi splitting functions is handled in a completely analogous way as in section 4.2.6 for proton initial state corrections. The only difference is that the poles are absorbed into the photon structure function and not the proton structure function. As always, we omit terms of higher order in $\varepsilon$ and the invariant mass cut-off $y$. 
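As in the direct case, the finite remainder of this absorption is a factorization scale logarithm: the mismatch between the factor $(4\pi\mu^2/Q^2)^\varepsilon$ of the integrated real corrections and the factor $(4\pi\mu^2/M_a^2)^\varepsilon$ of the counterterm leaves $-P\ln (M_a^2/Q^2)$ behind in the physical limit. This $\varepsilon$-expansion can be checked with a short symbolic computation, given here only as an illustration:
\begin{verbatim}
# Illustrative check of the finite factorization scale logarithm left
# by the MS-bar subtraction: the mismatch of (4 pi mu^2/Q^2)^eps in the
# integrated real corrections and (4 pi mu^2/M^2)^eps in the
# counterterm gives -ln(M^2/Q^2) for eps -> 0.
import sympy as sp

eps = sp.Symbol('epsilon')
mu2, M2, Q2 = sp.symbols('mu2 M2 Q2', positive=True)

expr = -((4 * sp.pi * mu2 / Q2) ** eps
         - (4 * sp.pi * mu2 / M2) ** eps) / eps
print(sp.simplify(sp.limit(expr, eps, 0)))   # log(Q2/M2) = -log(M2/Q2)
\end{verbatim}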
\subsubsection{Proton Initial State Corrections for Resolved Photons} Finally, the next-to-leading order ${\cal O} (\alpha_s^3)$ resolved photoproduction cross section also receives initial state corrections on the proton side. It is, however, not necessary to calculate these contributions again. The relevant diagrams in figure \ref{fig8} are the same as in the last section, as is the singularity structure in figure \ref{fig17}c). This is due to the fact that resolved photons behave like hadrons. The proton initial state formul{\ae} are obtained from those in section 4.2.7 by interchanging $(z''\leftrightarrow z''')$ and $(z_a\leftrightarrow z_b)$, so that we consider matrix elements that are singular in the variable \begin{equation} z''' = \frac{p_bp_3}{p_ap_b}. \end{equation} The parton $b$ in the proton gives a fraction \begin{equation} z_b = \frac{p_1p_2}{p_ap_b} \end{equation} of its momentum to the $2\rightarrow 2$ hard scattering process and the rest of $(1-z_b)$ to particle $3$ in the proton remnant. The list of approximated invariants is the same as in section 4.2.6 for proton initial state corrections of direct photons, and the list of contributing matrix elements is the same as in table \ref{tab6}. The integration over the singular region of phase space gives \begin{equation} \int\mbox{dPS}^{(r)} |{\cal M}|^2_{ab\rightarrow 123} (s,t,u) = \int\limits_{X_b}^1\frac{\mbox{d}z_b}{z_b} g^4 \mu^{4\varepsilon} \frac{\alpha_s}{2\pi} \left( \frac{4\pi\mu^2}{Q^2} \right) ^\varepsilon \frac{\Gamma(1-\varepsilon)}{\Gamma(1-2\varepsilon)} J_{ab\rightarrow 123}(s,t,u), \end{equation} where the $z_b$-integration is done numerically to allow for inclusion of the proton structure function. The functions \begin{equation} J_{ab\rightarrow 123}(s,t,u)=I_{ab\rightarrow 123}(s,t,u) \end{equation} are identical to those from the previous section. For the IR singularities and finite contributions proportional to the $\delta(1-z_b)$-function, the integration can of course be carried out trivially. Thus, these singularities cancel against those from the virtual corrections for resolved photoproduction. The collinear singularities proportional to the Altarelli-Parisi splitting functions are now absorbed into the proton structure functions and not the photon structure functions as before. \subsubsection{Real Corrections for Direct $\gamma\gamma$ Scattering} Real corrections to direct $\gamma\gamma$ scattering arise through the radiation of a gluon off one of the quark lines in the underlying Born process $\gamma\gamma\rightarrow q\bar{q}$. This can be inferred from figure \ref{kkkfig3}. \input{kkkfig3.tex} The calculation of the final and initial state singular parts proceeds along the same lines as for direct and resolved photoproduction. Let us first consider the final state singularity.
There, the approximated matrix element \begin{equation} |{\cal M}|^2_{\gamma\gamma\rightarrow q\bar{q}g}(s,t,u) = e^4e_q^4g^2\mu^{6\varepsilon} \frac{1}{sz'} 4C_F\left( (1-b) (1-\varepsilon)-2+\frac{2}{z'+(1-b)}\right) T_{\gamma\gamma\rightarrow q\bar{q}}(s,t,u) \end{equation} is integrated over the singular phase space to give \begin{equation} \int\mbox{dPS}^{(r)} |{\cal M}|^2_{\gamma\gamma\rightarrow 123} (s,t,u) = e^4e_q^4 \mu^{4\varepsilon} \frac{\alpha_s}{2\pi} \left( \frac{4\pi\mu^2}{s} \right) ^\varepsilon \frac{\Gamma(1-\varepsilon)}{\Gamma(1-2\varepsilon)} F_{\gamma\gamma \rightarrow 123}(s,t,u) \end{equation} with \begin{equation} F_{\gamma\gamma\rightarrow q\bar{q}g} (s,t,u) = C_F\left( \frac{2}{\varepsilon^2} +\frac{3}{\varepsilon}-\frac{2\pi^2}{3}+7-2\ln^2 y_F-3\ln y_F \right) T_{\gamma\gamma\rightarrow q\bar{q}} (s,t,u). \end{equation} Turning to the initial state, we find the approximated matrix element \begin{equation} |{\cal M}|^2_{\gamma\gamma\rightarrow q\bar{q}g}(s,t,u) = e^4e_q^4g^2\mu^{6\varepsilon} \frac{1}{sz''}\left[ z_a^2+(1-z_a)^2-\varepsilon\right] T_{\gamma q\rightarrow gq}(s,t,u). \end{equation} After integration, this yields \begin{equation} \int\mbox{dPS}^{(r)} |{\cal M}|^2_{\gamma \gamma\rightarrow 123} (s,t,u) = \int\limits_{X_a}^1\frac{\mbox{d}z_a}{z_a} e^4e_q^2 \mu^{4\varepsilon} \frac{\alpha_s}{2\pi} \left( \frac{4\pi\mu^2}{s} \right) ^\varepsilon \frac{\Gamma(1-\varepsilon)}{\Gamma(1-2\varepsilon)} I_{\gamma \gamma \rightarrow 123}(s,t,u) \end{equation} with \begin{equation} I_{\gamma\gamma\rightarrow q\bar{q}g} (s,t,u) = \left[ -\frac{1}{\varepsilon}\frac{1}{2N_C}P_{q\leftarrow \gamma}(z_a) +\frac{1}{2N_C}P_{q\leftarrow \gamma}(z_a)\ln\left( y_I\frac{1-z_a}{z_a}\right) +\frac{e_q^2}{2}\right] T_{\gamma q\rightarrow gq} (s,t,u).\hspace{7mm} \end{equation} The pole is proportional to the Altarelli-Parisi splitting function and is absorbed into the photon parton density. \subsection{Finite Next-To-Leading Order Cross Sections} We conclude this section with a summary of all singularities that appeared in the next-to-leading order cross section of jet photoproduction. There were three types of singularities: \begin{itemize} \item UV singularities in the virtual corrections \item IR singularities in the virtual and real corrections \item Collinear singularities in the initial state real corrections \end{itemize} All of them were regularized dimensionally by going from four to $d=4-2\varepsilon$ dimensions, where the regulator $\varepsilon$ had a positive sign for the ultraviolet (UV) and a negative sign for the infrared (IR) divergencies. The ultraviolet divergencies were encountered in the calculation of the virtual diagrams in section 4.1. The diagrams had an additional inner ``virtual'' particle and were classified into self-energy diagrams, propagator corrections, box diagrams, and vertex corrections. The inner loop momenta could not be observed and had to be integrated up to infinity. The resulting UV singularities could be removed by renormalizing the fields, couplings, gauge parameters, and masses in the Lagrangian through multiplicative renormalization constants $Z_i$, which show up in perturbation theory as counter terms order by order in the strong coupling $\alpha_s$. The counter terms were not given explicitly, so that all matrix element formul{\ae} in section 4.1 are already UV-divergence free and all fields, couplings etc.~have to be considered physical and renormalized. 
As an example, the counter term for the QCD Compton graph $\gamma q\rightarrow gq$ has the form \begin{eqnarray} |{\cal M}|^2_{\gamma q\rightarrow gq,CT}(s,t,u) &=& e^2e_q^2g^2\mu^{4\varepsilon}\frac{\alpha_s} {2\pi}\left( \frac{4\pi\mu^2}{s}\right) ^\varepsilon\frac{\Gamma(1-\varepsilon)}{\Gamma(1-2\varepsilon)} \nonumber \\ && \left( \frac{1}{\varepsilon}+\ln\frac{s}{\mu^2}\right) \left( \frac{1}{3}N_f-\frac{11}{6}N_C \right) T_{\gamma q\rightarrow gq}(s,t,u) \end{eqnarray} in the $\overline{\mbox{MS}}$ scheme and leads to a logarithmic dependence of the cross section on the renormalization scale $\mu$. The second type of singularities, IR divergencies, were produced at the lower end of the loop integration in the virtual corrections and through soft or collinear real particle emission in the real corrections. They were written down explicitly throughout the last sections and have to cancel according to the Kinoshita-Lee-Nauenberg theorem \cite{Kin62}. We demonstrate this cancellation separately for direct and resolved photoproduction in tables \ref{tab7} and \ref{tab8} and for direct $\gamma\gamma$ scattering in table \ref{kkktab1}. \begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c|c|} \hline Process & Color Factor & NLO Correction & Singular Parts of Matrix Elements\\ \hline \hline $\gamma q\rightarrow gq$ & $C_F$ & Virtual Corr. & $\left[ -\frac{2}{\varepsilon^2}-\frac{1} {\varepsilon}(3-2l(t))\right] T_{\gamma q\rightarrow gq}(s,t,u) $\\ & & Final State & $\left[ +\frac{1}{\varepsilon^2}+\frac{1} {2\varepsilon}(3-2l(t))\right] T_{\gamma q\rightarrow gq}(s,t,u) $\\ & & Initial State & $\left[ +\frac{1}{\varepsilon^2}+\frac{1} {2\varepsilon}(3-2l(t))\right] T_{\gamma q\rightarrow gq}(s,t,u) $\\ \cline{2-4} & $N_C$ & Virtual Corr. & $\left[ -\frac{1}{\varepsilon^2}-\frac{1} {2\varepsilon}\left( \frac{11}{3}-2l(s)+2l(t)-2l(u)\right) \right] T_{\gamma q\rightarrow gq}(s,t,u) $\\ & & Final State & $\left[ +\frac{1}{\varepsilon^2}+\frac{1} {2\varepsilon}\left( \frac{11}{3}-~~l(s)+~~l(t)-~~l(u)\right) \right] T_{\gamma q\rightarrow gq}(s,t,u) $\\ & & Initial State & $\left[ \hspace{8.5mm} +\frac{1} {2\varepsilon}\left( \hspace{6.5mm}-~~l(s)+~~l(t)-~~l(u)\right) \right] T_{\gamma q\rightarrow gq}(s,t,u) $\\ \cline{2-4} & $N_f$ & Virtual Corr. & $+\frac{1}{3\varepsilon} T_{\gamma q\rightarrow gq}(s,t,u) $\\ & & Final State & $-\frac{1}{3\varepsilon} T_{\gamma q\rightarrow gq}(s,t,u) $\\ \hline \end{tabular} \end{center} \caption[Cancellation of IR Singularities for Direct Photoproduction] {\label{tab7}{\it Cancellation of IR singularities from virtual, final state, and initial state NLO corrections for the direct partonic subprocesses and different color factors.}} \end{table} There is only one generic $2\rightarrow 2$ diagram $\gamma q\rightarrow gq$ in direct photoproduction as shown in figure \ref{fig1}, from which the photon-gluon fusion process can be deduced with the help of crossing relations. Therefore we show only this process in table \ref{tab7}, but for the three contributing color factors $C_F$, $N_C$, and $N_f$. The real corrections for the first two classes come from the process $\gamma q\rightarrow qgg$ and are equally divided among final and initial state singularities. The last class, however, occurs only in the splitting of the final gluon into an additional quark-antiquark pair with $N_f$ flavors. This is the reason why no initial state singularity is present here. 
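The cancellation pattern of table \ref{tab7} can also be verified mechanically. The following symbolic sketch, included only as an illustration, adds the virtual, final state, and initial state pole parts for each color factor class of $\gamma q\rightarrow gq$ and confirms that they sum to zero.
\begin{verbatim}
# Illustrative sketch: add the singular parts listed in the table
# above for gamma q -> g q and verify that virtual, final state, and
# initial state contributions cancel for each color factor class.
import sympy as sp

eps, ls, lt, lu = sp.symbols('epsilon l_s l_t l_u')

# class C_F
virt  = -2/eps**2 - (3 - 2*lt)/eps
final =  1/eps**2 + (3 - 2*lt)/(2*eps)
init  =  1/eps**2 + (3 - 2*lt)/(2*eps)
assert sp.simplify(virt + final + init) == 0

# class N_C
virt  = -1/eps**2 - (sp.Rational(11, 3) - 2*ls + 2*lt - 2*lu)/(2*eps)
final =  1/eps**2 + (sp.Rational(11, 3) - ls + lt - lu)/(2*eps)
init  =             (-ls + lt - lu)/(2*eps)
assert sp.simplify(virt + final + init) == 0

# class N_f (no initial state contribution)
assert sp.simplify(1/(3*eps) - 1/(3*eps)) == 0

print("all IR poles of the table cancel")
\end{verbatim}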
\begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c|c|} \hline Process & Color Factor & NLO Correction & Singular Parts of Matrix Elements\\ \hline \hline $qq'\rightarrow qq'$ & $C_F$ & Virtual Corr. & $\left[ -\frac{4}{\varepsilon^2}-\frac{1} {\varepsilon}(6+8l(s)-8l(u)-4l(t))\right] T_{qq'\rightarrow qq'}(s,t,u) $\\ & & Final State & $\left[ +\frac{2}{\varepsilon^2}+\frac{1} {\varepsilon}(3+4l(s)-4l(u)-2l(t))\right] T_{qq'\rightarrow qq'}(s,t,u) $\\ & & Initial State & $\left[ +\frac{2}{\varepsilon^2}+\frac{1} {\varepsilon}(3+4l(s)-4l(u)-2l(t))\right] T_{qq'\rightarrow qq'}(s,t,u) $\\ \cline{2-4} & $N_C$ & Virtual Corr. & $\left[ +\frac{1}{\varepsilon}(4l(s)-2l(u) -2l(t))\right] T_{qq'\rightarrow qq'}(s,t,u) $\\ & & Final State & $\left[ -\frac{1}{\varepsilon}(2l(s)-~l(u) -~l(t))\right] T_{qq'\rightarrow qq'}(s,t,u) $\\ & & Initial State & $\left[ -\frac{1}{\varepsilon}(2l(s)-~l(u) -~l(t))\right] T_{qq'\rightarrow qq'}(s,t,u) $\\ \hline $qq\rightarrow qq$ & $C_F$ & Virtual Corr. & $\left[ -\frac{4}{\varepsilon^2}-\frac{1} {\varepsilon}(6+4l(s)-4l(t)-4l(u))\right] T_{qq\rightarrow qq}(s,t,u) $\\ & & Final State & $\left[ +\frac{2}{\varepsilon^2}+\frac{1} {\varepsilon}(3+2l(s)-2l(t)-2l(u))\right] T_{qq\rightarrow qq}(s,t,u) $\\ & & Initial State & $\left[ +\frac{2}{\varepsilon^2}+\frac{1} {\varepsilon}(3+2l(s)-2l(t)-2l(u))\right] T_{qq\rightarrow qq}(s,t,u) $\\ \cline{2-4} & $N_C$ & Virtual Corr. & $\left[ +\frac{2}{\varepsilon}(2l(s)-l(t) -l(u))\right] T_{qq\rightarrow qq}(s,t,u) $\\ & & Final State & $\left[ -\frac{1}{\varepsilon}(2l(s)-l(t) -l(u))\right] T_{qq\rightarrow qq}(s,t,u) $\\ & & Initial State & $\left[ -\frac{1}{\varepsilon}(2l(s)-l(t) -l(u))\right] T_{qq\rightarrow qq}(s,t,u) $\\ \hline $q\bar{q}\rightarrow gg$ & $C_F$ & Virtual Corr. & $\left[ -\frac{2}{\varepsilon^2}-\frac{3} {\varepsilon}\right] T_{q\bar{q}\rightarrow gg}(s,t,u) $\\ & & Initial State & $\left[ +\frac{2}{\varepsilon^2}+\frac{3} {\varepsilon}\right] T_{q\bar{q}\rightarrow gg}(s,t,u) $\\ \cline{2-4} & $N_C$ & Virtual Corr. & $\left[ -\frac{2}{\varepsilon^2}-\frac{11} {3\varepsilon}\right] T_{q\bar{q}\rightarrow gg}(s,t,u) $\\ & & Final State & $\left[ +\frac{2}{\varepsilon^2}+\frac{11} {3\varepsilon}\right] T_{q\bar{q}\rightarrow gg}(s,t,u) $\\ \cline{2-4} & $N_f$ & Virtual Corr. & $+\frac{2}{3\varepsilon} T_{q\bar{q}\rightarrow gg}(s,t,u) $\\ & & Final State & $-\frac{1}{3\varepsilon} T_{q\bar{q}\rightarrow gg}(s,t,u) $\\ & & Initial State & $-\frac{1}{3\varepsilon} T_{q\bar{q}\rightarrow gg}(s,t,u) $\\ \cline{2-4} & $1$ & Virtual Corr. & $+\frac{1}{\varepsilon}l(s)\left( \lr 4N_C^3 C_F+\frac{4C_F}{N_C}\right) \frac{t^2+u^2}{tu}-16N_C^2C_F^2\frac{t^2+u^2}{s^2}\right) $ \\ & & Final State & $-\frac{1}{2\varepsilon}l(s)\left( \lr 4N_C^3 C_F+\frac{4C_F}{N_C}\right) \frac{t^2+u^2}{tu}-16N_C^2C_F^2\frac{t^2+u^2}{s^2}\right) $ \\ & & Initial State & $-\frac{1}{2\varepsilon}l(s)\left( \lr 4N_C^3 C_F+\frac{4C_F}{N_C}\right) \frac{t^2+u^2}{tu}-16N_C^2C_F^2\frac{t^2+u^2}{s^2}\right) $ \\ \cline{2-4} & $8N_C^3C_F$ & Virtual Corr. & $+\frac{1}{\varepsilon}\left( l(t)\left( \frac{u}{t} -\frac{2u^2}{s^2}\right) +l(u)\left( \frac{t}{u}-\frac{2t^2}{s^2}\right) \rr $\\ & & Final State & $-\frac{1}{2\varepsilon}\left( l(t)\left( \frac{u}{t} -\frac{2u^2}{s^2}\right) +l(u)\left( \frac{t}{u}-\frac{2t^2}{s^2}\right) \rr $\\ & & Initial State & $-\frac{1}{2\varepsilon}\left( l(t)\left( \frac{u}{t} -\frac{2u^2}{s^2}\right) +l(u)\left( \frac{t}{u}-\frac{2t^2}{s^2}\right) \rr $\\ \cline{2-4} & $8N_CC_F$ & Virtual Corr. 
& $-\frac{1}{\varepsilon}\left( \frac{u}{t}+\frac{t}{u}\right) (l(t)+l(u)) $\\ & & Final State & $+\frac{1}{2\varepsilon}\left( \frac{u}{t}+\frac{t}{u}\right) (l(t)+l(u)) $\\ & & Initial State & $+\frac{1}{2\varepsilon}\left( \frac{u}{t}+\frac{t}{u}\right) (l(t)+l(u)) $\\ \hline $gg\rightarrow gg$ & $N_C$ & Virtual Corr. & $\left[ -\frac{4}{\varepsilon^2}-\frac{22}{3\varepsilon}\right] T_{gg\rightarrow gg}(s,t,u) $\\ & & Final State & $\left[ +\frac{2}{\varepsilon^2}+\frac{11}{3\varepsilon}\right] T_{gg\rightarrow gg}(s,t,u) $\\ & & Initial State & $\left[ +\frac{2}{\varepsilon^2}+\frac{11}{3\varepsilon}\right] T_{gg\rightarrow gg}(s,t,u) $\\ \cline{2-4} & $N_f$ & Virtual Corr. & $+\frac{4}{3\varepsilon}T_{gg\rightarrow gg}(s,t,u) $\\ & & Final State & $-\frac{2}{3\varepsilon}T_{gg\rightarrow gg}(s,t,u) $\\ & & Initial State & $-\frac{2}{3\varepsilon}T_{gg\rightarrow gg}(s,t,u) $\\ \cline{2-4} & $32N_C^4C_F$ & Virtual Corr. & $+\frac{1}{\varepsilon} \left( l(s)\left( 3 -2\frac{tu}{s^2}+\frac{t^4+u^4}{t^2u^2}\right) + \mbox{cycl.~perm.} \right) $\\ & & Final State & $-\frac{1}{2\varepsilon}\left( l(s)\left( 3 -2\frac{tu}{s^2}+\frac{t^4+u^4}{t^2u^2}\right) + \mbox{cycl.~perm.} \right) $\\ & & Initial State & $-\frac{1}{2\varepsilon}\left( l(s)\left( 3 -2\frac{tu}{s^2}+\frac{t^4+u^4}{t^2u^2}\right) + \mbox{cycl.~perm.} \right) $\\ \hline \end{tabular} \end{center} \caption[Cancellation of IR Singularities for Resolved Photoproduction] {\label{tab8}{\it Cancellation of IR singularities from virtual, final state, and initial state NLO corrections for the resolved partonic subprocesses and different color factors.}} \end{table} For resolved photoproduction, we have the four generic processes in figure \ref{fig2}. They are presented in table \ref{tab8} and divided further into color factor classes. All other processes can be obtained through crossing. The real corrections for quark-quark scattering with different and like flavors arise simply from the emission of an additional gluon and factorize the complete Born matrix element. Final and initial state corrections contribute in equal parts. For processes involving more gluons, the situation is more complex. In $q\bar{q}\rightarrow gg$, an additional gluon leads to different color factors depending on whether it is radiated in the initial state $(C_F)$ or the final state $(N_C)$. A final gluon can also split up into $N_f$ flavors, accounting for half of the real corrections in the class $N_f$. The other half comes from the process $qg\rightarrow qgg$, where the initial gluon splits up into a quark-antiquark pair with $N_f$ flavors. So far, the factorization property of $q\bar{q}\rightarrow gg$ holds. However, the logarithmic contributions to the emission of a third initial or final gluon in the color classes $1$, $8N_C^3C_F$, and $8N_CC_F$ only factorize parts of the leading order cross section. In the completely gluonic process $gg\rightarrow gg$ it does not matter where the third gluon is radiated (color class $N_C$). Still, a final gluon can split up into $N_f$ flavors or an initial gluon can have come from $N_f$ different quarks. Finally, the logarithmic contributions proportional to $32N_C^4C_F$ only factorize parts of the Born cross section, but are symmetric under cyclic permutations of the Mandelstam variables, as they must be for a completely symmetric process.
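The same symbolic check goes through row by row for table \ref{tab8}. A minimal continuation of the sketch above (same symbols and imports), here for the $C_F$ class of $qq'\rightarrow qq'$ and the $N_f$ class of $q\bar{q}\rightarrow gg$:
\begin{verbatim}
# continuation of the previous sketch: two sample rows of table 8
CF_qqp  = [-4/eps**2 - (6 + 8*ls - 8*lu - 4*lt)/eps,
           +2/eps**2 + (3 + 4*ls - 4*lu - 2*lt)/eps,
           +2/eps**2 + (3 + 4*ls - 4*lu - 2*lt)/eps]
Nf_qqgg = [+2/(3*eps), -1/(3*eps), -1/(3*eps)]

print(sp.simplify(sum(CF_qqp)))    # -> 0 for qq' -> qq', class C_F
print(sp.simplify(sum(Nf_qqgg)))   # -> 0 for q qbar -> gg, class N_f
\end{verbatim}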
\begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c|c|} \hline Process & Color Factor & NLO Correction & Singular Parts of Matrix Elements\\ \hline \hline $\gamma\gamma\rightarrow q\bar{q}$ & $C_F$ & Virtual Corr. & $\left[ -\frac{2}{\varepsilon^2}-\frac{1}{\varepsilon}(3-2l(t))\right] T_{\gamma\gamma\rightarrow q\bar{q}}(s,t,u) $\\ & & Final State & $\left[ +\frac{2}{\varepsilon^2}+\frac{1}{\varepsilon}(3-2l(t))\right] T_{\gamma\gamma\rightarrow q\bar{q}}(s,t,u) $\\ \hline \end{tabular} \end{center} \caption{\label{kkktab1} {\it Cancellation of IR singularities from virtual and final state NLO corrections for direct $\gamma\gamma$ scattering.}} \end{table} For direct $\gamma\gamma$ scattering, there is only one Born matrix element $\gamma\gamma\rightarrow q\bar{q}$ and only one color class $C_F$. The virtual singularities are canceled by the final state singularities alone as shown in table \ref{kkktab1}. The third and last class of singularities are those in the initial state from collinear real particle emission. Although they are generally classified as infrared singularities as well, they are not included in the tables \ref{tab7}, \ref{tab8}, and \ref{kkktab1} above. These single poles proportional to the Altarelli-Parisi splitting functions $P_{q\leftarrow q}(z)$, $P_{g\leftarrow q}(z)$, $P_{q\leftarrow g}(z)$, and $P_{g\leftarrow g}(z)$ do not cancel against similar poles from virtual corrections. They are absorbed into the renormalized photon and proton structure functions according to the $\overline{\mbox{MS}}$ scheme and leave behind a logarithmic dependence of the hard cross section on the factorization scales $M_a$ and $M_b$. The finite next-to-leading order cross section for the photoproduction of two jets was given in section 2.5 as \begin{equation} \frac{\mbox{d}^3\sigma}{\mbox{d}E_T^2\mbox{d}\eta_1\mbox{d}\eta_2} = \sum_b x_a F_{a/e}(x_a,M_a^2) x_b F_{b/p}(x_b,M_b^2) \frac{\mbox{d}\sigma}{\mbox{d}t}(ab \rightarrow p_1p_2) \end{equation} with the partonic cross section \begin{equation} \frac{\mbox{d}\sigma}{\mbox{d}t} (ab \rightarrow p_1p_2) = \frac{1}{2s} \overline{|{\cal M}|^2}\frac{\mbox{dPS}^{(2)}}{\mbox{d}t}. \end{equation} We can now return to the physical four dimensions by letting $\varepsilon\rightarrow 0$ everywhere in the calculation. The strong coupling constant $\alpha_s$ and the parton densities in the electron $F_{a/e}(x_a,M_a^2)$ and in the proton $F_{b/p}(x_b,M_b^2)$ are renormalized and defined in the $\overline{\mbox{MS}}$ scheme. \setcounter{equation}{0} \section{Numerical Results for Photoproduction} In this section we present numerical results first for single-jet and dijet NLO cross sections in complete photoproduction, as they were defined in section 2.5. We use the analytical results that have been calculated in leading order in section 3 and in next-to-leading order in section 4. All UV, IR, and collinear initial state singularities canceled or could be removed through a renormalization procedure, leading to finite results in four dimensions. Equivalent numerical results for single-jet and dijet NLO cross sections in complete $\gamma\gamma$ collisions are presented in the next chapter. The cross section formul{\ae} are implemented in a FORTRAN computer program, which consists of four main parts as explained in table \ref{tab9}. \begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c|} \hline Part No.
& Number of Jets & Contributions \\ \hline \hline 1 & 2 & Analytical contributions in LO and NLO \\ \hline 2 & 2 & Numerical contributions in jet cone 1 \\ \hline 3 & 2 & Numerical contributions in jet cone 2 \\ \hline 4 & 3 & Numerical contributions outside jet cones \\ \hline \end{tabular} \end{center} \caption[Organization of the FORTRAN Program] {\label{tab9}{\it Organization of the FORTRAN program into four main parts.}} \end{table} The first part contains the analytical results of sections 3 and 4, and the numerical parts are needed for an implementation of the Snowmass jet definition (see section 5.1). Obviously, the third part is only needed for two-jet cross sections. For one-jet cross sections, this region of phase space is included in part four. The main task of the computer program is to integrate over different parameters. For example, total or partly differential cross sections are integrated over $E_T$, $\eta_1$ and/or $\eta_2$. Furthermore, the momentum fractions of the initial photon in the electron $x_a$, the parton in the photon $y_a$, and the parton in the proton $x_b$ have to be integrated. Additional momentum fractions $z_a$ and $z_b$ appear in the initial state corrections. Finally, the numerical contributions have to be integrated over the phase space of the third parton $E_{T_3}$, $\eta_3$, and $\phi_3$. The two $\delta$-functions for the momentum fractions of the partons going into the hard $2\rightarrow 2$ scattering $X_a$ and $X_b$, \begin{equation} \delta \left( X_a -\frac{1}{2E_e}\sum_i E_{T_i}e^{-\eta_i} \right) ~\mbox{and}~ \delta \left( X_b -\frac{1}{2E_p}\sum_i E_{T_i}e^{~\eta_i} \right) , \end{equation} reduce the total number of integrations by two. The non-trivial task of computing up to seven-dimensional integrals is solved with the Monte Carlo routine VEGAS written by G.P. Lepage \cite{Lep78}, which adapts the spacing of the integration bins to the size of the integrand in the bin. The input parameters for our predictions will be kept constant throughout this chapter, unless stated otherwise. All cross sections are for HERA conditions, where electrons of energy $E_e=26.7$~GeV collide with protons of energy $E_p=820$~GeV. Positive rapidities $\eta$ correspond to the proton direction. We use the Weizs\"acker-Williams approximation of eq.~(\ref{eq45}) with $Q_{\rm max}^2 = 4~\mbox{GeV}^2$ as in the ZEUS experiment, but do not restrict the range of longitudinal photon momentum $x_a$ in the electron. For the parton densities in the proton, we choose the next-to-leading order parametrization CTEQ3 in the $\overline{\mbox{MS}}$ scheme, which already includes HERA deep inelastic scattering data. The corresponding $\Lambda$ value of $\Lambda^{(4)}=239$~MeV is also used in the two-loop calculation of the strong coupling constant $\alpha_s$ with four flavors. We do not use the one-loop approximation for the leading order calculations, so that the effects of the next-to-leading order hard scattering contributions are isolated. The parton densities in the photon are taken from the NLO fit of GRV and transformed from the DIS$_{\gamma}$ into the $\overline{\mbox{MS}}$ scheme. The renormalization and factorization scales are equal to $E_T$. We use the Snowmass jet definition with $R=1$, no jet double counting, and no $R_{\rm sep}$ parameter. This makes the cross sections independent of the phase space slicing parameter $y$, which is fixed at $y=10^{-3}$.
This value is sufficiently small to justify the omission of ${\cal O} (y)$ terms in the analytical calculation (see section 5.1). In section 5.1, we check our analytical results for $y_{\rm cut}$-independence and against two existing programs for direct and resolved one-jet production. Section 5.2 contains studies of the dependence of the cross sections on the renormalization and factorization scales and of various cancellation mechanisms. Theoretical predictions for one- and two-jet cross sections are presented in sections 5.3 and 5.4, before we conclude this section with a comparison of our calculation to data from the H1 and ZEUS collaborations in section 5.5. \subsection{Check of the Analytical Results} The first check of our analytical results for the photoproduction of jets in next-to-leading order of QCD has already been given in section 4.3. There it was shown that all infrared divergencies cancel consistently between the virtual and the real corrections, giving a finite cross section in four dimensions. Therefore, no singular terms have been missed. This cancellation mechanism could only work because we integrated the $2\rightarrow 3$ matrix elements over regions where two final state particles or an initial and a final state particle had an invariant mass $s_{ij}=(p_i+p_j)^2$ smaller than a fraction $y_{F,I,J}$ of the center-of-mass energy $s$. Naturally, this leads to a dependence of the cross sections on these phase space slicing parameters $y_{F,I,J}$. We choose $y_{\rm cut}=y_F=y_I=y_J$ in the following. One possibility to deal with the dependence of the cross section on the invariant mass cut-off $y_{\rm cut}$ is to use it as a definition for the experimentally observed jets. As we already noted in section 2.4, the experimental jet definition only has a theoretical correspondence beyond leading order. Furthermore, we mentioned a special kind of algorithm, i.e.~the JADE cluster algorithm, which uses exactly the invariant mass criterion to cluster hadrons into jets. For $e^+e^-$-experiments, one can then identify the theoretical (partonic) with the experimental (hadronic) cut-off \begin{equation} y_{\rm cut}^{\rm theory} = y_{\rm cut}^{\rm experiment}. \end{equation} In this case, the corresponding cross sections are for {\em exclusive} dijet production. The numerical value of $y_{\rm cut}$ is constrained in a two-fold manner. First, the real corrections contain single and quadratic logarithmic terms $\ln y_{\rm cut}$ and $\ln^2 y_{\rm cut}$, coming from the single and quadratic $1/\varepsilon$ poles, which force the cross section to become negative for $y_{\rm cut} < 10^{-2}$. Therefore one should choose $y_{\rm cut} \geq 10^{-2}$. Second, we have always assumed the singular region to be small in the real corrections. This means we have omitted terms like $y_{\rm cut} \ln y_{\rm cut},~y_{\rm cut},$ ... This forces us to take $y_{\rm cut} \ll 1$. The typical value for the JADE algorithm is therefore given by $y_{\rm cut} \simeq 10^{-2}$. The omitted terms of ${\cal O} (y_{\rm cut})$ can be calculated numerically to improve the precision of the calculation. This makes it necessary to include the full and unapproximated $2\rightarrow 3$ matrix elements in the FORTRAN program, which are then integrated numerically in the region between the analytical cut-off $y_{\rm cut}^{\rm theory}$ and the experimental cut-off $y_{\rm cut}^{\rm experiment}$.
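Before turning to the figures, the structure of this cancellation can be illustrated with a toy model (an illustrative Python sketch with freely invented coefficients $a$, $b$, $c$, $d$; the true coefficients are process dependent). The analytical two-body part carries the explicit slicing logarithms with one sign, the numerical three-body part with the opposite sign, so that the sum is flat in $y_{\rm cut}$ up to the omitted ${\cal O}(y_{\rm cut})$ terms:
\begin{verbatim}
import math

a, b, c, d = 5.0, 1.2, 0.4, 2.0     # invented, for illustration only

def sigma_2body(y):                 # Born + virtual + sliced real emission
    return a - b*math.log(y) - c*math.log(y)**2

def sigma_3body(y):                 # numerical part; O(y) terms dropped
    return d + b*math.log(y) + c*math.log(y)**2

for y in (1e-1, 1e-2, 1e-3, 1e-4):
    print(y, sigma_2body(y), sigma_2body(y) + sigma_3body(y))
\end{verbatim}
The two-body part turns negative for small $y_{\rm cut}$, as described above, while the sum stays constant at $a+d$.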
It is important to note that the full $2\rightarrow 3$ matrix elements have to be integrated {\em after partial fractioning}. Regions where a single invariant is small but does not correspond to a pole of the matrix element cannot be neglected. The theoretical invariant mass cut is then reduced to a purely technical variable, on which the physical prediction will no longer depend. If one integrates over the phase space outside the experimental cut-off as well, one arrives at {\em inclusive} dijet cross sections. In this case, the full $2\rightarrow 3$ matrix elements contribute as a leading-order process through the production of three jets, where the third jet is unobserved. \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=plot9.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,%
height=12cm,clip=,angle=270} \end{picture}} \caption[$y_{\rm cut}$-Independence for Direct Photoproduction] {\label{plot9}{\it Inclusive single-jet cross section d$^2\sigma/$d$E_T$d$\eta$ for direct photons at $E_T=20$~GeV and $\eta=1$, as a function of $y_{\rm cut}$. The analytical (dashed) and numerical (dotted) contributions have a quadratic logarithmic dependence on $y_{\rm cut}$, which cancels in the full next-to-leading order result (dot-dashed curve).}} \end{center} \end{figure} \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=plot10.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,%
height=12cm,clip=,angle=270} \end{picture}} \caption[$y_{\rm cut}$-Independence for Resolved Photoproduction] {\label{plot10}{\it Inclusive single-jet cross section d$^2\sigma/$d$E_T$d$\eta$ for resolved photons at $E_T=20$~GeV and $\eta=1$, as a function of $y_{\rm cut}$. Again the sum of two-body (dashed) and three-body (dotted) curves is $y_{\rm cut}$-independent (dot-dashed curve) like the leading order (full) curve.}} \end{center} \end{figure} At HERA, we have to deal with a partly hadronic process, for which cluster algorithms are not so well suited. We will therefore follow the two experiments H1 and ZEUS and use the Snowmass cone algorithm, where hadrons $i$ are combined into a single jet if they lie inside a cone around the jet center in rapidity-azimuth space. The experimental cone size $R$ was already defined in section 2.4 \begin{equation} R_i = \sqrt{(\eta_i-\eta_J)^2+(\phi_i-\phi_J)^2} \leq R \end{equation} and will be chosen to be $R=1$ in the following. As $R$ takes the role of $y_{\rm cut}^{\rm experiment}$, we expect the sum of the analytical two-body and numerical three-body cross sections to be independent of $y_{\rm cut}^{\rm theory}$. This is the second decisive test of our analytical results. In figure \ref{plot9} we plot the inclusive single-jet cross section d$^2\sigma/$d$E_T$d$\eta$ for direct photons at $E_T=20$~GeV and $\eta=1$ as a function of $y_{\rm cut}$. The leading order prediction (full curve) is trivially independent of $y_{\rm cut}$. The analytical two-body contributions (dashed curve) exhibit a quadratic logarithmic dependence on $y_{\rm cut}$ and turn negative below $y_{\rm cut}= 2.3\cdot 10^{-2}$. We combine the numerical contributions from inside and outside the jet cone into the three-body contribution (dotted curve), which also depends quadratically logarithmically on $y_{\rm cut}$, but is positive. The sum of two- and three-body contributions is the full inclusive next-to-leading order result (dot-dashed curve) and is independent of $y_{\rm cut}$.
This proves that the $y_{\rm cut}$-dependent finite terms in our analytical results are correct. Towards very small values of $y_{\rm cut}\simeq 10^{-5}$, the dot-dashed curve drops slightly. This is due to the limited accuracy in the numerical integration of the three-body contributions and can be remedied with increased computer power. Figure \ref{plot10} is the analogue of figure \ref{plot9} for resolved photoproduction. The two-body curve turns negative at a slightly smaller value of $y_{\rm cut}= 1.5\cdot 10^{-2}$. As the number of contributing partonic subprocesses is much larger than in the direct case, more computer time was needed for similar statistical accuracy. The ultimate test for our analytical and numerical results consists in comparing our predictions with existing ones, which were obtained with different methods by B\"odeker \cite{Bod92} (direct case) and by Salesch \cite{Sal93} (resolved case). This can only be done for the single-jet inclusive cross section d$^2\sigma/$d$E_T$d$\eta$, since dijet cross sections are not available. B\"odeker's results are obtained with the subtraction method of Ellis et al.~\cite{Ell89} to cancel soft and collinear singularities. Salesch calculated finite cone corrections in addition to the results by Aversa et al. \cite{Ave88}, who used the so-called small cone approximation. Here, a small jet cone radius $\delta$ is used to slice the phase space and isolate the final state divergencies. The dependence on $\delta$ cancels when the finite cone corrections are added. We take exactly the same parameters in our calculation and in the reference calculations. Furthermore, we choose $y_{\rm cut} = 10^{-3}$ to ensure that we are not sensitive to the omission of terms of ${\cal O} (y_{\rm cut})$. The $E_T$ dependence is checked in figure \ref{plot11} against B\"odeker for direct photons and in figure \ref{plot12} against Salesch for resolved photons at $\eta = 1$. Our prediction (full curve) agrees with the older calculations (open circles) at a level better than 1\%. The numerical statistics are so good that the error bars in B\"odeker's and Salesch's programs are not seen. For comparison, we include the leading order dotted curves, where the predictions are identical. \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=kkkdptsgs.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,%
height=12cm,clip=,angle=270} \end{picture}} \caption[Comparison of $E_T$-Dependence with B\"odeker for Direct Photons] {\label{plot11}{\it Inclusive single-jet cross section d$^2\sigma/$d$E_T$d$\eta$ for direct photons at $\eta=1$, as a function of $E_T$. The subtraction method results (open circles) agree with our phase space slicing prediction (full curve).}} \end{center} \end{figure} \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=kkkrptsgs.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,%
height=12cm,clip=,angle=270} \end{picture}} \caption[Comparison of $E_T$-Dependence with Salesch for Resolved Photons] {\label{plot12}{\it Inclusive single-jet cross section d$^2\sigma/$d$E_T$d$\eta$ for resolved photons at $\eta=1$, as a function of $E_T$. The small cone approximation and finite cone corrections (open circles) agree with our invariant mass cut method (full curve). The leading order (dotted) curve is shown for comparison.}} \end{center} \end{figure} We compare the same inclusive single-jet cross section as a rapidity distribution for $E_T = 20$~GeV.
We now choose a linear scale for the ordinate in figures \ref{plot13} for direct and \ref{plot14} for resolved photoproduction. This makes it possible to see the error bars in the reference calculations (open circles), which are at the same level as the agreement between the different programs. Errors of $\sim 1\%$ present in our program are not shown. With more computer time, the agreement would certainly be even better. From these comparisons, we conclude that the cut-independent finite terms in our analytical results and in our numerical FORTRAN program are also correct. Moreover, it was possible to check the transverse energy, rapidity, and other distributions not shown here for every single direct and resolved partonic subprocess. We found perfect agreement everywhere. \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=kkkdysgs.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,%
height=12cm,clip=,angle=270} \end{picture}} \caption[Comparison of $\eta$-Dependence with B\"odeker for Direct Photons] {\label{plot13}{\it Inclusive single-jet cross section d$^2\sigma/$d$E_T$d$\eta$ for direct photons at $E_T=20$~GeV, as a function of $\eta$. Our predictions (full curve) agree with B\"odeker's predictions (open circles).}} \end{center} \end{figure} \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=kkkrysgs.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,%
height=12cm,clip=,angle=270} \end{picture}} \caption[Comparison of $\eta$-Dependence with Salesch for Resolved Photons] {\label{plot14}{\it Inclusive single-jet cross section d$^2\sigma/$d$E_T$d$\eta$ for resolved photons at $E_T=20$~GeV, as a function of $\eta$. Good agreement is seen for Salesch's (open circles) and our calculation (full curve). The leading order (dotted) curve is shown for comparison.}} \end{center} \end{figure} It is clear that the results presented in this work can also be applied to the calculation of inclusive jet cross sections in $p\overline{p}$ collisions by just replacing the photon parton distributions with those of the antiproton and considering only the resolved case. We could have extended our tests further by comparing with published results for the $p\overline{p}$ case. Such results have been given in \cite{Ell89} using the subtraction method and in \cite{y30} based on the phase-space slicing method. In connection with work on the large transverse momentum production of single inclusive jets in $p\overline{p}$ collisions \cite{y31}, in which the program based on \cite{Sal93} was used, we compared with results from \cite{Ell89} and found good agreement. A comparison with the two-jet results in \cite{y30} has not been attempted. After completion of this work, two new complete calculations of jet production in low $Q^2$ $ep$ collisions have been presented. In \cite{y33}, the subtraction method for canceling infrared and collinear singularities was applied to obtain various inclusive one- and two-jet cross sections. These authors compared inclusive single-jet and two-jet cross sections with our published results \cite{x12} and found good agreement \cite{y33}. Another work \cite{y32} also uses the phase-space slicing method, but with two separate parameters for the infrared and the collinear singular regions as in their earlier work \cite{Bae89a}. Their results can be compared to ours in \cite{x12}; a direct, thorough comparison with identical input has, however, not yet been performed.
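For reference, the Snowmass convention used in all of these checks can be cast into a few lines. The sketch below (illustrative Python; the actual implementation is part of the FORTRAN program) collects the partons inside a cone of size $R$ around a trial jet axis and recomputes the $E_T$-weighted axis; in the next-to-leading order three-body contributions, at most two partons can be clustered into one jet:
\begin{verbatim}
import math

def snowmass_cone(partons, eta_J, phi_J, R=1.0):
    # partons: list of (E_T, eta, phi); keep those with
    # R_i = sqrt((eta_i - eta_J)^2 + (phi_i - phi_J)^2) <= R
    # (assumes at least one parton in the cone; no phi wrap-around)
    members = [p for p in partons
               if math.hypot(p[1] - eta_J, p[2] - phi_J) <= R]
    E_TJ = sum(et for et, eta, phi in members)
    eta_jet = sum(et*eta for et, eta, phi in members)/E_TJ
    phi_jet = sum(et*phi for et, eta, phi in members)/E_TJ
    return E_TJ, eta_jet, phi_jet

# two nearby partons are clustered, the soft wide-angle one is not
partons = [(15.0, 1.0, 0.0), (6.0, 1.4, 0.3), (2.0, 3.0, 2.0)]
print(snowmass_cone(partons, eta_J=1.1, phi_J=0.1))
\end{verbatim}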
\subsection{Renormalization- and Factorization-Scale Dependences} From the numerical checks in section 5.1, we can now be sure to have a reliable computer program for the calculation of jet cross sections in photoproduction. We will use this program here to study the dependence of these cross sections on the three relevant scales: \begin{itemize} \item the renormalization scale $\mu$ \item the factorization scale for the photon $M_a$ \item the factorization scale for the proton $M_b$ \end{itemize} We expect a reduced dependence in next-to-leading order for all three scales compared to leading order. As stated in sections 4.1, 2.2, and 2.3, we take $\mu$, $M_a$, and $M_b$ to be of ${\cal O} (E_T)$. Therefore, we will always plot the single-jet inclusive cross section d$^2\sigma$/d$E_T$d$\eta$ as a function of scale$/E_T$ in the following. Direct and resolved photoproduction will be presented separately at a fixed transverse jet energy of $E_T=20$~GeV and at a rapidity of $\eta=1$, where the cross sections are at their maximum. We start with the dependence on the renormalization scale $\mu$. Leading order ${\cal O} (\alpha\alpha_s)$ cross sections depend on $\mu$ only through the running of the strong coupling constant $\alpha_s(\mu^2)$, which has to be implemented in the one-loop approximation \begin{equation} \alpha_s(\mu^2) = \frac{12\pi}{(33-2N_f)\ln \frac{\mu^2}{\Lambda^2}} \end{equation} for consistency. The results are shown as dotted curves in figures \ref{plot15} and \ref{plot16}. For direct photons, the curve drops by one half when going from $\mu=E_T/4$ to $\mu=4E_T$. The resolved photon hard cross section is one order higher in $\alpha_s$ $({\cal O} (\alpha_s^2))$ and therefore drops by as much as a factor of four. This strong $1/\ln(\mu^2)$-behavior is slightly weakened in the two-loop approximation \begin{equation} \alpha_s(\mu^2) = \frac{12\pi}{(33-2N_f)\ln \frac{\mu^2} {\Lambda^2}} \left( 1-\frac{6(153-19N_f)}{(33-2N_f)^2} \frac {\ln (\ln \frac{\mu^2}{\Lambda^2} )}% {\ln \frac{\mu^2}{\Lambda^2}} \right), \end{equation} where one adds further logarithmic terms in $\mu^2$. These curves are shown as dashed curves in figures \ref{plot15} and \ref{plot16}. However, going from one-loop to two-loop $\alpha_s$ affects not so much the shape as the overall normalization. The cross sections are reduced by 20\% in the direct and by almost 40\% in the resolved case. Although not physically relevant, the dashed curves can be compared to the full curves and enable us to see solely the effect of the explicit logarithmic terms \begin{equation} \frac{1}{\varepsilon} \left( \frac{4\pi\mu^2}{s} \right) ^\varepsilon \doteq \frac{1}{\varepsilon} + \ln \frac{4\pi\mu^2}{s}, \end{equation} that appear in the one-loop corrections to the hard cross section in section 4.1. They partly cancel the $\alpha_s$-dependence, so that the full curves in figures \ref{plot15} and \ref{plot16} are much flatter than the leading order curves with one- and two-loop $\alpha_s$. The direct curve varies by only 20\% and exhibits a maximum near $\mu/E_T\simeq 1/2$. According to the principle of minimal sensitivity \cite{Ste81}, $\mu = 1/2 E_T$ would then be the optimal scale. The dependence of the resolved photoproduction cross section is also weakened and amounts to a factor of two in NLO. It meets the one-loop LO result at $\mu \simeq 1/2 E_T$. This scale is sometimes preferred since the NLO corrections are small in its vicinity.
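The two approximations are readily compared numerically. A short sketch (illustrative Python, not the FORTRAN implementation) with the values $\Lambda^{(4)}=239$~MeV and $N_f=4$ quoted above reads:
\begin{verbatim}
import math

LAMBDA4 = 0.239          # GeV, Lambda^(4) of the CTEQ3 fit
NF = 4
B0 = 33 - 2*NF

def alpha_s_1loop(mu):
    return 12*math.pi/(B0*math.log(mu**2/LAMBDA4**2))

def alpha_s_2loop(mu):
    L = math.log(mu**2/LAMBDA4**2)
    return 12*math.pi/(B0*L)*(1 - 6*(153 - 19*NF)/B0**2*math.log(L)/L)

E_T = 20.0               # GeV
for f in (0.25, 0.5, 1.0, 2.0, 4.0):
    mu = f*E_T           # scan the scale range shown in the figures
    print(f, round(alpha_s_1loop(mu), 4), round(alpha_s_2loop(mu), 4))
\end{verbatim}
At $\mu=E_T=20$~GeV this gives $\alpha_s\simeq 0.170$ at one loop and $\alpha_s\simeq 0.139$ at two loops, consistent with the normalization reductions of one power (direct) and two powers (resolved) of $\alpha_s$ quoted above.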
\begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=plot15.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,%
height=12cm,clip=,angle=270} \end{picture}} \caption[Renormalization Scale Dependence for Direct Photoproduction] {\label{plot15}{\it Inclusive single-jet cross section d$^2\sigma/$d$E_T$d$\eta$ for direct photons at $E_T=20$~GeV and $\eta=1$, as a function of $\mu/E_T$. From $\mu=E_T/4$ to $\mu=4 E_T$, the leading order predictions drop by one half. The two-loop curve (dashed) is 20\% smaller than the one-loop curve (dotted). The next-to-leading order (full) curve only varies by 20\%.}} \end{center} \end{figure} \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=plot16.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,%
height=12cm,clip=,angle=270} \end{picture}} \caption[Renormalization Scale Dependence for Resolved Photoproduction] {\label{plot16}{\it Inclusive single-jet cross section d$^2\sigma/$d$E_T$d$\eta$ for resolved photons at $E_T=20$~GeV and $\eta=1$, as a function of $\mu/E_T$. From $\mu=E_T/4$ to $\mu=4 E_T$, the leading order predictions drop by a factor of four. The two-loop curve (dashed) is 40\% smaller than the one-loop curve (dotted). The next-to-leading order (full) curve only varies by a factor of two.}} \end{center} \end{figure} The next scale that we will study is the factorization scale in the photon $M_a$. It separates the soft parton content in the photon given by the parton distribution function $F_{a/\gamma}(y_a,M_a^2)$ from the hard partonic cross section d$\sigma_{ab} (ab\rightarrow\mbox{jets})$. As discussed in section 2.3, direct and resolved processes are only separable in leading order. The direct parton distribution is then simply a $\delta$-function $\delta(1-y_a)$ and does not depend on any scale. This can be seen from the dotted curve in figure \ref{plot17}. The resolved part depends, however, strongly on the scale $M_a$, as can already be inferred from $e^+e^-$ data on the $F_2^{\gamma}$ structure function. We expect the $M_a$-dependence to be dominated by the asymptotic pointlike contribution to $F_2^{\gamma}$ \begin{equation} F_2^{\gamma}(x_B,M_a^2) = \alpha\left[ \frac{1}{\alpha_s(M_a^2)}a(x_B)+b(x_B)\right] \end{equation} that behaves like $\ln M_a^2$. Clearly we see this behavior in the leading order dot-dashed curve in figure \ref{plot17}, which is identical to the dotted curve in figure \ref{plot18} and is obtained with the GRV photon distributions. As the GRV distributions are fitted in the DIS$_{\gamma}$-scheme, we transform them into the $\overline{\mbox{MS}}$ scheme by adding the correct finite terms independent of $M_a$. The factorization scheme dependence was studied by B\"odeker, Kramer, and Salesch \cite{Bod94} and will not be studied here again. The main result is that the dependence disappears in next-to-leading order, because the transformation of the resolved part is compensated through scheme dependent terms in the initial state singularities of the direct photon.
The renormalization of the singularities in section 4.2.5 \begin{equation} F_{a/\gamma} (y_a,M_a^2) = \int_{y_a}^1 \frac{\mbox{d}z_a}{z_a} \left[ \delta_{a\gamma } \delta (1-z_a) + \frac{\alpha}{2\pi} R_{q\leftarrow \gamma }(z_a, M_a^2)\right] F_{\gamma /\gamma} \left( \frac{y_a}{z_a}\right) \end{equation} also gave rise to scale-dependent terms \begin{equation} R_{q \leftarrow \gamma } (z_a, M_a^2) = -\frac{1}{\varepsilon}P_{q\leftarrow \gamma }(z_a)\frac{\Gamma (1-\varepsilon)} {\Gamma (1-2\varepsilon)} \left( \frac{4\pi\mu^2}{M_a^2} \right) ^\varepsilon \end{equation} through the $1/\varepsilon$ pole \begin{equation} -\frac{1}{\varepsilon}P_{q\leftarrow \gamma }(z_a)\left[ \left( \frac{4\pi\mu^2}{s}\right) ^\varepsilon -\left( \frac{4\pi\mu^2}{M_a^2}\right) ^\varepsilon\right] = -P_{q\leftarrow \gamma }(z_a) \ln\left( \frac{M_a^2}{s}\right) . \end{equation} We can see this negative logarithmic behavior $(-\ln M_a^2)$ in the dashed curve in figure \ref{plot17} for next-to-leading order direct photoproduction. It has the opposite sign to the LO resolved contribution, so that the sum of both will eventually be almost independent of $M_a$ (full curve in figure \ref{plot17}). The next-to-leading order resolved contribution has a similar, though slightly steeper, shape than the leading order one and is shown as a full curve in figure \ref{plot18}. Therefore, the dependence of the complete NLO calculation on the photon factorization scale is slightly stronger than in the full line of figure \ref{plot17}. The difference will be compensated in next-to-next-to-leading order (NNLO) of direct photoproduction, which is beyond the scope of our calculation. These findings agree with the earlier analysis of the $M_a$ dependence of the inclusive single-jet cross sections in \cite{Bod94} and constitute a nice check of our formalism. \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=plot17.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,%
height=12cm,clip=,angle=270} \end{picture}} \caption[Photon Factorization Scale Dependence for Direct and Complete Photoproduction] {\label{plot17}{\it Inclusive single-jet cross section d$^2\sigma/$d$E_T$d$\eta$ for direct photons at $E_T=20$~GeV and $\eta=1$, as a function of $M_a/E_T$. The LO direct curve (dotted) is independent of $M_a$ as is the sum (full curve) of NLO direct (dashed) and LO resolved (dot-dashed) contributions.}} \end{center} \end{figure} \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=plot18.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,%
height=12cm,clip=,angle=270} \end{picture}} \caption[Photon Factorization Scale Dependence for Resolved Photoproduction] {\label{plot18}{\it Inclusive single-jet cross section d$^2\sigma/$d$E_T$d$\eta$ for resolved photons at $E_T=20$~GeV and $\eta=1$, as a function of $M_a/E_T$. The logarithmic behavior of the resolved part is slightly steeper in NLO (full curve) than in LO (dotted curve).}} \end{center} \end{figure} For the proton, we have a similar scale $M_b$ as in the photon case that separates the soft and the hard part of the hadronic and partonic cross sections. Of course, the proton is not point-like and does not have a direct component. The scale dependence of the proton distribution functions $F_{b/p}(x_b,M_b^2)$ is governed by the AP equations.
If we solve them iteratively, we can assume that in \begin{equation} \frac{\mbox{d}F_{b/p}(x_b,M_b^2)}{\mbox{d}\ln M_b^2} = \frac{\alpha_s^{(0)}}{2\pi}\int\limits_{x_b}^1\frac{\mbox{d}z_b}{z_b} P_{b\leftarrow b'}\left( \frac{x_b}{z_b}\right) F_{b/p}^{(0)}(z_b) \end{equation} the starting distributions $F_{b/p}^{(0)}(z_b)$ and the coupling constant $\alpha_s^{(0)}$ are scale independent. Then, the integral over $\mbox{d} \ln M_b^2$ can be carried out leading to a simple logarithmic behavior \begin{equation} F_{b/p}(x_b,M_b^2) = F_{b/p}^{(0)}(x_b) + \frac{\alpha_s^{(0)}}{2\pi}\int\limits_{x_b}^1\frac{\mbox{d}z_b}{z_b} P_{b\leftarrow b'}\left( \frac{x_b}{z_b}\right) F_{b/p}^{(0)}(z_b) \ln \left( \frac{M_b^2}{s}\right) . \end{equation} For the LO resolved photon contribution, this approximation is obviously good enough as can be seen from the dotted curve in figure \ref{plot20}. The cross section drops by 20\% when going from $M_b=1/4E_T$ to $M_b=4E_T$. Of course, the full solution of the AP equations is not so simple, and the LO direct photon contribution in figure \ref{plot19} deviates slightly from a simple logarithmic behavior. The variation of the cross section here amounts to only 6.5\%. In next-to-leading order, initial state singularities arise as in the photon case and are absorbed into the proton structure function \begin{equation} F_{b/p} (x_b,M_b^2) = \int_{x_b}^1 \frac{\mbox{d}z_b}{z_b} \left[ \delta_{bb'} \delta (1-z_b) + \frac{\alpha_s}{2\pi} R'_{b\leftarrow b'} (z_b, M_b^2) \right] F_{b'/p} \left( \frac{x_b}{z_b}\right) . \end{equation} Again, finite terms accompany the pole in the $\overline{\mbox{MS}}$ scheme \begin{equation} R'_{b \leftarrow b'} (z_b, M_b^2) = -\frac{1}{\varepsilon} P_{b\leftarrow b'} (z_b) \frac{\Gamma (1-\varepsilon)} {\Gamma (1-2\varepsilon)} \left( \frac{4\pi\mu^2}{M_b^2} \right) ^\varepsilon, \end{equation} which depend on $M_b^2$ through \begin{equation} -\frac{1}{\varepsilon}P_{b\leftarrow b'}(z_b)\left[ \left( \frac{4\pi\mu^2}{s}\right) ^\varepsilon -\left( \frac{4\pi\mu^2}{M_b^2}\right) ^\varepsilon\right] = -P_{b\leftarrow b'}(z_b) \ln\left( \frac{M_b^2}{s}\right) . \end{equation} These logarithms cancel the leading logarithmic behavior of the parton distribution functions in the proton. Consequently, the full next-to-leading order curves in figures \ref{plot19} and \ref{plot20} depend only very weakly on the factorization scale $M_b$ in the proton. Finally, we consider the total scale dependence for complete photoproduction in figure \ref{plot49}. The dependence of the inclusive single-jet cross section on $M=\mu=M_a=M_b$ is dominated by the renormalization scale dependence in the one-loop (dotted curve) and two-loop (dashed curve) approximation for $\alpha_s$. Since resolved photoproduction is more important than direct photoproduction at $E_T=20$~GeV, figure \ref{plot49} resembles figure \ref{plot16} more than figure \ref{plot15}. The total scale dependence is reduced by almost a factor of two in next-to-leading order (full curve). \begin{figure}[h] \begin{center} {\unitlength1cm \begin{picture}(12,7.5) \epsfig{file=plot19.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,%
height=12cm,clip=,angle=270} \end{picture}} \caption[Proton Factorization Scale Dependence for Direct Photoproduction] {\label{plot19}{\it Inclusive single-jet cross section d$^2\sigma/$d$E_T$d$\eta$ for direct photons at $E_T=20$~GeV and $\eta=1$, as a function of $M_b/E_T$.
The weak $M_b$-dependence of the LO (dotted) curve is even further reduced in NLO (full curve).}} \end{center} \end{figure} \begin{figure}[h] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=plot20.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,%
height=12cm,clip=,angle=270} \end{picture}} \caption[Proton Factorization Scale Dependence for Resolved Photoproduction] {\label{plot20}{\it Inclusive single-jet cross section d$^2\sigma/$d$E_T$d$\eta$ for resolved photons at $E_T=20$~GeV and $\eta=1$, as a function of $M_b/E_T$. The approximately logarithmic dependence in LO (dotted curve) is reduced in NLO (full curve).}} \end{center} \end{figure} \begin{figure}[h] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=plot49.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,%
height=12cm,clip=,angle=270} \end{picture}} \caption[Total Scale Dependence for Complete Photoproduction] {\label{plot49}{\it Complete inclusive single-jet cross section d$^2\sigma/$d$E_T$d$\eta$ at $E_T=20$~GeV and $\eta=1$, as a function of $M/E_T$ for $M=\mu=M_a=M_b$. The strong dependence in LO (dotted and dashed curves) is reduced in NLO (full curve).}} \end{center} \end{figure} \subsection{One-Jet Cross Sections} In the last section, we studied the dependence of the inclusive single-jet cross section on the various scales, which cannot be predicted from theory. It was shown that the next-to-leading order cross section is much less dependent on the renormalization and factorization scales than the leading order cross section, due to cancellation mechanisms. We continue to study one-jet cross sections here, but shift the emphasis towards experimentally relevant distributions. The two main experimental observables in the double differential cross section \begin{equation} \frac{\mbox{d}^2\sigma}{\mbox{d}E_T\mbox{d}\eta} = \sum_b \int_{x_{a,\min}}^1 \mbox{d}x_a x_a F_{a/e}(x_a,M_a^2) x_b F_{b/p}(x_b,M_b^2) \frac{4E_eE_T}{2x_aE_e-E_Te^{-\eta}} \frac{\mbox{d}\sigma}{\mbox{d}t}(ab\rightarrow p_1p_2) \end{equation} are the hardness or transverse energy $E_T$ of the observed jet and its orientation in the detector, i.e.~the angle $\theta$ it forms with the beam axis. Instead of this angle, we use the pseudorapidity $\eta = -\ln [\tan(\theta/2)]$ as usual. As the electron is only weakly deflected, it stays in the beam pipe like the proton remnant. Therefore, the jets are homogeneously distributed in the azimuthal angle $\phi$, and we have integrated over $\phi$ in the theoretical prediction. The single-jet cross sections for direct photoproduction have already been published \cite{x8}; those for resolved photoproduction are shown here for the first time with our phase space slicing method. Using a different method, similar results have been published in \cite{Kra94,Sal93,y28,y29}. We start with distributions in the jet transverse energy $E_T$, which is at the same time a measure of the resolution of partons within the photon and the proton. In figure \ref{plot21}, we plot the direct photon contribution for transverse energies between 5 and 70~GeV. In this interval, the cross section drops by almost six orders of magnitude. The next-to-leading order curves are always above the leading order predictions. To separate the curves for three different rapidities, we multiplied those for $\eta=0$ and $\eta=2$ by factors of 0.1 and 0.5, respectively. At $\eta=0$, the phase space restricts the accessible range in $E_T$ to $E_T < 50$~GeV.
The so-called $k$-factor, i.e.~the ratio of NLO to LO, drops from 1.8 at 5~GeV to 1.2 at 70~GeV for $\eta=1$, so that the higher order corrections are more important at small scales, where the strong coupling $\alpha_s$ is large. Had we calculated the leading order with one-loop $\alpha_s$, the correction would, however, not be so large. The corresponding curves for resolved photoproduction are shown in figure \ref{plot22}. Here, the curves for $\eta=1$ and $\eta=2$ lie closer to each other in spite of the rescaling for $\eta=2$. This already hints at a broader maximum in the rapidity distribution compared to the direct case (see below). The NLO and LO curves lie further apart due to the large $k$-factor of about 1.8 over the whole $E_T$-range. This can be understood from the non-perturbative nature of resolved photons, where higher order corrections are more important at all scales due to the many channels involved. \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=plot21.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,%
height=12cm,clip=,angle=270} \end{picture}} \caption[$E_T$-Dependence of Single-Jet Cross Section for Direct Photons] {\label{plot21}{\it Inclusive single-jet cross section $\mbox{d}^2\sigma/\mbox{d}E_T\mbox{d}\eta$ for direct photons as a function of $E_T$ for various rapidities $\eta = 0, 1, 2$ in LO and NLO. The cross section for $\eta = 0~(\eta = 2)$ is multiplied by a factor of $0.1~(0.5)$.}} \end{center} \end{figure} \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=plot22.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,%
height=12cm,clip=,angle=270} \end{picture}} \caption[$E_T$-Dependence of Single-Jet Cross Section for Resolved Photons] {\label{plot22}{\it Inclusive single-jet cross section $\mbox{d}^2\sigma/\mbox{d}E_T\mbox{d}\eta$ for resolved photons as a function of $E_T$ for various rapidities $\eta = 0, 1, 2$ in LO and NLO. The cross section for $\eta = 0~(\eta = 2)$ is multiplied by a factor of $0.1~(0.5)$.}} \end{center} \end{figure} The rapidity distribution for direct photons is presented in figure \ref{plot23} at a transverse energy of $E_T=20$~GeV. In the available phase space of $\eta\in[-1,4]$, the cross section exhibits a rather broad maximum and steep edges. The $k$-factor is 1.25 in the central region. We have already mentioned that the corresponding distribution for resolved photons is expected to have an even broader maximum. This proves to be true in figure \ref{plot24}, where the maximum is also slightly shifted in the proton direction from $\eta = 1$ to $\eta = 1.5$. The reason is the lower longitudinal momentum of the partonic components of the photon compared to a direct photon. The next-to-leading order cross section is about 80\% larger than the leading order cross section, which is compatible with the observation of a large $k$-factor from the $E_T$-distribution above.
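The phase space boundaries visible in these distributions, such as the accessible range $E_T < 50$~GeV quoted for $\eta=0$, follow from the momentum fraction $\delta$-functions of section 5.1: requiring $X_a\leq 1$ and $X_b\leq 1$ for leading order $2\rightarrow 2$ kinematics with $E_{T_1}=E_{T_2}=E_T$ bounds $E_T$ for given rapidities. A small sketch (illustrative Python; the scan range $\eta_2\in[-1,4]$ is taken over from the rapidity window of the figures):
\begin{verbatim}
import math

E_e, E_p = 26.7, 820.0   # HERA beam energies in GeV

def et_max(eta1, steps=2000):
    # maximize E_T over the rapidity eta2 of the balancing jet
    best = 0.0
    for i in range(steps + 1):
        eta2 = -1.0 + 5.0*i/steps
        xa_bound = 2*E_e/(math.exp(-eta1) + math.exp(-eta2))  # X_a <= 1
        xb_bound = 2*E_p/(math.exp(+eta1) + math.exp(+eta2))  # X_b <= 1
        best = max(best, min(xa_bound, xb_bound))
    return best

print(et_max(0.0))  # ~52 GeV: cf. the E_T < 50 GeV range quoted at eta = 0
print(et_max(1.0))  # ~116 GeV: at eta = 1 the full E_T range is accessible
\end{verbatim}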
\begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=plot23.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,%
height=12cm,clip=,angle=270} \end{picture}} \caption[$\eta$-Dependence of Single-Jet Cross Section for Direct Photons] {\label{plot23}{\it Inclusive single-jet cross section $\mbox{d}^2\sigma/\mbox{d}E_T\mbox{d}\eta$ for direct photons as a function of $\eta$ for $E_T = 20$~GeV in LO and NLO.}} \end{center} \end{figure} \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=plot24.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,%
height=12cm,clip=,angle=270} \end{picture}} \caption[$\eta$-Dependence of Single-Jet Cross Section for Resolved Photons] {\label{plot24}{\it Inclusive single-jet cross section $\mbox{d}^2\sigma/\mbox{d}E_T\mbox{d}\eta$ for resolved photons as a function of $\eta$ for $E_T = 20$~GeV in LO and NLO.}} \end{center} \end{figure} We now look at the interplay of direct and resolved photoproduction in inclusive single-jet cross sections. Figures \ref{plot25} and \ref{plot26} present the leading order and next-to-leading order predictions for complete photoproduction (full curves), which are the sums of the direct (dotted) and resolved (dashed) curves already presented above. The rapidity of the observed jet is fixed at $\eta=1$. Obviously, the resolved photon component dominates at low transverse energies due to the photon structure function and is unimportant at large transverse energies, where the perturbative point-like coupling is large. In leading order, the intersection of the resolved and direct curves lies near 26~GeV and is shifted towards 37~GeV in next-to-leading order. The resolved process has larger NLO corrections than the direct process, so that the region in which the latter is important moves to higher $E_T$. \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=plot25.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,%
height=12cm,clip=,angle=270} \end{picture}} \caption[$E_T$-Dependence of Single-Jet Cross Section for Complete Photoproduction in LO] {\label{plot25}{\it Inclusive single-jet cross section $\mbox{d}^2\sigma/\mbox{d}E_T\mbox{d}\eta$ for complete photoproduction at $\eta = 1$, as a function of $E_T$. The full curve is the sum of the LO direct (dotted) and LO resolved (dashed) contributions.}} \end{center} \end{figure} \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=plot26.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,%
height=12cm,clip=,angle=270} \end{picture}} \caption[$E_T$-Dependence of Single-Jet Cross Section for Complete Photoproduction in NLO] {\label{plot26}{\it Inclusive single-jet cross section $\mbox{d}^2\sigma/\mbox{d}E_T\mbox{d}\eta$ for complete photoproduction at $\eta = 1$, as a function of $E_T$. The full curve is the sum of the NLO direct (dotted) and NLO resolved (dashed) contributions.}} \end{center} \end{figure} The rapidity distributions for full photoproduction are shown as full curves in figures \ref{plot27} and \ref{plot28} for leading and next-to-leading order at a transverse energy of $E_T=20$~GeV. One expects the direct photon to be important mostly in the electron direction at small or negative rapidities. This behavior can be seen in both figures, but only close to the boundary of phase space below $\eta=0$.
\begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=plot27.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,%
height=12cm,clip=,angle=270} \end{picture}} \caption[$\eta$-Dependence of Single-Jet Cross Section for Complete Photoproduction in LO] {\label{plot27}{\it Inclusive single-jet cross section $\mbox{d}^2 \sigma/\mbox{d}E_T\mbox{d}\eta$ for complete photoproduction at $E_T=20$~GeV, as a function of $\eta$. The full curve is the sum of the LO direct (dotted) and LO resolved (dashed) contributions.}} \end{center} \end{figure} \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=plot28.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,%
height=12cm,clip=,angle=270} \end{picture}} \caption[$\eta$-Dependence of Single-Jet Cross Section for Complete Photoproduction in NLO] {\label{plot28}{\it Inclusive single-jet cross section $\mbox{d}^2 \sigma/\mbox{d}E_T\mbox{d}\eta$ for complete photoproduction at $E_T=20$~GeV, as a function of $\eta$. The full curve is the sum of the NLO direct (dotted) and NLO resolved (dashed) contributions.}} \end{center} \end{figure} In figures \ref{plot35} and \ref{plot36}, we plot the dependence of the single-jet inclusive cross section on the jet cone size $R$ in the Snowmass convention \cite{Hut92}. As discussed in section 2.4, a theoretical definition for jets containing more than one parton is only possible in next-to-leading order. Therefore only the full curves depend on $R$, whereas the leading order curves are constant. The dependence has the functional form of the next-to-leading order result \cite{Sal93,Kra94} \begin{equation} \frac{\mbox{d}^2\sigma}{\mbox{d}E_T\mbox{d}\eta} = a+b\ln R+cR^2. \end{equation} For the direct photon contribution in figure \ref{plot35}, the leading and higher order predictions are equal at $R\simeq 1$ for one-loop $\alpha_s$ (dotted) and at $R\simeq 0.6$ for two-loop $\alpha_s$ (dashed). For the resolved photon contribution in figure \ref{plot36}, the situation is different. The stable points lie at $R\simeq 0.7$ for one-loop (dotted) and $R\simeq 0.12$ for two-loop $\alpha_s$. \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=plot35.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,%
height=12cm,clip=,angle=270} \end{picture}} \caption[Jet Cone Size Dependence of Single-Jet Cross Section for Direct Photons] {\label{plot35}{\it Inclusive single-jet cross section $\mbox{d}^2 \sigma/\mbox{d}E_T\mbox{d}\eta$ for direct photons at $E_T=20$~GeV and $\eta=1$, as a function of the jet cone size $R$. Only the NLO (full) curve and not the LO curves with one- (dotted) or two-loop (dashed) $\alpha_s$ depends on $R$.}} \end{center} \end{figure} \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=plot36.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,%
height=12cm,clip=,angle=270} \end{picture}} \caption[Jet Cone Size Dependence of Single-Jet Cross Section for Resolved Photons] {\label{plot36}{\it Inclusive single-jet cross section $\mbox{d}^2 \sigma/\mbox{d}E_T\mbox{d}\eta$ for resolved photons at $E_T=20$~GeV and $\eta=1$, as a function of the jet cone size $R$.
Only the NLO (full) curve and not the LO curves with one- (dotted) or two-loop (dashed) $\alpha_s$ depends on $R$.}} \end{center} \end{figure} \subsection{Two-Jet Cross Sections} We now turn to two-jet cross sections, where one does not integrate over the second rapidity $\eta_2$ or, alternatively, over the momentum fraction $x_a$ of the parton in the electron. The differential cross section \begin{equation} \frac{\mbox{d}^3\sigma}{\mbox{d}E_T^2\mbox{d}\eta_1\mbox{d}\eta_2} = \sum_b x_a F_{a/e}(x_a,M_a^2) x_b F_{b/p}(x_b,M_b^2) \frac{\mbox{d}\sigma}{\mbox{d}t}(ab \rightarrow p_1p_2) \end{equation} then yields the maximum of information possible on the parton distributions and is better suited to constrain them than the observation of inclusive single jets. Since dijet production is a more exclusive process than one-jet production, the cross sections are smaller and require higher luminosity or longer running time in the experiments. This is the reason why H1 and ZEUS have only recently started to analyze dijet data and why we can present here the first theoretical calculation for complete dijet photoproduction. It is important to note that the transverse energies of the two observed jets balance $(E_{T_1}=E_{T_2}=E_T)$ only in leading order. In next-to-leading order inclusive cross sections, there may be a third unobserved jet which must have full freedom to become infinitely soft. Therefore, the transverse energies of both jets cannot be fixed simultaneously without spoiling infrared safety. This is an artifact of fixed order perturbation theory and will go away in ${\cal O} (\alpha\alpha_s^3)$, where one may as well have a fourth jet. We will calculate distributions similar to those in the last section, i.e.~in the transverse energy {\em of the first jet}, $E_{T_1}$, and in both observable rapidities $\eta_1$ and $\eta_2$. Yet, the rapidities $\eta_1$ and $\eta_2$ always belong to the two jets with the largest transverse energies in the event. As in the one-jet case, the direct distributions have already been presented in a recent paper \cite{x8}, whereas the resolved and complete cross sections are shown here and partly in \cite{x12}. First, we look at the distributions in the transverse energy $E_{T_1}$ in figure \ref{plot29} for direct photons. We fix $\eta_1$ at $\eta_1=1$ and $\eta_2$ at three different values of $\eta_2 = 0,~1,~2$. The curves for $\eta_2=0$ and $\eta_2=2$ are rescaled by factors of $0.1$ and $0.5$ as before. We expect the dijet cross sections to be smaller than the single-jet cross sections in figure \ref{plot21}. Indeed, for $E_T,~E_{T_1}=5$~GeV and $\eta,~\eta_1,~\eta_2=1$, the cross section drops by almost an order of magnitude from $10.6$~nb to $1.29$~nb. The $k$-factors in dijet production are basically the same as in single-jet production. There is, however, one novel feature: for $\eta_2=0$, the third jet in next-to-leading order opens up some additional phase space, so that $E_{T_1}$ can reach values up to 55~GeV instead of only 39~GeV in leading order. A similar behavior is seen for resolved photoproduction in figure \ref{plot30}. At the lowest value of $E_{T_1}=5$~GeV and back-to-back jets ($\eta_1=\eta_2=1$), the two-jet cross section ($4.91$~nb) is even more than one order of magnitude smaller than the one-jet cross section ($67.5$~nb). The ratio of NLO to LO is 1.7 for resolved photons over the whole $E_{T_1}$-range. The phase space for $\eta_2=0$ increases from $E_{T_1}=39$~GeV to $E_{T_1}=50$~GeV.
This slightly smaller value is due to the reduced center-of-mass energy in resolved photoproduction, as some of the energy always goes into the photon remnant. \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=plot29.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,% height=12cm,clip=,angle=270} \end{picture}} \caption[$E_T$-Dependence of Dijet Cross Section for Direct Photoproduction] {\label{plot29}{\it Inclusive dijet cross section $\mbox{d}^3 \sigma/\mbox{d}E_{T_1}\mbox{d}\eta_1\mbox{d}\eta_2$ for direct photons as a function of $E_{T_1}$ for $\eta_1=1$ and three values of $\eta_2 =0,1,2$. The cross section for $\eta_2=0$ ($\eta_2=2$) is multiplied by 0.1 (0.5).}} \end{center} \end{figure} \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=plot30.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,% height=12cm,clip=,angle=270} \end{picture}} \caption[$E_T$-Dependence of Dijet Cross Section for Resolved Photoproduction] {\label{plot30}{\it Inclusive dijet cross section $\mbox{d}^3 \sigma/\mbox{d}E_{T_1}\mbox{d}\eta_1\mbox{d}\eta_2$ for resolved photons as a function of $E_{T_1}$ for $\eta_1=1$ and three values of $\eta_2 =0,1,2$. The cross section for $\eta_2=0$ ($\eta_2=2$) is multiplied by 0.1 (0.5).}} \end{center} \end{figure} Next, we present the dependence of the cross sections on the two rapidities in the form of the three-dimensional lego-plots \ref{plot31} and \ref{plot32}. The leading order is always shown on the left side and is completely symmetric in $\eta_1$ and $\eta_2$. For the next-to-leading order cross sections on the right hand sides of figures \ref{plot31} and \ref{plot32}, this is no longer exactly true due to the presence of a ``trigger'' jet with transverse energy $E_{T_1}$, which is fixed at $E_{T_1}=20$~GeV. The next-to-leading order lego-plots are only approximately symmetric. This can best be seen at the bottom of the contour plots, where at least one of the two observed jets is far off the central region. The NLO predictions are considerably larger than the LO predictions, especially in the resolved case. \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=plot31.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,% height=12cm,clip=,angle=270} \end{picture}} \caption[Lego-Plot of Dijet Cross Section for Direct Photoproduction] {\label{plot31}{\it Inclusive dijet cross section $\mbox{d}^3 \sigma/\mbox{d}E_{T_1}\mbox{d}\eta_1\mbox{d}\eta_2$ at $E_{T_1}= 20$~GeV for direct photons, as a function of $\eta_1$ and $\eta_2$. The LO plot (left) is exactly symmetric, the NLO plot (right) only approximately.}} \end{center} \end{figure} \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=plot32.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,% height=12cm,clip=,angle=270} \end{picture}} \caption[Lego-Plot of Dijet Cross Section for Resolved Photoproduction] {\label{plot32}{\it Inclusive dijet cross section $\mbox{d}^3 \sigma/\mbox{d}E_{T_1}\mbox{d}\eta_1\mbox{d}\eta_2$ at $E_{T_1}= 20$~GeV for resolved photons, as a function of $\eta_1$ and $\eta_2$. The LO plot (left) is exactly symmetric, the NLO plot (right) only approximately.}} \end{center} \end{figure} This becomes even clearer when we plot the projections of the lego-plots for fixed $\eta_1=0,~1,~2,$ and $3$. In figure \ref{plot33}, we plot the leading and next-to-leading order distributions in $\eta_2$ for direct photoproduction.
It is clearly seen that the second jet tends to be back-to-back with the first jet, since the maximum always occurs at $\eta_2\simeq\eta_1$. However, at $\eta_1=3$ this is no longer permitted by phase space. We obtain the same $k$-factor of 1.25 in the central regions as in single-jet production. The $\eta_2$-distributions for resolved photons in figure \ref{plot34} are considerably broader than those in figure \ref{plot33} due to the smearing of the hard cross sections with the distribution function of partons in the photon. The maxima of the plots are governed less by kinematics than by the quark and gluon structure of the photon in different $x$ regimes; they no longer lie at $\eta_2=\eta_1$. Therefore, dijet rapidity distributions are best suited to constrain the photon structure. We will come back to this point in section 5.5, when we compare similar plots to data from ZEUS. The $k$-factors range from 1.65 in the central regions to more than 3 in the proton forward direction. The shapes of the distributions are very similar in LO and in NLO; the absolute values, however, differ considerably. \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=plot33.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,% height=12cm,clip=,angle=270} \end{picture}} \caption[Rapidity Dependence of Dijet Cross Section for Direct Photoproduction] {\label{plot33}{\it Projections of the LO (full curves) and NLO (dashed curves) triple differential dijet cross section for direct photons at $E_{T_1}=20$~GeV and fixed values of $\eta_1=0,~1,~2,$ and $3$, as a function of $\eta_2$.}} \end{center} \end{figure} \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=plot34.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,% height=12cm,clip=,angle=270} \end{picture}} \caption[Rapidity Dependence of Dijet Cross Section for Resolved Photoproduction] {\label{plot34}{\it Projections of the LO (full curves) and NLO (dashed curves) triple differential dijet cross section for resolved photons at $E_{T_1}=20$~GeV and fixed values of $\eta_1=0,~1,~2,$ and $3$, as a function of $\eta_2$.}} \end{center} \end{figure} Direct and resolved contributions are now added to give physical, complete photoproduction results. The dijet cross section is first plotted as a function of the transverse energy $E_{T_1}$. Figure \ref{plot37} gives the LO result, figure \ref{plot38} the NLO result. Like the direct and resolved cross sections alone, the full two-jet cross sections are about an order of magnitude smaller than the one-jet cross sections (see figures \ref{plot25} and \ref{plot26}). The point where direct and resolved contributions are equally important is lowered towards $E_{T_1}=20$~GeV in leading order and $E_{T_1}=30$~GeV in next-to-leading order, so that direct photons are better observed in dijet production. \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=plot37.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,% height=12cm,clip=,angle=270} \end{picture}} \caption[$E_T$-Dependence of Dijet Cross Section for Complete Photoproduction in LO] {\label{plot37}{\it Inclusive dijet cross section $\mbox{d}^3\sigma /\mbox{d}E_{T_1}\mbox{d}\eta_1\mbox{d}\eta_2$ for full photoproduction at $\eta_1=\eta_2=1$ as a function of $E_{T_1}$.
The full curve is the sum of the LO direct (dotted) and LO resolved (dashed) contributions.}} \end{center} \end{figure} \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=plot38.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,% height=12cm,clip=,angle=270} \end{picture}} \caption[$E_T$-Dependence of Dijet Cross Section for Complete Photoproduction in NLO] {\label{plot38}{\it Inclusive dijet cross section $\mbox{d}^3\sigma /\mbox{d}E_{T_1}\mbox{d}\eta_1\mbox{d}\eta_2$ for full photoproduction at $\eta_1=\eta_2=1$ as a function of $E_{T_1}$. The full curve is the sum of the NLO direct (dotted) and NLO resolved (dashed) contributions.}} \end{center} \end{figure} If one plots the complete two-jet cross sections as a function of $\eta_2$, the different behaviors of direct and resolved photons add up to the full curves in figures \ref{plot39} and \ref{plot40}. These plots are well suited to decide in which rapidity regions one can best look for the resolved photon structure. We have already seen that this will be in situations where the two jets are not back-to-back, e.g.~for $\eta_1=0$ and positive $\eta_2$ in the upper left plots of figures \ref{plot39} and \ref{plot40}. On the other hand, the proton structure can best be studied with direct photons, when the cross section is not folded with another distribution. A possible scenario is $\eta_1=0$ and negative values of $\eta_2$. This is especially interesting for the small-$x$ components of the proton like the gluons and the quark sea. Another interesting observation is that the relative importance of direct and resolved processes changes dramatically when calculating dijet photoproduction in next-to-leading order ${\cal O} (\alpha\alpha_s^2)$: resolved processes are much more important at $E_{T_1}=20$~GeV than one would have guessed from a leading order estimate. \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=plot39.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,% height=12cm,clip=,angle=270} \end{picture}} \caption[Rapidity Dependence of Dijet Cross Section for Complete Photoproduction in LO] {\label{plot39}{\it Projections of the complete triple differential dijet cross section at $E_{T_1}=20$~GeV and fixed values of $\eta_1=0,~1,~2,$ and $3$, as a function of $\eta_2$. The full curve is the sum of the LO direct (dotted) and LO resolved (dashed) contributions.}} \end{center} \end{figure} \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=plot40.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,% height=12cm,clip=,angle=270} \end{picture}} \caption[Rapidity Dependence of Dijet Cross Section for Complete Photoproduction in NLO] {\label{plot40}{\it Projections of the complete triple differential dijet cross section at $E_{T_1}=20$~GeV and fixed values of $\eta_1=0,~1,~2,$ and $3$, as a function of $\eta_2$. The full curve is the sum of the NLO direct (dotted) and NLO resolved (dashed) contributions.}} \end{center} \end{figure} \subsection{Comparison of Photoproduction Results to H1 and ZEUS Data} In this section we compare the next-to-leading order calculation to recent one- and two-jet data from the H1 and ZEUS collaborations at HERA. Both collaborations have continuously measured various cross sections for the photoproduction of jets since HERA started running in 1992. With the increased luminosity in recent years, the data have improved and many aspects of jet production could be studied.
In our comparison we restrict ourselves to the recently published measurements of one- and two-jet cross sections, which are based on 1994 or 1995 data. In particular, we shall compare with the inclusive single-jet data of 1994 from the ZEUS collaboration \cite{y1}, with the inclusive dijet data of 1994 from ZEUS \cite{y2}, with inclusive dijet data of 1995 from ZEUS \cite{y2}, and with inclusive dijet data of 1994 from the H1 collaboration \cite{y3}. We start with the single-jet cross section d$^2\sigma$/d$\eta$d$E_T$ integrated over $E_T\geq E_{T_{\min}}$ as a function of $\eta$. This cross section d$\sigma$/d$\eta$ has been measured in the $\eta$ range between $-1$ and $2$ and with the $E_T$ thresholds $E_{T_{\min}} = 14,17,21,$ and 25 GeV. The cross section d$\sigma$/d$\eta$ for $E_T>14$ GeV has also been measured in three different regions of $W$: 134 GeV $< W <$ 190 GeV, 190 GeV $< W <$ 233 GeV, and 233 GeV $< W <$ 277 GeV. The measurements refer to jets at the hadron level and are performed for two cone radii in the $\eta-\phi$ plane, $R=0.7$ and $R=1$, using the iterative cone algorithm PUCELL. The complete data have already been compared to our next-to-leading order calculations in the ZEUS publication \cite{y1}. Therefore, we show only a selection of specific kinematical ranges and compare them with the data. Our results for d$\sigma$/d$\eta$ for $E_T>17$ GeV, $R=1$ and $R=0.7$ are shown in figures \ref{kkkplot1a17r1} and \ref{kkkplot1a17r07}, \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=kkkplot1a17r1.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,% height=12cm,angle=270,clip=} \end{picture}} \end{center} \caption{\label{kkkplot1a17r1}{\it $\eta$ dependence of the inclusive single-jet photoproduction cross section integrated over $E_T>17$~GeV with jet cone size $R=1$. We compare our NLO prediction with GRV and GS96 photon parton densities and the two extreme $R_{\rm sep}$ values to 1994 data from ZEUS.}} \end{figure} \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=kkkplot1a17r07.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,% height=12cm,angle=270,clip=} \end{picture}} \end{center} \caption{\label{kkkplot1a17r07}{\it $\eta$ dependence of the inclusive single-jet photoproduction cross section integrated over $E_T>17$~GeV with jet cone size $R=0.7$. We compare our NLO prediction with GRV and GS96 photon parton densities and the two extreme $R_{\rm sep}$ values to 1994 data from ZEUS.}} \end{figure} respectively, and are compared to the ZEUS data. The error bars in figures \ref{kkkplot1a17r1}, \ref{kkkplot1a17r07}, \ref{kkkplot1b06r1}, and \ref{kkkplot1b06r07} only contain the statistical error. The systematic error and the uncertainty associated with the absolute energy scale are not included; together they add an additional 30\% error (see \cite{y1}). The theoretical predictions include resolved and direct processes in NLO. For the proton, the CTEQ4M \cite{Lai96} parton densities have been used. For the photon distribution, the GRV-HO parametrization \cite{Glu92}, converted to $\overline{\mbox{MS}}$ factorization, and, as an alternative set, the recent GS96 parametrization \cite{Gor96} have been chosen as input. The renormalization and factorization scales have been put equal to $E_T$, and $\alpha_s$ was calculated with the two-loop formula with $\Lambda_{\overline{\mbox{MS}}}^{(4)}=296$ MeV as used in the proton parton densities.
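The quoted $W$ intervals translate directly into intervals of the photon momentum fraction $x_a$ in the equivalent photon approximation. As a worked check (assuming the 1994 HERA beam energies $E_e=27.5$~GeV and $E_p=820$~GeV, i.e.~$\sqrt{s_{ep}}\simeq 300$~GeV, which are not restated here), the relation $W^2\simeq x_a s_{ep}$ gives \begin{equation} x_a = \frac{W^2}{s_{ep}}, \qquad \left(\frac{134}{300}\right)^2 \simeq 0.20, \qquad \left(\frac{277}{300}\right)^2 \simeq 0.85, \end{equation} which reproduces the range $0.20 < x_a < 0.85$ quoted below for the full $W$ interval.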
In figures \ref{kkkplot1a17r1} and \ref{kkkplot1a17r07}, two curves are presented for both photon distribution sets, labeled as $R_{\rm sep}=2 R$ and $R_{\rm sep}=1 R$. They correspond to two choices of the $R_{\rm sep}$ parameter. Since our calculations include only up to three partons in the final state, the maximum number of partons in a single jet is two. Therefore, the overlapping and merging effects of the experimental jet algorithm cannot be simulated in the theoretical calculation \cite{y7}. To account for these effects, the $R_{\rm sep}$ parameter was introduced \cite{y7}. It has the effect that two partons are not merged into a single jet if their separation in the $\eta-\phi$ plane is larger than $R_{\rm sep}$. Since two partons lying inside a common cone of radius $R$ can be separated by at most $2R$, the choice $R_{\rm sep} = 2 R$ means that no further restriction is introduced and the cone algorithm is applied in its original form. Experimentally, the two extreme values of $R_{\rm sep}=2 R$ and $1 R$ correspond to a fixed cone algorithm (like EUCELL) and to the $k_T$ clustering algorithm (like KTCLUS), whereas an iterative cone algorithm (like PUCELL) is described by some intermediate value. In both calculation and data analysis, the maximal virtuality of the photon is equal to $Q_{\max}^2=4$ GeV$^2$, and the full $W$ range, which corresponds to 0.20 $< x_a <$ 0.85 in the EPA formula, is used. Looking at figures \ref{kkkplot1a17r1} and \ref{kkkplot1a17r07}, we observe that the behavior of the measured cross sections is different for $R=0.7$ and $R=1$. For $R=1$, the shape of the cross section is well described for -1 $< \eta <$ 0.5. For higher values of $\eta$, the data stay almost constant as a function of $\eta$, whereas the theoretical curves decrease as a function of $\eta$ for both radii $R=1$ and $R=0.7$. However, when $R=0.7$ is used, the shape and magnitude of the NLO results agree quite well with the measured differential cross section in the entire $\eta$ range. This is also the case in the comparison for the lower $E_T$ threshold, $E_{T_{\min}}=14$ GeV, with $R=0.7$ shown in \cite{y1}. For the higher $E_T$ thresholds, 21 and 25 GeV, the NLO predictions give a good description of the measured cross sections in magnitude and shape for both cone radii $R=0.7$ and $R=1$ (see \cite{y1}). In general, the choice $R=0.7$ should be preferred for the comparison between data and theory. Figure \ref{kkkplot1a17r07} shows that the predictions with the GS96 parametrization of the photon parton distributions agree better with the data than those with the GRV-HO parametrization, which lies above the data for both $R_{\rm sep}$ parameter values over the whole $\eta$ range. Concerning the $R_{\rm sep}$ parameter, the curve for $R_{\rm sep}=1 R$ is in somewhat better agreement than the curve for $R_{\rm sep}=2 R$. We have checked that a value of $R_{\rm sep}=1.4 R$ gives the best agreement. This supports a recent study of the jet shape function, which depends sensitively on this parameter \cite{y8}. By comparing with recent measurements of this jet shape by the ZEUS collaboration \cite{y9}, it was found that $R_{\rm sep}= 1.5 R$ gives very good agreement with the jet shape data for PUCELL in the same $\eta$ and $E_T$ range \cite{y8}. A comparison of d$\sigma$/d$\eta$ for $E_T>14$ GeV in different $W$ regions has also been presented. As an example, we show d$\sigma$/d$\eta$ as a function of $\eta$ for the largest $W$ range: 233 GeV $< W <$ 277 GeV (corresponding to 0.55 $< x_a <$ 0.85) in figure \ref{kkkplot1b06r1} ($R=1$) and in figure \ref{kkkplot1b06r07} ($R=0.7$).
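To make the role of $R_{\rm sep}$ at the parton level concrete, the following sketch spells out the two-parton recombination condition relevant for the NLO calculation, where a jet contains at most two partons: both partons must lie within $R$ of the $E_T$-weighted jet axis, which for two partons is equivalent to $\Delta R_{12} \leq R\,(E_{T_1}+E_{T_2})/\max (E_{T_1},E_{T_2})$, and in addition their mutual distance must stay below $R_{\rm sep}$. The code is a minimal illustration in Python with names of our own choosing; it is not part of the actual calculation.
\begin{verbatim}
import math

def merged_into_one_jet(et1, eta1, phi1, et2, eta2, phi2,
                        R=0.7, rsep_over_R=1.4):
    """Two-parton cone recombination with an R_sep parameter.

    Both partons must lie within R of the E_T-weighted jet axis
    (for two partons: dr12 <= R*(et1+et2)/max(et1,et2)) and be
    separated by less than R_sep = rsep_over_R * R.
    Illustrative sketch only.
    """
    dphi = abs(phi1 - phi2)
    if dphi > math.pi:                 # fold azimuth into [0, pi]
        dphi = 2.0 * math.pi - dphi
    dr12 = math.hypot(eta1 - eta2, dphi)
    inside_cone = dr12 <= R * (et1 + et2) / max(et1, et2)
    return inside_cone and dr12 < rsep_over_R * R
\end{verbatim}
For $R_{\rm sep}=2R$, the second condition is essentially implied by the first, so the original Snowmass cone algorithm is recovered, whereas $R_{\rm sep}=R$ reproduces the $k_T$-cluster result for two partons.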
\begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=kkkplot1b06r1.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,% height=12cm,angle=270,clip=} \end{picture}} \end{center} \caption{\label{kkkplot1b06r1}{\it $\eta$ dependence of the inclusive single-jet photoproduction cross section integrated over 233 GeV $< W <$ 277 GeV with jet cone size $R=1$. We compare our NLO prediction with GRV and GS96 photon parton densities and the two extreme $R_{\rm sep}$ values to 1994 data from ZEUS.}} \end{figure} \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=kkkplot1b06r07.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,% height=12cm,angle=270,clip=} \end{picture}} \end{center} \caption{\label{kkkplot1b06r07}{\it $\eta$ dependence of the inclusive single-jet photoproduction cross section integrated over 233 GeV $< W <$ 277 GeV with jet cone size $R=0.7$. We compare our NLO prediction with GRV and GS96 photon parton densities and the two extreme $R_{\rm sep}$ values to 1994 data from ZEUS.}} \end{figure} Whereas the $R=1$ theoretical cross section (figure \ref{kkkplot1b06r1}) agrees with the data for low values of $\eta$, it disagrees in the high-$\eta$ region. This disagreement shows up particularly in the high $W$ range \cite{y1}. The measured differential cross section is again well described by the NLO calculation for $R=0.7$ (see figure \ref{kkkplot1b06r07}).% \footnote{The wiggles in the curves in figures \ref{kkkplot1b06r1} and \ref{kkkplot1b06r07} are due to insufficient numerical accuracy and have no physical significance.} The excess of the measured cross section with respect to the calculations in the high-$\eta$ range and for the smaller $E_T$ thresholds for $R=1$ is presumably due to the underlying event in the data, which deposits a larger amount of extra energy inside the jet cone for $R=1$ than for the smaller cone $R=0.7$. From figure \ref{kkkplot1a17r1} it is clear that the excess occurs only in the large-$\eta$ range, where the resolved cross section dominates and where additional interactions of the photon and proton remnants, not included in the NLO calculations, are expected to occur. These deviations between NLO theory and the data at large $\eta$ and smaller $E_T$ were found earlier \cite{y10} when the theoretical predictions were compared with the 1993 ZEUS data \cite{x5}. In conclusion, we can say that the NLO calculations describe reasonably well the experimental inclusive single-jet cross section for jets defined with $R=0.7$ in the entire $\eta$ range and for $R=1$, if $E_T$ is large enough. Next, we compare the NLO predictions with inclusive dijet cross sections measured by ZEUS \cite{y2} and H1 \cite{y3}. Inclusive two-jet cross sections depend on one more variable as compared to the inclusive one-jet cross sections considered above. Therefore they are expected to give a much more stringent test of the theoretical predictions than the inclusive one-jet cross sections. For the comparison with the data it is essential that the same jet definitions are introduced in the theoretical calculations as in the experimental analysis. Furthermore, it is important that the theoretical calculations contain the same cuts on the kinematical variables as used for the measured cross sections. First experimental data for inclusive two-jet production have been published by the ZEUS collaboration in \cite{x5} and \cite{y13}.
The more recent ZEUS analysis based on the 1994 data taking, presented in \cite{Der96a} and recently in \cite{y2}, extends the earlier analysis in \cite{x5} based on 1993 data in several ways. The larger luminosity obtained in 1994 led to a reduction of the statistical errors and allowed for the measurement of the cross section at higher $E_T$, a region where uncertainties due to the hadronization of partons into jets and due to underlying event effects are reduced, making the comparison with the NLO predictions more meaningful. Furthermore, the ZEUS collaboration applied three different jet definitions: two variations of the cone algorithm \cite{Hut92} called ``EUCELL'' and ``PUCELL'', and the $k_T$-cluster algorithm ``KTCLUS'' as introduced for hadron-hadron collisions \cite{y16}. The two cone algorithms treat seed finding and jet merging in different ways. Since the NLO calculations contain only up to three partons in the final state, these experimental seed finding and jet merging conditions cannot be fully reproduced. This ambiguity is largely reduced in the $k_T$-cluster algorithm. In the NLO calculations, the two cone algorithms can be simulated by introducing the $R_{\rm sep}$ parameter already considered for the one-jet cross sections. The EUCELL definition corresponds to $R_{\rm sep}=2 R$, whereas PUCELL is best simulated with $R_{\rm sep}= 1.4 R$ for the $E_T$ range considered in the experimental analysis (see above) \cite{y8, But96}. The $k_T$-cluster algorithm for hadron-hadron collisions is identical to using $R_{\rm sep}=R$. Therefore, by introducing the $R_{\rm sep}$ parameter into the NLO calculations of the two-jet cross sections, all three jet finding definitions used in the experimental analysis can be accounted for. The ZEUS results of the earlier analysis \cite{Der96a} have been compared to the NLO predictions in \cite{x12} for the case of the KTCLUS algorithm and in \cite{x10} for the EUCELL algorithm. These comparisons were done for the differential cross section d$\sigma$/d$\overline{\eta}$, where $\overline{\eta}=1/2(\eta_1+\eta_2)$ is the average rapidity of the observed jets with $E_T$ larger than $E_{T_{\min}}$ for both observed jets. This common cut on the $E_T$ of both jets causes some theoretical problems, as was noticed some time ago \cite{Kla96}. The new measurements \cite{y2} are for the triple differential cross section d$^3\sigma$/d$E_T$d$\eta_1$d$\eta_2$ using the $k_T$-cluster or the PUCELL algorithm. The jet with the highest $E_T$ (leading jet, $E_T=E_{T_1}$) is required to have $E_T > 14$ GeV and the second highest-$E_T$ jet to have $E_{T_2} > 11$ GeV. This cross section is symmetrized with respect to $\eta_1$ and $\eta_2$, so that each event enters twice. By this symmetrization, the experimental ambiguity of determining the leading jet is avoided, and the measured cross section corresponds to the calculated one, in which $E_T$ refers to the trigger jet, which is not necessarily the jet with the highest $E_T$. In order to have a handle to enhance direct over resolved photoproduction, one also determines the variable \cite{x5,y21} \begin{equation} x_{\gamma}^{\rm OBS} = \frac{\sum_iE_{T_i}e^{-\eta_i}}{2x_aE_e}, \label{eq51} \end{equation} where the sum runs over the two jets of highest $E_T$ and $x_aE_e$ is the initial photon energy. $x_{\gamma}^{\rm OBS}$ measures the fraction of the photon energy that goes into the production of the two hardest jets.
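For illustration, consider the simplest configuration of two jets with equal transverse energy $E_T$ at a common rapidity $\eta_1=\eta_2=\eta$. Equation (\ref{eq51}) then reduces to \begin{equation} x_{\gamma}^{\rm OBS} = \frac{2E_Te^{-\eta}}{2x_aE_e} = \frac{E_Te^{-\eta}}{x_aE_e} , \end{equation} so that $x_{\gamma}^{\rm OBS}=1$ corresponds to the full photon energy $x_aE_e$ being transferred to the two jets, while jets boosted towards the proton direction (larger $\eta$) at fixed $E_T$ yield smaller values of $x_{\gamma}^{\rm OBS}$.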
The LO direct and resolved processes populate different regions of $x_{\gamma}^{\rm OBS}$: $x_{\gamma}^{\rm OBS} = 1$ for the direct process and $x_{\gamma}^{\rm OBS} < 1$ for the resolved process. In NLO, the direct process also populates the region $x_{\gamma}^{\rm OBS} < 1$. To obtain a measurement of the direct-enriched photoproduction cross section, the cut $x_{\gamma}^{\rm OBS} > 0.75$ is usually introduced. Figure \ref{kkkplot2a} shows d$\sigma$/d$E_T$ for six independent regions in the \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(14,19) \epsfig{file=kkkplot2a.ps,bbllx=70pt,bblly=105pt,bburx=490pt,bbury=715pt,% height=19cm,clip=} \end{picture}} \end{center} \caption{\label{kkkplot2a}{\it $E_T$ dependence of the symmetrized dijet photoproduction cross section integrated over different rapidity bins. We compare our NLO prediction with GRV and GS96 photon parton densities and the full and upper range of $x_{\gamma}^{\rm OBS}$ to preliminary 1995 data from ZEUS.}} \end{figure} $(\eta_1,\eta_2)$ plane. The upper curves in each plot give the dijet cross section for the entire $x_{\gamma}^{\rm OBS}$ region, and the lower curves, which have all been scaled down by a factor of 5 in order to separate the curves, present the cross section for the direct $\gamma$ region $x_{\gamma}^{\rm OBS} > 0.75$. The NLO calculations are performed for $Q_{\max}^2 = 4$ GeV$^2$ and 134 GeV $< W <$ 277 GeV using the same parton densities for the proton and the photon as in the inclusive one-jet calculations. Furthermore we use $R_{\rm sep}= 1.4 R$ with $R=1$ to simulate the PUCELL algorithm. We compare with the corresponding data from ZEUS \cite{y2} analyzed with the PUCELL algorithm. The agreement between the data and the theoretical predictions is quite reasonable. Except near the backward regions (the last two $\eta_1,\eta_2$ regions), the cross sections for the GRV and GS96 photon densities are very similar. To discriminate between them, the experimental errors must be reduced. We observe that the cross section for $x_{\gamma}^{\rm OBS} > 0.75$ increases relative to the full cross section (all $x_{\gamma}^{\rm OBS}$) as $E_T$ increases, in agreement with the prediction from the calculation. This is the effect of the direct component, which in general shows a flatter distribution with increasing $E_T$ than the resolved cross section \cite{x12,x10}. We emphasize that the magnitude as well as the shape of the measured cross section is well reproduced by the calculations except for the first $E_T$ bin and for the case that both jets are in the region $-1 < \eta_{1,2} < 0$. In this region (the last plot), the predictions lie above the data. The same cross section has also been calculated for the $k_T$-cluster algorithm. The results for the GS96 photon densities have been presented together with the corresponding experimental data in \cite{y2}. The agreement between data and theory is quite similar. In \cite{y2}, also data for d$\sigma$/d$\eta_2$ for $E_T > 14$ GeV in three regions of $\eta_1$, obtained with the KTCLUS algorithm, are compared to our NLO calculations with the GRV-HO and GS96 parametrizations of the photon densities. Both shape and magnitude of the cross sections are roughly reproduced by the calculations except for the small-$\eta_1$ region. Comparisons for the same cross section with the PUCELL cone algorithm have been done, too, but are not shown here \cite{y22}.
The comparison between measurements and calculations of the triple differential cross section shown so far covers the dependence of this cross section on all three variables $E_T$, $\eta_1$ and $\eta_2$. Another equivalent set of variables consists of the dijet invariant mass $M_{JJ}$, the rapidity of the dijet system $y_{JJ}$, and the scattering angle $\theta^{\ast}$ in the dijet center-of-mass system. The dijet invariant mass is obtained from the relationship \begin{equation} M_{JJ}^2 = 2 E_{T_1} E_{T_2} \left[ \cosh(\eta_1-\eta_2) - \cos(\phi_1-\phi_2)\right] , \end{equation} where $\phi_1$ and $\phi_2$ are the azimuthal angles of the two jets in the HERA frame. For two jets back-to-back in $\phi$ and with equal $E_T$, \begin{equation} M_{JJ} = 2 E_T \cosh\left[ (\eta_1-\eta_2)/2\right] = 2 E_T/\sin\theta^{\ast} \end{equation} and $\cos\theta^{\ast}=\tanh[(\eta_1-\eta_2)/2]$. For events with more than two jets, the two highest $E_T$ jets are used to calculate $M_{JJ}$. The distribution in the dijet mass $M_{JJ}$ provides an additional test and is sensitive to the presence of resonances that decay into two jets. The dijet cross section as a function of $\cos\theta^{\ast}$ is sensitive to the parton-parton dynamics in the direct and resolved contributions \cite{Bae89a}. Direct processes involve quark propagators in the $t$ and $u$ channels, leading to a characteristic angular dependence proportional to $(1-|\cos\theta^{\ast}|) ^{-1}$. In the case of the resolved process, $t$-channel gluon exchange processes dominate, which lead to an angular dependence proportional to $(1-|\cos\theta^{\ast}|) ^{-2}$. This rises more steeply with increasing $|\cos\theta^{\ast}|$ than in the case of the direct processes. This different behavior in the angular dependence for resolved and direct processes was observed for $M_{JJ} > 23$ GeV \cite{y13}. We have calculated the NLO cross section d$\sigma$/d$\cos\theta ^{\ast}$ for $x_{\gamma}^{\rm OBS} < 0.75$ (= ``resolved'') and $x_{\gamma}^{\rm OBS} > 0.75$ (= ``direct'') and have confirmed the different behavior of the two $x_{\gamma}^{\rm OBS}$ bins seen in the data (not shown here). The cross sections d$\sigma$/d$M_{JJ}$ and d$\sigma$/d$\cos\theta^{\ast}$ have been measured recently using the sample of dijet events found with the PUCELL cone and the KTCLUS cluster algorithms. This analysis also includes the 1995 data, which have even higher statistics than the 1994 data used in the previous analysis \cite{y13}. These cross sections have been measured in the kinematic region $Q^2 < 4$ GeV$^2$, $0.2 < x_a < 0.85$ as used previously. The two jets with highest $E_T$ are required to have $E_{T_1}, E_{T_2} > 14$ GeV and the rapidities of these two jets are restricted to $-1 < \eta_1,\eta_2 < 2.5$. The cone radius is $R=1$. The cross section d$\sigma$/d$M_{JJ}$ has been measured in the $M_{JJ}$ range between 47 GeV and 120 GeV integrated over $|\cos\theta^{\ast}| < 0.85$. The cross section d$\sigma$/d$\cos\theta^{\ast}$ has been measured in the interval $0 < |\cos\theta^{\ast}| < 0.8$ integrated over $M_{JJ} > 47$ GeV.
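To give a feeling for these numbers: two jets back-to-back in $\phi$ with $E_{T_1}=E_{T_2}=14$~GeV and a rapidity difference $\eta_1-\eta_2=2$ have \begin{equation} M_{JJ} = 2\cdot 14~\mbox{GeV}\cdot\cosh (1) \simeq 43~\mbox{GeV}, \qquad \cos\theta^{\ast} = \tanh (1) \simeq 0.76, \end{equation} so such a configuration just fails the cut $M_{JJ} > 47$~GeV; conversely, the cut $|\cos\theta^{\ast}| < 0.85$ corresponds to $|\eta_1-\eta_2| < 2\,\mbox{artanh}\, (0.85) \simeq 2.5$ for back-to-back jets of equal transverse energy.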
The experimental results \cite{y2,y25} for d$\sigma$/d$M_{JJ}$ are shown in figure \ref{kkkplot2dnlo} \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=kkkplot2dnlo.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,% height=12cm,angle=270,clip=} \end{picture}} \end{center} \caption{\label{kkkplot2dnlo}{\it $M_{JJ}$ dependence of the dijet photoproduction cross section integrated over $|\cos\theta^{\ast}| < 0.85$. We compare our NLO prediction with GRV and GS96 photon parton densities to 1995 data from ZEUS taken with the PUCELL and KTCLUS jet algorithms.}} \end{figure} separately for the two jet definitions, where the cross sections for the $k_T$ algorithm have been scaled down by a factor of 10 in order to separate the two data sets. Systematic errors are available, but only statistical errors are shown here. In figure \ref{kkkplot2dnlo}, two NLO curves are compared to the measurements, the full curve being the result for the GRV-HO and the dashed curve the result for the GS96 parametrization of the photon densities. For the lower $M_{JJ}$ values, the GRV density seems to describe the data better than the GS96 density. However, we have to consider that the data points have an additional systematic error from the energy scale uncertainty \cite{y2}, which is also not shown in figure \ref{kkkplot2dnlo}. It is remarkable that the predictions for both jet algorithms, PUCELL and KTCLUS, agree well with the data over the full range of $M_{JJ}$, where the cross section exhibits a fall-off of almost three orders of magnitude. The cross section d$\sigma$/d$\cos\theta^{\ast}$ as a function of $|\cos\theta^{\ast}|$ between 0 and 0.8 is plotted in figure \ref{kkkplot2cnlo}. \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=kkkplot2cnlo.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,% height=12cm,angle=270,clip=} \end{picture}} \end{center} \caption{\label{kkkplot2cnlo}{\it $|\cos\theta^{\ast}|$ dependence of the dijet photoproduction cross section integrated over $M_{JJ} > 47$~GeV. We compare our NLO prediction with GRV and GS96 photon parton densities to 1995 data from ZEUS using the PUCELL jet algorithm.}} \end{figure} Here only the results for the PUCELL algorithm are shown. For the KTCLUS algorithm, the corresponding cross section is presented in \cite{y2}. Again two curves are shown, for the GRV and GS96 photon densities, respectively. The theoretical curves agree reasonably well in magnitude and shape with the data. If it were not for the systematic error and the energy scale uncertainty, the comparison of data and theory would lead to a preference of the GRV over the GS96 density. We conclude that the NLO calculations account reasonably well for the shape and the magnitude of the measured d$\sigma$/d$M_{JJ}$ and d$\sigma$/d$\cos\theta^{\ast}$ as well as for the triple differential cross section d$^3\sigma$/d$E_T$d$\eta_1$d$\eta_2$ as a function of $E_T$ for various bins in $\eta_1$ and $\eta_2$. The last comparison between data and our NLO theory concerns the double-differential inclusive dijet cross section d$^2\sigma$/d$x_{\gamma}^{\rm OBS}$d$E_T$ published just recently by the H1 collaboration \cite{y3}. This H1 analysis is based on 1994 data. The photoproduction events have been selected with the constraint $Q_{\max}^2 = 4$ GeV$^2$ and $0.2 < x_a < 0.83$. The jets were constructed with the cone algorithm with cone size $R=0.7$.
The implementation of the cone algorithm in the H1 analysis uses a fixed cone and is therefore similar to the EUCELL algorithm used by ZEUS. The rapidities of all jets are restricted to the region $-0.5 < \eta < 2.5$. In this specific two-jet analysis, $E_T$ is the average transverse energy of the two jets with the highest $E_T$. The average rapidity of the two jets was restricted to $0 < (\eta_1+\eta_2)/2 < 2$, their difference to $|\eta_1-\eta_2| < 1$, which corresponds to $|\cos\theta^{\ast}| < 0.46$. These two cuts on $\eta_1$ and $\eta_2$ ensured that the jets are in a region with good measurements of the hadronic energy in the detector. The transverse energies of the jets were restricted further to the range \begin{equation} \frac{|E_{T_1}-E_{T_2}|}{E_{T_1}+E_{T_2}} < 0.25, \label{eq50} \end{equation} and $E_T$ was required to lie above 10 GeV. This cut and the cut (\ref{eq50}) ensure that the transverse energy of both observed jets is above 7.5 GeV (eq.~(\ref{eq50}) implies $E_{T_2} > 0.375\,(E_{T_1}+E_{T_2}) = 0.75\,E_T \geq 7.5$~GeV), which avoids underlying event problems, without using the same $E_T$ cut for both jets, which would cause problems in the NLO calculations \cite{Kla96}. The observable $x_{\gamma}^{\rm OBS}$ was calculated from the same formula (\ref{eq51}) as in the ZEUS analysis. The NLO calculations of the inclusive dijet cross section d$^2\sigma$/d$x_{\gamma}^{\rm OBS}$d$\log_{10}(E_T^2/$GeV$^2$) are based on the same parton distributions for the proton and photon, respectively, as used in the previous sections. The scales are chosen equal to $E_T$ as in the comparisons above. We have chosen $R_{\rm sep} = 2 R$, which we believe best simulates the fixed cone algorithm in the H1 analysis. Otherwise the same cuts on $E_{T_1}, E_{T_2}, \eta_1, \eta_2$ are applied as in the experimental analysis of the two-jet data. Our predictions are shown in figure \ref{kkkplot4a}, \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(14,19) \epsfig{file=kkkplot4a.ps,bbllx=70pt,bblly=105pt,bburx=490pt,bbury=715pt,% height=19cm,clip=} \end{picture}} \end{center} \caption{\label{kkkplot4a}{\it $E_T$ dependence of the symmetrized dijet photoproduction cross section integrated over different $x_{\gamma}^{\rm OBS}$ bins. We compare our NLO prediction with GRV and GS96 photon parton densities and $R_{\rm sep} = 2 R$ to 1994 data from H1.}} \end{figure} again for the GRV-HO (full line) and the GS96 (dotted line) photon parton distributions. The curves are compared to the measured cross sections using the statistical and systematic errors added in quadrature. The data and the theoretical cross sections in the various $x_{\gamma}^{\rm OBS}$ bins between 0.1 and 1 have been multiplied by factors $10^n (n=0,1,2, ... ,6)$ in order to separate the results for the seven bins in $x_{\gamma}^{\rm OBS}$. Our calculations give a good description of the data in magnitude and shape as a function of $E_T$ and $x_{\gamma}^{\rm OBS}$, except for some deficiencies in the two highest $x_{\gamma}^{\rm OBS}$ ranges. The GRV-HO and the GS96 parton distribution functions each give a satisfactory description of the measured cross sections with a slight preference for GS96 in the lowest bin $0.1 < x_{\gamma}^{\rm OBS} < 0.2$. It is remarkable that even for $x_{\gamma}^{\rm OBS} < 0.3$ the theoretical cross section agrees so well with the data. This region was always a problem in earlier analyses due to underlying event problems (see for example the inclusive single-jet cross sections for $R=1$ of the ZEUS collaboration discussed above).
These problems are apparently avoided in this analysis by choosing $R=0.7$ and applying sufficiently large $E_T$ cuts. The deviations of the calculated cross sections from the measured ones could easily be due to hadronization effects, which are not included in the NLO calculations. The cross section d$^2\sigma$/d$x_{\gamma}^{\rm OBS}$d$\log_{10}(E_T^2$/GeV$^2$) as a function of $x_{\gamma}^{\rm OBS}$ for fixed $E_T$ should show more clearly the dependence on the photon parton distribution sets. Therefore, we have calculated this cross section for $E_T = 11$ GeV and 13 GeV. The result is shown in figures \ref{kkkplot4c} and \ref{kkkplot4d} for the GRV-HO \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=kkkplot4c.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,% height=12cm,angle=270,clip=} \end{picture}} \end{center} \caption{\label{kkkplot4c}{\it $x_{\gamma}^{\rm OBS}$ dependence of the dijet photoproduction cross section at $E_T = 11$~GeV. We compare our NLO prediction with GRV and GS96 photon parton densities to 1994 data from H1 using a fixed cone algorithm.}} \end{figure} \begin{figure}[p] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=kkkplot4d.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,% height=12cm,angle=270,clip=} \end{picture}} \end{center} \caption{\label{kkkplot4d}{\it $x_{\gamma}^{\rm OBS}$ dependence of the dijet photoproduction cross section at $E_T = 13$~GeV. We compare our NLO prediction with GRV and GS96 photon parton densities to 1994 data from H1 using a fixed cone algorithm.}} \end{figure} and the GS96 parametrizations and compared to the H1 data \cite{y3}, again including statistical and systematic errors added in quadrature. The agreement with the data is reasonable for both values of $E_T$. It seems that the hadronic maximum at $x_{\gamma}^{\rm OBS} \simeq 0.3$ is reduced in going from $E_T=11$ GeV to $E_T=13$ GeV, whereas the pointlike/direct peak at $x_{\gamma}^{\rm OBS} > 0.8$ is enhanced, flattening out the valley in between. This might be due to the QCD evolution of the VMD part of the photon structure function and the increased importance of the anomalous piece and the direct contribution at larger scales. For $x_{\gamma}^{\rm OBS} < 0.3$, we see somewhat better agreement for the prediction with GS96, as we already noticed in connection with the comparison in figure \ref{kkkplot4a}. Both parton distribution sets give very similar predictions in the intermediate $x_{\gamma}^{\rm OBS}$ range. The different results in the low and high $x_{\gamma}^{\rm OBS}$ regions show clearly how much the experimental data must improve in accuracy before we can discriminate between the two alternative parametrizations of the photon distributions used in our calculations. \setcounter{equation}{0} \section{Numerical Results for Photon-Photon Scattering} In this section we report on numerical results for inclusive one-jet and two-jet cross sections in $\gamma\gamma$ collisions. We use the analytical results that have been calculated in leading order in section 3 and in next-to-leading order in section 4. The calculation proceeds as in the photoproduction case except that we now have three contributions: (a) the direct contribution, where both virtual photons interact directly with the quarks, (b) the single-resolved contribution, where only one of the photons interacts directly and the other photon is resolved, and (c) the double-resolved contribution, where both photons are resolved.
It is clear that (b) and (c) have their analogs in the photoproduction case. For case (a) the analytical results are given in section 3.4 (Born matrix element), in section 4.1.3 (virtual corrections), and in section 4.2.9 (real corrections). Three partons appear in the final state of all three contributions when we include the NLO corrections. Two of these partons are recombined if they obey the cluster or the jet-cone condition. For the $\gamma\gamma$ process we shall use only the Snowmass jet algorithm already introduced in section 2.4 with $R=1$ and $R_{\rm sep}=2$. This algorithm was applied in the analysis of the TOPAZ, AMY \cite{x1}, and OPAL \cite{x2} data. However, other jet definitions can be introduced into the calculations without problems, as we have done for the $\gamma p$ case. Before we compare our results with recent data from the OPAL collaboration \cite{y26} obtained at LEP2, we shall present in the next section some general results and discuss some tests of the numerical program. \subsection{Tests and General Results} As in the $\gamma p$ case, we have checked that our results are independent of the $y$-cut when the analytic results are added to the numerical results of the $2\rightarrow 3$ processes. This has been studied for all $2\rightarrow 3$ processes separately. As an example, we show in figure \ref{kkkplot7} the $y$-dependence of the complete double-resolved \begin{figure}[h] \epsfig{file=kkkplot7.eps} \caption{\label{kkkplot7}{\it Dependence of the inclusive dijet cross section on the $y$-cut, the boundary between analytical and numerical integration. Only the double-resolved part is shown.}} \end{figure} two-jet cross section for LEP1 kinematic conditions and at $\eta_1=\eta_2=0$, $E_T=10$ GeV. As expected, the 2-particle contribution is negative and decreases with decreasing $y$ whereas the 3-particle contribution is positive and shows the opposite behavior. The sum of both contributions is independent of $y$ in the range $10^{-4} < y < 10^{-2}$. For larger $y$-cuts the independence breaks down because we neglected terms of order $y$ in the analytic contributions. The slight decrease for $y < 10^{-4}$ is caused by insufficient accuracy in the numerical integrations. This could be improved with more CPU time. For the results in figures \ref{kkkplot7} to \ref{kkkplot9}, we use the LEP1 center-of-mass energy $\sqrt{s} = 90$ GeV. The photon spectrum is calculated from the formula \begin{equation} F_{\gamma/e}(x) = \frac{\alpha}{2\pi} \left[ \frac{1+(1-x)^2}{x} \ln\frac{E^2\theta_c^2(1-x)^2+m_e^2x^2}{m_e^2x^2}+2(1-x)\left( \frac{m_e^2x} {E^2\theta_c^2(1-x)^2+m_e^2x^2}-\frac{1}{x}\right) \right] \label{eq61} \end{equation} where $E$ is the beam energy and $\theta_c=3^\circ$. The parton distributions are computed with the NLO parametrization of GRV \cite{Glu92} transformed to the $\overline{\mbox{MS}}$ subtraction scheme. For all scales we set $\mu=M_a=M_b=E_T$ and put $N_f=5$. We use $\Lambda_{\overline{\mbox{MS}}}^{(5)} = 130$ MeV, corresponding to $\Lambda_{\overline{\mbox{MS}}}^{(4)} = 200$ MeV as fixed in the GRV distributions. Inclusive single-jet cross sections have been calculated earlier \cite{x7,y27}. In these calculations, the double-resolved contribution has been obtained with a different method for canceling infrared and collinear singularities \cite{Sal93} which, however, can be applied only to the computation of this particular cross section. These results were compared to TOPAZ and AMY \cite{x1} jet production data and good agreement was found \cite{y27}.
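For orientation, the photon spectrum (\ref{eq61}) is easily evaluated numerically. The following minimal sketch (in Python; the constants and the function name are our own, purely illustrative choices) implements the formula directly:
\begin{verbatim}
import math

ALPHA = 1.0 / 137.036          # fine-structure constant
ME2 = 0.000511 ** 2            # electron mass squared in GeV^2

def f_gamma_e(x, E, theta_c):
    """Equivalent-photon spectrum given above: beam energy E in
    GeV, anti-tagging angle theta_c in rad. Sketch only."""
    denom = E**2 * theta_c**2 * (1.0 - x)**2 + ME2 * x**2
    log_part = (1.0 + (1.0 - x)**2) / x \
        * math.log(denom / (ME2 * x**2))
    rest = 2.0 * (1.0 - x) * (ME2 * x / denom - 1.0 / x)
    return ALPHA / (2.0 * math.pi) * (log_part + rest)

# LEP1 setting used in the text: E = 45 GeV, theta_c = 3 degrees
print(f_gamma_e(0.1, 45.0, math.radians(3.0)))
\end{verbatim}
With the LEP1 parameters used here, this function gives the flux that multiplies the parton-level cross sections.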
The availability of a different method for calculating the double-resolved contribution to the inclusive one-jet cross section was utilized to test the new results based on the $y$-cut slicing method. Good agreement between the two independent methods was achieved for all double-resolved processes ($qq',q\overline{q}',qq,q\overline{q},qg,gg\rightarrow$ one jet) separately. In the following figures we show results for d$^3\sigma/$d$E_T$d$\eta_1$d$\eta_2$ as a function of $E_T$ for special $\eta$ values and as a function of $\eta_2$ for $E_T=5$ GeV and $\eta_1=0$ using LEP1 kinematics. In figure \ref{kkkplot8} \begin{figure}[p] \epsfig{file=kkkplot8.eps} \caption{\label{kkkplot8}{\it Direct, single-resolved, and double-resolved contributions and their sum in LO and NLO for the inclusive two-jet cross section at LEP1. Upper figure: $E_T$-spectrum, lower figure: corresponding $k$-factors.}} \end{figure} we show the $E_T$ distribution for $\eta_1=\eta_2=0$ in LO and NLO. We plot the direct, the single-resolved, and the double-resolved contributions separately and show the sum of all three contributions. From the $E_T$ spectrum it is already visible that the direct contribution is reduced by the NLO corrections whereas for the resolved contribution the NLO corrections are positive. The correction in the double-resolved contribution is near 100\%. This is seen more clearly in the lower part of figure \ref{kkkplot8}, where we plot the $k$-factor as a function of $E_T$ for the three contributions and the sum. We remark that the LO curves are calculated with the two-loop $\alpha_s$ and the same NLO parton distributions of the photon as in the NLO calculation. For a genuine LO prediction, one would choose the one-loop $\alpha_s$ formula and LO parton distributions. This would change the $k$-factors. In figure \ref{kkkplot8}, the $k$-factors show just the influence of the NLO corrections to the parton-parton scattering cross sections. The upper part of figure \ref{kkkplot8} also shows how the contributions of the three components (direct, single-resolved, and double-resolved) sum up to the full inclusive two-jet cross section. For relatively small $E_T$, this cross section is dominated by the two resolved components. In this region we have a positive NLO correction and a $k$-factor larger than 1. In the medium $E_T$ range and for large $E_T$, the direct component dominates, the net NLO correction is negative, and the $k$-factor is less than 1. It is clear that the importance of the resolved parts for larger $E_T$ increases with increasing center-of-mass energy. In figure \ref{kkkplot9}, we show the $\eta_2$ distribution for $\eta_1=0$ and \begin{figure}[p] \epsfig{file=kkkplot9.eps} \caption{\label{kkkplot9}{\it Direct, single-resolved, and double-resolved contributions and their sum in LO and NLO for the inclusive two-jet cross section at LEP1. Upper figure: $\eta$-spectrum, lower figure: corresponding $k$-factors.}} \end{figure} $E_T=5$ GeV, again for the three components separately and their sum in LO and in NLO. The $k$-factors corresponding to the upper part of figure \ref{kkkplot9} are shown in the lower part. The three components show a very distinct behavior as a function of $\eta_2$. The single-resolved contribution does not have a plateau as broad as the other two components. So, by measuring in different $\eta_2$ regions, it might be possible to enhance one or two of the three contributions.
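Before turning to the next numerical test, we recall schematically why the factorization scale dependence must compensate between the fixed-order components (this is the standard factorization argument, paraphrased): the initial-state collinear subtraction in the NLO direct contribution produces an explicit term proportional to $-\frac{\alpha}{2\pi}\ln M_a^2\, P_{q\leftarrow\gamma}\otimes\mbox{d}\sigma^{\rm LO}$, while the photon parton distributions in the single-resolved contribution carry the same logarithm with opposite sign through the pointlike (inhomogeneous) term of their evolution equation, so that \begin{equation} \frac{\mbox{d}}{\mbox{d}\ln M_a^2} \left[ \sigma^{\rm dir}_{\rm NLO}(M_a) + \sigma^{\rm 1-res}_{\rm LO}(M_a) \right] \simeq 0 \end{equation} up to terms of higher order. The analogous cancellation connects the single-resolved and double-resolved contributions.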
An important test of our calculations is the compensation of the factorization scale dependence between the direct and the single-resolved components and between the single-resolved and the double-resolved components. That this compensation works is shown in figures \ref{kkkplot10} a) and b) for the inclusive two-jet \begin{figure}[p] \epsfig{file=kkkplot10.eps} \caption{\label{kkkplot10}{\it Dependence of the inclusive two-jet cross section on the factorization scale $M_a$ in the photon $\gamma_a$. Upper figure: NLO direct and LO single-resolved photon (only $\gamma_a$ resolved), lower figure: NLO single-resolved (only $\gamma_b$ resolved) and LO double-resolved photon.}} \end{figure} cross section at $E_T=10$ GeV and $\eta_1=\eta_2=0$. For this test we applied LEP2 kinematics, i.e.\ $\sqrt{s}=175$ GeV and $\theta_c=1.72^\circ$ in the photon spectra. In figure \ref{kkkplot10} a), the NLO direct and the LO single-resolved cross section, where only the upper photon $\gamma_a$ is resolved, are plotted as a function of $M_a/E_T$. The dependence of the NLO direct contribution on $M_a$ is clearly visible. This is opposite to the dependence of the single-resolved contribution originating from the scale dependence of the parton distributions. As one can see, the sum is constant as a function of $M_a/E_T$. The same compensation occurs between the NLO single-resolved and the LO double-resolved cross section. In the single-resolved cross section, only the lower photon $\gamma_b$ is resolved. Also in this case (figure \ref{kkkplot10} b) the sum of both contributions is rather independent of $M_a$. Another important topic for the NLO theory is the question of the overall scale dependence and whether this is reduced in NLO as compared to the LO cross section. First we show the dependence on the renormalization scale $\mu$ alone with the factorization scales $M_a=M_b=E_T$ fixed. In figure \ref{kkkplot11} a) it is clearly visible that in NLO the \begin{figure}[p] \epsfig{file=kkkplot11.eps} \caption{\label{kkkplot11}{\it Dependence of the inclusive two-jet cross section on the renormalization scale $\mu$ (a) and on the variation of all scales $\mu=M_a=M_b=M$ (b).}} \end{figure} dependence on $\mu$ is reduced as compared to LO when $\mu$ is varied between $\mu=E_T/2$ and $2E_T$. If all scales $\mu=M_a=M_b=M$ are equal and this common scale $M$ varies in the same range for the LO and NLO cross section, we obtain the curves in figure \ref{kkkplot11} b). We see that the dependence in LO is such that the cross section as a function of $M$ increases, which is opposite to the behavior as a function of the renormalization scale $\mu$ in figure \ref{kkkplot11} a). This comes from the dependence on the factorization scale, which is not compensated in LO (see figures \ref{kkkplot10} a) and b)). In NLO we have the effect that the dependence on the factorization scale is reduced due to the presence of the NLO corrections, so that in NLO the cross section as a function of $M$ is fairly constant, i.e.\ it decreases only slightly with increasing $M/E_T$. In conclusion we can say that the NLO cross section is nearly independent of the scales and therefore presents a much more solid prediction than the LO cross section. When high statistics data become available, one could test the variation of the jet cross section with changing cone radius $R$. This has been studied for the dijet cross section for $E_T=10$ GeV and $\eta_1=\eta_2=0$.
The result is displayed in figure \ref{kkkplot12}, where the NLO cross section \begin{figure}[h] \epsfig{file=kkkplot12.eps} \caption{\label{kkkplot12}{\it Dependence of the inclusive two-jet cross section on the size of the jet cone $R$ for LO and NLO.}} \end{figure} d$^3\sigma$/d$E_T$d$\eta_1$d$\eta_2$ is shown as a function of $R$ for $R \geq 0.5$. It increases as a function of $R$ almost like $a+b\ln R$ and somewhat more strongly for $R>1$. With our definition of the LO cross section the NLO result at $R=0.9$ is equal to the LO cross section, so that the NLO corrections stay moderate if $R$ varies between 0.7 and 1. \subsection{Comparison with Experimental Data} The first measurements of jet production in $\gamma\gamma$ reactions were done by the TOPAZ and AMY \cite{x1} collaborations at TRISTAN. They presented data for the inclusive one- and two-jet cross sections as a function of $E_T$ in the range 2.5 GeV $< E_T <$ 8 GeV with different ranges of rapidity in the two experiments. These data were compared in \cite{x9} to predictions based on the theoretical work presented here. Good agreement between the data of both experiments and the theoretical results was achieved for the one- and two-jet cross sections. The distribution in $E_T$ and also the absolute normalization agreed when the parton distributions were described by the GRV set. At the energy of the TRISTAN collider ($\sqrt{s} = 58$ GeV), the main contribution to the jet cross sections comes from the direct process. However, the data could not be reproduced by the direct component alone. The resolved contributions were necessary to obtain agreement over the whole $E_T$ range. In the fall of 1995 the LEP ring was operated at the center-of-mass energy of 133 GeV. During the short run period, the OPAL collaboration collected data for jet cross sections in $\gamma\gamma$ collisions. They measured the inclusive one-jet cross section integrated over $|\eta|<1.0$ and the inclusive two-jet cross section for $|\eta_1|,|\eta_2|<1$ as a function of $E_T$ in the range 3.0 GeV $< E_T < 14$ GeV \cite{x2}. These data were compared to our calculated NLO one- and two-jet cross sections in \cite{x2,x9}, and good agreement between the measurements and the predictions was found. The calculations were done with $N_f=5$, $\Lambda_{\overline{\mbox{MS}}}^{(5)} =130$ MeV and the NLO GRV parton distributions. In \cite{x11}, the dependence of the two-jet prediction on other parton distribution sets was also studied, with the result that the other NLO sets, GS \cite{Gor92} and ACFGP \cite{Aur92}, lead to almost the same results, so that the OPAL data could not be used to rule out any of these sets. In these calculations, the photon spectra were described by eq.~(\ref{eq61}) with $\theta_c=1.43^\circ$ as in the experimental setup. The cone radius was $R=1$ and $R_{\rm sep}=2$. In the meantime the LEP energy was raised and the OPAL collaboration extended their measurements to the center-of-mass energies of $\sqrt{s}=161$ GeV and 172 GeV \cite{y26}. They presented one-jet cross sections for $|\eta|<1$ and two-jet cross sections for $|\eta_1|,|\eta_2|<1$ ($\sqrt{s} = 161$ GeV) and two-jet cross sections for $|\eta_1|,|\eta_2|<2$ ($\sqrt{s} = 161$ GeV and 172 GeV). The latter cross section was already compared to our predictions in \cite{y26}. As an example, we show the experimental results of the OPAL collaboration from the $\sqrt{s}=161$ GeV run.
In figures \ref{kkkplot13} and \ref{kkkplot14} \begin{figure}[h] \epsfig{file=kkkplot13.eps} \caption{\label{kkkplot13}{\it The inclusive one-jet cross section as a function of $E_T$ with $|\eta| < 1$ compared to our NLO calculation. The direct, single-resolved, and double-resolved cross sections and the sum (full line) are shown separately.}} \end{figure} \begin{figure}[h] \epsfig{file=kkkplot14.eps} \caption{\label{kkkplot14}{\it The inclusive two-jet cross section as a function of $E_T$ with $|\eta_1|,|\eta_2| < 1$ compared to our NLO calculation. The direct, single-resolved, and double-resolved cross sections and the sum (full line) are shown separately.}} \end{figure} the inclusive one- and two-jet cross section data as a function of $E_T$ for jets with $|\eta|< 1$ are compared to the NLO calculations with $R=1$, $R_{\rm sep}=2$, and the NLO GRV parametrization for the photon. The parameters were $\theta_c=33$ mrad, $\Lambda_{\overline{\mbox{MS}}}^{(5)} = 130$ MeV, and $\mu=M_a=M_b=E_T$. The direct, single-resolved, double-resolved cross sections and their sum (full line) are shown separately. As we can see, the agreement between data and the predictions is good. The resolved cross sections dominate in the region $E_T \leq 7$ GeV, whereas at high $E_T$ the direct cross section dominates. We emphasize that the inclusive two-jet cross section is measured using events with at least two jets. If an event contains more than two jets, only the two jets with the highest $E_T$ values are used. This definition of the inclusive two-jet cross section coincides with what is done in the theoretical calculation. The good agreement between measured and calculated cross section is remarkable, since the NLO calculation gives the cross sections for massless partons, which are combined into jets, whereas the experimental jet cross sections are measured for jets built of hadrons. The good agreement then tells us that in $\gamma\gamma$ jet production, parton-hadron duality is realized to a high degree and no major disturbance due to underlying event energy is present. This is in contrast to what we observed for $\gamma p$ jet production at lower $E_T$, where there was disagreement at the larger rapidity values. As a last point we confront the recently published $|\cos\theta^\ast|$ distribution of the OPAL collaboration with the theoretical predictions. These data were taken for two $x_{\gamma}^{\rm OBS}$ intervals to separate direct- and resolved-dominated contributions, similarly to what was done for jet production in $ep$ scattering (see section 5). The selection cuts were $x_{\gamma}^{\pm} > 0.8$ (direct-dominated) and $x_{\gamma}^{\pm} < 0.8$ (resolved-dominated), where \begin{eqnarray} x_{\gamma}^+ &=& \frac{\sum_{i=1}^2 E_{T_i} e^{-\eta_i}}{2x_aE_{e^+}}\\ x_{\gamma}^- &=& \frac{\sum_{i=1}^2 E_{T_i} e^{ \eta_i}}{2x_bE_{e^-}} \end{eqnarray} are in LO equal to the fractions of the photon energies of the upper and lower vertex entering the hard parton-parton scattering process. Thus, in LO the direct process has $x_{\gamma}^+=x_{\gamma}^-=1$, whereas the double-resolved process occurs only for $x_{\gamma}^+,x_{\gamma}^- < 1$. The cross section data for $|\cos\theta^\ast|$ between 0 and 0.85 for the two event classes $x_{\gamma}^{\pm}>0.8$ and $x_{\gamma}^{\pm}<0.8$ are exhibited in figure \ref{kkkplot15}.
The data points are normalized to have an \begin{figure}[h] \begin{center} {\unitlength1cm \begin{picture}(12,8) \epsfig{file=kkkplot15.ps,bbllx=520pt,bblly=95pt,bburx=105pt,bbury=710pt,% height=12cm,angle=270,clip=} \end{picture}} \end{center} \caption{\label{kkkplot15}{\it $|\cos\theta^\ast|$ dependence of the dijet $\gamma\gamma$ cross section for $M_{JJ} > 12$ GeV, $|\eta_1|,|\eta_2| < 1$ and $|\overline{\eta}| < 1$. We compare our NLO prediction with GRV and GS96 photon parton densities for the direct-enhanced and double-resolved-enhanced regions to preliminary data from OPAL.}} \end{figure} average value of 1 in the first three bins and are plotted at the center of the bins. The data with $x_{\gamma}^{\pm}> 0.8$ show a small rise with increasing $|\cos\theta^\ast|$, whereas the data for $x_{\gamma}^{\pm}< 0.8$ show a much stronger rise in $|\cos\theta^\ast|$, similar to the findings in $\gamma p$ scattering, where we pointed out how this qualitative behavior is related to the exchanges appearing in the LO cross sections. The theoretical curves for the cross section in the two $x_{\gamma}^{\pm}$ ranges are also shown using the two sets for the parton distribution functions, NLO GRV and NLO GS96 \cite{Gor96}. In these calculations, $|\eta_1|,|\eta_2| < 1$, $|\overline{\eta}| < 1$, where $\overline{\eta} = \frac{1}{2} (\eta_1+\eta_2)$, and the invariant two-jet mass $M_{JJ} > 12 $ GeV, as in the experimental selection \cite{y26}. $M_{JJ}$ is calculated from the two jets with highest $E_T$. All other parameters are as stated previously. The theoretical curves in figure \ref{kkkplot15} are normalized at $|\cos\theta^\ast|=0$. We observe that the shape of the $|\cos\theta^\ast|$ distributions in the two $x_{\gamma}^{\pm}$ bins is nicely reproduced in comparison with the data. This means that the qualitative behavior known from LO arguments is still present in NLO and agrees with the experimental data. On the quantitative side, the theoretical cross sections seem to increase more strongly towards $|\cos\theta^\ast| = 1$ than the data indicate, both for the direct and for the double-resolved event sample. However, a new OPAL analysis shows that the quantitative agreement there is much better for the resolved sample if the $k_T$-cluster algorithm is used instead of the cone algorithm. We remark that the data analysis is still going on, i.e.\ the experimental data are still preliminary and we have to wait for the final analysis before further conclusions can be drawn. \setcounter{equation}{0} \section{Summary and Outlook} In this work we have presented a complete next-to-leading order calculation of direct and resolved photoproduction of one and two jets in $\gamma p$ and $\gamma\gamma$ collisions. Photon-proton and photon-photon scattering were considered simultaneously since the two processes are intimately related to each other. The results are of great importance not only as a test of quantum chromodynamics (QCD), but also for the measurement of the proton and photon parton densities currently performed at HERA and LEP. First, we embedded the perturbatively calculable photon-parton scattering in the experimentally observable electron-proton scattering process, using the Weizs\"acker-Williams approximation and universal structure functions for the proton and the photon. For these structure functions, we chose recent next-to-leading order parametrizations from the CTEQ and GRV collaborations. Special emphasis was given to experimental and theoretical ambiguities in the Snowmass jet definition.
The hard photon-parton and photon-photon scattering cross sections were calculated in leading and in next-to-leading order. The analytical calculation included the tree-level Born matrix elements, the virtual corrections with one internal loop, and the real corrections with the radiation of a third particle in the initial or final state. We integrated the latter over singular regions of phase space up to an invariant mass cut $y$. This was done in $d$ dimensions in order to regularize the soft and collinear divergences. All infrared singularities canceled, as they must according to the Kinoshita-Lee-Nauenberg theorem. The ultraviolet poles in the virtual corrections and the collinear poles in the initial state corrections were absorbed into the Lagrangian and the structure functions, respectively. The cross sections proved to be independent of the technical $y$-cut and less dependent on the renormalization and factorization scales than the LO results, once added to the regular three-body contributions and integrated numerically. Excellent agreement with the existing programs of B\"odeker and Salesch was found for the inclusive single-jet predictions. We extensively studied the direct, resolved, and complete (in the case of $\gamma p$) and the direct, single-resolved, double-resolved, and complete (in the case of $\gamma\gamma$) one- and two-jet distributions in the transverse energies and rapidities of the observed jets as well as the dependence on the jet cone size $R$. Finally, we compared similar distributions to inclusive and dijet data from H1, ZEUS, and OPAL and found good agreement in all cases. It is possible to extend the work presented here in a number of different ways. First, the formalism presented can easily be extended to the calculation of $\gamma p$ and $\gamma\gamma$ collisions where one of the photons has a virtuality larger than zero, although still small compared to $E_T$ \cite{y34}. Second, observables other than those considered in this work will allow further tests of the theory and make it easier to isolate the resolved contributions, in order to obtain information on the parton distributions in the photon at various scales. Third, our program can be used to predict single and dijet cross sections in proton-proton and proton-antiproton scattering at the TEVATRON at Fermilab in Chicago or at the planned LHC at CERN in Geneva. \setcounter{equation}{0} \begin{appendix} \renewcommand{\theequation}{\mbox{\Alph{section}.\arabic{equation}}} \section{Phase Space Integrals for Final State Singularities} This appendix contains the formul{\ae} needed to integrate the real $2\rightarrow 3$ matrix elements in sections 4.2.2 and 4.2.3 over phase space regions with final state singularities. To this end, we have defined in section 4.2.1 a measure \begin{equation} \int \mbox{d}\mu_F = \int\limits_0^{y_F} \mbox{d}z' z'^{-\varepsilon} \left( 1+\frac{z's}{t} \right) ^{-\varepsilon} \int\limits_0^1 \frac{\mbox{d}b}{N_b} b^{-\varepsilon} \left( 1-b \right) ^{-\varepsilon} \int\limits_0^\pi \frac{\mbox{d}\phi}{N_\phi} \sin ^{-2\varepsilon}\phi, \end{equation} which contains all variables associated with the phase space of the unobserved subsystem $\overline{p}_{1}=p_1+p_3$. The integral over the azimuthal angle $\phi$ is trivial, as the matrix elements are independent of $\phi$.
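As an independent cross-check, the $\varepsilon$-expansion of integrals of this type can be reproduced with a computer algebra system. The following sketch (Python/sympy) verifies the $f_2$ integral quoted below under two assumptions that we state explicitly: $N_b$ is taken to be the Euler beta function $B(1-\varepsilon,1-\varepsilon)$, and the factor $(1+z's/t)^{-\varepsilon}$ is dropped, since it only contributes at ${\cal O}(\varepsilon)$ in this integral: \begin{verbatim}
import sympy as sp

eps, yF = sp.symbols('epsilon y_F', positive=True)

# b-integral: (1-eps) * int_0^1 db b^(-eps) (1-b)^(1-eps) / N_b
# with N_b = B(1-eps, 1-eps); this reduces exactly to (1-eps)/2
Nb = sp.gamma(1 - eps)**2 / sp.gamma(2 - 2*eps)
b_part = (1 - eps)*sp.gamma(1 - eps)*sp.gamma(2 - eps)/sp.gamma(3 - 2*eps)/Nb

# z'-integral: int_0^{yF} dz' z'^(-1-eps) = -yF^(-eps)/eps
z_part = -yF**(-eps)/eps

print(sp.series(sp.simplify(b_part)*z_part, eps, 0, 1))
# -1/(2*epsilon) + 1/2 + log(y_F)/2 + O(epsilon)
\end{verbatim} The printed expansion agrees with the result for $f_2$ given below.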
We give the results for the four generic types of integrals in the following: \begin{eqnarray} f_1(a) & = & \int \mbox{d}\mu_F\frac{1}{z'}\frac{a}{z'+ab} =\int \mbox{d}\mu_F\frac{1}{z'}\frac{a}{z'+a(1-b)} \nonumber \\ & = & \frac{1}{2\varepsilon^2}+\frac{1}{\varepsilon}\left(-1-\frac{1}{2}\ln a\right) +\ln a-\frac{1}{2}\ln^2\frac{y_F}{a}+\frac{1}{4} \ln^2 a-\mbox{Li}_2\left( -\frac{y_F}{a} \right)-\frac{\pi^2}{6}+ {\cal O}(\varepsilon), \\ & & \nonumber \\ f_2(a) & = & \int \mbox{d}\mu_F\frac{1}{z'}(1-b)(1-\varepsilon)\nonumber \\ & = & - \frac{1}{2\varepsilon}+\frac{1}{2} + \frac{1}{2}\ln y_F +{\cal O} (\varepsilon), \\ & & \nonumber \\ f_3(a) & = & \int \mbox{d}\mu_F\frac{1}{z'} \nonumber \\ & = & - \frac{1}{\varepsilon}+\ln y_F +{\cal O}(\varepsilon), \\ & & \nonumber \\ f_4(a) & = & \int \mbox{d}\mu_F\frac{1}{z'}(1-b+b^2) \nonumber \\ & = & - \frac{5}{6\varepsilon}+\frac{5}{6}\ln y_F -\frac{1}{18}+{\cal O} (\varepsilon). \end{eqnarray} Terms of order $\varepsilon$ have been omitted since they vanish in the limit $d\rightarrow 4$. \setcounter{equation}{0} \section{Phase Space Integrals for Initial State Singularities} In this appendix, we calculate the formul{\ae} needed to integrate the real $2\rightarrow 3$ matrix elements in sections 4.2.5 through 4.2.8 over phase space regions with initial state singularities. The integration measure was defined in section 4.2.4 and is given by \begin{equation} \int \mbox{d}\mu_I = \int\limits_0^{y_I} \mbox{d}z'' z''^{-\varepsilon} \int\limits_{X_a}^1 \frac{\mbox{d}z_a}{z_a} \left( \frac{z_a}{1-z_a}\right) ^{\varepsilon} \int\limits_0^\pi \frac{\mbox{d}\phi} {N_\phi}\sin^{-2\varepsilon}\phi \frac{\Gamma(1-2\varepsilon)}{\Gamma^2(1-\varepsilon)}. \end{equation} The integration variables are associated with the phase space of the unobserved particle $p_3$. The integral over the azimuthal angle $\phi$ is again trivial, as the matrix elements do not depend on $\phi$. The longitudinal momentum fraction $z_a$ is integrated over numerically, since the cross sections still have to be folded with the parton densities. These are contained in the functions $g(z_a)$ below. We give the results for the four generic types of integrals in the following: \begin{eqnarray} i_1(a) & = & \int \mbox{d}\mu_I g(z_a) \frac{1}{z''}\frac{a}{z''+a(1-z_a)} \nonumber \\ & = & \int\limits_{X_a}^1\frac{\mbox{d}z_a}{z_a}g(z_a) \left[ -\frac{1}{\varepsilon}-\frac{1}{\varepsilon}\frac{z_a}{(1-z_a)_+}+\left( \frac{\ln\left( a\left( \frac{1-z_a}{z_a}\right) ^2\right) }{1-z_a}\right) _+-\ln\left( a\left( \frac{1-z_a} {z_a}\right) ^2\right) \right. \nonumber\\ & & \hspace{2.25cm}\left. 
-\frac{z_a}{1-z_a}\ln\left( 1+\frac{a(1-z_a)} {y_Iz_a}\right) +\ln\left( y_I\left( \frac{1-z_a}{z_a}\right) \right) \right] \nonumber \\ & & +\hspace{1.05cm}g(1)\left[ \frac{1}{2\varepsilon^2}-\frac{1}{2\varepsilon}\ln a +\frac{1}{4}\ln^2a+\frac{\pi^2}{2}\right] +{\cal O}(\varepsilon),\\ & & \nonumber \\ i_2(a) & = & \int \mbox{d}\mu_I g(z_a) \frac{1}{z''}\nonumber \\ & = & \int\limits_{X_a}^1\frac{\mbox{d}z_a}{z_a}g(z_a) \left[ -\frac{1}{\varepsilon}+\ln\left( y_I\frac{1-z_a}{z_a}\right) \right] +{\cal O}(\varepsilon),\\ & & \nonumber \\ i_3(a) & = & \int \mbox{d}\mu_I g(z_a) \frac{1}{z''}(1-z_a)(1-\varepsilon) \nonumber \\ & = & \int\limits_{X_a}^1\frac{\mbox{d}z_a}{z_a}g(z_a) \left[ -\frac{1}{\varepsilon} (1-z_a)+(1-z_a)+(1-z_a)\ln\left( y_I\frac{1-z_a}{z_a}\right) \right] +{\cal O}(\varepsilon),\\ & & \nonumber \\ i_4(a) & = & \int \mbox{d}\mu_I g(z_a) \frac{1}{z''}\left( \frac{z_a^2-2z_a+2 -\varepsilon z_a^2}{z_a}\right) \frac{1}{1-\varepsilon}\nonumber \\ & = & \int\limits_{X_a}^1\frac{\mbox{d}z_a}{z_a}g(z_a) \left[ -\frac{1}{\varepsilon}\frac{z_a^2-2z_a+2}{z_a}+\frac{z_a^2-2z_a+2}{z_a} \ln\left( y_I\frac{1-z_a}{z_a}\right) -2\frac{1-z_a}{z_a} \right] +{\cal O}(\varepsilon). \end{eqnarray} Terms of order $\varepsilon$ have again been omitted. \end{appendix}
\section{Introduction} Let $\rho$ denote the density operator, acting on the finite-dimensional Hilbert space $H=H_A\otimes H_B$, which describes the state of two quantum systems $A$ and $B$. The state is said to be separable if $\rho$ can be written as a convex combination of product vectors \cite{Wer}, i.e. \begin{equation}\label{separable} \rho = \sum_ip_i|\phi_i,\varphi_i\rangle\langle\phi_i,\varphi_i| = \sum_i p_i \, \rho^A_i\otimes\rho^B_i \,, \end{equation} where $0 \leq p_i \leq 1$, $\sum_i p_i = 1$, and $|\phi_i,\varphi_i\rangle=|\phi_i\rangle_A\otimes|\varphi_i\rangle_B$ ($|\phi\rangle_A\in H_A$ and $|\varphi\rangle_B\in H_B$). If $\rho$ cannot be written as in Eq.\ (\ref{separable}), then the state is said to be entangled. Entanglement is responsible for many of the striking features of quantum theory and, therefore, it has been an object of special attention. Since the early years of quantum mechanics, it has been present in many of the debates regarding the foundations and implications of the theory (see e.g. \cite{epr}), but in the last ten years this interest has greatly increased, especially from a practical point of view, because entanglement is an essential ingredient in the applications of quantum information theory, such as quantum cryptography, dense coding, teleportation and quantum computation \cite{Nie,Bou}. As a consequence, much effort has been devoted to the so-called separability problem, which consists in finding mathematical conditions that provide a practical way to check whether a given state is entangled or not, since it is in general very hard to verify if a decomposition according to the definition of separability (\ref{separable}) exists. Up to now, a conclusive answer to the separability question can only be given when $\dim H_A=2$ and $\dim H_B=2$ or $\dim H_B=3$, in which case the Peres-Horodecki criterion \cite{Per,Hor} establishes that $\rho$ is separable if and only if its partial transpose (i.e. transpose with respect to one of the subsystems) is positive. For higher dimensions this is just a necessary condition \cite{Hor}, since there exist entangled states with positive partial transpose (PPT) which are bound entangled (i.e. their entanglement cannot be distilled to the singlet form). Therefore the separability problem remains open. Much subsequent work has been devoted to finding necessary conditions for separability (see for example \cite{iso,NieKem,Rud1,Che,Hof,Guh,deV}), given that they can assure the presence of entanglement in experiments and that, in principle, they might complement the strong Peres-Horodecki criterion by detecting PPT entanglement. Nevertheless, there also exists a great variety of sufficient conditions (such as \cite{ZHSL,Gur}), non-operational necessary and sufficient conditions (see for instance \cite{Hor,Ter,Wu}), and necessary and sufficient conditions which apply to restricted sets such as low-rank density matrices \cite{lowrank}. Furthermore, given a generic separable density matrix it is not known how to decompose it according to Eq.\ (\ref{separable}) save for the ($2\times2$)-dimensional case \cite{Woo,San}. The (approximate) separability problem is NP-hard \cite{NP}, but several authors have devised nontrivial algorithms for it (see \cite{Ian3} for a survey). In this paper we derive a necessary condition and three sufficient conditions for the separability of bipartite quantum systems of arbitrary dimensions.
The proofs of the latter conditions are constructive, so they provide decompositions in product states as in Eq.\ (\ref{separable}) for the separable states that fulfill them. Our results are obtained using the Bloch representation of density matrices, which has been used in previous works to characterize the separability of a certain class of bipartite qubit states \cite{Hor2} and to study the separability of bipartite states near the maximally mixed one \cite{Cav,Run}. The approach presented here is different and more general. We will also provide examples that show the usefulness of the conditions derived here. Remarkably, the necessary condition is strong enough to detect PPT entangled states. Finally, we will compare this condition to the so-called computable cross-norm \cite{Rud1} or realignment \cite{Che} (CCNR) criterion, which exhibits a powerful PPT entanglement detection capability, showing that for a certain class of states our condition is stronger. \section{Bloch Representation} $N$-level quantum states are described by density operators, i.e. unit trace Hermitian positive semidefinite linear operators, which act on the Hilbert space $H\simeq\mathbb{C}^N$. The Hermitian operators acting on $H$ constitute themselves a Hilbert space, the so-called Hilbert-Schmidt space $HS(H)$, with inner product $\langle\rho,\tau\rangle_{HS}=\textrm{Tr}(\rho^\dag\tau)$. Accordingly, the density operators can be expanded by any basis of this space. In particular, we can choose to expand $\rho$ in terms of the identity operator $I_N$ and the traceless Hermitian generators of $SU(N)$ $\lambda_i$ $(i=1,2,\ldots,N^2-1)$, \begin{equation}\label{bloch} \rho=\frac{1}{N}(I_N+r_i\lambda_i), \end{equation} where, as we shall do throughout this paper, we adhere to the convention of summation over repeated indices. The generators of $SU(N)$ satisfy the orthogonality relation \begin{equation}\label{ortogonalidad} \langle\lambda_i,\lambda_j\rangle_{HS}=\textrm{Tr}(\lambda_i\lambda_j)=2\delta_{ij}, \end{equation} and they are characterized by the structure constants of the corresponding Lie algebra, $f_{ijk}$ and $g_{ijk}$, which are, respectively, completely antisymmetric and completely symmetric, \begin{equation}\label{algebrasu} \lambda_i\lambda_j=\frac{2}{N}\delta_{ij}I_N+if_{ijk}\lambda_k+g_{ijk}\lambda_k. \end{equation} The generators can be easily constructed from any orthonormal basis $\{|a\rangle\}_{a=0}^{N-1}$ in $H$ \cite{Hio}. Let $l,j,k$ be indices such that $0\leq l\leq N-2$ and $0\leq j<k\leq N-1$. Then, when $i=1,\ldots,N-1$ \begin{equation}\label{generadoresa} \lambda_i=w_l\equiv\sqrt{\frac{2}{(l+1)(l+2)}}\left(\sum_{a=0}^l|a\rangle\langle a|-(l+1)|l+1\rangle\langle l+1|\right),\quad \end{equation} while when $i=N,\ldots,(N+2)(N-1)/2$ \begin{equation}\label{generadoresb} \lambda_i=u_{jk}\equiv|j\rangle\langle k|+|k\rangle\langle j|, \end{equation} and when $i=N(N+1)/2,\ldots,N^2-1$ \begin{equation}\label{generadoresc} \lambda_i=v_{jk}\equiv-i(|j\rangle\langle k|-|k\rangle\langle j|). \end{equation} The orthogonality relation (\ref{ortogonalidad}) implies that the coefficients in (\ref{bloch}) are given by \begin{equation}\label{ri} r_i=\frac{N}{2}\textrm{Tr}(\rho\lambda_i). \end{equation} Notice that the coefficient of $I_N$ is fixed due to the unit trace condition. The vector $\textbf{r}=(r_1 r_2\cdots r_{N^2-1})^t\in \mathbb{R}^{N^2-1}$, which completely characterizes the density operator, is called Bloch vector or coherence vector. 
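The construction (\ref{generadoresa})--(\ref{generadoresc}) is easy to implement numerically. The following sketch (Python/NumPy, written for illustration; the $u_{jk}$ and $v_{jk}$ are interleaved rather than ordered exactly as in the index ranges above, which is immaterial for what follows) builds the $N^2-1$ generators and verifies the orthogonality relation (\ref{ortogonalidad}): \begin{verbatim}
import numpy as np

def su_generators(N):
    gens = []
    # diagonal generators w_l, l = 0, ..., N-2
    for l in range(N - 1):
        w = np.zeros((N, N), dtype=complex)
        for a in range(l + 1):
            w[a, a] = 1.0
        w[l + 1, l + 1] = -(l + 1)
        gens.append(np.sqrt(2.0/((l + 1)*(l + 2))) * w)
    # symmetric u_jk and antisymmetric v_jk, 0 <= j < k <= N-1
    for j in range(N):
        for k in range(j + 1, N):
            u = np.zeros((N, N), dtype=complex)
            u[j, k] = u[k, j] = 1.0
            v = np.zeros((N, N), dtype=complex)
            v[j, k] = -1j; v[k, j] = 1j
            gens.extend([u, v])
    return gens                 # N^2 - 1 traceless Hermitian matrices

lam = su_generators(3)
for i, li in enumerate(lam):
    for j, lj in enumerate(lam):
        assert abs(np.trace(li @ lj) - 2.0*(i == j)) < 1e-12
\end{verbatim}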
The representation (\ref{bloch}) was introduced by Bloch \cite{Blo} in the $N=2$ case and generalized to arbitrary dimensions in \cite{Hio}. It has an interesting appeal from the experimentalist point of view, since in this way it becomes clear how the density operator can be constructed from the expectation values of the operators $\lambda_i$, \begin{equation} \langle\lambda_i\rangle=\textrm{Tr}(\rho\lambda_i)=\frac{2}{N}r_i. \end{equation} As we have seen, every density operator admits a representation as in Eq.\ (\ref{bloch}); however, the converse is not true. A matrix of the form (\ref{bloch}) is of unit trace and Hermitian, but it might not be positive semidefinite, so to guarantee this property further restrictions must be added to the coherence vector. The set of all the Bloch vectors that constitute a density operator is known as the Bloch-vector space $B(\mathbb{R}^{N^2-1})$. It is widely known that in the case $N=2$ this space equals the unit ball in $\mathbb{R}^3$ and pure states are represented by vectors on the unit sphere. The problem of determining $B(\mathbb{R}^{N^2-1})$ when $N\geq3$ is still open and a subject of current research (see for example \cite{KimKos} and references therein). However, many of its properties are known. For instance, using Eq.\ (\ref{algebrasu}), one finds that pure states ($\rho^2=\rho$) must satisfy \begin{equation}\label{rpuro} ||\textbf{r}||_2=\sqrt{\frac{N(N-1)}{2}},\quad r_ir_jg_{ijk}=(N-2)r_k, \end{equation} where $||\cdot||_2$ is the Euclidean norm on $\mathbb{R}^{N^2-1}$. In the case of mixed states, the conditions that the coherence vector must satisfy in order to represent a density operator have recently been provided in \cite{Kim,Byr}. Regrettably, their mathematical expression is rather cumbersome. It is also known \cite{Har,Kos} that $B(\mathbb{R}^{N^2-1})$ is a subset of the ball $D_R(\mathbb{R}^{N^2-1})$ of radius $R=\sqrt{\frac{N(N-1)}{2}}$, which is the minimum ball containing it, and that the ball $D_r(\mathbb{R}^{N^2-1})$ of radius $r=\sqrt{\frac{N}{2(N-1)}}$ is included in $B(\mathbb{R}^{N^2-1})$. That is, \begin{equation}\label{inclusion} D_r(\mathbb{R}^{N^2-1})\subseteq B(\mathbb{R}^{N^2-1})\subseteq D_R(\mathbb{R}^{N^2-1}). \end{equation} In the case of bipartite quantum systems of dimensions $M\times N$ ($H\simeq\mathbb{C}^M\otimes\mathbb{C}^N$) composed of subsystems $A$ and $B$, we can analogously represent the density operators as\footnote{This representation is sometimes referred to in the literature as the Fano form (see e.\ g.\ \cite{BZ}), since Fano was the first to consider it \cite{Fan}.} \begin{equation}\label{bipartitebloch} \rho=\frac{1}{MN}(I_M\otimes I_N+r_i\lambda_i\otimes I_N+s_jI_M\otimes\tilde{\lambda}_j+t_{ij}\lambda_i\otimes\tilde{\lambda}_j), \end{equation} where $\lambda_i$ ($\tilde{\lambda}_j$) are the generators of $SU(M)$ ($SU(N)$). Notice that $\textbf{r}\in \mathbb{R}^{M^2-1}$ and $\textbf{s}\in \mathbb{R}^{N^2-1}$ are the coherence vectors of the subsystems, so that they can be determined locally, \begin{equation} \rho_A=\textrm{Tr}_B\rho=\frac{1}{M}(I_M+r_i\lambda_i),\quad\rho_B=\textrm{Tr}_A\rho=\frac{1}{N}(I_N+s_i\tilde{\lambda}_i). \end{equation} The coefficients $t_{ij}$, responsible for the possible correlations, form the real matrix $T\in \mathbb{R}^{(M^2-1)\times (N^2-1)}$, and, as before, they can be easily obtained by \begin{equation}\label{T} t_{ij}=\frac{MN}{4}\textrm{Tr}(\rho\lambda_i\otimes\tilde{\lambda}_j)=\frac{MN}{4}\langle\lambda_i\otimes\tilde{\lambda}_j\rangle.
\end{equation} \section{Separability Conditions from the Bloch Representation} The Bloch representation of bipartite quantum systems (\ref{bipartitebloch}) allows us to find a simple characterization of separability for pure states. \vspace*{12pt} \noindent {\bf Proposition~1:} A pure bipartite quantum state with Bloch representation (\ref{bipartitebloch}) is separable if and only if \begin{equation}\label{pureseparable} T=\textbf{r\,s}^t \end{equation} holds. \vspace*{12pt} \noindent {\bf Proof:} Simply notice that Eq.\ (\ref{bipartitebloch}) can be rewritten as \begin{equation} \rho=\rho_A\otimes\rho_B+\frac{1}{MN}[(t_{ij}-r_is_j)\lambda_i\otimes\tilde{\lambda}_j]. \end{equation} Since the $\lambda_i\otimes\tilde{\lambda}_j$ are linearly independent, $(t_{ij}-r_is_j)\lambda_i\otimes\tilde{\lambda}_j=0$ if and only if $t_{ij}-r_is_j=0$ $\forall\, i,j$.\hfill$\square$ \vspace*{12pt} \noindent {\bf Remark~1:} In the case of mixed states, Eq.\ (\ref{pureseparable}) provides a sufficient condition for separability, since then $\rho=\rho_A\otimes\rho_B$. \vspace*{12pt} In view of Proposition 1, we can characterize separability from the Bloch representation point of view in the following terms: \emph{A bipartite quantum state with Bloch representation (\ref{bipartitebloch}) is separable if and only if there exist vectors} $\textbf{u}_i\in \mathbb{R}^{M^2-1}$ \emph{and} $\textbf{v}_i\in \mathbb{R}^{N^2-1}$ \emph{satisfying Eq.\ (\ref{rpuro}) and weights $p_i$ satisfying $0 \leq p_i \leq 1$, $\sum_i p_i = 1$ such that} \begin{equation}\label{separable2} T=p_i\textbf{u}_i\,\textbf{v}_i ^t,\quad \textbf{r}=p_i\textbf{u}_i,\quad \textbf{s}=p_i\textbf{v}_i\,. \end{equation} This allows us to derive the two theorems below, which provide, respectively, a necessary condition and a sufficient condition for separability. We will make use of the Ky Fan norm $||\cdot||_{KF}$, which is commonly used in Matrix Analysis (the reader who is not familiar with this topic can consult, for example, \cite{HorJoh}). We recall that the singular value decomposition theorem ensures that every matrix $A\in \mathbb{C}^{m\times n}$ admits a factorization of the form $A=U\Sigma V^\dag$ such that $\Sigma=(\sigma_{ij})\in \mathbb{R}_+^{m\times n}$ with $\sigma_{ij}=0$ whenever $i\neq j$, and $U\in \mathbb{C}^{m\times m}$, $V\in \mathbb{C}^{n\times n}$ are unitary matrices. The Ky Fan matrix norm is defined as the sum of the singular values $\sigma_i\equiv\sigma_{ii}$, \begin{equation}\label{kyfan} ||A||_{KF}=\sum_{i=1}^{\min\{m,n\}}\sigma_i=\textrm{Tr}\sqrt{A^\dag A}. \end{equation} This norm has previously been used in the context of the separability problem, though in a different way, in the CCNR criterion. \vspace*{12pt} \noindent {\bf Theorem~1:} If a bipartite state of $M\times N$ dimensions with Bloch representation (\ref{bipartitebloch}) is separable, then \begin{equation}\label{teorema1} ||T||_{KF}\leq\sqrt{\frac{MN(M-1)(N-1)}{4}} \end{equation} must hold. \vspace*{12pt} \noindent {\bf Proof:} Since $T$ has to admit a decomposition of the form (\ref{separable2}) with \begin{equation} ||\textbf{u}_i||_2=\sqrt{\frac{M(M-1)}{2}},\quad||\textbf{v}_i||_2=\sqrt{\frac{N(N-1)}{2}}, \end{equation} we must have \begin{equation} ||T||_{KF}\leq p_i||\textbf{u}_i\,\textbf{v}_i ^t||_{KF}=p_i\sqrt{\frac{MN(M-1)(N-1)}{4}}||\textbf{n}_i\,\tilde{\textbf{n}}_i ^t||_{KF}, \end{equation} where $\textbf{n}_i,\tilde{\textbf{n}}_i$ are unit vectors.
Thus, $||\textbf{n}_i\,\tilde{\textbf{n}}_i ^t||_{KF}=1$ $\forall i$ and the result follows. \hfill$\square$ \vspace*{12pt} As said before, $T$ contains all the information about the correlations, so that $||T||_{KF}$ measures in a certain sense the size of these correlations. In this way, Theorem 1 has a clear physical meaning: there is an upper bound to the correlations contained in a separable state. $||T||_{KF}$ is a consistent measure of the correlations since it is left invariant under local changes of basis, i.e. it is invariant under local unitary transformations of the density operator. This fact was mentioned in \cite{Hor2} when $M=N=2$; in the next proposition we give a general proof. \vspace*{12pt} \noindent {\bf Proposition~2:} Let $U_A$ ($U_B$) denote a unitary transformation acting on subsystem $A$ ($B$). If \begin{equation}\label{uni} \rho'=\big(U_A\otimes U_B\big)\rho\left(U_A^\dag\otimes U_B^\dag\right), \end{equation} then $||T'||_{KF}=||T||_{KF}$. \vspace*{12pt} \noindent {\bf Proof:} Let $\rho_A$ and $\rho'_A$ denote density operators acting on $H_A\simeq\mathbb{C}^M$ such that $\rho'_A=U_A\rho_A U_A^\dag$. Since $||\cdot||_{HS}$ is unitarily invariant we have that $||\rho_A||_{HS}=||\rho'_A||_{HS}$. But using the orthogonality relation (\ref{ortogonalidad}) and Eq.\ (\ref{ri}) we find that \begin{equation} ||\rho_A||_{HS}^2=\frac{1}{M}\left(1+\frac{2}{M}||\textbf{r}||_2^2\right), \end{equation} hence $||\textbf{r}||_2=||\textbf{r}'||_2$. This implies that the coherence vectors of different realizations of the same state are related by a rotation, i.e. there exists a rotation $O_A$ acting on $\mathbb{R}^{M^2-1}$ such that $\textbf{r}'=O_A\textbf{r}$. This means that \begin{equation} U_Ar_i\lambda_iU_A^\dag=\left(O_A\textbf{r}\right)_i\lambda_i. \end{equation} Now, when a bipartite state $\rho$ is subjected to a product unitary transformation (\ref{uni}) there will be rotations $O_A$ acting on $\mathbb{R}^{M^2-1}$ and $O_B$ acting on $\mathbb{R}^{N^2-1}$ such that \begin{equation}\label{trans} \textbf{r}'=O_A\textbf{r},\quad\textbf{s}'=O_B\textbf{s},\quad T'=O_ATO_B^\dag. \end{equation} Thus, the result follows taking into account that $||\cdot||_{KF}$ is unitarily invariant. \hfill$\square$ \vspace*{12pt} The characterization of the separability problem given in Eq.\ (\ref{separable2}) suggests the possibility of obtaining a sufficient condition for separability using a constructive proof. One such condition is stated in the following proposition. \vspace*{12pt} \noindent {\bf Proposition~3:} If a bipartite state of $M\times N$ dimensions with Bloch representation (\ref{bipartitebloch}) satisfies \begin{equation}\label{proposition3} \sqrt{\frac{2(M-1)}{M}}||\textbf{r}||_2+\sqrt{\frac{2(N-1)}{N}}||\textbf{s}||_2+\sqrt{\frac{4(M-1)(N-1)}{MN}}||T||_{KF}\leq1, \end{equation} then it is a separable state. \vspace*{12pt} \noindent {\bf Proof:} Let $T$ have the singular value decomposition $T=\sigma_i\textbf{u}_i\,\textbf{v}_i ^t$, with $||\textbf{u}_i||_2=||\textbf{v}_i||_2=1$. If we define \begin{equation} \widetilde{\textbf{u}}_i=\sqrt{\frac{M}{2(M-1)}}\textbf{u}_i,\quad\widetilde{\textbf{v}}_i=\sqrt{\frac{N}{2(N-1)}}{\textbf{v}}_i, \end{equation} we can rewrite \begin{equation} T=\sqrt{\frac{4(M-1)(N-1)}{MN}}\sigma_i\widetilde{\textbf{u}}_i\,\widetilde{\textbf{v}}_i ^t.
\end{equation} Then, if condition (\ref{proposition3}) holds, we can decompose $\rho$ as the following convex combination of the density matrices $\varrho_i$, $\varrho'_i$, $\rho_r$, $\rho_s$ and $\frac{1}{MN}I_{MN}$, \begin{eqnarray}\nonumber \rho= \sqrt{\frac{4(M-1)(N-1)}{MN}}\frac{1}{2}\sigma_i(\varrho_i+\varrho'_i)+\sqrt{\frac{2(M-1)}{M}}||\textbf{r}||_2\rho_r +\sqrt{\frac{2(N-1)}{N}}||\textbf{s}||_2\rho_s\\\label{decompositionth2} +\left(1-\sqrt{\frac{2(M-1)}{M}}||\textbf{r}||_2-\sqrt{\frac{2(N-1)}{N}}||\textbf{s}||_2-\sqrt{\frac{4(M-1)(N-1)}{MN}}||T||_{KF}\right)\frac{I_{MN}}{MN}, \end{eqnarray} where $\varrho_i$, $\varrho'_i$, $\rho_r$ and $\rho_s$ are such that $$\textbf{r}_i=\widetilde{\textbf{u}}_i,\quad \textbf{s}_i=\widetilde{\textbf{v}}_i,\quad T_i=\widetilde{\textbf{u}}_i\,\widetilde{\textbf{v}}_i ^t,$$ $$\textbf{r}'_i=-\widetilde{\textbf{u}}_i,\quad \textbf{s}'_i=-\widetilde{\textbf{v}}_i,\quad T'_i=\widetilde{\textbf{u}}_i\,\widetilde{\textbf{v}}_i ^t,$$ $$\textbf{r}_r=\sqrt{\frac{M}{2(M-1)}}\frac{\textbf{r}}{||\textbf{r}||_2},\quad \textbf{s}_r=0,\quad T_r=0,$$ $$\textbf{r}_s=0,\quad \textbf{s}_s=\sqrt{\frac{N}{2(N-1)}}\frac{\textbf{s}}{||\textbf{s}||_2},\quad T_s=0.$$ Notice that by virtue of Eq.\ (\ref{inclusion}) all the above coherence vectors belong to the corresponding Bloch spaces and, therefore, the reductions of $\varrho_i$, $\varrho'_i$, $\rho_r$ and $\rho_s$ constitute density matrices. Moreover, all these matrices satisfy condition (\ref{pureseparable}), hence they are equal to the tensor product of their reductions. Therefore, they constitute density matrices and they are separable, and hence so is $\rho$.\hfill$\square$ \vspace*{12pt} One could ask whether Proposition 3 can be strengthened using a condition more involved than Eq.\ (\ref{proposition3}). As we shall see in the following theorem, the answer is positive. \vspace*{12pt} \noindent {\bf Theorem~2:} Let \begin{equation}\label{c} c=\max\left\{\sqrt{\frac{2(M-1)}{M}}||\textbf{r}||_2,\sqrt{\frac{2(N-1)}{N}}||\textbf{s}||_2\right\}. \end{equation} If a bipartite state of $M\times N$ dimensions with Bloch representation (\ref{bipartitebloch}) such that $c\neq0$ satisfies \begin{equation}\label{teorema2} c+\sqrt{\frac{4(M-1)(N-1)}{MN}}\left|\left|T-\frac{\textbf{r\,s}^t}{c}\right|\right|_{KF}\leq1, \end{equation} then it is a separable state. \vspace*{12pt} \noindent {\bf Proof:} In analogy with the proof of Proposition 3, let $T-\frac{\textbf{r\,s}^t}{c}$ have the singular value decomposition $\sigma'_i\textbf{x}_i\,\textbf{y}_i ^t$, where $||\textbf{x}_i||_2=||\textbf{y}_i||_2=1$. If we define \begin{equation} \widetilde{\textbf{x}}_i=\sqrt{\frac{M}{2(M-1)}}\textbf{x}_i,\quad\widetilde{\textbf{y}}_i=\sqrt{\frac{N}{2(N-1)}}{\textbf{y}}_i, \end{equation} we can rewrite \begin{equation} T-\frac{\textbf{r\,s}^t}{c}=\sqrt{\frac{4(M-1)(N-1)}{MN}}\sigma'_i\widetilde{\textbf{x}}_i\,\widetilde{\textbf{y}}_i ^t.
\end{equation} Now, if condition (\ref{teorema2}) holds we can decompose $\rho$ into separable states as \begin{eqnarray}\nonumber \rho&=& \sqrt{\frac{4(M-1)(N-1)}{MN}}\frac{1}{2}\sigma'_i(\varrho_i+\varrho'_i) +c\rho_{rs}\\ &+& \left(1-c-\sqrt{\frac{4(M-1)(N-1)}{MN}}\left|\left|T-\frac{\textbf{r\,s}^t}{c}\right|\right|_{KF}\right)\frac{1}{MN}I_{MN}, \end{eqnarray} where $\varrho_i$, $\varrho'_i$ and $\rho_{rs}$ are such that $$\textbf{r}_i=\widetilde{\textbf{x}}_i,\quad \textbf{s}_i=\widetilde{\textbf{y}}_i,\quad T_i=\widetilde{\textbf{x}}_i\,\widetilde{\textbf{y}}_i ^t,$$ $$\textbf{r}'_i=-\widetilde{\textbf{x}}_i,\quad \textbf{s}'_i=-\widetilde{\textbf{y}}_i,\quad T'_i=\widetilde{\textbf{x}}_i\,\widetilde{\textbf{y}}_i ^t,$$ $$\textbf{r}_{rs}=\frac{\textbf{r}}{c},\quad \textbf{s}_{rs}=\frac{\textbf{s}}{c},\quad T_{rs}=\frac{\textbf{r\,s}^t}{c^2}.$$ As in the previous proof, and since $$\frac{\textbf{r}}{c}\leq\sqrt{\frac{M}{2(M-1)}}\frac{\textbf{r}}{||\textbf{r}||_2},\quad\frac{\textbf{s}}{c}\leq\sqrt{\frac{N}{2(N-1)}}\frac{\textbf{s}}{||\textbf{s}||_2},$$ all these coherence vectors belong to the corresponding Bloch spaces, and $\varrho_i$, $\varrho'_i$ and $\rho_{rs}$ satisfy (\ref{pureseparable}).\hfill$\square$ \vspace*{12pt} Notice that the use of the triangle inequality in Eq.\ (\ref{teorema2}) clearly shows that Theorem 2 is stronger than Proposition 3. Nevertheless, Proposition 3 provides the right way to understand the limit $c\rightarrow0$ in Theorem 2. The proofs of these two results are constructive, so for the states that fulfill Eqs.\ (\ref{proposition3}) and/or (\ref{teorema2}) they provide a decomposition into separable states. These states are in general not pure, but they are equal to the tensor product of their reductions, so to obtain a decomposition into product states as in Eq.\ (\ref{separable}) one simply applies the spectral decomposition to the reductions of $\varrho_i$, $\varrho'_i$, $\rho_r$, $\rho_s$ and/or $\rho_{rs}$. \vspace*{12pt} \noindent {\bf Remark~2:} The conditions of Proposition 3 and Theorem 2 depend only on $\textbf{r}$, $\textbf{s}$ and $T$. However, sufficient conditions for separability which involve more parameters can also be obtained. For example, one can derive the following sufficient condition, which also depends on the singular value decomposition of $T$, \begin{equation}\label{remark2} \left|\left|\sqrt{\frac{N}{2(N-1)}}\textbf{r}-\sigma_i\textbf{u}_i\right|\right|_2+\left|\left|\sqrt{\frac{M}{2(M-1)}}\textbf{s}-\sigma_i\textbf{v}_i\right|\right|_2+||T||_{KF}\leq\sqrt{\frac{MN}{4(M-1)(N-1)}}, \end{equation} since in this case $\rho$ admits a decomposition in separable states as in Eq.\ (\ref{decompositionth2}) but with $\varrho'_i=\varrho_i$, $$\textbf{r}_r=\sqrt{\frac{M}{2(M-1)}}\frac{\textbf{r}-\sqrt{\frac{2(N-1)}{N}}\sigma_i\textbf{u}_i}{\left|\left|\textbf{r}-\sqrt{\frac{2(N-1)}{N}}\sigma_i\textbf{u}_i\right|\right|_2}\textrm{ and } \textbf{s}_s=\sqrt{\frac{N}{2(N-1)}}\frac{\textbf{s}-\sqrt{\frac{2(M-1)}{M}}\sigma_i\textbf{v}_i}{\left|\left|\textbf{s}-\sqrt{\frac{2(M-1)}{M}}\sigma_i\textbf{v}_i\right|\right|_2}.$$ However, it seems reasonable to expect that condition (\ref{remark2}) will be stronger than those of Proposition 3 and Theorem 2 in only a few cases.
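All the quantities entering these conditions are directly computable from $\textbf{r}$, $\textbf{s}$ and $T$. A minimal sketch of the test of Theorem 2 (Python/NumPy; the function name is ours, and the Ky Fan norm is evaluated as the nuclear norm), falling back to Proposition 3 with $\textbf{r}=\textbf{s}=0$ in the limiting case $c=0$: \begin{verbatim}
import numpy as np

def theorem2_certifies(r, s, T, M, N):
    # Returns True if the bound of Theorem 2 certifies separability;
    # False is inconclusive, since the condition is only sufficient.
    kf = lambda A: np.linalg.norm(A, ord='nuc')  # sum of singular values
    c = max(np.sqrt(2.0*(M - 1)/M)*np.linalg.norm(r),
            np.sqrt(2.0*(N - 1)/N)*np.linalg.norm(s))
    if c == 0:   # maximally mixed subsystems: use Proposition 3
        return np.sqrt(4.0*(M - 1)*(N - 1)/(M*N))*kf(T) <= 1
    return c + np.sqrt(4.0*(M - 1)*(N - 1)/(M*N))*kf(T - np.outer(r, s)/c) <= 1
\end{verbatim}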
\vspace*{12pt} For a restricted class of states the conditions of Theorem 1 and Proposition 3 take the same form, thus providing a necessary and sufficient condition which is equivalent to that of \cite{Hor2}: \vspace*{12pt} \noindent {\bf Corollary~1:} A bipartite state of qubits ($M=N=2$) with maximally mixed subsystems (i.e. $\textbf{r}=\textbf{s}=0$) is separable if and only if $||T||_{KF}\leq1$. \vspace*{12pt} \section{Efficacy of the New Criteria} \subsection{Examples} In what follows we provide examples of the usefulness of the criteria derived in the previous section to detect entanglement. We start by showing that Theorem 1 is strong enough to detect bound entanglement. \vspace*{12pt} \noindent {\it Example~1:} Consider the following $3\times3$ PPT entangled state found in \cite{Ben}: \begin{equation}\label{upb} \rho=\frac{1}{4}\left(I_9-\sum_{i=0}^4|\psi_i\rangle\langle\psi_i|\right), \end{equation} where $|\psi_0\rangle=|0\rangle(|0\rangle-|1\rangle)/\sqrt{2}$, $|\psi_1\rangle=(|0\rangle-|1\rangle)|2\rangle/\sqrt{2}$, $|\psi_2\rangle=|2\rangle(|1\rangle-|2\rangle)/\sqrt{2}$, $|\psi_3\rangle=(|1\rangle-|2\rangle)|0\rangle/\sqrt{2}$ and $|\psi_4\rangle=(|0\rangle+|1\rangle+|2\rangle)(|0\rangle+|1\rangle+|2\rangle)/3$. To construct the Bloch representation of this state we use as generators of $SU(3)$ the Gell-Mann operators, which are a reordering of those of Eqs.\ (\ref{generadoresa})-(\ref{generadoresc}), \begin{equation}\label{gellmann} \lambda_1=u_{01},\, \lambda_2=v_{01},\, \lambda_3=w_0,\, \lambda_4=u_{02},\, \lambda_5=v_{02},\, \lambda_6=u_{12},\, \lambda_7=v_{12},\, \lambda_8=w_1. \end{equation} Then, for the state (\ref{upb}) one readily finds \begin{equation} T=-\frac{1}{4}\left( \begin{array}{rrrrrrrr} 1 & 0 & 0 & 1 & 0 & 1 & 0 & \frac{\sqrt{27}}{2} \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -\frac{9}{4} & 0 & -\frac{9}{8} & 0 & 0 & 0 & 0 & \frac{\sqrt{27}}{8} \\ 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & -\frac{9}{4} & 1 & 0 & 1 & 0 & -\frac{\sqrt{27}}{4} \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -\frac{\sqrt{27}}{4} & 0 & \frac{\sqrt{27}}{8} & 0 & 0 & \frac{\sqrt{27}}{2} & 0 & -\frac{3}{8} \\ \end{array} \right), \end{equation} so that $||T||_{KF}\simeq3.1603$, which violates condition (\ref{teorema1}). Thus, using Theorem 1 we know that the state is entangled. \vspace*{12pt} The above example proves that there exist cases in which Theorem 1 is stronger than the PPT criterion. One can see that this is not true in general, not even for the $2\times2$ case. \vspace*{12pt} \noindent {\it Example~2:} Consider the following bipartite qubit state, \begin{equation}\label{ejemplo2} \rho_\pm=p|\psi^\pm\rangle\langle\psi^\pm|+(1-p)|00\rangle\langle00|\,, \end{equation} where $p \in [0,1]$ and \begin{equation} |\psi^\pm\rangle=\frac{1}{\sqrt{2}}\big(|01\rangle\pm|10\rangle\big). \end{equation} The Peres-Horodecki criterion establishes that state (\ref{ejemplo2}) is separable iff $p=0$ \cite{Per}. For its Bloch representation we use as generators of $SU(2)$ the standard Pauli matrices $\sigma_x=u_{01}$, $\sigma_y=v_{01}$ and $\sigma_z=w_0$, thus finding that \begin{equation} \rho_\pm=\frac{1}{4}(I_2\otimes I_2+(1-p)\sigma_z\otimes I_2+(1-p)I_2\otimes\sigma_z\pm p\,\sigma_x\otimes\sigma_x\pm p\,\sigma_y\otimes\sigma_y+(1-2p)\sigma_z\otimes\sigma_z). \end{equation} Therefore, $||T||_{KF}=2p+|1-2p|$, which implies that $||T||_{KF}\leq1$ if $p\leq1/2$, so entanglement is detected only if $p>1/2$. 
\vspace*{12pt} \noindent {\it Example~3:} Werner states \cite{Wer} in arbitrary dimensions ($M=N=D$) are those whose density matrices are invariant under transformations of the form $\big(U\otimes U\big)\rho\left(U^\dag\otimes U^\dag\right)$. They can be written as \begin{equation}\label{wernerd} \rho_W=\frac{1}{D^3-D}[(D-\phi)I_D\otimes I_D+(D\phi-1)V], \end{equation} where $-1\leq\phi\leq1$ and $V$ is the ``flip'' or ``swap'' operator defined by $V\varphi\otimes\widetilde{\varphi}=\widetilde{\varphi}\otimes\varphi$. These states are separable iff $\phi\geq0$ \cite{Wer}. Using Eq.\ (\ref{T}) or inverting Eqs.\ (\ref{generadoresa})-(\ref{generadoresc}) we find that \begin{equation}\label{V} V=\sum_{i,j}|ij\rangle\langle ji|=\frac{1}{D}I_D\otimes I_D+\frac{1}{2}\sum_lw_l\otimes w_l+\frac{1}{2}\sum_{j<k}(u_{jk}\otimes u_{jk}+v_{jk}\otimes v_{jk}), \end{equation} so that \begin{equation} \rho_W=\frac{1}{D^2}\left(I_D\otimes I_D+\frac{D(D\phi-1)}{2(D^2-1)}\lambda_i\otimes\lambda_i\right), \end{equation} where $\lambda_i$ are the generators of $SU(D)$ defined as in Eqs.\ (\ref{generadoresa})-(\ref{generadoresc}). Then, $||T||_{KF}=D|D\phi-1|/2$, so that Theorem 1 only recognizes entanglement when $\phi\leq(2-D)/D$, while Proposition 3 guarantees that the state is separable if $(D-2)/[D(D-1)]\leq\phi\leq1/(D-1)$. When the latter condition holds, we can provide the decomposition in product states. To illustrate the procedure, consider the Werner state in, for simplicity, $2\times 2$ dimensions. In this case $V=I_2\otimes I_2-2|\psi^-\rangle\langle\psi^-|$, and defining $p=(1-2\phi)/3$ the state takes the simple form \begin{equation}\label{werner} \rho=\frac{1-p}{4}I_2\otimes I_2+p|\psi^-\rangle\langle\psi^-|=\frac{1}{4}(I_2\otimes I_2-p\,\sigma_x\otimes\sigma_x-p\,\sigma_y\otimes\sigma_y-p\,\sigma_z\otimes\sigma_z). \end{equation} From Corollary 1 we obtain that $\rho$ is separable iff $p\leq1/3$ as expected. From Proposition 3 we find that \begin{equation} \rho=\sum_{i=x,y,z}\sum_{j=1}^2\frac{p}{2}\rho_j^{(i)}+(1-3p)\frac{1}{4}(I_2\otimes I_2), \end{equation} where \begin{equation} \rho_1^{(i)}=\frac{1}{4}(I_2\otimes I_2+\sigma_i\otimes I_2-I_2\otimes\sigma_i-\sigma_i\otimes\sigma_i),\quad\rho_2^{(i)}=\frac{1}{4}(I_2\otimes I_2-\sigma_i\otimes I_2+I_2\otimes\sigma_i-\sigma_i\otimes\sigma_i). \end{equation} In this case we can reduce the number of product states in the decomposition to 8 by noticing that $\rho_1^{(i)}=|01\rangle_{i}\langle01|$ and $\rho_2^{(i)}=|10\rangle_i\langle10|$, where $\{|0\rangle_i,|1\rangle_i\}$ denote the eigenvectors of $\sigma_i$, so that, for instance, \begin{align} \rho&=\sum_{i=x,y}\frac{p}{2}(|01\rangle_{i}\langle01|+|10\rangle_i\langle10|)+\frac{1-p}{4}(|01\rangle_z\langle01|+|10\rangle_z\langle10|)+\nonumber\\ &+\frac{1-3p}{4}(|00\rangle_z\langle00|+|11\rangle_z\langle11|). \end{align} It is known, however, that a separable bipartite qubit state admits a decomposition in a number of product states less than or equal to 4 \cite{Woo,San}. \vspace*{12pt} \noindent {\it Example~4:} Isotropic states \cite{iso} in arbitrary dimensions ($M=N=D$) are invariant under transformations of the form $\big(U\otimes U^\ast\big)\rho\left(U^\dag\otimes U^{\ast\dag}\right)$. 
They can be written as mixtures of the maximally mixed state and the maximally entangled state \begin{equation}\label{maxentangled} |\Psi\rangle=\frac{1}{\sqrt{D}}\sum_{a=0}^{D-1}|aa\rangle, \end{equation} so they read\footnote{In the two-qubit case the Werner ($U\otimes U$ invariant) states (\ref{werner}) and isotropic ($U\otimes U^\ast$ invariant) states (\ref{isotropic}) are identical up to a local unitary transformation. For this reason some authors refer to the isotropic states as generalized Werner states, which might lead to confusion.} \begin{equation}\label{isotropic} \rho=\frac{1-p}{D^2}I_D\otimes I_D+p|\Psi\rangle\langle\Psi|. \end{equation} These states are known to be separable iff $p\leq(D+1)^{-1}$ \cite{iso} (see also \cite{Run,Pit}). Their Bloch representation can be easily found as in the Werner case, and it is given by \begin{equation} \rho=\frac{1}{D^2}\left(I_D\otimes I_D+\frac{pD}{2}\sum_{i=1}^{(D+2)(D-1)/2}\lambda_i\otimes\lambda_i-\frac{pD}{2}\sum_{i=D(D+1)/2}^{D^2-1}\lambda_i\otimes\lambda_i\right), \end{equation} where, as before, $\lambda_i$ are the generators of $SU(D)$ defined in Eqs.\ (\ref{generadoresa})-(\ref{generadoresc}). Now, $||T||_{KF}=pD(D^2-1)/2$. Thus, Theorem 1 is strong enough to detect all the entangled states ($||T||_{KF}\leq D(D-1)/2\Leftrightarrow p\leq(D+1)^{-1}$), while Proposition 3 ensures that the states are separable when $p\leq(D+1)^{-1}(D-1)^{-2}$. \subsection{Comparison with the CCNR criterion} Let $\rho$ be written in terms of the canonical basis $\{E_{ij}\otimes E_{kl}\}$ of $HS(H_A\otimes H_B)$ as \begin{equation} \rho=c_{ijkl}E_{ij}\otimes E_{kl}. \end{equation} The computable cross-norm criterion, proposed by O. Rudolph (see \cite{Rud1,Rud2} and references therein), states that for all separable states the operator $U(\rho)$ acting on $HS(H_A\otimes H_B)$ defined by \begin{equation} U(\rho)\equiv c_{ijkl}|E_{ij}\rangle\langle E_{kl}|, \end{equation} where $|E_{mn}\rangle$ denotes the ket vector with respect to the inner product in $HS(H_A)$ or $HS(H_B)$, is such that $||U(\rho)||_{KF}\leq1$. Soon after, K. Chen and L.-A. Wu derived the realignment method \cite{Che}, which yields the same results as the cross-norm criterion from simple matrix analysis. Basically, it states that a certain realigned version of a separable density matrix cannot have Ky Fan norm greater than one, thus providing a simple way to compute this condition. This is why we refer to it as the CCNR criterion. Like Theorem 1, it is able to detect all entangled isotropic states and recognizes entanglement for the same range of Werner states \cite{Rud1}. Although it is weaker than the PPT criterion in $2\times2$ dimensions, it is also capable of detecting bound entangled states. However, the CCNR criterion optimally detects the entanglement of the state of Example 2 \cite{Rud1}, so one could think that it is stronger than Theorem 1. To check this possibility and to evaluate the bound entanglement detection ability of Theorem 1, we have programmed a routine that generates $10^6$ random $3\times3$ PPT entangled states following \cite{Bru}. Our theorem detected entanglement in about $4\%$ of the states while the CCNR criterion recognized $18\%$ of the states as entangled. Moreover, every state detected by Theorem 1 was also detected by the CCNR criterion. This suggests that the CCNR criterion is stronger than Theorem 1 when $M=N$. We will show that this is indeed the case, but we will also see that this is not true when $M\neq N$.
First we will prove the following lemma: \vspace*{12pt} \noindent {\bf Lemma~1:} $$\left|\left|\left( \begin{array}{cc} A & B \\ C & D \\ \end{array} \right)\right|\right|_{KF}\geq ||A||_{KF}+||D||_{KF},$$ where $A,B,C,D$ are complex matrices of adequate dimensions. \vspace*{12pt} \noindent {\bf Proof:} Let $A$ and $D$ have the singular value decompositions $A=U_A\Sigma_AV_A^{\dag}$ and $D=U_D\Sigma_DV_D^{\dag}$. It is clear from the definition that the Ky Fan norm is unitarily invariant. Therefore, we have that \begin{align} \left|\left|\left( \begin{array}{cc} A & B \\ C & D \\ \end{array} \right)\right|\right|_{KF}&=\left|\left|\left( \begin{array}{cc} U_A^{\dag} & 0 \\ 0 & U_D^{\dag} \\ \end{array} \right) \left( \begin{array}{cc} A & B \\ C & D \\ \end{array} \right)\left( \begin{array}{cc} V_A & 0 \\ 0 & V_D \\ \end{array} \right) \right|\right|_{KF}\nonumber\\ & \geq \textrm{Tr }\Sigma_A + \textrm{Tr }\Sigma_D, \end{align} where we have used that $||X||_{KF}\geq \textrm{Tr } X$, which is a direct consequence of the following characterization of the Ky Fan norm (see Eq.\ (3.4.7) in \cite{HorJoh}): \begin{equation} ||X||_{KF}=\max\{|\textrm{Tr } XU|: U \textrm{ is unitary}\}. \end{equation} \hfill$\square$ \vspace*{12pt} \noindent {\bf Proposition~4:} In the case of states with maximally mixed subsystems Theorem 1 is stronger than the CCNR criterion when $M\neq N$, while when $M=N$ they are equivalent. \vspace*{12pt} \noindent {\bf Proof:} When $\textbf{r}=\textbf{s}=0$ we have that \begin{equation} U(\rho)=\frac{1}{MN}(|I_M\rangle\langle I_N|+t_{ij}|\lambda_i\rangle\langle\tilde{\lambda}_j|). \end{equation} Since the matrix associated to the operator $U(\rho)$ is in this case block-diagonal we find that \begin{align} ||U(\rho)||_{KF}&=\frac{1}{\sqrt{MN}}\left|\left|\frac{|I_M\rangle}{\sqrt{M}}\frac{\langle I_N|}{\sqrt{N}}\right|\right|_{KF}+\frac{2}{MN}\left|\left|t_{ij}\frac{|\lambda_i\rangle}{\sqrt{2}} \frac{\langle\tilde{\lambda}_j|}{\sqrt{2}}\right|\right|_{KF}\nonumber\\ & =\frac{1}{\sqrt{MN}}+\frac{2}{MN}||T||_{KF}. \end{align} Thus, for states with maximally mixed subsystems the CCNR criterion is equivalent to \begin{equation} ||T||_{KF}\leq\frac{\sqrt{MN}(\sqrt{MN}-1)}{2}, \end{equation} from which the statement readily follows. \hfill$\square$ \vspace*{12pt} \noindent {\bf Proposition~5:} The CCNR criterion is stronger than Theorem 1 when $M=N$. \vspace*{12pt} \noindent {\bf Proof:} Since in this case in general $\textbf{r},\textbf{s}\neq0$, the matrix associated to the operator $U(\rho)$ is no longer block-diagonal. Hence, using Lemma 1, we now have that \begin{equation} ||U(\rho)||_{KF}\geq\frac{1}{N}+\frac{2}{N^2}||T||_{KF}, \end{equation} which proves the result considering that in the $M=N$ case the condition of Theorem 1 can be written as \begin{equation} \frac{1}{N}+\frac{2}{N^2}||T||_{KF}\leq1. \end{equation} \hfill$\square$ \vspace*{12pt} Proposition 4 explains why both criteria yield the same results for Werner and isotropic states. However, since $T$ is diagonal in these cases, the computations are much simpler in our formalism than in that of the CCNR criterion. Furthermore, when $M\neq N$ we have explicitly constructed entangled states which are detected by Theorem 1 but not by the CCNR criterion. Regrettably, Theorem 1 is not able to detect the PPT entangled states in $2\times4$ dimensions constructed by P. Horodecki in \cite{Hor97}. 
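For completeness, the CCNR criterion itself is also easy to evaluate numerically: the realigned version of $\rho$ acting on $\mathbb{C}^M\otimes\mathbb{C}^N$ is obtained by reshuffling indices according to the standard realignment map $R(\rho)_{(ij),(kl)}=\rho_{(ik),(jl)}$, and separability requires its Ky Fan norm to be at most one. A minimal sketch (Python/NumPy; the helper name is ours), applied to the state of Example 2, which the CCNR criterion detects for every $p>0$ while Theorem 1 requires $p>1/2$: \begin{verbatim}
import numpy as np

def ccnr_norm(rho, M, N):
    # realignment: rho_{(ik),(jl)} -> R_{(ij),(kl)}
    R = rho.reshape(M, N, M, N).transpose(0, 2, 1, 3).reshape(M*M, N*N)
    return np.linalg.norm(R, ord='nuc')

p = 0.25
psi = np.zeros(4); psi[1] = psi[2] = 1.0/np.sqrt(2.0)   # |psi^+>
rho = p*np.outer(psi, psi) + (1 - p)*np.diag([1.0, 0, 0, 0])
print(ccnr_norm(rho, 2, 2))            # ~1.04 > 1: entanglement detected
\end{verbatim}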
\section{Summary and Conclusions} We have used the Bloch representation of density matrices of bipartite quantum systems in arbitrary dimensions $M\times N$, which relies on two coherence vectors $\textbf{r}\in \mathbb{R}^{M^2-1}$, $\textbf{s}\in \mathbb{R}^{N^2-1}$ and a correlation matrix $T\in \mathbb{R}^{(M^2-1)\times (N^2-1)}$, to study their separability. This approach has led to an alternative formulation of the separability problem, which has allowed us to characterize entangled pure states (Proposition 1), and to derive a necessary condition (Theorem 1) and three sufficient conditions (Proposition 3, Theorem 2 and Remark 2) for the separability of general states. In the case of bipartite systems of qubits with maximally mixed subsystems Theorem 1 and Proposition 3 take the same form, thus yielding a necessary and sufficient condition for separability. We have shown that, despite being weaker than the PPT criterion in $2\times2$ dimensions, Theorem 1 is strong enough to detect PPT entangled states. We have also shown that it is capable of recognizing all entangled isotropic states in arbitrary dimensions but not all Werner states, like the CCNR criterion. Although the CCNR criterion turns out to be stronger than Theorem 1 when $M=N$, we have also proved that our theorem is stronger than the CCNR criterion for states with maximally disordered subsystems when $M\neq N$. Therefore, although Theorem 1 does not fully characterize separability, we believe that in combination with the above criteria it can improve our ability to understand and detect entanglement. Theorem 2, together with Proposition 3 (which is weaker save for the limiting case $c=0$) and the result of Remark 2 (which is more involved), offers a sufficiency test of separability, which, as a by-product, provides a decomposition in product states of the states that satisfy its hypothesis. $||T||_{KF}$ acts as a measure of the correlations inside a bipartite state and it is left invariant under local unitary transformations of the density matrix. This suggests the possibility of considering it as a rough measure of entanglement, as in the case of the realignment method \cite{Che}. We think that this subject deserves further study. We also believe that a deeper understanding of the geometrical character of the Bloch-vector space could lead to an improvement of the separability conditions presented here. \section*{Acknowledgements} \noindent The author is very much indebted to Jorge S\'anchez-Ruiz for useful comments and, particularly, for his suggestion of the present form of Theorem 2, which improved on a previous version of the theorem. He is also very thankful to Otfried G\"{u}hne for discussions and remarks that led to substantial improvements in Section 4.2. Financial support by Universidad Carlos III de Madrid and Comunidad Aut\'onoma de Madrid (project No. UC3M-MTM-05-033) is gratefully acknowledged.
\section{Introduction} Divisors are fundamental objects of study within algebraic geometry and commutative algebra. In this package for \emph{Macaulay2} \cite{M2} we provide a wrapper object for studying Weil and Cartier divisors. We include tools for studying divisors on both affine and projective varieties. In this package, divisors are stored (roughly) as formal linear combinations of height one prime ideals, with coefficients from $\bZ$, $\bQ$, or $\bR$. We include group and scaling operations for divisors, as well as various methods for constructing modules $\O_X(D)$ from divisors $D$ (and vice versa). We also include code for determining whether divisors are linearly or $\bQ$-linearly equivalent, and for checking whether divisors are Cartier or $\bQ$-Cartier (or finding the non-Cartier locus). Finally, we also include a number of functions for handling reflexive modules, ideals and their powers. We realize there is a Divisor class defined in a tutorial in the \emph{Macaulay2} help system. In that implementation, divisors are given as a pair of ideals---an ideal corresponding to the positive part and an ideal corresponding to the negative part. Our approach offers the advantage that it is easier for the user to see the structure of the divisor. Additionally, certain operations are much faster in our approach. We warn the user that when a divisor is created, Gr\"obner bases are constructed for each prime ideal defining a component of the divisor. Hence, the construction phase may be slower than in other potential implementations (and in fact slower than our initial implementation). However, we feel that this choice offers advantages of execution speed for several functions as well as substantial improvements in code readability. Within the package, it is tacitly assumed that the ambient ring on which we are working is normal. This includes the projective case, so care should be taken to make sure the graded ring you are working on satisfies Serre's second condition; see for example \cite[Theorem 8.22A]{Hartshorne} or \cite[Proposition 2.2.21]{BrunsHerzog}. While one can talk about subvarieties of codimension 1 on more general schemes, the correspondence between divisors and reflexive sheaves is much more complicated, so we restrict ourselves to the normal case. For an introduction to the theory of rank-1-reflexive sheaves on ``nice'' schemes, see \cite{HartshorneGeneralizedDivisorsOnGorensteinSchemes,HartshonreGeneralizedDivisorsAndBiliaison}; and for a more basic introduction see, for instance, \cite[Chapter II, Sections 5--7]{Hartshorne}. This paper is structured as follows. We first give a brief introduction to the construction, conversion, and group operation functions in \autoref{sec.Construction}. We then discuss the methods for converting divisors $D$ to modules $\O_X(D)$ and converting modules back to divisors in \autoref{sec.Modules}. \autoref{sec.Checks} describes how to determine if divisors satisfy various properties (for instance {\tt isCartier} or {\tt isSNC}). We conclude with a section on future plans. \subsection*{Acknowledgements} We thank Tommaso de Fernex, David Eisenbud, Daniel Grayson, Anurag Singh, Greg Smith, Mike Stillman, and the referees for useful conversations and comments on the development of this package. We also thank the referees for numerous useful comments on this paper.
\section{Construction, conversion and group operations for divisors} \label{sec.Construction} This package includes a number of ways to construct a divisor (an object of class {\tt WeilDivisor}), illustrated below. \begin{verbatim} i1 : needsPackage "Divisor"; i2 : R = QQ[x,y,u,v]/ideal(x*y-u*v); i3 : D = divisor({2, 3}, {ideal(x,u), ideal(x, v)}) o3 = 3*Div(x, v) + 2*Div(x, u) o3 : WeilDivisor on R i4 : E = divisor(x) o4 = Div(u, x) + Div(v, x) o4 : WeilDivisor on R i5 : F = divisor( (ideal(x,u))^2*(ideal(x,v))^3 ) o5 = 3*Div(v, x) + 2*Div(u, x) o5 : WeilDivisor on R \end{verbatim} The output is a formal sum of height one prime ideals. The first method requires a list of integers and a list of prime ideals. The second method constructs the divisor of a ring element, i.e.\ the divisor of its vanishing locus counted with multiplicity. The third construction method finds a divisor defined by the given ideal in codimension 1. We have different classes for $\bQ$-divisors and $\bR$-divisors ({\tt QWeilDivisor} and {\tt RWeilDivisor} respectively); these are constructed via the {\tt divisor} function with the {\tt CoeffType =>} option set or by multiplying a {\tt WeilDivisor} by a rational or real number. See the documentation. All types of divisors are descendants of the {\tt HashTable} class. Internally, they are hash tables where each key is a list of Gr\"obner basis generators for a prime height-one ideal and each associated value is a list, the first entry of which is the coefficient of the prime divisor and the second entry is the prime ideal used to display the divisor (it tries to match how the user entered it for ease of reading). Besides the keys corresponding to prime divisors, there is a key that specifies the ambient ring and another key that points to a {\tt CacheTable}. One can convert one type of divisor to another more general class, either by multiplication by appropriate coefficients or by calling appropriate functions. \begin{verbatim} i2 : R = QQ[x,y,u,v]/ideal(x*y-u*v); i3 : D = divisor({1, -3}, {ideal(x,u), ideal(y,u)}); o3 : WeilDivisor on R i4 : 1/1*D o4 = -3*Div(y, u) + Div(x, u) o4 : QWeilDivisor on R i5 : toQWeilDivisor(D) o5 = Div(x, u) + -3*Div(y, u) o5 : QWeilDivisor on R \end{verbatim} One can convert $\bQ$ or $\bR$-divisors back to Weil divisors as follows. \begin{verbatim} i3 : D = divisor( {2/3, -1/2}, {ideal(x,u), ideal(y, v)}, CoeffType=>QQ) o3 = 2/3*Div(x, u) + -1/2*Div(y, v) o3 : QWeilDivisor on R i4 : isWeilDivisor(D) o4 = false i5 : isWeilDivisor(6*D) o5 = true i6 : toWeilDivisor(6*D) o6 = 4*Div(x, u) + -3*Div(y, v) o6 : WeilDivisor on R \end{verbatim} See the documentation for more examples. Alternately, the functions {\tt ceiling} and {\tt floor} will convert any $\bQ$ or $\bR$-divisor to a Weil divisor by taking the ceiling or floor of the coefficients respectively. More generally, one can call the method {\tt applyToCoefficients} to apply any function to the coefficients of a divisor (since divisors are a type of {\tt HashTable}, this is just done via the {\tt applyValues} function). Divisors form an Abelian group and one can add {\tt WeilDivisor/QWeilDivisor/RWeilDivisor} to each other to obtain new divisors. Likewise one can scale by integers, rational numbers or real numbers.
\begin{verbatim} i3 : D = divisor({1, -2}, {ideal(x,u), ideal(x, v)}); E = divisor(u); o3 : WeilDivisor on R o4 : WeilDivisor on R i5 : 3*D+E o5 = 4*Div(x, u) + -6*Div(x, v) + Div(u, y) o5 : WeilDivisor on R i6 : D - (1/2)*E o6 = -2*Div(x, v) + 1/2*Div(x, u) + -1/2*Div(u, y) o6 : QWeilDivisor on R \end{verbatim} Since divisors are implemented as subclasses of hash tables, these operations are easily executed internally via the {\tt merge} and {\tt applyValues} commands. \section{Modules, ideals, divisors and applications} \label{sec.Modules} Divisors are so useful largely because of their connections with invertible and reflexive sheaves. This package includes many functions for converting between these types of objects. For instance, we have the following. \begin{verbatim} i1 : R = QQ[x,y,z]/ideal(x*y-z^2); i2 : needsPackage "Divisor"; i3 : D = divisor(ideal(x, z)); o3 : WeilDivisor on R i4 : OO(D) o4 = image {-1} | x z | {-1} | z y | o4 : R-module, submodule of R i5 : divisor(o4) o5 = -Div(z, x) o5 : WeilDivisor on R i6 : divisor(o4, IsGraded=>true) o6 = Div(z, x) o6 : WeilDivisor on R \end{verbatim} The function {\tt OO} produces a module $M$ so that $\widetilde{M} \cong \mathcal{O}_X(D)$ (and the gradings of $M$ are set appropriately). The function {\tt divisor(M)} only produces a divisor $E$ such that $\O_X(E)$ is isomorphic to $\widetilde{M}$. In particular, ${\tt divisor}({\tt OO}(D))$ will only produce a divisor linearly equivalent to $D$. The computation of {\tt OO(D)} is done via a straightforward strategy. If ${\tt D} = \sum_{i = 1}^m a_i P_i$ where the $a_i$ are integers and the $P_i$ are primes, then we can compute $\bigotimes P_i^{-a_i}$ (keeping in mind negative exponents mean applying $\Hom_R(\blank, R)$) and compute the reflexification (see the method {\tt reflexify}). We do several things to make this computation faster. Firstly, we break up the divisor into its positive and negative parts, and handle them separately (applying the {\tt reflexify} method as little as possible). Then, instead of computing $P_i^{|a_i|}$, which can have many generators, we form an ideal generated by the generators of $P_i$ raised to the $|a_i|$-th powers. Since this agrees with $P_i^{|a_i|}$ in codimension 1, it will give the correct answer up to reflexification. We have noticed substantial speed improvements using this technique. The function {\tt divisor(Module)} works as follows. First, it embeds the module as an ideal $I \subseteq R$ via the function {\tt embedAsIdeal}. After we have an ideal $I$, we call {\tt divisor(I)}. This finds a divisor $D$ such that $\O_X(D)$ is isomorphic to the given ideal $I$ (in a non-graded sense). The function {\tt divisor(Ideal)} does this by looking at the minimal height 1 primes $Q_i$ of the ideal $I$ and finding the maximum power $n_i$ such that $I \subseteq Q_i^{(n_i)}$ (the symbolic power). Note that because $Q_i$ has height 1, we know that $Q_i^{(n_i)} = (Q_i^{n_i})^{**}$, where $\blank^{**}$ denotes reflexification/S2-ification of the ideal. Finding this maximal power is done by a binary search. Again, for speed, we compute $(Q_i^{n_i})^{**}$ as $(Q_i^{[n_i]})^{**}$, where $Q_i^{[n_i]}$ denotes the ideal generated by the $n_i$-th powers of the generators of $Q_i$. If the {\tt IsGraded} flag is set to {\tt true}, {\tt divisor(Module)} corrects the degree of the divisor by adding or subtracting the divisor of an element of appropriate degree (you can see this being done in the example above).
Finding the element of appropriate degree is accomplished via the function {\tt findElementOfDegree}, which uses Smith normal form in the multi-degree setting to solve the system of linear diophantine equations and find a monomial of the given multi-degree. \begin{remark} A variant of the function {\tt embedAsIdeal} appeared in the \emph{Macaulay2} documentation in the Divisor tutorial; it also appeared in the work of Moty Katzman. Our version is slightly more robust than those, as it tries to embed the module into the ring in several ways, including some random attempts (see the documentation for how to control the number of random attempts). \end{remark} Instead of calling {\tt divisor(Module)}, one can call {\tt divisor(Module, Section => f)}. This function finds the unique effective divisor $D$ corresponding to a global section $f \in M$ of our module. The function {\tt divisor(Ideal, Section => f)} behaves similarly. The strategy is the same as above; additionally, one tracks the section and adds a divisor corresponding to the section at the end. It is worth mentioning that the function {\tt canonicalDivisor} simply computes the canonical module via an appropriate $\Ext$ and then calls {\tt divisor(Module)}. If you wish to construct a canonical divisor on a projective variety, make sure to set the {\tt IsGraded} option to {\tt true}. \subsection{Pulling back divisors} Utilizing the module and divisor correspondence, {\tt pullBack} pulls back a divisor along a map $\Spec S \to \Spec R$ induced by a ring map $R \to S$. The user has a choice of two algorithms built into this function. The first works for nearly any map, provided that the divisor is Cartier, and it also works for arbitrary divisors in the flat or finite case. The second, which is the default strategy, only gives accurate answers if the map is flat, or if the map is finite (or if the prime components of the divisor are Cartier). It can be faster than the first algorithm, especially for divisors with large coefficients. To use the first algorithm, set {\tt Strategy => Sheaves}; to use the second, set {\tt Strategy => Primes}. Let us briefly describe these two strategies. The first algorithm pulls back the sheaf $\O(D)$, keeping track of a section appropriately. The second algorithm extends each prime ideal defining a prime divisor of $D$ to an ideal of $S$; then it calls {\tt divisor(Ideal)} on each such ideal and sums them, keeping track of coefficients appropriately. Consider the following example where we look at pulling back a divisor after blowing up the origin (we only consider one chart of the blowup). \begin{verbatim} i2 : R = QQ[x,y]; i3 : S = QQ[a,b]; i4 : f = map(S, R, {a*b, b}); o4 : RingMap S <--- R i5 : D = divisor(x*y*(x+y)*(x-y)) o5 = Div(x+y) + Div(-x+y) + Div(x) + Div(y) o5 : WeilDivisor on R i6 : pullback(f, D) o6 = Div(a+1) + Div(a-1) + 4*Div(b) + Div(a) o6 : WeilDivisor on S \end{verbatim} Note one of the components was lost in this pull-back, as it should have been. The coefficient of the exceptional divisor is also $4$, as it should be. \subsection{Global sections} There are only a few built-in functions for dealing with global sections of modules corresponding to divisors in the current version (in the future we hope to add more tools to do this). Of course, the user may call {\tt basis(0, OO(D))} to get the global sections of a module corresponding to a divisor. In this section, we briefly describe two functions for handling global properties of divisors.
The function {\tt mapToProjectiveSpace} gets the global sections of $\O(D)$ and then computes the corresponding map to projective space. This of course assumes the divisor is graded. In the example below we project $\mathbb{P}^1 \times \mathbb{P}^1$ onto one of its factors by calling {\tt mapToProjectiveSpace} along a divisor of one of the rulings. \begin{verbatim} i2 : R = QQ[x,y,u,v]/ideal(x*y-u*v); i3 : D = divisor(ideal(x,u)); o3 : WeilDivisor on R i4 : mapToProjectiveSpace(D) o4 = map(R,QQ[YY , YY ],{v, x}) 1 2 o4 : RingMap R <--- QQ[YY , YY ] 1 2 \end{verbatim} Still assuming the divisor is graded, the function {\tt baseLocus} finds a defining ideal for the locus where $\O(D)$ is \emph{not} generated by global sections. This is done by computing the cokernel of $\O^{\oplus n} \to \O(D)$ where $H^0(X, \O(D))$ has a basis of $n$ distinct global sections and the map is the obvious one. In the following example, we compute the base locus of a point on an elliptic curve, and also of two times a point on an elliptic curve (which is degree 2 and hence base point free). \begin{verbatim} i2 : R = QQ[x,y,z]/ideal(y^2*z-x*(x+z)*(x-z)); i3 : D = divisor( ideal(x,y) ); o3 : WeilDivisor on R i4 : baseLocus(D) o4 = ideal (y, x) o4 : Ideal of R i5 : baseLocus(2*D) o5 = ideal 1 o5 : Ideal of R \end{verbatim} \section{Checking properties of divisors} \label{sec.Checks} The package {\tt Divisor} can check divisors for several properties. First, we describe the method {\tt isCartier}. \begin{verbatim} i2 : R = QQ[x,y,z]/ideal(x^2-y*z); i3 : D = divisor(ideal(x,y)); i4 : isCartier(D) o4 = false i5 : nonCartierLocus(D) o5 = ideal (z, y, x) o5 : Ideal of R i6 : isCartier(2*D) o6 = true i7 : isCartier(D, IsGraded => true) o7 = true \end{verbatim} The algorithm behind this function is as follows. We compute $\O_X(-D) \cdot \O_X(D)$ and check whether it is equal to $\O_X$. In general, $\O_X(-D) \cdot \O_X(D)$ is an ideal defining the non-Cartier locus of $D$; hence the command {\tt nonCartierLocus}. If the option {\tt IsGraded => true} is set, then the relevant functions saturate the ideals with respect to the irrelevant ideal. We also briefly describe the method {\tt isQCartier}. \begin{verbatim} i8 : isQCartier(5, D) o8 = 2 \end{verbatim} This checks whether any multiple $n \cdot D$ of a Weil divisor or $\bQ$-divisor $D$ is Cartier, for integers $n$ less than or equal to the first argument (in this case $n \leq 5$); it may actually search a little higher than the first argument in the $\bQ$-Cartier case due to rounding issues. If it finds that $nD$ is Cartier, it returns the smallest such integer $n$. If it doesn't find any Cartier multiple, it returns {\tt 0}. Some other useful functions are {\tt isPrincipal} and {\tt isLinearEquivalent}. Checking whether a divisor is principal just comes down to checking whether $\O_X(D)$ is a free module, and checking whether $D \sim E$ just boils down to checking whether $D-E$ is principal. In the graded case, we can do this via \emph{Macaulay2} using the {\tt prune} and {\tt isFreeModule} commands. Unfortunately, we do not know an algorithm for deciding if a non-graded module is free (although we still try to prune the module and more). Therefore {\tt isPrincipal} and {\tt isLinearEquivalent} can give a false negative for non-graded divisors (the function warns you if this might be the case). Likewise, the option {\tt IsGraded} can be applied within {\tt isLinearEquivalent}, which then checks that $\O_X(D-E)$ is principal of degree zero.
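For instance, continuing the quadric cone session above (a hypothetical continuation; the exact output formatting may differ between versions of the package), the ruling $D$ generates the class group $\bZ/2\bZ$ of the cone, so $D$ is not principal while $2D$ is:
\begin{verbatim}
i9 : isPrincipal(D)
o9 = false
i10 : isPrincipal(2*D)
o10 = true
\end{verbatim}
Indeed, $y$ vanishes to order two along $V(x,y)$ on the cone, so $2D$ is the divisor of the single element $y$.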
We can also check whether a divisor $D$ has simple normal crossings by calling {\tt isSNC}. This first checks that the ambient space of $D$ is regular, then it checks that each prime divisor of $D$ defines a regular scheme; finally, it checks that every intersection of prime divisors of $D$ also defines a regular scheme of the appropriate dimension. \section{Future plans} \label{sec.Plans} There are a number of ways that this package should be expanded. One of the most important things to be done is to further develop the global methods related to divisors. We have recently added the ability to check whether a divisor is very ample via the {\tt isVeryAmple} function, which uses the {\tt RationalMaps} package. However, there is much more to be done. Some basic intersection theory between divisors and smooth curves would be natural to include. While the latest version of the package stores the outputs of some functions in the cache, this can still be improved. For example, there are likely ways to take advantage of knowing that a given divisor is Cartier or $\bQ$-Cartier. \bibliographystyle{skalpha}
\section{Introduction} While there is a substantial rise in the implementation of STEM \cite{stem12} applications in science education at the high school level \cite{stem15}, real-life applications still make up an inadequate share of the science curriculum, especially in countries like Turkey. From a broader perspective, this leads to the conclusion that students' perceptions of science are not sufficiently associated with measurable, testable and reproducible physical processes, but rather with the application of memorized mathematical expressions \cite{fen09}. The main underlying causes of students' misconceptions about science applications can be attributed to the inadequacy or even non-existence of laboratory infrastructure, the orientation of the experimental setups towards the demonstration of mostly classical mechanical concepts, and the failure of such demonstrations to pique the curiosity and/or enthusiasm of the students. These points, of course, surface only once we leave aside the issues that affect high school education on a more general level, such as the large student population density, the shortage of qualified teachers, and the inconsistency between the goals and objectives of the general curriculum and its implementation in the classroom \cite{ozden2007}. Furthermore, it has been argued that scientific literacy is best taught by seeing science education as `education through science' \cite{holbrook2009}. However, experiments in high school science courses are often detached from the scientific frontiers that excite many students, such as the observation of gravitational waves or the Higgs boson. Therefore, it is of interest to design hands-on experiments that can be constructed and run by high school students themselves, suitable for their experience and attentiveness levels, yet still connected to frontier fields such as cosmology and particle physics. Towards that goal, we have attempted (a) to develop innovative high-school-level experimental setups and documents that are connected to particle physics, (b) to test the developed materials first with interns, and then (c) to convert these into a week-long summer school program. Our attempt was carried out in the form of an experimental particle physics school held in the summer of 2018 with financial support from T\"{U}B\.{I}TAK under the 4004 grant 118B491. For this organization: (1) An experienced team was formed from people who had prepared setups for CERN's high school contests, and/or supervised high school students, and/or provided training to Turkish high-school teachers for years at CERN. Actual researchers from CERN were also included in the team, as an extra means of improving the enthusiasm of the students. (2) Experimental setups were specifically designed to keep the technical and theoretical information required for comprehending the underlying processes at a minimum level for high school students. Considering the fact that not all of the students have the same scientific background, the necessary accommodation was achieved by introducing lectures focusing on the basics. (3) The contexts of the experiments and the manual skills needed were selected from a wide range of possibilities in order to generate a wider range of opportunities for each of the students to enjoy and improve themselves. (4) Certain parts of the setups were chosen to allow participants to share their experiences with other students afterwards, and even perform entirely new experiments themselves.
In this proceeding, we report the application process, the student profile, the program and the outcomes of the school. Activities held, experiments performed and lectures given are summarized. Finally, we briefly describe the assessments and the evaluations performed during and after the school. \section{Application Process} Following the announcement of the school over social media, applications were accepted over a period of about two weeks. The applicants were asked to have one reference letter submitted and were expected to fill in an online form, in which they provided (i) basic identification data (name, gender, address, name and location of the high school they are attending, grade), (ii) information on any relevant technical experience (Arduino, Raspberry Pi, 3D printers, and programming in general), (iii) a brief description of past scientific activities (school projects, attendance at science fairs, participation in summer schools, etc.), and (iv) the average grade points from the maths and physics courses of the most recent semester. Finally, they were asked a couple of open-ended questions such as \textit{"What does science mean to you?"} and \textit{"Write down 3 questions you wish to find answers to when you attend lidyef."}. The school had initially been conceived with only 11th grade students in mind, but before the start of the application process a decision was made to accommodate a small quota of 12th graders, in order to facilitate peer education and to evaluate the interest level of students who would soon start preparing intensively for the university entrance exam in Turkey. In total, 681 valid applications were received. Some statistics are provided below: \begin{flitemize} \item \textbf{Gender distribution:} 44.5\% female, 55.5\% male. \item \textbf{Grade distribution:} 52.1\% 11th grade, 47.9\% 12th grade. \item \textbf{Distribution by province} is shown in Figure~\ref{fig:application_geo}. \item \textbf{Type of school:} 41.9\% Anatolian high school, 30.7\% science high school, 9.1\% private Anatolian high school, 5.9\% private science high school, 5.1\% religious high school, 7.3\% other types. \item \textbf{Last available physics grade:} 88.4 $\pm$ 33.3 and \textbf{mathematics grade:} 90.7 $\pm$ 13.3. \end{flitemize} \begin{figure}[hbt!] \centering \topcaption{\label{fig:application_geo} The poster and the geographic distribution of the applications} \subfloat[The lidyef2018 poster]{\includegraphics[height=0.3\textwidth]{poster-resize.pdf} } \qquad \subfloat[The geographic distribution of the applications to lidyef2018]{\includegraphics[height=0.3\textwidth]{basvuru_iller_002.png} } \end{figure} A group of 4 academicians from the project team evaluated the applications. 30 students (24 from the 11th grade and 6 from the 12th grade) were selected, mostly based on their answers to the open-ended questions. The aim of the open-ended questions was to gauge the level of their motivation and their perceptions of science. Numerical measures such as the physics and math grades functioned only to eliminate the few students with insufficient technical background. To promote equality of opportunity, students who had not had past opportunities to participate in science events were given preference. Although technical experience was not used as a selection criterion, a mix of experienced and inexperienced-but-highly-motivated students was aimed for in the last step of the selection process.
Finally, effort was spent to fairly match the fractions of students of a given gender (16 female, 14 male) and geographic location to those of the national population. \section{Teaching Techniques} The school brought together students from different backgrounds, with various abilities and personalities. In order to meet a broad spectrum of individual needs, we focused on implementing various student-focused teaching techniques. In order to facilitate a better grasp of the real-world applications of the topics covered in the lectures, a significant amount of visualization was integrated into the descriptions of the concepts, and the descriptions were enriched by adding daily-life examples. The experiments used in the program were specifically designed to increase the inclusion of the students in the inquiry process by introducing semi-free hands-on activities instead of fully-guided cookbook-type experiments. Throughout the program, the students were encouraged to work together in small groups (5 students in each group). By doing so, we aimed to engage the students in a cooperative learning \cite{coop16} process in which they were expected to work as a group with other students of different abilities. Hence, they had the chance to experience a peer-oriented environment in which they could freely express their ideas and respond to each other, and could develop and/or improve their self-confidence while attaining the necessary communication and critical thinking skills. As a part of the program, we also implemented the inquiry-based teaching method \cite{dos15} by requiring students to work on projects of their own choice. Some basic guidelines for safety and originality of the work were established, and supplies were obtained and provided to the students as needed. The students conceived and implemented their projects entirely by themselves (some individually, others in groups of 2-4) during their free time (mostly evenings at their dormitory). Towards the end of the program, they were asked to present their work at an evening event, which stimulated lively discussions with the lecturers, project leaders and guide teachers. Throughout the program, we focused on helping the students explore their own ideas and improve their problem-solving skills. In order to achieve this, in all of the lectures and experiments, we prioritized chains of thought-provoking questions as a source of inspiration, so that the students could carry out the thinking process on their own and become more independent as learners. In order to accommodate the accelerating pace of technological development and to demonstrate the ubiquitous use of computers in particle physics, introductory-level lectures were included on the basics of programming and Arduino prototyping boards, and a Geiger counter application was implemented with Arduino. A disciplined yet friendly atmosphere of mutual respect was created for both the teachers and students. This was facilitated by having the guide teachers stay at the same lodging as the students. Finally, after the successful presentations of their projects, certificates of attendance and Arduino starter sets were handed out to the students, to reward their contributions and to give them a chance to keep on exploring after their return to their high schools. \section{Structure of the Program} The lidyef2018 program spanned a full week. Theory lectures were held in the mornings, and experiments and applications in the afternoons.
The students were expected to develop their own particle-physics-related projects in the evenings, to be presented at the end of the school. \subsection{Meeting and Introduction} On the first day, the students were picked up from the airports and bus terminals by the guide teachers. Once all had arrived, the program was introduced and the safety issues were explained by the project leaders and the project nurse. A small game was played to introduce the students to one another. \subsection{Theoretical Lectures} Theory lectures were held in the mornings in two 40-minute sessions with a 10-minute break in between. The aim was to provide the theoretical background and prepare the students for the experiments and applications. The lectures were taught by experts ranging from recent physics BSc graduates to full professors of particle physics. A complete list of lectures is provided below: \begin{flitemize} \item \textbf{Modern Physics and Cosmic Particles:} Basic concepts of quantum physics and special relativity and cosmic particle physics with a historical context. \item \textbf{Particle Physics:} Review of the Standard Model and the elementary particles. \item \textbf{Electricity and Magnetism:} Theory of electricity and magnetism for detector and accelerator physics. \item \textbf{About CERN:} Introduction to the laboratory, the Large Hadron Collider and its detectors. \item \textbf{Detector Physics:} Short history, basic working principles and types of particle detectors. \item \textbf{Basic Analysis Methods:} Significant figures, experimental uncertainties, precision and accuracy. \item \textbf{Accelerator Physics:} Short history, basic working principles and types of particle accelerators. \item \textbf{Theoretical Particle Physics:} Overview of theoretical particle physics concepts; historical and conceptual construction of modern physics, progress from Newtonian mechanics towards quantum field theories. \item \textbf{Applications of Particle Physics:} Applications in areas like medicine, computing, industry, etc. An engineering point of view into the world of particle physics. \end{flitemize} \subsection{Computer Based Lectures and Applications} A number of computer-based lectures and application sessions were also included in the program. While they had initially been planned to span 90-minute periods, based on the feedback received from the students it was concluded that the students would benefit more from longer sessions with longer discussion parts. Hence, the duration of these lectures should be re-evaluated for future programs. The students were split into groups of three during the application sessions (Figure~\ref{fig:computer}). At the end of each application, they were given report sheets to fill out. \begin{figure}[hbt!] \centering \topcaption{\label{fig:computer} Computer based lectures and applications} \subfloat[Arduino applications]{{\includegraphics[width=0.28\textwidth]{arduino_app1-eps-converted-to.pdf} }} \qquad \subfloat[Geiger counter]{{\includegraphics[width=0.28\textwidth]{geiger.jpg} }} \qquad \subfloat[Hypatia screenshot]{{\includegraphics[width=0.28\textwidth]{hypatia6.png} }} % \end{figure} \begin{flitemize} \item \textbf{Introduction to programming:} An introduction to how computers work and to the main principles and basic methods of computer programming. At the end of the lecture, students were advised to play the online "light bot" game (http://lightbot.com/hour-of-code.html).
\item \textbf{Arduino lectures and applications:} Programming basic tasks with the Arduino IDE and an introduction to taking data from sensors. In the hands-on session, the students were given LEDs, resistors, sensors, etc., and were expected to complete small sections of an already prepared source code that lights up the LEDs in a given pattern, and to print on the screen digital and analog data read from the sensors. \item \textbf{Geiger counter with Arduino:} A Raspberry Pi 3+, an Arduino Uno and a Geiger counter were provided to the students, as well as source code that prints the time at which a particle passes through the counter. They were expected to take 6 minutes of data, draw histograms of the counts in 30-sec and 1-min bins, and comment on what they had seen. \item \textbf{Hands-on CERN ATLAS experiment data:} CERN has been supporting so-called Masterclass events for years, in which high school students analyze data from actual collision events collected by the ATLAS or CMS experiments. At lidyef2018, we followed the $Z$-path of the ATLAS Masterclass \cite{masterclass2018}. The students were introduced to the ATLAS detector geometry, event reconstruction and software. They were then expected to analyze $Z\rightarrow\ell\ell$ events using the HYPATIA software \cite{hypatia} and reconstruct the mass of the $Z$ boson. \end{flitemize} \subsection{Experiments} Given the budget constraints, six copies of each setup were prepared, and the students were split into groups of five to run the experiments concurrently. Before the start of the school, all the setups had been tested by two summer interns, who were themselves high school students. For each experiment, a report sheet was prepared, to be filled in by each group during the experiment and to be submitted at the end. The reports included the following parts: aim of the experiment, materials used, and observations/data collected. The duration of each session was set to 90 minutes, but based on our observations, we would recommend extending this period to 2 hours in future programs. The five different experiments that were carried out can be seen in Figure~\ref{fig:experiments}. \begin{figure}[hbt!] \centering \topcaption{\label{fig:experiments} Experiments} \subfloat[Cloud chamber setup]{\includegraphics[height=0.15\textwidth]{bulut.jpg} } \subfloat[Laser diffraction experiment]{\includegraphics[height=0.15\textwidth]{young1-eps-converted-to.pdf} } \subfloat[Interns measuring the speed of light with chocolate in a microwave oven]{\includegraphics[height=0.15\textwidth]{cikolata1-eps-converted-to.pdf} } \subfloat[Salad bowl experiment]{\includegraphics[height=0.15\textwidth]{salata.png} } \subfloat[Model of the ATLAS toroid magnet]{\includegraphics[height=0.15\textwidth]{atlas_toroid.jpg} } % \end{figure} \begin{flitemize} \item \textbf{Wilson Cloud Chamber Experiment:} The cloud chamber is not only a detector which led to the Nobel-Prize-winning discoveries of the positron, the muon and the kaon, but is also used for educational purposes in particle physics. In the experiment, an alcohol cloud is formed in a clear aquarium. In a dark room, with the help of a torch, the students could see the tracks of cosmic particles. At the end of the experiment, they discussed the qualitative differences between the observed tracks and which particles those tracks belong to. The background information provided to the students covered cosmic rays and the interactions of particles with matter.
\item \textbf{Diffraction Experiment:} To observe the diffraction of light, a common laser pointer, a CD or DVD, a ruler and paper were used. Using data about the CDs and observing the interference patterns, the wavelengths of the red and green laser light were first computed. Next, using the obtained values, the diffraction pattern from a single strand of hair was studied and its thickness was measured. The students were provided background information on various modern physics concepts, especially about light. \item \textbf{Measuring the Speed of Light with Chocolate in a Microwave Oven:} Before this experiment, the students were provided background on the physics of waves and light. The turntable in the microwave oven was removed and two flat bars of chocolate (15.5$\times$7.5\,cm) were placed inside. The standing waves created in the microwave oven caused the chocolate to melt only at certain points: the antinodes of the wave, which are spaced half a wavelength apart. By measuring the distance between the melted spots, students obtained the wavelength and then calculated the speed of light (the arithmetic is sketched in the worked example at the end of this section). In Figure~\ref{fig:experiments} (c), the summer interns can be seen performing this experiment. \item \textbf{Salad Bowl Experiment}: To demonstrate how electrostatic accelerators work, a salad bowl accelerator model was constructed. Eight strips of conductive (copper) band were placed on a salad bowl, and they were charged with static electricity obtained from a Van de Graaff generator. The connections were made in a way that caused neighbouring bands to be oppositely charged. A ping pong ball coated with a conductive paint (or painted with graphite from a pencil) was placed in the bowl. At each strip it collected alternating electric charges and, moving from one strip to the next, it was accelerated. The students calculated the speed of the ball in the accelerator and compared the model with accelerators like the Large Hadron Collider. Students were provided background information on electricity, magnetism and accelerator physics. \item \textbf{ATLAS Toroid Model:} A fully working model of the toroid magnet of CERN's ATLAS detector can be built using a 3D printer, copper wire and a low-voltage power supply. The parts in the reference \cite{scoollab2016} were printed and glued. The coils were wound with 80 turns of copper wire. Then all the parts were put together and connected to the power supply. The students observed the magnetic field lines using small compasses. After the experiment, a cathode ray tube was placed in the magnetic field of a pair of Helmholtz coils and the instructors demonstrated how electron trajectories are bent in a magnetic field. \end{flitemize} \subsection{Visits and Live Connection to CERN} The program included a number of \textit{extra-curricular} visits, selected to complement the scientific program and also to provide a breathing space for the students. The destinations were: Sak{\i}p Sabanc{\i} Museum; \.{I}stanbul University Astronomy Department, Planetarium, and Physics Department Laboratories; Bo\u{g}azi\c{c}i University South Campus, Physics Department, Kandilli Solar Observatory and Kandilli Detector, Accelerator and Instrumentation Lab (KahveLab). In addition to the visits, an hour-long live teleconference session was held, in which three Turkish scientists (a PhD student and two senior physicists) working at CERN introduced themselves and answered questions from the students. The aim was to inspire the students and give them a chance to meet scientists working at an international lab.
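As an illustration of the arithmetic behind the chocolate experiment described above, consider the following worked example; the nominal household magnetron frequency $f = 2.45\,\mathrm{GHz}$ is assumed here, and should of course be replaced by the value quoted for the actual oven used. Adjacent melted spots sit half a wavelength apart, so a measured spot spacing $d$ gives
\[
\lambda = 2d, \qquad c = \lambda f = 2df .
\]
For the accepted value $c \approx 3\times10^{8}\,\mathrm{m/s}$ one expects $d = c/(2f) \approx 6.1\,\mathrm{cm}$, which fits comfortably on the 15.5\,cm chocolate bars; a measured spacing close to this value reproduces the speed of light to within the accuracy of a ruler measurement.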
\section{Assessment and Evaluation} Throughout the program, we implemented various methods in order to improve the validity and reliability of the assessment process of the school; these are discussed in more detail below. \subsection{The Evaluation Survey} As an assessment tool for the overall success of the program, we prepared an evaluation survey and distributed it to the students on the last day of the school. The survey involved questions related to the evaluation of the school program, instructors, guides, experiments and applications on a Likert scale (out of 5, where 1 means \textit{``Very unsatisfied''} and 5 means \textit{``Very satisfied''}). To briefly summarize the results, the students rated the program with a high overall score of 4.09 $\pm$ 0.77. The content was found to be sufficient (4.27 $\pm$ 0.87), and the students stated that they would use their gains in the future (4.80 $\pm $ 0.61). They were very pleased with their instructors (4.69 $\pm $ 0.19), regarding them as experts in their fields (4.83 $\pm $ 0.38), and stated having good communication with them (4.80 $\pm $ 0.41). Similarly, they found their communication with the guides favourable (4.81 $\pm $ 0.41), and all agreed that the guides were always helpful and had led them properly (4.77 $\pm$ 0.57). They also agreed that the experiments and lectures had been appropriately designed for their levels (4.40 $\pm $ 1.04 and 4.14 $\pm $ 0.45, respectively), that the test equipment was in good shape (4.57 $\pm $ 0.73), and that the documentation explaining the experiments was mostly clear (3.97 $\pm $ 1.00). Additionally, they were satisfied with the social program (4.19 $\pm $ 0.41). Median evaluations were usually 4 or 5. The lowest scores (1-2) were rarely given, and only for a few questions. We consider it a positive sign that the students took the survey seriously, did not hesitate to criticize things they found to be insufficient, and proposed improvements. \subsection{Assessment and Evaluation of the Computing Applications} A short test of 10 questions was given to the students in order to evaluate their comprehension of the computing lecture and its applications. The students scored an average of 6.62 out of 10, indicating that the lecture had met its basic objectives. At the end of the computing exercises, most of the students were observed to have written their own software for the Arduino and Raspberry Pi boards, in accordance with the objectives of the lecture. \subsection{Discussion and Evaluation} At the end of the school, a one-hour meeting was held in order to discuss and evaluate its performance. Below are some inferences and recommendations proposed by instructors, guides and students: \begin{itemize} \item All of the students agreed that schools with a similar structure and curriculum should be organized regularly, and that other students should be given this opportunity as well. \item The project team and students agreed that networking among students and instructors was very important, and could be useful in the future. \item It was proposed to organize the same program for high school teachers. \item The students agreed that all project members were self-sacrificing and helpful during the school. \item The students indicated that they had learned the fundamentals of the inquiry process and felt highly motivated towards joining academia. \end{itemize} \subsection{Experiment reports} The student-filled reports from the five experiments were evaluated by two teachers of the program. The students scored an average of 4.2 out of 5.
From this score, we draw a positive conclusion about their ability to conduct experiments, write reports, and prepare experimental setups. \subsection{Study of the Particle Physics Data} As part of the ATLAS data analysis exercise, the students were expected to search for tracks of $W$ particles using the Minerva software. Most of them managed to identify 7 tracks out of 10. The fact that 3 tracks were missed was taken as a good indication that the time assigned for the task was not enough and should be increased in future applications. We also delivered a test at the end of this exercise. The average score was 10.7 out of 12. This score supports the conclusion that the objective of the task, which was to impart information about particles, the ATLAS detector and basic analysis procedures, was achieved. \subsection{Project Work} As a part of the program, the students were expected to work on projects of their own during their free time. All the students participated enthusiastically, with a couple of students contributing to more than one project. A total of ten projects were presented at the end of the school: the students had designed games, written books for children, and built lively detector demo boards with LEDs and Arduinos, all demonstrating or teaching the topics covered throughout the week. The presentations were also very colorful, and the students were observed to be excited to showcase their products. The breadth and ingenuity of the projects also indicated that the students had been able to obtain the basic know-how for accessing the necessary information, and for designing and developing products. \section{Conclusions} The school was successfully held between 9 and 16 September 2018. The results of the assessment procedures discussed above show that both the students and the high school teachers considered the program to be immensely positive. A large fraction of the evaluation forms from the students indicated that the school had a huge impact on how they view the world and the role science plays in it, with many students expressing a desire to choose careers in STEM fields. The student projects were also found to be highly innovative, even by the high school teachers who are familiar with the education system in Turkey. The assessment procedures carried out throughout the week and the feedback gathered during and after the lectures and experiments produced reliable results from which to conclude that the program met and surpassed its objectives. For interested parties who want to organize similar events, we will make available the video recordings of the lectures, applications and experiments, as well as the data collected with the assessment methods. Furthermore, we prepared guidelines that can allow secondary education institutions to implement similar experimental setups for their own students \cite{lidyef2018}. We also foresee that the project will make a valuable contribution to increasing the success rate of students from Turkey when they participate in international contests organized by CERN or similar bodies. \section*{Acknowledgements} We wish to express our most sincere gratitude to those institutions and persons without whom lidyef2018 would not have happened. We thank Cihan \c{C}i\c{c}ek and Assoc. Prof.
Fatih Mercan for their great support while developing and submitting this project; Bo\u{g}azi\c{c}i University Department of Physics for providing us with the necessary lab spaces and classrooms; \.{I}stanbul University, TOBB ETU and KahveLab for their support; \.{I}stanbul Beyo\u{g}lu Anadolu High School for letting us use their 3D printers; KahveLab summer interns Do\u{g}a Aksen and Derin Sivrio\u{g}lu for their help in testing the experimental setups; our `guide' teachers Selma Erge, Ali Osman Erol, \.{I}rem Nekay, Ay\c{s}enur \"{O}zdemir, Yester \"{O}zmerino\u{g}lu, Ahmet Renklio\u{g}lu, Reyhan \"{O}z Y{\i}ld{\i}z for their support throughout the entire school; our instructors Metin Ar{\i}k, Emre \c{C}elebi, Serkant \c{C}etin, Berare G\"{o}kt\"{u}rk, O\u{g}uz Ko\c{c}er, Salim O\u{g}ur, Ayd{\i}n \"{O}zbey, Sezen Sekmen, Ezgi Sunar, G\"{o}khan \"{U}nel, H\"{u}seyin Y{\i}ld{\i}z, Alperen Y\"{u}nc\"{u} for their valuable lectures; Bo\u{g}azi\c{c}i University undergraduate students Sevim A\c{c}{\i}ks\"{o}z and Ekin Nur Cang{\i}r for their voluntary help whenever necessary, our friends Ezgi Ergenlik, Y{\i}lmaz Ergenlik and Mustafa G\"{u}rb\"{u}z for their voluntary local help.
\section{Introduction}\label{SecI} Two-dimensional (2D) atomic crystals have extraordinary electronic and photonic properties which hold great promise for applications in photonics and optoelectronics~\cite{Novoselov2004,Novoselov2005,Bonaccorso2010}. A fundamental understanding of the light-matter interaction in 2D atomically thin crystals is therefore essential for optoelectronic applications. Reflection and refraction are the most common optical phenomena, and they are governed by the boundary conditions~\cite{Jackson1999}. In general, reflection and refraction at the surface of 2D atomically thin crystals are interpreted by treating the crystal as a homogeneous medium with an effective refractive index and an effective thickness~\cite{Blake2007,Bruna2009,Kravets2010,Peters2011,Zhou2012,Golla2013}. Recently, it has been demonstrated that the Fresnel model based on a finite effective thickness and an effective refractive index fails to explain the full range of experiments on light-matter interaction~\cite{Merano2016I,Merano2016II,Merano2016III}. However, the Fresnel model based on a zero-thickness interface can give a complete and convincing description of all the experimental observations. Hence, the 2D atomic crystal can be regarded as a zero-thickness interface (a real 2D system). As a fundamental physical effect in light-matter interaction, the spin-orbit coupling of light is attributed to the transverse nature of the photonic polarization. The photonic spin Hall effect (SHE), manifesting itself as a spin-dependent splitting in light-matter interaction, is considered a result of the spin-orbit interaction of light~\cite{Onoda2004,Bliokh2006,Hosten2008}. The photonic SHE can be regarded as a direct optical analogy of the SHE in electronic systems~\cite{Dyakonov1971,Hirsch1999,Murakami2003,Sinova2004,Wunderlich2005}, where the spin electrons and the electric potential are replaced by spin photons and a refractive index gradient, respectively. The analogy has been extensively demonstrated for the photonic SHE in 3D bulk crystals~\cite{Bliokh2007,Bliokh2008,Aiello2008,Luo2009,Menard2010,Luo2011,Zhou2013,Korger2014,Ren2015}. However, the effective refractive index fails to adequately explain the light-matter interaction in 2D atomic crystals, and so the question arises of how to describe the spin-orbit interaction on the surface of 2D atomic crystals. In this paper, we examine the spin-orbit coupling of light on the surface of freestanding atomically thin crystals. We develop a general model to describe the spin-orbit interaction of light on the surface of 2D atomic crystals. We find that it is not necessary to involve an effective refractive index to describe the spin-orbit interaction and the photonic SHE on the surface of atomically thin crystals. Based on this model, the spin-dependent spatial and angular shifts in the photonic SHE can be obtained. A strong spin-orbit interaction and a giant photonic SHE are predicted, which can be explained as a large polarization rotation of the plane-wave components required to satisfy the transversality of the photon field. \section{A general model for spin-orbit interaction of light}\label{SecII} We first establish a general model to describe the spin-orbit interaction on the surface of 2D atomic crystals. Let us consider a Gaussian wavepacket with monochromatic frequency $\omega$ impinging from air onto the surface of the 2D atomic crystal, as shown in Fig.~\ref{Fig1}. The $z$ axis of the laboratory Cartesian frame ($x,y,z$) is normal to the surface of the 2D atomic crystal.
A sheet of 2D atomic crystal is placed on top of a dielectric substrate. In addition, the coordinate frames ($x_i,y_i,z_i$) and ($x_r,y_r,z_r$) are attached to the central wave vectors of the incident and reflected beams, respectively. \begin{figure} \centerline{\includegraphics[width=8cm]{Fig1.eps}} \caption{\label{Fig1} Schematic illustrating the photonic SHE of a wavepacket reflected from the surface of an atomically thin crystal. On the surface of the 2D atomic crystal, the photonic SHE occurs, manifesting itself as a spin-dependent splitting. For a freestanding atomically thin crystal, we can choose the refractive index of the substrate as $n=\sqrt{\varepsilon/\varepsilon_0}=1$.} \end{figure} In order to keep the discussion as general as possible, the conductivity and susceptibility tensors of the 2D atomic crystal are written as \begin{eqnarray} \sigma_T=\left(\begin{array}{lcr}\sigma_{pp} & \sigma_{ps}\\\sigma_{sp}&\sigma_{ss}\end{array}\right),~~~ \chi_T=\left(\begin{array}{lcr}\chi_{pp} & \chi_{ps}\\\chi_{sp}&\chi_{ss}\end{array}\right) .\label{sigmachi} \end{eqnarray} The conductivity and susceptibility tensors can be applied to describe different 2D atomic crystals, such as graphene~\cite{Novoselov2004}, boron nitride~\cite{Geim2013}, and black phosphorus~\cite{FXia2014}. Based on the boundary conditions, the incident, reflected, and transmitted amplitudes satisfy the following equations: \begin{eqnarray} E^s_i+E^s_r=E^s_t\label{ReflectI}, \end{eqnarray} \begin{eqnarray} \cos\theta_i(E^p_i-E^p_r)=\cos\theta_tE^p_t\label{ReflectII}, \end{eqnarray} \begin{eqnarray} \frac{\cos\theta_i}{Z_0}(E^s_i-E^s_r)&=&\left(\sigma_{ss}+\frac{ik\chi_{ss}}{Z_0} +\frac{\cos\theta_t}{Z}\right)E^s_t\nonumber\\ &&+\left(\frac{ik\chi_{sp}}{Z_0}+\sigma_{sp}\right)\cos\theta_tE^p_t\label{ReflectIII}, \end{eqnarray} \begin{eqnarray} \frac{1}{Z_0}(E^p_i+E^p_r)&=&\left(\sigma_{pp}\cos\theta_t+\frac{ik\chi_{pp}}{Z_0}\cos\theta_t +\frac{1}{Z}\right)E^p_t\nonumber\\ &&+\left(\frac{ik\chi_{ps}}{Z_0}+\sigma_{ps}\right)E^s_t\label{ReflectIV}. \end{eqnarray} Here, $p$ and $s$ represent the parallel and perpendicular polarization states, respectively, $\theta_i$ is the angle of incidence, and $\theta_t$ is the transmission angle. ${Z_0}$ is the impedance of air and ${Z}$ is the impedance of the medium. The Fresnel coefficients are determined by the incident and reflected amplitudes: $r_{pp}=E^p_r/E^p_i$, $r_{ss}=E^s_r/E^s_i$, $r_{ps}=E^p_r/E^s_i$ and $r_{sp}=E^s_r/E^p_i$. From Eqs.~(\ref{ReflectI})-(\ref{ReflectIV}), the Fresnel coefficients are obtained as \begin{equation} r_{pp}=\frac{\alpha^T_+\alpha_-^L+\beta}{\alpha_+^T\alpha_+^L+\beta}\label{RPP}, \end{equation} \begin{equation} r_{ss}=-\frac{\alpha^T_-\alpha_+^L+\beta}{\alpha^T_+\alpha_+^L+\beta}\label{RSS}, \end{equation} \begin{equation} r_{ps}=-r_{sp}=\frac{\Lambda}{\alpha^T_+\alpha_+^L+\beta}\label{Re-co}.
\end{equation} Here, $\alpha^L_\pm=(k_{iz}\varepsilon\pm k_{tz}\varepsilon_0+i\varepsilon_0k_{iz}k_{tz}\chi_{pp}+k_{iz}k_{tz}\sigma_{pp}/\omega)/\varepsilon_0$, $\alpha^T_\pm=k_{tz}\pm k_{iz}+ik^2\chi_{ss}+\omega\mu_0\sigma_{ss}$, $\beta=-[ik_{iz}k_{tz}\chi_{ps}+k_{iz}k_{tz}\sigma_{ps}/(\omega\varepsilon_0)](ik^2\chi_{ps}+\omega\mu_0\sigma_{ps})/\mu_0$, $\Lambda=2k_{iz}k_{tz}(ik\chi_{ps}+Z_0\sigma_{ps})$, $k_{iz}=k_i\cos\theta_i$, and $k_{tz}=k_t \cos\theta_t$; $\varepsilon_0$, $\mu_0$ are the permittivity and permeability of vacuum; $\varepsilon$ is the permittivity of the substrate; $\sigma_{pp}$, $\sigma_{ss}$ and $\sigma_{ps}$ ($\sigma_{sp}$) denote the longitudinal, transverse, and cross conductivities, respectively. For the horizontal polarization state $|{H}\rangle$ and the vertical polarization state $|{V}\rangle$, the reflected polarization states are related to the incident polarization states by $[|{{H}}({k}_r)\rangle~|{{V}}({k}_r)\rangle]^T={M}_{R}[|{H}({k}_i)\rangle~|{V}({k}_i)\rangle]^T$. Here, ${M}_{R}$ can be expressed as \begin{eqnarray} \left[ \begin{array}{cc} r_{pp}-\frac{2k_{ry}\cot\theta r_{ps}}{k_{0}} &r_{ps}+\frac{k_{ry}\cot\theta (r_{pp}+r_{ss})}{k_{0}} \\ r_{sp}-\frac{k_{ry}\cot\theta (r_{pp}+r_{ss})}{k_{0}} & r_{ss}-\frac{2k_{ry}\cot\theta r_{ps}}{k_{0}} \end{array}\right]\label{Matrix}, \end{eqnarray} where $k_0=\omega/c$ is the wavevector in vacuum. In the above equation, the boundary conditions $k_{rx}=-k_{ix}$ and $k_{ry}= k_{iy}$ have been introduced. The polarizations associated with the angular spectrum components experience different rotations in order to satisfy the boundary condition after reflection. In the spin basis set, the polarization states $|{H}\rangle$ and $|{V}\rangle$ can be decomposed into two orthogonal spin components: $|{H}\rangle=(|{+}\rangle+|{-}\rangle)/\sqrt{2}$ and $|{V}\rangle=i(|{-}\rangle-|{+}\rangle)/\sqrt{2}$, where $|{+}\rangle$ and $ |{-}\rangle$ represent the left- and right-circular polarization components, respectively. We assume that the wavefunction in momentum space can be specified by the following expression \begin{equation} |\Phi\rangle=\frac{w_{0}}{\sqrt{2\pi}}\exp\left[-\frac{w^{2}_{0}(k_{ix}^{2}+k_{iy}^{2})}{4}\right]\label{GaussianWF}, \end{equation} where $w_{0}$ is the width of the wave function. The total wave function is made up of the packet spatial extent and the polarization state. From Eqs.~(\ref{Matrix}) and (\ref{GaussianWF}), the reflected wave functions $|{\psi}^{H}_{r}\rangle$ and $|{\psi}^{V}_{r}\rangle$ in momentum space can be obtained as \begin{eqnarray} |{\psi}^{H}_{r}\rangle&=&\frac{r_{pp}{\pm}ir_{ps}}{\sqrt{2}}(1{\mp}ik_{rx}\delta_{x\pm}^H{\pm}ik_{ry}\delta_{y\pm}^H)\nonumber\\ &&\times\exp\left[-\frac{w^{2}_{0}(k_{ix}^{2}+k_{iy}^{2})}{4}\right]|\pm\rangle,\label{WFH} \end{eqnarray} \begin{eqnarray} |{\psi}^{V}_{r}\rangle&=&\frac{r_{ps}{\mp}ir_{ss}}{\sqrt{2}}(1{\pm}ik_{rx}\delta_{x\pm}^V{\pm}ik_{ry}\delta_{y\pm}^V)\nonumber\\ &&\times\exp\left[-\frac{w^{2}_{0}(k_{ix}^{2}+k_{iy}^{2})}{4}\right]|\pm\rangle.\label{WFV} \end{eqnarray} Here, $\delta_{x\pm}^H= (\partial r_{ps}/\partial \theta_i)/(r_{pp}\pm ir_{ps})$, $\delta_{y\pm}^H=[(r_{pp}+r_{ss})\cot\theta_i+\partial r_{ps}/\partial \theta_i]/(r_{pp}\pm ir_{ps})-2i\cot\theta_i\, r_{ps}/[k_0(r_{pp}\pm ir_{ps})]$, $\delta_{x\pm}^V= (\partial r_{ps}/\partial \theta_i)/(r_{ss}\pm ir_{ps})$, and $\delta_{y\pm}^V=[(r_{pp}+r_{ss})\cot\theta_i+\partial r_{ps}/\partial \theta_i]/(r_{ss}\pm ir_{ps})-2i\cot\theta_i\, r_{ps}/[k_0(r_{ss}\pm ir_{ps})]$.
For weak spin-orbit interaction, $\delta^{H,V}_{rx}\ll{w_0}$ and $\delta^{H,V}_{ry}\ll{w_0}$, the reflected wavefunctions can be written as \begin{eqnarray} |{\psi}_r^{H}\rangle&\approx&\frac{r_{pp}\pm i r_{sp}}{\sqrt{2}}\exp(\mp ik_{rx}\delta_{rx\pm}^H\pm ik_{ry}\delta_{ry\pm}^H)\nonumber\\ &&\times\exp\left[-\frac{w^{2}_{0}(k_{ix}^{2}+k_{iy}^{2})}{4}\right]|\pm\rangle,\label{WPHI} \end{eqnarray} \begin{eqnarray} |{\psi}_r^{V}\rangle&\approx&\frac{r_{ps}\mp ir_{ss}}{\sqrt{2}}\exp(\pm ik_{rx}\delta_{rx\pm}^V\pm ik_{ry}\delta_{ry\pm}^V)\nonumber\\ &&\times\exp\left[-\frac{w^{2}_{0}(k_{ix}^{2}+k_{iy}^{2})}{4}\right]|\pm\rangle.\label{WPVI} \end{eqnarray} Here, we have introduced the approximations $1+i\sigma k_{rx}\delta^{H,V}_{rx\pm}\approx\exp(i\sigma k_{rx}\delta^{H,V}_{rx\pm})$ and $1+i\sigma k_{ry}\delta^{H,V}_{ry}\approx\exp(i\sigma k_{ry}\delta^{H,V}_{ry})$, where $\sigma$ is the Pauli operator. The origin of the spin-orbit interaction terms $\exp(i\sigma k_{rx}\delta^{H,V}_{rx})$ and $\exp(i\sigma k_{ry}\delta^{H,V}_{ry})$ lies in the transverse nature of the photon polarization: the polarizations associated with the plane-wave components experience different rotations in order to satisfy the transversality condition upon reflection. In general, the phases $\varphi_G=k_{rx}\delta^{H,V}_{rx}$ and $\varphi_G=k_{ry}\delta^{H,V}_{ry}$ can be regarded as spin-redirection Berry phases~\cite{Berry1984,Bliokh2015}. It should be noted that the above approximations do not hold for strong spin-orbit interaction, $\delta^{H,V}_{rx}\approx{w_0}$ or $\delta^{H,V}_{ry}\approx{w_0}$. \section{Strong spin-orbit interaction}\label{SecIII} We now develop the theoretical model to describe the strong spin-orbit interaction of light on the surface of atomically thin crystals. Here, we restrict ourselves to the isotropic case (such as graphene and boron nitride), where the cross-polarization Fresnel reflection coefficients vanish, $r_{ps}=r_{sp}=0$. By making use of a Taylor series expansion around the central angular spectrum component, $r_{pp}$ and $r_{ss}$ can be expanded as polynomials in $k_{ix}$: \begin{eqnarray} r_{pp}&=&r_{pp}(k_{ix}=0)+k_{ix}\left[\frac{\partial r_{pp}(k_{ix})}{\partial k_{ix}}\right]_{k_{ix}=0}\label{Talorkx}, \end{eqnarray} \begin{eqnarray} r_{ss}&=&r_{ss}(k_{ix}=0)+k_{ix}\left[\frac{\partial r_{ss}(k_{ix})}{\partial k_{ix}}\right]_{k_{ix}=0}\label{Talorky}. \end{eqnarray} To accurately describe the strong spin-orbit interaction, the Fresnel reflection coefficients are kept to first order in the Taylor series expansion.
We then obtain \begin{eqnarray} |{\psi}^{H}_{r\pm}\rangle&=&\bigg[r_{pp}-\frac{k_{rx}}{k_0}\frac{\partial r_{pp}}{\partial\theta_i}{\mp}i\frac{k_{ry}\cot\theta_i}{k_{0}}(r_{pp}+r_{ss})\nonumber\\ &&{\mp}i\frac{k_{rx}k_{ry}\cot\theta_i}{k_{0}^2}\left(\frac{\partial r_{pp}}{\partial\theta_i}+\frac{\partial r_{ss}}{\partial\theta_i}\right)\bigg]\nonumber\\ &&\times\exp\left[-\frac{w^{2}_{0}(k_{rx}^{2}+k_{ry}^{2})}{4}\right]|\pm\rangle,\label{WFHSI} \end{eqnarray} \begin{eqnarray} |{\psi}^{V}_{r\pm}\rangle&=&\bigg[r_{ss}-\frac{k_{rx}}{k_0}\frac{\partial r_{ss}}{\partial\theta_i}{\mp}i\frac{k_{ry}\cot\theta_i}{k_{0}}(r_{pp}+r_{ss})\nonumber\\ &&{\mp}i\frac{k_{rx}k_{ry}\cot\theta_i}{k_{0}^2}\left(\frac{\partial r_{pp}}{\partial\theta_i}+\frac{\partial r_{ss}}{\partial\theta_i}\right)\bigg]\nonumber\\ &&\times\exp\left[-\frac{w^{2}_{0}(k_{ix}^{2}+k_{iy}^{2})}{4}\right]|\pm\rangle.\label{WFVSI} \end{eqnarray} The large polarization rotation in momentum space will induce a giant spin-dependent splitting in position space. \begin{figure} \centerline{\includegraphics[width=8cm]{Fig2.eps}} \caption{\label{Fig2} Strong spin-orbit interaction of light on the surface of 2D atomic crystals. The spatial shift (a) and angular shift (b) on the surface of 2D atomic crystals as a function of the incident angle $\theta_i$ and the refractive index of the substrate $n$. The 2D atomic crystal is chosen as single-layer graphene with $\sigma_{pp}=\sigma_{ss}=6.08\times10^{-5}\,\Omega^{-1}$, $\sigma_{ps}=\sigma_{sp}=0$, $\chi_{pp}=\chi_{ss}=1.0\times10^{-9}\mathrm{m}$, and $\chi_{ps}=\chi_{sp}=0$.} \end{figure} The transverse spatial and angular shifts of the wave packet at the initial position ($z_r=0$) are given by \begin{equation} \langle{y_{r\pm}^{H,V}}\rangle=\frac{\langle\psi_{r\pm}^{H,V}|i\partial_{k_{ry}} |\psi_{r\pm}^{H,V}\rangle}{\langle\psi_{r\pm}^{H,V}|\psi_{r\pm}^{H,V}\rangle}\label{PYHV}, \end{equation} \begin{equation} \langle{\Theta_{ry\pm}^{H,V}}\rangle=\frac{1}{k_0}\frac{\langle\psi_{r\pm}^{H,V}|{k_{ry}} |\psi_{r\pm}^{H,V}\rangle}{\langle\psi_{r\pm}^{H,V}|\psi_{r\pm}^{H,V}\rangle}.\label{AYHV} \end{equation} Substituting Eqs.~(\ref{WFHSI}) and (\ref{WFVSI}) into Eqs.~(\ref{PYHV}) and (\ref{AYHV}), respectively, the transverse spatial and angular shifts for the two spin components are obtained. Figure~\ref{Fig2} shows the transverse spatial and angular shifts for the $|H\rangle$ polarization impinging on the surface of single-layer graphene. The transverse shifts are plotted as functions of the incident angle and the refractive index of the substrate. For one-layer graphene at wavelength $633\,\mathrm{nm}$, the surface conductivity and the surface susceptibility values are chosen as $6.08\times10^{-5}\,\Omega^{-1}$ and $\chi_{pp}=\chi_{ss}=1.0\times10^{-9}\,\mathrm{m}$, respectively~\cite{Merano2016I}. The large spatial shifts occur near a certain angle [Fig.~\ref{Fig2}(a)], which can be regarded as the Brewster angle for reflection at the air-substrate interface~\cite{Luo2011}. The large angular shifts arise due to the surface conductivity and the surface susceptibility of the 2D atomic crystal [Fig.~\ref{Fig2}(b)]. It should be mentioned that no angular shifts are present at a bare air-substrate interface. The incident angle associated with the large spatial and angular shifts increases as the refractive index decreases. The case of a freestanding 2D atomic crystal in vacuum, where the refractive index of the substrate is chosen as $n=1$, is therefore of particular interest.
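As a consistency check, it is straightforward to specialize Eqs.~(\ref{RPP})-(\ref{Re-co}) to the freestanding isotropic case: with $\varepsilon=\varepsilon_0$, $k_{tz}=k_{iz}=k_0\cos\theta_i$, and $\sigma_{ps}=\chi_{ps}=0$ one has $\beta=\Lambda=0$, and the reflection coefficients reduce to the compact closed forms
\begin{eqnarray}
r_{pp}&=&\frac{\cos\theta_i\left(ik_0\chi_{pp}+Z_0\sigma_{pp}\right)}{2+\cos\theta_i\left(ik_0\chi_{pp}+Z_0\sigma_{pp}\right)},\nonumber\\
r_{ss}&=&-\frac{ik_0\chi_{ss}+Z_0\sigma_{ss}}{2\cos\theta_i+ik_0\chi_{ss}+Z_0\sigma_{ss}}.\nonumber
\end{eqnarray}
As $\theta_i\rightarrow90^{\circ}$, $r_{pp}\rightarrow0$ while $r_{ss}\rightarrow-1$, so the factor $(r_{pp}+r_{ss})\cot\theta_i/r_{pp}$ entering the transverse shifts remains large, of order $2/|ik_0\chi_{pp}+Z_0\sigma_{pp}|$; this is consistent with the giant spin-dependent shifts near grazing incidence shown in Fig.~\ref{Fig3} below.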
\begin{figure} \centerline{\includegraphics[width=8cm]{Fig3.eps}} \caption{\label{Fig3} Giant spin-dependent shifts in the photonic SHE when the wave packet is reflected from the surface of a freestanding 2D atomically thin crystal. The spatial shifts (a) and the angular shifts (b) on the surface of atomically thin crystals with different numbers of layers $m=1,2,3$. The beam waist is chosen as $w_0=20\mathrm{{\mu}m}$. Other parameters are the same as in Fig.~\ref{Fig2}.} \end{figure} Assuming that an individual graphene sheet is a non-interacting monolayer, the surface conductivity and the surface susceptibility of few-layer graphene increase linearly with the layer number $m$; the parameters for multi-layer graphene are thus obtained as $m\times6.08\times10^{-5}\,\Omega^{-1}$ and $m\times1.0\times10^{-9}\,\mathrm{m}$. This assumption has been used to analyze the Goos-H\"{a}nchen effect on the surface of graphene, and the theoretical results coincide well with the experimental ones~\cite{Chen2017}. Figure~\ref{Fig3} shows the transverse spatial and angular shifts for the $|H\rangle$ polarization impinging on the surface of freestanding graphene with different numbers of layers. The obtained spatial shift reaches $3000\,\mathrm{nm}$ near the grazing angle, which is several times larger than the wavelength [Fig.~\ref{Fig3}(a)]. Correspondingly, the angular shifts reach $0.25\,\mathrm{mrad}$ [Fig.~\ref{Fig3}(b)]. Note that large spatial and angular shifts in the Goos-H\"{a}nchen effect have also been predicted theoretically~\cite{Merano2016II}. In addition, the quantized beam shifts~\cite{Kamp2016,Cai2017} should also be enhanced in the quantum Hall regime when the wavepacket is reflected near the grazing angle. We now give a simple explanation of why the polarization rotation can be regarded as the origin of the photonic SHE. In general, an arbitrary linear polarization state can be decomposed into two orthogonal circular polarization states with opposite phases: \begin{eqnarray} \left( \begin{array}{c} \cos\gamma\\ \sin\gamma \end{array} \right)= \frac{1}{\sqrt{2}}\left[\exp(+i\varphi_G)|\mathbf{+}\rangle+\exp(-i\varphi_G)|\mathbf{-}\rangle\right],\label{Jones} \end{eqnarray} where $\gamma$ is the polarization angle. The polarization rotation will induce a geometric phase gradient and ultimately lead to the spin-dependent shifts. When the polarization rotation occurs in momentum space, a spatial shift $\langle{y_{r\pm}}\rangle=\sigma\partial\varphi_G/\partial{k_{ry}}$ is induced. Similarly, when the polarization rotation occurs in position space, an angular shift is induced: $\Delta{k_{ry\pm}}=\sigma\partial\varphi_G/\partial{y_r}$ and $\langle{\Theta_{ry\pm}}\rangle=\Delta{k_{ry\pm}}/k_r$. \begin{figure} \centerline{\includegraphics[width=8.5cm]{Fig4.eps}} \caption{\label{Fig4}[(a) and (b)] Polarization rotation of the beam reflected from the surface of a freestanding 2D atomic crystal without substrate. [(c) and (d)] Polarization rotation on the surface of a 3D bulk crystal with $n=1.515$. The spin-orbit interaction can be explained as the polarization rotation in momentum space and position space. Left column: polarization rotation in momentum space. Right column: polarization rotation in position space. The incident angle is chosen as $\theta_{i}=85^{\circ}$ and the beam waist is chosen as $w_0=10\mathrm{{\mu}m}$. Other parameters are the same as in Fig.~\ref{Fig2}. To make the polarization rotation characteristics more noticeable, we amplify the rotation angles by $10$ times.
} \end{figure} We now examine the polarization rotation characteristics of the wave packet after reflection. From Eqs.~(\ref{Matrix}) and (\ref{GaussianWF}), the reflected wave function can be written as \begin{eqnarray} |\psi_{r}^H\rangle&=&\exp\left[-\frac{w^{2}_{0}(k_{rx}^{2}+k_{ry}^{2})}{4}\right]\bigg[\left(r_{pp}-\frac{k_{rx}}{k_0}\frac{\partial r_{pp}}{\partial\theta_i}\right)|{H}\rangle\nonumber\\ &&-\frac{k_{ry}\cot\theta_i}{k_{0}}(r_{pp}+r_{ss})|{V}\rangle+\frac{k_{rx}k_{ry}\cot\theta_i}{k_{0}^2}\nonumber\\ &&\times\left(\frac{\partial r_{pp}}{\partial\theta_i}+\frac{\partial r_{ss}}{\partial\theta_i}\right)|{V}\rangle\bigg],\label{HKIC} \end{eqnarray} \begin{eqnarray} |\psi_{r}^V\rangle&=&\exp\left[-\frac{w^{2}_{0}(k_{rx}^{2}+k_{ry}^{2})}{4}\right]\bigg[\left(r_{ss}-\frac{k_{rx}}{k_0}\frac{\partial r_{ss}}{\partial\theta_i}\right)|{V}\rangle\nonumber\\ &&+\frac{k_{ry}\cot\theta_i}{k_{0}}(r_{pp}+r_{ss})|{H}\rangle-\frac{k_{rx}k_{ry}\cot\theta_i}{k_{0}^2}\nonumber\\ &&\times\left(\frac{\partial r_{pp}}{\partial\theta_i}+\frac{\partial r_{ss}}{\partial\theta_i}\right)|{H}\rangle\bigg].\label{VKIC} \end{eqnarray} The wave function in position space is the Fourier transform of the wave function in momentum space: \begin{equation} |\Phi_{r}^{H,V}\rangle=\int\int{dk_{rx}dk_{ry}}|\psi_{r}^{H,V}\rangle|k_{rx},k_{ry}\rangle.\label{Fourier} \end{equation} In fact, once the angular spectrum of the incident wave function is known, Eq.~(\ref{Fourier}) together with Eqs.~(\ref{HKIC}) and (\ref{VKIC}) provides the general representation of the reflected wave function in position space: \begin{eqnarray} |\Phi_{r}^H\rangle&=&\exp\left[-\frac{(x_{r}^{2}+y_{r}^{2})}{w^{2}_{0}}\right]\bigg[\left(r_{pp}-\frac{ix_r}{z_R}\frac{\partial r_{pp}}{\partial\theta_i}\right)|{H}\rangle\nonumber\\ &&-\frac{iy_r}{z_R}\cot\theta_i(r_{pp}+r_{ss})|{V}\rangle-\frac{x_ry_r}{z_R^2}\cot\theta_i\nonumber\\ &&\times\left(\frac{\partial r_{pp}}{\partial\theta_i}+\frac{\partial r_{ss}}{\partial\theta_i}\right)|{V}\rangle\bigg]\label{HPR}, \end{eqnarray} \begin{eqnarray} |\Phi_{r}^V\rangle&=&\exp\left[-\frac{(x_{r}^{2}+y_{r}^{2})}{w^{2}_{0}}\right]\bigg[\left(r_{ss}-\frac{ix_r}{z_R}\frac{\partial r_{ss}}{\partial\theta_i}\right)|{V}\rangle\nonumber\\ &&+\frac{iy_r}{z_R}\cot\theta_i(r_{pp}+r_{ss})|{H}\rangle+\frac{x_ry_r}{z_R^2}\cot\theta_i\nonumber\\ &&\times\left(\frac{\partial r_{pp}}{\partial\theta_i}+\frac{\partial r_{ss}}{\partial\theta_i}\right)|{H}\rangle\bigg]\label{VPR}, \end{eqnarray} where $z_R$ denotes the Rayleigh range. The above expressions are confined to the isotropic case; for anisotropic 2D atomic crystals, more complex polarization rotation characteristics would be involved. We plot the polarization distributions of the reflected field in Fig.~\ref{Fig4}. In the reflection on the surface of the 2D atomic crystal, a large polarization rotation is present in both momentum and position spaces [Figs.~\ref{Fig4}(a) and~\ref{Fig4}(b)]. Therefore, a large geometric phase gradient and a giant spin-dependent splitting should also occur in both momentum space and position space. As a comparison, the polarization rotation on the 3D bulk crystal is also plotted [Figs.~\ref{Fig4}(c) and~\ref{Fig4}(d)]. Interestingly, only a tiny polarization rotation appears in momentum space, which ultimately induces a tiny spin-dependent splitting in position space. No polarization rotation appears in position space, and thereby no angular shift occurs.
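As a consistency check of this picture (our own rephrasing of the standard geometric-phase argument, with the overall sign fixed by the handedness convention for $|\pm\rangle$), expand Eq.~(\ref{HKIC}) to first order in $k_{ry}$ at $k_{rx}=0$: for real reflection coefficients the local Jones vector is rotated by $\gamma(k_{ry})\simeq-\frac{k_{ry}\cot\theta_i}{k_0}\,\frac{r_{pp}+r_{ss}}{r_{pp}}$. Decomposing this rotated state as in Eq.~(\ref{Jones}) gives a momentum-dependent geometric phase $\varphi_G=-\gamma(k_{ry})$, so that \begin{equation} \left|\langle y_{r\pm}\rangle\right|=\left|\frac{\partial\varphi_G}{\partial k_{ry}}\right| =\frac{\cot\theta_i}{k_0}\left|1+\frac{r_{ss}}{r_{pp}}\right|, \end{equation} which is the familiar expression for the transverse spin-dependent shift of an $|H\rangle$-polarized beam, and which is strongly enhanced as $r_{pp}\rightarrow0$, i.e.\ near the Brewster condition.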
Taken together, these observations show that the spin-dependent splitting in position space is related to the polarization rotation in momentum space, while the splitting in momentum space is attributed to the polarization rotation in position space. \section{Conclusions} In conclusion, we have developed a general model to describe the spin-orbit interaction of light on the surface of free-standing atomically thin crystals. In this model, the 2D atomic crystal can be regarded as a zero-thickness interface, and we have found that it is not necessary to invoke an effective refractive index to describe the spin-orbit interaction and the photonic SHE in atomically thin crystals. The giant photonic SHE, manifesting itself as a large spin-dependent splitting in both position and momentum space, has been theoretically predicted. This strong spin-orbit interaction can be explained by the large polarization rotation of the plane-wave components required to satisfy the transversality of photons. We believe that these results may provide insights into the fundamental properties of the spin-orbit interaction of light in 2D atomic crystals. \begin{acknowledgements} This research was supported by the National Natural Science Foundation of China (Grants Nos. 11274106 and 11474089). \end{acknowledgements}
\section{Introduction and related results} All graphs in this paper are finite, simple and undirected. By the clique number of a graph $G$ we mean the largest order of a complete subgraph of $G$, and we denote it by $\omega(G)$. Also, $\alpha(G)$ stands for the largest size of an independent set of vertices in $G$. For other notations which are not defined here we refer the reader to \cite{Bondy}. An {\it antimatching} of a graph $G$ is a matching of its complement. A {\it proper coloring} of $G$ is a coloring of the vertices such that any two adjacent vertices have different colors. Given a proper coloring of $G$, a {\it $t$-dominating set} $T = \{ x_1 ,\ldots,x_t \}$ is a set of vertices colored with $t$ distinct colors such that each $x_i$ is adjacent to $t-1$ vertices of pairwise different colors. In that case, and if $G$ is colored by exactly $t$ colors, we say we have a {\it $t$-dominating coloring} (or {\it $b$-coloring} with $t$ colors). We denote by $\varphi(G)$ the maximum number $t$ for which there exists a $t$-dominating set in a coloring of $V(G)$ by $t$ colors. This parameter was defined by Irving and Manlove \cite{IM} and is called the {\it $b$-chromatic number} of $G$. In a $b$-coloring of a graph $G$ with $b$ colors, any vertex $v$ which has at least $b-1$ neighbors with pairwise different colors is called a {\it representative}. We note that in any $b$-coloring of $G$ with $b$ colors there must be at least $b$ representatives with $b$ different colors. It is known that $\chi(G) \leq \varphi(G) \leq \Delta(G)+1$. Let $G$ be a graph with decreasing degree sequence $d(x_1) \geq d(x_2) \geq \ldots \geq d(x_n)$ and let $m(G)= \max \{i : d(x_i) \geq i-1 \}$. In \cite{IM}, the authors proved that $\varphi(G) \leq m(G)$ for any graph $G$, and they showed that every tree $T$ satisfies $m(T)-1 \leq \varphi(T) \leq m(T)$. Also in \cite{IM} it is shown that determining $\varphi$ is NP-hard for general graphs, but polynomial for trees. Some authors have obtained upper or lower bounds for $\varphi(G)$ when $G$ belongs to special families of graphs. In \cite{K}, the $b$-chromatic number of graphs of girth five and six is studied. For a graph $G$ of girth at least $5$, minimum degree $\delta$, and diameter $D$, it is shown in \cite{K} that $\varphi(G)> \min \{\delta, D/6\}$, and that if $G$ is $d$-regular of girth at least six, then $\varphi(G) = d+1$. In this last case the construction of a $b$-dominating coloring can be done in polynomial time. Kratochvil et al.\ \cite{KTV} showed that for a $d$-regular graph $G$ with at least $d^4$ vertices, $\varphi(G)=d+1$. In \cite{KM}, Kouider and Mah\'eo discuss the $b$-chromatic number of the cartesian product $G \Box H$ of two graphs $G$ and $H$. They prove that $\varphi (G \Box H)\geq \varphi (G)+ \varphi (H)-1$ when $G$ (resp.\ $H$) admits a $\varphi(G)$-dominating (resp.\ $\varphi(H)$-dominating) set which is a stable set. We also recall the following result of Klein and Kouider \cite{KK}. Let $\mathcal D$ denote $K_4 \setminus e$. For a $P_4$-free graph $G$, one has $\varphi(H)= \omega(H)$ for every induced subgraph $H$ of $G$ if and only if $G$ is $2 \mathcal D$-free and $3P_3$-free. The aim of this paper is to obtain upper bounds for the $b$-chromatic number of a graph $G$ when $G$ is restricted to special families of graphs. In section $2$ we consider $K_{1,t}$-free graphs. In section $3$ we give an upper bound in terms of the clique number and the minimum clique partition of a graph. Finally, in section $4$, bipartite graphs are considered.
We also show that all the bounds obtained in this paper are tight. \section{$K_{1,t}$-free graphs} In this section we give an upper bound for the $b$-chromatic number of $K_{1,t}$-free graphs, where $t\geq 3$. If $t=2$ then the graph is a disjoint union of complete graphs, for which the $b$-chromatic number coincides with the chromatic number. \begin{thm}\label{free} Let $G$ be a $K_{1,t}$-free graph where $t\geq 3$, then $\varphi(G) \leq (t-1)(\chi(G)-1)+1$. \end{thm} \begin{proof} Suppose $\varphi(G)=b$. Let $C$ be a color class in a $b$-coloring of $G$ with $b$ colors, and let $x$ be any representative of the class $C$. Among the neighbors of the vertex $x$ there exists a set, say $S$, of $b-1$ vertices with pairwise distinct colors. Let $H$ be the subgraph induced by $S$. Since $G$ is $K_{1,t}$-free, the vertex $x$ cannot have $t$ pairwise non-adjacent neighbors, so $\alpha(H) \leq t-1$; moreover $\chi(H)\leq \chi(G) - 1$, since in a proper coloring of $G$ with $\chi(G)$ colors the color of $x$ does not appear on its neighborhood. So $$b-1=|V(H)|\leq \alpha(H)\cdot\chi(H)\leq (t-1)(\chi(G)-1).$$ Therefore $b\leq (t-1)(\chi(G)-1)+1$. \end{proof} In the following we show that the bound of the theorem can be achieved for each $t$. \begin{prop} For any integers $t\geq 3$ and $k$, there exists a $K_{1,t}$-free graph $G$ such that $\chi(G)=k$ and $\varphi(G) = (t-1)(k-1)+1$. \end{prop} \begin{proof} Let the graph $H$ consist of a vertex $v$ whose neighborhood is formed by $t-1$ mutually disjoint cliques, each on $k-1$ vertices. Now we take $(t-1)(k-1)+1$ disjoint copies of $H$ and connect them sequentially by exactly one edge between any two consecutive copies. These connecting edges may be incident to any vertices other than $v$ and its copies. We denote the resulting graph by $G$. It is easily seen that $G$ satisfies the conditions of the proposition. \end{proof} We now have the following immediate corollary of theorem \ref{free}. \begin{cor} If $G$ is a claw-free graph, then $\varphi(G)\leq 2\chi(G)-1$. \end{cor} In \cite{CS} the important fact that $\chi(G)\leq 2\omega(G)$ is proved for every claw-free graph $G$ satisfying $\alpha(G)\geq 3$; combining this with the corollary, we obtain $\varphi(G)\leq 4\omega(G) - 1$ for such graphs. \section{$b$-coloring and minimum clique partition} In this section we give a bound for the $b$-chromatic number of a graph $G$ in terms of its minimum clique partition. A clique partition of a graph $G$ is any partition of $V(G)$ into subsets $C_1, C_2, \ldots, C_k$ such that the subgraph of $G$ induced by each $C_i$ is a clique. We denote by $\theta(G)$ the minimum number of subsets in a clique partition of $G$. We note that for any graph $G$, $\chi(\overline{G}) = \theta(G)$; also, if $\theta(G)=k$ then $G$ is the complement of a $k$-partite graph. Therefore the following result applies to all graphs. \begin{thm}\label{clique-partition} Let $G$ be a graph with clique partition number $\theta(G)=k$ and clique number $\omega$, then $\varphi(G) \leq \frac{k^2\omega}{2k-1}$. \end{thm} \begin{proof} If $k=1$ then $G$ is complete and equality holds in the inequality of the theorem. We suppose now $k\geq 2$. As $\theta(G)=k$, we have $\alpha(G)\leq k$. Let us consider a $b$-coloring of $G$ with $\varphi(G)=b$ colors. Let $i_j$ be the number of color classes with exactly $j$ elements. As color classes are independent sets and $\alpha(G)\leq k$, we have $i_j=0$ for $j\geq k+1$. So $$b=\sum_{j=1}^{k}i_j.$$ By hypothesis, there exists a partition of $V(G)$ into $k$ complete subgraphs, each of order at most $\omega$; therefore, if $n$ is the order of $G$, $$n = \sum_{j=1}^{k}j\,i_j =b+ \sum_{j=2}^{k}(j-1)i_j\leq k\omega.~~~~{\bf (1)}$$ Suppose first that $i_1=0$.
Then every color class in the $b$-coloring of $G$ with $b$ colors contains at least two vertices. This shows that $b\leq n/2$ and so $b\leq k\omega/2$. Finally $b \leq \frac{k^2}{2k-1}\omega$, because $\frac{k}{2}\leq \frac{k^2}{2k-1}$. Suppose now $i_1\geq 1$ and let $C_i=\{x_i \}$ for $i = 1, \ldots, i_1$. Then any representative of any color $j$ is adjacent to any $x_i$, where $i,j \leq i_1$ and $i \neq j$. It follows that $\{x_1, \ldots, x_{i_1}\}$ induces a complete subgraph of $G$. On the other hand, since there exists a partition of $V(G)$ into $k$ cliques, the pigeonhole principle shows that at least $\frac{\sum_{j=2}^{k}i_j}{k}$ of the representative vertices lie in one clique of the partition and hence form a complete graph. We know from above that any representative of any color $j$ is adjacent to any $x_i$, $i \neq j$, $i \leq i_1$; consequently there is a complete subgraph on at least $i_1 + \frac{\sum_{j=2}^{k}i_j}{k}$ vertices. We get the inequality $$i_1 + \frac{\sum_{j=2}^{k}i_j}{k} \leq \omega,$$ in other words, $$ki_1 + \sum_{j=2}^{k}i_j \leq k\omega. \hspace{1cm} {\bf (2)}$$ Now we have $$(2k-1)b=\sum_{j=1}^{k}(2k-1)i_j=(k-1)\Big(\sum_{j=1}^{k}ji_j\Big) + ki_1 + i_2 - \sum_{j=3}^{k}\big((k-1)j-2k+1\big)i_j~~\mbox{for}~ k\geq 3,$$ or $$(2k-1)b=(k-1)\Big(\sum_{j=1}^k ji_j\Big)+ki_1+i_2~~ \mbox{for}~ k=2.$$ So we have $$(2k-1)b \leq (k-1)n + ki_1 + i_2$$ and by inequality (1), $$(2k-1)b \leq (k-1)k\omega + ki_1 + i_2 \leq k^2 \omega - k\Big( \omega - i_1 - \frac{i_2}{k}\Big).$$ By inequality (2), $$(2k-1)b \leq k^2\omega.$$ The theorem is proved. \end{proof} \begin{preprop} For any positive integers $k\geq 2$ and $\omega$ divisible by $2k-1$, there exists a graph $G$ with $\theta(G)=k$ and clique number $\omega$ such that $\varphi(G) = \frac{k^2\omega}{2k-1}$. \end{preprop} \begin{proof} In order to construct our graph we first consider three families of mutually disjoint cliques $\{A_1,\ldots,A_k\}$, $\{B_1,\ldots,B_k\}$ and $\{C_1,\ldots,C_k\}$ where $|A_i| = \frac{\omega}{2k-1}$ and $|B_i| = |C_i| = \frac{(k-1)\omega}{2k-1}$ for each $i=1,\ldots,k$. We put an edge between any two vertices $u$ and $v$ in $A_i$ and $A_j$ for all $i$ and $j$, so that $\bigcup_i A_i$ forms a clique with $\frac{k\omega}{2k-1}$ vertices. Then we join any vertex in $A_i$ to any vertex in $B_j$ for all $i$ and $j$, and we also join the vertices of $A_i$ to all the vertices of $C_i$, for each $i$. There are no edges between $B_i$ and $B_j$ when $i\neq j$, and the same holds for the $C_i$'s. Finally we put an edge between any two vertices $v\in B_i$ and $u\in C_j$ if $i\neq j$. We color the vertices in $\bigcup_i A_i$ with $1,2, \ldots, \frac{k\omega}{2k-1}$ and the vertices of $\bigcup_i B_i$ with the distinct colors $\frac{k\omega}{2k-1}+1, \ldots, \frac{k^2\omega}{2k-1}$. The colors in $C_i$ are the same as in $B_i$ for each $i$. All the vertices of $A=\bigcup_i A_i$ are representatives, and the same holds for $B=\bigcup_i B_i$. Now it is enough to show that the constructed graph $G$ has clique number $\omega$. We first observe that if we identify the cliques $A_i$, $B_i$ and $C_i$ with single vertices $a_i$, $b_i$ and $c_i$, respectively, then we may define a graph $H$ on the $3k$ vertices $\{a_1,\ldots,a_k,b_1,\dots,b_k,c_1,\ldots,c_k\}$ in which two vertices are adjacent if and only if their corresponding cliques are completely joined in the graph $G$. Therefore, to find the maximum order of a clique in $G$, it is enough to check all cliques in $H$.
Let us first set $A= \{a_1,\ldots,a_k\}$, $B=\{b_1,\dots,b_k\}$ and $C= \{c_1,\ldots,c_k\}$, and let $K$ be a clique in $H$. There are two possibilities: {\bf 1}. There is no vertex from $C$ in $K$. In this case $K$ may contain all vertices of $A$ and at most one from $B$, i.e.\ at most $k+1$ vertices. Such a clique results in a clique in $G$ with at most $\frac{k\omega}{2k-1} + \frac{(k-1)\omega}{2k-1}=\omega$ vertices. {\bf 2}. There is a vertex from $C$ in $K$. In this case $K$ contains exactly one vertex from $C$, and at most one vertex from $A$ and one from $B$. This happens, for example, when we take $a_1$, its neighbor in $C$, and a suitable vertex in $B$. Such a clique of order three results in a clique in $G$ with at most $\frac{\omega}{2k-1}+\frac{(k-1)\omega}{2k-1}+\frac{(k-1)\omega}{2k-1}=\omega$ vertices. \end{proof} The following result is an immediate corollary of theorem \ref{clique-partition}. \begin{cor} For any graph $G$ with clique number $\omega(G)$, $$\varphi(G) \leq \frac{\chi^2(\overline{G})}{2~ \chi(\overline{G})-1}~\omega(G).$$ \end{cor} In the case where $G$ is the complement of a bipartite graph, we have more precise knowledge of its $b$-colorings. We first introduce some special graphs which play an important role in $b$-colorings of complements of bipartite graphs. Before we begin, let us mention that when we say there is an anti-matching between two subsets $X$ and $Y$ in a graph $G$, we mean that there exists a matching between $X$ and $Y$ in the complement of $G$. Let $G$ be the complement of a bipartite graph with bipartition $(X,Y)$ such that there are partitions $X=A_1\cup B_1\cup C_1$ and $Y=A_2\cup B_2\cup C_2$ with the following properties: \noindent {\bf 1.} Any vertex in $A_1$ is adjacent to any vertex in $A_2\cup B_2$, hence the subgraph induced by $A_1\cup A_2 \cup B_2$ in $G$ is a clique. Also any vertex in $A_2$ is adjacent to any vertex in $C_1$. \noindent {\bf 2.} $|B_1|=|B_2|$ and there is a perfect anti-matching between $B_1$ and $B_2$. \noindent {\bf 3.} $|C_1|=|C_2|$ and there is a perfect anti-matching between $C_1$ and $C_2$. In this case, letting $b=|A_1\cup A_2| + |B_1| + |C_1|=|X|+|A_2|$, we say that $G$ belongs to the family $\mathcal{A}_b$. In fact $\mathcal{A}_b$ consists of all complements of bipartite graphs $G$ which admit the above-mentioned properties. Let us remark that $\varphi(G) \geq b$ for any graph $G$ belonging to $\mathcal{A}_b$: we color $X \cup A_2$ with different colors and, using the anti-matchings, we give to $B_2$ the same colors as $B_1$, and to $C_2$ the same colors as $C_1$. \begin{thm}\label{cobip} Let $G$ be the complement of a bipartite graph, then $\varphi(G)\leq \frac{4\omega}{3}$. Furthermore, there is a $b$-coloring of $G$ with $b$ colors if and only if $G$ is in $\mathcal{A}_b$. \end{thm} \begin{proof} The inequality $\varphi(G)\leq \frac{4\omega}{3}$ follows from theorem \ref{clique-partition} with $k=2$. If $G$ is in $\mathcal{A}_b$ then, by the remark preceding the theorem, there is a $b$-coloring of $G$ with $b$ colors. Suppose now that we have a $b$-coloring of $G=(X,Y;E)$ with $b$ colors $\{1,2,\ldots,b\}$. Let the color classes be $U_1$, $U_2$, \ldots, $U_b$; since $\alpha(G)\leq 2$, each class has at most two elements, and without loss of generality we may suppose that $|U_i|=1$ for $i=1,\ldots, t$ and $|U_i| = 2$ for $i >t$. Set $A_1 = X \cap \bigcup_{i=1}^t U_i$ and $A_2 = Y \cap \bigcup_{i=1}^t U_i$.
For $i=t+1,\ldots,b$, choose a representative $u_i$ of color $i$. Let $u_i$, $i=t+1, \ldots,s$, be those representatives contained in $X$; they form a set $B_1$. The remaining representatives $u_i$, $i=s+1,\ldots,b$, are by definition in $Y \setminus A_2$ and form a set $C_2$. As each color class $U_i$ with $i \geq t+1$ has exactly two elements, there exists a set $B_2$ in $Y$ with $|B_2| = |B_1|$ carrying the same colors as $B_1$. Similarly there exists a set $C_1$ in $X$ with $|C_1| = |C_2|$ carrying the same colors as $C_2$. Since two vertices with the same color are non-adjacent, there are perfect anti-matchings, one between $B_1$ and $B_2$ and another between $C_1$ and $C_2$. Since each element of $B_1 \cup C_2$ is a representative, and since each color of $A_1 \cup A_2$ appears on a unique vertex, $A_1 \cup C_2$ is a clique and $A_2 \cup B_1$ is also a clique. Considering now the partitions $X=A_1\cup B_1\cup C_1$ and $Y=A_2\cup B_2\cup C_2$, we conclude that $G$ belongs to $\mathcal{A}_b$. \end{proof} We easily get the following consequence. \begin{cor} Let $G$ be the complement of a bipartite graph. Then $$\varphi(G)= b$$ if and only if~ $\max \{k : G \in \mathcal{A}_k \}~=b$. \end{cor} \vspace{5mm} Let us remark that for the larger class of graphs $G$ with $\alpha(G)=2$, there is no linear bound for the $b$-chromatic number (or even for the chromatic number) in terms of $\omega(G)$: as pointed out in \cite{CS}, for each $k$ there is a graph $G$ with $\alpha(G)=2$ such that $\chi(G)\geq k/2$ and $\omega(G)=o(k)$. \section{Bipartite graphs} In this section we suppose that $G$ is a bipartite graph. In the following, by the {\it biclique number} of $G$ we mean the minimum number of mutually disjoint complete bipartite subgraphs which cover the vertices of $G$. Any subgraph of $G$ which is a complete bipartite graph is called a {\it biclique} of $G$. \begin{thm}\label{bipart} Let $G$ be a bipartite graph with bipartition $(X,Y)$, on $n$ vertices and with biclique number $t$. Then $$\varphi(G)\leq \lfloor \frac{n-t+4}{2} \rfloor.$$ \end{thm} \begin{proof} We first prove the theorem for graphs $G=(X,Y)$ which admit a $b$-coloring with $b=\varphi(G)$ colors in which there is at least one representative in $X$ and at least one in $Y$. Let these representatives be $v\in X$ and $u\in Y$. Then $v$ has at least $b-1$ neighbors in $Y$ and $u$ has at least $b-1$ neighbors in $X$. These neighborhoods give us two bicliques covering at least $2b-2$ (and at most $2b$) vertices in total. As $t$ is the biclique number of $G$, at least $t-2$ further bicliques, and hence at least $t-2$ further vertices, are needed to cover the remaining vertices of $G$. Therefore $n\geq 2b-2+t-2$ and so $b\leq \frac{n-t+4}{2}$. Now we may suppose that in every $b$-coloring of $G$ all the representatives lie in the same part, say $X$. Let $i_j$ be the number of color classes of the $b$-coloring with exactly $j$ vertices in the part $Y$. There are two possibilities. Suppose first that $i_1\geq 1$, and let $w$ be the vertex of some color class with exactly one vertex in the part $Y$. Then $w$ belongs to $Y$ and has $b-1$ neighbors which are representatives of different colors. So $w$ is a representative, contradicting the hypothesis that all representatives lie in $X$. Now let $p$ be the minimum number with $i_p\neq 0$; so $p\geq 2$, and we have $n\geq b+ bp = b(p+1)$. We may suppose at this stage that all vertices of $X$ are representatives with pairwise different colors, and also that any vertex $y$ in $Y$ is adjacent to some representative for which $y$ is the unique neighbor of color $c(y)$.
Otherwise, if we delete the vertices of $X$ which are not representatives and the vertices of $Y$ without the previous property, it suffices to prove the inequality of the theorem for the resulting graph $G'$. Let $n-l$ be its order and $t'$ its biclique number. We have $$\varphi(G')\leq \frac{n-l-t'+4}{2}.$$ As the inequality $t\leq t'+l$ holds, we easily get $$ \varphi(G')\leq \frac{n-t+4}{2}.$$ We also have, by construction of $G'$, $\varphi(G)\leq \varphi(G')$. So it is enough to prove the theorem in the case where all the vertices of $X$ are representatives and any vertex of $Y$ is adjacent to some representative. By these hypotheses, and as the coloring is proper, we have $t\leq b$. Finally, since $n\geq b(p+1)$ and $p\geq 2$, we get $2b \leq n - (p-1)b \leq n-b \leq n-t$. Therefore $b\leq \frac{n- t}{2}$. \end{proof} \begin{prop} For any integer $p\geq 3$, there is a bipartite graph $G$ with $n=3p-4$ vertices and biclique number $t=p-1$ such that $\varphi(G) = p = \lfloor \frac{n-t+4}{2} \rfloor$. \end{prop} \begin{proof} We first consider a complete bipartite graph $K_{p-1,p-1}$ minus a matching of size $p-2$. We color one part, say $X$, of this graph with $1,3,4, \ldots, p$ and the other part, say $Y$, with $2,3,4, \ldots, p$ so that the vertices with colors $1$ and $2$ are adjacent. Then we add $p-2$ extra vertices to the part $X$ and color all of them with $2$. Now we put a matching of size $p-2$ between these extra vertices in $X$ and the vertices of $Y$ except the one colored by $2$. The resulting graph $G$ has order $n=3p-4$ and admits a $b$-coloring with $p$ colors. In fact, $\varphi(G)$ is exactly equal to $p$ because $\Delta(G)=p-1$ and hence $\varphi(G)\leq\Delta(G)+1=p$. By the preceding theorem, $$\varphi(G) \leq \frac{n-t+4}{2}.$$ It is then enough to show that $t \geq p-1$ to get the reverse inequality. Because there are $p-2$ vertices of degree one with pairwise distinct neighbors, at least $p-2$ bicliques are required to cover these vertices, and we need an extra biclique to cover the vertex colored by $2$ in $Y$. Now we get the equality $\varphi(G) = \lfloor \frac{n-t+4}{2} \rfloor$. \end{proof}
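To make the construction concrete, the following short Python script (ours, not part of the proof; vertex names are arbitrary) builds the graph of the proposition for $p=4$ and verifies the claimed $b$-coloring with $p$ colors.
\begin{verbatim}
p = 4
X = ["x%d" % i for i in range(p - 1)]        # colored 1, 3, 4, ..., p
Y = ["y%d" % i for i in range(p - 1)]        # colored 2, 3, 4, ..., p
extra = ["e%d" % i for i in range(p - 2)]    # extra vertices, all colored 2

edges = set()
# K_{p-1,p-1} minus a matching of size p-2: remove x_i y_i for i >= 1
for i, x in enumerate(X):
    for j, y in enumerate(Y):
        if not (i == j and i >= 1):
            edges.add(frozenset((x, y)))
# matching between the extra vertices and Y minus the vertex colored 2
for i, e in enumerate(extra):
    edges.add(frozenset((e, Y[i + 1])))

color = {X[0]: 1, Y[0]: 2}
for i in range(1, p - 1):
    color[X[i]] = color[Y[i]] = i + 2        # colors 3, ..., p
for e in extra:
    color[e] = 2

def neighbours(v):
    return {w for f in edges if v in f for w in f if w != v}

# the coloring is proper ...
assert all(color[u] != color[v] for u, v in map(tuple, edges))
# ... and every color class contains a representative
for c in range(1, p + 1):
    assert any(color[v] == c and
               set(range(1, p + 1)) - {c}
               <= {color[w] for w in neighbours(v)}
               for v in color)
print("b-coloring with", p, "colors verified")
\end{verbatim}
In this instance the representatives of colors $3,\ldots,p$ are found in $Y$, which is why the matching to the extra vertices of color $2$ is needed.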
\section{Introduction} Large samples of galaxies with multi-wavelength photometric data and spectroscopic data \citep[e.g.][]{york00,gia04,scoville07} have allowed galaxy evolution studies to shift from luminosity and colour to the more physical plane of stellar mass and star formation rate (SFR), and to examine other aspects of galaxy evolution such as median ages of stellar populations, importance of bursts, correlations with stellar and gas phase metallicity, AGN activity, dust extinction, etc. Deep photometric surveys have also renewed emphasis on photometric redshifts, which require a model of intrinsic galaxy colour. A crucial step in such analyses is fitting a parametric star formation history (SFH) to each galaxy's observed spectral energy distribution (SED). One of the most commonly used parametrisations is the so-called ``$\tau$-model,'' where the SFH is described by an exponentially decreasing SFR with e-folding time $\tau$ \citep[e.g.,][]{bruzual83,papovich01,shapley05,lee09,pozzetti10,foster09}, sometimes augmented with bursts (e.g., \citealt{kauffmann03,brinchmann04}). Some authors \citep[e.g.,][]{lee10} advocate an alternative model where the SFH is parametrised by $t\,e^{-t/\tau}$, which allows linear growth at early times followed by an exponential decline at late times. This is sometimes referred to as the ``delayed'' or ``extended'' $\tau$-model, but in this paper we shall refer to it as the lin-exp (linear-exponential) model. Another approach is to fit the SFR in bins of time (e.g., \citealt{panter07,tojeiro09}), which has the virtue of generality but places strong demands on the quality of the data and the accuracy of the population synthesis models. In this paper, we examine which parametrised models give good descriptions for the SFHs of galaxies in smoothed particle hydrodynamics (SPH) simulations. While SPH simulations are not a perfect representation of the real Universe, they provide useful guidance on what functional forms of the SFH may be necessary. The simulations incorporate a wide range of processes that may be important in galaxy SFHs, including accretion with environmental dependence and stochastic variations, minor and major mergers, conversion of gas to stars based on physical conditions in the interstellar medium, ejection of gas in galactic winds, and recycling of this ejected material through subsequent accretion. Our galaxy SFHs are obtained from a hydrodynamical simulation of a cosmological volume ($50h^{-1}{\rm Mpc}$ cube, modeled with $2\times 288^3$ particles) incorporating gas cooling, star formation, and galactic winds driven by star formation. The simulation reproduces the observed stellar mass function and HI mass function of galaxies quite well up to galaxies with stellar mass $M_* \sim 10^{11} M_\odot$ \citep{oppenheimer10,dave13}, but it fails at higher masses, predicting galaxies that are more massive than observed and have too much late-time star formation. To obtain a better match to the observed galaxy stellar mass function and colour-magnitude diagram, we also apply a post-processing prescription to the simulation that has the effect of quenching star formation in massive galaxies. Throughout the paper we consider both the galaxy population predicted directly by the simulation and the population that results from applying this post-processing prescription; we refer to the former as the ``Winds'' population and the latter as the ``Winds+Q'' population.
We compare parametric SFH models to the true SFHs of galaxies in our SPH simulation to investigate how well various parametric forms describe the shapes of the simulated SFHs and how well they predict the colours and mass-to-light ratios of the corresponding stellar populations. We then examine the effectiveness of these parametric models as practical tools. We compute the colours of our simulated galaxies using their true SFHs, then fit parametric models to the colours and ask how well these fits recover physical parameters of interest such as the stellar mass, population age, and current star formation rate. Our investigation offers insight into the shortcomings of commonly used SFH models in regard to the biases and errors they introduce in estimates of the physical parameters of galaxies. It also has practical import for future studies of galaxy evolution, as we suggest a new parametric model that describes the full variety of SFHs in our simulations, which can be used straightforwardly to interpret observations of galaxy populations. Sources of systematic uncertainty in the SED fitting technique have been investigated by several authors. For example, \cite{conroy09} investigate uncertainties arising from the assumed form of the initial mass function (IMF) and the treatment of stellar evolution, \cite{papovich01} examine errors introduced by uncertainties in the dust extinction, and \cite{lee09} find significant errors and systematic biases when standard methods for inferring ages and stellar masses of Lyman Break Galaxies (LBGs) are applied to mock catalogues constructed from semi-analytic models. Obtaining accurate stellar spectral libraries that cover the full range of stellar populations present in observed galaxies is a particular challenge. (See \citealt{conroy13} for a review of the stellar population synthesis technique, and inference of physical parameters of galaxies from SEDs.) However, even if these sources of systematic errors are controlled or eliminated, additional systematic uncertainties arise from the assumed shape of the SFH, and to get the most from the data one wants a model that has as much flexibility as needed but not more than is needed. It is this aspect of population synthesis modeling that we focus on in this paper. In \S2, we describe our simulation, our method for identifying halos and galaxies, our prescription for quenching star formation in massive galaxies, and our parametric SFH models. In \S3, we fit these parametric models to the SFHs of our simulated galaxies and ask how well they describe the SFHs and the physical parameters that can be derived from them. In \S4, we fit these models to the colours of galaxies and compare the physical parameters obtained from these fits to their ``true'' values in the simulation. We summarise our results and discuss their implications in \S5. The Appendix compares our parametric SFH model to the one proposed recently by \cite{behroozi13}. \section{Methods} \subsection{Simulation} Our simulation is performed using the GADGET-2 code \citep{springel05} as modified by \cite{oppenheimer08}. Gravitational forces are calculated using a combination of the Particle Mesh algorithm \citep{hockney81} for large distances and the hierarchical tree algorithm \citep{barnes86,hernquist87} for short distances. The SPH algorithm is entropy and energy conserving and is based on \cite{springel02}. The details of the treatment of radiative cooling can be found in \cite{katz96} and \cite{oppenheimer06}.
The details of the treatment of star formation can be found in \cite{springel03}. Briefly, each gas particle satisfying a temperature and density criterion is assigned a star formation rate, but the conversion of gaseous material to stellar material proceeds stochastically. The parameters for the star formation model are selected so as to match the $z=0$ relation between star formation rate and gas density \citep{kennicutt98,schmidt59}. We adopt a $\Lambda$CDM cosmology (inflationary cold dark matter with a cosmological constant) with $\Omega_m$=0.25, $\Omega_{\Lambda}$=0.75, $h\equiv H_0/100~{\rm km\,s^{-1}\,Mpc^{-1}}=0.7$, $\Omega_b=0.044$, spectral index $n_s=0.95$, and the amplitude of the mass fluctuations scaled to $\sigma_8=0.8$. These values are reasonably close to recent estimates from the cosmic microwave background \citep{larson10} and large scale structure \citep{reid10}. We do not expect minor changes in the values of the cosmological parameters to affect our conclusions. We follow the evolution of $288^3$ dark-matter particles and $288^3$ gas particles, i.e. just under 50 million particles in total, in a comoving box that is $50h^{-1}{\rm Mpc}$ on each side, from $z=129$ to $z=0$. The dark matter particle mass is 4.3 $\times$ $10^8$ $M_{\odot}$, and the SPH particle mass is 9.1 $\times$ $10^7$ $M_{\odot}$. The gravitational force softening is a comoving 5$h^{-1}$ kpc cubic spline, which is roughly equivalent to a Plummer force softening of 3.5$h^{-1}$ kpc. Higher resolution simulations of smaller volumes (e.g., \citealt{dave13}) would yield more accurate SFH predictions at a given stellar mass, but for the purposes of this paper we considered it more important to have good statistics for a wide range of galaxy masses and environments, so we chose to focus on a larger volume simulated at lower resolution. There are also uncertainties associated with the hydrodynamics algorithm itself (e.g., \citealt{agertz07,sijacki12}), but for our purposes these are less important than the physical uncertainties associated with feedback and quenching mechanisms. We discuss how all of these effects might impact our conclusions in \S 5. Our simulation incorporates kinetic feedback through momentum driven winds as implemented by \cite{oppenheimer06,oppenheimer08}, where the details of the implementation can be found. Briefly, wind velocity is proportional to the velocity dispersion of the galactic halo, and the ratio of the gas ejection rate to the star formation rate is inversely proportional to the velocity dispersion of the galactic halo. Except for the differences in volume and particle mass, our simulation is similar to the ``vzw'' simulations of \cite{oppenheimer10}, who investigate the growth of galaxies by accretion and wind recycling and compare predicted mass functions to observations. The specific simulation analyzed here was also used by \cite{zu10} to investigate intergalactic dust extinction and \cite{simha12} to investigate subhalo abundance matching techniques. The vzw model is quite successful at reproducing observations including quasar metal absorption line statistics at high \citep{oppenheimer06} and low \citep{oppenheimer12} redshift, the HI mass function of galaxies at $z=0$ \citep{dave13}, and the galaxy stellar mass function up to luminosities $L \sim L_*$ \citep{oppenheimer10}. We identify dark matter haloes using a FOF (friends-of-friends) algorithm \citep{davis85}.
The algorithm selects groups of particles in which each particle has at least one neighbour within a linking length, set to the interparticle separation at one-third of the virial overdensity, which is calculated for the value of $\Omega_M$ at each redshift using the fitting formula of \cite{k96}. Many of our plots distinguish between the behavior of central galaxies of halos and satellite galaxies (see \citealt{simha09} for discussion). The most massive object in a FOF halo is referred to as a central galaxy and the others as satellites. Hydrodynamic cosmological simulations that incorporate cooling and star formation produce dense groups of baryons with sizes and masses comparable to the luminous regions of observed galaxies \citep{katz92,evrard94}. We identify galaxies using the Spline Kernel Interpolative DENMAX (SKID\footnote{http://www-hpcc.astro.washington.edu/tools/skid.html}) algorithm \citep{gelb94,katz96}, which identifies gravitationally bound particles associated with a common density maximum. We refer to the groups of stars and cold gas thus identified as galaxies. The simulated galaxy population becomes substantially incomplete below a threshold of $\sim$64 SPH particles \citep{murali02}, which corresponds to a baryonic mass of 5.8 $\times 10^9$ $M_{\odot}$. For this work, we adopt a higher stellar mass threshold of $10^{10}$ $M_{\odot}$ because star formation histories of lower mass galaxies are noisy, even if their final stellar masses are reasonably robust. For each SKID-identified galaxy at $z=0$, we trace the formation time of its stars. We then bin these star formation events in time to extract a star formation rate as a function of time. From this SFR$(t)$, we generate colours using the stellar population synthesis package FSPS \citep{conroy09}. We assume solar metallicity and ignore dust extinction in this paper. While dust extinction and metallicity should be additional free parameters when fitting SFHs to colours of observed galaxies, our goal in this paper is to isolate the impact of SFH shape, so we avoid the step of inserting and then attempting to remove these additional effects. \subsection{Quenching Model} The left panel of Figure \ref{fig:sub1} shows the $(g-r)$ colour of SPH galaxies against their $r$-band magnitudes. Each point is an individual galaxy. Central galaxies are shown as black crosses and satellite galaxies as green open circles. The red galaxies in our simulation are almost all low luminosity satellites. In common with other hydrodynamic simulations that do not include AGN feedback, our simulation fails to produce bright red galaxies. Our simulation also matches the observed galaxy luminosity function up to $L_*$ but overpredicts the number density of galaxies brighter than $L_*$. To obtain bright red galaxies and match the observed luminosity function, we construct a ``quenched winds'' (Winds + Q) population by implementing a post-processing prescription. Various lines of observational evidence suggest that star formation is quenched in high mass halos. The most commonly invoked explanation is AGN feedback. This could be connected to the transition between cold and hot mode gas accretion \citep{keres05,dekel06}, with AGN feedback being more effective in suppressing the accretion of the hot gas that appears in higher mass halos. In any case, models of galaxy formation that match the observed luminosity function or stellar mass function have some quenching mechanism for central galaxies of high mass halos.
In our simulation we have a complete history of star formation events for each galaxy. We modify the SFR of galaxies based on their parent halo mass at the epoch of each star formation event. In halos more massive than a halo mass threshold, $M_{\rm max}$, we eliminate all star formation events. In less massive halos we multiply the mass of stars formed by a factor that scales linearly with halo mass down to a halo mass $M_{\rm min}$, below which we do not alter the star formation rate. The effect of this post-processing can be described as \begin{equation} {\rm SFR}({\rm winds+Q}) = {\rm SFR}({\rm winds}) \times f(M_H) \end{equation} where $M_H$ is the parent halo mass and \begin{equation} f(M_H) = \begin{cases} 1 & \mbox{if } (M_H < M_{\rm min})\\ {{\Delta M}/({M_{\rm max}-M_{\rm min}})} & \mbox{if }(M_{\rm min} < M_H < M_{\rm max})\\ 0 & \mbox{if } (M_H > M_{\rm max})~ \end{cases} \end{equation} and \begin{equation} \Delta M = M_{\rm max}-M_H~. \end{equation} We implement this procedure for both central and satellite galaxies. We set $M_{\rm min}$ equal to 1.5 $\times 10^{12}$ $M_{\odot}$ and $M_{\rm max}$ equal to 3.5 $\times 10^{12}$ $M_{\odot}$. These parameters are chosen to obtain an approximate match to the observed stellar mass function. \cite{oppenheimer10} discuss the comparison between the predicted stellar mass functions and observational estimates in some detail. Roughly speaking, our simulation reproduces observational estimates for $M_* < 10^{11}~M_{\odot}$, but it predicts excessive galaxy masses (at a given space density) for $M_* > 10^{10.8} M_{\odot}$. Figure \ref{fig:sub2} shows the galaxy stellar mass function for the simulation (Winds) and for the post-processing prescription implemented here (Winds + Q). While we do not obtain a perfect match, Winds+Q is a substantial improvement over the original simulation results. The right panel of Figure \ref{fig:sub1} shows the colour-magnitude diagram for the quenched winds population. While our post-processing does produce a ``massive red sequence'' of central galaxies, we do not match the detailed properties of the observed colour-magnitude diagram such as its slope. Also, the brightest galaxies in our model are blue, while the brightest observed galaxies are red. (The most massive galaxies in Winds + Q are on the red sequence, but their higher mass-to-light ratios make them less luminous than the most massive blue galaxies.) \cite{gabor10} perform a detailed investigation of various post-processing prescriptions, including one similar to our own, and compare them to the observed stellar mass function, slope of the colour-magnitude diagram and other observables. We refer the reader to that paper for a more in-depth discussion of physical and observational issues. Clearly our Winds + Q model is not perfect, but it serves our purpose of providing a model population of galaxies that is a reasonable match to observations, including a large population of quenched massive galaxies as in the real universe. \subsection{SFH models} The simplest SFH model we consider is the ``$\tau$-model'' where the SFH is described by an exponentially decreasing function with timescale $\tau$, starting at time $t_i$: \begin{equation} {\rm SFR}(t-t_i) = Ae^{-(t-t_i)/\tau}. \end{equation} Our simulated galaxies show little star formation before 1 Gyr, and we therefore set $t_i=1{\,{\rm Gyr}}$; choosing $t_i=0$ would make the fits to the simulated SFHs systematically worse.
In practice the simulated SFHs rise to a maximum rather than starting at a high value, so we also consider the lin-exp model with \begin{equation} {\rm SFR}(t) = A(t-t_i)e^{-(t-t_i)/\tau} . \label{eq:lin-exp} \end{equation} As with the exponential model, we treat $\tau$ as a free parameter and set $t_i=1{\,{\rm Gyr}}$. In the limit of $\tau \gg t_0$, the lin-exp SFH is simply a linearly rising SFR$(t)$, while in the limit $\tau\rightarrow 0$ it is a burst at $t=t_i$. For greater generality we consider a lin-exp model at early times that transitions to a linear ramp at late times: \begin{equation} {\rm SFR}(t) =\begin{cases} {A(t-t_i)e^{-(t-t_i)/\tau}} & \mbox{for } (t \leq t_{\rm trans})\\ {{\rm SFR}(t_{\rm trans}) + {\Gamma(t-t_{\rm trans})}} & \mbox{for } (t > t_{\rm trans}) .\\ \end{cases} \label{eq:general} \end{equation} The key feature of this model is that it decouples the late-time SFR (after $t_{\rm trans}$) from the early-time SFR, though it requires continuity at $t_{\rm trans}$. The new parameter $\Gamma$ determines the slope of the SFH at $t>t_{\rm trans}$, allowing rising, flat, or falling SFR$(t)$. (In the FSPS code, $\Gamma$ is referred to as ``$\tan\theta$'', where $\theta$ is the angle of the linear ramp in the SFR-$t$ plane.) We set SFR$=0$ at times when eq.~\ref{eq:general} gives a negative result, thus permitting a truly truncated SFH. We can describe the SFHs of our simulated galaxies adequately by setting $t_i=1{\,{\rm Gyr}}$ and $t_{\rm trans}=t_0-3.5{\,{\rm Gyr}} = 10.7{\,{\rm Gyr}}$, so that $\Gamma$ is the only new free parameter. We refer to this as our ``2-parameter model.'' Modest changes in $t_{\rm trans}$ would lead to changes in $\Gamma$ values but would not significantly degrade the fits. However, the timing of the SFR transition and the onset of early star formation could be affected by the specific feedback physics and numerical resolution in our simulations, so real galaxies may have greater variety. We therefore also consider a 3-parameter model in which $t_{\rm trans}$ is a fitting parameter and a 4-parameter model in which both $t_{\rm trans}$ and $t_i$ are fitting parameters. The 4-parameter model is the most general one we consider in this paper and the one we advocate for practical applications. Note that lin-exp is a special case of the 4-parameter model with $t_{\rm trans}=t_0$ and $t_i=1{\,{\rm Gyr}}$. During the course of our investigation, we also explored other possibilities. For example, we considered models like $t^{\alpha}e^{-t/\tau}$, but rejected them because they added more complexity without significantly improving the performance in describing the SFHs of our simulated galaxies. We also considered other simple extensions of the lin-exp model such as adding a constant late-time component instead of a linear ramp. However, this model proved insufficient to describe the SFH of galaxies in our simulation, which sometimes show a truncation or a rising late-time SFR. Because our simulations rarely show discrete ``bursts'' of star formation, we did not investigate parametrisations like the $\tau$+burst models of \cite{kauffmann03}. \section{Fitting Models to Simulated SFHs} In this Section, we fit the five parametric models described in \S2.3, namely, the $\tau$-model, lin-exp model, and the 2, 3, and 4-parameter models to the SFHs of galaxies in the SPH simulation (the ``Winds'' population) and the post-processed simulation (the ``Winds+Q'' population). 
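For reference, the three basic parametric forms can be written compactly as follows. This is a sketch in our own notation (time in Gyr, arbitrary normalization $A$, defaults encoding the choices $t_i=1{\,{\rm Gyr}}$ and $t_{\rm trans}=10.7{\,{\rm Gyr}}$); only the shape matters for colours and mass-to-light ratios.
\begin{verbatim}
import numpy as np

def sfr_tau(t, tau, A=1.0, t_i=1.0):
    """tau-model, Eq. (4)."""
    return np.where(t >= t_i, A * np.exp(-(t - t_i) / tau), 0.0)

def sfr_linexp(t, tau, A=1.0, t_i=1.0):
    """lin-exp model, Eq. (5)."""
    return np.where(t >= t_i,
                    A * (t - t_i) * np.exp(-(t - t_i) / tau), 0.0)

def sfr_4param(t, tau, Gamma, t_i=1.0, t_trans=10.7, A=1.0):
    """4-parameter model, Eq. (6): lin-exp up to t_trans, then a linear
    ramp of slope Gamma; negative values are clipped to zero so that
    truncated SFHs are allowed."""
    early = sfr_linexp(t, tau, A, t_i)
    ramp = sfr_linexp(t_trans, tau, A, t_i) + Gamma * (t - t_trans)
    return np.where(t <= t_trans, early, np.clip(ramp, 0.0, None))
\end{verbatim}
Fixing $t_i$ recovers the 3-parameter model, and fixing $t_{\rm trans}$ as well recovers the 2-parameter model.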
We choose model parameters to minimize the cost function \begin{equation} C = \int_0^{t_{\rm max}} \left|{\rm SFR}(t)-{\rm SFR}_{\rm model}(t)\right|~dt. \label{eqm} \end{equation} We also impose an integral constraint: \begin{equation} \int_0^{t_{\rm max}} {{\rm SFR_{model}}}(t)~dt = \int_0^{t_{\rm max}} {{\rm SFR}}(t)~dt~. \end{equation} The mass-to-light ratio, age quantiles of the stellar population, and predicted colours are independent of the normalization of the SFH and are fully determined by the shape alone. Figure \ref{fig:sub3.1} shows the SFH of a representative selection of SPH galaxies. SFR normalized by the stellar mass at $z=0$ is shown on the vertical axis against time on the horizontal axis. The thick gray solid curve shows the SFH in the simulation, and the other curves show the best-fit models of the different SFH parametrisations. The top row shows blue galaxies, and each successive lower row shows a redder colour. The first three columns from the left show central galaxies in three mass bins, with mass increasing from left to right. The right column shows satellite galaxies, where a satellite galaxy is defined as a SKID group that is not the most massive galaxy in its FOF halo. An examination of Figure \ref{fig:sub3.1} reveals several interesting trends. The SFHs of low mass galaxies show bumps and wiggles, but for the most part the SFHs of individual galaxies are smooth, not punctuated by starbursts and gaps. Other implementations of star formation and feedback physics might lead to burstier behaviour, but if star formation and its associated outflows largely keep pace with accretion as they do in this simulation, then a smooth SFH is the generic outcome. At high $z$, most simulated galaxies have a gradually increasing SFH, in contrast to the steep increase followed by exponential decline that is mandated by the $\tau$-model. The shape of the peak in the SFH is generally matched by the lin-exp model when we allow a start time of 1 Gyr for star formation to commence. While the slope of SFR$(t)$ at high redshift varies strongly from galaxy to galaxy, there is little variation around $t_i \approx 1{\,{\rm Gyr}}$; in particular, we find no examples of galaxies that wait several Gyr before starting to form stars. At low $z$, the blue galaxies in the top two rows have a rising SFR, while the red galaxies in the lower rows have a falling SFR. The lin-exp model often describes these histories fairly well, but in some cases it cannot, such as the top two panels in the left column and the bottom two panels in the right column. As shown by \cite{simha09}, many satellite galaxies in these simulations continue to accrete gas and form stars, in agreement with inferences from observations \citep{weinmann06,weinmann10,wetzel13}. The top two panels in the right column of Figure \ref{fig:sub3.1} show two such examples. The third row of the rightmost column shows a satellite galaxy that is not forming stars at $z=0$. The bottom row of the same column shows a more extreme example, where the SFR is truncated at $t$ $\sim$ 8 Gyr, after the galaxy falls into a massive halo. The lin-exp model fails to match these truncated SFHs, predicting a SFR that is too high at $z=0$ and consequently a colour that is too blue compared to the simulation. For low mass galaxies in the Winds+Q model, we find similar trends to those in Figure~\ref{fig:sub3.1}, but for higher mass galaxies the SFR at late times is systematically lower, and for the most massive galaxies it is truncated before $z=0$.
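Returning to the fitting procedure, note that the integral constraint fixes the amplitude $A$ for any trial shape, so the fit defined by the cost function (\ref{eqm}) reduces to a scan over the shape parameters. A minimal sketch for the lin-exp case, using {\tt sfr\_linexp} from the sketch above (again our own illustration, not the code used for the figures):
\begin{verbatim}
import numpy as np

def fit_linexp(t, sfr, taus=np.linspace(0.1, 20.0, 200)):
    """Fit Eq. (5) to a tabulated SFH by scanning tau.  The amplitude A
    is set by the integral constraint (equal total mass of stars formed),
    and the cost is the L1 norm of Eq. (7)."""
    target = np.trapz(sfr, t)
    best = None
    for tau in taus:
        shape = sfr_linexp(t, tau)             # A = 1 template
        A = target / np.trapz(shape, t)        # integral constraint
        cost = np.trapz(np.abs(sfr - A * shape), t)
        if best is None or cost < best[0]:
            best = (cost, tau, A)
    return best                                # (cost, tau, A)
\end{verbatim}
The multi-parameter models are fit the same way, with the scan extended over $(\tau,\Gamma)$ and, where free, over $(t_{\rm trans},t_i)$.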
Figure \ref{fig:sub3.2} shows the average SFH of galaxies in bins of mass and colour chosen to contain approximately equal numbers of galaxies. As in Figure~\ref{fig:sub3.1}, the first three columns show central galaxies ordered by increasing mass, the fourth column shows satellite galaxies, and the rows are ordered from the bluest quartile to the reddest quartile in each bin. We also show the average stellar mass and the $g-r$ colour obtained by treating the average SFH as the SFH of an individual galaxy and using it as input for the stellar population synthesis code. These curves are smoother than those in Figure~\ref{fig:sub3.1} because they average over variations in individual SFHs, but they reveal the same trends. The $\tau$-model shows the same systematic failures seen in Figure~\ref{fig:sub3.1}. The lin-exp model gives a good description of the average SFH in most bins, but it underpredicts the $z=0$ SFR in the bluest galaxies and overpredicts the $z=0$ SFR in red satellites. These discrepancies lead to systematic deviations in the predicted colours as shown below. Figure \ref{fig:sub3.3} is similar to Figure \ref{fig:sub3.2} but for the Winds + Q model. Results for the lowest mass central galaxies are similar, of course, but for more massive galaxies the SFHs are often truncated at late times and correspondingly more sharply peaked at early times. The lin-exp model is remarkably successful at describing the SFH shape in most of these bins, capturing the correlation between rapid early growth and suppressed late-time star formation. However, it fails to predict the correct $z=0$ SFR in some cases. We have shown results from the 4-parameter model in Figures~\ref{fig:sub3.1}-\ref{fig:sub3.3}, but the results are only slightly degraded if we fix $t_i=1{\,{\rm Gyr}}$ (3-parameter model) and $t_{\rm trans}=10.7{\,{\rm Gyr}}$ (2-parameter model). The left panel of Figure \ref{fig:sub3.4} compares the $g-r$ colour predicted by the best-fit lin-exp model to the SPH $g-r$ colour. While computing the colours, we ignore dust extinction and use the same SSPs to compute colours from the SFHs for the SPH galaxy and the model fits, and therefore ignore the possibility of template mis-match. Each point is an individual galaxy. Because lin-exp is unable to match the late-time increase in SFR for the bluest galaxies, it predicts colours that are systematically too red when $(g-r)_{\rm SPH} \leq 0.3$. Conversely, for galaxies that are very red, particularly satellite galaxies, it fails to match the truncation in the SFH, instead predicting ongoing star formation and hence colours that are too blue. This error is particularly noticeable for galaxies with $(g-r)_{\rm SPH} \geq 0.6$. For comparison, the colour from the best-fit 4-parameter model is shown in the right panel. The late-time linear component with variable slope helps overcome both these shortcomings of the lin-exp model, yielding accurate colour predictions for the bluest and reddest galaxies. For $(g-r)_{\rm SPH} = 0.45-0.6$ the model colours are still systematically too red. (Recall that all colours in the paper are computed for zero dust reddening.) Figure \ref{fig:sub3.7} shows the distribution of the differences between the colour of the best-fit parametric model and the SPH galaxy whose SFH is fit. We show the 2-parameter and 3-parameter models in addition to the three models shown in Figures~\ref{fig:sub3.1}-\ref{fig:sub3.3}.
The $\tau$-model requires too much early star formation relative to late star formation and, therefore, predicts colours that are systematically too red, by $\sim 0.15$ magnitudes in $u-g$, $\sim 0.12$ magnitudes in $g-r$, and $\sim 0.05$ magnitudes in $r-i$. The lin-exp model is mildly biased towards redder colours, but a considerable improvement on the $\tau$-model. Results for the 2, 3, and 4-parameter models are nearly identical and sharply peaked around the colour predicted using the galaxy's true SFH. For the Winds+Q population (right hand panels) the $\tau$ and lin-exp models have a low amplitude tail of galaxies whose model colours are much too blue; these are the galaxies with truncated SFHs. This tail is strongly suppressed in the multi-parameter models. As expected, colours at redder wavelengths are predicted more accurately in every case because they are less sensitive to late-time star formation. One of the most important applications of SED fitting is to infer the mass-to-light ratios of stellar populations, so that observed luminosities can be converted to stellar masses. Figure \ref{fig:sub3.9} compares the $r$-band stellar mass-to-light ratio $Y_r \equiv M_*/L_r$ of SPH galaxies to that obtained from various parametric fits to the SFH. Specifically, we use FSPS to compute $r$-band luminosities from either the simulated SFH or the SFH of the best-fit parametric model (which is always constrained to reproduce the simulated galaxy's $M_*$). Because the best-fit $\tau$-model consistently has too much early-time star formation and too little late-time star formation (Figs.~\ref{fig:sub3.1} and~\ref{fig:sub3.2}), the $\tau$-model fits systematically overestimate $Y_r$, with a typical offset of 0.2 dex ($Y_{\rm model}/Y_{\rm SPH} \sim 1.6$). The lin-exp model fares much better, producing a reasonable match to the mass-to-light ratio of most galaxies but overestimating $Y_r$ for blue galaxies that have an increasing SFR at late times. The 2, 3 and 4-parameter models yield mass-to-light ratios sharply peaked around the true values, fitting 68\% of galaxies to within 6\%. The right panel shows results for the Winds+Q galaxy population. In addition to the previous trends, the $\tau$ and lin-exp models now have a tail of galaxies for which $Y_r$ is underestimated by up to 0.2 dex. These are the red galaxies with sharply truncated SFH, which are poorly represented in these models. The 2, 3, and 4-parameter models, on the other hand, can all represent these truncated SFHs, so they do not produce a tail of underestimated mass-to-light ratios. We examine the SFHs of galaxies at $z$ $\ge$ 0 to investigate whether the parametric models that give good descriptions of $z=0$ SFHs also do so at $z=0.5$ and $z=1$. We fit the five parametric models described in \S2.3 to the SFHs of galaxies in the SPH simulation at $z=1$ and $z=0.5$. In addition to the lin-exp model described in eq.~\ref{eq:lin-exp} and the general 4-parameter model described in eq.~\ref{eq:general}, we also fit a 3-parameter model where we fix $t_i$ = 1 Gyr and a 2-parameter model where, in addition, we scale ${t_{\rm trans}}$ with the age of the Universe. Specifically, we set \begin{equation} {t_{\rm trans}} (z=z') = {t_{\rm trans}}(z=0)~ \times ~t_{z'}/t_0 \label{eq:tscale} \end{equation} where $t_0$ is the age of the Universe at $z=0$, ${t_{\rm trans}}(z=0)=10.7$ Gyr is the value previously adopted in our 2-parameter model, and $t_{z'}$ is the age of the Universe at redshift $z=z'$. 
The 2-parameter model SFH follows a lin-exp model for the first 75\% of its lifetime, and a linear ramp thereafter. We restrict our analysis to the Winds + Q model. Figure \ref{fig:sub3.10} shows the distribution of differences between the colour of the best-fit parametric model and the SPH galaxy whose SFH is fit at $z=1$ (left) and $z=0.5$ (right) for the Winds + Q model. The lin-exp model is generally biased towards redder colours with the exception of a small number of galaxies with truncated SFHs whose model colours are too blue. The 4-parameter model is sharply peaked around the colour predicted from the galaxy's true SFH. Fixing $t_i$ = 1 Gyr (3-parameter model) produces a nearly identical fit, as it is close to the value of $t_i$ obtained from the best-fit 4-parameter model for most galaxies, and in any case small changes in the SFR at early times do not have an appreciable effect on the colour. Further restricting to the 2-parameter model only marginally degrades the fits to the SFH. While the value of ${t_{\rm trans}}$ fixed according to eq.~\ref{eq:tscale} differs slightly from the mean ${t_{\rm trans}}$ of the best-fit 4-parameter model to all galaxies, correlations between ${t_{\rm trans}}$ and $\Gamma$ ensure that the best-fit 4-parameter SFH and the best-fit 2-parameter SFH are similar even when they have a different ${t_{\rm trans}}$. All models predict the redder colours more accurately because they are less sensitive to late-time star formation. As expected, the $\tau$-model (not shown) performs worse at high redshift than at $z=0$, predicting colours that are too red because it requires too much early star formation relative to late star formation. Figure \ref{fig:sub3.11} compares the $r$-band mass-to-light ratio, $Y_r \equiv M_*/L_r$ of SPH galaxies to that obtained from various parametric fits to the SFH. The parametric fits are constrained to reproduce the $M_*$ of the SPH galaxy. At both $z=0.5$ and $z=1$, the lin-exp model overestimates $Y_r$ for blue galaxies that have an increasing SFR at late times. Additionally, for red galaxies with a truncated SFH, the lin-exp model underestimates $Y_r$ by $\sim$ 0.15 dex. The 2, 3, and 4-parameter models can match both truncated galaxy SFHs and the SFHs of galaxies with a rising SFR at late times, yielding $Y_r$ values sharply peaked around the true $Y_r$ value. They fit 68\% of galaxies to within 8\% at $z=1$, and within 13\% at $z=0.5$. \section{Fitting Parametric Models to Galaxy Colours} In dealing with observed galaxies, of course, we do not have a priori knowledge of the SFH. Instead, the SFH and other physical parameters of interest must be inferred from observables like the colours and luminosity. In this Section, we test the efficacy of parametric SFH models as practical tools by fitting them to the colours of galaxies in our simulation and comparing these fits to the true SFHs. We ignore the effects of dust extinction while computing the colours of both SPH galaxies and parametric SFH models. While uncertainties in redshift, dust extinction, and metallicity can introduce additional errors and potential biases, we focus in this work on the effect of the assumed parametrisation of the SFH. We use the FSPS stellar population synthesis code to compute the luminosity of our simulated galaxies in five SDSS optical bands ($u$,$g$,$r$,$i$ and $z$), which gives us four colours.
We fit our previously described SFH models to these colours assuming a Gaussian error on each colour of 0.02 magnitudes, typical of errors for galaxies in the SDSS spectroscopic sample. We use the same SSPs to compute colours from the SFHs of the SPH galaxies and the model fits, and therefore ignore the possibility of template mis-match. Our fitting procedure is based on $\chi^2$ minimization. We have implemented our 2, 3, and 4-parameter models as SFH options to FSPS and computed colours on a grid of parameter values. We interpolate within this pre-computed grid in our $\chi^2$ minimization procedure. Figure \ref{fig:sub4.1} shows the SFH of the same representative sample of SPH galaxies illustrated in Figure~\ref{fig:sub3.1}, and gray bands are repeated from that figure. Now, however, the model curves are found not by minimizing the quantity in equation (\ref{eqm}) but by fitting the four colours. As in Figure \ref{fig:sub3.1}, the solid black curve shows the best-fit 4-parameter model, the dashed red curve shows the best-fit $\tau$-model, and the blue dot-dashed curve shows the best-fit lin-exp model. The top row shows blue galaxies, and each successive lower row shows a redder colour. The first three columns from the left show central galaxies in three mass bins, with mass increasing from left to right, and the extreme right column shows satellite galaxies. For most of our galaxies, the models are formally good fits to the colours given our adopted 0.02 mag errors, though the $\tau$-model sometimes fails to give a statistically acceptable fit for the bluest galaxies. In nearly all cases, the model fits reproduce the late-time SFH better than the early SFH. This is unsurprising, as the galaxy colour is sensitive to late-time star formation but is minimally affected by moderate shifts in the ages of the oldest stellar populations. The best-fit $\tau$-model is never a good description of the true galaxy SFH, showing the same generic discrepancies seen in Figures~\ref{fig:sub3.1}-\ref{fig:sub3.3}. The failures are worst for the bluest galaxies, where the $\tau$-model cannot match the rising SFR at late times, and for the reddest galaxies, where the $\tau$-model can produce a truncated SFR at late times only with a strong burst of very early star formation. The lin-exp model provides a good description of the SFH in many cases, for a variety of colours and SFH shapes. The two examples where it fares badly are the satellites with truncated SFHs (right column, two lower panels). Like the $\tau$-model, the single-parameter lin-exp model can only produce very red colours by forcing rapid early star formation, while the actual SFH of these galaxies is more extended before shutting down at late times. The 4-parameter model performs much better than lin-exp for these galaxies, and it reproduces the rising late-time SFR of the bluest galaxies. For the Winds+Q population we see broadly similar trends, but now the massive central galaxies also have a truncated SFH like the two red satellites in Figure~\ref{fig:sub4.1}. The fraction of galaxies for which the 4-parameter model outperforms lin-exp is, therefore, larger. In practical applications, fits to observed SEDs are frequently used to infer not the full SFH but high-level physical parameters such as stellar mass-to-light ratios (and corresponding stellar masses), population ages, and current star formation rates.
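Schematically, the grid-based $\chi^2$ step described above reduces to the following sketch, in which a toy two-parameter grid with random placeholder colours stands in for the real FSPS-computed grid.
\begin{verbatim}
# Schematic chi^2 grid search over model colours (toy stand-in grid).
import numpy as np

rng = np.random.default_rng(0)
tau_grid = np.linspace(0.5, 20.0, 40)              # Gyr
gamma_grid = np.linspace(-2.0, 2.0, 41)            # late-time ramp slope
model_colours = rng.normal(0.8, 0.3, (40, 41, 4))  # placeholder grid

def best_fit(obs_colours, sigma=0.02):
    """Minimize chi^2 over the (tau, Gamma) grid for four colours."""
    chi2 = np.sum(((model_colours - obs_colours) / sigma) ** 2, axis=-1)
    i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
    return tau_grid[i], gamma_grid[j], chi2[i, j]
\end{verbatim}
In practice we refine the raw grid minimum by interpolating within the grid, as noted above.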
Uncertainties in these inferred physical parameters (and, if desired, the covariance of their errors) can be derived by marginalising over the parameters of the fitted model. Figure~\ref{fig:subpdf} illustrates this approach for four of the simulated galaxies from Figure~\ref{fig:sub4.1}, showing posterior probability distribution functions (pdfs) of the mass-to-light ratio (left) and median population age (right). For the $\tau$-model and lin-exp model, we adopt a flat prior on $\tau$ over the range $0-20{\,{\rm Gyr}}$. For the 4-parameter model we adopt the same prior on $\tau$, a flat prior on $t_i$ over the range $0-1{\,{\rm Gyr}}$, a flat prior on ${t_{\rm trans}}$ over the range $6-14.16{\,{\rm Gyr}}$ (the upper limit being $t_0$, the age of the Universe in our simulation), and a flat prior on $\theta = \tan^{-1}\Gamma$ such that the angle of the linear ramp can range uniformly from $\theta = -\pi/2$ (instantaneous truncation) to $\theta = \pi/3$ (steeply rising). The top two rows show the mass-to-light ratio and $t_{50}$ of the galaxies shown in rows 3 and 4, respectively, of column 3 of Figure \ref{fig:sub4.1}. For both these galaxies, all three models fit to the galaxy colours reproduce the SFH reasonably well. The most probable $Y$ and $t_{50}$ for the $\tau$-model and the lin-exp model are reasonably close to the true SPH values, although the true value is sometimes outside the formal 95\% confidence interval. Because of its greater flexibility, the 4-parameter model allows a larger range of $Y$ and $t_{50}$, but the peaks of the posterior pdfs are very close to the true values, and the true values are always within the 68\% confidence interval. The bottom two rows show galaxies whose SFH is truncated (rows 3 and 4 of column 4 in Figure \ref{fig:sub4.1}). As discussed earlier, both the $\tau$-model and the lin-exp model fail to match the truncated SFH, instead putting too much star formation at early times to match the SPH colours. Because they overpredict the age of the stellar population, they overpredict the mass-to-light ratio. In contrast, the 4-parameter model, which matches the truncation in the SFH, predicts a posterior probability distribution for the mass-to-light ratio whose peak is remarkably close to the true value for both galaxies. The true value of $t_{50}$ is within the 95\% confidence interval predicted by the 4-parameter model in one case (third row), and is very close to the peak of the posterior probability distribution in the other (bottom row). When investigating statistics for a large population of galaxies (e.g., the galaxy stellar mass function), it is common practice to take the best-fit model parameters for each individual galaxy, though a more sophisticated analysis could consider the full posterior pdf on a galaxy-by-galaxy basis. In what follows we will take the ``best-fit'' value of a parameter to mean the mode of that parameter's posterior pdf. Each panel of Figure~\ref{fig:sub4.2} plots the best-fit stellar mass-to-light ratio from a lin-exp (left) or 4-parameter (right) model fit to the colours of individual SPH galaxies against the galaxies' true mass-to-light ratios. The mass-to-light ratios in the SPH Winds population (top row) range from $\sim 0.4 Y_\odot$ to $\sim 3Y_\odot$, where $Y_\odot = 1 M_\odot/L_\odot$.
For the bluer galaxies, with $Y < 2Y_\odot$, the lin-exp model predicts $Y$ quite accurately, though SPH galaxies with steeply rising late-time SFR have mass-to-light ratios lower than the minimum value $Y \approx 0.8 Y_\odot$ that the lin-exp model can produce (with $\tau$ forced to its limiting value of 20 Gyr). The behaviour for the red galaxies with a truncated SFR is more problematic. Because lin-exp does not allow sharp truncation, it attempts to produce red colours by forcing star formation very early (see the lower right panels of Fig.~\ref{fig:sub4.1}). As a result, the lin-exp fits overpredict $Y$ for these galaxies. The 4-parameter model yields a good correlation between the best-fit $Y$ and the true value across the full range, with only a small number of outliers. The performance for the Winds+Q population (lower panels) is similar in both cases, but now the fraction of ``red and dead'' galaxies at large $Y$ is higher because it includes massive centrals. Figure \ref{fig:sub4.4} shows the distribution of the differences between the mass-to-light ratios obtained from fitting different parametric models to the optical colours of galaxies and their mass-to-light ratios in the simulation. The $\tau$-model typically overestimates the mass-to-light ratio by a factor of $\sim 1.5$, but the error can be as high as a factor of $\sim 3$. The lin-exp model generally does better, but it makes significant errors in either direction. Most notably, as already seen in Figure~\ref{fig:sub4.2}, it overpredicts $Y$ for the reddest galaxies, producing the tail at high $Y_{\rm model}/Y_{\rm SPH}$ in Figure~\ref{fig:sub4.4}. The 4-parameter model estimate of the mass-to-light ratio is within 10\% of the true value for 68\% of galaxies in the Winds and Winds+Q populations, though it shows a weaker version of the same asymmetry seen for lin-exp. Figure~\ref{fig:sub4.5} presents a similar analysis for stellar population ages, showing the distribution of differences between model fit values and SPH galaxy values for the times when 10\% ($t_{10}$, top row), 50\% ($t_{50}$, middle row), and 90\% ($t_{90}$, bottom row) of the stars have formed. The 4-parameter model correctly predicts $t_{10}$ and $t_{50}$ to within 1 Gyr and $t_{90}$ to within 0.3 Gyr for 68\% of galaxies in both the Winds and Winds+Q populations, and it shows no significant bias, though the tails of the difference distribution are slightly asymmetric. The distribution for lin-exp is qualitatively similar, but it is biased towards high $t_{10}$ and $t_{50}$ by $0.5-1{\,{\rm Gyr}}$, and the distribution for $t_{90}$ is less sharply peaked. The $\tau$-model fits are systematically biased towards older population ages (smaller $t_{10}$, $t_{50}$, and $t_{90}$), by $1-2{\,{\rm Gyr}}$ for $t_{10}$ and $t_{50}$ and by $1{\,{\rm Gyr}}$ for $t_{90}$. Figure \ref{fig:sub4.6} shows similar results for specific star formation rates (sSFR$\,\equiv \dot{M}_*/M_*$) at $z=0$. For context, inset panels show the histograms of sSFR in the two galaxy populations. Since sSFR is strongly correlated with colour at $z=0$, models fit to the colours generally reproduce the sSFR quite accurately. However, the $\tau$-model is unable to match the colours of galaxies with rising late-time SFR, and it consequently underestimates their sSFR. All models successfully reproduce low sSFRs for the reddest galaxies, so the peak at near-perfect agreement is higher in the Winds+Q population, where the proportion of such galaxies is larger.
{\it Fractional} errors in the sSFR can be large when the value is extremely small, but for most purposes it is the absolute error that is more relevant. We have carried out the same analyses shown in Figures~\ref{fig:sub4.4}-\ref{fig:sub4.6} at $z=0.5$ and $z=1$, for the same rest-frame colours. While we do not show the plots here, the trends are similar, with the 4-parameter model producing moderate improvements over lin-exp and substantial improvements over the $\tau$-model in recovering stellar mass-to-light ratios, population ages, and sSFRs. We have also checked that using the 2-parameter model with our recommended choices of $t_i$ and ${t_{\rm trans}}(z)$ yields similar results to those of the 4-parameter model. The $\tau$-model, which enforces declining SFHs, fails to match the mass-to-light ratios and stellar population ages of blue galaxies with ongoing star formation, often overpredicting the mass-to-light ratios by a factor of $\sim 2$. The lin-exp model performs better, providing more precise estimates, but it is often biased. In contrast, the 4-parameter model matches the physical parameters of SPH galaxies quite well. In the 4-parameter model, the individual parameters have partially degenerate effects on the SFH, but this degeneracy does not degrade the determinations of these physical quantities, since a similar SFH implies similar quantities and a similar fit to the data regardless of what parameter combinations produce it. With limited data (e.g., two or three colours, or large colour errors), individual model parameters may be poorly determined, but physical quantities may still be well constrained after marginalization. So far, we have only considered optical colours. We have also examined the effects of adding IR and UV colours to the optical data. Specifically, we compute SPH galaxy fluxes in the 2MASS J, H, and K bands and the GALEX NUV and FUV bands, and fit models to the combined data sets, again assuming an error of 0.02 magnitudes on each colour. Figure \ref{fig:sub5.1} shows the distribution of errors in $Y$ and sSFR from fits of the 4-parameter model to optical colours alone, optical+IR colours, and optical+IR+UV colours. Somewhat surprisingly, adding IR and UV colours does not noticeably reduce the scatter in recovering $Y$ or sSFR; at least with regard to our 4-parameter model, the optical colours already contain all of the relevant information. We find similar results for population ages. Note, however, that we have not included dust extinction or metallicity in our models, and we have assumed that galaxy redshifts are known so that the rest-frame colours are available. Since optical colours already suffice to constrain the SFH in our framework, the additional information in IR and UV data can be applied to constrain extinction, metallicity, and (if necessary) redshift. Broad wavelength coverage is especially important in photometric redshift studies, as UV and IR data help to unambiguously identify breaks in the SED. \section{Discussion} The star formation histories (SFHs) of galaxies in our SPH simulations are generally smooth, governed by the interplay between cosmological accretion and star-formation-driven outflows. As a result, they can be well described by models with a small number of free parameters, and this remains true after we implement a post-processing quenching scheme designed to reproduce the observed red colours and stellar mass distributions of the central galaxies in massive halos.
While the simulations are far from perfect, they include many realistic aspects of cosmological growth, gas dynamics and cooling physics, and feedback. They can, therefore, provide guidance to the classes of models that are most useful for fitting observed galaxy populations. One of the models most commonly used for this purpose, the exponentially decaying ``$\tau$-model'', gives a quite poor representation of our simulated galaxies because of its implicit assumption that star formation is most rapid at the earliest epochs and declines thereafter. Adding an initial burst to a $\tau$-model would only exacerbate this problem, and allowing a start at $t_i > 0$ helps but only moderately. Fitting the colours of our simulated galaxies with a $\tau$-model (with start time $t_i = 1\,$Gyr) leads to inferred stellar mass-to-light ratios that are systematically too high, by a typical factor $\sim 1.5$, and to inferred stellar population ages that are too large, typically by $1-2\,$Gyr. Inferred specific star formation rates (sSFRs) can be either too high or too low, with substantial scatter about the true values. The {lin-exp}\ model, with $\dot{M}_* \propto (t-t_i)\exp[-(t-t_i)/\tau]$, gives a much better description of the time profiles of star formation in our simulation. We find little star formation in our simulations before $t=1\,$Gyr, reflecting the time required to build up massive systems that can support vigorous star formation, so the model is improved by setting $t_i=1\,$Gyr instead of $t_i=0$. Fitting the {lin-exp}\ model to the optical colours of SPH galaxies largely removes the biases in $M/L$ ratios and population ages that arise with $\tau$-model fitting, and it reduces the scatter between the inferred and true values for these quantities and for sSFRs. If one is going to fit a galaxy SFH with a one-parameter model, the {lin-exp}\ model with $t_i=1\,$Gyr is the one to choose. The shortcoming of the {lin-exp}\ model is that it ties late-time star formation to early star formation: a rapid early build-up (short $\tau$) necessarily implies a low sSFR at low redshift. Early and late star formation are correlated in SPH galaxies, but they are not so perfectly correlated that galaxies lie on a one-parameter family of SFHs. Our 2-parameter model avoids this problem by changing from {lin-exp}\ to a linear ramp after ${t_{\rm trans}} = 10.7\,{\rm Gyr}$, decoupling early and late evolution. For many galaxies, this additional freedom makes little difference, but the 2-parameter model offers a significantly better description of the bluest galaxies, which have rising star formation rates at late times, and of the reddest galaxies, which have truncated star formation. Fitting galaxy colours with the 2-parameter model removes the small systematic biases in $M/L$, population ages, and sSFRs that remain with {lin-exp}\ fitting, and it reduces the scatter between the true and fitted values. Our 3-parameter model turns ${t_{\rm trans}}$ into a fit parameter instead of fixing it at 10.7$\,{\rm Gyr}$ (or more generally 75\% of the current cosmic time), and our 4-parameter model additionally turns the start time $t_i$ into a fit parameter. This additional freedom only marginally improves the description of SPH galaxy SFHs or the accuracy of inferred parameter values. However, it avoids hard-wiring these ages into the model, and it provides some safeguard against the possibility that they are too strongly tied to the specifics of our simulation.
For example, the preference for $t_i \approx 1\,{\rm Gyr}$ could be affected by our mass resolution. In spot checks on a simulation with the same volume but $8\times$ higher mass resolution, which became available after we had completed most of our analysis, we find that all of our results for galaxy SFHs continue to hold, but the best-fit value for $t_i$ shifts slightly, from 1 Gyr to 0.83 Gyr. Our recommended strategy, therefore, is to adopt the 4-parameter model for fitting galaxy colours or SEDs and marginalize over model parameters when computing physical quantities of interest such as $M/L$ ratios, population ages, and specific star formation rates. To enable this approach, we have added the 4-parameter model as an option to the FSPS population synthesis code \citep{conroy09}.\footnote{Publicly available at {\tt http://code.google.com/p/fsps/}.} In addition to colours, the FSPS code can compute full galaxy spectra for fitting to spectroscopic data. In place of marginalization, a less laborious but less robust strategy is to estimate physical quantities from the best-fit 2-parameter model, with $t_i$ and ${t_{\rm trans}}$ fixed to our recommended fiducial values. For our SPH galaxies, this procedure actually yields a {\it better} match between inferred and true quantities because the adopted priors on $t_i$ and ${t_{\rm trans}}$ are a good match to the simulations. However, these priors may be overly strong for fitting real galaxies, and the marginalization approach with the 4-parameter model is more conservative. When fitting the $ugriz$ colours of our SPH galaxies at $z=0$, assuming 0.02 mag colour errors, we are able to determine $r$-band mass-to-light ratios with typical errors of $\pm 13\%$ (the range encompassing 68\% of simulated galaxies). The corresponding error for the median population age is 0.9 Gyr, and $t_{90}$, the time by which 90\% of stars form, has a smaller error of 0.3 Gyr. Adding near-UV or near-IR colours produces little further improvement because these quantities are already well determined given the assumed $ugriz$ errors, and additional wavelengths do little to break the degeneracies in the 4-parameter model. When fitting real galaxies, one would also need to include dust extinction as an additional parameter, with an assumed extinction law (or marginalize over a range of extinction laws). Including dust extinction will only moderately increase $M/L$ uncertainties, because reddening induced by dust or by increased stellar population age has a similar impact on $M/L$ \citep{bellanddejong}. Conversely, dust extinction increases the sSFR uncertainty, because increased dust and increased late-time star formation have opposite effects on galaxy colours. In the absence of spectroscopic redshifts, one also needs to fit for the galaxy photo-$z$ along with the stellar population quantities. In this situation, UV or near-IR data may play a more critical role, breaking degeneracies among SFH, extinction, and redshift. However, we caution that \cite{taylor11} find that SPS models do not provide good fits to the full optical-to-NIR SEDs of the galaxies they observe, possibly indicating inconsistencies between the SED shapes of real galaxies and those of the models. We have extended our analysis to higher redshifts, finding that at $z=0.5$ and $z=1$ the {lin-exp}\ model suffers from shortcomings similar to those at $z=0$, which are overcome by models that decouple the early and late SFRs.
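A minimal sketch of such a decoupled SFH, the lin-exp-plus-ramp family of eq.~\ref{eq:general}, is given below; the continuous join at ${t_{\rm trans}}$ reflects one natural reading of the model, and all parameter values are illustrative.
\begin{verbatim}
# Sketch of the decoupled SFH: lin-exp up to t_trans, then a linear
# ramp of slope Gamma joined continuously (arbitrary normalization).
import numpy as np

def sfr(t, t_i=1.0, tau=3.0, t_trans=10.7, gamma=0.1):
    t = np.asarray(t, dtype=float)
    linexp = np.clip(t - t_i, 0.0, None) * np.exp(-(t - t_i) / tau)
    s_tr = (t_trans - t_i) * np.exp(-(t_trans - t_i) / tau)
    ramp = np.clip(s_tr + gamma * (t - t_trans), 0.0, None)  # floor at 0
    return np.where(t < t_trans, linexp, ramp)
\end{verbatim}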
Our 2-parameter model, with $t_i = 1$ Gyr and ${t_{\rm trans}}$ scaled with the age of the Universe, provides a significantly better description of the SFH of SPH galaxies. As at $z=0$, allowing $t_i$ and ${t_{\rm trans}}$ to be free parameters only marginally improves the description of SPH galaxy SFHs. At all redshifts, one should bear in mind that our simulation models galaxies with $M_* > 10^{10} M_\odot$, and that the SFHs that characterize much lower mass galaxies could be different both in overall form and in the level of stochasticity. Sources of uncertainty in the SED fitting technique at even higher redshifts have been investigated by other authors. \cite{lee10} apply standard SED fitting techniques to infer the physical parameters of Lyman Break Galaxies at $z \sim 3.4 - 5$ in a mock catalogue constructed from semi-analytic models of galaxy formation, finding that SFRs are systematically underestimated and mean stellar population ages overestimated because of differences between the galaxy SFHs predicted by their semi-analytic models and the $\tau$-model SFH assumed in their SED fitting technique. Because of the mass resolution of our simulation, the SFHs of $z \ge 2$ galaxies are too noisy to allow us to carry out a direct comparison, but our results at $z \le 1$ are qualitatively similar to their high-$z$ results, highlighting similar discrepancies in the commonly used $\tau$-model SFH. Fitting the low-order parametrised models presented here should be more precise than fitting general stepwise SFHs. In essence, one is imposing a prior of approximate continuity to extract more from the data and reject pathological fits. This approach may also be more robust to uncertainties in the population synthesis models, since the strong spectral features that appear in stellar populations at specific ages may lead to artificial features in stepwise SFH fits. However, it is possible that the SFHs of real galaxies are more complex than those of our simulated galaxies, with bursts playing a more important role or truncation followed by rejuvenation. It will be interesting to search for evidence of such deviations to better constrain the potential contribution of ``punctuated'' star formation in galaxies of different stellar mass or morphology, or even in individual components of galaxies. These searches can best be carried out with full spectroscopic data rather than with colours alone, or better yet, with resolved stellar populations in nearby galaxies. \section*{ACKNOWLEDGEMENTS} We acknowledge support from NASA ATP grant NNX10AJ95G. \section*{APPENDIX: COMPARISON WITH BEHROOZI SFH PARAMETRISATION} \cite{behroozi13} advocate a parametrisation of the SFH based on reconstruction of average SFHs using observed galaxy stellar mass functions, specific star formation rates and cosmic star formation rates. The functional form they advocate is given by: \begin{equation} {\rm SFR}(t) = A\left[(t/\tau)^{B} + (t/\tau)^{-C}\right]^{-1}. \label{eq:behroozi} \end{equation} This model (hereafter the B-model) contains three free parameters, $\tau$, $B$ and $C$, in addition to the overall normalization $A$. We fit this model to the SFHs of SPH galaxies, allowing $B$ and $C$ to vary between 0 and 25. Figure \ref{fig:sub16} shows the result of fitting this model to the average SFH of galaxies in the Winds + Q model in bins of mass and colour. We show the same set of SFHs as Figure \ref{fig:sub3.3}. For most galaxies, this model provides a good description of the early SFH.
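(This functional form is trivial to evaluate; a short library-free sketch, with purely illustrative parameter values and $t>0$ assumed so that the $(t/\tau)^{-C}$ term stays finite, is given below.)
\begin{verbatim}
# Sketch of the B-model functional form defined in the appendix.
import numpy as np

def sfr_b(t, A=1.0, tau=3.0, B=2.0, C=1.0):
    """SFR(t) for t > 0 in Gyr; parameter values are illustrative."""
    x = np.asarray(t, dtype=float) / tau
    return A / (x ** B + x ** (-C))
\end{verbatim}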
However, because the B-model ties the late-time SFR to the early SFR, it does not adequately match the SFH of galaxies with a rising SFR at late times, such as those in the top two rows of the leftmost column. SFHs that are flat or gradually declining at late times are generally well described by the B-model, although for a significant fraction of galaxies, the B-model fails to match the late-time SFH. Unlike the {lin-exp}\ model, however, the \cite{behroozi13} parametrisation can match truncated SFHs well by employing a large value of $B$ and a value of $C$ close to 0. The B-model generally provides a substantially better description of the SFH of SPH galaxies than the $\tau$-model or the {lin-exp}\ model, but it does not perform as well as the 4-parameter model. In practice, our 2-parameter model fits the SFH of SPH galaxies nearly as well as our 4-parameter model, and significantly better than equation \ref{eq:behroozi} in situations where they disagree. Thus, the better performance of our model is due to its functional form, which decouples early and late-time star formation, and not to the number of parameters. \clearpage \onecolumn \begin{figure} \centerline{ \epsfxsize=5.5truein \epsfbox{figure1} } \caption{ The colour-magnitude diagram of galaxies in our SPH simulation (Winds; left) and after applying our post-processing quenching prescription (Winds+Q; right). Each point is an individual galaxy. Green open circles are satellite galaxies and black crosses are central galaxies. Since saturation makes it difficult to judge the relative numbers of central and satellite galaxies, we list these numbers for the two boxed regions in each panel; $N_C$ and $N_S$ denote the number of central and satellite galaxies in each box, respectively. Note in particular that the red sequence is almost entirely populated by satellite galaxies in the Winds model but is dominated (at the bright end) by central galaxies in the Winds+Q model. } \label{fig:sub1} \end{figure} \begin{figure} \centerline{ \epsfxsize=5.5truein \epsfbox{figure2} } \caption{ The galaxy stellar mass function at $z=0$ in our simulation (solid), and after applying our quenched winds post-processing prescription (dashed), compared to the observations of \protect\citealt{bell03} (dotted). In this and later plots, M$_S$ refers to the stellar mass of SKID-identified galaxies. } \label{fig:sub2} \end{figure} \begin{figure} \centerline{ \epsfxsize=5.0truein \epsfbox{figure3} } \caption{ Galaxy SFR versus time in the winds simulation. Each panel shows an individual galaxy. The thick gray curve shows the SFR in the simulation, the black solid curve shows the best-fit 4-parameter model (see text), the blue dot-dashed curve shows the best-fit lin-exp model, and the red dashed curve shows the best-fit $\tau$-model. The first three columns from the left show central galaxies in three mass bins, with mass increasing from left to right, and the right column shows satellite galaxies. The top row shows blue galaxies and each successive lower row shows a redder colour. } \label{fig:sub3.1} \end{figure} \begin{figure} \centerline{ \epsfxsize=5.0truein \epsfbox{figure4} } \caption{ Like Figure 3, but now showing the average SFH of central galaxies in bins of mass and colour (left three columns) and of satellite galaxies in bins of colour (right column). The three central galaxy mass bins are chosen to contain approximately equal numbers of galaxies, and in each column the four panels show the four quartiles of the colour distribution in that bin.
Labels indicate the mean stellar mass in each bin and the colour computed by FSPS from the average SFH. Note that the dot-dashed curve (lin-exp) is sometimes fully obscured by the solid curve (4-parameter model), and that the 4-parameter model fit is itself often obscured by the true average SFH.} \label{fig:sub3.2} \end{figure} \begin{figure} \centerline{ \epsfxsize=5.0truein \epsfbox{figure5} } \caption{ Same as Figure 4, but for the Winds + Q population. } \label{fig:sub3.3} \end{figure} \begin{figure} \centerline{ \epsfxsize=4.5truein \epsfbox{figure6} } \caption{ (Left) Colour predicted by the best-fit lin-exp model versus the colour computed from the full SFH of the corresponding SPH galaxy, at $z=0$. (Right) Same, for the best-fit 4-parameter model. Both panels are for the Winds galaxy population. There are 1,828 individual galaxies in the plot. The horizontal ridges in the left panel correspond to fits with very long or very short timescale $\tau$. } \label{fig:sub3.4} \end{figure} \begin{figure} \centerline{ \epsfxsize=5.0truein \epsfbox{figure7} } \caption{ Distribution of differences between parametric model colour and SPH galaxy colour in the winds simulation (left) and the Winds + Q model (right), at $z=0$. Each curve stands for a different parametric model, and the curves are normalised to unit integral. We ignore the effects of dust extinction on the colours. } \label{fig:sub3.7} \end{figure} \begin{figure} \centerline{ \epsfxsize=4.0truein \epsfbox{figure8} } \caption{ Distribution of differences between the $r$-band mass-to-light ratio predicted by the parametric models and that of the corresponding SPH galaxy, at $z=0$. Each curve stands for a different parametric model, and the curves are normalised to unit integral. } \label{fig:sub3.9} \end{figure} \begin{figure} \centerline{ \epsfxsize=5.0truein \epsfbox{figure9} } \caption{ Distribution of differences between parametric model colour and SPH galaxy colour at $z=1$ (left) and at $z=0.5$ (right). Each curve stands for a different parametric model, and the curves are normalised to unit integral. } \label{fig:sub3.10} \end{figure} \begin{figure} \centerline{ \epsfxsize=4.0truein \epsfbox{figure10} } \caption{ Distribution of differences between the $r$-band mass-to-light ratio predicted by the parametric models and that of the corresponding SPH galaxy at $z=1$ (left) and $z=0.5$ (right). Each curve stands for a different parametric model, and the curves are normalised to unit integral. } \label{fig:sub3.11} \end{figure} \begin{figure} \centerline{ \epsfxsize=5.5truein \epsfbox{figure11} } \caption{ Galaxy SFR versus time in the winds simulation. Each panel shows an individual galaxy. We use the same set of galaxies as Figure \ref{fig:sub3.1} but with models fit to the $z=0$ $ugriz$ colours rather than SFH. The thick gray curve shows the SFR in the simulation, the black solid curve shows the best-fit 4-parameter model (see text), the blue dot-dashed curve shows the best-fit lin-exp model, and the red dashed curve shows the $\tau$-model. } \label{fig:sub4.1} \end{figure} \begin{figure} \centerline{ \epsfxsize=5.0truein \epsfbox{figure12} } \caption{ Posterior probability distribution of the $r$-band mass-to-light ratio (left) and t$_{50}$, the time at which 50\% of the stars are formed (right), obtained by fitting parametric models to the $z=0$ galaxy $ugriz$ colours. Each row stands for a galaxy. 
Each curve stands for a particular parametric model: the 4-parameter model (solid black), the lin-exp model (blue dot-dashed), and the $\tau$-model (red dashed). The gray solid line in each panel shows the ``true'' value of the mass-to-light ratio (left) and t$_{50}$ (right) of the galaxy in the SPH simulation. } \label{fig:subpdf} \end{figure} \begin{figure} \centerline{ \epsfxsize=5.0truein \epsfbox{figure13} } \caption{ Mode of the posterior probability distribution of the mass-to-light ratio from parametric model SFH fits to $z=0$ $ugriz$ colours versus the mass-to-light ratio of SPH galaxies. Each point is an individual galaxy. There are 1,828 galaxies plotted in the upper panels and 1,723 in the lower panels. The left panels show the lin-exp model and the right panels the 4-parameter model. The upper panels are fits to galaxies in the Winds population and the bottom panels to those in the Winds+Q population.} \label{fig:sub4.2} \end{figure} \begin{figure} \centerline{ \epsfxsize=4.0truein \epsfbox{figure14} } \caption{ Distribution of differences between the $r$-band mass-to-light ratio predicted by the parametric model SFH fits to $z=0$ $ugriz$ colours and that of the corresponding SPH galaxy. Each curve stands for a different parametric model, and the curves are normalised to unit integral. } \label{fig:sub4.4} \end{figure} \begin{figure} \centerline{ \epsfxsize=5.0truein \epsfbox{figure15} } \caption{ Distribution of differences between the stellar population ages predicted by the parametric model SFH fits to $z=0$ $ugriz$ colours and the true ages, for the Winds population (left) and the Winds + Q population (right). Each curve stands for a different parametric model and the curves are normalised so that the area under each curve integrates to unity. $t_{10}$, $t_{50}$ and $t_{90}$ stand for the times at which 10\%, 50\% and 90\% of the stars were formed, respectively. } \label{fig:sub4.5} \end{figure} \begin{figure} \centerline{ \epsfxsize=5.0truein \epsfbox{figure16} } \caption{ Distribution of differences between the specific star formation rate (sSFR) predicted by the parametric model SFH fits to $z=0$ $ugriz$ colours and that of the corresponding SPH galaxy, for the Winds population (top left) and the Winds + Q population (top right). Each curve stands for a different parametric model, and the curves are normalised to unit integral. Histograms in the two bottom panels show the distribution of sSFR in the two galaxy populations. } \label{fig:sub4.6} \end{figure} \clearpage \begin{figure} \centerline{ \epsfxsize=4.0truein \epsfbox{figure17} } \caption{ Impact of adding IR or IR+UV colours to optical colours when inferring the $r$-band mass-to-light ratio (top) or sSFR (bottom). Curves show the distribution of errors from fits of the 4-parameter model using optical colours only (dotted), optical+IR (dashed), or optical+IR+UV (solid). } \label{fig:sub5.1} \end{figure} \begin{figure} \centerline{ \epsfxsize=5.0truein \epsfbox{figure18} } \caption{ Same as Figure \ref{fig:sub3.3}, but showing the \protect\cite{behroozi13} SFH parametrisation (long-dashed green) fit to the average SFH of galaxies in bins of mass and colour in our Winds + Q population. For comparison, we also show the lin-exp model (blue dot-dashed) and 4-parameter model (black solid) fits. } \label{fig:sub16} \end{figure} \clearpage \bibliographystyle{mn2e}
\section{Introduction} The LIGO and Virgo gravitational wave observatories recently completed their final joint science run until advanced detectors come online. The observing period comprised LIGO's sixth science run (S6) and Virgo's second and third science runs (VSR2 and VSR3). LIGO operated two instruments, the four kilometer Hanford detector (H1) and the four kilometer Livingston detector (L1), from July 07, 2009 to October 20, 2010. The two kilometer Hanford interferometer, which was included in the S5/VSR1 CBC analyses, was decommissioned prior to the start of this run and did not collect any data. The three kilometer Virgo interferometer (V1) operated during roughly the same timespan, with, however, a major commissioning break between its second and third science runs, from January 11, 2010 to August 6, 2010. In this note, we summarize the sensitivity achieved by these detectors during the latest runs from the perspective of low-mass (inspiral-only) CBC searches. The strain noise power spectral density (PSD) is a complete characterization of the sensitivity of a detector but is generally only meaningful over timescales of about an hour. Over longer time scales, the noise characteristics of the detectors typically vary significantly, for instance as the morning traffic picks up. More importantly, the detectors themselves change over time as they undergo weekly maintenance and occasional larger-scale upgrades. The notion of a PSD does not immediately have meaning when applied to the detector performance over the entire duration of the run. To solve this problem, we choose a single ``representative'' block of time for each detector and compute the PSD for each detector in this block. We take the resulting PSD as representative of the typical sensitivity to gravitational waves from CBCs achieved by the detectors in S6/VSR2-3. We choose the representative time for each detector by looking at its inspiral horizon distance distribution and selecting a time for which the detector operated with a horizon distance close to the mode of this distribution. The inspiral horizon distance is equal to the largest distance at which an optimally oriented and located equal-mass compact binary inspiral would accumulate an average SNR of 8 in the detector. The inspiral horizon distance contains less information than the full PSD, but it is derived from the PSD and folds in information specific to CBC signals. In this sense, the horizon distance is a useful measure of the sensitivity of a detector to gravitational waves from CBCs at a given time. In this article, we gather the inspiral horizon distance data generated during the S6/VSR2-3 low-mass CBC search \cite{S6lowmass} and use the results to identify PSDs that are representative of detector performance for CBC searches during these science runs. The plots and data presented here are intended to be released to the public as a summary of detector performance for CBC searches during S6 and VSR2-3 \cite{DCCpage}. These results use exactly the same science segments and analysis code \cite{TmpltBank} that were used in the low-mass CBC search in S6 and VSR2-3. In the next section, we review how the inspiral horizon distance is computed from the PSD. In Section 3, we present the inspiral horizon distance data obtained from the S6/VSR2-3 CBC analyses. We then use these data to identify PSDs which are representative of the detector sensitivity to inspirals for these science runs. We close this note with a discussion of one potential pitfall in using these spectra.
\section{Inspiral Horizon Distance} In LIGO and Virgo data analysis applications, we treat the strain noise in a detector as a stationary random process. If the noise in the detector were truly stationary, then the noise spectral density would completely characterize the sensitivity of the detector as a function of frequency. The power spectral density $S_n(f)$ for a stationary random process $n(t)$ is defined implicitly by the relation \begin{eqnarray} \label{psd}\frac{1}{2} S_n(f)\delta(f-f') = \langle \tilde{n}(f)\tilde{n}^*(f') \rangle , \end{eqnarray} where $\tilde{n}(f)$ is the Fourier transform of the random process. The spectral density is a measure of the mean square noise fluctuations at a given frequency. As mentioned above, the noise in the LIGO and Virgo detectors is not stationary. However, by measuring the spectral density over a short enough timescale, we are able to approximate the noise as stationary. The chosen timescale must also be long enough that we can form an accurate estimate of the spectral density. In the S6/VSR2-3 CBC searches, the spectral density was computed on 2048-second blocks of contiguous data \cite{FindChirp}. We account for long-timescale non-stationarities by using a different spectral density for every 2048 seconds. In assessing the overall performance of a detector for CBC searches, we use the inspiral horizon distance data from S6 and VSR2-3 to identify the ``typical'' sensitivity of the interferometers. The inspiral horizon distance of a detector is the distance at which an optimally oriented and optimally located equal-mass compact binary inspiral would give an average signal-to-noise ratio (SNR) of $\rho=8$ in the interferometer. If $\tilde{h}(f)$ represents the Fourier transform of the expected signal, then the average SNR this signal would attain in a detector with spectral density $S_n(f)$ is given by \begin{eqnarray} \label{general_ir} \langle \rho \rangle= \sqrt{4 \int_0^\infty \frac{|\tilde{h}(f)|^2}{S_n(f)} df}. \end{eqnarray} We find the inspiral horizon distance by setting $\langle \rho \rangle = 8$ and solving for the distance $D$ to the inspiral event, which parametrizes the waveform $\tilde{h}(f)$. Thus, the inspiral horizon distance combines the spectral density curve with the expected inspiral waveform to produce a single quantity that summarizes the sensitivity of the detector at a given time. Practical considerations require modifications to the limits of the integral. In the CBC search code, we compute the signal-to-noise ratio by \begin{eqnarray} \label{range} \langle \rho \rangle= \sqrt{4 \int_{f_{low}}^{f_{high}} \frac{|\tilde{h}(f)|^2}{S_n(f)} df}. \end{eqnarray} The lower limit is determined by our ability to characterize the noise at low frequencies. In the S6 CBC search, we took $f_{low}=40$ Hz as the low frequency cut-off in computing the inspiral horizon distance. For Virgo in VSR2-3, the low frequency cut-off was $f_{low}=50$ Hz. The upper limit of the integral is the innermost stable circular orbit (ISCO) frequency, \begin{eqnarray} f_{isco} = \frac{c^3}{6\sqrt{6}\pi G M}, \end{eqnarray} where $M$ is the total mass of the binary system. For binary neutron star (BNS) systems, for which we take component masses of $1.4M_\odot$, $f_{isco}= 1570$ Hz. The inspiral waveform for CBCs is accurately given in the frequency domain by the stationary phase approximation.
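As an aside, once the PSD and the template are tabulated on a common frequency grid, eqn.~\ref{range} and the corresponding horizon distance are straightforward to evaluate numerically; in the sketch below, the PSD and signal amplitude are toy placeholders rather than measured spectra.
\begin{verbatim}
# Schematic evaluation of the SNR integral and horizon distance.
# Sn and htilde are toy placeholders, not measured LIGO/Virgo data.
import numpy as np

f = np.arange(40.0, 1570.0, 0.25)            # Hz, f_low up to f_isco
Sn = 1e-46 * (1.0 + (f / 150.0) ** (-4) + (f / 150.0) ** 2)
htilde_1Mpc = 1e-23 * f ** (-7.0 / 6.0)      # toy |h(f)| at D = 1 Mpc

rho_1Mpc = np.sqrt(4.0 * np.trapz(htilde_1Mpc ** 2 / Sn, f))
D_horizon = rho_1Mpc / 8.0                   # Mpc, since |h| scales as 1/D
\end{verbatim}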
For an optimally oriented and optimally located equal-mass binary, the signal that appears at the interferometer in the stationary phase approximation is given by \begin{eqnarray} \label{spa} \tilde{h}(f) = \frac{1}{D}\left(\frac{5\pi }{24c^3}\right)^{1/2}(G\mathcal{M})^{5/6}(\pi f)^{-7/6} e^{i\Psi(f;M)}, \end{eqnarray} where $\mathcal{M} = \mu^{3/5}M^{2/5}$ is the chirp mass of the binary, $D$ is the distance to the binary and $\Psi$ is a real function of $f$, parametrized by the total mass $M$. Setting $\langle \rho \rangle = 8$ and inserting this waveform into eqn. \ref{range}, we find that the inspiral horizon distance is given by \begin{eqnarray} \label{range0} D = \frac{1}{8}\left(\frac{5\pi }{24c^3}\right)^{1/2}(G\mathcal{M})^{5/6}\pi^{-7/6} \sqrt{4 \int_{f_{low}}^{f_{high}} \frac{f^{-7/3}}{S_n(f)}df }. \end{eqnarray} The inspiral horizon distance is defined for optimally located and oriented sources. To compare with previous results, note that the sensitive range of an interferometric gravitational-wave detector was considered by Finn and Chernoff \cite{Finn:1993}, though here we follow the conventions of Allen {\it et al.} \cite{FindChirp} and Brown \cite{DBrownThesis}. Furthermore, if we divide the inspiral horizon distance given here by 2.26 we obtain the SenseMon range \cite{Sutton} reported as a figure of merit in the LIGO and Virgo control rooms, where the factor of 2.26 comes from averaging over a uniform distribution of source sky locations and orientations. In practice, it is convenient to measure distances in Mpc and mass in $M_\odot$. It is therefore useful to specialize eqn. \ref{range0} to this unit system. Further, since we measure the strain $h(t)$ at discrete time intervals $\Delta t = 1/f_s$, the spectral density is only known with a frequency resolution of $\Delta f = f_{s}/N$, where $N$ is the number of data points used to measure $S_n(f)$. By putting $f=k/(N\Delta t)$ into eqn. \ref{range0} and grouping terms by units, we arrive at the expression \begin{eqnarray} D \approx \frac{1}{8} \mathcal{T} \sqrt{\frac{4\Delta t}{N} \sum_{k=k_{low}}^{k_{high}} \frac{(k/N)^{-7/3}}{S_n(k)} } \,\mathrm{Mpc}, \end{eqnarray} where \begin{eqnarray} \mathcal{T} = \left( \frac{5}{24\pi^{4/3}} \right)^{1/2} \left( \frac{ \mu }{ M_\odot } \right)^{1/2} \left( \frac{ M }{ M_\odot } \right)^{1/3} \left( \frac{ G M_{\odot}/c^2 }{\mathrm{1 Mpc}} \right) \left( \frac{ G M_\odot/c^3 }{\Delta t } \right)^{-1/6}, \end{eqnarray} for the inspiral horizon distance in Mpc. Since it is convenient to work with the binary system's component masses, we have also replaced the chirp mass $\mathcal{M}$ with the reduced mass $\mu$ and the total mass $M$. Written this way, the inspiral horizon distance in Mpc is easily computed from the binary component masses in $M_\odot$. \begin{figure} \includegraphics[scale=0.4]{s6abcd_rangevtime.png} \includegraphics[scale=0.4]{s6abcd_histrange.png} \caption{(a) Inspiral horizon distance as a function of time during S6-VSR2/3. Points show the average inspiral horizon distance for each week in S6 and VSR2-3. As an indication of the weekly variations, we have included error bars corresponding to the standard deviation of the inspiral horizon distance during each week. (b) Distribution of 1.4-1.4 solar mass inspiral horizon distance for the three gravitational wave detectors H1, L1, and V1 for the joint LIGO-Virgo science run consisting of S6 and VSR2/3.
The histogrammed data consist of the same 2048-second analyzed segments from the S6 and VSR2/3 CBC searches.} \label{rangevtime} \end{figure} \section{Summary of Inspiral Horizon Distance Data} Here we present the horizon distance data collected from the final data products produced during the S6/VSR2-3 low-mass CBC search \cite{S6lowmass}. We have collected the data, rather than computing the inspiral horizon distance directly, in order to ensure that we analyze the exact same science segments and use the exact same analysis code as used in the LIGO/Virgo CBC searches. In Fig. \ref{rangevtime}a, we plot the average BNS inspiral horizon distance for each of the three detectors as a function of time. We use a window of one week, and the points on the plot correspond to the average inspiral horizon distance for all science segments beginning in that week. The error bars attached to the points indicate the standard deviation in the inspiral horizon distance over the course of the given week. This figure highlights the variability in sensitivity throughout the run and the reason it is difficult to identify a single time for each detector with a typical or average sensitivity. In Fig. \ref{rangevtime}b, we histogram the BNS inspiral horizon distance for the three detectors H1, L1, and V1. The bimodal behavior seen in the LIGO and Virgo detectors is largely due to a significant commissioning break in S6 and commissioning in Virgo between VSR2 and VSR3. These commissioning breaks will be described in detail in a later publication on the S6/VSR2-3 runs. In the actual S6/VSR2-3 CBC analysis, the inspiral horizon distance is computed for ($n$)-($n$) solar mass binaries for $n$ an integer. Previous documents \cite{Nada}, however, have plotted the horizon distance for the canonical $1.4-1.4$ solar mass binary neutron star. In order to simplify comparison to previous results, we rescale the obtained distributions by $(2.8/2)^{5/6}$, i.e., by the ratio of the chirp masses of a $1.4-1.4$ and a $1.0-1.0$ solar mass system raised to the $5/6$ power, since $D \propto \mathcal{M}^{5/6}$. This scaling ignores the fact that $f_{isco}$ is different for the two mass pairs, but this is negligible since the signal template is buried in the noise at such high frequencies. \begin{figure} \includegraphics[scale=0.65]{s6abcd_rangevmass.png} \caption{Mean inspiral horizon distance as a function of mass for the three gravitational wave detectors H1, L1 and V1 during S6-VSR2/3. The error bars on the curves extend from one standard deviation below to one standard deviation above the mean.} \label{rangevmass} \end{figure} In Fig. \ref{rangevmass}, we show the mean inspiral horizon distance for each interferometer as a function of the binary total mass, assuming equal-mass binaries. This plot reflects the mean performance of the detector over various frequency bands. As the component mass increases, the upper cutoff frequency $f_{high}=f_{isco}$ decreases, so the inspiral horizon distance is determined by an increasingly narrow band above the lower cutoff $f_{low}=40$ Hz (or $f_{low}=50$ Hz in the case of Virgo). The inspiral horizon distance takes into account only the inspiral stage of the CBC event, while for high-mass systems ($M> 25M_{\odot}$) the merger and ringdown stages of the coalescence occur in the LIGO and Virgo sensitive band. For total masses greater than 25$M_\odot$, the inspiral-only range begins to turn over, which is not indicative of the sensitivity of the detector for these systems.
For these binary systems, we use Effective One Body Numerical Relativity (EOBNR) waveform templates that include the merger and ringdown stages, and our sensitivity is significantly improved relative to an inspiral-only analysis \cite{S5highmass}. \section{Representative Power Spectral Density} \begin{figure} \includegraphics[scale=0.65]{H1L1V1_representative_spectra.png} \caption{Representative spectral density curves for the LIGO and Virgo detectors during S6 and VSR2-3. We plot here the amplitude spectral density, which is the square root of the power spectral density, since the strength of a gravitational wave signal is proportional to the strain induced in the interferometer and the sensitivity of a detector is therefore most naturally expressed in terms of this quantity. These spectral density curves correspond to May 9, 2010 (GPS 957474700) for H1, February 27, 2010 (GPS 951280082) for L1 and August 30, 2009 (GPS 935662133) for V1. These times are chosen such that the inspiral horizon distance for each detector at that time coincides with the mode of its inspiral horizon distance distribution, as given by the midpoint of the most populated bin in Fig. \ref{rangevtime}b.} \label{psdcurves} \end{figure} In Fig. \ref{psdcurves}, we give representative spectral density curves for each of the three detectors during S6 and VSR2-3. The chosen representative curve corresponds to a time when the detector operated near the mode of its inspiral horizon distance distribution shown in Fig. \ref{rangevtime}b. The algorithm used to compute the spectral densities is described in detail in \cite{FindChirp}. The parameters needed in order to reconstruct our results are given in Table \ref{params}. The first column in Table \ref{params} gives a list of parameter names and symbols, which are the same names and symbols used in \cite{FindChirp}. The second and third columns give the values of these parameters used in the S6/VSR2-3 CBC searches. These parameters can be used to reproduce the inspiral horizon distance data accompanying this note. The fourth column gives the values of the parameters used to compute the representative spectral density curves shown here. In making our choice of parameters for computing representative spectra, we sacrificed frequency resolution ($\Delta f = 1/T$) for PSD accuracy (which increases with $N_S$). \begin{table} \begin{center} \caption{Parameters used in the computation of the spectral density.} \label{params} $ $\\ \begin{tabular}{l|c|c|c} FINDCHIRP parameter \cite{FindChirp} & S6 low-mass & VSR2-3 low-mass & representative spectra\\ \hline sample rate ($1/\Delta t$) & 4096 Hz & 4096 Hz & 16384 Hz\\ data block duration ($T_{block}$) & 2048 s & 2048 s & 2048 s\\ number of data segments ($N_S$) & 15 & 15 & 1023\\ data segment duration ($T$) & 256 s & 256 s & 4 s\\ stride ($\Delta$) & 524288 & 524288 & 32768 \end{tabular} \end{center} \end{table} One potential pitfall with using these spectra is that the choice of representative PSD for a detector is not obvious. Here we illustrate the degree to which our choice of using the mode affected the chosen PSD. We compare the spectrum for H1 corresponding to a time when H1 operated near the mode of its inspiral horizon distance distribution to spectra from times when it operated near the mean and maximum of that distribution. In Table \ref{stats}, we provide a quantitative summary of the low-mass inspiral horizon distance distributions, in Mpc, for a 1.4--1.4$M_\odot$ binary. We see that the horizon distance varies by roughly 10\% between its mode and mean.
This suggests that spectral density curves from a detector's most common sensitivity (mode) may differ significantly from the spectral density of a detector's ``average'' performance. To illustrate this point, we plot in Fig. \ref{h1curves} three spectra for H1 from different times in S6. \begin{figure} \includegraphics[scale=0.65]{H1_representative_spectra.png} \caption{Various possibilities for a representative PSD for H1 during S6. These spectral density curves correspond to times when the detector operated near its S6 mode (44.1 Mpc), mean (39.5 Mpc) and maximum (49.3 Mpc) inspiral horizon distance. The times chosen are May 9, 2010 (GPS 957474700) for the mode, November 4, 2009 (GPS 941365351) for the mean and July 4, 2010 (GPS 962268343) for the maximum.} \label{h1curves} \end{figure} \begin{table} \begin{center} \caption{Inspiral Horizon Distance Summary for a 1.4--1.4$M_\odot$ Binary (Mpc)} \label{stats} $ $\\ \begin{tabular}{l|ccc} & H1 & L1 & V1\\ \hline mean & 39.5 & 34.0 & 16.6 \\ max & 49.3 & 47.2 & 23.2 \\ mode & 44.1 & 36.6 & 18.8 \\ std & 4.7 & 6.9 & 3.9 \end{tabular} \end{center} \end{table} All of the data used here have been computed using the final version of the calibration used in the CBC searches. Note that the noise spectra presented here are subject to systematic uncertainties associated with the strain calibration. These uncertainties can be up to $\pm$15\% in amplitude. For more detail, see references \cite{Goetz:2009,VirgoCalibration}. \section*{Acknowledgements} The authors gratefully acknowledge the support of the United States National Science Foundation for the construction and operation of the LIGO Laboratory, the Science and Technology Facilities Council of the United Kingdom, the Max-Planck-Society, and the State of Niedersachsen/Germany for support of the construction and operation of the GEO600 detector, and the Italian Istituto Nazionale di Fisica Nucleare and the French Centre National de la Recherche Scientifique for the construction and operation of the Virgo detector. The authors also gratefully acknowledge the support of the research by these agencies and by the Australian Research Council, the Council of Scientific and Industrial Research of India, the Istituto Nazionale di Fisica Nucleare of Italy, the Spanish Ministerio de Educación y Ciencia, the Conselleria d'Economia, Hisenda i Innovació of the Govern de les Illes Balears, the Foundation for Fundamental Research on Matter supported by the Netherlands Organisation for Scientific Research, the Royal Society, the Scottish Funding Council, the Polish Ministry of Science and Higher Education, the FOCUS Programme of the Foundation for Polish Science, the Scottish Universities Physics Alliance, the National Aeronautics and Space Administration, the Carnegie Trust, the Leverhulme Trust, the David and Lucile Packard Foundation, the Research Corporation, and the Alfred P. Sloan Foundation. This document has been assigned LIGO Document No. LIGO-T1100338. Anyone using the information in this document and associated material (S6/VSR2/VSR3 noise spectra, inspiral ranges, observation times) in a publication or talk must acknowledge the US National Science Foundation, the LIGO Scientific Collaboration, and the Virgo Collaboration. Data files associated with the results and plots presented in this document can be found here: https://dcc.ligo.org/cgi-bin/public/DocDB/ShowDocument?docid=63432. Please direct all questions to the corresponding author (Stephen Privitera, [email protected]).
Please inform the corresponding author and the LSC and Virgo spokespeople ([email protected] and [email protected], respectively) if you intend to use this information in a publication. \clearpage
\section{Introduction} Synthetic aperture radar (SAR) is capable of acquiring data in all kinds of weather and lighting conditions. As a coherent imaging radar system, it is often mounted on an airplane or satellite platform, where it transmits a frequency modulated signal to a scene on the ground and records the measured response in flight. A detailed introduction to the working mechanisms of SAR can be found in \cite{moreira2013tutorial, chan2008introduction, ouchi2013recent}. However, the data acquisition process is often plagued by phase errors: due to inaccuracies in the measurement of the SAR platform trajectory, as well as possible movement of objects in the scene, the observed data (radar returns) will contain phase errors, which in turn lead to a defocusing effect in the formed SAR image. Techniques aiming to directly estimate these phase errors from the raw SAR data and then remove them so as to improve the formed image quality are called autofocusing techniques. Among the earliest autofocusing techniques, phase gradient autofocus (PGA) \cite{wahl1994phase} is a well-known method which estimates the phase error from an estimate of its gradient. Mapdrift autofocus is another classical technique \cite{calloway1994subaperture}, which estimates phase errors from the maps reconstructed for each sub-aperture. Differing from those approaches are the many optimization-based methods proposed in recent years. These mainly belong to one of two categories. The first maximizes a sharpness metric in the form of a power function \cite{fienup2003aberration, morrison2007sar} or image entropy \cite{kragh2006monotonic,kantor2017minimum}. The second adopts an inverse problem approach. Based on a forward observation model relating the corrupted phase history to the underlying SAR image, SAR autofocusing is formulated as an inverse problem, and various regularized variational models have been proposed to obtain its solution. For instance, Onhon et al. use the $p$th power of the approximate $l_{p}$ norm as the regularization term and a coordinate descent framework to solve the problem \cite{onhon2011sparsity}. There are also various methods addressing the problem in a compressive sensing context, such as the majorization-minimization based method \cite{kelly2012auto, kelly2014sparsity}, the iteratively re-weighted augmented Lagrangian based method \cite{gungor2015augmented, gungor2017autofocused} and the conjugate gradient based method with two different regularization terms (approximate $l_{1}$ and approximate total variation regularization) \cite{ugur2012sar}. Besides these, there are also some methods built on traditional SAR imaging algorithms. An autofocusing method which maximizes a sharpness metric for each pulse in the back-projection imaging process is proposed in \cite{ash2011autofocus} and further extended to the case involving moving ship targets \cite{sommer2019backprojection}. A polar format algorithm based autofocusing approach \cite{kantor2019polar}, which combines \cite{onhon2011sparsity} with classical autofocusing methods like PGA, has also recently been proposed. In this paper, we formulate the SAR autofocusing problem as an inverse problem, and adopt a coordinate descent framework to jointly estimate the desired SAR image and the phase error. The optimization process of the phase error is the same as that in \cite{onhon2011sparsity}.
However, for the optimization of the desired SAR image, we use a Cauchy regularization on the magnitude of the desired image (hence we call it "magnitude Cauchy") and propose an efficient method based on Wirtinger calculus \cite{brandwood1983complex, kreutz2009complex} to handle the complex nature of this subproblem. The rest of this paper is organized as follows. In Section 2, the data acquisition model for SAR is introduced briefly. In Section 3, our proposed method is described. In Section 4, experimental results are shown to demonstrate the effectiveness of our method. Finally, conclusions are drawn in Section 5. \section{Data acquisition model} In this paper, we consider a SAR platform operating in spotlight mode, whose transmitted signal at each azimuth position can be formulated as: \begin{equation} \label{eq1} s(t)=\mathrm{Re}\{e^{j(\omega_{0}t+\alpha t^{2})}\}, \end{equation} where $\omega_{0}$ is the carrier frequency and $2\alpha$ is the chirp rate. The relationship between the observed data $r_{m}(t)$ at the $m$th aperture position and the underlying SAR image $F(x,y)$ can be described by: \begin{equation} \label{eq2} r_{m}(t)=\iint F(x,y)e^{-jU(x\cos\theta+y\sin\theta)}dxdy. \end{equation} The region over which the integral is computed is $x^{2}+y^{2}\le L^{2}$, with $L$ being the radius of the circular patch on the ground to be imaged. $\theta$ is the look angle, and $U$ is defined by \begin{equation} \label{eq3} U=\frac{2}{c}(\omega_{0}+2\alpha(t-\tau_{0})), \end{equation} with $\tau_{0}$ being the demodulation time. The discretized version of this model is \begin{equation} \label{eq4} r_{m}=C_{m}f, \end{equation} where $r_{m}$ and $C_{m}$ are the vector form of the phase history and the observation matrix for the $m$th aperture position, respectively, and $f$ is the vector form of the underlying SAR image. Stacking (4) for all the aperture positions, and taking phase error as well as noise into account, the model becomes \begin{equation} \label{eq5} g=C(\phi)f+n, \end{equation} with $g$ being the corrupted phase history, $\phi$ the phase error, and $n$ Gaussian white noise. $C(\phi)$ is the corrupted observation matrix. Taking the case of a 1D phase error varying along the azimuth direction as an example, we have \begin{equation} \label{eq6} C_{m}(\phi)=e^{j\phi_{m}}C_{m}, \end{equation} where $C_{m}(\phi)$ and $\phi_{m}$ are the corrupted observation matrix and the phase error for the $m$th aperture position, respectively. \section{Wirtinger Coordinate Descent Autofocusing} \label{sec:pagestyle} We propose to minimize the following cost function so that the desired SAR image $f$ and the phase error $\phi$ are jointly estimated, and SAR image reconstruction and phase error removal are therefore achieved simultaneously: \begin{equation} \label{eq7} J(f,\phi)=\|g-C(\phi)f\|_{2}^{2}-\lambda \sum_{i=1}^{N}\ln{\frac{\gamma}{\gamma^{2}+|f_{i}|^{2}}}. \end{equation} Here $\lambda$ is the regularization parameter and $\gamma$ is the scale parameter of the Cauchy distribution. The penalty term used in (7) is a Cauchy regularization imposed only on the magnitude of the latent SAR image, and we therefore refer to it as "magnitude Cauchy regularization". Like the $l_{p}$ norm, it is a sparsity-enforcing regularization term, and its effectiveness has already been demonstrated in SAR imaging and other inverse problems \cite{karakucs2020solving, karakus2020convergence}. Since we want to solve for $f$ and $\phi$ jointly, we adopt a coordinate descent autofocusing framework similar to \cite{onhon2011sparsity}.
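As a roadmap for the two subsections that follow, a minimal Python sketch of this alternating scheme is given below. It is our own illustration rather than code from \cite{onhon2011sparsity}, and \texttt{solve\_image\_subproblem} and \texttt{update\_phases} are hypothetical placeholders for the two updates derived in the following two subsections. \begin{verbatim}
import numpy as np

def wcda(g, C_of_phi, solve_image_subproblem, update_phases,
         phi0, f0, tol=1e-3, max_iter=50):
    # g: corrupted phase history; C_of_phi: callable phi -> C(phi);
    # phi0, f0: initial phase error and image (f0 nonzero, e.g. the
    # adjoint reconstruction C(phi0)^H g).
    phi, f = phi0, f0
    for _ in range(max_iter):
        f_new = solve_image_subproblem(C_of_phi(phi), g, f)  # image update
        phi = update_phases(g, f_new)                        # phase update
        # stop when the relative change of the image is below tol
        if np.linalg.norm(f_new - f) / np.linalg.norm(f) < tol:
            f = f_new
            break
        f = f_new
    return f, phi
\end{verbatim}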
Specifically, $f$ and $\phi$ are updated alternately, fixing one of them while optimizing the other. This iterative process terminates when the relative error between $f^{(n)}$ and $f^{(n+1)}$ is smaller than $10^{-3}$. \subsection{Image reconstruction} We first discuss the iterative estimation of the desired image $f$. In each iteration of the proposed Wirtinger coordinate descent autofocusing (WCDA) framework, the subproblem to be solved is \begin{equation} \label{eq8} f^{(n+1)}=\mathrm{arg}\min_{f\in \mathbb{C}^{N}}\|g-C(\phi^{(n)})f\|_{2}^{2}-\lambda \sum_{i=1}^{N}\ln{\frac{\gamma}{\gamma^{2}+|f_{i}|^{2}}}. \end{equation} Unlike many other imaging-related inverse problem formulations, this is an optimization problem in a complex unknown vector, and it must be handled with the appropriate mathematical tools: the derivative results for real optimization problems need to be modified or redefined to fit the complex case. We therefore present a technique for directly computing the complex gradient and solving this problem using Wirtinger calculus. Wirtinger calculus is a powerful theory covering the analysis of real-valued functions of complex variables, and is thus applicable to this subproblem. It is also a rather elegant approach, because it addresses the problem concisely: there is no need to expand the complex variables into their real counterparts in the computational process. To find the minimum of (8), we compute its gradient directly using Wirtinger calculus. The definition of the gradient of a real-valued function $h(f)$ with a complex vector variable $f$ was first proposed in \cite{brandwood1983complex} and further extended in \cite{kreutz2009complex}; it reads \begin{equation} \label{eq9} \nabla_{f}h=\Omega_{f}^{-1}\left(\frac{\partial h}{\partial f}\right)^{H}, \end{equation} where $\Omega_{f}$ is a metric tensor. In this paper we use Brandwood's setting, i.e., we take it to be the identity matrix, so that \begin{equation} \label{eq10} \nabla_{f}h=\left(\frac{\partial h}{\partial f}\right)^{H}. \end{equation} Since the cost function is real-valued, according to \cite{kreutz2009complex} we have: \begin{equation} \label{eq11} \left(\frac{\partial h}{\partial f}\right)^{H}=\overline{\left(\frac{\partial h}{\partial f}\right)}^{T}=\left(\frac{\partial h}{\partial \overline{f}}\right)^{T}=\left(\frac{\partial h}{\partial \overline{{f}_{1}}},...,\frac{\partial h}{\partial \overline{{f}_{N}}}\right)^{T}. \end{equation} Each component of the last term on the right-hand side of (11) can be computed using the chain rule and the definition of the conjugate $\mathbb{R}$-derivative \cite{kreutz2009complex}. As a result, for the second term of (8), if we denote \begin{equation} \label{eq12} R(f)=-\sum_{i=1}^{N}\ln{\frac{\gamma}{\gamma^{2}+|f_{i}|^{2}}}, \end{equation} then we have \begin{equation} \label{eq13} (\nabla R(f))_{i}=\frac{f_{i}}{\gamma^{2}+|f_{i}|^{2}}, \quad i=1,...,N. \end{equation} For the first term of (8), rewriting the square of the $l_{2}$ norm as an inner product and using the result derived in \cite{brandwood1983complex}, we obtain: \begin{equation} \label{eq14} \nabla \|g-C(\phi^{(n)})f\|_{2}^{2}=C(\phi^{(n)})^{H}(C(\phi^{(n)})f-g). \end{equation} Therefore, the gradient of (8) can be written as: \begin{equation} \label{eq15} \nabla J(f,\phi)=C(\phi^{(n)})^{H}(C(\phi^{(n)})f-g)+\lambda W(f)f, \end{equation} where \begin{equation} \label{eq16} W(f)=\mathrm{diag}(s), \end{equation} \begin{equation} \label{eq17} s_{i}=\frac{1}{\gamma^{2}+|f_{i}|^{2}}, \quad i=1,...,N.
\end{equation} We now set (15) to zero, since this is the necessary and sufficient condition for a stationary point of a real-valued function of complex variables \cite{brandwood1983complex, kreutz2009complex}. This leads to \begin{equation} \label{eq18} [C(\phi^{(n)})^{H}C(\phi^{(n)})+\lambda W(f)]f=C(\phi^{(n)})^{H}g. \end{equation} It is worth pointing out that the solution found in this way is not necessarily the global minimum, owing to the non-convexity of the Cauchy penalty, but the experimental results in Section 4 indicate that the obtained solution is sufficiently good. We can rewrite (18) as $Af=b$, where $b=C(\phi^{(n)})^{H}g$ and $A=C(\phi^{(n)})^{H}C(\phi^{(n)})+\lambda W(f)$. Since $W(f)$ depends on $f$, so does $A$. This makes (18) nonlinear in $f$, and it is difficult to find a closed-form solution for it. However, if we approximate the $f$ in $W(f)$ by the $f$ computed in the previous iteration of the coordinate descent framework, $A$ becomes a constant matrix, and $Af=b$ becomes a linear system of equations. That is to say, when computing the unknown $f^{(n+1)}$, we actually solve \begin{equation} \label{eq19} [C(\phi^{(n)})^{H}C(\phi^{(n)})+\lambda W(f^{(n)})]f^{(n+1)}=C(\phi^{(n)})^{H}g. \end{equation} The solution of this equation can be obtained efficiently with the conjugate gradient algorithm \cite{barrett1994templates}. \subsection{Phase Error Optimization} For a 1D phase error varying along the azimuth direction, after obtaining $f^{(n+1)}$, the phase error at aperture position $m$ can be updated by solving: \begin{equation} \label{eq20} \phi_{m}^{(n+1)}=\mathrm{arg}\min_{\phi_{m} }\|g_{m}-e^{j\phi_{m}}C_{m}f^{(n+1)}\|_{2}^{2}, \end{equation} with $g_{m}$ and $C_{m}$ being the parts of $g$ and $C$ corresponding to the $m$th aperture position. According to \cite{onhon2011sparsity}, the solution is \begin{equation} \label{eq21} \phi_{m}^{(n+1)}=-\arctan\left(-\frac{\mathrm{Re}\{[f^{(n+1)}]^{H}C_{m}^{H}g_{m}\}}{\mathrm{Im}\{[f^{(n+1)}]^{H}C_{m}^{H}g_{m}\}}\right). \end{equation} This estimate is then used to update the corrupted observation matrix as: \begin{equation} \label{eq22} C_{m}(\phi_{m}^{(n+1)})=e^{j\phi_{m}^{(n+1)}}C_{m}. \end{equation} The case of 2D phase errors varying in both the range and cross-range directions can also be handled by similar methods; see \cite{onhon2011sparsity} for more details.
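To make the two updates concrete, one possible Python realization is sketched below; it is our own illustration, assuming a dense observation matrix that fits in memory, and all variable names are ours. \begin{verbatim}
import numpy as np

def solve_image_subproblem(C, g, f_prev, lam=1.0, gamma=1.0,
                           n_cg=100, tol=1e-10):
    # Solve (19): [C^H C + lam*W(f_prev)] f = C^H g with plain
    # conjugate gradients; the system matrix is Hermitian positive
    # definite for lam > 0, so CG applies directly.
    w = 1.0 / (gamma**2 + np.abs(f_prev)**2)      # diagonal of W, (17)
    A = lambda x: C.conj().T @ (C @ x) + lam * (w * x)
    b = C.conj().T @ g
    f = f_prev.copy()
    r = b - A(f)
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(n_cg):
        Ap = A(p)
        alpha = rs / np.vdot(p, Ap).real
        f = f + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r).real
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return f

def update_phase(C_m, g_m, f):
    # Least-squares minimizer of (20): the phase aligning C_m f with
    # g_m, i.e. the argument of (C_m f)^H g_m; np.vdot conjugates its
    # first argument and np.angle handles the arctangent quadrants.
    return np.angle(np.vdot(C_m @ f, g_m))
\end{verbatim} In the loop sketched at the beginning of this section, \texttt{update\_phase} would be applied at each aperture position $m$, and the corrupted observation matrix refreshed according to (22).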
\section{Experimental results} \label{sec:typestyle} In this paper, the same radar system model as in \cite{onhon2011sparsity} is used, and its parameters are listed in Table \ref{tab:radarparameters}. \begin{table}[!htbp] \small \caption{\textbf{Parameters of the radar system.}} \begin{center} \begin{tabular}{|l|c|} \hline \textbf{Carrier Frequency}&$2\pi \times 10^{10}\ \mathrm{rad/s}$\\\hline \textbf{Chirp Rate}&$2\pi \times 10^{12}\ \mathrm{rad/s^{2}}$\\\hline \textbf{Pulse Duration}&$4 \times 10^{-4}\ \mathrm{s}$\\\hline \textbf{Angular Range}&$\ang{2.3}$\\\hline \end{tabular} \end{center} \label{tab:radarparameters} \end{table} In each experiment, this radar system model is used to generate a simulated phase history from a given scene. The corrupted phase history is then obtained by adding a 1D random phase error along the azimuth direction, as well as white Gaussian noise, to the originally simulated phase history. This corrupted phase history is then used to reconstruct a SAR image while correcting for the phase error. The proposed method is compared to two other methods. One is the sparsity driven autofocus (SDA) method \cite{onhon2011sparsity}, a state-of-the-art autofocusing technique operating in an inverse problem framework similar to the one employed here; the other is the traditional polar format algorithm \cite{walker1980range}, which does not involve autofocusing. Apart from visual comparison, two numerical metrics are also computed to better assess the performance of each method: the mean square error (MSE) between the reconstructed SAR image and the original scene, and the entropy of the reconstructed SAR image. For both metrics, smaller values indicate better performance. In the first experiment, we use a simulated scene measuring $32\times 32$ pixels. The visual results are presented in Fig.~1, and the numerical results are listed in Table 2. It can be observed that the polar format result suffers from a severe defocusing effect, whereas the images reconstructed by SDA and the proposed method are both well focused and closely resemble the original scene. The MSE and entropy values of the proposed method are lower, indicating that its result is sharper and more similar to the original scene. In the second experiment, a real TerraSAR-X image serves as the scene. Due to the high computational burden of our method for scenes of large size, a $64\times 64$ patch is cut from the original image and regarded as the input scene. The corrupted pseudo-phase history is generated from it as explained above. Fig.~2 shows the images reconstructed by all three methods, while Table 2 contains the corresponding numerical results. According to Fig.~2, the polar format algorithm once again yields smeared targets, whereas both SDA and the proposed method remove the phase errors effectively and present focused targets, a significant improvement over the polar format result. Furthermore, the numerical indices in Table 2 demonstrate that the proposed method outperforms SDA. \begin{figure}[t] \renewcommand{\figurename}{\textbf{Fig.}} \centering \subfigure[]{ \includegraphics[width=1.5in]{original.png}} \subfigure[]{ \includegraphics[width=1.5in]{polar.png}} \subfigure[]{ \includegraphics[width=1.5in]{appl1.png}} \subfigure[]{ \includegraphics[width=1.5in]{cauchy.png}} \captionsetup{font={normalsize}} \caption{Visual results for Scene 1, a simulated $32\times 32$ scene, obtained by various methods. (a) original scene, (b) polar format reconstruction, (c) SDA, (d) the proposed method.} \end{figure} \begin{figure}[t] \renewcommand{\figurename}{\textbf{Fig.}} \centering \subfigure[]{ \includegraphics[width=1.5in]{original_t.png}} \subfigure[]{ \includegraphics[width=1.5in]{polar_t.png}} \subfigure[]{ \includegraphics[width=1.5in]{appl1_t.png}} \subfigure[]{ \includegraphics[width=1.5in]{cau_t.png}} \captionsetup{font={normalsize}} \caption{Visual results for Scene 2, a $64\times 64$ patch from a TerraSAR-X image, obtained by various methods.
(a) original scene, (b) polar format reconstruction, (c) SDA, (d) the proposed method.} \end{figure} \begin{table}[!t] \small \caption{\textbf{Numerical metric evaluation of the experimental results for SDA and the proposed method.}} \begin{center} \begin{tabular}{|l|c|c|} \hline \multicolumn{3}{|c|}{\textbf{Scene 1}} \\ \hline & \textbf{MSE}&\textbf{Entropy}\\\hline \textbf{SDA}&5.4310$\times 10^{-6}$&1.4621\\ \hline \textbf{Proposed}&1.2227$\times 10^{-6}$&0.3327\\ \hline \hline \multicolumn{3}{|c|}{\textbf{Scene 2}} \\ \hline & \textbf{MSE}&\textbf{Entropy}\\ \hline \textbf{SDA}& 6.4964$\times 10^{-5}$ &5.4410\\ \hline \textbf{Proposed}& 6.3029$\times 10^{-5}$ &5.4333\\ \hline \end{tabular} \end{center} \label{tab:allres} \end{table} \section{Conclusions} \label{sec:conclusions} In this paper, an optimization model regularized by a Cauchy penalty is proposed to simultaneously reconstruct a SAR image and achieve autofocusing. A coordinate descent framework is adopted to solve this inverse problem, in which Wirtinger calculus is utilized to solve directly the subproblem involving complex optimization. Experimental results on simulated phase histories derived from a simulated scene and a real SAR image demonstrate that the proposed method can reconstruct a highly focused image and effectively remove phase errors. \bibliographystyle{unsrt}
\section{Introduction} A type-II superconducting slab immersed in an external magnetic field $\boldsymbol{B}_0$ is threaded by a regular array of vortex lines, except in the Meissner state at low parallel field ($B_0<B_{c1}$, the first critical field); this is the so-called mixed state, generally treated as a continuum on a mesoscopic scale, large compared with the intervortex distance $a\sim$100--1000 \AA. Above the second critical field $B_{c2}$, and in all configurations, the structure of the vortex lattice disappears in the bulk, where the metal becomes normal again with a resistivity $\rho_n$. If one ignores small effects related to deformations of the vortex-lattice cell, for instance the shear effects described by a very small elastic shear constant $c_{66}$, a state of the lattice is well described by specifying at every point the vortex field $\boldsymbol{\omega} = n\varphi_0\boldsymbol{\nu}$, which combines the density of vortex lines $n$ (m of line$/$m$^3$) and their direction $\boldsymbol{\nu}$ ($\nu=1$); multiplying by $\varphi_0$, the flux quantum, arbitrarily gives $\boldsymbol{\omega}$ the dimension of a magnetic field (in Teslas) \cite{Mathieu88,Hocquet92}. When the sample is subjected to an arbitrary electromagnetic excitation, a time-varying magnetic field or an applied current, the vortex lattice can be set in motion, either steady motion ("flux flow") or small oscillations. These motions are always accompanied by dissipation and by a (mesoscopic) electric field $\boldsymbol{E} = \boldsymbol{\omega}\times{\bf v}_L$, where ${\bf v}_L$ is the line velocity \cite{Mathieu88,Hocquet92}. Whatever the configuration, the electromagnetic response of the mixed state involves the dynamics of the vortices. An {\em ideal} sample with no crystal defects, either in the bulk or at the surface, and hence no possible pinning site for the vortices, would behave quite plainly as a conducting, diamagnetic medium. Its resistivity is anisotropic and varies from zero (for currents parallel to the vortices) to $\rho_f\simeq\rho_n\omega/B_{c2}$ (for currents normal to the vortices). One also defines an effective ``diamagnetic permeability'' $\mu(\omega)<\mu_0$ \cite{Vasseur97}; the relative permeability $\mu_r=\mu/\mu_0$ is a rapidly increasing function of the field, from 0 to 1, so that at the rather high fields $B$ at which we work, $\mu$ practically coincides with $\mu_0$. Thus the response of an \emph{ideal} slab to an external excitation ${\bf b}_e(t)$ in the geometry of Fig.~\ref{slab1} is the solution of a simple diffusion equation \begin{equation} \frac{\partial^2b}{\partial x^2} = \mu_0\sigma_f\frac{\partial b}{\partial t}\qquad, \label{diffusionequation}\end{equation} where $\sigma_f=\rho_f^{-1}$ is the ``flux-flow'' conductivity. For example, the ideal AC response, to a $b_0e^{-i\Omega t}$, would be a classical skin-effect mode described by the dispersion relation $k_1^2=i\mu_0\sigma_f\Omega=2i/\delta_f^2$ immediately deduced from (\ref{diffusionequation}), where $\delta_f$ is the ``flux-flow'' skin depth.
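As a purely illustrative aside (our own numerical sketch; the values of $\sigma_f$ and $\Omega$ are merely representative and are not taken from the text), the order of magnitude of $\delta_f$ follows at once from this dispersion relation: \begin{verbatim}
import numpy as np

mu0 = 4e-7 * np.pi           # vacuum permeability (T m / A)
sigma_f = 1.0e7              # assumed flux-flow conductivity (1/(Ohm m))
Omega = 2 * np.pi * 1e3      # assumed angular frequency (1 kHz)

# k1^2 = i*mu0*sigma_f*Omega = 2i/delta_f^2
delta_f = np.sqrt(2.0 / (mu0 * sigma_f * Omega))
print(f"flux-flow skin depth: {delta_f * 1e3:.1f} mm")  # about 5 mm here
\end{verbatim}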
Or again, for a DC current $I$ applied in the $y$ direction, the current-voltage characteristic $V$-$I$ of the slab of Fig.~\ref{slab1} would simply be an Ohm law $V=R_fI$. In this respect, it is important to stress that all theories agree on the nature of this ideal response, but also on the fact that it is never observed (except indirectly \cite{Vasseur97}), because the slightest crystal defects affect the electromagnetic response considerably, with the appearance of pinning and of critical currents: the DC $V$-$I$ characteristic becomes linear only at high current, $V=R_f(I-I_c)$; the low-frequency AC response remains linear at very low levels ($b_0\lesssim1\mu$T), but the apparent penetration depth is much smaller than $\delta_f$, and moreover independent of frequency \cite{Campbell69}, etc.\ \dots On the other hand, there are strong divergences of interpretation about the exact nature of the critical currents and of the pinning process, as well as about the location of the pinning sites (surface or bulk). We present in this paper a measurement of the response of a polycrystalline but chemically homogeneous slab to a small step variation of the external field, $\boldsymbol{B}_0+{\bf b}_e(t)$. This experiment demonstrates very directly and strikingly that in this type of sample, called ``soft'' (the distinction between ``soft'' and ``hard'' samples is indeed very important; we discuss it in \S~\ref{modeles}), \emph{the bulk vortex lattice responds freely to an excitation}. In other words, there is no detectable sign of bulk pinning, and the surface defects alone govern the overall response. This result corroborates the critical-state model proposed in 1988 by Mathieu and Simon (MS model) \cite{Mathieu88}, in contradiction with the classical ideas on the nature of pinning developed some thirty years ago. The question actually goes beyond that of the mere distribution of the pinning centres, at the surface or in the bulk, and concerns just as much the very nature of pinning. To show what is at stake, we present in \S~\ref{modeles}, as qualitatively as possible, a brief reminder of the two competing points of view: the classical point of view, which we shall abbreviate as BPM (``bulk pinning model''), on the one hand, and the MS model on the other. We also specify in \S~\ref{modeles} what we mean by a ``soft'' material. The principle of the experiment and the results are presented in \S~\ref{reponse}. Many experiments claim to measure bulk critical current densities $J_c$ (A$/$cm$^2$), and seem to attest to the existence of bulk pinning in all kinds of samples. In fact, quite often, the existence of $J_c$ has been postulated, and the experimental result is interpreted via a model in which $J_c$ is an adjustable parameter. The most common example is that in which $J_c$ is simply calculated as the ratio of a critical current $I_c$ to the cross-section of the sample.
Alternatively, if the experiment is genuinely supposed to measure a position-dependent distribution $J_c$, without excluding {\it a priori} that $J_c$ may vanish in the bulk, a summary interpretation easily leads to the erroneous conclusion that the bulk $J_c$ are significant. Thus van de Klundert {\it et al.} \cite{Klundert78}, studying the response of a slab (Fig.~\ref{slab1}) to a trapezoidal excitation $b_e(t)$, do find a surface critical current, but also a bulk distribution $J_c(x)$ whose contribution to $I_c$ is by no means negligible. Curiously, however, this distribution $J_c(x)$ exhibits a marked peak at the centre $x=l/2$ of the slab; as the authors themselves note, such an anomaly of the critical current density at $x=l/2$ is quite implausible. In our opinion, this is an artefact of the data reduction, due to the response being treated as quasistatic, thereby underestimating the diffusion effects of the magnetic field (cf.\ the end of \S~\ref{reponse}). Let us also mention in this connection the more recent experiments \cite{Abulafia95}, which demonstrate, in platelets in a perpendicular field, a ``pile-of-sand'' field profile, in accordance with Bean's critical state model \cite{Bean62} (CSM). These experiments are often wrongly presented as verifications of bulk pinning. This confusion between the CSM and the BPM (``bulk pinning model'') is fairly common, and we shall return to it in \S~\ref{modeles}. \section{Pinning models}\label{modeles} To fix ideas, and unless otherwise stated, we shall refer to the geometry of Fig.~\ref{slab1}, which we used in our experiments: a slab $(xy)$ in a parallel field $\boldsymbol{B}_0 (0,0,B_0)$, whose dimensions along $x,y,z$ are denoted respectively $l$ (thickness), $L$ (length) and $W$ (width), with $l\ll W<L$. Let us recall the scheme of the classical theories. Note first that, with rare exceptions \cite{Coffey92}, they systematically identify the vortex field $\boldsymbol{\omega}$ with the magnetic field $\boldsymbol{B}$ (local average) \cite{Mathieu88,Hocquet92}. On the other hand, they artificially distinguish, within the current density $\boldsymbol{J}$, a diamagnetic current $\boldsymbol{J}_D$ and a transport current $\boldsymbol{J}_T$ \cite{Campbell72}. Indeed, as Josephson already remarked \cite{Josephson66}, a non-dissipative transport current and a $\boldsymbol{J}_D$ are both local averages of the same microscopic currents ${\bf j}_s$. The driving force that tends to set the vortices in motion is the Lorentz force $\boldsymbol{J}_T\times\varphi_0\boldsymbol{\nu}$ (per unit length of vortex), i.e.\ $\boldsymbol{J}_T\times\boldsymbol{\omega}=\boldsymbol{J}_T\times\boldsymbol{B}$ per unit volume. If there were no pinning, one would have $\boldsymbol{J}_T\times\varphi_0\boldsymbol{\nu}=\eta{\bf v}_L$, hence a flux flow ${\bf v}_L$ at right angles to the transport current (the friction coefficient $\eta$ is related to $\rho_f$ by $\eta=\varphi_0\omega/\rho_f$, and $\boldsymbol{E} = \rho_f\boldsymbol{J}_T$). But it is believed that all kinds of crystal defects can act as pinning centres.
The pinning force on a vortex whose core is close to a pinning site is viewed as the gradient of a free-energy profile depending on the position of the core; it is transmitted to the other vortices, if at all, through the interactions between vortices. These interactions are traditionally described, for small departures from an ideal uniform lattice, by three elastic moduli, $c_{11}$ (compression), $c_{44}$ (tilt) and $c_{66}$ (shear) \cite{Campbell72}. The pinning forces, whose effect is generally isotropic, are assumed to be able to balance the Lorentz force $\boldsymbol{J}_T\times\boldsymbol{B}$ up to a threshold value $J_T=J_c(B,T)$, called the critical current density, which may also depend on position. Under these conditions, flux flow will start in a slice d$y$ of the slab when $J_T=J_c$ everywhere on that cross-section; the critical current $I_c(y)$, the value of the applied current $I$ at which the first voltage appears across this slice, is therefore identified with the sum of the $J_c$ over the cross-section. In the classical interpretations of pinning and critical currents, a strong contribution of the surface is sometimes considered, in the sense that a high density of pinning sites may be located at the surface, or at least within a small depth (a few microns), simply because of the surface treatment or the cutting technique. But whether their concentration is low or high, the absence of pinning centres in the bulk is never contemplated; this is why we characterize the classical models by the term BPM. If the pinning centres in the bulk are too dilute to pin the vortices one by one, the effect of the shear modulus $c_{66}$ (however small) is invoked \cite{Campbell72} to explain why the whole lattice is not set in motion immediately at the slightest applied current. Thus the disappearance of any critical current along a so-called irreversibility line $B^*(T)$ in the superconducting cuprates is attributed to a melting of the vortex lattice, which would indeed suppress any shear effect. In other cases, on the contrary, the concentration of pinning centres in the bulk is assumed to be high, leading to numerous distortions of the vortex lattice in the bulk, sometimes to the point of turning it into a vortex glass \cite{Blatter94}. Before stating some difficulties of the BPM, let us recall what is observed when the DC current applied to the slab is overcritical. For $I\!>\!I_c(y)$, the voltage d$V$ across the slice d$y$ follows a linear law $dV\!=\!dR_f(I\!-\!I_c(y))$ where $dR_f\!=\!\rho_fdy/Wl$. Since it is practically impossible to obtain a perfectly homogeneous distribution of defects, whatever the model invoked, it is clear that $I_c(y)$ is not constant over the length $L$ of the slab but varies over an interval $(I_c^\prime,I_c^{\prime\prime}$), so that the $V$-$I$ characteristic of the slab is a sum of elementary linear characteristics, which gives: $V=0$ up to $I_c^\prime$, a curved characteristic from $I_c^\prime$ to $I_c^{\prime\prime}$, and $V=R_f(I-I_c)$ beyond $I_c^{\prime\prime}$, with $I_c=\langle I_c(y)\rangle$, the average over $L$. This practical remark will come up again in our discussion of \S~\ref{reponse}.
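As an illustration, the short numerical sketch below (our own; the spread of $I_c(y)$ and the elementary flux-flow resistance are assumed values of a realistic order of magnitude) shows how the sum of elementary linear characteristics rounds off the knee of the $V$-$I$ curve between $I_c^\prime$ and $I_c^{\prime\prime}$: \begin{verbatim}
import numpy as np

def slab_voltage(I, Ic_y, dRf):
    # V(I) as the sum of elementary characteristics along the length:
    # dV = dRf*(I - Ic(y)) for I > Ic(y), and dV = 0 otherwise.
    return np.sum(dRf * np.clip(I - Ic_y, 0.0, None))

Ic_y = np.linspace(20.1, 20.7, 100)  # assumed spread of local Ic (A)
dRf = 1.0e-6                         # assumed elementary resistance (Ohm)
for I in (20.0, 20.4, 21.5):         # below, inside and above the knee
    print(I, slab_voltage(I, Ic_y, dRf))
\end{verbatim}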
Let us also recall that, alongside this DC flux-flow voltage, a noisy voltage $\delta V(t)$ appears (typically $10^{-8}$--$10^{-11}$ V$/$(Hz)$^{1/2}$ in the range 0--10 kHz), which everyone agrees results from the more or less irregular motion of the vortices in the vicinity of the defects. An analysis of the classical theories and a confrontation with experiment show fairly quickly that the BPM is too crude a model, and that it is unable to account coherently for the whole body of experimental results. We do mean the whole body of experimental results. Recall indeed that this debate on the nature and location of the critical currents has, as far as we are concerned, been going on for a long time. Various experiments \cite{Hocquet92,Placais94,Mathieu93,LuetkeEntrup97}, some of them very old \cite{Thorel72b,Thorel73}, call into question the notion of a critical current density and the classical pinning mechanisms in transport phenomena. There is obviously no question of going over again here, in detail, all the results and arguments we have accumulated on the subject; but it is clear that we already have enough results to assert that the BPM is refuted in a whole class of ``soft'' samples, be they conventional alloys, pure metals, or untwinned YBCO crystals (see below). However, we are well aware that notions dating back more than thirty years are not so easily challenged. This is why we propose here an experiment which is, after all, only one more experiment, but which has the virtue of being very simple and demonstrative, and above all of lending itself to an interpretation independent of any formalism tied to a particular pinning model. Let us briefly take up four examples:\\ {\bf i)} In a slab tilted with respect to $\boldsymbol{B}_0$, in a cylinder, or in a sphere, there are very high diamagnetic current densities $\boldsymbol{J}_D$ near the surface ($\sim10^7$--$10^8$ A$/$cm$^2$); why do these currents give rise to no Lorentz force?\\ {\bf ii)} If one compares the $\boldsymbol{J}_c$ (measured as the ratios $I_c/Wl$) obtained with films, platelets and slabs of the same material and of different thicknesses $l$, prepared under the same conditions \cite{Joiner67,Simon94}, one finds that most often $J_c\propto 1/l$, which means, plainly, that $I_c$ is proportional to the perimeter $2W$, or does not change if only $l$ is varied. Hence the idea that one would do better to define a surface critical current density, as $K_c$(A$/$m)$=I_c/2W$. This idea is confirmed by experiments carried out in our laboratory, which make it possible to locate $J_T$, as well as the Joule heating, directly \cite{Hocquet92}.\\ {\bf iii)} The shape of the $V$-$I$ characteristic implies that the pinning force acts as a solid-friction force when the vortex lattice is in motion; the classical models manage, not without difficulty, to explain this circumstance \cite{Campbell72}, but they have never managed, except at the cost of serious inconsistencies \cite{Placais94}, to account for the flux-flow noise.
Experiment shows that there are indeed line-velocity fluctuations $\delta v_L(t)$ in the bulk of the sample, but that, contrary to all expectations, and in complete contradiction with a BPM, they are coherent throughout the sample \cite{Placais94}.\\ {\bf iv)} Everyone acknowledges that the penetration of an electromagnetic wave into a type-II slab, measured by the surface impedance $Z(\Omega)$, is governed by the dynamics of pinning. But, as we have shown recently \cite{LuetkeEntrup97,LuetkeEntrup98}, no BPM is capable of explaining all the qualitative features of the skin effect in the mixed state (size effects), still less of accounting quantitatively for the spectrum $Z(\Omega)$ as one sweeps the radiofrequency range (``depinning transition'') \cite{LuetkeEntrup98}. It also seems symptomatic to us that no BPM has been able to predict even the order of magnitude of the observed critical currents. The MS model, by contrast, taking as defects only the roughness of the surface, can not only predict this order of magnitude \cite{Mathieu88,Hocquet92,Simon94}, but also provides a solution to the problems we have mentioned: the surface character of the $I_c$, the localization at the surface of the $VI_c$ part of the Joule heating $VI$, the origin of the flux-flow noise, the existence of an irreversibility line in anisotropic superconductors \cite{Simon94}, the shape of the spectrum $Z(\Omega)$ in samples as varied as PbIn, Nb, V, YBCO. We also refer to the original articles for the details of the MS model of critical currents, and of the phenomenological theory from which this model derives \cite{Mathieu88,Hocquet92,Simon94}. We shall content ourselves here with giving three essential elements of the MS theory, which, in our opinion, hold the key to all transport problems in type-II superconductors:\\ {\bf i)} Each vortex line must terminate perpendicularly to the surface of the sample, whence the importance of the boundary conditions at the surface (smooth or rough) in any problem of equilibrium or motion of the vortex lattice.\\ {\bf ii)} The vortex lines are not always the field lines, so that $\boldsymbol{\omega}$ and $\boldsymbol{B}$ must be regarded as two independent local variables. The variable conjugate to $\boldsymbol{\omega}$, $\boldsymbol{\varepsilon}=\varepsilon(\omega,T)\boldsymbol{\nu}$, appears as a line tension $\varphi_0\varepsilon$ (J/m) in the MS equation of vortex equilibrium (or of non-dissipation, $\boldsymbol{J}_s\!+\!\mbox{curl}\,\boldsymbol{\varepsilon}\!=\!0$). The distinction between $\boldsymbol{\omega}$ and $\boldsymbol{B}$ introduces additional degrees of freedom, and leads to unexpected equilibrium solutions (describing subcritical states) which escaped the classical theories.\\ {\bf iii)} The classical analogy of local diamagnetism is misleading. A diamagnetic current $\boldsymbol{J}_D$ is a true non-dissipative superconducting current $\boldsymbol{J}_s$ ($=-\mbox{curl}\,\boldsymbol{\varepsilon}$), in the same way as a subcritical $\boldsymbol{J}_T$.
Either of them flows near the surface, within a small depth $\lambda_V$ ($\lesssim\lambda_0\sim1000$ \AA, the London penetration depth at low field). Beyond this depth $\lambda_V$, any difference $\boldsymbol{\omega\!-\!B}$ disappears, so that in the bulk $\boldsymbol{\omega}\!\equiv\!\boldsymbol{B}$. It so happens that the mean magnetization of a perfect body in the mixed state is precisely $-\boldsymbol{\varepsilon}$, but $-\boldsymbol{\varepsilon}$ does not have the primary physical meaning of a local magnetization, nor does $\mu_r$, defined as the ratio $\omega/(\omega+\mu_0\varepsilon)$ \cite{Vasseur97}, that of a true permeability. Let us now specify the distinction we make between ``soft'' and ``hard'' samples \cite{Mathieu88}. Most of the fundamental experiments designed to explore the transport properties of type-II superconductors, whether conventional or high-$T_c$ materials, use chemically rather homogeneous samples, which we call ``soft''. A ``soft'' sample may be a single crystal, or a rolled polycrystalline foil full of crystal defects, but its thermodynamic characteristics, such as $B_{c1}, B_{c2}, \varepsilon$ or $T_c$, are well defined and homogeneous, or at least vary slowly on the mesoscopic scale of the intervortex distance $a$. This definition naturally excludes all the so-called ``hard'' samples (industrial wires, sintered powders, \dots) containing cavities, precipitates, columnar defects, \dots, which introduce true \emph{interfaces} in the bulk on the scale of $a$. In this sense, a twinned anisotropic crystal must be classified among the ``hard'' samples; a twin boundary represents a true interface on the scale of $a$, and beautiful STM experiments \cite{MaggioAprile97} show that such an interface can carry high current densities, $J\sim 10^8$ A/cm$^2$, quite comparable to those that a rough surface can carry in the MS model of the critical state. The presence of interfaces amounts to artificially multiplying the surface effects brought to light in ``soft'' samples. One may recall, for example, that the first superconducting wires manufactured in France had critical currents proportional to their perimeter, or to their radius \cite{Thorel72}, whereas present-day multifilament cables have $I_c$ varying as their cross-section. Just as in optics, where it is preferable to start with the physics of the refracting surface rather than with that of a heap of beads or of cathedral glass, it seems to us that the understanding of pinning and critical currents in ``hard'' samples (admittedly the most useful ones) must begin with the study of the ``soft'' ones. They are the only ones that concern us here. Let us return to an important point of the discussion touched upon at the end of the introduction. The verification of the CSM in slabs in a normal field is often presented as experimental evidence for bulk pinning of the vortices, as if the CSM implied the BPM.
Yet the idea of Bean's CSM \cite{Bean62}, which is independent of the nature and location of the critical currents and can in fact be applied to any model of the critical currents, is simply the following: the penetration of the field (say, an increasing field) is limited by the progressive saturation of the non-dissipative currents, up to their critical value and in the direction of the induced currents, from the outside inwards, or, for a slab, from the periphery towards the centre. Our analysis of the step response (\S~\ref{reponsesymetrique}) is fully consistent with the CSM. Recently, an Israeli team proposed a new CSM for films which takes into account the variation of the critical current density across the film thickness \cite{Prozorov98}. Let us stress that while the elegant technique of small Hall probes does give access to the current distribution in the plane of the slab, it cannot, on the other hand, resolve the problem of the current distribution across the slab thickness \cite{Abulafia95}. \section{Step response}\label{reponse} \subsection{Sample and principle of the experiment}\label{sample} The experiments were performed on a series of polycrystalline lead-indium slabs. This alloy, whose superconducting properties are well known \cite{Farrel69}, has the advantage that it can be prepared in large ingots ($\lesssim 10$ mm in diameter). The PbIn mixture is melted at 360$^\circ$C for several hours under a pressure of $3\times10^{-4}$ mbar of argon, then quenched to room temperature. A progressive anneal of 15 days at a temperature 8$^\circ$C below the melting point ensures good chemical homogeneity of the solid solution Pb$_{1-x}$In$_x$ (for $x\lesssim0.2$). We consider this homogeneity satisfactory if the width of the transition $\Delta B_{c2}$ does not exceed 50 Gauss (i.e.\ $\Delta B_{c2}\lesssim0.01 B_{c2}$), which we can check by a measurement of the transverse voltage in parallel field developed in our laboratory \cite{Mathieu93}. The slabs are obtained by spark erosion, followed or not by rolling or by compression between glass plates. The samples are chemically homogeneous, but let us stress that no special precaution was taken to avoid bulk crystal defects, or to reduce the critical currents; the latter are of a quite standard order of magnitude. To fix ideas, we shall give below the numerical values and explicit results obtained with one and the same Pb$_{0.82}$In$_{0.18}$ slab, of dimensions $l=2.7$ mm, $W=7.2$ mm, and $L=30$ mm, at $T=1.79$ K (i.e.\ 0.26 $T_c$). Its critical field $B_{c2}$ at this temperature is 4750 G, and its normal-state conductivity is $\sigma_n=9.7\times10^6 \;\Omega^{-1}$m$^{-1}$. In the geometry of Fig.~\ref{slab1}, the slab is subjected to a uniform perturbation ${\bf b}_e(t)$ along the same direction $z$ as the main field $\boldsymbol{B}_0$, and possibly to a superposed DC current $I$ in the direction $y$ of the sample length. We systematically used a periodic square-wave excitation $b_e(t)$, of variable amplitude $\pm b_0$ ($b_0\sim$ 1--10 G).
The period, of the order of a few ms, is long compared with the field diffusion time ($\sim 100 \mu$s), so that an equilibrium state is reached in each half-cycle. The corresponding theoretical problem is that of the response to a step, from $-b_0$ to $+b_0$ (or from $+b_0$ to $-b_0$), and we shall consider that it can be treated in {\em one dimension} across the slab thickness $\Delta x = l\sim 1$ mm ($l\ll W,L$): in each case, then, the task is to calculate the profile $b_z=b(x,t)$ in the slab and to deduce from it the induced electric field $e_y=e(x,t)$; the value of the field $e$ on the slab faces is accessible to measurement and can be compared with its theoretical value. To fix ideas, we shall consider the rising step, taking as the time origin, $t=0$, the instant at which the exciting field starts to increase ($b_e(t)=-b_0$ for $t\leq0$). At equilibrium, the main field $\boldsymbol{B}_1$ inside the slab is slightly weaker than $\boldsymbol{B}_0$ because of the surface diamagnetic currents. The field profile $b(x,t)$ represents the departure from this equilibrium, and likewise we call $J$ or $K$ any current density induced by the excitation or associated with the presence of an applied DC current $I$. In the discussion, as in the sketches of Figs.~\ref{reponse2} and \ref{reponse3}, there is no harm in ignoring the equilibrium fields and currents. Since we allow for surface currents $K(0)$ and $K(l)$ on the faces $x=0$ and $x=l$, in the direction $y$ (on the scale of $l$ their penetration depth $\lambda_V$ is negligible), the field $b$ has a discontinuity $\mu_0K$ at each face, whereas $e_y$ is continuous. If we denote by $b(0,t)$ and $b(l,t)$ the values of the field on the faces but inside the sample, we have \begin{eqnarray} \mu_0K(0) &=& b_e(t) + b_I - b(0,t)\qquad ,\nonumber\\ \mu_0K(l) &=& b(l,t) - b_e(t) + b_I\qquad , \label{deux}\end{eqnarray} where $b_I=\mu_0I/2W$ is the field possibly created outside the slab (on the side $x<0$) by an applied current $I$. One measures the induced voltage $V_{ab}$ between two contacts $a$ and $b$ placed a distance $\Delta y=ab=d$ apart on the face $x=0$ (Fig.~\ref{slab1}). If $\Phi=sb_e(t)$ is the flux of $b_e(t)$ through the measuring loop (of equivalent area $s$) assumed to be closed by the segment $ab$, then $V_{ab}=\partial\Phi/\partial t+e(0,t)d$. Once the spurious signal $\partial\Phi/\partial t$, accessible at zero field $B_0=0$, has been subtracted, one obtains the useful signal $e(0,t) d$, which measures the flux entering through the face $x=0$ between $a$ and $b$. To minimize the area of the measuring loop ($s<0.5$ mm$^2$), and hence the spurious signal, the voltage-lead wires (diameter 5/100 mm) are glued onto the sample face. The transient signal $V_{ab}(t)$ ($\sim 100 \mu$V, see Fig.~\ref{pulse4}), reproduced periodically at each rising edge of the square wave, is amplified 1000 times and analysed point by point with a PAR 160 boxcar integrator. Usually, in this type of experiment, the surface field is measured by winding a coil around the sample, with the advantage of multiplying the signal by the number of turns. We have nevertheless preferred the voltage-lead technique, even at the cost of amplifying the signal.
Indeed, the same voltage leads make it possible to measure the DC voltage associated with an applied current $I$, and hence the (mean) critical current $I_c$ between $a$ and $b$, which is an essential datum; a coil would hamper convenient access for the current leads, and does not avoid end effects. Moreover, a coil would take into account the flux entry through both faces of the slab; this does not matter if the faces play symmetric roles and $e(l,t) = -e(0,t)$, but we shall see that an applied current $I$ breaks this symmetry. Since each vortex carries one flux quantum, the voltage $V_{ab}$ measures the number of vortices (per second) entering the slab through the face $x=0$, and through that face only. \subsection{Symmetric response $(I=0)$}\label{reponsesymetrique} Let us first describe the {\em quasistatic} response, i.e.\ the response to a slow oscillation of the field $b_e(t)$ from $-b_0$ to $b_0$, which gives the same limiting field profiles $b(x)$ as before and after the rise of a step (Fig.~\ref{reponse2}). Starting from an equilibrium state, suppose that $b_e(t)$ decreases to $-b_0$, then rises again to $+b_0$. If the surface is able to carry a non-dissipative current $K$, with a critical maximum $K_c$, one expects the screening of the external field variations to be perfect as long as $b_0<b_c=\mu_0K_c$. Note in passing that this conclusion is consistent with the CSM. If $b_0$ exceeds $b_c$, two scenarios can be envisaged for the quasistatic response (Fig.~\ref{reponse2}). Either there is no bulk pinning $(J_c=0)$, and the excess field $b_0-b_c$ penetrates freely and uniformly into the slab as into an ordinary metal (Fig.~\ref{reponse2}b); the internal field then oscillates between two values $\pm b_0^\prime$ with $b_0^\prime=b_0-b_c$. Or there is bulk pinning, say with a uniform $J_c$. In that case the field profile becomes more complicated (Fig.~\ref{reponse2}a). In accordance with the CSM, $J_y=-(1/\mu_0)\partial b/\partial x = \pm J_c$ or 0. Consequently, for not too large excesses $b_0-b_c$, the field penetration is limited to a depth $x_0<l/2$: \begin{equation} x_0 = \frac{b_0-b_c}{\mu_0J_c}\qquad . \label{equation2}\end{equation} Let us now consider, in the first hypothesis $(J_c=0)$, the response to a perfect step of amplitude $b_0>b_c$ (Fig.~\ref{reponse3}b). Induced currents $+K_c$ and $-K_c$ are established immediately on the faces $x=0$ and $x=l$, imposing the boundary conditions $b(0,t) = b(l,t) = b_0^\prime = b_0-b_c$. The response $b(x,t)$ is the solution of the diffusion equation (\ref{diffusionequation}) satisfying these boundary conditions, with the initial profile, at $t=0^+$: $b(0,0)=b(l,0)=b_0^\prime$ and $b(x,0)\equiv-b_0^\prime$ elsewhere (Fig.~\ref{reponse3}b). This solution is expressed analytically by decomposition into Fourier modes for $t\geq0$: \begin{equation} b(x,t)=b_0^\prime\left[1-\sum_{\rm n \; odd}\frac{8}{n\pi}\;\sin\left[\frac{n\pi x}{l}\right]\;\exp\left(-\frac{t}{\tau_n}\right)\right]\quad,\quad\mbox{where}\quad\tau_n=\frac{\mu_0\sigma_fl^2}{n^2\pi^2}\quad; \label{champb}\end{equation} the diffusion time is governed by the longest time constant, $\tau_1\simeq0.1\mu_0\sigma_fl^2$. Fig.~\ref{reponse3}b shows schematically the evolution of the profile of $b$ in the slab.
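The approach of the profile towards the flat limit $b_0^\prime$ is easy to check by evaluating the truncated series (\ref{champb}) numerically, as in the sketch below (our own; the number of retained modes is arbitrary, and the values of $\sigma_f$ and $l$ are those quoted in \S~\ref{reponseasymetrique}): \begin{verbatim}
import numpy as np

mu0, sigma_f, l = 4e-7 * np.pi, 1.76e7, 2.7e-3  # sample values (SI)
tau1 = mu0 * sigma_f * l**2 / np.pi**2          # slowest diffusion mode

def b_profile(x, t, b0p, n_max=199):
    # Truncated Fourier series for b(x,t): ideal step, odd modes only.
    n = np.arange(1, n_max + 1, 2)
    tau_n = tau1 / n**2
    modes = (8.0 / (np.pi * n)) * np.sin(np.pi * np.outer(x / l, n)) \
            * np.exp(-t / tau_n)
    return b0p * (1.0 - modes.sum(axis=1))

x = np.linspace(0.0, l, 201)
for t in (0.1 * tau1, tau1, 5 * tau1):
    print(t, b_profile(x, t, 1.0)[100])  # midplane field approaches b0'
\end{verbatim}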
The field diffusion, well under way at $t=\tau_1$, is practically complete at $t=5\tau_1$ (flat profile $b_0^\prime$, the $t=\infty$ limit of expression (\ref{champb})). From this one deduces the electric field induced on the faces $x=0$ and $x=l$ through $e_y=j_y/\sigma_f=(1/\mu_0\sigma_f)\partial b/\partial x$, in accordance with the diffusion equation (\ref{diffusionequation}), since $-\partial b/\partial t=\partial e/\partial x$: \begin{equation} e(0,t)=-e(l,t)=\frac{8b_0^\prime}{\mu_0\sigma_fl} \sum_{\rm n \; odd}\;\exp\left(-\frac{t}{\tau_n}\right)\quad.\qquad(t>0) \label{champe}\end{equation} In the second hypothesis, that of the BPM, with $J_c=const.$ and a not too large excess $b_0-b_c$, the response sketched in Fig.~\ref{reponse3}a is more complex; it has been computed numerically by Kawashima {\it et al.} \cite{Kawashima78}. The essential difference from the free response (\ref{champb}) is that the characteristic diffusion time, $\tau\sim\mu_0\sigma_fx_0^2$ (see Fig.~9 of Ref.~\cite{Kawashima78}), must now vary with the field penetration depth $x_0$ (Eq.~\ref{equation2}), hence with the step amplitude. The diffusion time should increase as $(b_0-b_c)^2$. We observe nothing of the kind. At sufficiently high field $B_0$, we do find a threshold $b_0=b_c$ beyond which there is a massive penetration of magnetic flux. Strictly speaking, the screening of the excitation below $b_c$ is not perfect, even at the lowest levels, because of a weak, well-known linear skin effect \cite{Campbell69,Alais67}; this phenomenon, strictly, contradicts the CSM. But the associated signals $e(0,t)$ are more than 10 times smaller than our spurious signal. We may therefore neglect them in this experiment, where the threshold $b_c$ remains in practice well marked. The essential experimental fact is that the shape of the transient signal $e(0,t)$ observed for small excesses $b_0-b_c$ remains the same as $b_0$ increases; a change of scale makes it possible to superimpose exactly the signals obtained for different values of $b_0>b_c$. This means that the kinematics of the diffusion is independent of the excitation amplitude, in agreement with equation (\ref{champe}). In other words, there is no detectable variation of the diffusion time, whether quadratic in $b_0\!-\!b_c$ or otherwise, as a BPM would require. Others before us have demonstrated a threshold $b_c$ \cite{Klundert78}, and the possibility of a surface critical current density $K_c$ is not in dispute. But curiously, no author has sought to compare the contribution $2WK_c$ of the surface with the total critical current $I_c=2WK_c+WJ_cl$, measured directly and independently from a DC current-voltage characteristic. Our set-up allows us to measure both $I_c$ and $b_c$, and we have always found $I_c\simeq2WK_c=2Wb_c/\mu_0$, i.e., to the accuracy of the measurements, $J_c=0$. Thus, at 4000 G, we measure on the one hand $I_c=10.2$ A, and on the other hand $b_c=9.0\pm 0.1$ G, which corresponds to a critical current density $K_c=b_c/\mu_0$ close to 7 A/cm, and a critical current $2WK_c=10.3\pm0.1$ A. More quantitatively, one may think of comparing the induced transient voltage, $e(0,t)\;d$, with the theoretical solution (\ref{champe}) of the diffusion equation.
But in our case the rise time of $b_e(t)$ is of the order of 10 $\mu$s; this can be seen directly in the duration of the spurious voltage pulse $s\;db_e/dt$ (Fig.~\ref{pulse4}). Since this rise time is comparable to the diffusion time $\tau_1$ itself, it is clear that the calculation (\ref{champb}) must be corrected if quantitative agreement is hoped for. If $t=0$ is still the instant at which $b_e(t)$ starts to increase, the field penetration is delayed and begins only at the instant $t_d$ at which $b_e(t_d)$ reaches the value $-b_0^\prime + b_c=-b_0+2b_c$, after which $b(0,t)\!=\!b(l,t)$ grows progressively from $-b_0^\prime$ to $+b_0^\prime$. The boundary condition of the ideal finite step, $b(0,t)\!=\!b_0^\prime\left[{\rm Y}(t)\!-\!1\right]$, is formally replaced by an infinite sum of infinitesimal steps shifted in time; the response is then obtained by superposition: \begin{equation} e(0,t)=-e(l,t)=\frac{4}{\mu_0\sigma_fl} \sum_{\rm n \;odd}\;\int_{{t}_d}^t {\frac{db_e}{dt^\prime}\;\exp\left(-\frac{t-t^\prime}{\tau_n}\right)}\quad{\rm d}t^\prime\quad,\qquad(t>t_d) \label{champebis}\end{equation} while $e(0,t<t_d)=0$. Note that with $t_d=0$ and d$b_e/$d$t^\prime=2b_0^\prime\delta(t^\prime)$ in (\ref{champebis}) one indeed recovers equation (\ref{champe}) for an ideal step. It so happens that the shape of the spurious signal $s\;$d$b_e/$d$t$, independent of the sample and of the square-wave amplitude, is fairly well represented by the difference of two exponentials $A(e^{-t/\theta_1}-e^{-t/\theta_2})$, taking $\theta_1=1.6\;\mu$s and $\theta_2=0.8 \;\mu$s. From such a (purely empirical) expression one deduces an expression for d$b_e/$d$t$ (and for $b_e(t)$), which is a convenient intermediary for computing the theoretical response (\ref{champebis}) in all configurations $(B_0,l,b_0$ and $b_c)$: \begin{equation} \frac{db_e}{dt} = \frac{2b_0}{\theta_1-\theta_2} \;\left(\exp\left[-\frac{t}{\theta_1}\right]-\exp\left[-\frac{t}{\theta_2}\right]\right)\quad. \label{phi}\end{equation} We find that the theoretical response corrected in this way describes the measured signal very satisfactorily. Fig.~\ref{pulse4} concerns an example of the asymmetric response observed in the presence of a DC current, but it illustrates well the quality of the quantitative agreement obtained in all configurations. This consistent agreement attests that the diffusion of the field inside the slab, like the vortex motion accompanying it, is neither limited nor hindered by bulk defects.
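The corrected response (\ref{champebis}) with the empirical drive (\ref{phi}) is likewise straightforward to evaluate by discrete convolution, as in the following sketch (our own; the step amplitude $b_0$ and the threshold $b_c$ are assumed values, the other parameters being those quoted in \S~\ref{reponseasymetrique}): \begin{verbatim}
import numpy as np

mu0, sigma_f, l = 4e-7 * np.pi, 1.76e7, 2.7e-3  # sample values (SI)
theta1, theta2 = 1.6e-6, 0.8e-6                 # rise-time constants (s)
b0, bc = 10e-4, 9e-4                            # assumed amplitudes (T)

t = np.linspace(0.0, 1e-4, 2001)
dt = t[1] - t[0]
dbe_dt = (2 * b0 / (theta1 - theta2)) * (np.exp(-t / theta1)
                                         - np.exp(-t / theta2))
be = np.cumsum(dbe_dt) * dt - b0            # b_e(t), rising from -b0
t_d = t[np.searchsorted(be, -b0 + 2 * bc)]  # b_e(t_d) = -b0 + 2*bc

drive = np.where(t >= t_d, dbe_dt, 0.0)     # penetration starts at t_d
e0 = np.zeros_like(t)
for n in range(1, 100, 2):                  # odd diffusion modes
    tau_n = mu0 * sigma_f * l**2 / (n * np.pi)**2
    e0 += np.convolve(drive, np.exp(-t / tau_n))[:t.size] * dt
e0 *= 4.0 / (mu0 * sigma_f * l)             # e(0,t)
print(t_d, e0.max())
\end{verbatim}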
\subsection{Asymmetric response $(I\neq0)$}\label{reponseasymetrique} To confirm our analysis, we calculated and measured the step response in the presence of an applied DC current $I$. This introduces an asymmetry between the two faces, which leads to a curious effect on the diffusion time. Fig.~\ref{reponse2}c shows the quasistatic response in the following scenario. Starting again from equilibrium, a transport current $I<I_c$ is applied in the $y$ direction; the external field becomes $+b_I$ on one side of the slab ($x<0$) and $-b_I$ on the other. Each face carries, in the same sense, a surface current sheet of density $K<K_c \,(b_I<b_c)$. The applied field is then decreased to $-b_0$: the current induced on the face $x=l$ has the same sense as the transport current, so that $K(l)$ increases. If the step amplitude is large enough that $b_I+b_0>b_c$, then $K(l)$ reaches its saturation value $K_c$, while the current density $K(0)$ on the other face decreases and remains subcritical. Under these conditions, the vortices will penetrate (and the field diffuse into) the slab through the face $x=l$ only, and the internal field will decrease down to $-(b_0+b_I-b_c)$. The applied field then rises again to $+b_0$: the roles of the two faces are interchanged; the vortices enter through the face $x=0$, and the internal field levels off at $+(b_0+b_I-b_c)$. Let us now consider the response to the periodic square wave, under the same amplitude conditions: $b_I<b_c$, but $b_I+b_0>b_c$. The internal field oscillates between two flat profiles $\pm(b_0+b_I-b_c)$. The transient $e(0,t)$ at the rise of a step (see Fig.~\ref{reponse3}c) is calculated and corrected as above, simply replacing $b_c$ by $b_c-b_I$ (i.e.\ $b_0^\prime = b_0+b_I-b_c$), but with an important difference related to the asymmetry of the boundary conditions: at $x=0$, nothing is changed, and $b(0,t)=b_e(t)+b_I-b_c$ follows the variation of the applied field from the instant $t=t_d$ at which $b_e(t_d)=-b_0+2(b_c-b_I)$; but at $x=l$ this variation remains screened, and the fact that the vortices do not enter the sample through this face imposes the new condition $e(l,t)\equiv0$ (or $\partial b/\partial x=0$). Now this situation is exactly that of the symmetric response in one half of a slab of thickness $2l$; hence a spectacular effect, never reported before, namely a factor of 4 on the diffusion time, since $\tau_1\propto l^2$. We do observe this lengthening of the diffusion time, and better still a remarkable coincidence between the calculated response and the observed signal (Fig.~\ref{pulse4}). The soundness of the condition $e\equiv0$ on one face is confirmed directly by the absence of any induced voltage on the face $x=0$ during the half-cycles corresponding to a decreasing step. The usual technique of measuring the skin effect with a pick-up coil wound around the sample would not allow one to distinguish the asymmetric roles of the two faces. Let us end with a few remarks on the orders of magnitude and the practical conditions which helped make a relatively simple calculation, based on a 1D diffusion equation, predictive. Take the data of Fig.~\ref{pulse4}. We have already given the dimensions of the slab ($l=2.7$ mm; $W=7.2$ mm); knowing $\sigma_n$, the $V$-$I$ characteristic in the normal state gives the best measurement of the distance $d=ab$ between the voltage leads, namely $d=6.5$ mm. At 3000 G, the slope and the intercept of the linear part of the $V$-$I$ characteristic give, respectively, the flux-flow conductivity $\sigma_f=1.76\times10^7\; \Omega^{-1}$m$^{-1}$ and the critical current $I_c=20.4$ A. From these one deduces the diffusion time constants $\tau_1=\pi^{-2}\mu_0\sigma_f4l^2=65\; \mu$s and $\tau_n=\tau_1/n^2$. As recalled in \S~\ref{reponse}, the critical current measured in this way represents an average value between $a$ and $b$, $I_c(y)$ varying in the present case between $I_c^\prime\simeq20.1$ A and $I_c^{\prime\prime}\simeq20.7$ A. Similarly for $K_c$ and $b_c=\mu_0I_c/2W\simeq17.8\pm0.3$ G. Too large a scatter of $b_c$ could call the one-dimensional calculation into question.
The important point here is not the appearance of terms $\partial^2b/\partial y^2\sim d^2b_c/dy^2$ in a more general 2D diffusion equation, since such terms, as is easily checked, remain negligible compared with $\partial^2b/\partial x^2$. On the other hand, significant 2D effects can result from the fact that, $t_d$ being a function of $y$, the bulk diffusion of the field does not start everywhere at the same time along the slab. We believe this is the case in the experiments of van de Klundert {\it et al.} \cite{Klundert78}, where the excitation $b_e(t)$ is trapezoidal with relatively slow ramps. But for the rapidly rising or falling steps that we use, this spread of $t_d$ is negligible on the scale of the diffusion times; thus, in the configuration of figure~\ref{pulse4} we find $t_d=1.6\pm0.4\;\mu$s. \section{Conclusion}\label{conclusion} The question of the localization of the currents and of the pinning centers, and more fundamentally the understanding of the very nature of ``pinning'', is obviously essential for most applications of superconductors. Moreover, it is clear that the efficiency of bulk defects as pinning sites plays a crucial role in the currently very active physics of the different phases of the vortex lattice in the $(B,T)$ diagram of the superconducting cuprates. Yet, despite a considerable number of studies on the subject, first on conventional materials in the 1960s and early 1970s and more recently, after 1986, on the new materials, we believe that the problem is far from solved. A number of earlier experiments \cite{Hocquet92,Placais94}, and more recent ones \cite{LuetkeEntrup97}, have convinced us of the inefficiency of bulk vortex pinning in a whole class of ``soft'' materials as defined in \S~\ref{modeles}. The experiment presented here is only one more experiment whose conclusions point in the same direction, and whose sole merit is its great simplicity. It assumes no preliminary theory, apart from the elementary consequences of Maxwell's equations; its purpose is therefore essentially pedagogical. It allows the free motion of the vortices in the bulk to be verified quite directly. Let us stress that the geometry of a slab in parallel field is essential; the analysis of the pulse response of a slab in perpendicular field would be much more complicated. Since we simply wanted to propose a simple experimental test, we did not attempt here to vary the sample systematically. The wide variety of samples (alloys, pure metals, YBCO, \ldots) to which our model of a ``soft'' superconductor applies is demonstrated and discussed elsewhere \cite{LuetkeEntrup97,Simon94}. Let us end with two remarks that qualify our conclusion. The issue is not so much to show that the surface plays a more or less important role in pinning; everyone agrees that surface defects contribute to it. In general, however, one simply asks about the relative weight of the bulk and of the surface depending on the treatment of the sample.
What we claim is that there exists a broad category of so-called ``soft'' samples, easy to obtain, for which everything happens as if (within the measurement accuracy, of course) the vortex lattice responded freely in the bulk. Now, it is also very easy to introduce interfaces into the bulk of a sample and thereby make it ``hard''. On the other hand, when we speak of the inefficiency of bulk pinning sites, only transport problems are concerned. What we observe is that crystal defects do not hinder the motion of the vortices. Our conclusion therefore concerns only vortex dynamics (critical currents, flux-flow noise, surface impedance, \ldots). This does not exclude that these same bulk crystal defects may perturb the equilibrium configuration of the vortex lattice, which can be observed by various imaging methods (decoration or neutron diffraction). In particular, we believe that the ``collective pinning'' models, which relate the disorder of the vortex lattice to the statistical distributions of pinning points, are probably all relevant, but that one should refrain from extending these notions to problems of vortex dynamics and critical currents. That, at any rate, is what the experiment suggests.
\section{Logic of the program} The complex scalars in ${\mathcal N}=2$ gauge theories in four dimensions generically admit continuous vacuum expectation values which are not lifted by quantum corrections. This space of vacuum solutions is called the moduli space of vacua, ${\mathcal M}$. Depending on the properties of the low-energy physics, ${\mathcal M}$ is divided into \emph{Coulomb}, \emph{Higgs} and \emph{mixed} branches. We focus on the former and indicate the Coulomb branch (CB) of an ${\mathcal N}=2$ theory by ${\mathcal C}$. The properties of both ${\mathcal M}$ and ${\mathcal C}$ can be formulated in a way that makes no reference to the quantum fields in the gauge theory. Therefore characterizing ${\mathcal N}=2$ field theories via the properties of their moduli space is immediately suitable for studying theories with no known lagrangian formulation, as is the case for the majority of ${\mathcal N}=2$ superconformal field theories (SCFTs), which are the ultimate objective of our study. This note is organized as follows. In this first section, we provide a lightning review of the CB and outline in some detail the logic of the classification program. In the second section, we report some of the most important results which our analysis has thus far produced. We conclude by outlining the directions which we are currently pursuing and identifying the main open questions of this program. \paragraph{Coulomb Branch generalities.} The property defining ${\mathcal C}$ is that the low-energy theory at a generic point is extremely simple: it is just a free ${\mathcal N}=2$ supersymmetric $U(1)^r$ gauge theory with no massless charged states. $r$ is called the \emph{rank} of the theory and coincides with the complex dimensionality of ${\mathcal C}$, ${\rm dim}_\C{\mathcal C}=r$. ${\mathcal C}$ is a singular space and its (complex co-dimension one) singular locus, ${\mathcal V}$, is the locus where charged states become massless. In other words, ${\mathcal V}$ represents precisely the locus where the low-energy physics is less boring and even potentially interesting: it may no longer be free. Interacting scale-invariant physics is often hard to characterize directly, as it typically does not have a useful lagrangian description. But the geometry of ${\mathcal C}$, though singular at this locus, is amenable to analysis, and so provides some information about these interacting scale-invariant theories. The striking fact about CB geometry is that the physics on ${\mathcal V}$ can be studied in a fairly detailed way by studying the theory in the non-singular region, ${\mathcal C}_{\rm reg}:={\mathcal C}\setminus {\mathcal V}$, where the low-energy physics is as simple as it gets (just a bunch of non-interacting ${\mathcal N}=2$ vector multiplets!). This is due to the fact that no globally defined lagrangian description of the low energy ${\mathcal N}=2$ $U(1)^r$ theory is possible, and non-trivial \emph{monodromies} have to be considered to describe the physics on ${\mathcal C}_{\rm reg}$. These are specific elements of the $\mathop{\rm Sp}(2r,\Z)$ \emph{electromagnetic duality group} which depend on the physics at ${\mathcal V}$ and can therefore be used to characterize it. The object which transforms non-trivially under the monodromy group is the vector of \emph{special coordinates} ${\sigma}$, which provides a holomorphic section of an $\mathop{\rm Sp}(2r,\Z)$ bundle over ${\mathcal C}_{\rm reg}$.
The special coordinates also satisfy non-trivial constraints which allow the definition of a K\"ahler metric on ${\mathcal C}_{\rm reg}$ and which can be extended in a non-trivial way to ${\mathcal V}$ (see the {\bf stratification} section below). All this together equips ${\mathcal C}_{\rm reg}$ with a \emph{rigid special K\"ahler} (RSK) structure. ${\mathcal N}=2$ SCFTs live at the origin of scale invariant CBs. Scale invariance makes the study of the geometry in the non-singular locus even simpler and strongly constrains the allowed monodromies. Our program aims at extracting the most information with the minimum (which for ranks greater than one is still substantial) effort, and at characterizing the space of ${\mathcal N}=2$ SCFTs by understanding the properties of the non-singular region of their CB geometries. Other facts further motivate our approach. There is a belief that all interacting ${\mathcal N}=2$ SCFTs have a CB (see the {\bf rank-0} section below), and thus can be captured by our classification method. This should be contrasted with the other branches of the moduli space, where an infinite number of interacting ${\mathcal N}=2$ SCFTs with trivial Higgs and/or mixed branches are known. Also, interestingly, the CB has the property that it is only deformed and not lifted by ${\mathcal N}=2$-preserving relevant deformations. Thus the RG-structure of ${\mathcal N}=2$ theories is immediately visible from the CB geometry. There is a natural way to organize our classification program. First, theories with lower-dimensional CBs are simpler, and in particular lower-rank theories can be reached via RG flows of higher ones but not vice versa (see the {\bf stratification} section below). Thus it is reasonable to study CB geometries in order of their increasing dimensionality.\footnote{The story is different for the Higgs branch. There the natural organizing parameter is the number of symplectic leaves rather than the overall Higgs branch dimension.} Secondly, if conformal invariance is unbroken, scale invariance of the corresponding geometry dramatically constrains its global structure. Quick progress can be made by classifying the \emph{scale invariant limit} of these geometries. In fact for ${\mathcal N}=2$ SCFTs, the $\R^+\times {\rm U}(1)$ action generated by dilatations and the ${\rm U}(1)_R$ symmetry induces a $\C^*$ action on the full CB geometry. In the one-dimensional case, requiring invariance under this $\C^*$ action immediately constrains the set of allowed geometries to be one of only 7 possibilities. We discuss the two-dimensional case below. Unfortunately it is a fact, known since the seminal papers on the topic \cite{Seiberg:1994rs,Seiberg:1994aj}, that many SCFTs share the same scale invariant CB geometry, and therefore this information is not enough to fully characterize the space of ${\mathcal N}=2$ SCFTs. Also, in the conformal limit ${\mathcal C}_{\rm reg}$ is ``so simple'' that not much information on the non-trivial physics of the ${\mathcal N}=2$ SCFT living at the origin of ${\mathcal C}$ can be deduced. Here comes the third, and hardest, step of the story: understanding the possible \emph{mass deformations} of the scale invariant geometries. As we said above, turning on (${\mathcal N}=2$ SUSY-preserving) relevant operators does not lift the CB; it instead deforms it in a precise manner.
The different possible deformations thus give different continuous families of CB geometries (depending on the mass parameters) which can lift the degeneracy between distinct SCFTs which share the same scale-invariant CB geometry. In the rank-1 case, this deformation information can be encoded via the \emph{deformation pattern} of the scale invariant geometry. If refined with the deformation pattern, the initial scale invariant CB geometry data almost uniquely characterizes an ${\mathcal N}=2$ SCFT.\footnote{It is known that this data is not enough to distinguish discretely gauged theories.} In summary, our classification program then consists in picking an increasing CB complex dimension, determining the allowed scale invariant geometries, and then understanding their possible mass deformations. \input{tablePaperIII} \paragraph{Rank-0 theories.} An ${\mathcal N}=2$ theory with no CB is a rank-0 theory, and there is a belief that no interacting rank-0 ${\mathcal N}=2$ SCFT exists. This is largely based on the lack of counter-examples and could be a consequence of a \textit{lamp post effect}; many techniques to study ${\mathcal N}=2$ SCFTs are based on assuming the existence of the CB. There is also some further evidence from our rank-1 classification (described more below). In carrying out this classification we explicitly assume that no interacting rank-0 SCFTs exist. If that were not the case, and a rank-0 SCFT with small enough central charges did instead exist, our results would be modified in a dramatic way.\footnote{For more details on this point see the discussion in section 5 of \cite{Argyres:2015gha}.} Instead of predicting the existence of a total of 28 theories,\footnote{The exact number of rank-1 theories has been the source of some confusion, particularly since each of the summary tables in \cite{Argyres:2015ffa,Argyres:2015gha,Argyres:2016xua,Argyres:2016yzz,Argyres:2016xmc} seems to report contradicting results. 28 is the number of Coulomb branch geometries of non-discretely gauged theories which certainly exist.} the number would be close to a hundred, and those new rank-1 theories would have a CB and would therefore be detectable with a multitude of methods. The fact that our classification appears to be complete and that there is no sign of the existence of these extra theories therefore provides evidence supporting the \textit{non-existence of rank-0 theories} conjecture, at least within a certain range of central charges. By extending our systematic classification to higher ranks, we can considerably strengthen this indirect evidence. \section{Review of important results} \begin{wrapfigure}{L}{0.50\textwidth} \includegraphics[width=.35\textwidth]{knot3.pdf} \caption{Depiction of an $L_{(1,2)}(0,3,0)$ torus link consisting of the red, orange, and yellow circles.} \label{knot3} \end{wrapfigure} \paragraph{Full classification of rank-1 geometries.} In a series of papers \cite{Argyres:2015ffa,Argyres:2015gha,Argyres:2016xua,Argyres:2016xmc} the program outlined above was carried out completely in the one complex dimensional CB case, which led to a complete classification of rank-1 theories. The results of our analysis are summarized in table 1 of \cite{Argyres:2016xmc}, which is reported here; see table \ref{tab1}. As discussed extensively in \cite{Argyres:2016yzz}, it is possible to start from some of the theories in table \ref{tab1} and gauge discrete subgroups without breaking ${\mathcal N}=2$ SUSY.
This operation acts non-trivially on the CB but without lifting it, so it produces other rank-1 theories. This explains why table 1 in \cite{Argyres:2016yzz} differs from table \ref{tab1} here. Recently it was shown that all the rank-1 theories can be obtained from six dimensions. In particular, starting from specific 6d (1,0) theories compactified on a $T^2$ and twisted by non-commuting (flavor) holonomies, it is possible to obtain the theories sitting at the top of each one of the series in table \ref{tab1} \cite{Ohmori:2018ona}. The rest can be obtained by turning on specific subsets of their mass deformations. \cite{Apruzzi:2020pmv} discusses instead the construction of all entries in table \ref{tab1} in F-theory. To the best of our knowledge, all the known rank-1 ${\mathcal N}=2$ SCFTs are captured by our analysis. \paragraph{Understanding the singularity structure of rank-2 geometries.} The analysis of scale invariant CB geometries at complex dimension two is already considerably more challenging than the rank-1 case outlined above. In \cite{Argyres:2018zay} we were able to show that the $\C^*$ action, along with a basic assumption to avoid pathological behavior of the CB (such as singularities dense in ${\mathcal C}$),\footnote{Though we feel strongly that such behaviors are unphysical, we have thus far been unable to prove it \cite{Argyres:2018zay}.} dramatically constrains the topology of ${\mathcal V}_{\rm rank-2}$. Specifically we showed that ${\mathcal V}_{\rm rank-2}\cap S^3$, where the intersection with the three sphere $S^3$ is taken to get rid of the contractible direction corresponding to the scaling action, is in general a $(p,q)$ torus \text{$n$-link}, perhaps with additional unknots \cite{Argyres:2018zay}; see fig \ref{knot3}. The set of allowed $(p,q)$ is strongly restricted by the $\mathop{\rm Sp}(4,\Z)$ monodromy structure and only a finite set of values is allowed. Yet the number, $n$, of components of the link is not constrained in any obvious way by the individual monodromies. In particular $n$ could be unbounded, therefore leading to an infinite set of topologically inequivalent CB geometries. A possible restriction could come from studying the whole monodromy group, and not just individual monodromy elements. The former provides a representation of the fundamental group of the smooth part of the CB, ${\mathcal C}\setminus {\mathcal V}$, and thus is sensitive to global data. The fundamental group of the spaces of interest for rank-2 was computed recently \cite{Argyres:2019kpy}. \paragraph{Metric vs. complex singularities.} The singularities on ${\mathcal V}$ can occur in two types: \textit{metric} singularities, ${\mathcal V}_{\rm metric}$, and singularities in the \textit{complex structure}, ${\mathcal V}_{\rm cplx}$. At ${\mathcal V}_{\rm metric}$, the CB as an algebraic (projective) variety is perfectly fine though the metric structure is non-analytic.\footnote{The CB is still a metric space at these points, but does not have a well-defined Riemannian metric. Think of the tip of a 2-dimensional cone as an example.} ${\mathcal V}_{\rm cplx}$ is instead the set of points at which the CB is singular as an algebraic variety. The latter type were once believed not to occur, but counter-examples were pointed out in \cite{Bourget:2019phe,Argyres:2018wxu}. The physical interpretation of ${\mathcal V}_{\rm metric}$ and ${\mathcal V}_{\rm cplx}$ is considerably different. ${\mathcal V}_{\rm metric}$ occurs where charged states become massless.
This can only happen when the BPS lower bound on their mass vanishes, which restricts ${\mathcal V}_{\rm metric}$ to be complex co-dimension one in ${\mathcal C}$. The locus of complex singularities ${\mathcal V}_{\rm cplx}$ is generically a proper subvariety of the locus of metric singularities. ${\mathcal V}_{\rm cplx}$ occurs when the Coulomb branch chiral ring of the ${\mathcal N}=2$ SCFT is \emph{not freely generated}. In this case new phenomena can take place, like an apparent violation of the unitarity bound \cite{Argyres:2017tmj}. All the known cases of non-freely generated chiral rings arise by a non-trivial action of a discrete group on the CB \cite{Argyres:2019ngz,Bourget:2019phe,Argyres:2018wxu}. \paragraph{Scaling dimensions of Coulomb branch coordinates.} If the CB chiral ring is freely generated (${\mathcal V}_{\rm cplx}=\varnothing$), the scaling dimensions, ${\Delta}_i$, of the CB coordinates are proportional to the ${\rm U}(1)_r$ charges, $r_i$, of the CB operators of the ${\mathcal N}=2$ SCFT: ${\Delta}_i\propto r_i$. Superconformal representation theory only constrains these dimensions to be greater than or equal to one, ${\Delta}_i\ge1$. A relatively elementary argument shows that these scaling dimensions are instead constrained by the low energy electromagnetic monodromy group and that they belong to a finite set of rational numbers, whose size depends on the rank $r$ of the theory. Explicitly, the allowed values for the ${\Delta}_i$ of a rank-$r$ theory are \cite{Argyres:2018urp,Caorsi:2018zsq}: \begin{align} {\Delta} \in \left\{ \frac{n}{m} \ \bigg\vert\ n,m\in\N,\ 0<m\le n,\ \gcd(n,m)=1,\ {\varphi}(n) \le 2r \right\} \nonumber \end{align} where $\varphi(n)$ is the Euler totient function and the maximal dimension allowed grows superlinearly with rank as ${\Delta}_\text{max} \sim r \ln\ln r$. \begin{comment} This remarkably constraining result follows from three main facts about rank-$r$ CB geometries: \begin{itemize} \item[{\bf 1}-] CB special coordinates are not holomorphic functions on ${\mathcal M}_{{\mathcal C}}$ but sections of an $\mathop{\rm Sp}(2r,\Z)$ bundle over ${\mathcal M}_{{\mathcal C}}$. This in particular implies that along a closed loop ${\gamma}$ encircling ${\mathcal V}_{\rm metric}$ they undergo a monodromy transformation $\mathscr{M}_{\gamma}$ and $\mathscr{M}_{\gamma}\in\mathop{\rm Sp}(2r,\Z)$. \item[{\bf 2}-] The CB special coordinates transform homogeneously with weight 1 under the CB $\C^*$ action. This implies that they are eigenstates of the monodromy transformation $\mathscr{M}_{{\gamma}_r}$ associated to a closed loop ${\gamma}_r$ obtained by a one parameter ${\rm U}(1)_r$ action ${\alpha}\in[0,2\pi[$ on ${\mathcal M}_{{\mathcal C}}$. Furthermore their eigenvalues are always phases and those phases depend on the scaling dimensions of the coordinates on ${\mathcal M}_{\mathcal C}$. In general there are many inequivalent such ${\gamma}_r$ loops. \item[{\bf 3}-] The product of each pair of $\mathop{\rm Sp}(2r,\Z)$ eigenvalues is always equal to one. An element is called \textit{elliptic} if all of its eigenvalues are phases. Furthermore, the fact that such phases have to be solutions of characteristic polynomials over the integers tremendously restricts the allowed values and thus the scaling dimensions of the possible CB operators. \end{itemize} \end{comment}
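The finite set above is easy to enumerate explicitly. The following short Python sketch is our own illustration, not part of the original analysis; the scan bound relies on the standard estimate $\varphi(n)\ge\sqrt{n/2}$:

\begin{verbatim}
from fractions import Fraction
from math import gcd

def totient(n):
    """Euler's totient function, via trial-division factorization."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def allowed_dimensions(r):
    """All Delta = n/m with 0 < m <= n, gcd(n,m) = 1 and phi(n) <= 2r.
    Since phi(n) >= sqrt(n/2), scanning n up to 8*r**2 is sufficient."""
    dims = set()
    for n in range(1, 8 * r * r + 1):
        if totient(n) <= 2 * r:
            for m in range(1, n + 1):
                if gcd(n, m) == 1:
                    dims.add(Fraction(n, m))
    return sorted(dims)

print(allowed_dimensions(1))   # 1, 6/5, 4/3, 3/2, 2, 3, 4, 6
\end{verbatim}

The eight rank-1 values comprise the free theory (${\Delta}=1$) together with the seven scale invariant geometries mentioned earlier. If the CB chiral ring is not freely generated, the relation between the scaling dimensions of the CB coordinates and the ${\rm U}(1)_r$ charges of the CB multiplets is less straightforward.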
Often (maybe always) the scaling dimensions are then neither globally nor uniquely defined. \section{Future developments} \paragraph{Finite vs.\ infinite number of allowed geometries.} One of the motivating reasons behind carrying out the program of classifying ${\mathcal N}=2$ SCFTs in four dimensions is the belief that at any given rank only a finite set of such theories exists.\footnote{We are counting families of SCFTs connected by exactly marginal deformations as a single SCFT for the purpose of this counting.} If this belief is correct, it trivially follows that only a finite set of scale invariant CB geometries is \emph{realized} as moduli spaces of consistent physical theories at any given rank. The surprising fact that all the CB geometries allowed at rank 1 are indeed realized motivates instead the authors' belief that in fact only a finite number of scale invariant CB geometries are \emph{allowed}. As mentioned above, our analysis of rank-2 scale invariant CB geometries comes close to showing that this is the case for two complex dimensional CBs, but we fall short of showing that for any given value of $(p,q)$ only a finite number of link components is allowed. If that were achieved it would be an important conceptual result. It is also important to stress that proving the existence of a finite set of scale invariant CB geometries at any given rank is only a necessary condition for showing that at that rank only a finite number of ${\mathcal N}=2$ SCFTs exist. In fact it is logically possible that a given scale invariant geometry admits an infinite set of inequivalent mass deformations and therefore corresponds to an infinite set of physically distinct ${\mathcal N}=2$ SCFTs. \paragraph{Stratification of the Coulomb branch and importance of rank-1 theories.} We have thus far said little about the structure of the singular locus ${\mathcal V}$. If we assume, again, that some pathological behaviors are avoided and in particular that ${\mathcal V}$ is a complex co-dimension one complex subvariety of ${\mathcal C}$,\footnote{${\mathcal V}$ is identified by the zeros of the central charge $Z_Q$, where $Q$ is the electromagnetic charge of a populated BPS state. If $Z_Q$ were a function on ${\mathcal C}$ then it would be easy to prove that ${\mathcal V}$ is indeed a complex subvariety. But $Z_Q$ is instead branched over ${\mathcal V}$, hence the challenge of showing this result in generality.} then it is possible to convincingly argue that ${\mathcal V}$ is itself an RSK variety of one complex dimension less, or more specifically the union of a finite number of such RSK varieties. This result might sound counter-intuitive. But the observation that there cannot be a transition between two inequivalent vacua at zero energy cost implies that it has to be possible to induce a non-zero metric on ${\mathcal V}$ from the ambient space ${\mathcal C}$. This can be done by identifying an appropriate set of complex coordinates on ${\mathcal C}$, $(u^\perp,u^\parallel)$, such that ${\mathcal V}$ is at $u^\perp=0$. The metric is induced by considering the $\partial_{\parallel}\sigma$ and $\partial_{\parallel}\overline{{\sigma}}$ components, which are well-defined on ${\mathcal V}$, where ${\sigma}$ labels the vector of special coordinates. It is also possible to show that the entire RSK structure can be restricted to ${\mathcal V}$, thus providing a consistent lower-dimensional RSK space.
For more details on the argument see \cite{Argyres:2018zay} and in particular \cite{Argyres:2019yyb}, where the stratification of the singular locus is discussed in the context of ${\mathcal N}=3$ theories. This discussion, with some minor but important modifications, carries over to the ${\mathcal N}=2$ case. This powerful result, which parallels the structure of singularities of symplectic varieties, opens exciting perspectives on carrying out the classification of higher-dimensional geometries using a sort of inductive argument starting from the completed rank-1 story. \begin{comment} \begin{itemize} \item[1)] ${\mathcal V}_{\rm metric}$ is a co-dimension one linear subspace of the special coordinates and it therefore inherits a metric by restriction. \item[2)] ${\mathcal V}_{\rm metric}$ is itself a scale invariant $r-1$ dimensional special K\"ahler geometry and therefore it should be identified with an entry in the rank-$(r-1)$ list of scale invariant geometries. \end{itemize} CBs of ${\mathcal N}=2$ SCFTs are in general singular spaces and we will indicate the set of singular points as ${\mathcal V}$. This set has an intricate and constrained structure. \end{comment} \paragraph{Polarization and other ${\mathcal N}=4$ theories.} The RSK structure on ${\mathcal C}$ is richer than we have thus far discussed. In fact the $U(1)^r$ low-energy physics provides a natural integral skew-symmetric pairing on the special coordinates. This arises as follows. States in the low-energy theory are labeled by a set of $2r$ integers, their corresponding electric and magnetic charges, which we will collectively label $Q$.\footnote{Recall that on ${\mathcal C}_{\rm reg}$ these states are massive; the theory there has no massless charged state.} Denote by $\langle Q, Q'\rangle:=Q^T\mathbb{D} Q'$, where $\mathbb{D}$ is an integer non-degenerate skew-symmetric $2r\times 2r$ matrix in canonical form, the pairing induced by the Dirac-Zwanziger-Schwinger quantization condition on the lattice of electric-magnetic charges. Since the special coordinates are dual to the lattice of electric-magnetic charges, $\mathbb{D}$ induces a pairing on them as well. We call $\mathbb{D}$ the \emph{polarization} of the lattice of electric-magnetic charges, and it has the very important property of determining the structure of the electric-magnetic duality group, which is indeed $\mathop{\rm Sp}_\mathbb{D}(2r,\Z)$. If the polarization can be brought to the canonical form $\mathbb{D}=\epsilon\otimes \mathds{1}_r$, where $\epsilon$ is a $2\times2$ antisymmetric matrix, $\mathbb{D}$ is called \emph{principal} and $\mathop{\rm Sp}_\mathbb{D}(2r,\Z)$ reduces to the more standard $\mathop{\rm Sp}(2r,\Z)$. Theories with non-principal polarization have not been studied in any real detail. There is some mild evidence, from the classification of rank-1 geometries, that such theories are indeed relative field theories \cite{Argyres:2015ffa,Caorsi:2019vex}. This result is very preliminary and this exciting subject certainly deserves more study.
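In concrete terms (our own sketch, meant only to fix conventions), the principal polarization and the condition defining $\mathop{\rm Sp}_\mathbb{D}(2r,\Z)$ can be written down directly:

\begin{verbatim}
import numpy as np

eps = np.array([[0, 1], [-1, 0]])              # 2x2 antisymmetric block

def principal_polarization(r):
    """D = eps (tensor) 1_r, the principal Dirac pairing."""
    return np.kron(eps, np.eye(r, dtype=int))

def preserves_pairing(M, D):
    """M is in Sp_D(2r,Z) iff M^T D M = D, i.e. it preserves <Q,Q'>."""
    return np.array_equal(M.T @ D @ M, D)

D = principal_polarization(1)
S = np.array([[0, 1], [-1, 0]])                # rank-1 S-duality element
T = np.array([[1, 1], [0, 1]])
print(preserves_pairing(S, D), preserves_pairing(T, D))   # True True
\end{verbatim}

\subsubsection*{Acknowledgements} We would like to thank our collaborators (listed in the references) as well as O. Aharony, C. Beem, S. Cecotti, M. Caorsi, M. Del Zotto, J. Distler, I. Garcia-Extebarria, S. Giacomelli, A. Hanany, M. Lemos, C. Meneghelli, L. Rastelli, S. Schafer-Nameki, Y. Tachikawa, and T. Weigand for many helpful discussions and insightful comments.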
This work benefited from the 2019 Pollica summer workshop, which was supported in part by the Simons Foundation (Simons Collaboration on the Non-perturbative Bootstrap) and in part by the INFN. The authors are grateful for this support. PCA is supported by DOE grant DE-SC0011784, and MM is supported by NSF grants PHY-1151392 and PHY-1620610. \footnotesize \bibliographystyle{bib}
\section{} Fully 3D magnetohydrodynamic (MHD) simulations of astrophysical phenomena represent a challenge for standard data visualization for scientific purposes, given the amount of processed data and the wealth of scientific information they contain. Recently, the potential of virtual reality (VR) hardware and software began to be exploited for the purposes of scientific data analysis. Moreover, VR has also been adopted in different fields of public outreach and education with excellent outcomes. To this end, YouTube and online multimedia digital stores host several VR titles with high visual impact in the categories of Astrophysics and Space Science. However, the routine scientific use of the VR environment is still in its infancy and requires the development of ad-hoc techniques and methods. In the first half of 2019, we launched \href{http://cerere.astropa.unipa.it/progetti_ricerca/HPC/3dmap_vr.htm}{3DMAP-VR} (3-Dimensional Modeling of Astrophysical Phenomena in Virtual Reality), a project aimed at visualizing 3D MHD models of astrophysical simulations, using VR sets of equipment. The models account for all the relevant physical processes in astrophysical phenomena: gravity, magnetic-field-oriented thermal conduction, energy losses due to radiation, gas viscosity, deviations from proton-electron temperature equilibration, deviations from the ionization equilibrium, cosmic ray acceleration, etc. (e.g. \citealt{2011MNRAS.415.3380O, 2015ApJ...810..168O, 2016ApJ...822...22O, 2017MNRAS.464.5003O}). The workflow to create VR visualizations of the models combines: 1) accurate 3D HD/MHD simulations performed for scientific purposes, using parallel numerical codes for astrophysical plasmas (e.g. the FLASH code, \citealt{for00}, or the PLUTO code, \citealt{2007ApJS..170..228M}) on high performance computing facilities (e.g. CINECA, Bologna, Italy); 2) data analysis and visualization applications (e.g. \href{https://www.harrisgeospatial.com/Software-Technology/IDL}{Interactive Data Language}, \href{https://yt-project.org}{YT project}, \href{https://www.paraview.org}{ParaView}, \href{https://wci.llnl.gov/simulation/computer-codes/visit/}{Visit}, \href{http://www.meshlab.net}{MeshLab}, \href{http://www.meshmixer.com}{MeshMixer}) to realize navigable 3D graphics of the astrophysical simulations and quickly obtain a VR representation of the models. The 3D representations are realized using a mixed technique consisting of multilayer isodensity surfaces with different opacities. Once the 3D graphics are ready, they are uploaded on \href{https://sketchfab.com}{Sketchfab}, one of the largest open access platforms to publish and share 3D virtual reality and augmented reality content. Our VR laboratory includes two Oculus Rift VR sets of equipment and dedicated computers with advanced graphics cards to visualize the models in VR. The laboratory is used to analyze the numerical results in an immersive fashion, complementing the traditional screen displays, and allows scientists to navigate and interact with their own MHD models. At the same time, we use VR in educational and public outreach events in order to visualize invisible radiation, improve the learners' sense of presence and, eventually, increase content understanding and motivation to learn. We realized an excellent synergy between our 3DMAP-VR project and Sketchfab to promote a wide dissemination of results for both scientific and public outreach purposes.
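As a minimal sketch of the multilayer isodensity technique (our own illustration, assuming scikit-image's marching-cubes implementation; the actual pipeline relies on the tools listed above, and the per-layer opacities are assigned later in Sketchfab's editor), one can extract one triangle mesh per density level from a simulation cube and export it in a format Sketchfab accepts:

\begin{verbatim}
import numpy as np
from skimage import measure

def export_isosurfaces(density, levels, spacing, prefix="layer"):
    """Write one Wavefront OBJ mesh per iso-density level."""
    for i, level in enumerate(levels):
        verts, faces, _, _ = measure.marching_cubes(
            density, level=level, spacing=spacing)
        with open(f"{prefix}_{i}.obj", "w") as f:
            for v in verts:
                f.write(f"v {v[0]} {v[1]} {v[2]}\n")
            for tri in faces:
                # OBJ files use 1-based vertex indices
                f.write(f"f {tri[0]+1} {tri[1]+1} {tri[2]+1}\n")

# Synthetic shell-like density cube, standing in for an MHD snapshot
x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
rho = np.exp(-((np.sqrt(x**2 + y**2 + z**2) - 0.6) / 0.15) ** 2)
export_isosurfaces(rho, levels=[0.2, 0.5, 0.8], spacing=(2 / 63,) * 3)
\end{verbatim}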
We realized a Sketchfab gallery, \href{https://skfb.ly/6NooE}{"Universe in hands"}, which gathers different models of astrophysical objects and phenomena developed by our team for scientific purposes and published in international scientific journals. More specifically these models describe (see Fig.~\ref{fig:1}): magnetic structures of the solar and stellar coronae (e.g. \citealt{2016ApJ...830...21R}); the interaction between a star and its planet (cf. \citealt{2015ApJ...805...52P}); accretion phenomena in young stellar objects (e.g. \citealt{2011MNRAS.415.3380O}); protostellar jets (e.g. \citealt{2016A&A...596A..99U}); nova outbursts (e.g. \citealt{2010ApJ...720L.195D}); the outcome of supernova explosions (e.g. \citealt{2016ApJ...822...22O}); the interaction of supernova remnants with the inhomogeneous surrounding environment (e.g. \citealt{2015ApJ...810..168O}); the effects of cosmic ray particle acceleration on the morphology of supernova remnants (e.g. \citealt{2012ApJ...749..156O}). In addition we created a second gallery, \href{https://skfb.ly/6OzCU}{"The art of astrophysical phenomena"}, which collects artists' views of astrophysical phenomena for public outreach purposes. The two galleries are publicly available and continuously updated to include new models. \begin{figure}[th!] \begin{center} \includegraphics[width=18cm]{figure.ps} \caption{Examples of 3D models uploaded in the Sketchfab gallery \href{https://skfb.ly/6NooE}{"Universe in hands"}.\label{fig:1}} \end{center} \end{figure}
\section{Introduction} Let $A$ be an associative unital algebra projective over a commutative ring $k$. The Hochschild cohomology $k$-modules of $A$ with coefficients in an $A$-bimodule $M$, \[ H^\bullet(A,M)=\bigoplus_{n\geq0} H^n(A,M) \] have been introduced by Hochschild \cite{Hochschild} and extensively studied since then. Operations on cohomology have been defined, such as the cup product and the Gerstenhaber bracket, making it into a Gerstenhaber algebra \cite{Gerstenhaber}. Tradler showed \cite{Tradler} that for symmetric algebras this Gerstenhaber algebra structure on cohomology comes from a Batalin-Vilkovisky operator (BV-operator), and Menichi extended the result \cite{Menichi}. As Tradler mentions, it is important to determine other families of algebras where this property holds. Lambre-Zhou-Zimmermann proved that this is the case for Frobenius algebras with semisimple Nakayama automorphism \cite{Lambre}. Independently, Volkov proved with other methods that this holds for Frobenius algebras in which the Nakayama automorphism has finite order not divisible by the characteristic of the field $k$ \cite{Volkov}. It has also been shown that Calabi-Yau algebras admit a BV-operator \cite{Ginzburg}, and that for a Koszul Calabi-Yau algebra this BV-structure on its cohomology is isomorphic to the one on the cohomology of the Koszul dual \cite{Chen}. More generally, for algebras with duality, a BV-structure is equivalent to a Tamarkin-Tsygan calculus, or differential calculus \cite{Lambre}. The proofs of \cite{Ginzburg}, \cite{Lambre} and \cite{Tradler} have in common the use of Connes' differential \cite{Connes} on homology to define the BV-operator on cohomology. We start by giving an interpretation of Connes' differential in Hochschild cohomology with coefficients in the $A$-bimodule $A^*=Hom_k(A,k)$. The use of $A^*$ as the bimodule of coefficients replaces the inner product which is in force for Frobenius algebras \cite{Lambre}, \cite{Tradler}, as is shown in Lemma 2.1 and Corollary 4.1. For symmetric algebras this induced BV-structure is isomorphic to the one given by Tradler in \cite{Tradler}. In the case of monomial path algebras we give a description of the $A$-bimodule structure of $A^*$ that allows us to construct an $A$-structural map on $A^*$. To the knowledge of the author there is no other BV-operator entirely independent of Connes' differential. \section{Connes' differential} \textit{Connes' differential} is the map $B:HH_n(A) \to HH_{n+1}(A)$ that makes the Hochschild theory of an algebra into a differential calculus \cite{Tamarkin}. It is given by \[ B([a_0 \otimes \cdots \otimes a_n]) = \left[ \sum_{i=0}^n (-1)^{ni} 1 \otimes a_i \otimes \cdots \otimes a_n \otimes a_0 \otimes \cdots \otimes a_{i-1} \right]. \] For an $A$-bimodule $M$ the \textit{dual} $A$-bimodule is denoted $M^*=Hom_k(M,k)$. We consider the canonical $A$-bimodule structure on $M^*$, that is, $(afb)(x)=f(bxa)$ for all $a,b\in A$, all $f\in M^*$ and all $x\in M$. Let \[ \bar{B}: H^{n+1}(A,A^*) \to H^{n}(A,A^*) \] be the map given by \[ \bar{B}([f])([a_1 \otimes \cdots \otimes a_n])(a_0) := \sum_{i=0}^{n} (-1)^{ni} f(a_i \otimes \cdots \otimes a_n \otimes a_0 \otimes \cdots \otimes a_{i-1}) (1). \] It is straightforward to verify that it is well-defined.
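As a quick low-degree illustration (ours, not from the source): for $n=0$ the formula reduces to \[ \bar{B}([f])(a_0) = f(a_0)(1), \qquad [f]\in H^1(A,A^*). \] Writing out the Hochschild $1$-cocycle condition $f(ab)=a\,f(b)+f(a)\,b$ and using the dual action $(afb)(x)=f(bxa)$, one finds \[ f(xa)(1) = f(a)(x) + f(x)(a) = f(ax)(1), \] so $\bar{B}([f])$ indeed defines a class in $H^0(A,A^*)=(A^*)^A$.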
Let \[ \mathfrak{C}: H^n(A,M^*) \to H_n(A,M)^* \] be the morphism \[ \mathfrak{C}([f])([x \otimes a_1 \otimes \cdots \otimes a_n])=f(a_1 \otimes \cdots \otimes a_n)(x), \] for all $a_i \in A$, $i=1,\cdots,n$, all $x\in M$ and all $[f]\in H^{n}(A,M^*)$, see \cite{Cartan}. The evaluation map $ev:H_n(A,M) \to H_n(A,M)^{**}$ can be composed with the $k$-dual of $\mathfrak{C}$ to get a morphism \[ \varphi: H_n(A,M) \to H^n(A,M^*)^* \] which is given by \[ \varphi([x \otimes a_1 \otimes \cdots \otimes a_n])([f]) = f(a_1 \otimes \cdots \otimes a_n)(x). \] For $M=A$ we obtain a morphism $\varphi: HH_n(A) \to H^n(A,A^*)^*$. The proof of the following lemma is straightforward. \begin{lemm} Let $k$ be a commutative ring and let $A$ be an associative and unital $k$-algebra. The following diagram is commutative \[ \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=2em,column sep=2em] { HH_n(A) & HH_{n+1}(A) \\ H^n(A,A^*)^* & H^{n+1}(A,A^*)^*.\\ }; \path[-stealth] (m-1-1) edge node [above] {$B$} (m-1-2) (m-2-1) edge node [above] {$\bar{B}^*$} (m-2-2) (m-1-1) edge node [right] {$\varphi$} (m-2-1) (m-1-2) edge node [right] {$\varphi$} (m-2-2); \end{tikzpicture} \] If $k$ is a field then $\varphi$ is a monomorphism. If $k$ is a field and $HH_n(A)$ is finite dimensional then $\varphi: HH_n(A) \to H^n(A,A^*)^*$ is an isomorphism. \end{lemm} \iffalse \begin{proof} Let $[a_0 \otimes \cdots \otimes a_n] \in HH_n(A)$. For $[f] \in H^{n+1}(A,A^*)$ we have \[ \begin{array}{l} \varphi(B([a_0 \otimes \cdots \otimes a_n])) ([f]) \\ = \varphi \left( \left[ \displaystyle\sum_{i=0}^n (-1)^{in} 1 \otimes a_i \otimes \cdots \otimes a_n \otimes a_0 \otimes \cdots \otimes a_{i-1} \right] \right) ([f]) \\ = \displaystyle\sum_{i=0}^n (-1)^{in} \varphi ( 1 \otimes a_i \otimes \cdots \otimes a_n \otimes a_0 \otimes \cdots \otimes a_{i-1} ) ([f]) \\ = \displaystyle\sum_{i=0}^n (-1)^{in} f(a_i \otimes \cdots \otimes a_n \otimes a_0 \otimes \cdots \otimes a_{i-1} )(1) \\ = \bar{B}([f])(a_1 \otimes \cdots \otimes a_n)(a_0) \\ = \varphi([a_0 \otimes \cdots \otimes a_n])(\bar{B}([f]))\\ = \bar{B}^* \big( \varphi([a_0 \otimes \cdots \otimes a_n]) \big)([f]). \\ \end{array} \] If $k$ is a field, then the evaluation map is a monomorphism and $\mathfrak{C}$ is an isomorphism \cite{Cartan}, hence $\varphi$ is a monomorphism. If in addition $HH_n(A)$ is finite dimensional over the field $k$, the evaluation map on $HH_n(A)$ is an isomorphism and then so is $\varphi$. \end{proof} \fi \section{Batalin-Vilkovisky structure} A \textit{Gerstenhaber} algebra is a triple $\left( \mathcal{H}^\bullet, \cup, [\ ,\ ] \right)$ such that $\mathcal{H}^\bullet$ is a graded $k$-module, $\cup:\mathcal{H}^n \otimes \mathcal{H}^m \to \mathcal{H}^{n+m}$ is a graded commutative associative product and $[\ ,\ ]:\mathcal{H}^n \otimes \mathcal{H}^m \to \mathcal{H}^{n+m-1}$ is a graded Lie bracket which is anti-symmetric, $[f,g] = (-1)^{(|f|-1)(|g|-1)} [g,f]$, satisfies the Jacobi identity \[ [f,[g,h]] = [[f,g],h] + (-1)^{(|f|-1)(|g|-1)} [g,[f,h]] \] as well as the Poisson identity \[ [f,g \cup h] = [f,g] \cup h + (-1)^{(|f|-1)|g|} g \cup [f,h], \] for all homogeneous elements $f,g,h$ of $\mathcal{H}^\bullet$. We denote by $|f|$ the degree of a homogeneous element $f\in\mathcal{H}^\bullet$.
A \textit{Batalin-Vilkovisky} algebra (\textit{BV-algebra}) is a Gerstenhaber algebra $(\mathcal{H}^\bullet,\cup,[\ ,\ ])$ together with a morphism \[ \Delta:\mathcal{H}^{n+1} \to \mathcal{H}^n \] such that $\Delta^2 = 0$ and \[ [f,g] = (-1)^{|f|+1} \big( \Delta(f \cup g) - \Delta(f) \cup g - (-1)^{|f|} f \cup \Delta(g) \big). \] Recall that $H^0(A,M)=M^A=\{m \in M \mid ma=am \mbox{ for all } a\in A \}$ for an $A$-bimodule $M$. \begin{Definition} Let $M$ be an $A$-bimodule. A morphism $\psi: M \otimes_A M \to M$ of $A$-bimodules is called an $A$-\textit{structural map} if it is \textit{associative}, that is \[ \psi(m_1 \otimes \psi(m_2 \otimes m_3)) = \psi(\psi(m_1 \otimes m_2) \otimes m_3) \] for all $m_1,m_2,m_3 \in M$, and $\psi$ is unital in the sense that there is $1_M\in H^0(A,M)$ such that $\psi(1_M\otimes m)=\psi(m\otimes 1_M)=m$ for all $m\in M$. \end{Definition} \begin{Remark} Let $\psi: M \otimes_A M \to M$ be an $A$-structural map. Then the $\cup$-product \[ \cup : H^n(A,M) \otimes H^m(A,M) \to H^{n+m}(A,M\otimes_A M) \] can be composed with $\psi$ to obtain \[ \cup_\psi: H^n(A,M) \otimes H^m(A,M) \to H^{n+m}(A,M), \] that is \[ (f \cup_\psi g) (a_1 \otimes \cdots \otimes a_{n+m}) := \psi\big( f(a_1 \otimes \cdots \otimes a_n) \otimes g(a_{n+1} \otimes \cdots \otimes a_{n+m}) \big). \] Our assumptions on $\psi$ imply that $H^\bullet(A,M)$ is an associative and unital $k$-algebra. \end{Remark} We will denote by $H^\bullet_\psi(A,M)$ the $k$-algebra $H^\bullet(A,M)$ endowed with the $\cup_\psi$-product. In the case $M=A^*$, we have the following. \begin{lemm} Let $A$ be an associative unital $k$-algebra and let $\psi:A^* \otimes_A A^* \to A^*$ be an $A$-structural map. Then $H^\bullet_\psi(A,A^*)$ is a Gerstenhaber algebra. \end{lemm} \begin{proof} Let $d^*$ be the differential on the complex that calculates $H^\bullet(A,A^*)$ and let $f,g\in H^\bullet(A,A^*)$ be homogeneous elements. The following relation is well known, see \cite{Gerstenhaber}, \[ f\cup g - (-1)^{|f||g|}g\cup f = d^*(g) \bar{\circ} f + (-1)^{|f|} d^*(g\bar{\circ} f) + (-1)^{|f|-1} g \bar{\circ} d^*(f), \] where $g \bar{\circ} f (a_1 \otimes \cdots \otimes a_{|f|+|g|-1})$ is by definition \[ \sum_{i=1}^{|g|}(-1)^{j}g(a_1 \otimes \cdots \otimes a_{i-1} \otimes f(a_i \otimes \cdots \otimes a_{i+|f|-1}) \otimes a_{i+|f|} \otimes \cdots \otimes a_{|f|+|g|-1} ), \] for $j=(i-1)(|f|-1)$. If $f$ and $g$ are cocycles, we get that the cup product is graded commutative, and since $\psi$ is $k$-linear we get that the $\cup_\psi$-product is graded commutative. Define the bracket in terms of $\bar{B}$ and the $\cup_\psi$-product as \[ [f,g]_\psi:=(-1)^{(|f|-1)|g|} \big( \bar{B}(f \cup_\psi g) - \bar{B}(f) \cup_\psi g - (-1)^{|f|} f \cup_\psi \bar{B}(g) \big). \] \iffalse and as a consequence we get the graded Jacobi identity \[ [f,[g,h]_\psi]_\psi = [[f,g]_\psi,h]_\psi + (-1)^{(|f|-1)(|g|-1)} [g,[f,h]_\psi]_\psi \] and the graded Poisson identity \[ [f,g \cup_\psi h]_\psi = [f,g]_\psi \cup_\psi h + (-1)^{(|f|-1)|g|} g \cup_\psi [f,h]_\psi. \] \fi Hence the graded $k$-module $H^\bullet_\psi(A,A^*)$ with the $\cup_\psi$-product and the bracket $[\ ,\ ]_\psi$ is a Gerstenhaber algebra. \end{proof} \begin{Theorem} Let $A$ be an associative unital $k$-algebra and let $\psi:A^* \otimes_A A^* \to A^*$ be an $A$-structural map. Then the data $\left(H^\bullet_\psi(A,A^*),\cup_\psi, [\ ,\ ]_\psi,\bar{B}\right)$ is a BV-algebra.
\end{Theorem} \begin{proof} Since the following diagram is commutative \[ \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=2em,column sep=2em] { HH_n(A) & HH_{n+1}(A) & HH_{n+2}(A)\\ H^n(A,A^*)^* & H^{n+1}(A,A^*)^* & H^{n+2}(A,A^*)^*.\\ }; \path[-stealth] (m-1-1) edge node [above] {$B$} (m-1-2) (m-1-2) edge node [above] {$B$} (m-1-3) (m-2-1) edge node [above] {$\bar{B}^*$} (m-2-2) (m-2-2) edge node [above] {$\bar{B}^*$} (m-2-3) (m-1-1) edge node [left] {$\varphi$} (m-2-1) (m-1-2) edge node [left] {$\varphi$} (m-2-2) (m-1-3) edge node [left] {$\varphi$} (m-2-3); \end{tikzpicture} \] we have that $\bar{B}^2=0$. Then $H^\bullet_\psi(A,A^*)$ is a BV-algebra with the bracket defined as in the last lemma. \end{proof} \section{Frobenius and Symmetric algebras} Assume that $A$ is a symmetric algebra, i.e. a finite dimensional algebra with a symmetric, associative and non-degenerate bilinear form $<,>: A\otimes A \to k$, where associative means \[ <ab,c>=<a,bc> \] for all $a,b,c\in A$. The bilinear form defines an isomorphism of $A$-bimodules $Z:A \to A^*$ given by $Z(a)=<a,->$. It is shown in \cite{Tradler} that this defines a BV-operator on Hochschild cohomology, where for $f \in HH^n(A)$ the operator $\Delta f$ is defined by \[ <\Delta f (a_1 \otimes \cdots \otimes a_{n-1}),a_n> = \sum_{i=1}^n (-1)^{i(n-1)}<f(a_i \otimes \cdots \otimes a_n \otimes a_1 \otimes \cdots \otimes a_{i-1}),1>. \] \begin{Corollary} If $A$ is a symmetric algebra, then there is an $A$-structural map $\psi: A^* \otimes_A A^* \to A^*$ such that the BV-algebras $HH^\bullet(A)$ and $H^\bullet_\psi(A,A^*)$ are isomorphic. \end{Corollary} \begin{proof} Let $Z:A \to A^*$ be the isomorphism of $A$-bimodules given by the bilinear form of $A$. We will denote by $Z_*:HH^\bullet(A) \to H^\bullet_\psi(A,A^*)$ the isomorphism induced by composition with $Z$. Then the following diagram is commutative \[ \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=2em,column sep=2em] { HH^n(A) & HH^{n-1}(A)\\ H^n(A,A^*) & H^{n-1}(A,A^*).\\ }; \path[-stealth] (m-1-1) edge node [above] {$\Delta$} (m-1-2) (m-2-1) edge node [above] {$\bar{B}$} (m-2-2) (m-1-1) edge node [left] {$Z_*$} (m-2-1) (m-1-2) edge node [right] {$Z_*$} (m-2-2); \end{tikzpicture} \] Indeed, \[ \begin{array}{l} (\bar{B} \circ Z_*)([f]) (a_1 \otimes \cdots \otimes a_{n-1})(a_0) \\ = \bar{B}(Z \circ f)(a_1 \otimes \cdots \otimes a_{n-1})(a_0) \\ = \sum_{i=0}^{n-1} (-1)^{(n-1)i} (Z\circ f)(a_i \otimes \cdots \otimes a_{n-1} \otimes a_0 \otimes \cdots \otimes a_{i-1}) (1) \\ = \sum_{i=0}^{n-1} (-1)^{(n-1)i} <f(a_i \otimes \cdots \otimes a_{n-1} \otimes a_0 \otimes \cdots \otimes a_{i-1}),1> \\ = <\Delta f(a_1 \otimes \cdots \otimes a_{n-1}),a_0> \\ = (Z_* \circ \Delta)([f]) (a_1 \otimes \cdots \otimes a_{n-1})(a_0). \end{array} \] Using the isomorphism given by the product $A \otimes_A A \cong A$, the transport of the algebra structure of $A$ to $A^*$ via $Z$ gives the $A$-structural map \[ \psi = Z \circ (Z \otimes Z)^{-1}: A^* \otimes_A A^* \to A^*. \] This map satisfies the associativity and unity conditions of Remark 3.1, since the product of $A$ is associative and has a unit.
Even more, there are commutative diagrams where the vertical maps are isomorphisms \[ \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=2em,column sep=2em] { HH^n(A) \otimes HH^m(A) & HH^{n+m}(A)\\ H^n(A,A^*) \otimes H^m(A,A^*) & H^{n+m}(A,A^*),\\ }; \path[-stealth] (m-1-1) edge node [above] {$\cup$} (m-1-2) (m-2-1) edge node [above] {$\cup_\psi$} (m-2-2) (m-1-1) edge node [left] {$Z_* \otimes Z_*$} (m-2-1) (m-1-2) edge node [right] {$Z_*$} (m-2-2); \end{tikzpicture} \] \[ \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=2em,column sep=2em] { HH^n(A) \otimes HH^m(A) & HH^{n+m-1}(A)\\ H^n(A,A^*) \otimes H^m(A,A^*) & H^{n+m-1}(A,A^*).\\ }; \path[-stealth] (m-1-1) edge node [above] {$[\ ,\ ]$} (m-1-2) (m-2-1) edge node [above] {$[\ ,\ ]_\psi$} (m-2-2) (m-1-1) edge node [left] {$Z_* \otimes Z_*$} (m-2-1) (m-1-2) edge node [right] {$Z_*$} (m-2-2); \end{tikzpicture} \] Indeed, \[ \begin{array}{lcl} Z_*(f) \cup_\psi Z_*(g) & = & \psi \circ (Z \otimes Z) (f\cup g) \\ & = & Z \circ (Z \otimes Z)^{-1} \circ (Z \otimes Z) (f\cup g) \\ & = & Z \circ (f \cup g) \\ & =& Z_* (f \cup g), \\ \end{array} \] and \[ \begin{array}{l} [Z_*f,Z_*g]_\psi \\ = (-1)^{(|f|-1)|g|} \Big( \bar{B}\big( Z_*f \cup_\psi Z_*g \big) - \bar{B}(Z_*f)\cup_\psi Z_*g - (-1)^{|f|}Z_*f \cup_\psi \bar{B}(Z_*g) \Big) \\ = (-1)^{(|f|-1)|g|} \Big( \bar{B}\big( Z_* (f \cup g) \big) - Z_*(\Delta f) \cup_\psi Z_*g - (-1)^{|f|} Z_*f \cup_\psi Z_*(\Delta g) \Big) \\ = (-1)^{(|f|-1)|g|} \Big( Z_* \Delta(f \cup g) - Z_*(\Delta f \cup g) - (-1)^{|f|} Z_*(f \cup \Delta g) \Big) \\ = (-1)^{(|f|-1)|g|} Z_* \Big( \Delta(f \cup g) - (\Delta f \cup g) - (-1)^{|f|} (f \cup \Delta g) \Big) \\ = Z_* [f,g]. \\ \end{array} \] The commutativity of these diagrams implies that the BV-algebras $HH^\bullet(A)$ and $H^\bullet_\psi(A,A^*)$ are isomorphic. \end{proof} \begin{Remark} Observe that choosing $\Delta:=(Z_*)^{-1} \bar{B} Z_* $ gives $HH^\bullet(A)$ the structure of a BV-algebra. \end{Remark} Assume now that $A$ is a Frobenius algebra, i.e. a finite dimensional algebra with a non-degenerate associative bilinear form $<-,->:A \times A \to k$. For every $a\in A$ there exists a unique $\mathfrak{N}(a) \in A$ such that $<a,->=<-, \mathfrak{N}(a)>$. The map $\mathfrak{N}:A \to A$ turns out to be an algebra automorphism and is called the \textit{Nakayama} automorphism of the Frobenius algebra $A$. Following \cite{Lambre} we consider the $A$-bimodule $A_\mathfrak{N}$ whose underlying $k$-module is $A$ and whose actions are \[ a x b = a x \mathfrak{N} (b). \] Hence the morphism $Z:A_\mathfrak{N} \to A^*$ given by $Z(a)=<a,->$ is an isomorphism of $A$-bimodules \cite{Lambre}. The morphism \[ \mu : A_\mathfrak{N} \otimes_A A_\mathfrak{N} \to A_\mathfrak{N} \] given by $\mu(a \otimes b)= a \mathfrak{N}(b)$ is a morphism of $A$-bimodules since \[ \mu(ab \otimes_A cd) = ab \mathfrak{N}(cd) = ab \mathfrak{N}(c)\mathfrak{N}(d) = ab \mathfrak{N}(c)d = a\mu(b \otimes_A c)d \] and it is well-defined since \[ \mu(a c \otimes b) = \mu(a \mathfrak{N}(c) \otimes b) = a \mathfrak{N}(c) \mathfrak{N}(b) = a \mathfrak{N}(cb) = \mu(a \otimes cb) \] for all $a,b,c,d\in A_\mathfrak{N}$. It is also unital and associative since $\mathfrak{N}(1)=1$, and \[ \begin{array}{lcl} \mu \big( \mu(a \otimes b) \otimes c \big) & = & \mu\big(a \mathfrak{N}(b) \otimes c \big) \\ & = & \mu\big(a \otimes b c \big) \\ & = & \mu\big(a \otimes b \mathfrak{N}(c) \big) \\ & = & \mu\big(a \otimes \mu (b \otimes c) \big).
\\ \end{array} \] Then $\psi=Z \circ \mu \circ (Z \otimes_A Z)^{-1} : A^* \otimes_A A^* \to A^*$ is an $A$-structural map. \begin{Corollary} Let $A$ be a Frobenius algebra with diagonalizable Nakayama automorphism. Then the BV-algebras $H^\bullet_\psi(A,A^*)$ and $HH^\bullet(A,A_\mathfrak{N})$ are isomorphic. \end{Corollary} \begin{proof} The Hochschild cohomology of $A$ with coefficients in $A_\mathfrak{N}$ is isomorphic, see \cite{Lambre}, to its part corresponding to the eigenvalue $1\in k$ of the linear transformation $\mathfrak{N}$, \[ HH^\bullet(A,A_\mathfrak{N}) \cong HH_1^\bullet(A,A_\mathfrak{N}). \] The BV-operator of $HH^\bullet(A,A_\mathfrak{N})$ is the transpose of Connes' differential \[ B_\mathfrak{N}([a_0 \otimes \cdots \otimes a_n]) = \left[ \sum_{i=0}^n (-1)^{in} a_i \otimes \dots \otimes a_n \otimes \mathfrak{N}(a_0) \otimes \cdots \otimes \mathfrak{N}(a_{i-1}) \right], \] with respect to the duality given in \cite{Lambre}. By finite dimensionality arguments, this morphism turns out to be the $k$-dual of $\varphi$, namely \[ \partial:HH_\bullet(A,A^*)^* \to HH^\bullet(A). \] The compatibility conditions for the $\cup$-product and the Gerstenhaber bracket are proved similarly. \end{proof} \iffalse The BV-operator given in \cite{Volkov} is defined in terms of the bilinear form of the Frobenius algebra $A$. Let $\Delta : C^n(A) \to C^{n-1}(A)$ be given by \[ <\Delta f (a_1 \otimes \cdots \otimes a_{n-1}),a_n> = < \sum_{i=1}^n (-1)^{i(n-1)} \Delta_i f(a_1 \otimes \cdots \otimes a_{n-1}),1> \] where \[ \begin{array}{l} <\Delta_if(a_1 \otimes \cdots \otimes a_{n-1}),a_n> \\ = \ <f(a_i \otimes \cdots \otimes a_{n-1} \otimes a_n \otimes \mathfrak{N}(a_0) \otimes \cdots \otimes \mathfrak{N}(a_{i-1})),1>. \end{array} \] Y. Volkov proves that $\Delta$ defines a BV-operator on the elements of $HH^\bullet(A)$ for which \[ \mathfrak{N}^{-1} \left( f(\mathfrak{N}(a_1 \otimes \cdots \otimes a_n)) \right) = f(a_1 \otimes \cdots \otimes a_n). \] \fi \section{Monomial path algebras} Let $Q$ be a finite quiver with $n$ vertices and consider a monomial path algebra $A=kQ/\left<T\right>$, that is, $T$ is a subset of paths in $Q$ of length greater than or equal to 2. We do not require the algebra $A$ to be finite dimensional. We write $s(\omega)$ and $t(\omega)$ for the source and the target of a path $\omega$. A basis $P$ of $A$ is given by the set of paths of $Q$ which do not contain paths of $T$. Let $P^\vee$ be the dual basis of $P$, and for $\omega \in P$ we denote by $\omega^\vee$ its dual. Let $\alpha \in P$ and define $\omega_{/ \alpha}$ as the subpath of $\omega$ that starts in $s(\omega)$ and ends in $s(\alpha)$ if $\alpha$ is a subpath of $\omega$ such that $t(\alpha)=t(\omega)$, and zero otherwise. Let $\beta \in P$ and define $_{\beta \texttt{\symbol{92}} }\omega$ as the subpath of $\omega$ that starts at $t(\beta)$ and ends in $t(\omega)$ if $\beta$ is a subpath of $\omega$ such that $s(\beta)=s(\omega)$, and zero otherwise. The canonical $A$-bimodule structure of $A^*$ is isomorphic to the one given by linearly extending the following action \[ \alpha.\omega^\vee.\beta = (_{\beta \texttt{\symbol{92}} }\omega_{/ \alpha})^\vee. \] Now we construct an $A$-structural map for $A^*$. For $\omega, \gamma \in P$ we define \[ \omega^\vee \cdot \gamma^\vee = \left[ \begin{array}{ll} (\gamma \omega)^\vee & if \ t(\omega)=s(\gamma) \\ 0 & otherwise \\ \end{array} \right] \] and extend by linearity.
Observe that $\gamma \ _{\beta \texttt{\symbol{92}} }\omega = \gamma_{/ \beta} \ \omega$; then \[ (\omega^\vee.\beta) \cdot \gamma^\vee = (_{\beta \texttt{\symbol{92}} }\omega)^\vee \cdot \gamma^\vee = (\gamma \ _{\beta \texttt{\symbol{92}} }\omega)^\vee = (\gamma_{/ \beta} \ \omega)^\vee = \omega^\vee \cdot (\gamma_{/ \beta})^\vee = \omega^\vee \cdot (\beta.\gamma^\vee). \] Therefore, by linearly extending $\psi(\omega^\vee \otimes \gamma^\vee) = \omega^\vee \cdot \gamma^\vee$ we get a morphism of $k$-modules \[ \psi: A^* \otimes_A A^* \to A^*. \] It is a morphism of $A$-bimodules since \[ \alpha . (\omega^\vee \cdot \gamma^\vee) . \beta = \alpha.(\gamma \omega)^\vee.\beta = (_{\beta \texttt{\symbol{92}} }\gamma \omega_{/ \alpha})^\vee = (\omega_{/ \alpha})^\vee \cdot (_{\beta \texttt{\symbol{92}} }\gamma)^\vee = (\alpha.\omega^\vee)\cdot (\gamma^\vee.\beta). \] The morphism $\psi$ is associative since the product of $A$ is associative. Let $e_1,...,e_n$ be the idempotents of $A$ given by the vertices of $Q$. Define $1^*=e_1^\vee+\cdots+e_n^\vee$ and observe that if $\alpha$ is a basis element of $A$ of length greater than or equal to one then \[ 1^*.\alpha = e_1^\vee.\alpha + \cdots + e_n^\vee.\alpha = 0 = \alpha.e_1^\vee + \cdots + \alpha.e_n^\vee = \alpha.1^*. \] Moreover, for every $i=1,...,n$, \[ 1^*.e_i = e_1^\vee.e_i + \cdots + e_n^\vee.e_i = e_i^\vee = e_i.e_1^\vee + \cdots + e_i.e_n^\vee = e_i.1^* \] so we get that $1^*\in H^0(A,A^*)$. Finally, \[ 1^*\cdot \omega^\vee = e_1^\vee \cdot \omega^\vee + \cdots + e_n^\vee\cdot \omega^\vee = e_{t(\omega)}^\vee \cdot \omega^\vee = \omega^\vee \] and analogously $\omega^\vee \cdot 1^* = \omega^\vee$. Therefore $\psi$ is an $A$-structural map. \begin{Corollary} Let $A$ be a monomial path algebra. Then $H_\psi^\bullet(A,A^*)$ is a BV-algebra. \end{Corollary}
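The combinatorics of the structural map can be made concrete in a few lines of code. The following Python sketch is our own illustration, not part of the paper; it reads the product $\gamma\omega$ appearing in the formula above as the path that traverses $\omega$ and then $\gamma$, which is what the condition $t(\omega)=s(\gamma)$ requires, and the subpath operations are exactly as defined in the text.

\begin{verbatim}
from collections import namedtuple

# A basis path: source vertex, target vertex, arrows in traversal order.
# The empty arrow tuple () encodes a trivial path e_i.
Path = namedtuple("Path", "s t arrows")

def mult(p, q):
    """Concatenation: traverse p, then q; None encodes zero."""
    return Path(p.s, q.t, p.arrows + q.arrows) if p.t == q.s else None

def over(omega, alpha):
    """omega_{/alpha}: omega with its terminal subpath alpha removed."""
    k = len(omega.arrows) - len(alpha.arrows)
    if k >= 0 and alpha.t == omega.t and omega.arrows[k:] == alpha.arrows:
        return Path(omega.s, alpha.s, omega.arrows[:k])
    return None

def under(omega, beta):
    """omega with its initial subpath beta removed."""
    k = len(beta.arrows)
    if beta.s == omega.s and omega.arrows[:k] == beta.arrows:
        return Path(beta.t, omega.t, omega.arrows[k:])
    return None

def dual_action(alpha, omega, beta):
    """alpha . omega^v . beta: strip terminal alpha and initial beta."""
    tmp = over(omega, alpha)
    return under(tmp, beta) if tmp is not None else None

def dual_mult(omega, gamma):
    """psi(omega^v (x) gamma^v) = (gamma omega)^v, or zero."""
    return mult(omega, gamma)

# Quiver 1 --a--> 2 --b--> 3, with omega = ab:
a, b = Path(1, 2, ("a",)), Path(2, 3, ("b",))
ab, e2 = mult(a, b), Path(2, 2, ())
assert dual_mult(a, b) == ab          # a^v . b^v = (ba)^v, the path a-then-b
assert dual_action(b, ab, a) == e2    # b.(ab)^v.a = e_2^v
\end{verbatim}

\subsection*{Acknowledgements} I want to thank Claude Cibils and Ricardo Campos for some very useful talks at IMAG, University of Montpellier, during the Rencontre 2018 du GdR de Topologie Alg\'ebrique. The author received funds from CIMAT, CNRS, CONACyT and EDUCAFIN. \bibliographystyle{amsplain}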
\section{Introduction} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{poc_v3.png} \caption{Illustration of the steerable anatomy parsing concept. (a) Input 3D CT scan. (b) Target rib identified by query. (c) Target vertebra identified by query. (d) Identifying pancreas and spleen simultaneously. Best viewed in color.} \label{concept_illus} \end{figure} \label{sec:introduction} Deep learning based methods have been remarkably successful in various medical imaging tasks~\cite{chen2022recent}. It is generally believed that models can analyze almost any anatomy given sufficient well-annotated data, computational burdens aside. It is also desirable to accurately segment and quantify as many anatomies as possible in clinical practice. For instance, clinicians may use automatically segmented ribs around the target tumor as organs-at-risk in stereotactic body radiation therapy (SBRT)~\cite{stam2016validation}. In opportunistic osteoporosis screening using routine CT, the first lumbar vertebra is the common anatomy of interest~\cite{jang2019opportunistic}. From the computing side, precisely parsing ribs or vertebrae into individual instances and assigning each one its anatomical label is required, e.g., ``5th right rib'' or ``1st lumbar vertebra''. Extensive previous work has been proposed over nearly two decades \cite{shen2004tracing, klinder2007automated, staal2007automatic, ramakrishnan2011automatic2, wu2012learning, lenga2018deep, yang2021ribseg, yao2006automated, klinder2009automated, sekuboyina2021verse}, in which complicated heuristic parsing rules were constructed in order to obtain robust and holistically valid anatomy labeling results. This motivates us to develop an alternative Transformer-based~\cite{vaswani2017attention, carion2020end} method that can detect, identify, and segment anatomies in a steerable way, bypassing this daunting procedure. By \textit{steerable} we mean that we can directly retrieve any anatomy of interest. If the 5th right rib is to be analyzed, our framework should output the actual localization and pixel-level segmentation mask of the targeted rib if it exists in the input scan, and an empty result otherwise. This is illustrated intuitively in~\cref{concept_illus}. To achieve this goal, we need an effective deep feature embedding to encode high-level anatomical information including semantic label, location, size, shape, texture, and relations with the surrounding context. The success of Transformers in natural language processing and their application to object detection in computer vision shed light on this embedding learning~\cite{vaswani2017attention, carion2020end}. DETR, the first Transformer-based object detector, models object detection as a set prediction problem and uses bipartite matching to bind query predictions with the ground-truth boxes during training~\cite{carion2020end}. In this work, we exploit intrinsic properties of medical imaging, such as label uniqueness (e.g., there is only one ``5th right rib''), to achieve a steerable detection scheme. An effective weighted adjacency matrix is proposed to guide the bipartite matching process. Once converged, each query embedding is attached to an anatomically semantic label, and the model can run in a steerable way. Thus, our model is named Med-Query, i.e., steerable medical anatomy parsing with query embedding.
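To make the matching idea concrete, here is a minimal sketch (our own illustration; the exact cost terms, weights, and adjacency design used in Med-Query are not reproduced here) of label-aware bipartite matching between query predictions and ground-truth boxes:

\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_queries(pred_boxes, pred_probs, gt_boxes, gt_labels,
                  w_box=5.0, w_cls=1.0):
    """Bind ground-truth 9-DoF boxes to queries by bipartite matching.

    pred_boxes: (Q, 9) center (3) + scale (3) + Euler angles (3)
    pred_probs: (Q, C) per-query class probabilities
    gt_boxes:   (G, 9); gt_labels: (G,) integer anatomical labels in [0, C)
    """
    cost_box = np.abs(pred_boxes[:, None, :] - gt_boxes[None, :, :]).sum(-1)
    cost_cls = -pred_probs[:, gt_labels]        # reward the correct label
    cost = w_box * cost_box + w_cls * cost_cls  # (Q, G) weighted matrix
    rows, cols = linear_sum_assignment(cost)
    return rows, cols   # query rows[i] is matched to ground truth cols[i]
\end{verbatim}

Because each anatomical label is unique, a converged model effectively freezes this assignment, so querying, say, the 5th right rib amounts to decoding a single fixed query embedding.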
Detection in full 3D space is non-trivial, especially for repeated, sequential anatomical structures such as ribs or vertebrae. For rib parsing, the first barrier to overcome is how to represent each elongated and oblique rib with a properly parameterized bounding box. Ordinary axis-aligned 3D bounding boxes enclosing ribs have large overlaps (spatial collisions among nearby boxes), posing additional difficulties for segmentation. To be effective and retain generality, we choose the fully-parameterized 9-DoF bounding box to handle 6D pose and 3D scale/size estimation. This strategy was studied in marginal space learning for heart chamber detection and aortic valve detection~\cite{zheng2007fast, ghesu2016marginal}, and in incremental parameter learning for ileo-cecal valve detection~\cite{lu2008simultaneous}. However, previous methods employed a hierarchical formulation to estimate the 9-DoF parameters in decomposed steps. In contrast, we present a direct and straightforward one-stage 9-DoF bounding box detection strategy. The 9-DoF bounding box detection task poses a huge challenge for modern detectors based on heuristic anchor assignment\cite{ren2015faster, liu2016ssd, lin2017focal}, because of the intractable computational expense of intersection-over-union (IoU) in 3D space. For the same reason, it is hard for the widely used post-processing operation of non-maximum suppression (NMS)\cite{girshick2015deformable} to work efficiently on 9-DoF bounding boxes. In our scenario, however, due to the uniqueness of each anatomy or query, NMS is not essential. Our Med-Query is built upon DETR, a Transformer-based anchor-free detector~\cite{carion2020end}, to predict the box parameters directly, thus circumventing the otherwise necessary IoU and NMS computations. We further extend two other popular anchor-free detectors, CenterNet~\cite{zhou2019objects} and FCOS~\cite{tian2019fcos}, to the 9-DoF setting in our framework to investigate the performance gap between Transformer and convolutional neural network (CNN) architectures. Experimental results show that the Transformer-based method empirically exhibits advantages on anatomy parsing problems, which might be attributed to its capability of modeling long-range dependencies and holistic spatial information. For a comprehensive anatomy parsing task, detection and identification alone are not sufficient; pixel-level segmentation is needed in many scenarios. Segmentation should be performed at high image resolution to reduce image resampling artifacts \cite{isensee2021nnu}. However, keeping the whole 3D image at the (original) high CT resolution is not computationally efficient. Therefore, detecting a compact 3D region of interest (ROI) and then segmenting the instance mask inside the ROI has become the mainstream practice for computationally efficient solutions~\cite{he2017mask, liu2018path, chen2019hybrid, fang2021instances, zheng2007fast, ghesu2016marginal}, which we also follow. In this work, we present a unified computing framework for a variety of anatomy parsing problems, with ribs, spine, and abdominal organs as examples. Our method consists of three main stages: task-oriented ROI extraction, anatomy-specific 9-DoF ROI detection, and anatomy instance segmentation. Input 3D CT scans are processed at different resolutions in the different stages.
The ROI extractor is trained and tested at an isotropic spacing of 3mm, the detector operates at 2mm, and the segmentation network runs at the original CT spacing of the input volume with the finest image resolution. The whole inference latency is around 3 seconds per CT scan on an NVIDIA V100 GPU, outperforming several highly optimized methods\cite{isensee2021nnu, lenga2018deep} on the rib parsing task. If we query only a subset of all ribs instead, the needed computing time can be shortened further. In addition, among the three anatomical structures studied in this work, ribs relatively lack high-quality publicly available datasets, so this instance parsing problem has not been as extensively studied as the other two tasks~\cite{yang2021ribseg}. To address this issue, we curate an elaborately annotated instance-level rib parsing dataset, named RibInst, substantially built upon a previously released chest-abdomen CT dataset for rib fracture detection and classification~\cite{ribfrac2020}. RibInst will be made publicly available to the community, providing a standardized and fair evaluation benchmark for future rib instance segmentation and labeling methods. Our main contributions are summarized as follows. \begin{itemize} \item We present a Transformer-based Med-Query method for simultaneously estimating and labeling 9-DoF ROIs of anatomies in CT scans. To the best of our knowledge, we are the first to estimate the 9-DoF representation in 3D medical imaging using a one-stage Transformer. \item A steerable object/anatomy detection model is achieved by proposing an effective weighted adjacency matrix for the bipartite matching process between queries and ground-truth boxes. This steerable attribute enables a novel medical image analysis paradigm that permits directly retrieving any anatomy instance of interest and further boosts inference efficiency. \item We propose a unified computing framework that generalizes well to a variety of anatomy parsing problems. This framework achieves new state-of-the-art quantitative performance on detection, identification and segmentation of anatomies, compared to other strong baseline methods. \item Last but not least, we will publicly release a comprehensively annotated instance-level rib parsing CT dataset of 654 patients, termed RibInst, built on top of a previous chest-abdomen CT dataset~\cite{ribfrac2020}. \end{itemize} \section{Related Work} \noindent {\bf Detection, identification and segmentation.} Automated parsing techniques have been widely adopted for object/anatomy detection, identification and segmentation in the medical imaging domain~\cite{sharma2010automated, shen2017deep, chen2022recent}. From the application perspective, previous work can be roughly categorized into task-specific and universal methods. In rib parsing, existing methods rely heavily on seed point detection and centerline extraction for rib segmentation and labeling, so their results can be inaccurate or even fail when seed points are missing or mis-located~\cite{shen2004tracing, klinder2007automated, staal2007automatic, lenga2018deep}. Auxiliary representations for parsing ribs have also been explored: skeletonized centerlines~\cite{ramakrishnan2011automatic2, wu2012learning} or point clouds~\cite{yang2021ribseg}.
Similarly, in spine parsing, complicated heuristic rules or sophisticated processing pipelines~\cite{yao2006automated, klinder2009automated, sekuboyina2021verse, wang2021automatic} are often constructed in order to obtain robust labeling results. Recent end-to-end segmentation neural network models demonstrate their versatility and have been widely adopted, such as U-Net~\cite{ronneberger2015u, cciccek20163d}, V-Net~\cite{milletari2016v}, and so on. In particular, nnU-Net~\cite{isensee2021nnu} and its variants have achieved cutting-edge performance in various medical imaging segmentation tasks~\cite{antonelli2022medical, ma2021abdomenct, liu2022universal, bian2022artificial}. However, none of the existing techniques offers steerable 3D object/anatomy detection as query retrieval. In this work, we present a unified computing framework that generalizes well to a variety of anatomy parsing tasks, based on the detection-then-segmentation paradigm in 3D medical image analysis, with benefits in steerability, robustness, and efficiency. \noindent {\bf Detection-then-segmentation} paradigm has dominated the instance segmentation problem in photographic images. He et al.\cite{he2017mask} extend Faster R-CNN\cite{ren2015faster} to Mask R-CNN by adding a branch for predicting the segmentation mask of each detected ROI. Several works stem from Mask R-CNN, e.g., PANet \cite{liu2018path}, HTC \cite{chen2019hybrid} and QueryInst \cite{fang2021instances}. Some other works do not rely on an explicit detection stage \cite{XinleiChen2019TensorMaskAF, XinlongWang2020SOLOSO, BowenCheng2021PerPixelCI}. The detection-then-segmentation paradigm deserves to be exploited further on 3D medical imaging data, where precision and computational efficiency both matter. For rib instance segmentation, previous techniques cannot be applied directly, since ribs are oriented obliquely and lie so close to their neighbours that regular axis-aligned bounding boxes would overlap substantially. Partially inspired by \cite{zheng2007fast} and \cite{lu2008simultaneous}, in which 9-DoF parameterized boxes are employed to localize the target region with a hierarchical workflow that estimates the box parameters progressively, we parametrically formulate the detection target for each rib as a 9-DoF bounding box encompassing the rib tightly. This 9-DoF representation generalizes to other anatomies in 3D medical imaging volumes. Direct 9-DoF box estimation on point clouds has been explored recently \cite{wang2019normalized, lin2021dualposenet, weng2021captra}, demonstrating promising results on this complex pose estimation problem in 3D space. In this work, we tackle a similar task by detecting the 9-DoF bounding box of an anatomy in one stage from 3D CT scans. \noindent {\bf Anchor-free object detection} techniques have been well studied \cite{redmon2016you, HeiLaw2020CornerNetDO, XingyiZhou2019BottomUpOD, zhou2019objects, tian2019fcos, ZeYang2019RepPointsPS, carion2020end, XizhouZhu2021DeformableDD, PeizeSun2021SparseRE}. YOLO\cite{redmon2016you} reports detection performance close to Faster R-CNN\cite{ren2015faster} while running faster. Several anchor-free methods also obtain competitive accuracy with high inference efficiency. Zhou et al.\cite{zhou2019objects} present CenterNet, which models an object as the center point of its bounding box, so that object detection is simplified to key point estimation and size regression.
FCOS\cite{tian2019fcos} takes advantage of all points in a ground-truth box to supplement positive samples and suppresses low-quality detected boxes with its ``centerness'' branch. Apart from CNN based methods, DETR\cite{carion2020end} introduces a new formulation that views object detection as a direct set prediction problem with a Transformer. It learns to produce unique predictions via the bipartite matching loss and utilizes global information with the self-attention mechanism \cite{vaswani2017attention}, which is beneficial for the detection of large or elongated objects. More recently, 3DETR~\cite{misra2021end}, a detection model for 3D point clouds, has shown that the Transformer-based detection paradigm can be extended to 3D space with minor modifications, achieving convincing results with good scalability on higher-dimensional object detection problems. \section{Approach} \subsection{Problem Definition} Anatomy parsing focuses on distinguishing each instance in a cluster of similar anatomical structures with a unique anatomical label, e.g., recognizing each rib in a ribcage, splitting each vertebra in a spine, and delineating each organ in the abdominal region in 3D CT scans. Our work distinguishes itself from existing work by its steerable capability, which is attributed to a Transformer-based 9-DoF box detection module. We briefly describe our overall detection framework as follows. Given a CT scan containing $N$ $(1 \leq N \leq C)$ targets, where $C$ is the maximum number of target anatomies in a normal scan (e.g., $C$ is set to 24 for rib parsing in our work according to \cite{glass2002pediatric}), the whole anatomy set can be expressed as $x=\{x_i = (c_i, x_i^p, x_i^s, x_i^a), 1 \leq i \leq N\}$, where $c_i, x_i^p, x_i^s, x_i^a$ stand for the anatomy instance label, center position component, scale component, and angle component of the correspondingly parameterized bounding box, respectively. The goal of the detection task is to find the best prediction set $\hat{x}$ that matches $x$ for each object/organ/anatomy instance. Unless specified otherwise, the term \textit{position} in the following context refers to the position of the box center. \begin{figure*}[ht] \centering \includegraphics[width=\textwidth]{architecture_v3.pdf} \caption{An instantiation of the Med-Query architecture for rib parsing, which consists of \textbf{A}: a ribcage ROI extractor, \textbf{B}: a steerable 9-DoF parametric rib detector, and \textbf{C}: a stand-alone segmentation head, for robust and efficient rib parsing, i.e., instance segmentation and labeling. In (\textbf{B}) Anatomy Detector, we use an adapted 3D version of ResNet \cite{he2016deep} as the feature extractor. The stacked colored illustrative blocks next to the feature extractor represent the flattened spatial features.} \label{architecture} \end{figure*} \subsection{9-DoF Box Parameterization} To get a compact parameterization for anatomy localization, we compute the 9-DoF bounding box via principal component analysis (PCA)\cite{jolliffe2016principal}, based on the annotated instance-level 3D segmentation mask. Specifically, three steps are performed: 1) computing the eigenvalues and eigenvectors of the covariance matrix of the voxel coordinates of each anatomy mask; 2) sorting the eigenvalues to determine the axes of the local coordinate system from the corresponding eigenvectors; 3) formulating the 9-DoF box in the representation $(x, y, z, w, h, d, \alpha, \beta, \gamma)$ based on the anatomy mask and the local coordinate system.
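To make these three steps concrete, the following is a minimal sketch in Python (an illustration under simplifying assumptions: the mask is a binary \texttt{numpy} array whose voxel indices are mapped to millimeters by the given spacing, the Euler-angle convention is chosen for illustration, and the helper name \texttt{box\_9dof} is ours, not from the released code); the conventions for the returned parameters are detailed in the next paragraph.

\begin{verbatim}
import numpy as np
from scipy.spatial.transform import Rotation

def box_9dof(mask, spacing=(1.0, 1.0, 1.0)):
    # Voxel coordinates of the anatomy mask, scaled to millimeters.
    coords = np.argwhere(mask > 0).astype(float) * np.asarray(spacing)
    center = coords.mean(axis=0)               # box center (x, y, z)
    # Step 1: eigen-decomposition of the coordinate covariance matrix.
    eigvals, eigvecs = np.linalg.eigh(np.cov((coords - center).T))
    # Step 2: sort eigenvalues (descending) to fix the local axes.
    axes = eigvecs[:, np.argsort(eigvals)[::-1]]
    if np.linalg.det(axes) < 0:                # enforce a right-handed frame
        axes[:, -1] *= -1.0
    # Step 3: box scale = mask extent along each local axis (mm), plus
    # Euler angles of the local frame (convention assumed for illustration).
    local = (coords - center) @ axes
    w, h, d = local.max(axis=0) - local.min(axis=0)
    alpha, beta, gamma = Rotation.from_matrix(axes).as_euler(
        "ZYX", degrees=True)
    return (*center, w, h, d, alpha, beta, gamma)
\end{verbatim}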
To be more specific, $(x, y, z)$ stands for the box center and is transformed back into the world coordinate system so that it remains invariant whenever any data augmentation is applied; $(w, h, d)$ represents the box scale along the $x$-, $y$-, and $z$-axes of the local coordinate system, respectively, and is stored in units of millimeters; $(\alpha, \beta, \gamma)$ stands for the box's Euler angles around the $Z$-, $Y$-, and $X$-axes of the world coordinate system, respectively. An intuitive visual illustration can be found in \cref{concept_illus}. It is noteworthy that this universal parameterization can be simplified, e.g., keeping only the pitch angle as the rotation parameter may be sufficient for the vertebra parameterization (\cref{concept_illus}(c)), depending on the specific anatomical characteristics. \subsection{Med-Query Architecture} In this section, we describe our full architecture in detail. Around the steerable anatomy detector, we enhance our algorithm pipeline with a preceding task-oriented ROI extractor and a subsequent stand-alone segmentation head. The term ``Med-Query'' is thus not limited to the detector itself but denotes the whole processing framework. For ease of illustration, a schematic flowchart of a concrete application example, rib parsing, is shown in \cref{architecture}. \noindent {\bf ROI Extractor.} The scales and between-slice spacings of chest-abdomen CT scans vary greatly. To make Med-Query concentrate on the target rib region, a task-oriented ROI extraction that involves only the ribcage area is helpful. Simple thresholding to get the ROI is not robust, especially for input scans with large FOVs, e.g., from neck to abdomen. To obtain an accurate ROI estimation for input CT scans under various situations, we train a simplified U-Net\cite{ronneberger2015u} model to coarsely identify the rib regions in 3D; the proper ROI box is then inferred by calculating the weighted average coordinates and distribution scope of the predicted rib voxels. At the inference stage, the obtained ROI center is critical, while the ROI scale can be adjusted depending on the data augmentation strategies adopted in training. Our ROI extractor is computationally efficient, running at an isotropic spacing of $3mm\times3mm\times3mm$. \noindent {\bf Steerable Anatomy Detector.} DETR \cite{carion2020end} handles object detection as a direct set prediction problem through the conjunction of the bipartite matching loss and a Transformer \cite{vaswani2017attention} with parallel decoding of queries. Queries in DETR exhibit preferences for object spatial locations and sizes from a statistical perspective. If there exists a deterministic binding between the learned query embeddings and anatomical structures, a steerable object parsing paradigm for medical imaging is achievable. Medical imaging differs from natural images mainly in two aspects: 1) the semantic targets inside medical imaging scans are relatively stable with respect to their quantities and absolute/relative positions, even though some local ambiguities exist; 2) the anatomical label for each instance is unique. These two intrinsic properties constitute the cornerstones of developing a steerable anatomy detector. The remaining obstacle is how to obtain a deterministic/steerable binding. We empirically find that the model can be trapped in random bindings between queries and ground-truth boxes due to different initialization parameters.
Two simple examples are illustrated in \cref{binding}(a)(b). We take this a step further and fix the binding outcome, as in \cref{binding}(c). We define the query set as $q=\{q_i, 0 \leq i \leq Q\}$. Note that in our experiments, the total number of queries is set to $C+1$, which also equals the number of output channels of the classification branch for each query prediction, with channel zero as the background class. To explicitly guide the bipartite matching process, we propose a weighted adjacency matrix $M \in \mathbb{R}^{(Q+1) \times (Q+1)}$, which can be interpreted as an index cost term penalizing the index mismatch between queries and ground-truth boxes. A matrix instantiation with 10 queries is shown in \cref{binding}(d); as can be seen, the greater the index gap, the higher the index cost. Therefore, for a query with index $\sigma(i)$ and its prediction $\hat{x}_{\sigma(i)}$, our matching cost on $\left(\hat{x}_{\sigma(i)}, x_{i}\right)$ is defined as: \begin{equation} \label{match_eq} \begin{aligned} \mathcal{C}\left(\hat{x}_{\sigma(i)}, x_{i}\right) = & -\lambda_{c}\hat{p}_{\sigma(i)}\left(c_{i}\right)+ \lambda_{p}\|\hat{x}_{\sigma(i)}^{p} - x_{i}^{p}\|_1\\ &+ \lambda_{s}\|\hat{x}_{\sigma(i)}^{s} - x_{i}^{s}\|_1 + \lambda_{a}\|\hat{x}_{\sigma(i)}^{a} - x_{i}^{a}\|_1 \\ &+\lambda_{m}M[\sigma(i), c_i], \end{aligned} \end{equation} where $c_i$ is the target class label, $\hat{p}_{\sigma(i)}\left(c_{i}\right)$ is the probability prediction for class $c_i$, and $\lambda_c, \lambda_p, \lambda_s, \lambda_a, \lambda_m$ are cost coefficients for classification, position, scale, rotation, and the preset weighted adjacency matrix, respectively. In our implementation, the position and scale values in the image coordinate system are normalized by the image size $[W, H, D]$, so they can be merged into the same branch with a \textit{sigmoid} function as its activation. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{binding_and_matrix.png} \caption{Three types of binding results between queries and ground-truth boxes: (a), (b) and (c). (d) shows an instantiation of the weighted adjacency matrix with 10 queries and 10 ground-truth classes. Queries are arranged in rows while ground-truths are arranged in columns.} \label{binding} \end{figure} The optimal matching $\hat{\sigma}$ is found using the Hungarian algorithm \cite{kuhn1955hungarian}. Our loss function is then defined similarly: \vspace{-2mm} \begin{equation} \label{loss_eq} \begin{aligned} \mathcal{L}(\hat{x}, x)=\frac{1}{N}\sum_{i=1}^{N}\Big[&-\lambda_{c}\log \hat{p}_{\hat{\sigma}(i)}\left(c_{i}\right)+ \lambda_{p}\|\hat{x}_{\hat{\sigma}(i)}^{p} - x_{i}^{p}\|_1 \\ &+ \lambda_{s}\|\hat{x}_{\hat{\sigma}(i)}^{s} - x_{i}^{s}\|_1 + \lambda_{a}\|\hat{x}_{\hat{\sigma}(i)}^{a} - x_{i}^{a}\|_1 \Big], \end{aligned} \end{equation} where the coefficients are kept consistent with those in \cref{match_eq}. For queries matched to the background class, only the classification loss is accounted for. Note that we use only the L1 loss for the geometric 9-DoF box regression; no IoU-related loss \cite{rezatofighi2019generalized} is used in the bipartite matching or loss computation, which differs from \cite{carion2020end} and \cite{misra2021end}. The proposed preset matrix takes effect only at the matching stage.
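As an illustrative sketch of how \cref{match_eq} together with the index cost drives the assignment (not the released implementation; it assumes per-query class probabilities \texttt{prob} of shape $(Q{+}1, C{+}1)$, normalized 9-DoF predictions \texttt{pred} of shape $(Q{+}1, 9)$, ground-truth boxes \texttt{tgt} with labels \texttt{cls}, and a preset matrix \texttt{M}; instantiating \texttt{M} as the absolute index gap is one plausible choice consistent with \cref{binding}(d)):

\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def match(prob, pred, tgt, cls, M,
          lc=1.0, lp=10.0, ls=10.0, la=10.0, lm=4.0):
    # Pairwise cost between every query (rows) and ground truth (columns).
    cost = -lc * prob[:, cls]                              # classification
    for (a, b), lam in (((0, 3), lp), ((3, 6), ls), ((6, 9), la)):
        cost = cost + lam * np.abs(
            pred[:, None, a:b] - tgt[None, :, a:b]).sum(-1)  # L1 terms
    cost = cost + lm * M[:, cls]                           # index cost
    return linear_sum_assignment(cost)   # Hungarian: optimal assignment

# One plausible weighted adjacency matrix: cost grows with the index gap.
idx = np.arange(11)                      # 10 queries plus background
M = np.abs(idx[:, None] - idx[None, :]).astype(float)
\end{verbatim}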
\noindent {\bf Segmentation Head.} The input data for the aforementioned detector is isotropic with a spacing of $2mm\times2mm\times2mm$ per voxel, which is finer than that of the ROI extraction step. To obtain high-accuracy instance segmentation results with affordable training expense, we adopt a stand-alone U-Net\cite{ronneberger2015u} to segment each anatomy independently at a finer spatial resolution in a locally cropped FOV. Specifically, each detected 9-DoF bounding box from our steerable anatomy detector is expanded slightly to crop a sub-volume from the input CT volume at the original spacing. The segmentation head then performs a binary segmentation for all sub-volumes, with the anatomy of interest segmented as foreground and other tissues (including neighbouring anatomies of the same category) as background. After this, all predicted binary masks are merged back, with their corresponding labels and spatial locations, into the original CT scan coordinate system to form the final instance segmentation output. Our detector and segmentation head are designed to be disentangled, providing the flexibility and scalability to employ even more powerful segmentation networks than \cite{ronneberger2015u}. Furthermore, benefiting from the steerable nature of the proposed detector, the segmentation head can be invoked dynamically per detection box on request, to save computational cost if necessary. \subsection{Data Augmentation}\label{sec:data_aug} For better model generalization and training efficiency, we employ both online and offline data augmentation schemes. \noindent {\bf Offline Scheme.} We perform \textit{RandomCrop} along the $Z$-axis to imitate the (largely) varying input FOVs in realistic clinical situations, where an FOV can cover the targeted object/anatomy region completely or only partially. Spatial cropping may truncate some obliquely oriented anatomies, whose 9-DoF parameterization then needs to be recomputed. This computation is time-consuming, so we conduct the operation offline. \noindent {\bf Online Schemes.} Since anatomy position, orientation and scale vary case by case, local spatial ambiguities exist among different CT scans, posing great challenges for identifying each anatomy correctly and precisely. We perform 3D \textit{RandomTranslate}, \textit{RandomScale}, and \textit{RandomRotate} to add more input variation and alleviate this problem. These operations can be conducted efficiently without 9-DoF box recomputation, so they are performed online during the training process. According to \cite{glass2002pediatric}, 5\%-8\% of normal individuals may have only 11 pairs of ribs. This raises an obstacle because we do not have sufficient training data exhibiting this anatomical variation. Hence, for the rib parsing task, we additionally use \textit{RandomErase} to remove the bottom pair of ribs in the training set with a certain probability, to alleviate the possible over-prediction problem. No additional data augmentation is used otherwise. \section{Experiments} \subsection{Datasets} \noindent {\bf RibInst} dataset is constructed from a public CT dataset that was used as the rib fracture evaluation benchmark in the \textit{RibFrac} challenge\cite{ribfrac2020}. The original dataset consists of a total of 660 chest-abdomen CT scans, with 420 for training, 80 for validation and 160 for testing. We comply with this dataset splitting rule and annotate each rib within each CT scan with spatially ordered and unique labels.
There are 6 cases in the validation set with extremely incomplete FOVs, for which it is infeasible to identify the correct rib labels; the RibInst dataset therefore contains the remaining 654 CT scans. Note that a dataset with 490 CT scans annotated with binary rib segmentation masks and labeled rib centerlines was released in \cite{yang2021ribseg}. Compared with it, RibInst is more concise and comprehensive, enabling rib instance segmentation evaluation without centerline extraction. \noindent {\bf CTSpine1K} provides 1,005 CT scans with instance-level vertebra annotations for spine analysis~\cite{deng2021ctspine1k, liu2022universal}. The dataset is curated from four previously released datasets ranging from head-and-neck imaging to colonography scans. It serves as a benchmark for spine-related image analysis tasks, such as vertebra segmentation and labeling. There are 610 scans for training, and the remainder are for validation and testing. \noindent {\bf FLARE22} stands for Fast and Low-resource semi-supervised Abdominal oRgan sEgmentation in CT, a challenge held at MICCAI 2022. The dataset is curated from three abdominal CT datasets. The segmentation targets comprise 13 organs: liver, spleen, pancreas, right kidney, left kidney, stomach, gallbladder, esophagus, aorta, inferior vena cava, right adrenal gland, left adrenal gland, and duodenum. The task is semi-supervised: the training set includes 50 labeled CT scans with pancreas disease and 2,000 unlabeled CT scans with liver, kidney, spleen, or pancreas diseases. There are also 50 CT scans in the validation set and 200 CT scans in the test set with various diseases. For more information, readers are referred to the challenge website\footnote{https://flare22.grand-challenge.org}. \subsection{Implementation Details} Our implementation uses PyTorch \cite{paszke2019pytorch} and is partially built upon MMDetection\cite{chen2019mmdetection}. The detector in Med-Query is trained with the AdamW optimizer \cite{loshchilov2017decoupled} with the initial settings $lr=4\mathrm{e}{-4}$, $\beta_1=0.9$, $\beta_2=0.999$, and a weight decay of 0.1. The total training process spans 1000 epochs: the learning rate is linearly warmed up during the first 200 epochs and reduced by a factor of 10 at the 800th epoch. Training takes about 20 hours on 8 V100 GPUs with a per-GPU batch size of 8. We empirically set the coefficients $\lambda_c=1, \lambda_p=10, \lambda_s=10, \lambda_a=10, \lambda_m=4$. The output of the fourth stage (C4) of an adapted 3D ResNet-50 is used as the spatial features for Transformer encoding. The Transformer component consists of a one-layer encoder and a three-layer decoder, which we empirically find to be sufficient for our task. We follow \cite{misra2021end} in setting a dropout of 0.1 in the encoder and 0.3 in the decoder. The \texttt{MultiHeadAttention} blocks of both have 4 heads. The length of the positional encoding for one axis is set to 128; thus the total embedding dimension is 384 in our case. We use the data augmentations described in \cref{sec:data_aug} to generate diverse training data offline and on the fly: the maximum translation distance from the reference center (i.e., the anatomy mask center in our experiments) is 20mm, the scaling range is [0.9, 1.1], and the rotation range is [-15$^\circ$, 15$^\circ$]. The ROI extractor and segmentation head are trained separately for the sake of training efficiency. All models in our experiments are trained from scratch.
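For reference, the optimizer and learning-rate schedule described above can be expressed in PyTorch roughly as follows (a sketch only; the placeholder \texttt{model} stands in for the detector, and the epoch-wise \texttt{LambdaLR} formulation is our simplification of the linear warm-up and step decay):

\begin{verbatim}
import torch

model = torch.nn.Linear(8, 8)    # placeholder for the Med-Query detector
optimizer = torch.optim.AdamW(model.parameters(), lr=4e-4,
                              betas=(0.9, 0.999), weight_decay=0.1)

def lr_factor(epoch):                    # 1000 training epochs in total
    if epoch < 200:                      # linear warm-up, first 200 epochs
        return (epoch + 1) / 200.0
    return 0.1 if epoch >= 800 else 1.0  # reduce lr by 10x at epoch 800

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer,
                                              lr_lambda=lr_factor)
\end{verbatim}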
\subsection{Performance Metrics} \noindent {\bf Detection and Identification.} To evaluate the 9-DoF box detection and identification performance, we refer to the practice of \textit{VerSe} \cite{sekuboyina2021verse}, a vertebrae labeling and segmentation benchmark. We extend the Identification Rate (Id.Rate) computation in \textit{VerSe} by considering all factors of label accuracy, center position deviation ($\operatorname{P_{mean}}$), scale deviation ($\operatorname{S_{mean}}$) and angle deviation ($\operatorname{A_{mean}}$) to evaluate the 9-DoF box predictions of different methods. More specifically: given a CT scan containing $N$ ground-truth anatomy boxes, with the true location of the $i^{th}$ anatomy box denoted $x_i$ and its predicted box denoted $\hat{x}_{i}$ ($M$ predictions in total), anatomy $i$ is correctly identified if $\hat{x}_{i}$, among $\{\hat{x}_{j}, \forall j \in \{1, 2, \ldots, M\}\}$, is the closest predicted box to $x_i$ and satisfies the following conditions: \begin{equation} \label{det_conditions} \left\{ \begin{array}{r} \|\hat{x}_{i}^{p} - x_{i}^{p}\|_2 < 20\operatorname{mm}, \\ (\sum_{k=1}^{3}\|\hat{x}_{i}^{s_k} - x_{i}^{s_k}\|_1)/3 < 20\operatorname{mm}, \\ (\sum_{k=1}^{3}\|\hat{x}_{i}^{a_k} - x_{i}^{a_k}\|_1)/3 < 10^{\circ}, \end{array} \right. \end{equation} where the superscript $p$ denotes the center position component of the box, the superscript $s_k$ denotes the scale component along axis $k$, and the superscript $a_k$ denotes the Euler angle component rotated around axis $k$. Given a CT scan, if $R$ anatomies are correctly identified, then Id.Rate is defined as $\operatorname{Id.Rate} = \frac{R}{N}$. The center position deviation is computed as $\operatorname{P_{mean}} = (\sum_{i=1}^{R}\|\hat{x}_{i}^{p} - x_{i}^{p}\|_2)/R$. Similarly, the scale deviation is computed as $\operatorname{S_{mean}} = (\sum_{i=1}^{R}\sum_{k=1}^{3}\|\hat{x}_{i}^{s_k} - x_{i}^{s_k}\|_1)/3R$, and the angle deviation is computed as $\operatorname{A_{mean}} = (\sum_{i=1}^{R}\sum_{k=1}^{3}\|\hat{x}_{i}^{a_k} - x_{i}^{a_k}\|_1)/3R$. Note that we compute the average deviations only over the identified anatomies, since unpredicted anatomies and mislocated predictions are already reflected in the Id.Rate index. \noindent {\bf Segmentation.} We employ the widely-used Dice similarity coefficient (DSC), 95\% Hausdorff distance (HD95) and average symmetric surface distance (ASSD) as segmentation metrics. For missing anatomies, HD and ASSD are not defined; we follow the practice of \cite{sekuboyina2021verse} and ignore such anatomies when computing the averages. These missing anatomies are reflected in DSC and Id.Rate. We utilize a publicly available evaluation toolkit\footnote{https://github.com/deepmind/surface-distance} to compute the surface measures. \subsection{Main Results} We report comprehensive evaluation results on RibInst for rib instance detection and segmentation. In addition, we validate the generality of our framework on the CTSpine1K and FLARE22 datasets, and report segmentation performance compared with several strong baseline methods. \begin{table*}[!t] \begin{minipage}{0.65\textwidth} \centering \caption{\upshape Detection and identification results on the test set of RibInst.
*Note that CenterNet and FCOS shown here are substantially adapted and modified to fit our task and are equipped with the proposed 9-DoF box representation.} \label{tbl:quant_det_rib} \vspace{2.5mm} \setlength{\tabcolsep}{0.71mm}{ \begin{tabular}{lcccccc} \toprule[1pt] Methods & \#params(M) & Id.Rate(\%)$\uparrow$ & $\operatorname{P_{mean}}(\text{mm})$$\downarrow$ & $\operatorname{S_{mean}}(\text{mm})$$\downarrow$ & $\operatorname{A_{mean}}(^{\circ})$$\downarrow$ & Latency(s)$\downarrow$\\ \midrule CenterNet*~\cite{zhou2019objects} & 1.3 & 94.9 &\textbf{3.032} &\textbf{2.333} & 2.890 & 2.575 \\ FCOS*~\cite{tian2019fcos} & 19.4 & 94.1 & 3.100 & 2.516 &\textbf{1.786} & 0.745 \\ Med-Query & 9.6 &\textbf{97.0} & 4.702 & 2.784 & 2.185 &\textbf{0.065}\\ \bottomrule[1pt] \end{tabular}} \end{minipage} \begin{minipage}{0.36\textwidth} \centering \caption{\upshape Segmentation results on the validation set and test set of CTSpine1K. The results of nnFormer, nnU-Net, and CPTM are quoted from~\cite{liu2022universal}.} \setlength{\tabcolsep}{1mm}{ \begin{tabular}{l ccc} \toprule[1pt] Methods & DSC(\%)$\uparrow$ & HD(mm)$\downarrow$ & ASSD(mm)$\downarrow$\\ \midrule nnFormer~\cite{zhou2021nnformer} & 74.3 & 11.56 & - \\ nnU-Net~\cite{isensee2021nnu} & 84.2 & 9.02 & - \\ CPTM~\cite{liu2022universal} & 84.5 & 9.03 & - \\ Med-Query & \textbf{85.0} &\textbf{8.68} & 1.16\\ \bottomrule[1pt] \end{tabular}} \label{tbl:quant_ctspine} \end{minipage} \end{table*} \begin{table*}[t] \centering \caption{\upshape Organ-specific DSC (\%) on the validation set of FLARE22. Abbreviations: ``RK''-Right Kidney, ``IVC''-Inferior Vena Cava, ``RAG''-Right Adrenal Gland, ``LAG''-Left Adrenal Gland, ``Gall.''-Gallbladder, ``Eso.''-Esophagus, ``Stom.''-Stomach, ``Duode.''-Duodenum, ``LK''-Left Kidney, ``ens.''-ensemble, ``pre.''-pre-training, ``mDSC''-mean DSC.} \label{tbl:quant_flare} \resizebox{\textwidth}{!}{ \begin{tabular}{lcccccccccccccc} \toprule[1pt] Methods & Liver & RK & Spleen & Pancreas & Aorta & IVC & RAG & LAG & Gall. & Eso. & Stom. & Duode. & LK & mDSC$\uparrow$\\ \midrule nnU-Net~\cite{isensee2021nnu} & 97.7 & 94.1 & 95.8 & 87.2 & 96.8 & 87.8 & 83.0 & 80.1 & 76.5 & 89.2 & 89.9 & 77.1 & 91.1 & 88.2\\ nnU-Net ens. & 97.9 & \textbf{94.8} & 96.0 & 88.6 & \textbf{96.9} & 89.7 & \textbf{83.8} & \textbf{81.9} & 78.7 & \textbf{90.1} & 90.7 & \textbf{79.2} & 92.0 & 89.2\\ \midrule Swin UNETR~\cite{tang2022self} & 96.5 & 91.2 & 94.2 & 84.6 & 93.0 & 86.5 & 75.8 & 74.2 & 77.1 & 79.0 & 88.6 & 76.5 & 88.7 & 85.0\\ Swin UNETR pre. & 96.4 & 92.1 & 95.2 & 88.1 & 93.7 & 86.2 & 79.4 & 79.1 & 79.2 & 81.8 & 89.5 & 79.0 & 87.9 & 86.7\\ Med-Query & \textbf{98.0} & 94.5 & \textbf{97.2} & \textbf{89.0} & 96.6 & \textbf{90.3} & 82.4 & 80.6 & \textbf{86.1} & 87.4 & \textbf{91.5} & 78.7 & \textbf{93.7} & \textbf{89.7}\\ \bottomrule[1pt] \end{tabular}} \end{table*} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{det_visual_v2.pdf} \caption{Detection visualizations show that our 9-DoF predictions enclose the ground-truth rib masks accurately. (a) Normal results in superior-to-inferior view. (b) A limited-FOV case in posterior-to-anterior view. (c) A case with rib adhesions. (d) Only odd-numbered labels are queried. Ground-truth masks are rendered as a visual reference.} \label{fig:det_visual} \end{figure} \noindent {\bf Detection Evaluation on RibInst.} From \cref{tbl:quant_det_rib}, Med-Query achieves the best identification rate of 97.0\% with a moderate number of parameters.
The Transformer's capability of capturing long-range dependencies and holistic information appears well suited to solving this sequence modeling problem. Notably, compared with pure CNN architectures, Med-Query infers at least 10x faster in terms of latency. This may benefit from not relying on the additional upsampling layers in \cite{zhou2019objects} or the dense prediction heads in \cite{tian2019fcos}. Meanwhile, we have some interesting findings on the different characteristics of Med-Query and traditional object detectors \cite{zhou2019objects,tian2019fcos}, which recognize objects individually while ignoring relations between objects\cite{hu2018relation}. In our case, the main reason why FCOS and CenterNet can fail is that they sometimes assign different labels to the same rib or the same label to different ribs, subsequently leading to missing ribs, as shown in \cref{fig:seg_visual}. These traditional detectors do not explicitly consider the anatomy uniqueness and the spatial relationships/dependencies between rib labels. Med-Query behaves differently: it solves a label assignment problem rather than an unconstrained object detection task, so the relations between targets are explicitly constructed and enforced through the steerable label assignment strategy and the intrinsic self-attention mechanism of Transformer models \cite{vaswani2017attention}. On the other hand, Med-Query is slightly inferior on the local regression indicators, as shown in \cref{tbl:quant_det_rib}. Med-Query can correctly identify more ribs using global information, but including those challenging ribs in the calculation of deviations may deteriorate the quantitative localization errors. It is noteworthy that all these optimized models achieve small regression deviations w.r.t. $\operatorname{P_{mean}}$, $\operatorname{S_{mean}}$, and $\operatorname{A_{mean}}$. Note that we have improved FCOS and CenterNet with our one-stage 9-DoF detection strategy. The above results demonstrate that the challenging 9-DoF parameter estimation problem can be solved effectively. Some qualitative results of Med-Query for rib detection and labeling are shown in \cref{fig:det_visual}. Our 3D detection visualization tool is developed based on \textit{vedo} \cite{musy2021vedo}. \noindent {\bf Segmentation Evaluation on RibInst.} As shown in \cref{tbl:quant_seg_rib}, the widely-used self-configuring nnU-Net~\cite{isensee2021nnu} achieves a good DSC of 89.7\% with a long latency of 252 seconds. nnU-Net is trained with sampled patches, and the resulting lack of global information leads to noticeable label confusion, clearly revealed by an HD95 of 4.498 mm. We also reimplement a centerline-based method~\cite{lenga2018deep} to investigate how strongly the parsing results rely on centerline extraction and heuristic rules. Among all listed methods in \cref{tbl:quant_seg_rib}, Med-Query achieves the best DSC of 90.9\%. Notably, the whole Med-Query pipeline finishes within 2.591 seconds. This efficiency is attributed to our detection-then-segmentation paradigm, which detects at low resolution with a global FOV and segments at high resolution with local FOVs. A qualitative comparison on a representative patient case with rib fractures is shown in \cref{fig:seg_visual}, demonstrating the robustness of Med-Query.
\noindent {\bf Segmentation Evaluation on CTSpine1K.} To align with the evaluation protocol in~\cite{liu2022universal}, we report segmentation metrics as vertebra-level average scores computed over all scans in the validation and test sets (HD is employed instead of HD95). As shown in \cref{tbl:quant_ctspine}, we compare Med-Query with several strong baselines, including nnFormer~\cite{zhou2021nnformer}, nnU-Net~\cite{isensee2021nnu}, and CPTM~\cite{liu2022universal}. Med-Query obtains marked improvements on both DSC and HD, achieving a new state-of-the-art result on this challenging CTSpine1K task. Note that the semi-supervised annotation curation procedure of this dataset used nnU-Net in-the-loop. \begin{table*}[!t] \begin{minipage}{0.42\textwidth} \centering \caption{\upshape Segmentation results on the test set of RibInst. $\dag$A centerline-based method proposed in~\cite{lenga2018deep}. *Adapted versions, also equipped with the ROI extractor and segmentation head.} \label{tbl:quant_seg_rib} \vspace{8mm} \setlength{\tabcolsep}{0.78mm}{ \begin{tabular}{l cccc} \toprule[1pt] Methods & DSC(\%)$\uparrow$ & HD95(mm)$\downarrow$ & ASSD(mm)$\downarrow$ & Latency(s)$\downarrow$\\ \midrule Centerline\dag & 84.6 & 13.84 & 3.437 & 18.50 \\ CenterNet* & 89.5 & 1.826 & 0.725 & 4.793 \\ FCOS* & 88.1 & \textbf{1.137} & \textbf{0.352} & 3.241 \\ nnU-Net & 89.7 & 4.498 & 0.900 & 252.0 \\ Med-Query & \textbf{90.9}&1.644 & 0.438&\textbf{2.591}\\ \bottomrule[1pt] \end{tabular}} \end{minipage} \begin{minipage}{0.58\textwidth} \caption{\upshape Id.Rate evaluation under different operations and different measurement-threshold combinations. \textit{RandomTranslate}, \textit{RandomScale}, and \textit{RandomRotate} are performed jointly and are thus abbreviated as ``T.S.R''. ``Re.Dist.'' represents relative distance constraints. ``PE'' is short for positional encoding.} \label{tbl:ablation_aug} \setlength{\tabcolsep}{0.8mm}{ \begin{tabular}{cccccc|ccc} \toprule[1pt] \multicolumn{1}{c}{\multirow{2}{*}{No Aug.}} & \multicolumn{3}{c}{Data Aug.} & \multirow{2}{*}{Re.Dist.} & \multirow{2}{*}{w/o PE} & \multirow{1}{*}{20mm, 20mm} & \multirow{1}{*}{10mm, 10mm} & \multirow{1}{*}{10mm, 10mm}\\ \cmidrule{2-4} \multicolumn{1}{c}{} & \multicolumn{1}{c}{T.S.R} & \multicolumn{1}{c}{Crop} & \multicolumn{1}{c}{Erase} & \multicolumn{1}{c}{} & \multicolumn{1}{c|}{} & \multirow{1}{*}{10$^{\circ}$} & \multirow{1}{*}{10$^{\circ}$} & \multirow{1}{*}{5$^{\circ}$} \\ \hline \checkmark & & & & & & 83.1 & 49.0 & 33.7 \\ & \checkmark & & & & & 94.4 & 84.8 & 79.2 \\ & \checkmark & \checkmark & & & & 96.2 & 89.0 & 84.2 \\ & \checkmark & \checkmark & \checkmark & & & 96.5 & 90.7 & 87.0 \\ & \checkmark & \checkmark & \checkmark & \checkmark & & \textbf{97.0} & 91.8 & 87.4 \\ & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & 96.2 & \textbf{92.0} & \textbf{87.9} \\ \toprule[1pt] \end{tabular}} \end{minipage} \end{table*} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{seg_visual.png} \caption{An example with broken structures in RibInst.
Missing or wrong labels are marked using golden arrows and dashed circles, respectively.} \label{fig:seg_visual} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{ablation.png} \caption{Ablation studies on the coefficient of the proposed index cost and the number of query embeddings at inference.} \label{fig:ablation} \end{figure*} \noindent {\bf Segmentation Evaluation on FLARE22.} We report the evaluation results on the FLARE22 validation leaderboard, which gives a detailed performance evaluation at the organ level. In~\cref{tbl:quant_flare}, nnU-Net and its ensemble version are trained only on the labeled data; as shown, they achieve very good results on the validation set even with only 50 labeled scans. The ensembled nnU-Net (an assembly of 12 models from different settings and different folds in cross-validation) obtains a 1\% improvement in mean DSC. Considering the semi-supervised setting of this task, we use nnU-Net as the teacher model to generate pseudo labels for the unlabeled part of the training set, as a straightforward but effective semi-supervised strategy. Based on this, we evaluate the performance of a recently published model, Swin UNETR~\cite{tang2022self}, on this task. As can be seen, Swin UNETR with self-supervised pre-training on unlabeled data shows a 1.7\% improvement over its vanilla version. Our method compares favorably, with an average DSC of 89.7\%, a competitive result ranking in the top 3\% of all 1,162 entries on the validation leaderboard, without heavy model ensembling. \subsection{Ablation Study} We perform ablation studies on RibInst to investigate the effectiveness of three key factors in our framework. \noindent {\bf Effects of the Weighted Adjacency Matrix.} To validate the efficacy of the preset weighted adjacency matrix, we conduct an ablation study w.r.t. $\lambda_m$, the scaling factor of the matrix, also known as the index cost. The candidate value is chosen from $[0, 1, 2, 4, 8]$, where 0 means vanilla bipartite matching. Note that when $\lambda_m$ is large enough, it can be regarded as directly assigning each query to a fixed ground-truth label. \cref{fig:ablation}(a) reveals that a proper value of $\lambda_m = 4$ stands out with the highest Id.Rate of rib labeling. One explanation is that a reasonable perturbation term during the matching process gradually imposes anatomical semantic labels on the queries and helps build the correct relations among them. \noindent {\bf Steerable at Inference?} To further validate the steerable attribute of the proposed model, we randomly pick subsets of rib labels whose sizes vary from 4 to 24 in steps of 4 at inference. The task is to retrieve the pre-selected ribs with the query embeddings of the same indices, with the Id.Rate of the given subset of ribs as the evaluation metric. \cref{fig:ablation}(b) shows the performance of two settings: with and without the index cost. Note that in the round where only 4 ribs are expected to be detected, the model without index cost misses all targets. This is not unexpected, because the one-to-one relations between the query series and the ground-truth sequence are not explicitly constructed there. As the number of queries increases, the vanilla model gradually returns some correct targets. In contrast, Med-Query performs stably as the number of queries varies.
In terms of inference efficiency, Med-Query can be further accelerated when fewer queries are requested: e.g., with only 4 queries, 17\% of the time in the detection stage is saved, and the saving grows to 50\% from the perspective of the full pipeline, as shown in \cref{fig:ablation}(c). The efficiency improvement comes from the reduced computation in the Transformer decoder and the segmentation head when fewer queries are executed. During the above ablation experiments, cropped data is not used. \noindent {\bf Data Augmentation \& Beyond.} \label{sec:ablation_data_aug}We conduct comprehensive experiments to analyze the critical factors in our proposed framework, as shown in \cref{tbl:ablation_aug}. The joint operations of \textit{RandomTranslate}, \textit{RandomScale}, and \textit{RandomRotate} in 3D space yield the most significant performance gain. \textit{RandomCrop} along the $Z$-axis, imitating the limited FOVs of clinical practice, further improves the Id.Rate by 1.8\%. \textit{RandomErase} brings an additional performance benefit. We also explore a common trick from landmark detection tasks, imposing relative distance constraints between neighboring center points~\cite{liu2020landmarks}, and obtain a gain of 0.5\%. Additionally, we examine the efficacy of a 3D extension of the fixed \textit{sine} spatial positional encoding that proved useful in \cite{carion2020end}. Removing it leads to a 0.8\% Id.Rate drop when the measurement thresholds are set to (20mm, 20mm, 10$^\circ$); however, with stricter thresholds the drop disappears and the trend even reverses. A similar observation, that removing positional encoding only degrades accuracy by a small margin, can be found in~\cite{chen2021empirical}. This implies that positional encoding deserves further investigation, especially in 3D scenarios. \section{Conclusion} In this work, we have presented a steerable, robust and efficient Transformer-based framework for anatomy parsing in CT scans. The pipeline follows the detection-then-segmentation paradigm and processes input 3D scans at progressively finer resolutions. To the best of our knowledge, this work is the first to estimate the 9-DoF representation for object detection in 3D medical imaging via a one-stage Transformer. The resulting method can be executed in a steerable way, directly retrieving any anatomy of interest and further boosting inference efficiency. It is a unified computing framework that generalizes well to a variety of anatomy parsing tasks and achieves new state-of-the-art performance on anatomy instance detection, identification and segmentation. We will release our annotations, code and models upon publication, hoping to benefit the community and facilitate future development of automatic parsing of anatomical structures. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} There are two types of motivation for studying the role of sub-stellar objects (planets and brown dwarfs) in shaping planetary nebulae (PNe), i.e., in shaping the outflow from the asymptotic giant branch (AGB) progenitors of PNe. We include in this study also the rare cases of PNe that red giant branch (RGB) stars might form (e.g., \citealt{Jonesetal2020RGB}). The first motivation is the growing number of observed potential progenitors of PNe (while they are still on the main sequence) with massive planets around them at the appropriate orbits (we list some systems in section \ref{sec:Systems}). The second type of motivation is the great success of stellar binary interaction scenarios in accounting for the shaping of many PNe, together with the realization that there are not enough stellar binary systems to account for all non-spherical PNe (e.g., \citealt{DeMarcoSoker2011}). The number of studies that support the stellar binary shaping of PNe amounts to several hundred, and so we limit the list to a fraction of the papers from the last five years (e.g. \citealt{Akrasetal2016, Chiotellisetal2016, Jonesetal2016, GarciaRojasetal2016, Hillwigetal2016a, Bondetal2016, Madappattetal2016, Alietal2016, JonesBoffin2017b, Barker2018, BondCiardullo2018, Bujarrabaletal2018, Chenetal2018, Danehkaretal2018, Franketal2018, GarciaSeguraetal2018, Hillwig2018, MacLeodetal2018b, Miszalskietal2018PASA, Sahai2018ASPC, Wessonetal2018, Brownetal2019, Desmursetal2019, Jonesetal2019triple, Kimetal2019, Kovarietal2019, Miszalskietal2019MNRAS487, Oroszetal2019, Akrasetal2020, Alleretal2020, BermudezBustamanteetal2020, Jacobyetal2020, Jones2020CEE, Jones2020Galax, Mundayetal2020}). The difficulty planets have in surviving the evolution of PN progenitors, and the severe difficulties of detecting planets around central stars of PNe in case they do survive, explain the relatively small number of papers that study PN shaping by planets and brown dwarfs (e.g., some papers from the last five years, \citealt{Kervellaetal2016, Boyle2018PhDT, SabachSoker2018a, SabachSoker2018b, Salasetal2019, Schaffenrothetal2019, Decinetal2020}). In some cases energy deposition in the RGB or AGB envelope, which inflates the envelope to much larger dimensions than regular evolution does, might help a planet that is engulfed at this phase to survive (e.g., \citealt{Bearetal2011, Bearetal2021, Lagosetal2021, Chamandyetal2021}). The challenge in shaping PNe by planets is to find processes by which a companion with a mass of $\approx 1 \%$ of that of the AGB (or RGB) star influences the mass loss geometry to a detectable level. Stellar companions can affect the envelope of the PN progenitor and its mass loss through some relatively energetic processes. The most notable processes that three-dimensional hydrodynamical simulations of the common envelope evolution (CEE) with stellar companions have revealed over the years are the inflation of the envelope and the ejection of a relatively dense equatorial outflow (e.g., \citealt{LivioSoker1988, RasioLivio1996, SandquistTaam1998, RickerTaam2008, Passyetal2012, Nandezetal2014, Ohlmannetal2016, Iaconietal2017, MacLeodetal2018a}). A brown dwarf affects the envelope much less than a stellar companion does (e.g., \citealt{Krameretal2020}), and a planet will have an even smaller effect of this type.
Other stellar-induced energetic processes include, among others, the extreme deformation of the AGB (or RGB) envelope at the end of the CEE to form two opposite polar `funnels' (e.g., \citealt{Soker1992, Reichardtetal2019, GarciaSeguraetal2020, Zouetal2020}), and the launching of jets inside the envelope (e.g., \citealt{Chamandyetal2018a, LopezCamaraetal2019, Schreieretal2019, Shiberetal2019, LopezCamaraetal2020}). Planets that experience the CEE with AGB and RGB stars cannot have these large effects. Even if a planet does launch jets, the result is likely to be only two small opposite clumps along the symmetry axis (\textit{ansae}; FLIERS), e.g., \cite{SokerWorkPlans2020}. Planets instead influence the mass loss geometry via non-linear effects that amplify their perturbations, in particular through the possibly important role of dust formation in determining the mass loss rate and geometry during the CEE (e.g., \citealt{Soker1992b, Soker1998AGB, GlanzPerets2018, Iaconietal2019, Iaconi2020}). These effects include the action of a dynamo that amplifies magnetic fields in giant stars (e.g., \citealt{LealFerreiraetal2013, Vlemmings2018}) after the planet spins up the envelope (e.g., \citealt{Soker1998AGB, NordhausBlackman2006}). These magnetic fields might affect the formation of dust on the surface of the giant star, and thereby the mass loss geometry (e.g., \citealt{Soker2000, Soker2001a, Khourietal2020}). Another process driven by a planet deep inside the giant envelope is the excitation of waves whose amplitudes become large at the giant's surface, to the degree that they influence the efficiency of dust formation (e.g., \citealt{Soker1993}). We therefore set the goal of examining the degree of spin-up that the parent stars of some observed exoplanets will suffer as they swallow their planet, and of calculating how much orbital energy the core-planet system releases relative to the binding energy of the giant envelope. The notion that planets spin up their parent star on the RGB or AGB is of course not new (e.g., \citealt{Soker1996, SiessLivio1999AGB, Massarotti2008, Carlbergetal2009, Nordhausetal2010, MustillVillaver2012, NordhausSpiegel2013, GarciaSeguraetal2014, Priviteraetal2016I, Priviteraetal2016II, Staffetal2016, AguileraGomezetal2016, Veras2016, Guoetal2017, Raoetal2018, Jimenezetal2020}). Our new contribution is the calculation of these effects in specific observed exoplanetary systems that are likely to shape future PNe. Namely, we deal only with planets that the star swallows on the upper RGB or during the AGB; we do not deal with exoplanets with a semi-major axis smaller than $\simeq 1 {~\rm AU}$. This study follows that of \cite{Hegazietal2020}, who determined the potential role of three exoplanets and two brown dwarfs in the future RGB or AGB evolution of their parent star. We list two of these systems and four other exoplanets in section \ref{sec:Systems}. Like \cite{Hegazietal2020} we use the Modules for Experiments in Stellar Astrophysics in its binary mode (\textsc{MESA-binary}), as we describe in section \ref{sec:method}. A key ingredient of the study of \cite{Hegazietal2020} that we adopt here is the use of a lower than traditional mass loss rate on the giant branches. This particularly affects the stellar evolution on the upper AGB.
This lower mass loss rate approach is based on the argument that when no companion, stellar or sub-stellar, spins up the envelope of an AGB star, the mass loss rate of the AGB star is much lower than what traditional formulas give (\citealt{SabachSoker2018a, SabachSoker2018b}; they termed such stars, which suffer no spin-up interaction with a companion, angular-momentum-isolated stars, or {\it Jsolated stars}). These three earlier studies of our group discuss some aspects that we do not deal with here, like the brightest end of the PN luminosity function in old stellar populations. However, they did not study the degree of spin-up and the spiraling-in of the planet into the giant envelope, which we study here (section \ref{sec:evolution}). We summarize our results in section \ref{sec:summary}. \section{NUMERICAL METHOD} \label{sec:method} We use \textsc{mesa-binary} (version 10398; \citealt{Paxtonetal2011,Paxtonetal2013,Paxtonetal2015,Paxtonetal2018,Paxtonetal2019}). We follow the inlist of \cite{Hegazietal2020} (which is based on the example of a star plus a point mass) in order to investigate the impact the planet has on the stellar evolution. We take the planet to be a point mass (i.e., we do not follow its evolution or possible mass accretion by the planet). Here we list the parameters that we change from their default values in the \textsc{mesa-binary} mode for all runs. \begin{itemize} \item We set the initial ratio of the angular velocity ($\omega$) to the critical angular velocity ($\omega_{\rm critical}$) of the primary star, while it is on the main sequence, in all systems to $\frac{\omega}{\omega_{\rm critical}}=0.001$. We take this value as the initial rotation of slow rotators. This slow initial rotation has no effect on our study of giant stars. % \item We allow tidal circularization (\texttt{do\_tidal\_circ}) and synchronization (\texttt{do\_tidal\_sync}) because tidal forces and their effects become very important close to the beginning of the CEE. \item To avoid numerical difficulties we do not limit the minimum time step. \item We take mass loss by winds into account, using Reimers' formula for the RGB and Blocker's formula for the AGB evolutionary stages, respectively. The commonly used value of the wind mass-loss rate efficiency parameter is $\eta \simeq 0.5$. We assume a lower mass loss rate, as discussed above, and take $\eta=0.09$ for HIP~114933~b and $\eta=0.12$ for all other systems. \end{itemize} Let us elaborate on the last assumption, $\eta<0.5$. As we mention in section \ref{sec:intro}, this is an assumption that we adopt from earlier works (\citealt{SabachSoker2018a, SabachSoker2018b}). These earlier studies based this assumption on the argument that the stellar samples that have been used to derive $\eta \simeq 0.5$ are contaminated by binary systems that increase the mass loss rate. We take the specific values $\eta=0.09$ and $\eta=0.12$ as these are about the maximum values of $\eta$ that still allow the respective stars to engulf their exoplanets; larger values of $\eta$ would not bring the respective stars to engulf their planets. This assumption clearly needs future verification. \section{The sample of observed planetary systems} \label{sec:Systems} We search for exoplanetary systems where a massive planet might enter the envelope of its parent star along the RGB or the AGB, i.e., where the system enters a CEE. We do not follow the CEE itself, but we do determine the properties of the giant star at the onset of the CEE.
\section{The sample of observed planetary systems} \label{sec:Systems} We search for exoplanetary systems where a massive planet might enter the envelope of its parent star along the RGB or the AGB, i.e., where the system enters a CEE. We do not follow the CEE, but we do determine the properties of the giant star at the onset of the CEE. We search for such systems in the Extrasolar Planets Encyclopaedia (exoplanet.eu; \citealt{Schneideretal2011}). \cite{Hegazietal2020} already studied the evolution of the two exoplanets HIP~75458~b and beta~Pic~c. However, they did not examine the particular properties of the giants that we are interested in. For the systems HIP~90988~b, HIP~75092~b, and HIP~114933~b we took the initial parameters from \cite{Jonesetal2021}, while for the initial parameters of the system HD~222076~b we used the studies of \cite{Jiangetal2020} and \cite{Wittenmyeretal2017}. In systems with uncertain inclination (HIP~114933~b, HIP~90988~b, and HIP~75092~b in this study) we take $\sin i=0.5$. We list the six systems we study in Table \ref{tab:Table1}. \begin{table*} \centering \begin{tabular}{|l||l|l|l|l||l|l|l|l|l||l|l|l|l|} \hline & $M_{\ast,i}$ & $M_p$ & $a_i$ & $e_i$ & CEE & $M_{\rm *,CEE}$ & $R_{\rm CEE}$ & $M_{\rm env,CEE}$ & $e_{\rm CEE}$ & $\frac{\omega_{\rm CEE}}{\omega_{\rm tidal}}$ & $\frac{\omega_{\rm CEE}}{\omega_{\rm critical}}$ & $\frac{E_{\rm orbit}}{E_{\rm env}}$ & $E_{\rm env}$\\ \hline & $M_\odot$& $M_{\rm J}$ & $R_{\odot}$ & & & $M_\odot$ & $R_{\odot}$ & $M_\odot$ & & & & & $10^{46} {~\rm erg}$ \\ \hline HIP 75458 b & 1.4 & 9.40 & 273.00 & 0.71 & RGB & 1.39 & 38.39 & 1.05 & 0.005 & 2.8 & 0.109 & 0.048 & 12 \\ \hline HIP 90988 b & 1.3 & 0.98 & 270.82 & 0.08 & RGB & 1.28 & 96.49 & 0.87 & 0.003 & 3.3 & 0.015 & 0.015 & 4.8 \\ \hline HIP 75092 b & 1.28 & 0.89 & 434.21 & 0.42 & RGB & 1.25 & 138.85 & 0.79 & 0.022 & 3.9 & 0.014 & 0.022 & 3.3 \\ \hline HD 222076 b & 1.38 & 1.57 & 393.34 & 0.08 & RGB & 1.35 & 140.29 & 0.89 & 0.004 & 3.4 & 0.023 & 0.034 & 3.8 \\ \hline beta Pic c & 1.73 & 9.37 & 585.00 & 0.25 & AGB & 1.67 & 231.16 & 1.12 & 0.020 & 4.4 & 0.087 & 0.275 & 3.4 \\ \hline HIP 114933 b & 1.39 & 0.97 & 610.44 & 0.21 & AGB & 1.18 & 368.34 & 0.61 & 0.154 & 54.8 & 0.011 & 0.106 & 0.94 \\ \hline \end{tabular} \caption{The six exoplanetary systems that we study here. The first column gives the name of the system, while the next four columns list four properties that are derived from observations: the initial stellar mass $M_{\ast,i}$, the planet mass $M_{p}$, the initial semi-major axis $a_i$, and the initial eccentricity $e_{i}$. The next five columns list the properties of the system at the onset of the CEE: the phase of the parent star (RGB or AGB), its mass ($M_{\rm *,CEE}$), its radius ($R_{\rm CEE}$), its envelope mass ($M_{\rm env,CEE}$), and the eccentricity of the planet-giant binary system ($e_{\rm CEE}$). The next three columns give three relevant ratios for our study: the ratio of the envelope angular velocity at the end of the CEE ($\omega_{\rm CEE}$) to that at the onset of the CEE ($\omega_{\rm tidal}$), the ratio of $\omega_{\rm CEE}$ to the breakup angular velocity of the giant ($\omega_{\rm critical}$), and the ratio of the orbital energy that the planet releases by the end of the CEE, when the orbital separation is $a=1R_\odot$ ($E_{\rm orbit}$), to the binding energy of the envelope that resides above $r=1R_\odot$ ($E_{\rm env}$). For reference we also list this envelope binding energy in the last column. } \label{tab:Table1} \end{table*} In the first five columns of Table \ref{tab:Table1} we list the name of the system/planet and four relevant properties of the exoplanetary systems that observations provide: the initial stellar mass, the planet mass, the initial semi-major axis, and the initial eccentricity.
\section{Evolution to the CEE} \label{sec:evolution} We simulate the evolution of the six systems that we list in Table \ref{tab:Table1}, starting with their initial properties from the table and with a metallicity of $z=0.02$. We end the simulations when the planet enters the envelope of its parent star (i.e., the system enters a CEE), either when the star is on the RGB or on the AGB, as we indicate in the sixth column of the table. We record the mass of the star, its radius, and its envelope mass at the onset of the CEE, as well as the eccentricity of the orbit of the planet at that time. We list these quantities at the onset of the CEE in columns 7-10 of Table \ref{tab:Table1}. Although we do not simulate the CEE, we do calculate relevant properties. Due to tidal interaction in the pre-CEE phase the planet spins up the giant envelope to an angular velocity of $\omega_{\rm tidal}$. After it enters the envelope the planet spirals in to a very small radius and further spins up the giant envelope. Under the assumption that the envelope structure does not change (see below), we calculate the final angular velocity of the envelope (after the CEE), $\omega_{\rm CEE}$. In the eleventh column we list the ratio $\omega_{\rm CEE}/\omega_{\rm tidal}$ and in the twelfth column we list the ratio $\omega_{\rm CEE}/\omega_{\rm critical}$, where $\omega_{\rm critical}$ is the critical (breakup) angular velocity of the envelope above which it breaks apart. The above assumption that the envelope structure does not change during the CEE holds when the planet does not deposit much angular momentum or much energy. However, in the cases of HIP~75458~b and beta~Pic~c the envelope angular velocity reaches values of $\omega _{\rm CEE} \simeq 0.1 \omega_{\rm critical}$, and in the cases of beta~Pic~c and HIP~114933~b the orbital energies that the planets release are significant. We do expect the envelope to expand and flatten somewhat in response, increasing the envelope moment of inertia and reducing the angular velocity. Nonetheless, the envelopes are substantially deformed in these three cases as well. When the planet spirals in it releases orbital gravitational energy until it is destroyed at a very small orbital separation (e.g., \citealt{Krameretal2020} for a recent study). We calculate the orbital energy $E_{\rm orbit}$ that the system releases from the onset of the CEE to an orbital separation of $a_{\rm post-CEE}=1 R_\odot$. Because the initial orbital separation is much larger than $1 R_\odot$, this energy is $E_{\rm orbit}=0.5 G M_{\ast,1}M_p/(1R_\odot)$, where $M_{\ast,1}$ is the giant mass inward of $r=1R_\odot$. We calculate the binding energy of the envelope that resides above $r=a_{\rm post-CEE}=1 R_\odot$, $E_{\rm env}$. In the next-to-last column of Table \ref{tab:Table1} we list the ratio $E_{\rm orbit}/E_{\rm env}$, and in the last column we list $E_{\rm env}$. In calculating the binding energy of the envelope that resides at $r > 1 R_\odot$ we took half the gravitational energy of that part of the envelope, $E_{\rm env} = 0.5 \vert U_{\rm grav} \vert$, where $U_{\rm grav} =-\int^{M_\ast}_{M_{\ast,1}} [G M(r)/r]\, dm$, and $M_\ast$ is the stellar mass. At a radius of $\approx 1 R_\odot$ the binding energy calculated this way and that calculated by adding the gravitational and internal energy become equal (e.g., \citealt{Lohevetal2019}).
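As a check on this energy bookkeeping, the following minimal sketch reproduces the beta~Pic~c entries in the last two columns of Table \ref{tab:Table1}; the mass $M_{\ast,1}$ inward of $r=1R_\odot$ is an assumed illustrative value, not an exact output of our \textsc{mesa} profiles.

\begin{verbatim}
# E_orbit = 0.5 * G * M_{*,1} * M_p / (1 Rsun), compared with the envelope
# binding energy E_env listed in Table 1 (all quantities in cgs units).
G, Msun, Mjup, Rsun = 6.674e-8, 1.989e33, 1.898e30, 6.957e10

M_in  = 0.55 * Msun    # assumed giant mass inward of 1 Rsun (illustrative)
M_p   = 9.37 * Mjup    # beta Pic c planet mass (Table 1)
E_env = 3.4e46         # erg, envelope binding energy above 1 Rsun (Table 1)

E_orbit = 0.5 * G * M_in * M_p / Rsun
print("E_orbit = %.2e erg, E_orbit/E_env = %.2f" % (E_orbit, E_orbit / E_env))
# -> about 9e45 erg and a ratio of about 0.27, consistent with Table 1.
\end{verbatim}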
We take a final orbital radius for the planet at $a_{\rm post-CEE}=1 R_\odot$ because at about that radius massive planets start to evaporate. \cite{Krameretal2020}, for example, list the orbital radius at which their hot giant envelope starts to evaporate a planet of mass $M_p=0.01M_\odot$ as $r_{\rm eva} = 1.27 R_\odot$. The planet might spiral in further while it evaporates because its tidal destruction radius is much smaller, $r_{\rm tid} = 0.06R_\odot$. Our results are sensitive to the uncertain value of $a_{\rm post-CEE}$, and therefore the quantities that we calculate based on it carry some uncertainty. We demonstrate the evolution of two systems in Fig. \ref{fig:HD222076graph} for a planet engulfment on the RGB, and in Fig. \ref{fig:HIP114933graph} for a planet engulfment on the AGB (see \citealt{Hegazietal2020} for the evolution of HIP~75458~b and beta~Pic~c). We present the evolution of the stellar radius (blue solid line), the periastron distance $(1-e)a$ (black line), and the eccentricity (red-dashed line) of these two systems. \begin{figure} \vskip -1.50 cm \hskip -2.00 cm \includegraphics[scale=0.6]{HD222076new.pdf}\\ \vskip -10.00 cm \caption{ The eccentricity (red-dashed line), stellar radius (blue line) and periastron distance (black line) for HD~222076~b as a function of time, starting at $t = 3.74 \times 10^9 {~\rm yr}$ and ending at the onset of the CEE. } \label{fig:HD222076graph} \end{figure} \begin{figure} \vskip -1.0 cm \hskip -2.00 cm \includegraphics[scale=0.6]{HIP114933new.pdf}\\ \vskip -10.00 cm \caption{Similar to Fig. \ref{fig:HD222076graph}, but for the system HIP~114933~b and starting at $t = 3.77 \times 10^9 {~\rm yr}$. } \label{fig:HIP114933graph} \end{figure} The rapid fall of the planet into the envelope as the stellar radius $R$ increases demonstrates the well-known sensitivity of the tidal forces to the ratio $R/a$. Like previous studies (e.g., \citealt{Hegazietal2020}), we find that AGB stars are likely to engulf planets shortly after a helium shell flash (thermal pulse). These helium shell flashes are seen in Fig. \ref{fig:HIP114933graph} as spikes in the stellar radius. \section{Discussion and Summary} \label{sec:summary} The degree to which planets can shape the outflow from RGB and AGB stars is an open question. As a result, the fraction of PNe that are shaped by planetary systems (and brown dwarfs) is also an open question. The difficulties in answering these questions result from the complicated shaping mechanisms of planets, as well as from the great observational difficulty of detecting planets in PNe, mainly because many planets do not survive the evolution. As such, our view is that many small steps will achieve the goal of answering these questions. The present study is one such small step, in which we study observed planetary systems. We examine the possible future influence of the planets on the post-CEE giant envelope of their parent star. Planets might influence the mass loss rate and geometry by inducing other processes (see section \ref{sec:intro}). One process is the excitation of waves that have large surface amplitudes when the planet is deep in the envelope. Another process is the spinning-up of the giant envelope to a degree that allows the operation of an efficient dynamo. The sun rotates at about half a per cent of its breakup angular velocity $\omega_{\rm critical}$, and despite this low value it has pronounced dynamo activity, to the degree that the wind from the sun is not spherical, neither in velocity nor in mass loss rate. In RGB and AGB stars the magnetic activity might make the geometry of dust formation non-spherical (section \ref{sec:intro}).
More dust formation implies a higher mass loss rate from these luminous cool giants. We learn from Table \ref{tab:Table1} that in all cases the planets spin up the envelopes of their parent stars to angular velocities of $\omega_{\rm CEE} > 0.01 \omega_{\rm critical}$. We expect non-negligible magnetic activity in all these giants. The spin-up process is particularly significant in the two systems with massive planets, $M_{p} \simeq 10 M_{\rm J}$, where the angular velocity is about ten per cent of the breakup angular velocity. In these two cases, HIP~75458~b and beta~Pic~c, the envelope achieves significant rotation before the planet even enters the envelope, and more so after the CEE. It is not clear whether planets can accrete a sufficient amount of mass to launch jets before they enter the envelope. \cite{Salasetal2019} present an optimistic view of planets launching jets in a triple system where a tertiary star keeps the planet outside the envelope and on an eccentric orbit. We also consider another possibility (\citealt{SokerWorkPlans2020} and references therein), according to which the most likely process for planets to form jets is by forming an accretion disk around the core of the RGB or AGB star after the core tidally destroys the planet. All these processes require further study. The direct deposition of a significant amount of orbital energy into the envelope takes place only at the end of the CEE. As expected, we learn from the next-to-last column of Table \ref{tab:Table1} that a large ratio of the orbital energy that the planet releases to the binding energy of the giant envelope occurs only when the parent star is on the AGB, i.e., when the radius of the envelope is very large. In these two cases, HIP~114933~b and beta~Pic~c, we obtain $\frac{E_{\rm orbit}}{E_{\rm env}} = 0.11$ and $0.27$, respectively. This amount of energy is likely to inflate the envelope, thereby reducing the effective temperature. A lower effective temperature facilitates dust formation, which in turn makes the mass loss process more susceptible to the dynamo activity and to the excitation of waves by the planet when it is deep in the envelope (and before it is destroyed). Overall, we take our findings to support the notion that massive planets, more massive than about a Jupiter mass, can enhance the mass loss rate from RGB and AGB stars and shape their winds to a low degree of asphericity. \textbf{ACKNOWLEDGEMENTS} We thank an anonymous referee for helpful comments. This research was supported by a grant from the Israel Science Foundation (769/20). \textbf{DATA AVAILABILITY} The data underlying this article will be shared on reasonable request to the corresponding author. \pagebreak
\section{Introduction} \subsection{Context and Motivation} Design of radar waveforms and detectors has been a topic of great interest to the radar community (see e.g. \cite{Kay1998}-\cite{Kay 2007}). For best performance, radar waveforms and detectors should be designed jointly \cite{Richards 2010}, \cite{MU}. Traditional joint design of waveforms and detectors typically relies on mathematical models of the environment, including targets, clutter, and noise. In contrast, this paper proposes data-driven approaches based on end-to-end learning of radar systems, in which reliance on rigid mathematical models of targets, clutter and noise is relaxed. Optimal detection in the Neyman-Pearson (NP) sense guarantees the highest probability of detection for a specified probability of false alarm \cite{Kay1998}. The NP detection test relies on the likelihood (or log-likelihood) ratio, which is the ratio of the probability density functions (PDFs) of the received signal conditioned on the presence or absence of a target. Mathematical tractability of models of the radar environment plays an important role in determining the ease of implementation of an optimal detector. For some target, clutter and noise models, the structure of optimal detectors is well known \cite{Van 2004}-\cite{Richards 2005}. For example, closed-form expressions of the NP test metric are available when the applicable models are Gaussian \cite{Richards 2005}, and, in some cases, even for non-Gaussian models \cite{Sangston 1994}. However, in most cases involving non-Gaussian models, the structure of optimal detectors generally involves intractable numerical integrations, making the implementation of such detectors computationally intensive \cite{Gini 1997}, \cite{Sangston 1999}. For instance, it is shown in \cite{Gini 1997} that the NP detector requires a numerical integration with respect to the texture variable of the K-distributed clutter, thus precluding a closed-form solution. Furthermore, detectors designed based on a specific mathematical model of the environment suffer performance degradation when the actual environment differs from the assumed model \cite{Farina 1986}, \cite{Farina 1992}. Attempts to robustify performance by designing optimal detectors based on mixtures of random variables quickly run aground due to mathematical intractability. Alongside optimal detectors, optimal radar waveforms may also be designed based on the NP criterion. Solutions are known for some simple target, clutter and noise models (see e.g. \cite{Delong1967}, \cite{Kay 2007}). However, in most cases, waveform design based on direct application of the NP criterion is intractable, leading to various suboptimal approaches. For example, mutual information, J-divergence and Bhattacharyya distance have been studied as objective functions for waveform design in multistatic settings \cite{Kay 2009}-\cite{Jeong 2016}. In addition to target, clutter and noise models, waveform design may have to account for various operational constraints. For example, transmitter efficiency may be improved by constraining the peak-to-average-power ratio (PAR) \cite{DeMaio2011}-\cite{Wu 2018}. A different constraint relates to the requirement of coexistence of radar and communication systems in overlapping spectral regions. The National Telecommunications and Information Administration (NTIA) and the Federal Communications Commission (FCC) have allowed sharing of some of the radar frequency bands with commercial communication systems \cite{NTIA}.
In order to protect the communication systems from radar interference, radar waveforms should be designed subject to specified compatibility constraints. The design of radar waveforms constrained to share the spectrum with communications systems has recently developed into an active area of research with a growing body of literature \cite{Aubry2016}-\cite{Tang2019}. Machine learning has been successfully applied to solve problems for which mathematical models are unavailable or too complex to yield optimal solutions, in domains such as computer vision \cite{ML 1.1}, \cite{ML 1.2} and natural language processing \cite{ML 2.1}, \cite{ML 2.2}. Recently, machine learning approaches have been proposed for implementing the physical layer of communication systems. Notably, in \cite{OShea 2017}, it is proposed to jointly design the transmitter and receiver of communication systems via end-to-end learning. Reference \cite{PAR_OFDM} proposes an end-to-end learning-based approach for jointly minimizing PAR and bit error rate in orthogonal frequency division multiplexing systems. This approach requires the availability of a known channel model. For the case of an unknown channel model, reference \cite{Aoudia 2019} proposes an alternating training approach, whereby the transmitter is trained via reinforcement learning (RL) on the basis of noiseless feedback from the receiver, while the receiver is trained by supervised learning. In \cite{SPSA}, the authors apply simultaneous perturbation stochastic approximation to estimate the gradient of the transmitter's loss function. A detailed review of the state of the art can be found in \cite{osvaldo2} (see also \cite{osvaldo3}-\cite{osvaldo5} for recent work). In the radar field, learning machines trained in a supervised manner based on a suitable loss function have been shown to approximate the performance of the NP detector \cite{Moya 2009}, \cite{Moya 2013}. As a representative example, in \cite{Moya 2013} a neural network trained in a supervised manner using data that includes Gaussian interference has been shown to approximate the performance of the NP detector. Note that the design of the NP detector requires explicit knowledge of the Gaussian nature of the interference, while the neural network is trained with data that happens to be Gaussian, although the machine has no prior knowledge of the statistical nature of the data. \subsection{Main contributions} In this work, we introduce two learning-based approaches for the joint design of waveform and detector in a radar system. Inspired by \cite{Aoudia 2019}, in the first approach end-to-end learning of the radar system is implemented by alternating supervised learning of the detector for a fixed waveform and RL-based learning of the transmitter for a fixed detector. In the second approach, the detector and waveform are learned simultaneously, potentially reducing the number of radar transmissions needed to produce training samples as compared to alternating training. In addition, we extend the problem formulation to include training of waveforms with PAR or spectral compatibility constraints. The main contributions of this paper are summarized as follows: \begin{enumerate} \item We formulate a radar system architecture based on the training of the detector and the transmitted waveform, both implemented as feedforward multi-layer neural networks. \item We develop two end-to-end learning algorithms for detection and waveform generation.
In the first learning algorithm, the detector and the transmitted waveform are trained alternately: for a fixed waveform, the detector is trained using supervised learning so as to approximate the NP detector; and for a fixed detector, the transmitted waveform is trained via policy gradient-based RL. In the second algorithm, the detector and transmitter are trained simultaneously. \item We extend the learning algorithms to incorporate waveform constraints, specifically PAR and spectral compatibility constraints. \item We provide theoretical results that relate alternating and simultaneous training by computing the gradients of the loss functions optimized by both methods. \item We also provide theoretical results that justify the use of RL-based transmitter training by comparing the gradient used by this procedure with the gradient of the ideal model-based likelihood function. \end{enumerate} This work extends previous results presented in the conference version \cite{Wei 2019NN}. In particular, reference \cite{Wei 2019NN} proposes a learning algorithm whereby supervised training of the radar detector is alternated with RL-based training of the unconstrained transmitted waveform. As compared to the conference version \cite{Wei 2019NN}, this paper also studies simultaneous training; it develops methods for learning radar waveforms under various operational constraints; and it provides theoretical results regarding the relationship between alternating and simultaneous training, as well as regarding the adoption of RL-based training of the transmitter. The rest of this paper is organized as follows. A detailed system description of the end-to-end radar system is presented in Section II. Section III proposes two iterative algorithms for jointly training the transmitter and receiver. Section IV provides theoretical properties of the gradients. Numerical results are reported in Section V. Finally, conclusions are drawn in Section VI. Throughout the paper, bold lowercase and uppercase letters represent vectors and matrices, respectively. The conjugate, the transpose, and the conjugate transpose operators are denoted by the symbols $(\cdot)^{*}$, $(\cdot)^{T}$, and $(\cdot)^{H}$, respectively. The notations $\mathbb{C}^{K}$ and $\mathbb{R}^{K}$ represent sets of $K$-dimensional vectors of complex and real numbers, respectively. The notation $|\cdot |$ indicates the modulus, $||\cdot ||$ indicates the Euclidean norm, and $\mathbb{E}_{x\sim p_{x}}\{\cdot \}$ indicates the expectation of the argument with respect to the distribution of the random variable $x\sim p_{x}$. $\Re(\cdot )$ and $\Im (\cdot )$ stand for the real and imaginary parts of the complex-valued argument, respectively. The letter $j$ represents the imaginary unit, i.e., $j=\sqrt{-1}$. The gradient of a function $f:\mathbb{R}^{n}\rightarrow \mathbb{R}^{m}$ with respect to $\mathbf{x}\in \mathbb{R}^{n}$ is $\nabla _{\mathbf{x}}f(\mathbf{x})\in \mathbb{R}^{n\times m}$. \section{Problem Formulation} Consider a pulse-compression radar system that uses the baseband transmit signal \begin{equation} x(t)=\sum_{k=1}^{K}y_k \zeta\big( t- [k-1]T_c\big), \label{eq: time tx signal} \end{equation} where $\zeta(t)$ is a fixed basic chip pulse, $T_c$ is the chip duration, and $\{y_k\}_{k=1}^K$ are complex deterministic coefficients. The vector $\mathbf{y}\triangleq[ y_1,\dots, y_K ]^T$ is referred to as the fast-time \emph{waveform} of the radar system, and is subject to design.
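As a simple illustration of the signal model (\ref{eq: time tx signal}), the following sketch synthesizes a sampled version of $x(t)$ from $K$ chip coefficients; the rectangular chip pulse, the polyphase code, and the oversampling factor are illustrative choices, not design outputs of the methods developed below.

\begin{verbatim}
import numpy as np

# Sampled baseband transmit signal: K complex chips y_k modulating a
# rectangular chip pulse zeta(t) of duration Tc.
K, ns = 8, 16                                  # chips, samples per chip
k = np.arange(K)
y = np.exp(1j * np.pi * 0.2 * k**2)            # an LFM-like polyphase code
chip = np.ones(ns)                             # rectangular zeta(t)
x = np.concatenate([yk * chip for yk in y])    # x(t) on a dense time grid
print(x.shape)                                 # (K*ns,) samples spanning K*Tc
\end{verbatim}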
The backscattered baseband signal from a stationary point-like target is given by \begin{equation} z(t)=\alpha x(t-\tau) + c(t) + n(t), \label{eq: time rx signal} \end{equation} where $\alpha$ is the target complex-valued gain, accounting for target backscattering and channel propagation effects; $\tau$ represents the target delay, which is assumed to satisfy the target detectability condition $\tau \gg KT_c$; $c(t)$ is the clutter component; and $n(t)$ denotes signal-independent noise comprising an aggregate of thermal noise, interference, and jamming. The clutter component $c(t)$ associated with a detection test performed at $\tau=0$ may be expressed as \begin{equation} c(t)=\sum_{g=-K+1}^{K-1}\gamma_g x\big( t- g T_c \big), \label{eq: time clutter} \end{equation} where $\gamma_g$ is the complex clutter scattering coefficient at time delay $\tau=0$ associated with the $g$th range cell relative to the cell under test. Following chip matched filtering with $\zeta^*(-t)$, and sampling at $T_c$-spaced time instants $t=\tau + [k-1] T_c$ for $k\in \{1, \dots, K\}$, the $K\times 1$ discrete-time received signal $\mathbf{z}=[z(\tau), z(\tau+T_c), \dots, z(\tau + [K-1]T_c)]^T$ for the range cell under test containing a point target with complex amplitude $\alpha$, clutter and noise can be written as \begin{equation} \mathbf{z}=\alpha \mathbf{y} + \mathbf{c} + \mathbf{n}, \label{eq: rx} \end{equation} where $\mathbf{c}$ and $\mathbf{n}$ denote, respectively, the clutter and noise vectors. Detection of the presence of a target in the range cell under test is formulated as the following binary hypothesis testing problem: \begin{equation} \left\{ \begin{aligned} &\mathcal{H}_0:{\mathbf{z}}={\mathbf{c}}+{\mathbf{n}} \\ &\mathcal{H}_1:{\mathbf{z}}=\alpha \mathbf{y}+{\mathbf{c}}+{\mathbf{n}}. \end{aligned} \right. \label{eq:binary hypo} \end{equation} In traditional radar design, the gold standard for detection is provided by the NP criterion of maximizing the probability of detection for a given probability of false alarm. Application of the NP criterion leads to the likelihood ratio test \begin{equation} \Lambda(\mathbf{z})=\frac{p(\mathbf{z}|\mathbf{y}, \mathcal{H}_1)}{p(\mathbf{z}|\mathbf{y}, \mathcal{H}_0)}\mathop{\gtrless}_{\mathcal{H}_0}^{\mathcal{H}_1} T_{\Lambda}, \label{eq: lrt} \end{equation} where $\Lambda(\mathbf{z})$ is the likelihood ratio, and $T_{\Lambda}$ is the detection threshold set based on the probability of false alarm constraint \cite{Richards 2010}. The NP criterion is also the gold standard for designing a radar waveform that adapts to the given environment, although, as discussed earlier, a direct application of this design principle is often intractable. The design of optimal detectors and/or waveforms under the NP criterion relies on channel models of the radar environment, namely, knowledge of the conditional probabilities $p(\mathbf{z}|\mathbf{y}, \mathcal{H}_i)$ for $i=\{0,1\}$. The channel model $p(\mathbf{z}|\mathbf{y}, \mathcal{H}_i)$ is the likelihood of the observation $\mathbf{z}$ conditioned on the transmitted waveform $\mathbf{y}$ and hypothesis $\mathcal{H}_i$. In the following, we introduce an end-to-end radar system in which the detector and waveform are jointly learned in a data-driven fashion.
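For concreteness, the following sketch generates labeled samples $(\mathbf{z}, i)$ according to (\ref{eq: time clutter})-(\ref{eq:binary hypo}); the Gaussian draws for $\alpha$, $\{\gamma_g\}$ and $\mathbf{n}$ are illustrative assumptions made only for this example, since the learning algorithms developed below sample the environment without ever requiring an explicit model of these distributions.

\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def cgauss(size, std):
    # circularly symmetric complex Gaussian draws (illustrative choice)
    return std * (rng.standard_normal(size)
                  + 1j * rng.standard_normal(size)) / np.sqrt(2)

def clutter(y, std_c=0.3):
    # c = sum_g gamma_g * (y shifted by g range cells)
    K = len(y)
    c = np.zeros(K, dtype=complex)
    for g in range(-K + 1, K):
        shifted = np.roll(y, g)
        if g > 0:
            shifted[:g] = 0      # make the shift linear rather than circular
        elif g < 0:
            shifted[g:] = 0
        c += cgauss((), std_c) * shifted
    return c

def sample(y, i, std_n=0.1):
    # one received vector z under hypothesis H_i, with i in {0, 1}
    alpha = cgauss((), 1.0)
    return i * alpha * y + clutter(y) + cgauss(len(y), std_n)

y = np.ones(8) / np.sqrt(8)      # placeholder unit-energy waveform
data = [(sample(y, i), i) for i in rng.integers(0, 2, size=1000)]
\end{verbatim}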
\subsection{End-to-end radar system} The end-to-end radar system illustrated in Fig. \ref{f:end_to_end_real} comprises a transmitter and a receiver that seek to detect the presence of a target. Transmitter and receiver are implemented as two separate parametric functions $f_{\boldsymbol{\theta}_T}(\cdot)$ and $f_{\boldsymbol{\theta}_R}(\cdot)$ with trainable parameter vectors $\boldsymbol{\theta}_T$ and $\boldsymbol{\theta}_R$, respectively. \begin{figure} \vspace{-3ex} \hspace{25ex} \includegraphics[width=1.3 \linewidth]{radar_system} \vspace{-141ex} \caption{An end-to-end radar system operating over an unknown radar environment. Transmitter and receiver are implemented as two separate parametric functions $f_{\boldsymbol{\theta}_T}(\cdot)$ and $f_{\boldsymbol{\theta}_R}(\cdot)$ with trainable parameter vectors $\boldsymbol{\theta}_T$ and $\boldsymbol{\theta}_R$, respectively.} \label{f:end_to_end_real} \end{figure} As shown in Fig. \ref{f:end_to_end_real}, the input to the transmitter is a user-defined initialization waveform ${\mathbf{s}}\in \mathbb{C}^{K}$. The transmitter outputs a radar waveform obtained through a trainable mapping $\mathbf{y}_{\boldsymbol{\theta}_T}=f_{\boldsymbol{\theta}_T}(\mathbf{s}) \in \mathbb{C}^K$. The environment is modeled as a stochastic system that produces the vector $\mathbf{z}\in \mathbb{C}^{K}$ from a conditional PDF $p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T}, \mathcal{H}_i)$ parameterized by a binary variable $i\in \{0,1\}$. The absence or presence of a target is indicated by the values $i=0$ and $i=1$, respectively, and hence $i$ is referred to as the \emph{target state indicator}. The receiver passes the received vector $\mathbf{z}$ through a trainable mapping $p=f_{\boldsymbol{\theta}_R}(\mathbf{z})$, which produces the scalar $p\in (0,1)$. The final decision $\hat{i}\in \{0,1\}$ is made by comparing the output of the receiver $p$ to a hard threshold in the interval $(0,1)$. \subsection{Transmitter and Receiver Architectures} As discussed in Section II-A, the transmitter and the receiver are implemented as two separate parametric functions $f_{\boldsymbol{\theta}_T}(\cdot)$ and $f_{\boldsymbol{\theta}_R}(\cdot)$. We now detail an implementation of the transmitter $f_{\boldsymbol{\theta}_T}(\cdot)$ and receiver $f_{\boldsymbol{\theta}_R}(\cdot )$ based on feedforward neural networks. A feedforward neural network is a parametric function $\tilde{f}_{\boldsymbol{\theta}}(\cdot )$ that maps an input real-valued vector $\mathbf{u}_{\text{in}}\in \mathbb{R}^{N_{\text{in}}}$ to an output real-valued vector $\mathbf{u}_{\text{out}}\in \mathbb{R}^{N_{\text{out}}}$ via $L$ successive layers, where $N_{\text{in}}$ and $N_{\text{out}}$ represent, respectively, the numbers of neurons of the input and output layers. Noting that the input to the $l$th layer is the output of the $(l-1)$th layer, the output of the $l$th layer is given by \begin{equation} \mathbf{u}_{l}=\tilde{f}_{\boldsymbol{\theta}^{[l]}}(\mathbf{u}_{l-1})=\phi \big(\mathbf{W}^{[l]}\mathbf{u}_{l-1}+\mathbf{b}^{[l]}\big),\text{ for }l=1,\dots ,L, \end{equation} where $\phi (\cdot )$ is an element-wise activation function, and $\boldsymbol{\theta}^{[l]}=\{\mathbf{W}^{[l]},\mathbf{b}^{[l]}\}$ contains the trainable parameters of the $l$th layer, comprising the weight matrix $\mathbf{W}^{[l]}$ and the bias $\mathbf{b}^{[l]}$. The vector of trainable parameters of the entire neural network comprises the parameters of all layers, i.e., $\boldsymbol{\theta }=\text{vec}\{\boldsymbol{\theta}^{[1]},\cdots,\boldsymbol{\theta}^{[L]}\}$. The architecture of the end-to-end radar system with transmitter and receiver implemented based on feedforward neural networks is shown in Fig. \ref{f: arch}.
The transmitter applies the complex initialization waveform $\mathbf{s}$ to the function $f_{\boldsymbol{\theta }_{T}}(\cdot)$. The complex-valued input $\mathbf{s}$ is processed by a complex-to-real conversion layer. This is followed by a real-valued neural network $\tilde{f}_{\boldsymbol{\theta}_T}(\cdot)$. The output of the neural network is converted back to complex values, and an output layer normalizes the transmitted power. As a result, the transmitter generates the radar waveform $\mathbf{y}_{\boldsymbol{\theta}_T}$. The receiver applies the received signal $\mathbf{z}$ to the function $f_{\boldsymbol{\theta }_{R}}(\cdot )$. Similar to the transmitter, a first layer converts complex-valued vectors to real-valued vectors. The neural network at the receiver is denoted $\tilde{f}_{\boldsymbol{\theta}_R}(\cdot)$. The task of the receiver is to generate a scalar $p\in (0,1)$ that approximates the posterior probability of the presence of a target conditioned on the received vector $\mathbf{z}$. To this end, the last layer of the neural network $\tilde{f}_{\boldsymbol{\theta}_R}(\cdot)$ is selected as a logistic regression layer, which applies a sigmoid function to a linear combination of the outputs of the previous layer. The presence or absence of the target is determined based on the output of the receiver and a threshold set according to a false alarm constraint. \begin{figure} \vspace{-5ex} \hspace{17ex} \includegraphics[width=1.2\linewidth]{archi2} \vspace{-109ex} \caption{Transmitter and receiver architectures based on feedforward neural networks.} \label{f: arch} \end{figure} \section{Training of End-to-End Radar Systems} This section discusses the joint optimization of the trainable parameter vectors $\boldsymbol{\theta }_{T}$ and $\boldsymbol{\theta }_{R}$ to meet application-specific performance requirements. Two training algorithms are proposed to train the end-to-end radar system. The first algorithm alternates between training of the receiver and of the transmitter. This algorithm is referred to as \emph{alternating training}, and is inspired by the approach used in \cite{Aoudia 2019} to train the encoder and decoder of a digital communication system. In contrast, the second algorithm trains the receiver and transmitter simultaneously. This approach is referred to as \emph{simultaneous training}. Note that the two proposed training algorithms are applicable to other differentiable parametric functions implementing the transmitter $f_{\boldsymbol{\theta }_{T}}(\cdot )$ and the receiver $f_{\boldsymbol{\theta }_{R}}(\cdot )$, such as recurrent neural networks and their variants \cite{deeplearning}. In the following, we first discuss alternating training and then we detail simultaneous training. \subsection{Alternating Training: Receiver Design} Alternating training consists of iterations encompassing separate receiver and transmitter updates. In this subsection, we focus on the receiver updates. A receiver training update optimizes the receiver parameter vector $\boldsymbol{\theta }_{R}$ for a fixed transmitted waveform $\mathbf{y}_{\boldsymbol{\theta}_T}$. Receiver training is supervised in the sense that we assume the target state indicator $i$ to be available to the receiver during training. Supervised training of the receiver for a fixed transmitter parameter vector $\boldsymbol{\theta}_T$ is illustrated in Fig. \ref{f:rx_training}.
\begin{figure}[H] \vspace{-4ex} \hspace{16ex} \includegraphics[width=1.3\linewidth]{rx_training} \vspace{-146ex} \caption{Supervised training of the receiver for a fixed transmitted waveform.} \label{f:rx_training} \end{figure} The standard cross-entropy loss \cite{Moya 2013} is adopted as the loss function for the receiver. For a given transmitted waveform $\mathbf{y}_{\boldsymbol{\theta}_T}=f_{\boldsymbol{\theta}_T}(\mathbf{s})$, the average receiver loss function is accordingly given by \begin{equation} \begin{aligned} \mathcal{L}_R(\boldsymbol{\theta}_R)=&\sum_{i\in\{0,1\}}P(\mathcal{H}_i)\mathbb{E}_{\substack{ \mathbf{z}\sim p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i)}}\big\{\ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i\big)\big\}, \end{aligned}\label{eq: rx loss} \end{equation} where $P(\mathcal{H}_i)$ is the prior probability of the target state indicator $i$, and $\ell\big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i\big)$ is the instantaneous cross-entropy loss for a pair $\big(f_{\boldsymbol{\theta}_R}(\mathbf{z}), i\big)$, namely, \begin{equation} \ell\big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i\big)=-i\ln f_{\boldsymbol{\theta}_{R}}(\mathbf{z})-(1-i)\ln\big[1- f_{\boldsymbol{\theta}_{R}}(\mathbf{z})\big]. \label{eq: loss inst} \end{equation} For a fixed transmitted waveform, the receiver parameter vector $\boldsymbol{\theta}_R$ should ideally be optimized by minimizing (\ref{eq: rx loss}), e.g., via gradient descent or one of its variants \cite{SGD}. The gradient of the average loss (\ref{eq: rx loss}) with respect to the receiver parameter vector $\boldsymbol{\theta}_R$ is \begin{equation} {\nabla}_{\boldsymbol{\theta}_R}\mathcal{L}_R(\boldsymbol{\theta}_R)=\sum_{i\in\{0,1\}}P(\mathcal{H}_i)\mathbb{E}_{\substack{ \mathbf{z}\sim p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i)}}\big\{ {\nabla}_{\boldsymbol{\theta}_R} \ell\big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i\big)\big\}. \label{eq: rx loss grad.} \end{equation} Since this is a data-driven approach, rather than assuming known prior probabilities $P(\mathcal{H}_i)$ and likelihoods $p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i)$, the receiver is assumed to have access to $Q_R$ independent and identically distributed (i.i.d.) samples $\mathcal{D}_R=\big\{ \mathbf{z}^{(q)}\sim p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_{i^{(q)}}), {i^{(q)}}\in\{0,1\} \big\}_{q=1}^{Q_R}$. Given the output of the receiver function $f_{\boldsymbol{\theta}_R}(\mathbf{z}^{(q)})$ for a received sample vector $\mathbf{z}^{(q)}$ and the indicator $i^{(q)}\in \{0,1 \}$, the instantaneous cross-entropy loss is computed from (\ref{eq: loss inst}), and the estimated receiver gradient is given by \begin{equation} {\nabla}_{\boldsymbol{\theta}_R}\widehat{\mathcal{L}}_R(\boldsymbol{\theta}_R)=\frac{1}{Q_R}\sum_{q=1}^{Q_R} {\nabla}_{\boldsymbol{\theta}_R} \ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}^{(q)}),{i^{(q)}} \big). \label{eq: est. rx loss grad} \end{equation} Using (\ref{eq: est. rx loss grad}), the receiver parameter vector $\boldsymbol{\theta}_R$ is adjusted according to the stochastic gradient descent updates \begin{equation} \boldsymbol{\theta }_R^{(n+1)}=\boldsymbol{\theta}_R^{(n)} -\epsilon {\nabla}_{\boldsymbol{\theta}_R}\widehat{\mathcal{L}}_R(\boldsymbol{\theta}_R^{(n)}) \label{eq: rx sgd} \end{equation} across iterations $n=1,2,\cdots$, where $\epsilon >0$ is the learning rate.
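A minimal sketch of this supervised receiver update, using the complex-to-real conversion and the logistic output layer of Section II-B, is given below; the layer sizes, the learning rate, and the use of PyTorch are illustrative implementation assumptions.

\begin{verbatim}
import torch
import torch.nn as nn

K = 8
receiver = nn.Sequential(           # f_{theta_R}: real 2K input -> p in (0,1)
    nn.Linear(2 * K, 48), nn.ReLU(),
    nn.Linear(48, 1), nn.Sigmoid(),
)
opt_R = torch.optim.SGD(receiver.parameters(), lr=1e-2)
bce = nn.BCELoss(reduction="none")  # instantaneous cross-entropy loss

def c2r(z):
    # complex K-vector(s) -> real 2K-vector(s)
    return torch.cat([z.real, z.imag], dim=-1).float()

def receiver_step(z_batch, i_batch):
    # one stochastic gradient update on Q_R labeled samples (z, i)
    p = receiver(c2r(z_batch)).squeeze(-1)
    loss = bce(p, i_batch.float()).mean()   # empirical average loss
    opt_R.zero_grad(); loss.backward(); opt_R.step()
    return loss.item()
\end{verbatim}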
\subsection{Alternating Training: Transmitter Design} In the transmitter training phase of alternating training, the receiver parameter vector $\boldsymbol{\theta}_R$ is held constant, and the function $f_{\boldsymbol{\theta}_T}(\cdot)$ implementing the transmitter is optimized. The goal of transmitter training is to find an optimized parameter vector $\boldsymbol{\theta}_T$ that minimizes the cross-entropy loss function (\ref{eq: rx loss}) seen as a function of $\boldsymbol{\theta}_T$. As illustrated in Fig. \ref{f:tx_training}, a stochastic transmitter outputs a waveform $\mathbf{a}$ drawn from a distribution $\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})$ conditioned on $\mathbf{y}_{\boldsymbol{\theta}_T}=f_{\boldsymbol{\theta}_T}(\mathbf{s})$. The introduction of the randomization $\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})$ of the designed waveform $\mathbf{y}_{\boldsymbol{\theta}_T}$ is useful to enable exploration of the design space in a manner akin to standard RL policies. To train the transmitter, we aim to minimize the average cross-entropy loss \begin{equation} \begin{aligned} \mathcal{L}^{\pi}_T(\boldsymbol{\theta}_T)=&\sum_{i\in\{0,1\}}P(\mathcal{H}_i)\mathbb{E}_{\substack{ \mathbf{a}\sim \pi (\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T}) \\ \mathbf{z}\sim p(\mathbf{z}|\mathbf{a},\mathcal{H}_i)}}\big\{\ell\big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i \big)\big\}. \label{eq: tx loss RL no constraint} \end{aligned} \end{equation} Note that this is consistent with (\ref{eq: rx loss}), with the caveat that an expectation is taken over policy $\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})$. This is indicated by the superscript ``$\pi$''. \begin{figure}[H] \vspace{-4ex} \hspace{15ex} \includegraphics[width=1.3\linewidth]{tx_training} \vspace{-141ex} \caption{RL-based transmitter training for a fixed receiver design.} \label{f:tx_training} \end{figure} Assume that the policy $\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})$ is differentiable with respect to the transmitter parameter vector $\boldsymbol{\theta}_T$, i.e., that the gradient $\nabla_{\boldsymbol{\theta }_T}\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})$ exists. The policy gradient theorem \cite{Sutton 2000} states that the gradient of the average loss (\ref{eq: tx loss RL no constraint}) can be written as \begin{equation} \begin{aligned} \nabla_{\boldsymbol{\theta}_T}\mathcal{L}^{\pi}_T(\boldsymbol{\theta}_T)=&\sum_{i\in\{0,1\}}P(\mathcal{H}_i)\mathbb{E}_{\substack{ \mathbf{a}\sim \pi (\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T}) \\ \mathbf{z}\sim p(\mathbf{z}|\mathbf{a},\mathcal{H}_i)}}\big\{\ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}), i\big)\nabla_{\boldsymbol{\theta}_T}\ln\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})\big\}. \label{eq: tx loss RL grad} \end{aligned} \end{equation} The gradient (\ref{eq: tx loss RL grad}) has the important advantage that it may be estimated via $Q_T$ i.i.d. samples $\mathcal{D}_T=\big\{\mathbf{a}^{(q)}\sim \pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T}), \mathbf{z}^{(q)}\sim p(\mathbf{z}|\mathbf{a}^{(q)},\mathcal{H}_{i^{(q)}}), i^{(q)}\in \{0,1\} \big\}_{q=1}^{Q_T}$, yielding the estimate \begin{equation} {\nabla}_{\boldsymbol{\theta}_T}\widehat{\mathcal{L}}^{\pi}_T(\boldsymbol{\theta}_T)=\frac{1}{Q_T}\sum_{q=1}^{Q_T} \ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}^{(q)}),i^{(q)}\big) \nabla_{\boldsymbol{\theta}_T}\ln\pi(\mathbf{a}^{(q)}|\mathbf{y}_{\boldsymbol{\theta}_T}). 
\label{eq: tx loss RL grad est} \end{equation} With the estimate (\ref{eq: tx loss RL grad est}), in a manner similar to (\ref{eq: rx sgd}), the transmitter parameter vector $\boldsymbol{\theta}_T$ may be optimized iteratively according to the stochastic gradient descent update rule \begin{equation} \begin{aligned} &\boldsymbol{\theta }_T^{(n+1)}=\boldsymbol{\theta}_T^{(n)} -\epsilon {\nabla}_{\boldsymbol{\theta}_T}\widehat{\mathcal{L}}^{\pi}_T(\boldsymbol{\theta}_T^{(n)}) \end{aligned} \label{eq: tx sgd} \end{equation} over iterations $n=1,2,\cdots$. The alternating training algorithm is summarized as Algorithm 1. The training process is carried out until a stopping criterion is satisfied. For example, a prescribed number of iterations may have been reached, or a number of iterations may have elapsed during which the training loss (\ref{eq: tx loss RL no constraint}), estimated using the samples $\mathcal{D}_T$, has not decreased by more than a given amount. \DontPrintSemicolon \begin{algorithm}[] \SetAlgoLined \KwIn{initialization waveform $\mathbf{s}$; stochastic policy $\pi(\cdot|\mathbf{y}_{\boldsymbol{\theta}_T})$; learning rate $\epsilon$} \KwOut{learned parameter vectors $\boldsymbol{\theta}_R$ and $\boldsymbol{\theta}_T$} initialize $\boldsymbol{\theta}_R^{(0)}$ and $\boldsymbol{\theta}_T^{(0)}$, and set $n=0$\; \While{stopping criterion not satisfied}{ \tcc{receiver training phase} evaluate the receiver loss gradient ${\nabla}_{\boldsymbol{\theta}_R}\widehat{\mathcal{L}}_R(\boldsymbol{\theta}_R^{(n)})$ from (\ref{eq: est. rx loss grad}) with $\boldsymbol{\theta}_T=\boldsymbol{\theta}_T^{(n)}$ and the stochastic transmitter policy turned off\; update the receiver parameter vector $\boldsymbol{\theta}_R$ via \begin{equation*} \boldsymbol{\theta }_R^{(n+1)}=\boldsymbol{\theta}_R^{(n)} -\epsilon {\nabla}_{\boldsymbol{\theta}_R}\widehat{\mathcal{L}}_R(\boldsymbol{\theta}_R^{(n)}) \end{equation*}\; \tcc{transmitter training phase} evaluate the transmitter loss gradient ${\nabla}_{\boldsymbol{\theta}_T}\widehat{\mathcal{L}}^{\pi}_{T}(\boldsymbol{\theta}_T^{(n)})$ from (\ref{eq: tx loss RL grad est}) with $\boldsymbol{\theta}_R=\boldsymbol{\theta}_R^{(n+1)}$\; update the transmitter parameter vector $\boldsymbol{\theta}_T$ via \begin{equation*} \boldsymbol{\theta }_T^{(n+1)}=\boldsymbol{\theta}_T^{(n)} -\epsilon {\nabla}_{\boldsymbol{\theta}_T}\widehat{\mathcal{L}}^{\pi}_{T}(\boldsymbol{\theta}_T^{(n)}) \end{equation*}\; $n\leftarrow n+1$ } \caption{Alternating Training} \end{algorithm}
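To make the RL-based transmitter update concrete, the following sketch implements the estimator (\ref{eq: tx loss RL grad est}) for a Gaussian exploration policy applied in the real representation of the waveform; the Gaussian policy, the exploration standard deviation, and the environment stub \texttt{sample\_env} are illustrative assumptions, while \texttt{receiver}, \texttt{bce} and \texttt{c2r} are those of the previous sketch.

\begin{verbatim}
transmitter = nn.Sequential(        # f_{theta_T}: real 2K input -> real 2K
    nn.Linear(2 * K, 48), nn.ReLU(),
    nn.Linear(48, 2 * K),
)
opt_T = torch.optim.SGD(transmitter.parameters(), lr=1e-3)
sigma_pi = 0.1                      # exploration std (illustrative)

def transmitter_step(s_real, sample_env, Q_T=64):
    y = transmitter(s_real)
    y = y / y.norm()                # unit-power normalization layer
    policy = torch.distributions.Normal(y, sigma_pi)
    a = policy.sample((Q_T,))       # explored waveforms (no gradient)
    i = torch.randint(0, 2, (Q_T,))
    z = sample_env(a, i)            # real 2K echoes from the environment
    with torch.no_grad():
        ell = bce(receiver(z).squeeze(-1), i.float())
    # surrogate loss whose gradient is the policy-gradient estimate
    surrogate = (ell * policy.log_prob(a).sum(-1)).mean()
    opt_T.zero_grad(); surrogate.backward(); opt_T.step()
\end{verbatim}

Weighting the score function $\nabla_{\boldsymbol{\theta}_T}\ln\pi(\mathbf{a}^{(q)}|\mathbf{y}_{\boldsymbol{\theta}_T})$ by the per-sample losses is what allows the update to bypass the unknown channel likelihood.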
\subsection{Transmitter Design with Constraints} We extend the transmitter training discussed in the previous section to incorporate waveform constraints on PAR and spectral compatibility. To this end, we introduce penalty functions that are used to modify the training criterion (\ref{eq: tx loss RL no constraint}) so as to meet these constraints. \subsubsection{PAR Constraint} Low PAR waveforms are preferred in radar systems due to hardware limitations related to waveform generation. A lower PAR entails a lower dynamic range of the power amplifier, which in turn allows an increase in average transmitted power. The PAR of a radar waveform ${\mathbf{y}}_{\boldsymbol{\theta}_T}=f_{\boldsymbol{\theta}_T}(\mathbf{s})$ may be expressed as \begin{equation} J_{\text{PAR}}(\boldsymbol{\theta}_T)=\frac{\underset{k=1,\cdots ,K}{\max }|{y}_{\boldsymbol{\theta}_T, k}|^{2}}{||{\mathbf{y}}_{\boldsymbol{\theta}_T}||^{2}/K}, \label{eq: PAPR complex} \end{equation} which is bounded according to $1\leq J_{\text{PAR}}(\boldsymbol{\theta}_T)\leq K$. \subsubsection{Spectral Compatibility Constraint} A spectral constraint is imposed when a radar system is required to operate over a spectrum partially shared with other systems such as wireless communication networks. Suppose there are $D$ frequency bands $\{\Gamma _{d}\}_{d=1}^{D}$ shared by the radar and by the coexisting systems, where $\Gamma _{d}=[f_{d,l},f_{d,u}]$, with $f_{d,l}$ and $f_{d,u}$ denoting the lower and upper normalized frequencies of the $d$th band, respectively. The amount of interfering energy generated by the radar waveform ${\mathbf{y}}_{\boldsymbol{\theta}_T}$ in the $d$th shared band is \begin{equation} \int_{f_{d,l}}^{f_{d,u}}\left\vert \sum_{k=0}^{K-1}{y}_{\boldsymbol{\theta}_T, k}e^{-j2\pi fk}\right\vert^{2}df={\mathbf{y}}^{H}_{\boldsymbol{\theta}_T}{\boldsymbol{\Omega }}_{d}{\mathbf{y}}_{\boldsymbol{\theta}_T}, \label{eq: waveform energy} \end{equation} where \begin{equation} \begin{aligned} \big[ {\boldsymbol{\Omega}}_d \big]_{v,h} & =\left\{ \begin{aligned} &f_{d,u}-f_{d,l} \qquad\qquad\qquad\qquad\text{if } v=h\\ &\frac{e^{j2\pi f_{d,u}(v-h)}-e^{j2\pi f_{d,l}(v-h)}}{j2\pi (v-h)} \quad \text{ if } v\neq h \end{aligned} \right. \end{aligned} \end{equation} for $(v,h)\in \{1,\cdots,K\}^2$. Let ${\boldsymbol{\Omega }}=\sum_{d=1}^{D}\omega_{d}{\boldsymbol{\Omega }}_{d}$ be a weighted interference covariance matrix, where the weights $\{\omega _{d}\}_{d=1}^{D}$ are assigned based on practical considerations regarding the impact of interference in the $D$ bands. These include the distance between the radar transmitter and the interfered systems, and the tactical importance of the coexisting systems \cite{Aubry2015}. Given a radar waveform $\mathbf{y}_{\boldsymbol{\theta}_T}=f_{\boldsymbol{\theta}_T}(\mathbf{s})$, we define the spectral compatibility penalty function as \begin{equation} J_{\text{spectrum}}(\boldsymbol{\theta}_T)={\mathbf{y}}^{H}_{\boldsymbol{\theta}_T}{\boldsymbol{\Omega }}{\mathbf{y}}_{\boldsymbol{\theta}_T}, \label{eq: spectrum complex} \end{equation} which is the total interfering energy produced by the radar waveform on the shared frequency bands.
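Both penalties are simple deterministic functions of the waveform, as the following sketch shows; the band edges and weights are illustrative.

\begin{verbatim}
import numpy as np

def par(y):
    # PAR of a discrete waveform: K * max_k |y_k|^2 / ||y||^2
    return len(y) * np.max(np.abs(y)**2) / np.sum(np.abs(y)**2)

def omega_band(K, f_l, f_u):
    # interference matrix Omega_d of the shared band [f_l, f_u]
    d = np.arange(K)[:, None] - np.arange(K)[None, :]
    safe = np.where(d == 0, 1, d)   # avoid dividing by zero on the diagonal
    off = (np.exp(2j*np.pi*f_u*d) - np.exp(2j*np.pi*f_l*d)) / (2j*np.pi*safe)
    return np.where(d == 0, f_u - f_l, off)

K = 8
y = np.ones(K) / np.sqrt(K)         # constant modulus, hence PAR = 1
Omega = 1.0 * omega_band(K, 0.10, 0.15) + 0.5 * omega_band(K, 0.30, 0.35)
J_spectrum = np.real(np.conj(y) @ Omega @ y)
print(par(y), J_spectrum)
\end{verbatim}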
\subsubsection{Constrained Transmitter Design} For a fixed receiver parameter vector $\boldsymbol{\theta}_R$, the average loss (\ref{eq: tx loss RL no constraint}) is modified by introducing a penalty function $J\in\{ J_{\text{PAR}}, J_{\text{spectrum}}\}$. Accordingly, we formulate the transmitter loss function, encompassing (\ref{eq: tx loss RL no constraint}), (\ref{eq: PAPR complex}) and (\ref{eq: spectrum complex}), as \begin{equation} \begin{aligned} \mathcal{L}^{\pi}_{T,c}(\boldsymbol{\theta}_T)&=\mathcal{L}^{\pi}_T(\boldsymbol{\theta}_T)+\lambda J(\boldsymbol{\theta}_T)\\ &=\sum_{i\in\{0,1\}}P(\mathcal{H}_i)\mathbb{E}_{\substack{ \mathbf{a}\sim \pi (\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T}) \\ \mathbf{z}\sim p(\mathbf{z}|\mathbf{a},\mathcal{H}_i)}}\big\{\ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i \big)\big\}+\lambda J(\boldsymbol{\theta}_T), \label{eq: tx loss RL} \end{aligned} \end{equation} where $\lambda$ controls the weight of the penalty $J(\boldsymbol{\theta}_T)$, and is referred to as the \emph{penalty parameter}. When the penalty parameter $\lambda$ is small, the transmitter is trained to improve its ability to adapt to the environment, while placing less emphasis on reducing the PAR level or the interference energy of the radar waveform; and vice versa for large values of $\lambda$. Note that the waveform penalty function $J(\boldsymbol{\theta}_T)$ depends only on the transmitter trainable parameters $\boldsymbol{\theta}_T$. Thus, imposing the waveform constraint does not affect the receiver training. The estimated version of the gradient of (\ref{eq: tx loss RL}) with respect to $\boldsymbol{\theta}_T$ is obtained by introducing the penalty as \begin{equation} \nabla_{\boldsymbol{\theta}_T}\widehat{\mathcal{L}}^{\pi}_{T,c}(\boldsymbol{\theta}_T)=\nabla_{\boldsymbol{\theta}_T}\widehat{\mathcal{L}}^{\pi}_T(\boldsymbol{\theta}_T)+\lambda \nabla_{\boldsymbol{\theta}_T}J(\boldsymbol{\theta}_T), \label{eq: est tx loss grad constraint} \end{equation} where the gradient of the penalty function $\nabla_{\boldsymbol{\theta}_T}J(\boldsymbol{\theta}_T)$ is provided in Appendix A. Substituting (\ref{eq: tx loss RL grad est}) into (\ref{eq: est tx loss grad constraint}), we finally have the estimated gradient \begin{equation} \nabla_{\boldsymbol{\theta}_T} \widehat{\mathcal{L}}^{\pi}_{T,c}(\boldsymbol{\theta}_T)=\frac{1}{Q_T}\sum_{q=1}^{Q_T} \ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}^{(q)}),i^{(q)}\big) \nabla_{\boldsymbol{\theta}_T}\ln\pi(\mathbf{a}^{(q)}|\mathbf{y}_{\boldsymbol{\theta}_T})+\lambda \nabla_{\boldsymbol{\theta}_T} J(\boldsymbol{\theta}_T), \label{eq: est tx loss grad constraint2} \end{equation} which is used in the stochastic gradient update rule \begin{equation} \begin{aligned} &\boldsymbol{\theta }_T^{(n+1)}=\boldsymbol{\theta}_T^{(n)} -\epsilon {\nabla}_{\boldsymbol{\theta}_T}\widehat{\mathcal{L}}^{\pi}_{T,c}(\boldsymbol{\theta}_T^{(n)}) \quad \text{for }n=1,2,\cdots. \end{aligned} \label{eq: tx constraint_sgd} \end{equation} \subsection{Simultaneous Training} This subsection discusses simultaneous training, in which the receiver and transmitter are updated simultaneously, as illustrated in Fig. \ref{f:joint_training}. To this end, the objective function is the average loss \begin{equation} \begin{aligned} \mathcal{L}^{\pi}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T)=&\sum_{i\in\{0,1\}}P(\mathcal{H}_i)\mathbb{E}_{\substack{ \mathbf{a}\sim \pi (\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T}) \\ \mathbf{z}\sim p(\mathbf{z}|\mathbf{a},\mathcal{H}_i)}}\big\{\ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i \big)\big\}. \label{eq: joint loss} \end{aligned} \end{equation} This function is minimized over both parameter vectors $\boldsymbol{\theta}_R$ and $\boldsymbol{\theta}_T$ via stochastic gradient descent. \begin{figure}[H] \vspace{-3ex} \hspace{16ex} \includegraphics[width=1.2\linewidth]{joint_training} \vspace{-131ex} \caption{Simultaneous training of the end-to-end radar system. The receiver is trained by supervised learning, while the transmitter is trained by RL.} \label{f:joint_training} \end{figure}
The gradient of (\ref{eq: joint loss}) with respect to $\boldsymbol{\theta}_R$ is \begin{equation} \nabla_{\boldsymbol{\theta}_R}\mathcal{L}^{\pi}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T)=\sum_{i\in\{0,1\}}P(\mathcal{H}_i)\mathbb{E}_{\substack{ \mathbf{a}\sim \pi (\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T}) \\ \mathbf{z}\sim p(\mathbf{z}|\mathbf{a},\mathcal{H}_i)}}\big\{\nabla_{\boldsymbol{\theta}_R}\ell\big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i \big)\big\}, \label{eq: rx loss grad joint} \end{equation} and the gradient of (\ref{eq: joint loss}) with respect to $\boldsymbol{\theta}_T$ is \begin{equation} \begin{aligned} \nabla_{\boldsymbol{\theta}_T}\mathcal{L}^{\pi}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T)=&\sum_{i\in\{0,1\}}P(\mathcal{H}_i)\nabla_{\boldsymbol{\theta}_T} \mathbb{E}_{\substack{ \mathbf{a}\sim \pi (\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T}) \\ \mathbf{z}\sim p(\mathbf{z}|\mathbf{a},\mathcal{H}_i)}}\big\{ \ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}), i\big)\big\}\\ =&\sum_{i\in\{0,1\}}P(\mathcal{H}_i)\mathbb{E}_{\substack{ \mathbf{a}\sim \pi (\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T}) \\ \mathbf{z}\sim p(\mathbf{z}|\mathbf{a},\mathcal{H}_i)}}\big\{ \ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}), i\big)\nabla_{\boldsymbol{\theta}_T}\ln\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})\big\}. \label{eq: tx loss RL grad joint} \end{aligned} \end{equation} To estimate the gradients (\ref{eq: rx loss grad joint}) and (\ref{eq: tx loss RL grad joint}), we assume access to $Q$ i.i.d. samples $\mathcal{D}=\big\{\mathbf{a}^{(q)}\sim \pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T}), \mathbf{z}^{(q)}\sim p(\mathbf{z}|\mathbf{a}^{(q)},\mathcal{H}_{i^{(q)}}), i^{(q)}\in \{0,1\} \big\}_{q=1}^{Q}$. From (\ref{eq: rx loss grad joint}), the estimated receiver gradient is \begin{equation} \nabla_{\boldsymbol{\theta}_R}\widehat{\mathcal{L}}^{\pi}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T)=\frac{1}{Q}\sum_{q=1}^{Q}\nabla_{\boldsymbol{\theta}_R}\ell\big( f_{\boldsymbol{\theta}_R}(\mathbf{z}^{(q)}),i ^{(q)}\big). \label{eq: rx loss grad joint est} \end{equation} Note that, in (\ref{eq: rx loss grad joint est}), the received vector $\mathbf{z}^{(q)}$ is obtained based on a given waveform $\mathbf{a}^{(q)}$ sampled from the policy $\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})$. Thus, the estimated receiver gradient (\ref{eq: rx loss grad joint est}) is averaged over the stochastic waveforms $\mathbf{a}$. This is in contrast to alternating training, in which the receiver gradient depends directly on the transmitted waveform $\mathbf{y}_{\boldsymbol{\theta}_T}$. From (\ref{eq: tx loss RL grad joint}), the estimated transmitter gradient is given by \begin{equation} \nabla_{\boldsymbol{\theta}_T}\widehat{\mathcal{L}}^{\pi}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T)=\frac{1}{Q}\sum_{q=1}^{Q} \ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}^{(q)}),i^{(q)}\big) \nabla_{\boldsymbol{\theta}_T}\ln\pi(\mathbf{a}^{(q)}|\mathbf{y}_{\boldsymbol{\theta}_T}). \label{eq: tx loss RL grad joint est} \end{equation}
Finally, denoting the parameter set as $\boldsymbol{\theta}=\{\boldsymbol{\theta }_{R}, \boldsymbol{\theta }_{T} \}$, from (\ref{eq: rx loss grad joint est}) and (\ref{eq: tx loss RL grad joint est}) the trainable parameter set $\boldsymbol{\theta}$ is updated according to the stochastic gradient descent rule \begin{equation} \boldsymbol{\theta}^{(n+1)}=\boldsymbol{\theta}^{(n)} -\epsilon {\nabla }_{\boldsymbol{\theta }}\widehat{\mathcal{L}}^{\pi}(\boldsymbol{\theta}_R^{(n)}, \boldsymbol{\theta}_T^{(n)}) \label{eq: sgd} \end{equation} across iterations $n=1,2,\cdots$. The simultaneous training algorithm is summarized in Algorithm 2. Like alternating training, simultaneous training can be directly extended to incorporate prescribed waveform constraints by adding the penalty term $\lambda J(\boldsymbol{\theta}_T)$ to the average loss (\ref{eq: joint loss}). \DontPrintSemicolon \begin{algorithm} \SetAlgoLined \KwIn{initialization waveform $\mathbf{s}$; stochastic policy $\pi(\cdot|\mathbf{y}_{\boldsymbol{\theta}_T})$; learning rate $\epsilon$} \KwOut{learned parameter vectors $\boldsymbol{\theta}_R$ and $\boldsymbol{\theta}_T$} initialize $\boldsymbol{\theta}_R^{(0)}$ and $\boldsymbol{\theta}_T^{(0)}$, and set $n=0$\; \While{stopping criterion not satisfied}{ evaluate the receiver gradient $\nabla_{\boldsymbol{\theta}_R}\widehat{\mathcal{L}}^{\pi}(\boldsymbol{\theta}_R^{(n)}, \boldsymbol{\theta}_T^{(n)})$ and the transmitter gradient $\nabla_{\boldsymbol{\theta}_T}\widehat{\mathcal{L}}^{\pi}(\boldsymbol{\theta}_R^{(n)}, \boldsymbol{\theta}_T^{(n)})$ from (\ref{eq: rx loss grad joint est}) and (\ref{eq: tx loss RL grad joint est}), respectively\; update the receiver parameter vector $\boldsymbol{\theta}_R$ and the transmitter parameter vector $\boldsymbol{\theta}_T$ simultaneously via \begin{equation*} \boldsymbol{\theta}_R^{(n+1)}=\boldsymbol{\theta}_R^{(n)} -\epsilon {\nabla }_{\boldsymbol{\theta }_R}\widehat{\mathcal{L}}^{\pi}(\boldsymbol{\theta}_R^{(n)}, \boldsymbol{\theta}_T^{(n)}) \end{equation*} and \begin{equation*} \boldsymbol{\theta}_T^{(n+1)}=\boldsymbol{\theta}_T^{(n)} -\epsilon {\nabla }_{\boldsymbol{\theta }_T}\widehat{\mathcal{L}}^{\pi}(\boldsymbol{\theta}_R^{(n)}, \boldsymbol{\theta}_T^{(n)}) \end{equation*}\; $n\leftarrow n+1$ } \caption{Simultaneous Training} \end{algorithm}
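Combining the two earlier sketches gives a compact illustration of one simultaneous update: a shared batch of explored waveforms drives both the supervised receiver step and the RL transmitter step. All stubs (\texttt{receiver}, \texttt{transmitter}, \texttt{bce}, \texttt{sample\_env}, \texttt{sigma\_pi}) are the illustrative ones defined earlier.

\begin{verbatim}
def simultaneous_step(s_real, sample_env, Q=64):
    # one iteration of Algorithm 2 on a shared batch of Q samples
    y = transmitter(s_real)
    y = y / y.norm()
    policy = torch.distributions.Normal(y, sigma_pi)
    a = policy.sample((Q,))
    i = torch.randint(0, 2, (Q,))
    z = sample_env(a, i)
    ell = bce(receiver(z).squeeze(-1), i.float())  # per-sample losses
    rx_loss = ell.mean()                           # supervised receiver term
    tx_surrogate = (ell.detach() * policy.log_prob(a).sum(-1)).mean()
    opt_R.zero_grad(); opt_T.zero_grad()
    (rx_loss + tx_surrogate).backward()            # both gradients at once
    opt_R.step(); opt_T.step()
\end{verbatim}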
The relation between the gradient applied by alternating training, $\nabla_{\boldsymbol{\theta }_R}{\mathcal{L}}_R(\boldsymbol{\theta}_R)$, and the gradient of simultaneous training, $\nabla_{\boldsymbol{\theta}_R}\mathcal{L}^{\pi}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T)$, with respect to $\boldsymbol{\theta}_R$ is stated by the following proposition. \begin{proposition} For the loss function (\ref{eq: rx loss}) computed based on a waveform $\mathbf{y}_{\boldsymbol{\theta}_T}$ and the loss function (\ref{eq: tx loss RL no constraint}) computed based on a stochastic policy $\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})$ continuous in $\mathbf{a}$, the following equality holds: \begin{equation} \begin{aligned} \nabla_{\boldsymbol{\theta }_R}{\mathcal{L}}_R(\boldsymbol{\theta}_R)=\nabla_{\boldsymbol{\theta}_R}\mathcal{L}^{\pi}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T). \end{aligned} \label{eq: Rx joint grad} \end{equation} \end{proposition} \begin{proof} See Appendix B. \end{proof} Proposition 1 states that the gradient of simultaneous training, $\nabla_{\boldsymbol{\theta}_R}\mathcal{L}^{\pi}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T)$, equals the gradient of alternating training, $\nabla_{\boldsymbol{\theta }_R}\mathcal{L}_R(\boldsymbol{\theta}_R)$, even though simultaneous training applies a random waveform $\mathbf{a}\sim \pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})$ to train the receiver. Note that this result applies only to the ensemble means in (\ref{eq: rx loss}) and (\ref{eq: rx loss grad joint}), and not to the empirical estimates used by Algorithms 1 and 2. Nevertheless, Proposition 1 suggests that training updates of the receiver are unaffected by the choice of alternating or simultaneous training. That said, given the distinct updates of the transmitter parameters, the overall trajectory of the parameters ($\boldsymbol{\theta}_R$, $\boldsymbol{\theta}_T$) during training may differ between the two algorithms. \subsection{Transmitter Gradient} As shown in the previous section, the gradients used for learning the receiver parameters $\boldsymbol{\theta}_R$ by alternating training (\ref{eq: est. rx loss grad}) or simultaneous training (\ref{eq: rx loss grad joint est}) may be directly estimated from the channel output samples $\mathbf{z}^{(q)}$. In contrast, the gradient used for learning the transmitter parameters $\boldsymbol{\theta}_T$ according to (\ref{eq: rx loss}) cannot be directly estimated from the channel output samples. To circumvent this problem, in Algorithms 1 and 2, the transmitter is trained by exploring the space of transmitted waveforms according to a policy $\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})$. We refer to the transmitter loss gradient obtained via the policy gradient (\ref{eq: tx loss RL grad joint}) as the \emph{RL transmitter gradient}. The benefit of RL-based transmitter training is that it does not require access to the likelihood function $p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T}, \mathcal{H}_i)$ to evaluate the RL transmitter gradient; rather, the gradient is estimated from samples. We now formalize the relation between the RL transmitter gradient (\ref{eq: tx loss RL grad joint}) and the transmitter gradient for a known likelihood obtained according to (\ref{eq: rx loss}).
As mentioned, if the likelihood $p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i)$ were known, and if it were differentiable with respect to the transmitter parameter vector $\boldsymbol{\theta}_T$, then $\boldsymbol{\theta}_T$ could be learned by minimizing the average loss (\ref{eq: rx loss}), which we rewrite as a function of both $\boldsymbol{\theta}_R$ and $\boldsymbol{\theta}_T$ as \begin{equation} \mathcal{L}(\boldsymbol{\theta}_R,\boldsymbol{\theta}_T)= \sum_{i\in\{0,1\}}P(\mathcal{H}_i)\mathbb{E}_{\substack{ \mathbf{z}\sim p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i)}}\big\{ \ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i\big)\big\}. \label{eq: known loss} \end{equation} The gradient of (\ref{eq: known loss}) with respect to $\boldsymbol{\theta}_T$ is expressed as \begin{equation} \begin{aligned} \nabla_{\boldsymbol{\theta}_T}\mathcal{L}(\boldsymbol{\theta}_R,\boldsymbol{\theta}_T ) &=\sum_{i\in\{0,1\}}P(\mathcal{H}_i)\mathbb{E}_{\substack{ \mathbf{z}\sim p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i)}}\big\{\ell\big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i\big) \nabla_{\boldsymbol{\theta}_T}\ln p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i) \big\}, \end{aligned} \label{eq: tx loss known grad} \end{equation} where the equality leverages the following relation \begin{equation} \nabla_{\boldsymbol{\theta}_T}p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i)=p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i)\nabla_{\boldsymbol{\theta}_T}\ln p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i). \label{eq: log-trick} \end{equation} The relation between the RL transmitter gradient $\nabla_{\boldsymbol{\theta}_T}\mathcal{L}^{\pi}(\boldsymbol{\theta}_R,\boldsymbol{\theta}_T)$ in (\ref{eq: tx loss RL grad joint}) and the transmitter gradient $\nabla_{\boldsymbol{\theta}_T}\mathcal{L}(\boldsymbol{\theta}_R,\boldsymbol{\theta}_T)$ in (\ref{eq: tx loss known grad}) is elucidated by the following proposition. \begin{proposition} If the likelihood function $p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i)$ is differentiable with respect to the transmitter parameter vector $\boldsymbol{\theta}_T$ for $i\in\{0,1\}$, the following equality holds: \begin{equation} \nabla_{\boldsymbol{\theta}_T}\mathcal{L}^{\pi}(\boldsymbol{\theta}_R,\boldsymbol{\theta}_T)=\nabla_{\boldsymbol{\theta}_T}\mathcal{L}(\boldsymbol{\theta}_R,\boldsymbol{\theta}_T). \end{equation} \end{proposition} \begin{proof} See Appendix C. \end{proof} Proposition 2 establishes that the RL transmitter gradient $\nabla_{\boldsymbol{\theta}_T}\mathcal{L}^{\pi}(\boldsymbol{\theta}_R,\boldsymbol{\theta}_T)$ equals the transmitter gradient $\nabla_{\boldsymbol{\theta}_T}\mathcal{L}(\boldsymbol{\theta}_R,\boldsymbol{\theta}_T)$ for any given receiver parameters $\boldsymbol{\theta}_R$. Proposition 2 hence provides a theoretical justification for replacing the gradient $\nabla_{\boldsymbol{\theta}_T}\mathcal{L}(\boldsymbol{\theta}_R,\boldsymbol{\theta}_T)$ with the RL gradient $\nabla_{\boldsymbol{\theta}_T}\mathcal{L}^{\pi}(\boldsymbol{\theta}_R,\boldsymbol{\theta}_T)$ to perform transmitter training, as done in Algorithms 1 and 2. \section{Numerical Results} This section first introduces the simulation setup, and then presents numerical examples of waveform design and detection performance that compare the proposed data-driven methodology with existing model-based approaches.
While the simulation results presented in this section rely on various models of target, clutter and interference, this work expressly distinguishes data-driven learning from model-based design. Learning schemes rely solely on data and not on model information. In contrast, model-based design implies a system structure that is based on a specific and known model. Furthermore, learning may rely on a synthetic dataset containing diverse data generated according to a variety of models. In contrast, model-based design typically relies on a single model. For example, as we will see, a synthetic dataset for learning may contain multiple clutter sample sets, each generated according to a different clutter model. Conversely, a single clutter model is typically assumed for model-based design. \subsection{Models, Policy, and Parameters} \subsubsection{Models of Target, Clutter, and Noise} The target is stationary and has a Rayleigh envelope, i.e., $\alpha\sim \mathcal{CN}(0,\sigma_{\alpha}^2)$. The noise has a zero-mean Gaussian distribution with the correlation matrix $[\boldsymbol{\Omega }_n]_{v,h}=\sigma _{n}^{2}\rho^{|v-h|}$ for $(v,h)\in \{1,\cdots,K\}^2$, where $\sigma_n^2$ is the noise power and $\rho$ is the one-lag correlation coefficient. The clutter vector in (\ref{eq: rx}) is the superposition of returns from $2K-1$ consecutive range cells, reflecting all clutter illuminated by the $K$-length signal as it sweeps in range across the target. Accordingly, the clutter vector may be expressed as \begin{equation} {\mathbf{c}}=\sum_{g=-K+1}^{K-1}{\gamma }_{g}\mathbf{J}_{g}{\mathbf{y}}, \label{eq: clutter} \end{equation} where $\mathbf{J}_{g}$ represents the shifting matrix at the $g$th range cell with elements \begin{equation} \big[\mathbf{J}_{g}\big]_{v,h}=\left\{ \begin{aligned} &1 \quad \text{if} \quad v-h=g\\ &0\quad \text{if} \quad v-h\neq g \end{aligned}\quad (v,h)\in \{1,\cdots ,K\}^{2}\right. . \end{equation} The magnitude $|\gamma_g|$ of the $g$th clutter scattering coefficient is generated according to a Weibull distribution \cite{Richards 2010} \begin{equation} p(|\gamma_g|)=\frac{\beta}{\nu^{\beta}}|\gamma_g|^{\beta-1}\exp\bigg( - \frac{|\gamma_g|^{\beta}}{\nu^{\beta}} \bigg), \label{eq: Weibull pdf} \end{equation} where $\beta$ is the shape parameter and $\nu$ is the scale parameter of the distribution. Let $\sigma_{\gamma_g}^2$ represent the power of the clutter scattering coefficient $\gamma_g$. The relation between $\sigma_{\gamma_g}^2$ and the Weibull distribution parameters $\{\beta,\nu\}$ is \cite{Farina 1987} \begin{equation} \sigma_{\gamma_g}^2=\text{E}\{|{\gamma}_g|^2\}=\frac{2\nu^2}{\beta}\Gamma\bigg(\frac{2}{\beta}\bigg), \end{equation} where $\Gamma(\cdot)$ is the Gamma function. The nominal range of the shape parameter is $0.25\leq\beta\leq2$ \cite{shape}. In the simulation, the complex-valued clutter scattering coefficient $\gamma_g$ is obtained by multiplying a real-valued Weibull random variable $|\gamma_g|$ by the factor $\exp(j\psi_g)$, where $\psi_g$ is the phase of $\gamma_g$, distributed uniformly in the interval $(0,2\pi)$. When the shape parameter is $\beta=2$, the clutter scattering coefficient $\gamma_g$ follows the Gaussian distribution $\gamma_g \sim \mathcal{CN}(0,\sigma_{\gamma_g}^2)$.
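To make the clutter and noise models concrete, the following NumPy/SciPy sketch draws one clutter realization according to (\ref{eq: clutter})--(\ref{eq: Weibull pdf}) and one correlated noise vector with covariance $\boldsymbol{\Omega}_n$. The sketch is illustrative only: the function names are ours, not part of the system specification, and the waveform passed in is a unit-power placeholder.
\begin{verbatim}
import numpy as np
from scipy.special import gamma as gamma_fn

rng = np.random.default_rng(0)

def sample_clutter(y, beta, sigma_g2):
    # One draw of c = sum_g gamma_g J_g y for g = -K+1, ..., K-1.
    K = len(y)
    # Scale nu from the power relation sigma^2 = (2 nu^2 / beta) Gamma(2/beta).
    nu = np.sqrt(sigma_g2 * beta / (2.0 * gamma_fn(2.0 / beta)))
    mag = nu * rng.weibull(beta, 2 * K - 1)         # |gamma_g| ~ Weibull(beta, nu)
    gam = mag * np.exp(2j * np.pi * rng.random(2 * K - 1))   # uniform phase
    # c[v] = sum_g gamma_g y[v - g]: the central K taps of the full convolution.
    return np.convolve(gam, y)[K - 1:2 * K - 1]

def sample_noise(K, sigma_n2, rho):
    # Correlated noise with [Omega_n]_{v,h} = sigma_n^2 rho^{|v-h|}.
    v, h = np.indices((K, K))
    L = np.linalg.cholesky(sigma_n2 * rho ** np.abs(v - h))
    w = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
    return L @ w                                    # L w ~ CN(0, Omega_n)

K = 8
y = np.ones(K, dtype=complex) / np.sqrt(K)          # placeholder unit-power waveform
z0 = sample_clutter(y, 2.0, 10 ** (-11.7 / 10)) + sample_noise(K, 1.0, 0.7)
\end{verbatim}
For $\beta=2$ the Weibull magnitude reduces to a Rayleigh envelope, so the draw recovers the complex Gaussian clutter coefficients mentioned above.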
Based on the assumed mathematical models of the target, clutter and noise, it can be shown that the optimal detector in the NP sense is the square law detector \cite{Richards 2005}, and the adaptive waveform for target detection can be obtained by maximizing the signal-to-clutter-plus-noise ratio at the receiver output at the time of target detection (see Appendix A of \cite{Wei 2019NN} for details). \subsubsection{Transmitter and Receiver Models} Waveform generation and detection are implemented using feedforward neural networks as explained in Section II-B. The transmitter $\tilde{f}_{\boldsymbol{\theta}_T}(\cdot)$ is a feedforward neural network with four layers, i.e., an input layer with $2K$ neurons, two hidden layers with $M=24$ neurons, and an output layer with $2K$ neurons. The activation function is the exponential linear unit (ELU) \cite{ELU}. The receiver $\tilde{f}_{\boldsymbol{\theta}_R}(\cdot)$ is implemented as a feedforward neural network with four layers, i.e., an input layer with $2K$ neurons, two hidden layers with $M$ neurons, and an output layer with one neuron. The sigmoid function is chosen as the activation function. The layout of the transmitter and receiver networks is summarized in Table I. \begin{table}[H] \caption{Layout of the transmitter and receiver networks} \label{table:1} \centering \resizebox{0.6\columnwidth}{!}{ \begin{tabular}{@{}cccccccc@{}}\toprule \multicolumn{1}{c}{} & \multicolumn{3}{c}{Transmitter $\tilde{f}_{\boldsymbol{\theta}_T}(\cdot)$} & \phantom{a} & \multicolumn{3}{c}{Receiver $\tilde{f}_{\boldsymbol{\theta}_R}(\cdot)$} \\ \cmidrule{2-4} \cmidrule{6-8} Layer& 1 & 2-3 & 4 && 1 & 2-3 & 4 \\ Dimension& $2K$ & $M$ & $2K$ && $2K$ & $M$ & $1$ \\ Activation& - & ELU & Linear && - & Sigmoid & Sigmoid \\ \bottomrule \end{tabular} } \end{table} \subsubsection{Gaussian Policy} A Gaussian policy $\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})$ is adopted for RL-based transmitter training. Accordingly, the output of the stochastic transmitter follows a complex Gaussian distribution $\mathbf{a}\sim\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})=\mathcal{CN}\big(\sqrt{1-\sigma^2_p}\mathbf{y}_{\boldsymbol{\theta}_T},\frac{\sigma^2_p}{K}\mathbf{I}_K\big)$, where the per-chip variance $\sigma^2_p$ is referred to as the \emph{policy hyperparameter}. When $\sigma^2_p=0$, the stochastic policy becomes deterministic \cite{Silver 2014}, i.e., the policy is governed by a Dirac delta function centered at $\mathbf{y}_{\boldsymbol{\theta}_T}$. In this case, the policy does not explore the space of transmitted waveforms, but it ``exploits'' the current waveform. At the opposite end, when $\sigma^2_p=1$, the output of the stochastic transmitter is independent of $\mathbf{y}_{\boldsymbol{\theta}_T}$, and the policy reduces to zero-mean complex Gaussian noise with covariance matrix $\mathbf{I}_K/K$. Thus, the policy hyperparameter $\sigma^2_p$ is selected in the range $(0,1)$, and its value sets a trade-off between exploration of new waveforms and exploitation of the current waveform. \subsubsection{Training Parameters} The initialization waveform $\mathbf{s}$ is a linear frequency modulated pulse with $K=8$ complex-valued chips and chirp rate $R=(100\times10^3)/(40\times 10^{-6})$ Hz/s. Specifically, the $k$th chip of $\mathbf{s}$ is given by \begin{equation} \mathbf{s}(k)=\frac{1}{\sqrt{K}}\exp \big\{ j\pi R \big( k/f_s\big)^2 \big\} \end{equation} for $k\in\{0,\dots,K-1\}$, where $f_s=200$ kHz.
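The initialization waveform can be generated in a few lines; a minimal NumPy sketch of the chirp above:
\begin{verbatim}
import numpy as np

K = 8                      # number of chips
f_s = 200e3                # sampling rate [Hz]
R = 100e3 / 40e-6          # chirp rate [Hz/s]

k = np.arange(K)
s = np.exp(1j * np.pi * R * (k / f_s) ** 2) / np.sqrt(K)   # LFM pulse, ||s||^2 = 1
\end{verbatim}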
The signal-to-noise ratio (SNR) is defined as \begin{equation} \text{SNR}=10\log_{10}\bigg\{\frac{\sigma_{\alpha}^2}{\sigma_n^2}\bigg\}. \label{eq: SNR} \end{equation} Training was performed at $\text{SNR}=12.5$ dB. The clutter environment is uniform with $\sigma_{\gamma_g}^2=-11.7$ dB, $\forall g\in\{-K+1,\dots, K-1\}$, such that the overall clutter power is $\sum_{g=-(K-1)}^{K-1}\sigma_{\gamma_g}^2=0$ dB. The noise power is $\sigma_n^2=0$ dB, and the one-lag correlation coefficient is $\rho=0.7$. Denote by $\beta_{\text{train}}$ and $\beta_{\text{test}}$ the shape parameters of the clutter distribution (\ref{eq: Weibull pdf}) applied in the training and test stages, respectively. Unless stated otherwise, we set $\beta_{\text{train}}=\beta_{\text{test}}=2$. To obtain a balanced classification dataset, the training set is populated by samples belonging to either hypothesis with equal prior probability, i.e., $ P(\mathcal{H}_0)=P(\mathcal{H}_1)=0.5$. The number of training samples is set as $Q_R=Q_T=Q=2^{13}$ in the estimated gradients (\ref{eq: est. rx loss grad}), (\ref{eq: tx loss RL grad est}), (\ref{eq: rx loss grad joint est}), and (\ref{eq: tx loss RL grad joint est}). Unless stated otherwise, the policy parameter is set to $\sigma^2_p=10^{-1.5}$, and the penalty parameter is $\lambda=0$, i.e., there are no waveform constraints. The Adam optimizer \cite{adam} is adopted to train the system over a number of iterations chosen by trial and error. The learning rate is $\epsilon=0.005$. In the testing phase, $2\times10^5$ samples are used to estimate the probability of false alarm ($P_{fa}$) under hypothesis $\mathcal{H}_0$, while $5\times10^4$ samples are used to estimate the probability of detection ($P_d$) under hypothesis $\mathcal{H}_1$. Receiver operating characteristic (ROC) curves are obtained via Monte Carlo simulations by varying the threshold applied at the output of the receiver. Results are obtained by averaging over fifty trials. Numerical results presented in this section assume simultaneous training, unless stated otherwise. \subsection{Results and Discussion} \subsubsection{Simultaneous Training vs Training with Known Likelihood} We first analyze the impact of the choice of the policy hyperparameter $\sigma_p^2$ on training performance. Fig. \ref{f: var_loss} shows the empirical cross-entropy loss of simultaneous training versus the policy hyperparameter $\sigma^2_p$ upon the completion of the training process. The empirical loss of the system trained with a known channel (\ref{eq: known loss}) is plotted for comparison. It is seen that there is an optimal policy parameter $\sigma^2_p$ for which the empirical loss of simultaneous training approaches the loss with a known channel. As the policy hyperparameter $\sigma^2_p$ tends to $0$, the output of the stochastic transmitter $\mathbf{a}$ is close to the waveform $\mathbf{y}_{\boldsymbol{\theta}_T}$, which leads to no exploration of the space of transmitted waveforms. In contrast, when the policy parameter $\sigma^2_p$ tends to $1$, the output of the stochastic transmitter becomes complex Gaussian noise with zero mean and covariance matrix $\mathbf{I}_K/K$. In both cases, the RL transmitter gradient is difficult to estimate accurately.
\begin{figure} \centering \vspace{-3ex} \includegraphics[width=0.7\linewidth]{exploration2} \vspace{-3ex} \caption{Empirical training loss versus policy hyperparameter $\sigma^2_p$ for the simultaneous training algorithm and for training with a known channel.} \label{f: var_loss} \end{figure} While Fig. \ref{f: var_loss} evaluates the performance on the training set in terms of empirical cross-entropy loss, the choice of the policy hyperparameter $\sigma^2_p$ should be based on validation data and on the testing criterion that is ultimately of interest. To elaborate on this point, ROC curves obtained by simultaneous training with different values of the policy hyperparameter $\sigma^2_p$ and by training with a known channel are shown in Fig. \ref{f: var_ROC}. As shown in the figure, simultaneous training with $\sigma^2_p=10^{-1.5}$ achieves a ROC similar to that of training with a known channel. The choice $\sigma^2_p=10^{-1.5}$ also yields the lowest empirical training loss in Fig. \ref{f: var_loss}. These results suggest that training is not subject to overfitting \cite{osvaldo1}. \begin{figure} \centering \vspace{-3ex} \includegraphics[width=0.7\linewidth]{var_ROC3} \vspace{-3ex} \caption{ROC curves for training with a known channel and for simultaneous training with different values of the policy parameter $\sigma^2_p$.} \label{f: var_ROC} \end{figure} \subsubsection{Simultaneous Training vs Alternating Training} We now compare simultaneous and alternating training in terms of ROC curves in Fig. \ref{f: alter}. ROC curves based on the optimal detector in the NP sense, namely the square law detector \cite{Richards 2005}, and on the adaptive/initialization waveform are plotted as benchmarks. As shown in the figure, simultaneous training provides detection performance similar to that of alternating training. Furthermore, both simultaneous training and alternating training are seen to result in significant improvements as compared to training of only the receiver, and provide detection performance comparable to the adaptive waveform \cite{Wei 2019NN} and square law detector. \begin{figure}[H] \centering \vspace{-3ex} \includegraphics[width=0.7\linewidth]{Gaussian_ROC3} \vspace{-3ex} \caption{ROC curves with and without transmitter training.} \label{f: alter} \end{figure} \subsubsection{Learning Gaussian and Non-Gaussian Clutter} Two sets of ROC curves under different clutter statistics are illustrated in Fig. \ref{f: non-G1}. Each set contains two ROC curves with the same clutter statistics: one curve is obtained based on simultaneous training, and the other is based on model-based design. For simultaneous training, the shape parameter of the clutter distribution (\ref{eq: Weibull pdf}) in the training stage is the same as that in the test stage, i.e., $\beta_{\text{train}}=\beta_{\text{test}}$. In the test stage, for Gaussian clutter ($\beta_{\text{test}}=2$), the model-based ROC curve is obtained by the adaptive waveform and the optimal detector in the NP sense. As expected, simultaneous training provides detection performance comparable to that of the adaptive waveform and square law detector (also shown in Fig. \ref{f: alter}). In contrast, when the clutter is non-Gaussian ($\beta_{\text{test}}=0.25$), the optimal detector in the NP sense is mathematically intractable. In this scenario, the data-driven approach is beneficial since it relies on data rather than a model.
As observed in the figure, for non-Gaussian clutter with a shape parameter $\beta_{\text{test}}=0.25$, simultaneous training outperforms the adaptive waveform and square law detector. \begin{figure}[H] \centering \vspace{-3ex} \includegraphics[width=0.7\linewidth]{Non_G2} \vspace{-3ex} \caption{ROC curves for Gaussian/non-Gaussian clutter. The end-to-end radar system is trained and tested with the same clutter statistics, i.e., $\beta_{\text{train}}=\beta_{\text{test}}$.} \label{f: non-G1} \end{figure} \subsubsection{Simultaneous Training with Mixed Clutter Statistics} The robustness of the trained radar system to the clutter statistics is investigated next. As discussed previously, model-based design relies on a single clutter model, whereas data-driven learning depends on a training dataset. The dataset may contain samples from multiple clutter models. Thus, the system based on data-driven learning may be robustified by drawing samples from a mixture of clutter models. In the test stage, the clutter model may not be the same as any of the clutter models used in the training stage. As shown in Fig. \ref{f: non-mix}, for simultaneous training, the training dataset contains clutter samples generated from (\ref{eq: Weibull pdf}) with four different values of the shape parameter, $\beta_{\text{train}}\in \{0.25, 0.5, 0.75, 1\}$. The test data is generated with a clutter shape parameter $\beta_{\text{test}}=0.3$ not included in the training dataset. The end-to-end learning radar system trained by mixing clutter samples provides performance gains compared to a model-based system using an adaptive waveform and square law detector. \begin{figure}[H] \centering \vspace{-3ex} \includegraphics[width=0.7\linewidth]{Non_mix} \vspace{-3ex} \caption{ROC curves for non-Gaussian clutter. To robustify detection performance, the end-to-end learning radar system is trained with mixed clutter statistics, while testing with a clutter model different from those used for training.} \label{f: non-mix} \end{figure} \subsubsection{Simultaneous Training under PAR Constraint} Detection performance with waveforms learned subject to a PAR constraint is shown in Fig. \ref{f:PAPR_fig1}. The end-to-end system trained with no PAR constraint, i.e., $\lambda=0$, serves as the reference. It is observed that the detection performance degrades as the value of the penalty parameter $\lambda$ increases. Moreover, PAR values of waveforms with different $\lambda$ are shown in Table \ref{table:3}. As shown in Fig. \ref{f:PAPR_fig1} and Table \ref{table:3}, there is a tradeoff between detection performance and PAR level. For instance, given $P_{fa}=5\times 10^{-4}$, training the transmitter with the largest penalty parameter $\lambda=0.1$ yields the lowest $P_d=0.852$ with the lowest PAR value of $0.17$ dB. In contrast, training the transmitter with no PAR constraint, i.e., $\lambda=0$, yields the best detection performance with the largest PAR value of $3.92$ dB. Fig. \ref{f:PAPR_fig2} compares the normalized modulus of waveforms with different values of the penalty parameter $\lambda$. As shown in Fig. \ref{f:PAPR_fig2} and Table \ref{table:3}, the larger the penalty parameter $\lambda$ adopted in the simultaneous training, the smaller the PAR value of the waveform.
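The PAR values reported in Table \ref{table:3} below can be reproduced from a learned waveform with a short function. The sketch below assumes the standard discrete-time definition $\text{PAR}=K\max_k |y_k|^2/||\mathbf{y}||^2$; the formal definition (\ref{eq: PAPR complex}) appears earlier in the paper, but this assumed form is consistent with the gradient of the PAR penalty derived in Appendix A.
\begin{verbatim}
import numpy as np

def par_db(y):
    # PAR = K max_k |y_k|^2 / ||y||^2, reported in dB (assumed definition).
    K = len(y)
    return 10 * np.log10(K * np.max(np.abs(y) ** 2) / np.linalg.norm(y) ** 2)
\end{verbatim}
A constant-modulus waveform attains the minimum of $0$ dB, which is consistent with the flattening modulus profiles observed in Fig. \ref{f:PAPR_fig2} as $\lambda$ grows.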
\begin{table}[H] \caption{PAR values of waveforms with different values of penalty parameter $\lambda$} \label{table:3} \centering \resizebox{0.5\columnwidth}{!}{ \begin{tabular}{@{}cccc@{}}\toprule & $\lambda=0$ (reference) & $\lambda=0.01$ & $\lambda=0.1$ \\ \cmidrule{2-4} PAR [dB] (\ref{eq: PAPR complex}) & 3.92 & 1.76 & 0.17 \\ \bottomrule \end{tabular} } \end{table} \begin{figure}[H] \centering \vspace{-3ex} \includegraphics[width=0.68\linewidth]{PAPR_ROC} \vspace{-3ex} \caption{ROC curves for the PAR constraint with different values of the penalty parameter $\lambda$.} \label{f:PAPR_fig1} \end{figure} \begin{figure}[H] \centering \vspace{-3ex} \includegraphics[width=0.68\linewidth]{PAPR_modulus} \vspace{-3ex} \caption{Normalized modulus of transmitted waveforms with different values of penalty parameter $\lambda$.} \label{f:PAPR_fig2} \end{figure} \subsubsection{Simultaneous Training under Spectral Compatibility Constraint} ROC curves for the spectral compatibility constraint with different values of the penalty parameter $\lambda$ are illustrated in Fig. \ref{f:spectrum_fig1}. The shared frequency bands are $\Gamma_1=[0.3,0.35]$ and $\Gamma_2=[0.5,0.6]$. The end-to-end system trained with no spectral compatibility constraint, i.e., $\lambda=0$, serves as the reference. Training the transmitter with a large value of the penalty parameter $\lambda$ is seen to result in performance degradation. The interfering energy from radar waveforms trained with different values of $\lambda$ is shown in Table \ref{table:4}. It is observed that $\lambda$ plays an important role in controlling the tradeoff between detection performance and spectral compatibility of the waveform. For instance, for a fixed $P_{fa}=5 \times 10^{-4}$, training the transmitter with $\lambda=0$ yields $P_d=0.855$ with an interfering energy of $-5.79$ dB on the shared frequency bands, while training the transmitter with $\lambda=1$ creates notches in the spectrum of the transmitted waveform at the shared frequency bands. Energy spectral densities of transmitted waveforms with different values of $\lambda$ are illustrated in Fig. \ref{f:spectrum_fig2}. A larger penalty parameter $\lambda$ results in a lower amount of interfering energy in the prescribed shared frequency regions. Note, for instance, that the nulls of the energy spectral density of the waveform for $\lambda=1$ are much deeper than their counterparts for $\lambda=0.2$. \begin{figure}[H] \centering \vspace{-3ex} \includegraphics[width=0.7\linewidth]{spectrum_ROC} \vspace{-3ex} \caption{ROC curves for the spectral compatibility constraint for different values of the penalty parameter $\lambda$.} \label{f:spectrum_fig1} \end{figure} \begin{table}[H] \caption{Interfering energy from radar waveforms with different values of penalty parameter $\lambda$ } \label{table:4} \centering \resizebox{0.65\columnwidth}{!}{ \begin{tabular}{@{}cccc@{}}\toprule & $\lambda=0$ (reference) & $\lambda=0.2$ & $\lambda=1$ \\ \cmidrule{2-4} Interfering energy [dB] (\ref{eq: spectrum complex}) & -5.79 & -10.39& -17.11 \\ \bottomrule \end{tabular} } \end{table} \begin{figure}[H] \centering \vspace{-3ex} \includegraphics[width=0.7\linewidth]{spectrum_psd} \vspace{-3ex} \caption{Energy spectral density of waveforms with different values of penalty parameter $\lambda$.} \label{f:spectrum_fig2} \end{figure} \section{Conclusions} In this paper, we have formulated the radar design problem as end-to-end learning of waveform generation and detection.
We have developed two training algorithms, both of which are able to incorporate various waveform constraints into the system design. Training may be implemented either as simultaneous supervised training of the receiver and RL-based training of the transmitter, or by alternating between training of the receiver and training of the transmitter. Both training algorithms achieve similar performance. We have also robustified the detection performance by training the system with mixed clutter statistics. Numerical results have shown that the proposed end-to-end learning approaches are beneficial under non-Gaussian clutter, and successfully adapt the transmitted waveform to the actual statistics of the environment, while satisfying operational constraints. \numberwithin{equation}{section} \appendices \section{Gradient of Penalty Functions} In this appendix, we derive the gradients of the penalty functions (\ref{eq: PAPR complex}) and (\ref{eq: spectrum complex}) with respect to the transmitter parameter vector $\boldsymbol{\theta}_T$. To facilitate the presentation, let $\overline{\mathbf{y}}_{\boldsymbol{\theta}_T}$ represent a $2K \times 1$ real vector comprising the real and imaginary parts of the waveform $\mathbf{y}_{\boldsymbol{\theta}_T}$, i.e., $\overline{\mathbf{y}}_{\boldsymbol{\theta}_T}=\big[\Re(\mathbf{y}_{\boldsymbol{\theta}_T}), \Im (\mathbf{y}_{\boldsymbol{\theta}_T})\big]^T$. \subsubsection{Gradient of PAR Penalty Function} As discussed in Section II-B, the transmitted power is normalized such that $||\mathbf{y}_{\boldsymbol{\theta}_T} ||^2=||\overline{\mathbf{y}}_{\boldsymbol{\theta}_T}||^2=1$. Let the subscript ``max'' represent the chip index associated with the PAR value (\ref{eq: PAPR complex}). By leveraging the chain rule, the gradient of (\ref{eq: PAPR complex}) with respect to $\boldsymbol{\theta}_T$ is written as \begin{equation} \nabla_{\boldsymbol{\theta}_T}J_{\text{PAR}}(\boldsymbol{\theta}_T)=\nabla_{\boldsymbol{\theta}_T}\overline{\mathbf{y}}_{\boldsymbol{\theta}_T} \cdot \mathbf{g}_{\text{PAR}}, \end{equation} where $\mathbf{g}_{\text{PAR}}$ represents the gradient of the PAR penalty function $J_{\text{PAR}}(\boldsymbol{\theta}_T)$ with respect to $\overline{\mathbf{y}}_{\boldsymbol{\theta}_T}$, and is given by \begin{equation} \mathbf{g}_{\text{PAR}}=\big[ \begin{array}{c;{2pt/2pt}c} 0,\dots ,0, 2K\Re({{y}}_{\boldsymbol{\theta}_T,\text{max}}),0, \dots, 0 & 0,\dots, 0, 2K\Im({{y}}_{\boldsymbol{\theta}_T,\text{max}}),0, \dots, 0 \end{array} \big]^T. \end{equation} \subsubsection{Gradient of Spectral Compatibility Penalty Function} According to the chain rule, the gradient of (\ref{eq: spectrum complex}) with respect to $\boldsymbol{\theta}_T$ is expressed as \begin{equation} \nabla_{\boldsymbol{\theta}_T}J_{\text{spectrum}}(\boldsymbol{\theta}_T)=\nabla_{\boldsymbol{\theta}_T}\overline{\mathbf{y}}_{\boldsymbol{\theta}_T} \cdot \mathbf{g}_{\text{spectrum}}, \end{equation} where $\mathbf{g}_{\text{spectrum}}$ denotes the gradient of the spectral compatibility penalty function $J_{\text{spectrum}}(\boldsymbol{\theta}_T)$ with respect to $\overline{\mathbf{y}}_{\boldsymbol{\theta}_T}$, and is given by \begin{equation} \mathbf{g}_{\text{spectrum}}=\left[ \begin{array}{c} 2\Re\big[(\boldsymbol{\Omega}\mathbf{y}_{\boldsymbol{\theta}_T})^*\big] \\ \hdashline[2pt/2pt] -2\Im\big[(\boldsymbol{\Omega}\mathbf{y}_{\boldsymbol{\theta}_T})^*\big] \end{array} \right].
\end{equation} \section{Proof of Proposition 1} \begin{proof} The average loss function of simultaneous training, $\mathcal{L}^{\pi}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T)$ in (\ref{eq: joint loss}), can be expressed as \begin{equation} \mathcal{L}^{\pi}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T)=\sum_{i\in\{0,1\}}P(\mathcal{H}_i) \int_{\mathcal{A}}\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})\int_{\mathcal{Z}} \ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i\big)p(\mathbf{z}|\mathbf{a},\mathcal{H}_i)d\mathbf{z}d\mathbf{a}. \label{a: fuse loss ori.} \end{equation} As discussed in Section II-B, the last layer of the receiver implementation consists of a sigmoid activation function, which leads to a receiver output $f_{\boldsymbol{\theta}_R}(\mathbf{z})\in (0,1)$. Thus, there exists a constant $b$ such that $\sup_{\mathbf{z},i} \ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i\big) <b<\infty$. Furthermore, for $i\in \{0,1\}$, the instantaneous values of the cross-entropy loss $\ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i\big)$, the policy $\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})$, and the likelihood $p(\mathbf{z}|\mathbf{a},\mathcal{H}_i)$ are continuous in the variables $\mathbf{a}$ and $\mathbf{z}$. By leveraging Fubini's theorem \cite{Fubini} to exchange the order of integration in (\ref{a: fuse loss ori.}), we have \begin{equation} \begin{aligned} \mathcal{L}^{\pi}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T)=\sum_{i\in\{0,1\}}P(\mathcal{H}_i)\int_{\mathcal{Z}} \ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i\big) \int_{\mathcal{A}}p(\mathbf{z}|\mathbf{a},\mathcal{H}_i) \pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T}) d\mathbf{a}d\mathbf{z}. \end{aligned} \label{a: fuse loss exchage} \end{equation} Note that, for a waveform $\mathbf{y}_{\boldsymbol{\theta}_T}$ and a target state indicator $i$, the product of the likelihood $p(\mathbf{z}|\mathbf{a},\mathcal{H}_i)$ and the policy $\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})$ is the joint PDF of the random variables $\mathbf{a}$ and $\mathbf{z}$, namely, \begin{equation} p(\mathbf{z}|\mathbf{a},\mathcal{H}_i)\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})=p(\mathbf{a},\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i). \label{a: joint prob.} \end{equation} Substituting (\ref{a: joint prob.}) into (\ref{a: fuse loss exchage}), we obtain \begin{equation} \begin{aligned} \mathcal{L}^{\pi}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T)&=\sum_{i\in\{0,1\}}P(\mathcal{H}_i)\int_{\mathcal{Z}}\ell\big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i\big)\int_{\mathcal{A}}p(\mathbf{a},\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i) d\mathbf{a}d\mathbf{z}\\ &=\sum_{i\in\{0,1\}}P(\mathcal{H}_i)\int_{\mathcal{Z}}\ell\big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i\big)p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i)d\mathbf{z}, \end{aligned} \label{a: fuse loss final} \end{equation} where the second equality holds by integrating the joint PDF $p(\mathbf{z},\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i)$ over the random variable $\mathbf{a}$, i.e., $\int_{\mathcal{A}}p(\mathbf{a},\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i) d\mathbf{a}=p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i)$.
Taking the gradient of (\ref{a: fuse loss final}) with respect to $\boldsymbol{\theta}_R$, we have \begin{equation} \begin{aligned} \nabla_{\boldsymbol{\theta }_R} \mathcal{L}^{\pi}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T)&= \sum_{i\in\{0,1\}}P(\mathcal{H}_i)\int_{\mathcal{Z}}p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i)\nabla_{\boldsymbol{\theta }_R}\ell\big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i\big)d\mathbf{z}\\ &={\nabla}_{\boldsymbol{\theta}_R}\mathcal{L}_R(\boldsymbol{\theta}_R), \end{aligned} \end{equation} where the second equality holds via (\ref{eq: rx loss grad.}). Thus, the proof of Proposition 1 is completed. \end{proof} \section{Proof of Proposition 2} \begin{proof} According to (\ref{a: fuse loss final}), the gradient of the average loss function of simultaneous training with respect to $\boldsymbol{\theta}_T$ is given by \begin{equation} \begin{aligned} \nabla_{\boldsymbol{\theta }_T} \mathcal{L}^{\pi}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T)&=\sum_{i\in\{0,1\}}P(\mathcal{H}_i)\int_{\mathcal{Z}}\ell\big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i\big) \nabla_{\boldsymbol{\theta }_T} p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i)d\mathbf{z}\\ & = \nabla_{\boldsymbol{\theta }_T} \mathcal{L}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T), \end{aligned} \label{a: fuse loss tx grad. ori.} \end{equation} where the last equality holds by (\ref{eq: tx loss known grad}). The proof of Proposition 2 is completed. \end{proof} \section{Introduction} \subsection{Context and Motivation} Design of radar waveforms and detectors has been a topic of great interest to the radar community (see e.g. \cite{Kay1998}-\cite{Kay 2007}). For best performance, radar waveforms and detectors should be designed jointly \cite{Richards 2010}, \cite{MU}. Traditional joint design of waveforms and detectors typically relies on mathematical models of the environment, including targets, clutter, and noise. In contrast, this paper proposes data-driven approaches based on end-to-end learning of radar systems, in which reliance on rigid mathematical models of targets, clutter and noise is relaxed. Optimal detection in the Neyman-Pearson (NP) sense guarantees the highest probability of detection for a specified probability of false alarm \cite{Kay1998}. The NP detection test relies on the likelihood (or log-likelihood) ratio, which is the ratio of the probability density functions (PDFs) of the received signal conditioned on the presence or absence of a target. Mathematical tractability of models of the radar environment plays an important role in determining the ease of implementation of an optimal detector. For some target, clutter and noise models, the structure of optimal detectors is well known \cite{Van 2004}-\cite{Richards 2005}. For example, closed-form expressions of the NP test metric are available when the applicable models are Gaussian \cite{Richards 2005}, and, in some cases, even for non-Gaussian models \cite{Sangston 1994}. However, in most cases involving non-Gaussian models, the structure of optimal detectors generally involves intractable numerical integrations, making the implementation of such detectors computationally intensive \cite{Gini 1997}, \cite{Sangston 1999}. For instance, it is shown in \cite{Gini 1997} that the NP detector requires a numerical integration with respect to the texture variable of the K-distributed clutter, thus precluding a closed-form solution.
Furthermore, detectors designed based on a specific mathematical model of the environment suffer performance degradation when the actual environment differs from the assumed model \cite{Farina 1986}, \cite{Farina 1992}. Attempts to robustify performance by designing optimal detectors based on mixtures of random variables quickly run aground due to mathematical intractability. Alongside optimal detectors, optimal radar waveforms may also be designed based on the NP criterion. Solutions are known for some simple target, clutter and noise models (see e.g. \cite{Delong1967}, \cite{Kay 2007}). However, in most cases, waveform design based on direct application of the NP criterion is intractable, leading to various suboptimal approaches. For example, mutual information, J-divergence and Bhattacharyya distance have been studied as objective functions for waveform design in multistatic settings \cite{Kay 2009}-\cite{Jeong 2016}. In addition to target, clutter and noise models, waveform design may have to account for various operational constraints. For example, transmitter efficiency may be improved by constraining the peak-to-average-power ratio (PAR) \cite{DeMaio2011}-\cite{Wu 2018}. A different constraint relates to the requirement of coexistence of radar and communication systems in overlapping spectral regions. The National Telecommunications and Information Administration (NTIA) and the Federal Communications Commission (FCC) have allowed sharing of some of the radar frequency bands with commercial communication systems \cite{NTIA}. In order to protect the communication systems from radar interference, radar waveforms should be designed subject to specified compatibility constraints. The design of radar waveforms constrained to share the spectrum with communications systems has recently developed into an active area of research with a growing body of literature \cite{Aubry2016}-\cite{Tang2019}. Machine learning has been successfully applied to solve problems for which mathematical models are unavailable or too complex to yield optimal solutions, in domains such as computer vision \cite{ML 1.1}, \cite{ML 1.2} and natural language processing \cite{ML 2.1}, \cite{ML 2.2}. Recently, a machine learning approach has been proposed for implementing the physical layer of communication systems. Notably, in \cite{OShea 2017}, it is proposed to jointly design the transmitter and receiver of communication systems via end-to-end learning. Reference \cite{PAR_OFDM} proposes an end-to-end learning-based approach for jointly minimizing PAR and bit error rate in orthogonal frequency division multiplexing systems. This approach requires the availability of a known channel model. For the case of an unknown channel model, reference \cite{Aoudia 2019} proposes an alternating training approach, whereby the transmitter is trained via reinforcement learning (RL) on the basis of noiseless feedback from the receiver, while the receiver is trained by supervised learning. In \cite{SPSA}, the authors apply simultaneous perturbation stochastic approximation to approximate the gradient of the transmitter's loss function. A detailed review of the state of the art can be found in \cite{osvaldo2} (see also \cite{osvaldo3}-\cite{osvaldo5} for recent work). In the radar field, learning machines trained in a supervised manner based on a suitable loss function have been shown to approximate the performance of the NP detector \cite{Moya 2009}, \cite{Moya 2013}.
As a representative example, in \cite{Moya 2013}, a neural network trained in a supervised manner using data that includes Gaussian interference has been shown to approximate the performance of the NP detector. Note that design of the NP detector requires express knowledge of the Gaussian nature of the interference, while the neural network is trained with data that happens to be Gaussian, but the machine has no prior knowledge of the statistical nature of the data. \subsection{Main Contributions} In this work, we introduce two learning-based approaches for the joint design of waveform and detector in a radar system. In the first approach, inspired by \cite{Aoudia 2019}, end-to-end learning of the radar system is implemented by alternating supervised learning of the detector for a fixed waveform with RL-based learning of the transmitter for a fixed detector. In the second approach, detector and waveform are learned simultaneously, potentially speeding up training in terms of the number of radar transmissions needed to produce training samples as compared to alternating training. In addition, we extend the problem formulation to include training of waveforms with PAR or spectral compatibility constraints. The main contributions of this paper are summarized as follows: \begin{enumerate} \item We formulate a radar system architecture based on the training of the detector and the transmitted waveform, both implemented as feedforward multi-layer neural networks. \item We develop two end-to-end learning algorithms for detection and waveform generation. In the first learning algorithm, detector and transmitted waveform are trained alternately: For a fixed waveform, the detector is trained using supervised learning so as to approximate the NP detector; and for a fixed detector, the transmitted waveform is trained via policy gradient-based RL. In the second algorithm, the detector and transmitter are trained simultaneously. \item We extend the learning algorithms to incorporate waveform constraints, specifically PAR and spectral compatibility constraints. \item We provide theoretical results that relate alternating and simultaneous training by computing the gradients of the loss functions optimized by both methods. \item We also provide theoretical results that justify the use of RL-based transmitter training by comparing the gradient used by this procedure with the gradient of the ideal model-based likelihood function. \end{enumerate} This work extends previous results presented in the conference version \cite{Wei 2019NN}. In particular, reference \cite{Wei 2019NN} proposes a learning algorithm, whereby supervised training of the radar detector is alternated with RL-based training of the unconstrained transmitted waveforms. As compared to the conference version \cite{Wei 2019NN}, this paper also studies simultaneous training; it develops methods for learning radar waveforms under various operational waveform constraints; and it provides theoretical results regarding the relationship between alternating training and simultaneous training, as well as regarding the adoption of RL-based training of the transmitter. The rest of this paper is organized as follows. A detailed system description of the end-to-end radar system is presented in Section II. Section III proposes two iterative algorithms for jointly training the transmitter and receiver. Section IV provides theoretical properties of the gradients. Numerical results are reported in Section V. Finally, conclusions are drawn in Section VI.
Throughout the paper, bold lowercase and uppercase letters represent vectors and matrices, respectively. The conjugate, the transpose, and the conjugate transpose operators are denoted by the symbols $(\cdot)^{*}$, $(\cdot)^{T}$, and $(\cdot)^{H}$, respectively. The notations $\mathbb{C}^{K}$ and $\mathbb{R}^{K}$ represent sets of $K$-dimensional vectors of complex and real numbers, respectively. The notation $|\cdot |$ indicates the modulus, $||\cdot ||$ indicates the Euclidean norm, and $\mathbb{E}_{x\sim p_{x}}\{\cdot \}$ indicates the expectation of the argument with respect to the distribution of the random variable $x\sim p_{x}$. $\Re(\cdot )$ and $\Im (\cdot )$ stand for the real part and the imaginary part of the complex-valued argument, respectively. The letter $j$ represents the imaginary unit, i.e., $j=\sqrt{-1}$. The gradient of a function $f:\mathbb{R}^{n}\rightarrow \mathbb{R}^{m}$ with respect to $\mathbf{x}\in \mathbb{R}^{n}$ is $\nabla _{\mathbf{x}}f(\mathbf{x})\in \mathbb{R}^{n\times m}$. \section{Problem Formulation} Consider a pulse-compression radar system that uses the baseband transmit signal \begin{equation} x(t)=\sum_{k=1}^{K}y_k \zeta\big( t- [k-1]T_c\big), \label{eq: time tx signal} \end{equation} where $\zeta(t)$ is a fixed basic chip pulse, $T_c$ is the chip duration, and $\{y_k\}_{k=1}^K$ are complex deterministic coefficients. The vector $\mathbf{y}\triangleq[ y_1,\dots, y_K ]^T$ is referred to as the fast-time \emph{waveform} of the radar system, and is subject to design. The backscattered baseband signal from a stationary point-like target is given by \begin{equation} z(t)=\alpha x(t-\tau) + c(t) + n(t), \label{eq: time rx signal} \end{equation} where $\alpha$ is the target complex-valued gain, accounting for target backscattering and channel propagation effects; $\tau$ represents the target delay, which is assumed to satisfy the target detectability condition $\tau \gg KT_c$; $c(t)$ is the clutter component; and $n(t)$ denotes signal-independent noise comprising an aggregate of thermal noise, interference, and jamming. The clutter component $c(t)$ associated with a detection test performed at $\tau=0$ may be expressed as \begin{equation} c(t)=\sum_{g=-K+1}^{K-1}\gamma_g x\big( t- g T_c \big), \label{eq: time clutter} \end{equation} where $\gamma_g$ is the complex clutter scattering coefficient at time delay $\tau=0$ associated with the $g$th range cell relative to the cell under test. Following chip matched filtering with $\zeta^*(-t)$, and sampling at $T_c$-spaced time instants $t=\tau + [k-1] T_c$ for $k\in \{1, \dots, K\}$, the $K\times 1$ discrete-time received signal $\mathbf{z}=[z(\tau), z(\tau+T_c), \dots, z(\tau + [K-1]T_c)]^T$ for the range cell under test containing a point target with complex amplitude $\alpha$, clutter and noise can be written as \begin{equation} \mathbf{z}=\alpha \mathbf{y} + \mathbf{c} + \mathbf{n}, \label{eq: rx} \end{equation} where $\mathbf{c}$ and $\mathbf{n}$ denote, respectively, the clutter and noise vectors. Detection of the presence of a target in the range cell under test is formulated as the following binary hypothesis testing problem: \begin{equation} \left\{ \begin{aligned} &\mathcal{H}_0:{\mathbf{z}}={\mathbf{c}}+{\mathbf{n}} \\ &\mathcal{H}_1:{\mathbf{z}}=\alpha \mathbf{y}+{\mathbf{c}}+{\mathbf{n}}. \end{aligned} \right.
\label{eq:binary hypo} \end{equation} In traditional radar design, the gold standard for detection is provided by the NP criterion of maximizing the probability of detection for a given probability of false alarm. Application of the NP criterion leads to the likelihood ratio test \begin{equation} \Lambda(\mathbf{z})=\frac{p(\mathbf{z}|\mathbf{y}, \mathcal{H}_1)}{p(\mathbf{z}|\mathbf{y}, \mathcal{H}_0)}\mathop{\gtrless}_{\mathcal{H}_0}^{\mathcal{H}_1} T_{\Lambda}, \label{eq: lrt} \end{equation} where $\Lambda(\mathbf{z})$ is the likelihood ratio, and $T_{\Lambda}$ is the detection threshold set based on the probability of false alarm constraint \cite{Richards 2010}. The NP criterion is also the gold standard for designing a radar waveform that adapts to the given environment, although, as discussed earlier, a direct application of this design principle is often intractable. The design of optimal detectors and/or waveforms under the NP criterion relies on channel models of the radar environment, namely, knowledge of the conditional probabilities $p(\mathbf{z}|\mathbf{y}, \mathcal{H}_i)$ for $i\in\{0,1\}$. The channel model $p(\mathbf{z}|\mathbf{y}, \mathcal{H}_i)$ is the likelihood of the observation $\mathbf{z}$ conditioned on the transmitted waveform $\mathbf{y}$ and hypothesis $\mathcal{H}_i$. In the following, we introduce an end-to-end radar system in which the detector and waveform are jointly learned in a data-driven fashion. \subsection{End-to-End Radar System} The end-to-end radar system illustrated in Fig. \ref{f:end_to_end_real} comprises a transmitter and a receiver that seek to detect the presence of a target. Transmitter and receiver are implemented as two separate parametric functions $f_{\boldsymbol{\theta}_T}(\cdot)$ and $f_{ \boldsymbol{\theta}_R}(\cdot)$ with trainable parameter vectors $\boldsymbol{ \theta}_T$ and $\boldsymbol{\theta}_R$, respectively. \begin{figure} \vspace{-3ex} \hspace{25ex} \includegraphics[width=1.3 \linewidth]{radar_system} \vspace{-141ex} \caption{An end-to-end radar system operating in an unknown radar environment. Transmitter and receiver are implemented as two separate parametric functions $f_{\boldsymbol{\theta}_T}(\cdot)$ and $f_{ \boldsymbol{\theta}_R}(\cdot)$ with trainable parameter vectors $\boldsymbol{ \theta}_T$ and $\boldsymbol{\theta}_R$, respectively.} \label{f:end_to_end_real} \end{figure} As shown in Fig. \ref{f:end_to_end_real}, the input to the transmitter is a user-defined initialization waveform ${\mathbf{s}}\in \mathbb{C}^{K}$. The transmitter outputs a radar waveform obtained through a trainable mapping $\mathbf{y}_{\boldsymbol{\theta}_T}=f_{\boldsymbol{\theta}_T}(\mathbf{s}) \in \mathbb{C}^K$. The environment is modeled as a stochastic system that produces the vector $\mathbf{z}\in \mathbb{C}^{K}$ from a conditional PDF $p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T}, \mathcal{H}_i)$ parameterized by a binary variable $i\in \{0,1\}$. The absence or presence of a target is indicated by the values $i=0$ and $i=1$, respectively, and hence $i$ is referred to as the \emph{target state indicator}. The receiver passes the received vector $\mathbf{z}$ through a trainable mapping $p=f_{\boldsymbol{\theta}_R}(\mathbf{z})$, which produces the scalar $p\in (0,1)$. The final decision $\hat{i}\in \{0,1\}$ is made by comparing the output of the receiver $p$ to a hard threshold in the interval $(0,1)$.
\subsection{Transmitter and Receiver Architectures} As discussed in Section II-A, the transmitter and the receiver are implemented as two separate parametric functions $f_{\boldsymbol{\theta}_T}(\cdot)$ and $f_{\boldsymbol{\theta}_R}(\cdot)$. We now detail an implementation of the transmitter $f_{\boldsymbol{\theta}_T}(\cdot)$ and receiver $f_{\boldsymbol{\theta}_R}(\cdot )$ based on feedforward neural networks. A feedforward neural network is a parametric function $\tilde{f}_{\boldsymbol{\theta}}(\cdot )$ that maps an input real-valued vector $\mathbf{u}_{\text{in}}\in \mathbb{R}^{N_{\text{in}}}$ to an output real-valued vector $\mathbf{u}_{\text{out}}\in \mathbb{R}^{N_{\text{out}}}$ via $L$ successive layers, where $N_{\text{in}}$ and $N_{\text{out}}$ represent, respectively, the number of neurons of the input and output layers. Noting that the input to the $l$th layer is the output of the $(l-1)$th layer, the output of the $l$th layer is given by \begin{equation} \mathbf{u}_{l}=\tilde{f}_{\boldsymbol{\theta}^{[l]}}(\mathbf{u}_{l-1})=\phi\big(\mathbf{W}^{[l]}\mathbf{u}_{l-1}+\mathbf{b}^{[l]}\big),\quad \text{for } l=1,\dots ,L, \end{equation} where $\phi (\cdot )$ is an element-wise activation function, and $\boldsymbol{\theta}^{[l]}=\{\mathbf{W}^{[l]},\mathbf{b}^{[l]}\}$ contains the trainable parameters of the $l$th layer, comprising the weight matrix $\mathbf{W}^{[l]}$ and the bias $\mathbf{b}^{[l]}$. The vector of trainable parameters of the entire neural network comprises the parameters of all layers, i.e., $\boldsymbol{\theta }=\text{vec}\{\boldsymbol{\theta}^{[1]},\cdots,\boldsymbol{\theta}^{[L]}\}$. The architecture of the end-to-end radar system with transmitter and receiver implemented based on feedforward neural networks is shown in Fig. \ref{f: arch}. The transmitter applies a complex initialization waveform $\mathbf{s}$ to the function $f_{\boldsymbol{\theta }_{T}}(\cdot)$. The complex-valued input $\mathbf{s}$ is processed by a complex-to-real conversion layer. This is followed by a real-valued neural network $\tilde{f}_{\boldsymbol{\theta}_T}(\cdot)$. The output of the neural network is converted back to complex values, and an output layer normalizes the transmitted power. As a result, the transmitter generates the radar waveform $\mathbf{y}_{\boldsymbol{\theta}_T}$. The receiver applies the received signal $\mathbf{z}$ to the function $f_{\boldsymbol{\theta }_{R}}(\cdot )$. Similar to the transmitter, a first layer converts complex-valued to real-valued vectors. The neural network at the receiver is denoted $\tilde{f}_{\boldsymbol{\theta}_R}(\cdot)$. The task of the receiver is to generate a scalar $p\in (0,1)$ that approximates the posterior probability of the presence of a target conditioned on the received vector $\mathbf{z}$. To this end, the last layer of the neural network $\tilde{f}_{\boldsymbol{\theta}_R}(\cdot)$ is selected as a logistic regression layer, consisting of a sigmoid function operating over a linear combination of the outputs of the previous layer. The presence or absence of the target is determined based on the output of the receiver and a threshold set according to a false alarm constraint.
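As an illustration of these architectures, the following PyTorch sketch instantiates the two networks with the layer sizes of Table I and performs one forward pass through the transmitter. The use of PyTorch, and all names below, are our own illustrative choices rather than part of the system specification; the final input fed to the receiver is a placeholder standing in for the received vector $\mathbf{z}$.
\begin{verbatim}
import torch
import torch.nn as nn

K, M = 8, 24   # chips and hidden-layer width, as in Table I

# Transmitter core: 2K -> M -> M -> 2K, ELU hidden activations, linear output.
tx_net = nn.Sequential(nn.Linear(2*K, M), nn.ELU(),
                       nn.Linear(M, M), nn.ELU(),
                       nn.Linear(M, 2*K))

# Receiver: 2K -> M -> M -> 1, sigmoid activations; the last layer is the
# logistic-regression stage producing p in (0, 1).
rx_net = nn.Sequential(nn.Linear(2*K, M), nn.Sigmoid(),
                       nn.Linear(M, M), nn.Sigmoid(),
                       nn.Linear(M, 1), nn.Sigmoid())

def c2r(x):   # complex K-vector -> real 2K-vector
    return torch.cat([x.real, x.imag], dim=-1)

def transmitter(s):
    u = tx_net(c2r(s))
    y = torch.complex(u[..., :K], u[..., K:])   # real 2K-vector -> complex
    return y / torch.linalg.norm(y)             # power normalization, ||y|| = 1

s = torch.ones(K, dtype=torch.cfloat) / K ** 0.5   # placeholder initialization
y = transmitter(s)                                 # waveform y_{theta_T}
p = rx_net(c2r(y))   # receiver output in (0, 1); in operation the input is z
\end{verbatim}
The hard decision is then obtained by comparing $p$ to a threshold chosen to satisfy the false alarm constraint.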
\begin{figure} \vspace{-5ex} \hspace{17ex} \includegraphics[width=1.2\linewidth]{archi2} \vspace{-109ex} \caption{Transmitter and receiver architectures based on feedforward neural networks.} \label{f: arch} \end{figure} \section{Training of End-to-End Radar Systems} This section discusses the joint optimization of the trainable parameter vectors $\boldsymbol{\theta }_{T}$ and $\boldsymbol{\theta }_{R}$ to meet application-specific performance requirements. Two training algorithms are proposed to train the end-to-end radar system. The first algorithm alternates between training of the receiver and of the transmitter. This algorithm is referred to as \emph{alternating training}, and is inspired by the approach used in \cite{Aoudia 2019} to train encoder and decoder of a digital communication system. In contrast, the second algorithm trains the receiver and transmitter simultaneously. This approach is referred to as \emph{simultaneous training}. Note that the two proposed training algorithms are applicable to other differentiable parametric functions implementing the transmitter $f_{\boldsymbol{\theta }_{T}}(\cdot )$ and the receiver $f_{\boldsymbol{\theta }_{R}}(\cdot )$, such as recurrent neural networks and their variants \cite{deeplearning}. In the following, we first discuss alternating training and then we detail simultaneous training. \subsection{Alternating Training: Receiver Design} Alternating training consists of iterations encompassing separate receiver and transmitter updates. In this subsection, we focus on the receiver updates. A receiver training update optimizes the receiver parameter vector $\boldsymbol{\theta }_{R}$ for a fixed transmitted waveform $\mathbf{y}_{\boldsymbol{\theta}_T}$. Receiver design is supervised in the sense that we assume the target state indicator $i$ to be available to the receiver during training. Supervised training of the receiver for a fixed transmitter's parameter vector $\boldsymbol{\theta}_T$ is illustrated in Fig. \ref{f:rx_training}. \begin{figure}[H] \vspace{-4ex} \hspace{16ex} \includegraphics[width=1.3\linewidth]{rx_training} \vspace{-146ex} \caption{Supervised training of the receiver for a fixed transmitted waveform.} \label{f:rx_training} \end{figure} The standard cross-entropy loss \cite{Moya 2013} is adopted as the loss function for the receiver. For a given transmitted waveform $\mathbf{y}_{\boldsymbol{\theta}_T}=f_{\boldsymbol{\theta}_T}(\mathbf{s})$, the receiver average loss function is accordingly given by \begin{equation} \begin{aligned} \mathcal{L}_R(\boldsymbol{\theta}_R)=&\sum_{i\in\{0,1\}}P(\mathcal{H}_i)\mathbb{E}_{\substack{ \mathbf{z}\sim p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i)}}\big\{\ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i\big)\big\}, \end{aligned}\label{eq: rx loss} \end{equation} where $P(\mathcal{H}_i)$ is the prior probability of the target state indicator $i$, and $\ell\big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i\big)$ is the instantaneous cross-entropy loss for a pair $\big(f_{\boldsymbol{\theta}_R}(\mathbf{z}), i\big)$, namely, \begin{equation} \ell\big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i\big)=-i\ln f_{\boldsymbol{\theta}_{R}}(\mathbf{z})-(1-i)\ln\big[1- f_{\boldsymbol{\theta}_{R}}(\mathbf{z})\big]. \label{eq: loss inst} \end{equation} For a fixed transmitted waveform, the receiver parameter vector $\boldsymbol{\theta}_R$ should ideally be optimized by minimizing (\ref{eq: rx loss}), e.g., via gradient descent or one of its variants \cite{SGD}.
The gradient of the average loss (\ref{eq: rx loss}) with respect to the receiver parameter vector $\boldsymbol{\theta}_R$ is
\begin{equation} {\nabla}_{\boldsymbol{\theta}_R}\mathcal{L}_R(\boldsymbol{\theta}_R)=\sum_{i\in\{0,1\}}P(\mathcal{H}_i)\mathbb{E}_{\substack{ \mathbf{z}\sim p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i)}}\big\{ {\nabla}_{\boldsymbol{\theta}_R} \ell\big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i\big)\big\}. \label{eq: rx loss grad.} \end{equation}
Since the approach is data-driven, the prior probability of the target state indicator $P(\mathcal{H}_i)$ and the likelihood $p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i)$ are not assumed to be known. Instead, the receiver is assumed to have access to $Q_R$ independent and identically distributed (i.i.d.) samples $\mathcal{D}_R=\big\{ \mathbf{z}^{(q)}\sim p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_{i^{(q)}}), {i^{(q)}}\in\{0,1\} \big\}_{q=1}^{Q_R}$. Given the output of the receiver function $f_{\boldsymbol{\theta}_R}(\mathbf{z}^{(q)})$ for a received sample vector $\mathbf{z}^{(q)}$ and the indicator $i^{(q)}\in \{0,1 \}$, the instantaneous cross-entropy loss is computed from (\ref{eq: loss inst}), and the estimated receiver gradient is given by
\begin{equation} {\nabla}_{\boldsymbol{\theta}_R}\widehat{\mathcal{L}}_R(\boldsymbol{\theta}_R)=\frac{1}{Q_R}\sum_{q=1}^{Q_R} {\nabla}_{\boldsymbol{\theta}_R} \ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}^{(q)}),{i^{(q)}} \big). \label{eq: est. rx loss grad} \end{equation}
Using (\ref{eq: est. rx loss grad}), the receiver parameter vector $\boldsymbol{\theta}_R$ is adjusted according to stochastic gradient descent updates
\begin{equation} \boldsymbol{\theta }_R^{(n+1)}=\boldsymbol{\theta}_R^{(n)} -\epsilon {\nabla}_{\boldsymbol{\theta}_R}\widehat{\mathcal{L}}_R(\boldsymbol{\theta}_R^{(n)}) \label{eq: rx sgd} \end{equation}
across iterations $n=1,2,\cdots$, where $\epsilon >0$ is the learning rate.
\subsection{Alternating Training: Transmitter Design}
In the transmitter training phase of alternating training, the receiver parameter vector $\boldsymbol{\theta}_R$ is held constant, and the function $f_{\boldsymbol{\theta}_T}(\cdot)$ implementing the transmitter is optimized. The goal of transmitter training is to find an optimized parameter vector $\boldsymbol{\theta}_T$ that minimizes the cross-entropy loss function (\ref{eq: rx loss}) seen as a function of $\boldsymbol{\theta}_T$. As illustrated in Fig. \ref{f:tx_training}, a stochastic transmitter outputs a waveform $\mathbf{a}$ drawn from a distribution $\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})$ conditioned on $\mathbf{y}_{\boldsymbol{\theta}_T}=f_{\boldsymbol{\theta}_T}(\mathbf{s})$. The introduction of the randomization $\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})$ of the designed waveform $\mathbf{y}_{\boldsymbol{\theta}_T}$ is useful to enable exploration of the design space in a manner akin to standard RL policies. To train the transmitter, we aim to minimize the average cross-entropy loss
\begin{equation} \begin{aligned} \mathcal{L}^{\pi}_T(\boldsymbol{\theta}_T)=&\sum_{i\in\{0,1\}}P(\mathcal{H}_i)\mathbb{E}_{\substack{ \mathbf{a}\sim \pi (\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T}) \\ \mathbf{z}\sim p(\mathbf{z}|\mathbf{a},\mathcal{H}_i)}}\big\{\ell\big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i \big)\big\}.
\label{eq: tx loss RL no constraint} \end{aligned} \end{equation}
Note that this is consistent with (\ref{eq: rx loss}), with the caveat that an expectation is taken over the policy $\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})$. This is indicated by the superscript ``$\pi$''.
\begin{figure}[H] \vspace{-4ex} \hspace{15ex} \includegraphics[width=1.3\linewidth]{tx_training} \vspace{-141ex} \caption{RL-based transmitter training for a fixed receiver design.} \label{f:tx_training} \end{figure}
Assume that the policy $\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})$ is differentiable with respect to the transmitter parameter vector $\boldsymbol{\theta}_T$, i.e., that the gradient $\nabla_{\boldsymbol{\theta }_T}\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})$ exists. The policy gradient theorem \cite{Sutton 2000} states that the gradient of the average loss (\ref{eq: tx loss RL no constraint}) can be written as
\begin{equation} \begin{aligned} \nabla_{\boldsymbol{\theta}_T}\mathcal{L}^{\pi}_T(\boldsymbol{\theta}_T)=&\sum_{i\in\{0,1\}}P(\mathcal{H}_i)\mathbb{E}_{\substack{ \mathbf{a}\sim \pi (\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T}) \\ \mathbf{z}\sim p(\mathbf{z}|\mathbf{a},\mathcal{H}_i)}}\big\{\ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}), i\big)\nabla_{\boldsymbol{\theta}_T}\ln\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})\big\}. \label{eq: tx loss RL grad} \end{aligned} \end{equation}
The gradient (\ref{eq: tx loss RL grad}) has the important advantage that it may be estimated via $Q_T$ i.i.d. samples $\mathcal{D}_T=\big\{\mathbf{a}^{(q)}\sim \pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T}), \mathbf{z}^{(q)}\sim p(\mathbf{z}|\mathbf{a}^{(q)},\mathcal{H}_{i^{(q)}}), i^{(q)}\in \{0,1\} \big\}_{q=1}^{Q_T}$, yielding the estimate
\begin{equation} {\nabla}_{\boldsymbol{\theta}_T}\widehat{\mathcal{L}}^{\pi}_T(\boldsymbol{\theta}_T)=\frac{1}{Q_T}\sum_{q=1}^{Q_T} \ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}^{(q)}),i^{(q)}\big) \nabla_{\boldsymbol{\theta}_T}\ln\pi(\mathbf{a}^{(q)}|\mathbf{y}_{\boldsymbol{\theta}_T}). \label{eq: tx loss RL grad est} \end{equation}
With the estimate (\ref{eq: tx loss RL grad est}), in a manner similar to (\ref{eq: rx sgd}), the transmitter parameter vector $\boldsymbol{\theta}_T$ may be optimized iteratively according to the stochastic gradient descent update rule
\begin{equation} \begin{aligned} &\boldsymbol{\theta }_T^{(n+1)}=\boldsymbol{\theta}_T^{(n)} -\epsilon {\nabla}_{\boldsymbol{\theta}_T}\widehat{\mathcal{L}}^{\pi}_T(\boldsymbol{\theta}_T^{(n)}) \end{aligned} \label{eq: tx sgd} \end{equation}
over iterations $n=1,2,\cdots$. The alternating training algorithm is summarized as Algorithm 1. The training process is carried out until a stopping criterion is satisfied. For example, a prescribed number of iterations may have been reached, or a number of iterations may have elapsed during which the training loss (\ref{eq: tx loss RL no constraint}), estimated using samples $\mathcal{D}_T$, has not decreased by more than a given amount.
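The estimators (\ref{eq: est. rx loss grad}) and (\ref{eq: tx loss RL grad est}) and the updates (\ref{eq: rx sgd}) and (\ref{eq: tx sgd}) reduce to sample averages followed by a gradient step. The following schematic sketch assumes that routines grad_loss_rx, inst_loss, and grad_logpi (e.g., obtained by automatic differentiation) are available; they are placeholders, not part of the algorithms' specification.
\begin{verbatim}
def estimated_rx_grad(samples, grad_loss_rx):
    # (1/Q_R) * sum_q grad_thetaR l(f_thetaR(z_q), i_q); samples is a list of (z, i)
    grads = [grad_loss_rx(z, i) for z, i in samples]
    return [sum(g) / len(grads) for g in zip(*grads)]

def estimated_tx_grad(samples, inst_loss, grad_logpi):
    # (1/Q_T) * sum_q l(f_thetaR(z_q), i_q) * grad_thetaT ln pi(a_q | y);
    # samples is a list of (a, z, i) drawn as in the dataset D_T
    grads = [[inst_loss(z, i) * g for g in grad_logpi(a)] for a, z, i in samples]
    return [sum(g) / len(grads) for g in zip(*grads)]

def sgd_step(theta, grad, eps):
    # theta^(n+1) = theta^(n) - eps * grad, applied per parameter tensor
    return [t - eps * g for t, g in zip(theta, grad)]
\end{verbatim}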
\DontPrintSemicolon
\begin{algorithm}[] \SetAlgoLined \KwIn{initialization waveform $\mathbf{s}$; stochastic policy $\pi(\cdot|\mathbf{y}_{\boldsymbol{\theta}_T})$; learning rate $\epsilon$} \KwOut{learned parameter vectors $\boldsymbol{\theta}_R$ and $\boldsymbol{\theta}_T$} initialize $\boldsymbol{\theta}_R^{(0)}$ and $\boldsymbol{\theta}_T^{(0)}$, and set $n=0$\; \While{stopping criterion not satisfied}{ \tcc{receiver training phase} evaluate the receiver loss gradient ${\nabla}_{\boldsymbol{\theta}_R}\widehat{\mathcal{L}}_R(\boldsymbol{\theta}_R^{(n)})$ from (\ref{eq: est. rx loss grad}) with $\boldsymbol{\theta}_T=\boldsymbol{\theta}_T^{(n)}$ and the stochastic transmitter policy turned off\; update the receiver parameter vector $\boldsymbol{\theta}_R$ via \begin{equation*} \boldsymbol{\theta }_R^{(n+1)}=\boldsymbol{\theta}_R^{(n)} -\epsilon {\nabla}_{\boldsymbol{\theta}_R}\widehat{\mathcal{L}}_R(\boldsymbol{\theta}_R^{(n)}) \end{equation*}\; \tcc{transmitter training phase} evaluate the transmitter loss gradient ${\nabla}_{\boldsymbol{\theta}_T}\widehat{\mathcal{L}}^{\pi}_{T}(\boldsymbol{\theta}_T^{(n)})$ from (\ref{eq: tx loss RL grad est}) with $\boldsymbol{\theta}_R=\boldsymbol{\theta}_R^{(n+1)}$\; update the transmitter parameter vector $\boldsymbol{\theta}_T$ via \begin{equation*} \boldsymbol{\theta }_T^{(n+1)}=\boldsymbol{\theta}_T^{(n)} -\epsilon {\nabla}_{\boldsymbol{\theta}_T}\widehat{\mathcal{L}}^{\pi}_{T}(\boldsymbol{\theta}_T^{(n)}) \end{equation*}\; $n\leftarrow n+1$ } \caption{Alternating Training} \end{algorithm}
\subsection{Transmitter Design with Constraints}
We extend the transmitter training discussed in the previous section to incorporate waveform constraints on PAR and spectral compatibility. To this end, we introduce penalty functions that are used to modify the training criterion (\ref{eq: tx loss RL no constraint}) to meet these constraints.
\subsubsection{PAR Constraint}
Low PAR waveforms are preferred in radar systems due to hardware limitations related to waveform generation. A lower PAR entails a lower dynamic range of the power amplifier, which in turn allows an increase in average transmitted power. The PAR of a radar waveform ${\mathbf{y}}_{\boldsymbol{\theta}_T}=f_{\boldsymbol{\theta}_T}(\mathbf{s})$ may be expressed as
\begin{equation} J_{\text{PAR}}(\boldsymbol{\theta}_T)=\frac{\max_{k=1,\cdots ,K} |{y}_{\boldsymbol{\theta}_T, k}|^{2}}{||{\mathbf{y}}_{\boldsymbol{\theta}_T}||^{2}/K}, \label{eq: PAPR complex} \end{equation}
which is bounded according to $1\leq J_{\text{PAR}}(\boldsymbol{\theta}_T)\leq K$.
\subsubsection{Spectral Compatibility Constraint}
A spectral constraint is imposed when a radar system is required to operate over a spectrum partially shared with other systems such as wireless communication networks. Suppose there are $D$ frequency bands $\{\Gamma _{d}\}_{d=1}^{D}$ shared by the radar and by the coexisting systems, where $\Gamma _{d}=[f_{d,l},f_{d,u}]$, with $f_{d,l}$ and $f_{d,u}$ denoting the lower and upper normalized frequencies of the $d$th band, respectively.
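Before turning to the interfering energy, note that the PAR (\ref{eq: PAPR complex}) is simple to evaluate; the sketch below also illustrates the two bounds, which are attained by a constant-modulus waveform and by a single-chip impulse, respectively.
\begin{verbatim}
import numpy as np

def par(y):
    # J_PAR = max_k |y_k|^2 / (||y||^2 / K), bounded between 1 and K
    K = y.size
    return K * np.max(np.abs(y) ** 2) / np.sum(np.abs(y) ** 2)

# par(np.exp(1j * np.pi * np.arange(8) ** 2 / 8) / np.sqrt(8)) == 1.0  (constant modulus)
# par(np.eye(8)[0].astype(complex)) == 8.0                             (impulse)
\end{verbatim}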
The amount of interfering energy generated by the radar waveform ${\mathbf{y}}_{\boldsymbol{\theta}_T}$ in the $d$th shared band is
\begin{equation} \int_{f_{d,l}}^{f_{d,u}}\bigg\vert \sum_{k=0}^{K-1}{y}_{\boldsymbol{\theta}_T, k}\,e^{-j2\pi fk}\bigg\vert^{2}df={\mathbf{y}}^{H}_{\boldsymbol{\theta}_T}{\boldsymbol{\Omega}}_{d}{\mathbf{y}}_{\boldsymbol{\theta}_T}, \label{eq: waveform energy} \end{equation}
where
\begin{equation} \big[ {\boldsymbol{\Omega}}_d \big]_{v,h} =\left\{ \begin{aligned} &f_{d,u}-f_{d,l} \qquad\qquad\qquad\qquad\text{if } v=h\\ &\frac{e^{j2\pi f_{d,u}(v-h)}-e^{j2\pi f_{d,l}(v-h)}}{j2\pi (v-h)} \quad \text{ if } v\neq h \end{aligned} \right. \end{equation}
for $(v,h)\in \{1,\cdots,K\}^2$. Let ${\boldsymbol{\Omega }}=\sum_{d=1}^{D}\omega_{d}{\boldsymbol{\Omega }}_{d}$ be a weighted interference covariance matrix, where the weights $\{\omega _{d}\}_{d=1}^{D}$ are assigned based on practical considerations regarding the impact of interference in the $D$ bands. These include the distance between the radar transmitter and the interfered systems, and the tactical importance of the coexisting systems \cite{Aubry2015}. Given a radar waveform $\mathbf{y}_{\boldsymbol{\theta}_T}=f_{\boldsymbol{\theta}_T}(\mathbf{s})$, we define the spectral compatibility penalty function as
\begin{equation} J_{\text{spectrum}}(\boldsymbol{\theta}_T)={\mathbf{y}}^{H}_{\boldsymbol{\theta}_T}{\boldsymbol{\Omega }}{\mathbf{y}_{\boldsymbol{\theta}_T}}, \label{eq: spectrum complex} \end{equation}
which is the total interfering energy from the radar waveform produced on the shared frequency bands.
\subsubsection{Constrained Transmitter Design}
For a fixed receiver parameter vector $\boldsymbol{\theta}_R$, the average loss (\ref{eq: tx loss RL no constraint}) is modified by introducing a penalty function $J\in\{ J_{\text{PAR}}, J_{\text{spectrum}}\}$. Accordingly, we formulate the transmitter loss function, encompassing (\ref{eq: tx loss RL no constraint}), (\ref{eq: PAPR complex}) and (\ref{eq: spectrum complex}), as
\begin{equation} \begin{aligned} \mathcal{L}^{\pi}_{T,c}(\boldsymbol{\theta}_T)&=\mathcal{L}^{\pi}_T(\boldsymbol{\theta}_T)+\lambda J(\boldsymbol{\theta}_T)\\ &=\sum_{i\in\{0,1\}}P(\mathcal{H}_i)\mathbb{E}_{\substack{ \mathbf{a}\sim \pi (\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T}) \\ \mathbf{z}\sim p(\mathbf{z}|\mathbf{a},\mathcal{H}_i)}}\big\{\ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i \big)\big\}+\lambda J(\boldsymbol{\theta}_T), \label{eq: tx loss RL} \end{aligned} \end{equation}
where $\lambda$ controls the weight of the penalty $J(\boldsymbol{\theta}_T)$, and is referred to as the \emph{penalty parameter}. When the penalty parameter $\lambda$ is small, the transmitter is trained to improve its ability to adapt to the environment, while placing less emphasis on reducing the PAR level or the interference energy from the radar waveform; and vice versa for large values of $\lambda$. Note that the waveform penalty function $J(\boldsymbol{\theta}_T)$ depends only on the transmitter trainable parameters $\boldsymbol{\theta}_T$. Thus, imposing the waveform constraint does not affect the receiver training.
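As an illustration, the interference matrix and the penalty $J_{\text{spectrum}}(\boldsymbol{\theta}_T)$ entering (\ref{eq: tx loss RL}) can be evaluated as in the following minimal NumPy sketch; the band list and weights are illustrative inputs rather than values fixed by the design.
\begin{verbatim}
import numpy as np

def omega_band(f_l, f_u, K):
    # [Omega_d]_{v,h}: f_u - f_l on the diagonal;
    # (exp(j2pi f_u (v-h)) - exp(j2pi f_l (v-h))) / (j2pi (v-h)) off the diagonal
    d = np.arange(K)[:, None] - np.arange(K)[None, :]
    den = np.where(d == 0, 1, d)  # dummy diagonal value to avoid division by zero
    off = (np.exp(2j * np.pi * f_u * d) - np.exp(2j * np.pi * f_l * d)) / (2j * np.pi * den)
    return np.where(d == 0, f_u - f_l, off)

def spectrum_penalty(y, bands, weights):
    # J_spectrum = y^H Omega y, with Omega = sum_d w_d Omega_d
    K = y.size
    Omega = sum(w * omega_band(f_l, f_u, K) for (f_l, f_u), w in zip(bands, weights))
    return float(np.real(np.conj(y) @ Omega @ y))

# e.g., spectrum_penalty(y, bands=[(0.3, 0.35), (0.5, 0.6)], weights=[1.0, 1.0])
\end{verbatim}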
The estimated version of the gradient of (\ref{eq: tx loss RL}) with respect to $\boldsymbol{\theta}_T$ is straightforward to write by introducing the penalty as
\begin{equation} \nabla_{\boldsymbol{\theta}_T}\widehat{\mathcal{L}}^{\pi}_{T,c}(\boldsymbol{\theta}_T)=\nabla_{\boldsymbol{\theta}_T}\widehat{\mathcal{L}}^{\pi}_T(\boldsymbol{\theta}_T)+\lambda \nabla_{\boldsymbol{\theta}_T}J(\boldsymbol{\theta}_T), \label{eq: est tx loss grad constraint} \end{equation}
where the gradient of the penalty function $\nabla_{\boldsymbol{\theta}_T}J(\boldsymbol{\theta}_T)$ is provided in Appendix A. Substituting (\ref{eq: tx loss RL grad est}) into (\ref{eq: est tx loss grad constraint}), we finally have the estimated gradient
\begin{equation} \nabla_{\boldsymbol{\theta}_T} \widehat{\mathcal{L}}^{\pi}_{T,c}(\boldsymbol{\theta}_T)=\frac{1}{Q_T}\sum_{q=1}^{Q_T} \ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}^{(q)}),i^{(q)}\big) \nabla_{\boldsymbol{\theta}_T}\ln\pi(\mathbf{a}^{(q)}|\mathbf{y}_{\boldsymbol{\theta}_T})+\lambda \nabla_{\boldsymbol{\theta}_T} J(\boldsymbol{\theta}_T), \label{eq: est tx loss grad constraint2} \end{equation}
which is used in the stochastic gradient update rule
\begin{equation} \begin{aligned} &\boldsymbol{\theta }_T^{(n+1)}=\boldsymbol{\theta}_T^{(n)} -\epsilon {\nabla}_{\boldsymbol{\theta}_T}\widehat{\mathcal{L}}^{\pi}_{T,c}(\boldsymbol{\theta}_T^{(n)}) \quad \text{for }n=1,2,\cdots. \end{aligned} \label{eq: tx constraint_sgd} \end{equation}
\subsection{Simultaneous Training}
This subsection discusses simultaneous training, in which the receiver and transmitter are updated simultaneously as illustrated in Fig. \ref{f:joint_training}. To this end, the objective function is the average loss
\begin{equation} \begin{aligned} \mathcal{L}^{\pi}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T)=&\sum_{i\in\{0,1\}}P(\mathcal{H}_i)\mathbb{E}_{\substack{ \mathbf{a}\sim \pi (\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T}) \\ \mathbf{z}\sim p(\mathbf{z}|\mathbf{a},\mathcal{H}_i)}}\big\{\ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i \big)\big\}. \label{eq: joint loss} \end{aligned} \end{equation}
This function is minimized over both parameters $\boldsymbol{\theta}_R$ and $\boldsymbol{\theta}_T$ via stochastic gradient descent.
\begin{figure}[H] \vspace{-3ex} \hspace{16ex} \includegraphics[width=1.2\linewidth]{joint_training} \vspace{-131ex} \caption{Simultaneous training of the end-to-end radar system. The receiver is trained by supervised learning, while the transmitter is trained by RL.
} \label{f:joint_training} \end{figure}
The gradient of (\ref{eq: joint loss}) with respect to $\boldsymbol{\theta}_R$ is
\begin{equation} \nabla_{\boldsymbol{\theta}_R}\mathcal{L}^{\pi}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T)=\sum_{i\in\{0,1\}}P(\mathcal{H}_i)\mathbb{E}_{\substack{ \mathbf{a}\sim \pi (\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T}) \\ \mathbf{z}\sim p(\mathbf{z}|\mathbf{a},\mathcal{H}_i)}}\big\{\nabla_{\boldsymbol{\theta}_R}\ell\big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i \big)\big\}, \label{eq: rx loss grad joint} \end{equation}
and the gradient of (\ref{eq: joint loss}) with respect to $\boldsymbol{\theta}_T$ is
\begin{equation} \begin{aligned} \nabla_{\boldsymbol{\theta}_T}\mathcal{L}^{\pi}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T)=&\sum_{i\in\{0,1\}}P(\mathcal{H}_i)\nabla_{\boldsymbol{\theta}_T} \mathbb{E}_{\substack{ \mathbf{a}\sim \pi (\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T}) \\ \mathbf{z}\sim p(\mathbf{z}|\mathbf{a},\mathcal{H}_i)}}\big\{ \ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}), i\big)\big\}\\ =&\sum_{i\in\{0,1\}}P(\mathcal{H}_i)\mathbb{E}_{\substack{ \mathbf{a}\sim \pi (\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T}) \\ \mathbf{z}\sim p(\mathbf{z}|\mathbf{a},\mathcal{H}_i)}}\big\{ \ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}), i\big)\nabla_{\boldsymbol{\theta}_T}\ln\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})\big\}. \label{eq: tx loss RL grad joint} \end{aligned} \end{equation}
To estimate the gradients (\ref{eq: rx loss grad joint}) and (\ref{eq: tx loss RL grad joint}), we assume access to $Q$ i.i.d. samples $\mathcal{D}=\big\{\mathbf{a}^{(q)}\sim \pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T}), \mathbf{z}^{(q)}\sim p(\mathbf{z}|\mathbf{a}^{(q)},\mathcal{H}_{i^{(q)}}), i^{(q)}\in \{0,1\} \big\}_{q=1}^{Q}$. From (\ref{eq: rx loss grad joint}), the estimated receiver gradient is
\begin{equation} \nabla_{\boldsymbol{\theta}_R}\widehat{\mathcal{L}}^{\pi}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T)=\frac{1}{Q}\sum_{q=1}^{Q}\nabla_{\boldsymbol{\theta}_R}\ell\big( f_{\boldsymbol{\theta}_R}(\mathbf{z}^{(q)}),i ^{(q)}\big). \label{eq: rx loss grad joint est} \end{equation}
Note that, in (\ref{eq: rx loss grad joint est}), the received vector $\mathbf{z}^{(q)}$ is obtained based on a given waveform $\mathbf{a}^{(q)}$ sampled from the policy $\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})$. Thus, the estimated receiver gradient (\ref{eq: rx loss grad joint est}) is averaged over the stochastic waveforms $\mathbf{a}$. This is in contrast to alternating training, in which the receiver gradient depends directly on the transmitted waveform $\mathbf{y}_{\boldsymbol{\theta}_T}$. From (\ref{eq: tx loss RL grad joint}), the estimated transmitter gradient is given by
\begin{equation} \nabla_{\boldsymbol{\theta}_T}\widehat{\mathcal{L}}^{\pi}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T)=\frac{1}{Q}\sum_{q=1}^{Q} \ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}^{(q)}),i^{(q)}\big) \nabla_{\boldsymbol{\theta}_T}\ln\pi(\mathbf{a}^{(q)}|\mathbf{y}_{\boldsymbol{\theta}_T}).
\label{eq: tx loss RL grad joint est} \end{equation}
Finally, denoting the parameter set $\boldsymbol{\theta}=\{\boldsymbol{\theta }_{R}, \boldsymbol{\theta }_{T} \}$, and using (\ref{eq: rx loss grad joint est}) and (\ref{eq: tx loss RL grad joint est}), the trainable parameter set $\boldsymbol{\theta}$ is updated according to the stochastic gradient descent rule
\begin{equation} \boldsymbol{\theta}^{(n+1)}=\boldsymbol{\theta}^{(n)} -\epsilon {\nabla }_{\boldsymbol{\theta }}\widehat{\mathcal{L}}^{\pi}(\boldsymbol{\theta}_R^{(n)}, \boldsymbol{\theta}_T^{(n)}) \label{eq: sgd} \end{equation}
across iterations $n=1,2,\cdots.$ The simultaneous training algorithm is summarized in Algorithm 2. Like alternating training, simultaneous training can be directly extended to incorporate prescribed waveform constraints by adding the penalty term $\lambda J(\boldsymbol{\theta}_T)$ to the average loss (\ref{eq: joint loss}).
\DontPrintSemicolon
\begin{algorithm} \SetAlgoLined \KwIn{initialization waveform $\mathbf{s}$; stochastic policy $\pi(\cdot|\mathbf{y}_{\boldsymbol{\theta}_T})$; learning rate $\epsilon$} \KwOut{learned parameter vectors $\boldsymbol{\theta}_R$ and $\boldsymbol{\theta}_T$} initialize $\boldsymbol{\theta}_R^{(0)}$ and $\boldsymbol{\theta}_T^{(0)}$, and set $n=0$\; \While{stopping criterion not satisfied}{ evaluate the receiver gradient $\nabla_{\boldsymbol{\theta}_R}\widehat{\mathcal{L}}^{\pi}(\boldsymbol{\theta}_R^{(n)}, \boldsymbol{\theta}_T^{(n)})$ and the transmitter gradient $\nabla_{\boldsymbol{\theta}_T}\widehat{\mathcal{L}}^{\pi}(\boldsymbol{\theta}_R^{(n)}, \boldsymbol{\theta}_T^{(n)})$ from (\ref{eq: rx loss grad joint est}) and (\ref{eq: tx loss RL grad joint est}), respectively\; update the receiver parameter vector $\boldsymbol{\theta}_R$ and the transmitter parameter vector $\boldsymbol{\theta}_T$ simultaneously via \begin{equation*} \boldsymbol{\theta}_R^{(n+1)}=\boldsymbol{\theta}_R^{(n)} -\epsilon {\nabla }_{\boldsymbol{\theta }_R}\widehat{\mathcal{L}}^{\pi}(\boldsymbol{\theta}_R^{(n)}, \boldsymbol{\theta}_T^{(n)}) \end{equation*} and \begin{equation*} \boldsymbol{\theta}_T^{(n+1)}=\boldsymbol{\theta}_T^{(n)} -\epsilon {\nabla }_{\boldsymbol{\theta }_T}\widehat{\mathcal{L}}^{\pi}(\boldsymbol{\theta}_R^{(n)}, \boldsymbol{\theta}_T^{(n)}) \end{equation*}\; $n\leftarrow n+1$ } \caption{Simultaneous Training} \end{algorithm}
\section{Theoretical Properties of the Gradients}
In this section, we discuss two useful theoretical properties of the gradients used for learning the receiver and the transmitter.
\subsection{Receiver Gradient}
As discussed previously, end-to-end learning of the transmitted waveform and the detector may be accomplished either by alternating or by simultaneous training. The main difference between alternating and simultaneous training concerns the update of the receiver trainable parameter vector $\boldsymbol{\theta}_R$. Alternating training of $\boldsymbol{\theta}_R$ relies on a fixed waveform $\mathbf{y}_{\boldsymbol{\theta}_T}$ (see Fig. \ref{f:rx_training}), while simultaneous training relies on random waveforms $\mathbf{a}$ generated in accordance with a preset policy, i.e., $\mathbf{a} \sim \pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})$, as shown in Fig. \ref{f:joint_training}.
The relation between the gradient used by alternating training, $\nabla_{\boldsymbol{\theta }_R}{\mathcal{L}}_R(\boldsymbol{\theta}_R)$, and the gradient of simultaneous training, $\nabla_{\boldsymbol{\theta}_R}\mathcal{L}^{\pi}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T)$, with respect to $\boldsymbol{\theta}_R$ is stated by the following proposition.
\begin{proposition} For the loss function (\ref{eq: rx loss}) computed based on a waveform $\mathbf{y}_{\boldsymbol{\theta}_T}$ and the loss function (\ref{eq: tx loss RL no constraint}) computed based on a stochastic policy $\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})$ continuous in $\mathbf{a}$, the following equality holds: \begin{equation} \begin{aligned} \nabla_{\boldsymbol{\theta }_R}{\mathcal{L}}_R(\boldsymbol{\theta}_R)=\nabla_{\boldsymbol{\theta}_R}\mathcal{L}^{\pi}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T). \end{aligned} \label{eq: Rx joint grad} \end{equation} \end{proposition}
\begin{proof} See Appendix B. \end{proof}
Proposition 1 states that the gradient of simultaneous training, $\nabla_{\boldsymbol{\theta}_R}\mathcal{L}^{\pi}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T)$, equals the gradient of alternating training, $\nabla_{\boldsymbol{\theta }_R}\mathcal{L}_R(\boldsymbol{\theta}_R)$, even though simultaneous training applies a random waveform $\mathbf{a}\sim \pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})$ to train the receiver. Note that this result applies only to the ensemble means according to (\ref{eq: rx loss}) and (\ref{eq: rx loss grad joint}), and not to the empirical estimates used by Algorithms 1 and 2. Nevertheless, Proposition 1 suggests that the training updates of the receiver are unaffected by the choice of alternating or simultaneous training. That said, given the distinct updates of the transmitter's parameters, the overall trajectory of the parameters ($\boldsymbol{\theta}_R$, $\boldsymbol{\theta}_T$) during training may differ between the two algorithms.
\subsection{Transmitter Gradient}
As shown in the previous section, the gradients used for learning the receiver parameters $\boldsymbol{\theta}_R$ by alternating training (\ref{eq: est. rx loss grad}) or simultaneous training (\ref{eq: rx loss grad joint est}) may be directly estimated from the channel output samples $\mathbf{z}^{(q)}$. In contrast, the gradient used for learning the transmitter parameters $\boldsymbol{\theta}_T$ according to (\ref{eq: rx loss}) cannot be directly estimated from the channel output samples. To address this problem, in Algorithms 1 and 2, the transmitter is trained by exploring the space of transmitted waveforms according to a policy $\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})$. We refer to the transmitter loss gradient obtained via the policy gradient (\ref{eq: tx loss RL grad joint}) as the \emph{RL transmitter gradient}. The benefit of RL-based transmitter training is that it renders access to the likelihood function $p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T}, \mathcal{H}_i)$ unnecessary for evaluating the RL transmitter gradient; rather, the gradient is estimated via samples. We now formalize the relation between the RL transmitter gradient (\ref{eq: tx loss RL grad joint}) and the transmitter gradient obtained for a known likelihood according to (\ref{eq: rx loss}).
As mentioned, if the likelihood $p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i)$ were known and differentiable with respect to the transmitter parameter vector $\boldsymbol{\theta}_T$, then $\boldsymbol{\theta}_T$ could be learned by minimizing the average loss (\ref{eq: rx loss}), which we rewrite as a function of both $\boldsymbol{\theta}_R$ and $\boldsymbol{\theta}_T$ as
\begin{equation} \mathcal{L}(\boldsymbol{\theta}_R,\boldsymbol{\theta}_T)= \sum_{i\in\{0,1\}}P(\mathcal{H}_i)\mathbb{E}_{\substack{ \mathbf{z}\sim p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i)}}\big\{ \ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i\big)\big\}. \label{eq: known loss} \end{equation}
The gradient of (\ref{eq: known loss}) with respect to $\boldsymbol{\theta}_T$ is expressed as
\begin{equation} \begin{aligned} \nabla_{\boldsymbol{\theta}_T}\mathcal{L}(\boldsymbol{\theta}_R,\boldsymbol{\theta}_T ) &=\sum_{i\in\{0,1\}}P(\mathcal{H}_i)\mathbb{E}_{\substack{ \mathbf{z}\sim p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i)}}\big\{\ell\big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i\big) \nabla_{\boldsymbol{\theta}_T}\ln p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i) \big\}, \end{aligned} \label{eq: tx loss known grad} \end{equation}
where the equality leverages the relation
\begin{equation} \nabla_{\boldsymbol{\theta}_T}p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i)=p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i)\nabla_{\boldsymbol{\theta}_T}\ln p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i). \label{eq: log-trick} \end{equation}
The relation between the RL transmitter gradient $\nabla_{\boldsymbol{\theta}_T}\mathcal{L}^{\pi}(\boldsymbol{\theta}_R,\boldsymbol{\theta}_T)$ in (\ref{eq: tx loss RL grad joint}) and the transmitter gradient $\nabla_{\boldsymbol{\theta}_T}\mathcal{L}(\boldsymbol{\theta}_R,\boldsymbol{\theta}_T)$ in (\ref{eq: tx loss known grad}) is elucidated by the following proposition.
\begin{proposition} If the likelihood function $p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i)$ is differentiable with respect to the transmitter parameter vector $\boldsymbol{\theta}_T$ for $i\in\{0,1\}$, the following equality holds: \begin{equation} \nabla_{\boldsymbol{\theta}_T}\mathcal{L}^{\pi}(\boldsymbol{\theta}_R,\boldsymbol{\theta}_T)=\nabla_{\boldsymbol{\theta}_T}\mathcal{L}(\boldsymbol{\theta}_R,\boldsymbol{\theta}_T). \end{equation} \end{proposition}
\begin{proof} See Appendix C. \end{proof}
Proposition 2 establishes that the RL transmitter gradient $\nabla_{\boldsymbol{\theta}_T}\mathcal{L}^{\pi}(\boldsymbol{\theta}_R,\boldsymbol{\theta}_T)$ equals the transmitter gradient $\nabla_{\boldsymbol{\theta}_T}\mathcal{L}(\boldsymbol{\theta}_R,\boldsymbol{\theta}_T)$ for any given receiver parameters $\boldsymbol{\theta}_R$. Proposition 2 hence provides a theoretical justification for replacing the gradient $\nabla_{\boldsymbol{\theta}_T}\mathcal{L}(\boldsymbol{\theta}_R,\boldsymbol{\theta}_T)$ with the RL gradient $\nabla_{\boldsymbol{\theta}_T}\mathcal{L}^{\pi}(\boldsymbol{\theta}_R,\boldsymbol{\theta}_T)$ to perform transmitter training, as done in Algorithms 1 and 2.
\section{Numerical Results}
This section first introduces the simulation setup, and then presents numerical examples of waveform design and detection performance that compare the proposed data-driven methodology with existing model-based approaches.
While the simulation results presented in this section rely on various models of target, clutter, and interference, this work expressly distinguishes data-driven learning from model-based design. Learning schemes rely solely on data and not on model information. In contrast, model-based design implies a system structure that is based on a specific and known model. Furthermore, learning may rely on synthetic datasets containing diverse samples generated according to a variety of models. In contrast, model-based design typically relies on a single model. For example, as we will see, a synthetic dataset for learning may contain multiple clutter sample sets, each generated according to a different clutter model. Conversely, a single clutter model is typically assumed for model-based design.
\subsection{Models, Policy, and Parameters}
\subsubsection{Models of target, clutter, and noise}
The target is stationary, and has a Rayleigh envelope, i.e., $\alpha\sim \mathcal{CN}(0,\sigma_{\alpha}^2)$. The noise has a zero-mean Gaussian distribution with the correlation matrix $[\boldsymbol{\Omega }_n]_{v,h}=\sigma _{n}^{2}\rho^{|v-h|}$ for $(v,h)\in \{1,\cdots,K\}^2$, where $\sigma_n^2$ is the noise power and $\rho$ is the one-lag correlation coefficient. The clutter vector in (\ref{eq: rx}) is the superposition of returns from $2K-1$ consecutive range cells, reflecting all clutter illuminated by the $K$-length signal as it sweeps in range across the target. Accordingly, the clutter vector may be expressed as
\begin{equation} {\mathbf{c}}=\sum_{g=-K+1}^{K-1}{\gamma }_{g}\mathbf{J}_{g}{\mathbf{y}}, \label{eq: clutter} \end{equation}
where $\mathbf{J}_{g}$ represents the shifting matrix at the $g$th range cell, with elements
\begin{equation} \big[\mathbf{J}_{g}\big]_{v,h}=\left\{ \begin{aligned} &1 \quad \text{if} \quad v-h=g\\ &0\quad \text{if} \quad v-h\neq g \end{aligned} \right. \qquad (v,h)\in \{1,\cdots ,K\}^{2}. \end{equation}
The magnitude $|\gamma_g|$ of the $g$th clutter scattering coefficient is generated according to a Weibull distribution \cite{Richards 2010}
\begin{equation} p(|\gamma_g|)=\frac{\beta}{\nu^{\beta}}|\gamma_g|^{\beta-1}\exp\bigg( - \frac{|\gamma_g|^{\beta}}{\nu^{\beta}} \bigg), \label{eq: Weibull pdf} \end{equation}
where $\beta$ is the shape parameter and $\nu$ is the scale parameter of the distribution. Let $\sigma_{\gamma_g}^2$ represent the power of the clutter scattering coefficient $\gamma_g$. The relation between $\sigma_{\gamma_g}^2$ and the Weibull distribution parameters $\{\beta,\nu\}$ is \cite{Farina 1987}
\begin{equation} \sigma_{\gamma_g}^2=\mathbb{E}\{|{\gamma}_g|^2\}=\frac{2\nu^2}{\beta}\Gamma\bigg(\frac{2}{\beta}\bigg), \end{equation}
where $\Gamma(\cdot)$ is the Gamma function. The nominal range of the shape parameter is $0.25\leq\beta\leq2$ \cite{shape}. In the simulation, the complex-valued clutter scattering coefficient $\gamma_g$ is obtained by multiplying the real-valued Weibull random variable $|\gamma_g|$ by the factor $\exp(j\psi_g)$, where the phase $\psi_g$ is distributed uniformly in the interval $(0,2\pi)$. When the shape parameter is $\beta=2$, the clutter scattering coefficient $\gamma_g$ follows the Gaussian distribution $\gamma_g \sim \mathcal{CN}(0,\sigma_{\gamma_g}^2)$.
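A sketch of the clutter synthesis just described is given below; the Weibull scale $\nu$ is obtained by inverting the power relation, using $\Gamma(1+2/\beta)=(2/\beta)\Gamma(2/\beta)$.
\begin{verbatim}
import numpy as np
from math import gamma

def sample_gamma_g(beta, sigma2, rng):
    # Weibull magnitude with E{|gamma_g|^2} = sigma2, and a uniform phase in (0, 2pi)
    nu = np.sqrt(sigma2 / gamma(1.0 + 2.0 / beta))
    return nu * rng.weibull(beta) * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi))

def clutter(y, beta, sigma2, rng):
    # c = sum_g gamma_g J_g y over the 2K - 1 range cells
    K = y.size
    c = np.zeros(K, dtype=complex)
    for g in range(-K + 1, K):
        J_g = np.eye(K, k=-g)  # [J_g]_{v,h} = 1 iff v - h = g
        c += sample_gamma_g(beta, sigma2, rng) * (J_g @ y)
    return c
\end{verbatim}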
Based on the assumed mathematical models of the target, clutter, and noise, it can be shown that the optimal detector in the NP sense is the square law detector \cite{Richards 2005}, and the adaptive waveform for target detection can be obtained by maximizing the signal-to-clutter-plus-noise ratio at the receiver output at the time of target detection (see Appendix A of \cite{Wei 2019NN} for details).
\subsubsection{Transmitter and Receiver Models}
Waveform generation and detection are implemented using feedforward neural networks as explained in Section II-B. The transmitter $\tilde{f}_{\boldsymbol{\theta}_T}(\cdot)$ is a feedforward neural network with four layers, i.e., an input layer with $2K$ neurons, two hidden layers with $M=24$ neurons, and an output layer with $2K$ neurons. The activation function is the exponential linear unit (ELU) \cite{ELU}. The receiver $\tilde{f}_{\boldsymbol{\theta}_R}(\cdot)$ is implemented as a feedforward neural network with four layers, i.e., an input layer with $2K$ neurons, two hidden layers with $M$ neurons, and an output layer with one neuron. The sigmoid function is chosen as the activation function. The layout of the transmitter and receiver networks is summarized in Table I.
\begin{table}[H] \caption{Layout of the transmitter and receiver networks} \label{table:1} \centering \resizebox{0.6\columnwidth}{!}{ \begin{tabular}{@{}cccccccc@{}}\toprule \multicolumn{1}{c}{} & \multicolumn{3}{c}{Transmitter $\tilde{f}_{\boldsymbol{\theta}_T}(\cdot)$} & \phantom{a} & \multicolumn{3}{c}{Receiver $\tilde{f}_{\boldsymbol{\theta}_R}(\cdot)$} \\ \cmidrule{2-4} \cmidrule{6-8} Layer& 1 & 2-3 & 4 && 1 & 2-3 & 4 \\ Dimension& $2K$ & $M$ & $2K$ && $2K$ & $M$ & $1$ \\ Activation& - & ELU & Linear && - & Sigmoid & Sigmoid \\ \bottomrule \end{tabular} } \end{table}
\subsubsection{Gaussian policy}
A Gaussian policy $\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})$ is adopted for RL-based transmitter training. Accordingly, the output of the stochastic transmitter follows a complex Gaussian distribution $\mathbf{a}\sim\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})=\mathcal{CN}\big(\sqrt{1-\sigma^2_p}\mathbf{y}_{\boldsymbol{\theta}_T},\frac{\sigma^2_p}{K}\mathbf{I}_K\big)$, where the per-chip variance $\sigma^2_p$ is referred to as the \emph{policy hyperparameter}. When $\sigma^2_p=0$, the stochastic policy becomes deterministic \cite{Silver 2014}, i.e., the policy is governed by a Dirac function at $\mathbf{y}_{\boldsymbol{\theta}_T}$. In this case, the policy does not explore the space of transmitted waveforms, but it ``exploits'' the current waveform. At the opposite end, when $\sigma^2_p=1$, the output of the stochastic transmitter is independent of $\mathbf{y}_{\boldsymbol{\theta}_T}$, and the policy becomes zero-mean complex Gaussian noise with covariance matrix $\mathbf{I}_K/K$. Thus, the policy hyperparameter $\sigma^2_p$ is selected in the range $(0,1)$, and its value sets a trade-off between exploration of new waveforms and exploitation of the current waveform.
\subsubsection{Training Parameters}
The initialization waveform $\mathbf{s}$ is a linear frequency modulated pulse with $K=8$ complex-valued chips and chirp rate $R=(100\times10^3)/(40\times 10^{-6})$ Hz/s. Specifically, the $k$th chip of $\mathbf{s}$ is given by
\begin{equation} \mathbf{s}(k)=\frac{1}{\sqrt{K}}\exp \big\{ j\pi R \big( k/f_s\big)^2 \big\} \end{equation}
for $k\in\{0,\dots,K-1\}$, where $f_s=200$ kHz.
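The initialization chirp and the Gaussian policy admit the following compact sketch; a circularly symmetric complex Gaussian vector with per-chip variance $\sigma_p^2/K$ has independent real and imaginary parts with variance $\sigma_p^2/(2K)$ each.
\begin{verbatim}
import numpy as np

def lfm_init(K=8, R=100e3 / 40e-6, fs=200e3):
    # s(k) = exp{ j pi R (k / fs)^2 } / sqrt(K), k = 0, ..., K-1
    k = np.arange(K)
    return np.exp(1j * np.pi * R * (k / fs) ** 2) / np.sqrt(K)

def sample_policy(y, sigma2_p, rng):
    # a ~ CN( sqrt(1 - sigma2_p) * y, (sigma2_p / K) * I_K )
    K = y.size
    std = np.sqrt(sigma2_p / (2.0 * K))
    noise = std * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
    return np.sqrt(1.0 - sigma2_p) * y + noise
\end{verbatim}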
The signal-to-noise ratio (SNR) is defined as
\begin{equation} \text{SNR}=10\log_{10}\bigg\{\frac{\sigma_{\alpha}^2}{\sigma_n^2}\bigg\}. \label{eq: SNR} \end{equation}
Training was performed at $\text{SNR}=12.5$ dB. The clutter environment is uniform with $\sigma_{\gamma_g}^2=-11.7$ dB, $\forall g\in\{-K+1,\dots, K-1\}$, such that the overall clutter power is $\sum_{g=-(K-1)}^{K-1}\sigma_{\gamma_g}^2=0$ dB. The noise power is $\sigma_n^2=0$ dB, and the one-lag correlation coefficient is $\rho=0.7$. Denote by $\beta_{\text{train}}$ and $\beta_{\text{test}}$ the shape parameters of the clutter distribution (\ref{eq: Weibull pdf}) applied in the training and test stages, respectively. Unless stated otherwise, we set $\beta_{\text{train}}=\beta_{\text{test}}=2$. To obtain a balanced classification dataset, the training set is populated by samples belonging to either hypothesis with equal prior probability, i.e., $P(\mathcal{H}_0)=P(\mathcal{H}_1)=0.5$. The number of training samples is set as $Q_R=Q_T=Q=2^{13}$ in the estimated gradients (\ref{eq: est. rx loss grad}), (\ref{eq: tx loss RL grad est}), (\ref{eq: rx loss grad joint est}), and (\ref{eq: tx loss RL grad joint est}). Unless stated otherwise, the policy hyperparameter is set to $\sigma^2_p=10^{-1.5}$, and the penalty parameter is $\lambda=0$, i.e., there are no waveform constraints. The Adam optimizer \cite{adam} is adopted to train the system over a number of iterations chosen by trial and error. The learning rate is $\epsilon=0.005$. In the testing phase, $2\times10^5$ samples are used to estimate the probability of false alarm ($P_{fa}$) under hypothesis $\mathcal{H}_0$, while $5\times10^4$ samples are used to estimate the probability of detection ($P_d$) under hypothesis $\mathcal{H}_1$. Receiver operating characteristic (ROC) curves are obtained via Monte Carlo simulations by varying the threshold applied at the output of the receiver. Results are obtained by averaging over fifty trials. Numerical results presented in this section assume simultaneous training, unless stated otherwise.
\subsection{Results and Discussion}
\subsubsection{Simultaneous Training vs Training with Known Likelihood}
We first analyze the impact of the choice of the policy hyperparameter $\sigma_p^2$ on the performance on the training set. Fig. \ref{f: var_loss} shows the empirical cross-entropy loss of simultaneous training versus the policy hyperparameter $\sigma^2_p$ upon completion of the training process. The empirical loss obtained by training the system with a known channel, i.e., by minimizing (\ref{eq: known loss}), is plotted for comparison. It is seen that there is an optimal policy hyperparameter $\sigma^2_p$ for which the empirical loss of simultaneous training approaches the loss obtained with a known channel. As the policy hyperparameter $\sigma^2_p$ tends to $0$, the output of the stochastic transmitter $\mathbf{a}$ is close to the waveform $\mathbf{y}_{\boldsymbol{\theta}_T}$, which leads to no exploration of the space of transmitted waveforms. In contrast, when the policy hyperparameter $\sigma^2_p$ tends to $1$, the output of the stochastic transmitter becomes complex Gaussian noise with zero mean and covariance matrix $\mathbf{I}_K/K$. In both cases, the RL transmitter gradient is difficult to estimate accurately.
\begin{figure} \centering \vspace{-3ex} \includegraphics[width=0.7\linewidth]{exploration2} \vspace{-3ex} \caption{Empirical training loss versus the policy hyperparameter $\sigma^2_p$ for the simultaneous training algorithm and for training with a known channel.} \label{f: var_loss} \end{figure}
While Fig. \ref{f: var_loss} evaluates the performance on the training set in terms of the empirical cross-entropy loss, the choice of the policy hyperparameter $\sigma^2_p$ should be based on validation data and in terms of the testing criterion that is ultimately of interest. To elaborate on this point, ROC curves obtained by simultaneous training with different values of the policy hyperparameter $\sigma^2_p$, and by training with a known channel, are shown in Fig. \ref{f: var_ROC}. As shown in the figure, simultaneous training with $\sigma^2_p=10^{-1.5}$ achieves a similar ROC to training with a known channel. The choice $\sigma^2_p=10^{-1.5}$ also yields the lowest empirical training loss in Fig. \ref{f: var_loss}. These results suggest that training is not subject to overfitting \cite{osvaldo1}.
\begin{figure} \centering \vspace{-3ex} \includegraphics[width=0.7\linewidth]{var_ROC3} \vspace{-3ex} \caption{ROC curves for training with a known channel and for simultaneous training with different values of the policy hyperparameter $\sigma^2_p$.} \label{f: var_ROC} \end{figure}
\subsubsection{Simultaneous Training vs Alternating Training}
We now compare simultaneous and alternating training in terms of ROC curves in Fig. \ref{f: alter}. ROC curves based on the optimal detector in the NP sense, namely the square law detector \cite{Richards 2005}, paired with the adaptive/initialization waveform, are plotted as benchmarks. As shown in the figure, simultaneous training provides a similar detection performance to alternating training. Furthermore, both simultaneous training and alternating training are seen to result in significant improvements as compared to training of only the receiver, and provide detection performance comparable to the adaptive waveform \cite{Wei 2019NN} and square law detector.
\begin{figure}[H] \centering \vspace{-3ex} \includegraphics[width=0.7\linewidth]{Gaussian_ROC3} \vspace{-3ex} \caption{ROC curves with and without transmitter training.} \label{f: alter} \end{figure}
\subsubsection{Learning Gaussian and Non-Gaussian Clutter}
Two sets of ROC curves under different clutter statistics are illustrated in Fig. \ref{f: non-G1}. Each set contains two ROC curves with the same clutter statistics: one obtained by simultaneous training, and the other by model-based design. For simultaneous training, the shape parameter of the clutter distribution (\ref{eq: Weibull pdf}) in the training stage is the same as that in the test stage, i.e., $\beta_{\text{train}}=\beta_{\text{test}}$. In the test stage, for Gaussian clutter ($\beta_{\text{test}}=2$), the model-based ROC curve is obtained by the adaptive waveform and the optimal detector in the NP sense. As expected, simultaneous training provides comparable detection performance to the adaptive waveform and square law detector (also shown in Fig. \ref{f: alter}). In contrast, when the clutter is non-Gaussian ($\beta_{\text{test}}=0.25$), the optimal detector in the NP sense is mathematically intractable. Under this scenario, the data-driven approach is beneficial since it relies on data rather than on a model.
As observed in the figure, for non-Gaussian clutter with a shape parameter $\beta_{\text{test}}=0.25$, simultaneous training outperforms the adaptive waveform and square law detector.
\begin{figure}[H] \centering \vspace{-3ex} \includegraphics[width=0.7\linewidth]{Non_G2} \vspace{-3ex} \caption{ROC curves for Gaussian/non-Gaussian clutter. The end-to-end radar system is trained and tested with the same clutter statistics, i.e., $\beta_{\text{train}}=\beta_{\text{test}}$.} \label{f: non-G1} \end{figure}
\subsubsection{Simultaneous Training with Mixed Clutter Statistics}
The robustness of the trained radar system to the clutter statistics is investigated next. As discussed previously, model-based design relies on a single clutter model, whereas data-driven learning depends on a training dataset. The dataset may contain samples from multiple clutter models. Thus, the system based on data-driven learning may be robustified by drawing samples from a mixture of clutter models. In the test stage, the clutter model may not be the same as any of the clutter models used in the training stage. As shown in Fig. \ref{f: non-mix}, for simultaneous training, the training dataset contains clutter samples generated from (\ref{eq: Weibull pdf}) with four different values of the shape parameter, $\beta_{\text{train}}\in \{0.25, 0.5, 0.75, 1\}$. The test data is generated with a clutter shape parameter $\beta_{\text{test}}=0.3$ not included in the training dataset. The end-to-end learning radar system trained by mixing clutter samples provides performance gains compared to a model-based system using an adaptive waveform and square law detector.
\begin{figure}[H] \centering \vspace{-3ex} \includegraphics[width=0.7\linewidth]{Non_mix} \vspace{-3ex} \caption{ROC curves for non-Gaussian clutter. To robustify detection performance, the end-to-end learning radar system is trained with mixed clutter statistics, while testing uses a clutter model different from those used for training.} \label{f: non-mix} \end{figure}
\subsubsection{Simultaneous Training under PAR Constraint}
Detection performance with waveforms learned subject to a PAR constraint is shown in Fig. \ref{f:PAPR_fig1}. The end-to-end system trained with no PAR constraint, i.e., $\lambda=0$, serves as the reference. It is observed that the detection performance degrades as the value of the penalty parameter $\lambda$ increases. Moreover, the PAR values of waveforms obtained with different $\lambda$ are shown in Table \ref{table:3}. As shown in Fig. \ref{f:PAPR_fig1} and Table \ref{table:3}, there is a tradeoff between detection performance and PAR level. For instance, given $P_{fa}=5\times 10^{-4}$, training the transmitter with the largest penalty parameter $\lambda=0.1$ yields the lowest $P_d=0.852$ with the lowest PAR value of $0.17$ dB. In contrast, training the transmitter with no PAR constraint, i.e., $\lambda=0$, yields the best detection performance with the largest PAR value of $3.92$ dB. Fig. \ref{f:PAPR_fig2} compares the normalized modulus of waveforms obtained with different values of the penalty parameter $\lambda$. As shown in Fig. \ref{f:PAPR_fig2} and Table \ref{table:3}, the larger the penalty parameter $\lambda$ adopted in simultaneous training, the smaller the PAR value of the waveform.
\begin{table}[H] \caption{PAR values of waveforms with different values of the penalty parameter $\lambda$} \label{table:3} \centering \resizebox{0.5\columnwidth}{!}{ \begin{tabular}{@{}cccc@{}}\toprule & $\lambda=0$ (reference) & $\lambda=0.01$ & $\lambda=0.1$ \\ \cmidrule{2-4} PAR [dB] (\ref{eq: PAPR complex}) & 3.92 & 1.76 & 0.17 \\ \bottomrule \end{tabular} } \end{table}
\begin{figure}[H] \centering \vspace{-3ex} \includegraphics[width=0.68\linewidth]{PAPR_ROC} \vspace{-3ex} \caption{ROC curves for the PAR constraint with different values of the penalty parameter $\lambda$.} \label{f:PAPR_fig1} \end{figure}
\begin{figure}[H] \centering \vspace{-3ex} \includegraphics[width=0.68\linewidth]{PAPR_modulus} \vspace{-3ex} \caption{Normalized modulus of transmitted waveforms with different values of the penalty parameter $\lambda$.} \label{f:PAPR_fig2} \end{figure}
\subsubsection{Simultaneous Training under Spectral Compatibility Constraint}
ROC curves for the spectral compatibility constraint with different values of the penalty parameter $\lambda$ are illustrated in Fig. \ref{f:spectrum_fig1}. The shared frequency bands are $\Gamma_1=[0.3,0.35]$ and $\Gamma_2=[0.5,0.6]$. The end-to-end system trained with no spectral compatibility constraint, i.e., $\lambda=0$, serves as the reference. Training the transmitter with a large value of the penalty parameter $\lambda$ is seen to result in performance degradation. The interfering energy from radar waveforms trained with different values of $\lambda$ is shown in Table \ref{table:4}. It is observed that $\lambda$ plays an important role in controlling the tradeoff between detection performance and spectral compatibility of the waveform. For instance, for a fixed $P_{fa}=5 \times 10^{-4}$, training the transmitter with $\lambda=0$ yields $P_d=0.855$ with an amount of interfering energy of $-5.79$ dB on the shared frequency bands, while training the transmitter with $\lambda=1$ creates notches in the spectrum of the transmitted waveform at the shared frequency bands. Energy spectral densities of transmitted waveforms with different values of $\lambda$ are illustrated in Fig. \ref{f:spectrum_fig2}. A larger penalty parameter $\lambda$ results in a lower amount of interfering energy in the prescribed shared frequency regions. Note, for instance, that the nulls of the energy spectral density of the waveform for $\lambda=1$ are much deeper than their counterparts for $\lambda=0.2$.
\begin{figure}[H] \centering \vspace{-3ex} \includegraphics[width=0.7\linewidth]{spectrum_ROC} \vspace{-3ex} \caption{ROC curves for the spectral compatibility constraint for different values of the penalty parameter $\lambda$.} \label{f:spectrum_fig1} \end{figure}
\begin{table}[H] \caption{Interfering energy from radar waveforms with different values of the penalty parameter $\lambda$} \label{table:4} \centering \resizebox{0.65\columnwidth}{!}{ \begin{tabular}{@{}cccc@{}}\toprule & $\lambda=0$ (reference) & $\lambda=0.2$ & $\lambda=1$ \\ \cmidrule{2-4} Interfering energy [dB] (\ref{eq: spectrum complex}) & -5.79 & -10.39& -17.11 \\ \bottomrule \end{tabular} } \end{table}
\begin{figure}[H] \centering \vspace{-3ex} \includegraphics[width=0.7\linewidth]{spectrum_psd} \vspace{-3ex} \caption{Energy spectral density of waveforms with different values of the penalty parameter $\lambda$.} \label{f:spectrum_fig2} \end{figure}
\section{Conclusions}
In this paper, we have formulated the radar design problem as end-to-end learning of waveform generation and detection.
We have developed two training algorithms, both of which are able to incorporate various waveform constraints into the system design. Training may be implemented either as simultaneous supervised training of the receiver and RL-based training of the transmitter, or by alternating between training of the receiver and of the transmitter. Both training algorithms achieve similar performance. We have also robustified the detection performance by training the system with mixed clutter statistics. Numerical results have shown that the proposed end-to-end learning approaches are beneficial under non-Gaussian clutter, and successfully adapt the transmitted waveform to the actual statistics of environmental conditions, while satisfying operational constraints.
\numberwithin{equation}{section} \appendices
\section{Gradient of Penalty Functions}
In this appendix, we derive the gradients of the penalty functions (\ref{eq: PAPR complex}) and (\ref{eq: spectrum complex}) with respect to the transmitter parameter vector $\boldsymbol{\theta}_T$. To facilitate the presentation, let $\overline{\mathbf{y}}_{\boldsymbol{\theta}_T}$ represent a $2K \times 1$ real vector comprising the real and imaginary parts of the waveform $\mathbf{y}_{\boldsymbol{\theta}_T}$, i.e., $\overline{\mathbf{y}}_{\boldsymbol{\theta}_T}=\big[\Re(\mathbf{y}_{\boldsymbol{\theta}_T}), \Im (\mathbf{y}_{\boldsymbol{\theta}_T})\big]^T$.
\subsubsection{Gradient of PAR Penalty Function}
As discussed in Section II-B, the transmitted power is normalized such that $||\mathbf{y}_{\boldsymbol{\theta}_T} ||^2=||\overline{\mathbf{y}}_{\boldsymbol{\theta}_T}||^2=1$. Let the subscript ``max'' represent the chip index associated with the PAR value (\ref{eq: PAPR complex}). By leveraging the chain rule, the gradient of (\ref{eq: PAPR complex}) with respect to $\boldsymbol{\theta}_T$ is written as
\begin{equation} \nabla_{\boldsymbol{\theta}_T}J_{\text{PAR}}(\boldsymbol{\theta}_T)=\nabla_{\boldsymbol{\theta}_T}\overline{\mathbf{y}}_{\boldsymbol{\theta}_T} \cdot \mathbf{g}_{\text{PAR}}, \end{equation}
where $\mathbf{g}_{\text{PAR}}$ represents the gradient of the PAR penalty function $J_{\text{PAR}}(\boldsymbol{\theta}_T)$ with respect to $\overline{\mathbf{y}}_{\boldsymbol{\theta}_T}$, and is given by
\begin{equation} \mathbf{g}_{\text{PAR}}=\big[ \begin{array}{c;{2pt/2pt}c} 0,\dots ,0, 2K\Re({{y}}_{\boldsymbol{\theta}_T,\text{max}}),0, \dots, 0 & 0,\dots, 0, 2K\Im({{y}}_{\boldsymbol{\theta}_T,\text{max}}),0, \dots, 0 \end{array} \big]^T. \end{equation}
\subsubsection{Gradient of Spectral Compatibility Penalty Function}
According to the chain rule, the gradient of (\ref{eq: spectrum complex}) with respect to $\boldsymbol{\theta}_T$ is expressed as
\begin{equation} \nabla_{\boldsymbol{\theta}_T}J_{\text{spectrum}}(\boldsymbol{\theta}_T)=\nabla_{\boldsymbol{\theta}_T}\overline{\mathbf{y}}_{\boldsymbol{\theta}_T} \cdot \mathbf{g}_{\text{spectrum}}, \end{equation}
where $\mathbf{g}_{\text{spectrum}}$ denotes the gradient of the spectral compatibility penalty function $J_{\text{spectrum}}(\boldsymbol{\theta}_T)$ with respect to $\overline{\mathbf{y}}_{\boldsymbol{\theta}_T}$, and is given by
\begin{equation} \mathbf{g}_{\text{spectrum}}=\left[ \begin{array}{c} 2\Re\big[(\boldsymbol{\Omega}\mathbf{y}_{\boldsymbol{\theta}_T})^*\big] \\ \hdashline[2pt/2pt] -2\Im\big[(\boldsymbol{\Omega}\mathbf{y}_{\boldsymbol{\theta}_T})^*\big] \end{array} \right].
\end{equation}
\section{Proof of Proposition 1}
\begin{proof} The average loss function of simultaneous training, $\mathcal{L}^{\pi}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T)$ in (\ref{eq: joint loss}), can be expressed as
\begin{equation} \mathcal{L}^{\pi}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T)=\sum_{i\in\{0,1\}}P(\mathcal{H}_i) \int_{\mathcal{A}}\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})\int_{\mathcal{Z}} \ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i\big)p(\mathbf{z}|\mathbf{a},\mathcal{H}_i)d\mathbf{z}d\mathbf{a}. \label{a: fuse loss ori.} \end{equation}
As discussed in Section II-B, the last layer of the receiver implementation consists of a sigmoid activation function, which constrains the output of the receiver to $f_{\boldsymbol{\theta}_R}(\mathbf{z})\in (0,1)$. Thus, there exists a constant $b$ such that $\sup_{\mathbf{z},i} \ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i\big) <b<\infty$. Furthermore, for $i\in \{0,1\}$, the instantaneous cross-entropy loss $\ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i\big)$, the policy $\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})$, and the likelihood $p(\mathbf{z}|\mathbf{a},\mathcal{H}_i)$ are continuous in the variables $\mathbf{a}$ and $\mathbf{z}$. By leveraging Fubini's theorem \cite{Fubini} to exchange the order of integration in (\ref{a: fuse loss ori.}), we have
\begin{equation} \begin{aligned} \mathcal{L}^{\pi}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T)=\sum_{i\in\{0,1\}}P(\mathcal{H}_i)\int_{\mathcal{Z}} \ell \big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i\big) \int_{\mathcal{A}}p(\mathbf{z}|\mathbf{a},\mathcal{H}_i) \pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T}) d\mathbf{a}d\mathbf{z}. \end{aligned} \label{a: fuse loss exchage} \end{equation}
Note that, for a waveform $\mathbf{y}_{\boldsymbol{\theta}_T}$ and a target state indicator $i$, the product of the likelihood $p(\mathbf{z}|\mathbf{a},\mathcal{H}_i)$ and the policy $\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})$ is the joint PDF of the random variables $\mathbf{a}$ and $\mathbf{z}$, namely,
\begin{equation} p(\mathbf{z}|\mathbf{a},\mathcal{H}_i)\pi(\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T})=p(\mathbf{a},\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i). \label{a: joint prob.} \end{equation}
Substituting (\ref{a: joint prob.}) into (\ref{a: fuse loss exchage}), we obtain
\begin{equation} \begin{aligned} \mathcal{L}^{\pi}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T)&=\sum_{i\in\{0,1\}}P(\mathcal{H}_i)\int_{\mathcal{Z}}\ell\big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i\big)\int_{\mathcal{A}}p(\mathbf{a},\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i) d\mathbf{a}d\mathbf{z}\\ &=\sum_{i\in\{0,1\}}P(\mathcal{H}_i)\int_{\mathcal{Z}}\ell\big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i\big)p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i)d\mathbf{z}, \end{aligned} \label{a: fuse loss final} \end{equation}
where the second equality holds by integrating the joint PDF $p(\mathbf{z},\mathbf{a}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i)$ over the random variable $\mathbf{a}$, i.e., $\int_{\mathcal{A}}p(\mathbf{a},\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i) d\mathbf{a}=p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i)$.
Taking the gradient of (\ref{a: fuse loss final}) with respect to $\boldsymbol{\theta}_R$, we have
\begin{equation} \begin{aligned} \nabla_{\boldsymbol{\theta }_R} \mathcal{L}^{\pi}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T)&= \sum_{i\in\{0,1\}}P(\mathcal{H}_i)\int_{\mathcal{Z}}p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i)\nabla_{\boldsymbol{\theta }_R}\ell\big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i\big)d\mathbf{z}\\ &={\nabla}_{\boldsymbol{\theta}_R}\mathcal{L}_R(\boldsymbol{\theta}_R), \end{aligned} \end{equation}
where the second equality holds by (\ref{eq: rx loss grad.}). This completes the proof of Proposition 1. \end{proof}
\section{Proof of Proposition 2}
\begin{proof} According to (\ref{a: fuse loss final}), the gradient of the average loss function of simultaneous training with respect to $\boldsymbol{\theta}_T$ is given by
\begin{equation} \begin{aligned} \nabla_{\boldsymbol{\theta }_T} \mathcal{L}^{\pi}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T)&=\sum_{i\in\{0,1\}}P(\mathcal{H}_i)\int_{\mathcal{Z}}\ell\big( f_{\boldsymbol{\theta}_R}(\mathbf{z}),i\big) \nabla_{\boldsymbol{\theta }_T} p(\mathbf{z}|\mathbf{y}_{\boldsymbol{\theta}_T},\mathcal{H}_i)d\mathbf{z}\\ & = \nabla_{\boldsymbol{\theta }_T} \mathcal{L}(\boldsymbol{\theta}_R, \boldsymbol{\theta}_T), \end{aligned} \label{a: fuse loss tx grad. ori.} \end{equation}
where the last equality holds by (\ref{eq: tx loss known grad}). This completes the proof of Proposition 2. \end{proof}
Non-collision backgrounds and photonuclear interactions are suppressed by requiring at least one hit in a MBTS counter on each side of the interaction point, and the difference between times measured on the two sides to be less than 10~ns. In the 2013 $\mbox{$p$+Pb}$ run, the luminosity conditions provided by the LHC result in an average probability of 3\% that an event contains two or more $\mbox{$p$+Pb}$ collisions (pileup). The pileup events are suppressed by rejecting events containing more than one good reconstructed vertex. The remaining pileup events are further suppressed based on the signal in the ZDC on the Pb-fragmentation side. This signal is calibrated to the number of detected neutrons ($N_{n}$) based on the location of the peak corresponding to a single neutron. The distribution of $N_{n}$ in events with pileup is broader than that for the events without pileup. Hence a simple cut on the high tail-end of the ZDC signal distribution further suppresses the pileup, while retaining more than 98\% of the events without pileup. After this pileup rejection procedure, the residual pileup fraction is estimated to be $\lesssim 10^{-2}$ in the event class with the highest track multiplicity studied in this analysis. About 57 million MB-selected events and 15 million HMT-selected events are included in this analysis. Charged-particle tracks are reconstructed in the ID using an algorithm optimized for $\mbox{$p$+$p$}$ minimum-bias measurements~\cite{Aad:2010ac}: the tracks are required to have $\pT >$ 0.3 GeV and $|\eta| <$ 2.5, at least seven hits in the pixel detector and the SCT, and a hit in the first pixel layer when one is expected. In addition, the transverse ($d_0$) and longitudinal ($z_0$ $\sin\theta$) impact parameters of the track relative to the vertex are required to be less than 1.5 mm. They are also required to satisfy $|d_0/\sigma_{d_0}| <$ 3 and $|z_0\sin\theta/\sigma_z | <$ 3, respectively, where $\sigma_{d_0}$ and $\sigma_z$ are uncertainties on $d_0$ and $z_0\sin\theta$ obtained from the track-fit covariance matrix. The efficiency, $\epsilon(\pT, \eta)$, for track reconstruction and track selection cuts is obtained using $\mbox{$p$+Pb}$ Monte Carlo events produced with version 1.38b of the HIJING event generator~\cite{Wang:1991} with a center-of-mass boost matching the beam conditions. The response of the detector is simulated using GEANT4~\cite{Agostinelli:2002hh,Aad:2010ah} and the resulting events are reconstructed with the same algorithms as applied to the data. The efficiency increases with $\pT$ by 6\% between 0.3 and 0.5 GeV, and varies only weakly for $\pT >$ 0.5 GeV, where it ranges from 82\% at $\eta$ = 0 to 70\% at $|\eta|$ = 2 and 60\% at $|\eta| >$ 2.4. The efficiency is also found to vary by less than 2\% over the multiplicity range used in the analysis. The extracted efficiency function $\epsilon(\pT, \eta)$ is used in the correlation analysis, as well as to estimate the average efficiency-corrected charged-particle multiplicity in the collisions. \subsection{Characterization of the event activity} The two-particle correlation (2PC) analyses are performed in event classes with different overall activity. The event activity is characterized by either $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$, the sum of transverse energy measured on the Pb-fragmentation side of the FCal with $-4.9<\eta<-3.2$, or $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$, the offline-reconstructed track multiplicity in the ID with $|\eta|<2.5$ and $\pT>0.4$ GeV. 
These event-activity definitions have been used in previous $\mbox{$p$+Pb}$ analyses~\cite{Aad:2012gla,Aad:2013fja,Khachatryan:2010gv,CMS:2012qk,Chatrchyan:2013nka}. Events with larger activity have on average a larger number of participating nucleons in the Pb nucleus and a smaller impact parameter. Hence the term ``centrality'', familiar in A+A collisions, is used to refer to the event activity. The terms ``central'' and ``peripheral'' are used to refer to events with large activity and small activity, respectively. Due to the wide range of trigger thresholds and the prescale values required by the HMT triggers, the $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ and $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ distributions are very different for the HMT events and the MB events. In order to properly include the HMT events in the event-activity classification, an event-by-event weight, $w=1/P$, is utilized. The combined probability, $P$, for a given event to be accepted by the MB trigger or any of the HMT triggers is calculated via the inclusion-exclusion principle as \begin{eqnarray} \label{eq:setup} P = \sum_{1\le i\le N}p_i - \sum_{1\le i<j\le N}p_ip_j+ \sum_{1\le i<j<k\le N}p_ip_jp_k-...\;\;, \end{eqnarray} where $N$ is the total number of triggers, and $p_i$ is the probability for the $i^{\mathrm{th}}$-trigger to accept the event, defined as zero if the event does not fire the trigger and otherwise as the inverse of the prescale factor of the trigger. The higher-order terms in Eq.~(\ref{eq:setup}) account for the probability of more than one trigger being fired. The weight factor, $w$, is calculated and applied event by event. The distribution for all events after re-weighting has the same shape as the distribution for MB events, as should be the case if the re-weighting is done correctly. Figure~\ref{fig:2} shows the distribution of $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ (left panels) and $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ (right panels) for the MB and MB+HMT events before (top panels) and after (bottom panels) the re-weighting procedure. \begin{figure}[!t] \centering \includegraphics[width=1\columnwidth]{fig_01} \caption{\label{fig:2} The distributions of $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ (left panels) and $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ (right panels) for MB and MB+HMT events before (top panels) and after (bottom panels) applying an event-by-event weight (see text). The smaller symbols in the top panels represent the distributions from the six HMT triggers listed in Table~\ref{tab:trig}.} \end{figure} For MB-selected events, the re-weighted distribution differs from the original distribution by a constant factor, reflecting the average prescale. The multiple steps in the $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ distribution (top-left panel) reflect the rapid turn-on behavior of individual HMT triggers in $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$. The broad shoulder in the $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ distribution (top-right panel) is due to the finite width of the $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ vs. $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ correlation, which smears the contributions from different HMT triggers in $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$. All these structures disappear after the re-weighting procedure. The results of this analysis are obtained using the MB+HMT combined dataset, with event re-weighting. 
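For illustration, the weight can be evaluated with the minimal Python sketch below (not part of the ATLAS analysis code; the prescale values are hypothetical). Because $p_i$ vanishes for triggers that did not fire, the inclusion-exclusion sum of Eq.~(\ref{eq:setup}) is algebraically identical to the closed form $P=1-\prod_i(1-p_i)$, and the sketch verifies this equivalence.
\begin{verbatim}
from itertools import combinations
from math import prod

def combined_probability(p):
    # inclusion-exclusion sum: P = sum p_i - sum p_i*p_j + ...
    return sum((-1) ** (k + 1) * sum(prod(c) for c in combinations(p, k))
               for k in range(1, len(p) + 1))

def combined_probability_closed(p):
    # equivalent closed form: P = 1 - prod(1 - p_i)
    return 1.0 - prod(1.0 - x for x in p)

# hypothetical example: the event fired the MB trigger (prescale 1000)
# and one HMT trigger (prescale 4); p_i = 1/prescale if fired, else 0
p = [1.0 / 1000, 1.0 / 4]
P = combined_probability(p)
assert abs(P - combined_probability_closed(p)) < 1e-12
w = 1.0 / P  # event-by-event weight applied in the analysis
\end{verbatim}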
Due to the relatively slow turn-on of the HMT triggers as a function of $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ (Fig.~\ref{fig:2}(c)), the events selected in a given $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ range typically receive contributions from several HMT triggers with very different weights. Hence the effective increase in the number of events from the HMT triggers in the large $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ region is much smaller than the increase in the large $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ region. Figure~\ref{fig:2b}(a) shows the correlation between $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ and $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ from MB+HMT $\mbox{$p$+Pb}$ events. This distribution is similar to that obtained for the MB events, except that the HMT triggers greatly extend the reach in both quantities. The $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ value grows with increasing $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$, suggesting that, on average, $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ in the nucleus direction correlates well with the particle production at mid-rapidity. On the other hand, the broad distribution of $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ at fixed $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ also implies significant fluctuations. To study the relation between $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ and $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$, events are divided into narrow bins in $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$, and the mean and root-mean-square values of the $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ distribution are calculated for each bin. The results are shown in Fig.~\ref{fig:2b}(b). A nearly linear relation between $\left\langle\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}\right\rangle$ and $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ is observed. This relationship is used to match a given $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ event class to the corresponding $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ event class. This approximately linear relation can also be parameterized (indicated by the solid line in Fig.~\ref{fig:2b}(b)) as \begin{eqnarray} \label{eq:map} \left\langle\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}\right\rangle/\mathrm{GeV}\approx0.60\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}. \end{eqnarray} \begin{figure}[!h] \centering \includegraphics[width=0.5\columnwidth]{fig_02a}\includegraphics[width=0.5\columnwidth]{fig_02b} \caption{\label{fig:2b} (a) Correlation between $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ and $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ in MB+HMT events. (b) The mean $\left\langle\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}\right\rangle$ and root-mean-square $\sigma_{\mathrm{E_{\rm T}^{\rm{Pb}}}}$ of the $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ distributions for slices of $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$. The line is a linear fit to all points.} \end{figure} The 2PC analysis is performed in different intervals of the event activity defined by either $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ or $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$. 
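The approximate matching of Eq.~(\ref{eq:map}) can be illustrated with a short sketch (a hypothetical helper using the fitted slope; the event classes of Table~\ref{tab:1} are matched using the measured correlation itself rather than this parameterization).
\begin{verbatim}
SLOPE = 0.60  # GeV per reconstructed track, from the linear fit

def matched_et_interval(nch_lo, nch_hi):
    # map an N_ch^rec interval [nch_lo, nch_hi) to the E_T^Pb interval
    # with approximately the same mean event activity
    return SLOPE * nch_lo, SLOPE * nch_hi

# e.g. 220 <= N_ch^rec < 260 corresponds to roughly 132-156 GeV
print(matched_et_interval(220, 260))
\end{verbatim}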
Table~\ref{tab:1} gives a list of representative event-activity classes, together with the fraction of MB+HMT events (after re-weighting as shown in Fig.~\ref{fig:2b}(a)) contained in each event class. The table also provides the average $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ and $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ values for each event-activity class, as well as the efficiency-corrected number of charged particles within $|\eta|<2.5$ and $\pT>0.4$ GeV, $\mbox{$\langle N_{\mathrm{ch}}\rangle$}$. The event classes defined in narrow $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ or $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ ranges are used for detailed studies of the centrality dependence of the 2PC, while the event classes in broad $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ or $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ ranges are optimized for the studies of the $\pT$ dependence. As the number of events at large $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ is smaller than at large $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$, the main results in this paper are presented for event classes defined in $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$. \begin{table}[!h] \centering \small{ \begin{tabular}{|ccccc||ccccc|}\hline \multicolumn{5}{|c||}{Event-activity classes based on $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$} & \multicolumn{5}{|c|}{Event-activity classes based on $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$} \tabularnewline\hline $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ range & Fraction & $\mbox{$\langle E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}\rangle$}$& $\mbox{$\langle N_{\mathrm{ch}}^{\mathrm{rec}}\rangle$}$ & $\mbox{$\langle N_{\mathrm{ch}}\rangle$}$ &$\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ range& Fraction & $\mbox{$\langle E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}\rangle$}$ & $\mbox{$\langle N_{\mathrm{ch}}^{\mathrm{rec}}\rangle$}$ & $\mbox{$\langle N_{\mathrm{ch}}\rangle$}$ \tabularnewline & & [GeV] & & & [GeV] & & [GeV] & & \tabularnewline\hline $<20$ &0.31 & 7.3&10.3&$12.6\pm0.6$&$<10$&0.28 &4.8&12.4&$ 15.4\pm0.7$\tabularnewline $[20, 40)$&0.27 & 18.6&29.1&$37.9\pm1.7$&$[10,23)$&0.26 &16.1&29.2&$ 38.1\pm1.7$\tabularnewline $[40, 60)$&0.19 & 30.8&48.8&$64.3\pm2.9$&$[23,37)$&0.19 &29.5&47.3&$ 62.3\pm2.8$\tabularnewline $[60, 80)$&0.12 & 42.8&68.6&$90.7\pm4.1$&$[37,52)$&0.12 &43.8&64.0&$ 84.7\pm3.8$\tabularnewline $[80,100)$&0.064 & 54.9&88.3&$117\pm5$&$[52,68)$&0.067 &58.8&80.4&$ 107\pm5$\tabularnewline $[100,120)$&0.029 & 66.4&108&$144\pm7$&$[68,83)$&0.028 &74.2&96.1&$ 128\pm6$\tabularnewline $[120,140)$&0.011 & 78.4&128&$170\pm8$&$[83,99)$&0.012 &89.7&111&$ 147\pm7$\tabularnewline $[140,160)$&0.0040 & 90.3&148&$196\pm9$&$[99,116)$&0.0043 &106&126&$ 168\pm8$\tabularnewline $[160,180)$&0.0013 & 102&168& $223\pm10$&$[116,132)$&0.0012 &122&141&$ 187\pm8$\tabularnewline $[180,200)$&$3.6\times10^{-4}$ & 113&187&$249\pm11$&$[132,148)$&$3.6\times10^{-4}$ &138&155&$ 206\pm9$\tabularnewline $[200,220)$&$9.4\times10^{-5}$ & 125&207&$276\pm12$&$[148,165)$&$1.0\times10^{-4}$ &155&169&$ 225\pm10$\tabularnewline $[220,240)$&$2.1\times10^{-5}$ & 134&227&$303\pm14$&$[165,182)$&$2.2\times10^{-5}$ &171&184&$ 244\pm11$\tabularnewline $[240,260)$&$4.6\times10^{-6}$ & 145&247&$329\pm15$&$[182,198)$&$4.6\times10^{-6}$ &188&196&$ 261\pm12$\tabularnewline $[260,290)$&$1.1\times10^{-6}$ & 157&269&$358\pm16$&$[198,223)$&$1.1\times10^{-6}$ &206&211&$ 281\pm13$\tabularnewline 
$[290,370)$&$8.9\times10^{-8}$ & 174&301&$393\pm18$&$[223,300)$&$9.6\times10^{-8}$ &232&230&$ 306\pm14$\tabularnewline\hline $<40$ &0.58 & 12.5&19.0&$24.4\pm1.1$&$<25$&0.59 &10.2&21.7&$ 28.0\pm1.3$\tabularnewline $[40, 80)$&0.32 & 35.3&56.4&$74.4\pm3.3$&$[25,50)$&0.27 &35.1&54.7&$ 72.2\pm3.3$\tabularnewline $[80,110)$&0.081 & 56.8&91.7&$122\pm6$&$[50,75)$&0.096 &61.5&81.4&$ 108\pm5$\tabularnewline $[110,140)$&0.023 & 74.2&121&$161\pm7$&$[75,100)$&0.025 &84.5&106&$ 141\pm6$\tabularnewline $[140,180)$&0.0053 & 93.0&153&$203\pm9$&$[100,130)$&0.0051 &110&130&$ 173\pm8$\tabularnewline $[180,220)$&$4.6\times10^{-4}$ & 116&192&$255\pm12$&$[130,165)$&$5.6\times10^{-4}$ &141&156&$ 208\pm9$\tabularnewline $[220,260)$&$2.6\times10^{-5}$ & 136&231&$307\pm14$&$[165,200)$&$2.7\times10^{-5}$ &174&186&$ 248\pm11$\tabularnewline $[260,370)$&$1.2\times10^{-6}$ & 158&271&$361\pm16$&$[200,300)$&$1.0\times10^{-6}$ &208&214&$ 284\pm13$\tabularnewline\hline \end{tabular}}\normalsize \caption{\label{tab:1} A list of event-activity classes defined in $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ (left) and $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ (right) ranges, where the notation $[a,b)$ implies $a\leq\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$~or~$\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}<b$. For each event class, the fraction of MB+HMT events after trigger re-weighting (Fig.~\ref{fig:2b}(a)), the average values of $\mbox{$\langle E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}\rangle$}$ and $\mbox{$\langle N_{\mathrm{ch}}^{\mathrm{rec}}\rangle$}$, and the efficiency-corrected average number of charged particles with $\pT>0.4$ GeV and $|\eta|<2.5$, $\mbox{$\langle N_{\mathrm{ch}}\rangle$}$, are also listed.} \end{table} \subsection{Two-particle correlation} \label{sec:2PC} For a given event class, the two-particle correlations are measured as a function of relative azimuthal angle, $\mbox{$\Delta \phi$}=\phi_a-\phi_b$, and relative pseudorapidity, $\mbox{$\Delta \eta$}=\eta_a-\eta_b$, with $|\mbox{$\Delta \eta$}|\leq \eta_{\Delta}^{\mathrm{max}}=5$. The labels $a$ and $b$ denote the two particles in the pair, which may be selected from different $\pT$ intervals. The particles $a$ and $b$ are conventionally referred to as the ``trigger'' and ``associated'' particles, respectively. The correlation strength, expressed in terms of the number of pairs per trigger particle, is defined as~\cite{Adare:2008ae,Alver:2009id,Aggarwal:2010rf} \begin{eqnarray} \label{eq:ana1} Y(\mbox{$\Delta \phi$},\mbox{$\Delta \eta$}) = \frac{\int B(\mbox{$\Delta \phi$},\mbox{$\Delta \eta$}) d\mbox{$\Delta \phi$} d\mbox{$\Delta \eta$} }{\pi \eta_{\Delta}^{\mathrm{max}}}\left(\frac{S(\mbox{$\Delta \phi$},\mbox{$\Delta \eta$})}{B(\mbox{$\Delta \phi$},\mbox{$\Delta \eta$})}\right)\;,\;\; Y(\mbox{$\Delta \phi$}) = \frac{\int B(\mbox{$\Delta \phi$}) d\mbox{$\Delta \phi$} }{\pi} \left(\frac{S(\mbox{$\Delta \phi$})}{B(\mbox{$\Delta \phi$})}\right)\;, \end{eqnarray} where $S$ and $B$ represent pair distributions constructed from the same event and from ``mixed events''\cite{Adare:2008ae}, respectively, which are then normalized by the number of trigger particles in the event. These distributions are also referred to as per-trigger yield distributions. The mixed-event distribution, $B(\mbox{$\Delta \phi$}, \mbox{$\Delta \eta$})$, measures the distribution of uncorrelated pairs. 
The $B(\mbox{$\Delta \phi$}, \mbox{$\Delta \eta$})$ distribution is constructed by choosing the two particles in the pair from different events of similar $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ (match to $|\Delta\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}|<10$ tracks), $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ (match to $|\Delta\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}|<10$ GeV), and $\mbox{$z_{\mathrm{vtx}}$}$ (match to $|\Delta\mbox{$z_{\mathrm{vtx}}$}|<10$~mm), so that $B(\mbox{$\Delta \phi$}, \mbox{$\Delta \eta$})$ properly reflects the known detector effects in $S(\mbox{$\Delta \phi$},\mbox{$\Delta \eta$})$. The one-dimensional (1-D) distributions $S(\mbox{$\Delta \phi$})$ and $B(\mbox{$\Delta \phi$})$ are obtained by integrating $S(\mbox{$\Delta \phi$},\mbox{$\Delta \eta$})$ and $B(\mbox{$\Delta \phi$},\mbox{$\Delta \eta$})$, respectively, over a $\Delta\eta$ range. The region $|\Delta\eta|<1$ is chosen to focus on the short-range features of the correlation functions, while the region $|\Delta\eta|>2$ is chosen to focus on the long-range features of the correlation functions. These two regions are hence referred to as the ``short-range region'' and ``long-range region'', respectively. The normalization factors in front of the $S/B$ ratio are chosen such that the $(\mbox{$\Delta \phi$},\mbox{$\Delta \eta$})$-averaged value of $B(\mbox{$\Delta \phi$},\mbox{$\Delta \eta$})$ and the $\mbox{$\Delta \phi$}$-averaged value of $B(\mbox{$\Delta \phi$})$ are both unity. When measuring $S$ and $B$, pairs are filled in one quadrant of the $(\mbox{$\Delta \phi$}, \mbox{$\Delta \eta$})$ space and then reflected to the other quadrants~\cite{Aad:2012gla}. To correct $S(\mbox{$\Delta \phi$},\mbox{$\Delta \eta$})$ and $B(\mbox{$\Delta \phi$},\mbox{$\Delta \eta$})$ for the individual inefficiencies of particles $a$ and $b$, the pairs are weighted by the inverse product of their tracking efficiencies $1/(\epsilon_a\epsilon_b)$. Remaining detector distortions not accounted for in the efficiency largely cancel in the $S/B$ ratio. Examples of two-dimensional (2-D) correlation functions are shown in Fig.~\ref{fig:method0} for charged particles with $1<\mbox{$p_{\mathrm{T}}^{\mathrm{a,b}}$}<3$~GeV in low-activity events, $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}<10$ GeV or $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}<20$ in the top panels, and high-activity events, $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}>100$ GeV or $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}>220$ in the bottom panels. The correlation for low-activity events shows a sharp peak centered at $(\mbox{$\Delta \phi$},\mbox{$\Delta \eta$}) = (0,0)$ due to short-range correlations for pairs resulting from jets, high-$\pT$ resonance decays, and Bose--Einstein correlations. The correlation function also shows a broad structure at $\mbox{$\Delta \phi$}\sim\pi$ from low-$\pT$ resonances, dijets, and momentum conservation that is collectively referred to as ``recoil''~\cite{Aad:2012gla} in the remainder of this paper. In the high-activity events, the correlation reveals a flat ridge-like structure at $\mbox{$\Delta \phi$}\sim0$ (the ``near-side'') that extends over the full measured $\mbox{$\Delta \eta$}$ range. This $\mbox{$\Delta \eta$}$ independence is quantified by integrating the 2-D correlation functions over $|\mbox{$\Delta \phi$}|<1$ to obtain $Y(\mbox{$\Delta \eta$}) = \int_{|\Delta\phi|<1}Y(\mbox{$\Delta \phi$},\mbox{$\Delta \eta$})\,d\mbox{$\Delta \phi$}$.
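To make this construction concrete, the sketch below (a simplified stand-in for the analysis code, with an assumed binning, and with $S$ and $B$ taken to be already divided by the number of trigger particles) implements the quadrant folding, the $1/(\epsilon_a\epsilon_b)$ weighting, and the normalization of Eq.~(\ref{eq:ana1}).
\begin{verbatim}
import numpy as np

NPHI, NETA = 24, 50  # assumed binning of the (|dphi|, |deta|) quadrant
BINS = (np.linspace(0.0, np.pi, NPHI + 1), np.linspace(0.0, 5.0, NETA + 1))

def pair_histogram(etas, phis, effs):
    # fold all pairs of one event into |dphi| in [0, pi], |deta| in [0, 5],
    # weighting each pair by 1/(eps_a * eps_b)
    i, j = np.triu_indices(len(etas), k=1)
    dphi = np.abs(np.angle(np.exp(1j * (phis[i] - phis[j]))))
    deta = np.abs(etas[i] - etas[j])
    w = 1.0 / (effs[i] * effs[j])
    h, _, _ = np.histogram2d(dphi, deta, bins=BINS, weights=w)
    return h

def per_trigger_yield(S, B):
    # for uniform bins covering the pi x 5 quadrant, the normalization
    # factor reduces to the average of B, so Y = <B> * S/B; B is built
    # like S but pairs particles from different (matched) events
    return S * B.mean() / B
\end{verbatim}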
The yield associated with the near-side short-range correlation peak centered at $(\mbox{$\Delta \phi$},\mbox{$\Delta \eta$}) = (0,0)$ can then be estimated as \begin{eqnarray} \label{eq:ana1b} Y^{\mathrm{N-Peak}} = \int_{|\Delta\eta|<1} Y(\mbox{$\Delta \eta$}) d\mbox{$\Delta \eta$}-\frac{1}{5-\eta_{\Delta}^{\mathrm{min}}}\int_{\eta_{\Delta}^{\mathrm{min}}<|\Delta\eta|<5} Y(\mbox{$\Delta \eta$}) d\mbox{$\Delta \eta$}\;, \end{eqnarray} where the second term accounts for the contribution of uncorrelated pairs and the ridge component under the near-side peak. The default value of $Y^{\mathrm{N-Peak}}$ is obtained with a lower-end of the integration range of $\eta_{\Delta}^{\mathrm{min}}=2$, but the value of $\eta_{\Delta}^{\mathrm{min}}$ is varied from 2 to 4 to check the stability of $Y^{\mathrm{N-Peak}}$. The distribution at $\mbox{$\Delta \phi$}\sim\pi$ (the ``away-side'') is also broadened in high-activity events, consistent with the presence of a long-range component in addition to the recoil component~\cite{Aad:2012gla}. This recoil component can be estimated from the low-activity events and subtracted from the high-activity events using the procedure detailed in the next section. \begin{figure}[!h] \centering \includegraphics[width=0.85\columnwidth]{fig_03} \caption{\label{fig:method0} The 2-D correlation function in $\mbox{$\Delta \phi$}$ and $\mbox{$\Delta \eta$}$ for the peripheral event class selected by either (a) $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}<10$ GeV or (b) $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}<20$ and the central event class selected by either (c) $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}\geq100$ GeV or (d) $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}\geq220$.} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.85\columnwidth]{fig_04} \caption{\label{fig:method} The 2-D correlation function in $\mbox{$\Delta \phi$}$ and $\mbox{$\Delta \eta$}$ for events with $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}\geq220$ (a) before and (b) after subtraction of the peripheral yield. Panel (c) shows the corresponding 1-D correlation functions in $\mbox{$\Delta \phi$}$ for pairs integrated over $2<|\Delta\eta|<5$ from panels (a) and (b), together with Fourier fits including the first five harmonics. Panel (d) shows the $2^{\mathrm{nd}}$,$3^{\mathrm{rd}}$, and $4^{\mathrm{th}}$-order Fourier coefficients as a function of $|\mbox{$\Delta \eta$}|$ calculated from the 2-D distributions in panel (a) or panel (b), represented by the open or filled symbols, respectively. 
The error bars and shaded boxes are statistical and systematic uncertainties, respectively.} \end{figure} \subsection{Recoil subtraction} \label{sec:recoil} The correlated structure above a flat pedestal in the correlation functions is calculated using a zero-yield-at-minimum (ZYAM) method~\cite{Ajitanand:2005jj,Adare:2008ae} following previous measurements~\cite{CMS:2012qk,Abelev:2012ola,Aad:2012gla}, \begin{eqnarray} \label{eq:ana2} Y^{\mathrm{corr}}(\mbox{$\Delta \phi$},\mbox{$\Delta \eta$}) = \frac{\int B(\mbox{$\Delta \phi$},\mbox{$\Delta \eta$}) d\mbox{$\Delta \phi$} d\mbox{$\Delta \eta$} }{\pi \eta_{\Delta}^{\mathrm{max}}}\left(\frac{S(\mbox{$\Delta \phi$},\mbox{$\Delta \eta$})}{B(\mbox{$\Delta \phi$},\mbox{$\Delta \eta$})}-b_{_{\mathrm{ZYAM}}}\right),\; Y^{\mathrm{corr}}(\mbox{$\Delta \phi$}) = \frac{\int B(\mbox{$\Delta \phi$}) d\mbox{$\Delta \phi$} }{\pi} \left(\frac{S(\mbox{$\Delta \phi$})}{B(\mbox{$\Delta \phi$})}-b_{_{\mathrm{ZYAM}}}\right)\;, \end{eqnarray} where the parameter $b_{_{\mathrm{ZYAM}}}$ represents the pedestal formed by uncorrelated pairs. A second-order polynomial fit to the 1-D $Y(\Delta \phi)$ distribution in the long-range region is used to find the location of the minimum point, $\mbox{$\Delta \phi$}_{_{\mathrm{ZYAM}}}$, and from this the value of $b_{_{\mathrm{ZYAM}}}$ is determined and subtracted from the 2-D correlation function. The $Y^{\mathrm{corr}}(\mbox{$\Delta \phi$},\mbox{$\Delta \eta$})$ functions differ, therefore, by a constant from the $Y(\mbox{$\Delta \phi$},\mbox{$\Delta \eta$})$ functions, such as those in Fig.~\ref{fig:method0}. In low-activity events, $Y^{\mathrm{corr}}(\mbox{$\Delta \phi$},\mbox{$\Delta \eta$})$ contains mainly the short-range correlation component and the recoil component. In high-activity events, the contribution from the long-range ``ridge'' correlation also becomes important. This long-range component of the correlation function in a given event class is obtained by estimating the short-range correlation component using the peripheral events and is then subtracted, \begin{eqnarray} \label{eq:ana3} Y^{\mathrm{sub}}(\mbox{$\Delta \phi$},\mbox{$\Delta \eta$}) = Y(\mbox{$\Delta \phi$},\mbox{$\Delta \eta$})-\alpha Y_{\mathrm{peri}}^{\mathrm{corr}}(\mbox{$\Delta \phi$},\mbox{$\Delta \eta$}), \; \; Y^{\mathrm{sub}}(\mbox{$\Delta \phi$}) = Y(\mbox{$\Delta \phi$})-\alpha Y_{\mathrm{peri}}^{\mathrm{corr}}(\mbox{$\Delta \phi$}), \end{eqnarray} where the $Y^{\mathrm{corr}}$ in a low-activity or peripheral event class, denoted by $Y_{\mathrm{peri}}^{\mathrm{corr}}$, is used to estimate and subtract (hence the superscript ``sub'' in Eq.~(\ref{eq:ana3})) the short-range correlation at the near-side and the recoil at the away-side. The parameter $\alpha$ is chosen to adjust the near-side short-range correlation yield in the peripheral events to match that in the given event class for each $\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$}$ and $\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}$ combination, $\alpha=Y^{\mathrm{N-Peak}}/Y^{\mathrm{N-Peak}}_{\mathrm{peri}}$. This scaling procedure is necessary to account for enhanced short-range correlations and away-side recoil in higher-activity events, under the assumption that the relative contribution of the near-side short-range correlation and away-side recoil is independent of the event activity. A similar rescaling procedure has also been used by the CMS Collaboration~\cite{Chatrchyan:2013nka}. 
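For binned 1-D distributions, the ZYAM and subtraction steps of Eqs.~(\ref{eq:ana2}) and (\ref{eq:ana3}) reduce to the following sketch (assumptions: the per-trigger yields are already efficiency-corrected and normalized, and $\alpha$ is supplied from the near-side peak-yield ratio of Eq.~(\ref{eq:ana1b})).
\begin{verbatim}
import numpy as np

def zyam_pedestal(dphi, y, halfwidth=5):
    # second-order polynomial fit around the minimum of Y(dphi);
    # the vertex of the fitted parabola gives b_ZYAM
    i0 = int(np.argmin(y))
    lo, hi = max(0, i0 - halfwidth), min(len(y), i0 + halfwidth + 1)
    c = np.polyfit(dphi[lo:hi], y[lo:hi], 2)
    return np.polyval(c, -c[1] / (2.0 * c[0]))

def recoil_subtracted(dphi, y, y_peri, alpha):
    # Y^sub(dphi) = Y(dphi) - alpha * Y^corr_peri(dphi)
    y_peri_corr = y_peri - zyam_pedestal(dphi, y_peri)
    return y - alpha * y_peri_corr
\end{verbatim}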
The default peripheral event class is chosen to be $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}<\eT^0=10$ GeV. However, the results have also been checked with other $\eT^0$ values, as well as with a peripheral event class defined by $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}<20$. In the events with the highest multiplicity, the value of $\alpha$ determined with the default peripheral event class varies from $\sim 2$ at $\pT\approx 0.5$ GeV to $\sim 1$ for $\pT>3$ GeV, with a $\pT$-dependent uncertainty of 3--5\%. The uncertainty on $b_{_{\mathrm{ZYAM}}}$ only affects the recoil-subtracted correlation functions through the $Y_{\mathrm{peri}}^{\mathrm{corr}}$ term in Eq.~(\ref{eq:ana3}). This uncertainty is usually very small in high-activity $\mbox{$p$+Pb}$ collisions, due to their much larger pedestal level than for the peripheral event class. Figures~\ref{fig:method}(a) and (b) show, respectively, the 2-D correlation functions before and after the subtraction procedure given by Eq.~(\ref{eq:ana3}). Most of the short-range peak and away-side recoil structures are removed by the subtraction, and the remaining distributions exhibit a $\mbox{$\Delta \phi$}$-symmetric double-ridge that is almost independent of $\mbox{$\Delta \eta$}$. Figure~\ref{fig:method}(c) shows the corresponding 1-D correlation functions before and after recoil subtraction in the long-range region of $|\mbox{$\Delta \eta$}|>2$. The distribution at the near-side is not affected since the near-side short-range peak is narrow in $\eta$ (Fig.~\ref{fig:method}(a)), while the away-side distribution is reduced due to the removal of the recoil component. \subsection{Extraction of the azimuthal harmonics associated with long-range correlation} \label{sec:fourier} The azimuthal structure of the long-range correlation is studied via a Fourier decomposition similar to the approach used in the analysis of Pb+Pb collisions~\cite{Aamodt:2011by,Aad:2012bu}, \begin{eqnarray} \label{eq:four1} Y^{\mathrm{sub}}(\mbox{$\Delta \phi$})=\frac{\int Y^{\mathrm{sub}}(\mbox{$\Delta \phi$})d\mbox{$\Delta \phi$}}{\pi}\left(1 + \sum_n 2v_{n,n}\cos(n\Delta\phi) \right), \end{eqnarray} where $v_{n,n}$ are the Fourier coefficients calculated via a discrete Fourier transformation, \begin{eqnarray} v_{n,n}= \frac{\sum_{m=1}^{N} \cos (n\Delta\phi_m) Y^{\mathrm{sub}}(\mbox{$\Delta \phi$}_m)}{\sum_{m=1}^N Y^{\mathrm{sub}}(\mbox{$\Delta \phi$}_m)}\;, \end{eqnarray} where $N=24$ is the number of $\mbox{$\Delta \phi$}$ bins from 0 to $\pi$. The first five Fourier coefficients are calculated as a function of $\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$}$ and $\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}$ for each event-activity class. The azimuthal anisotropy coefficients for single particles, $v_n$, can be obtained via the factorization relation commonly used for heavy-ion collisions~\cite{Aamodt:2011by,Aad:2012bu,Adcox:2002ms}, \begin{eqnarray} \label{eq:four2} v_{n,n}(\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$},\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$})=v_n(\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$})v_n(\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}). 
\end{eqnarray} From this the $\pT$ dependence of $v_n$ for $n=2$--5 is calculated as \begin{eqnarray} \label{eq:four3} v_n(\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$}) = v_{n,n}(\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$},\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$})/\sqrt{v_{n,n}(\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$},\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$})}\;, \end{eqnarray} where the default transverse momentum range for the associated particle ($b$) is chosen to be $1<\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}<3$ GeV, and the Fourier coefficient as a function of the transverse momentum of the trigger particle is denoted by $v_n(\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$})$, or simply $v_n(\pT)$ where appropriate. The extraction of $v_1$ requires a slight modification and is discussed separately in Sec.~\ref{sec:v1}. The factorization behavior is checked by comparing the $v_n(\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$})$ obtained for different $\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}$ ranges, as discussed in Sec.~\ref{sec:result2}. A similar Fourier decomposition procedure is also carried out for correlation functions without peripheral subtraction, i.e. $Y(\mbox{$\Delta \phi$})$. The harmonics obtained in this way are denoted by $v_{n,n}^{\mathrm{unsub}}$ and $v_{n}^{\mathrm{unsub}}$, respectively. Figure~\ref{fig:method}(d) shows the azimuthal harmonics obtained by Fourier decomposition of the $Y(\mbox{$\Delta \phi$},\mbox{$\Delta \eta$})$ and $Y^{\mathrm{sub}}(\mbox{$\Delta \phi$},\mbox{$\Delta \eta$})$ distributions in Figs.~\ref{fig:method}(a) and (b) for different, narrow slices of $\mbox{$\Delta \eta$}$. The resulting $v_{n}^{\mathrm{unsub}}$ and $v_{n}$ values are plotted as a function of $\mbox{$\Delta \eta$}$ for $n=2$, 3 and 4. The $v_n$ values are much smaller than $v_n^{\mathrm{unsub}}$ for $|\mbox{$\Delta \eta$}|<1$, reflecting the removal of the short-range correlations at the near-side. The $v_2$ values are also systematically smaller than $v_2^{\mathrm{unsub}}$ for $|\mbox{$\Delta \eta$}|>1$, reflecting the removal of the away-side recoil contribution. \subsection{Systematic uncertainties} \label{sec:sys} The systematic uncertainties in this analysis arise from the pair acceptance, the ZYAM procedure, the tracking efficiency, Monte Carlo consistency, residual pileup, and the recoil subtraction. Each source is discussed separately below. The correlation functions rely on the pair acceptance functions, $B(\Delta\phi, \Delta\eta)$ and $B(\Delta\phi)$ in Eq.~(\ref{eq:ana1}), to reproduce detector acceptance effects in the signal distribution. A natural way of quantifying the influence of detector effects on $v_{n,n}$ and $v_n$ is to express the single-particle and pair acceptance functions as Fourier series, similar to Eq.~(\ref{eq:four1}). The resulting coefficients for the pair acceptance, $v_{n,n}^{\mathrm{det}}$, are the product of those for the two single-particle acceptances, $v_{n}^{\mathrm{det,a}}$ and $v_{n}^{\mathrm{det,b}}$. In general, the pair acceptance function in $\mbox{$\Delta \phi$}$ is quite flat: the maximum fractional variation from its average value is observed to be less than 0.001 for pairs integrated in 2 $< |\Delta\eta| <$ 5, and the corresponding $|v_{n,n}^{\mathrm{det}}|$ values are found to be less than $2\times 10^{-4}$. These $v_{n,n}^{\mathrm{det}}$ values are expected to mostly cancel in the correlation function, and only a small fraction contributes to the uncertainties in the pair acceptance function.
Possible residual effects on the pair acceptance are evaluated following Ref.~\cite{Aad:2012bu}, by varying the criteria for matching in $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$, $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$, and $\mbox{$z_{\mathrm{vtx}}$}$. In each case, the residual $v_{n,n}^{\mathrm{det}}$ values are evaluated by a Fourier expansion of the ratio of the pair acceptances before and after the variation. This uncertainty varies in the range of (5--8)$\times10^{-6}$. It is negligible for $v_2$ and $v_3$, but becomes sizable for higher-order harmonics, particularly at low $\pT$, where the $v_n$ values are small. As discussed in Sec.~\ref{sec:recoil}, the value of $b_{_{\mathrm{ZYAM}}}$ is determined by a second-order polynomial fit to the $Y(\Delta \phi)$ distribution. The stability of the fit is studied by varying the $\mbox{$\Delta \phi$}$ range in the fit. The uncertainty in $b_{_{\mathrm{ZYAM}}}$ depends on the local curvature around $\mbox{$\Delta \phi$}_{_{\mathrm{ZYAM}}}$, and is estimated to be 0.0003--0.001 of the minimum value of $Y(\Delta \phi)$. This uncertainty contributes directly to $Y^{\mathrm{corr}}(\mbox{$\Delta \phi$})$, but contributes to $Y^{\mathrm{sub}}(\mbox{$\Delta \phi$})$ and $v_n$ only indirectly through the peripheral subtraction (see Eq.~(\ref{eq:ana3})). The resulting uncertainty on $v_n$ is found to be less than $2\%$ for all $n$. The values of the per-trigger yields, $Y(\mbox{$\Delta \phi$})$, $Y^{\mathrm{corr}}(\mbox{$\Delta \phi$})$, and $Y^{\mathrm{sub}}(\mbox{$\Delta \phi$})$, are sensitive to the uncertainty on the tracking efficiency correction for the associated particles. This uncertainty is estimated by varying the track quality cuts and the detector material in the simulation, re-analyzing the data using the corresponding Monte Carlo efficiencies, and evaluating the change in the extracted yields. The resulting uncertainty is estimated to be 2.5\% due to the track selection and 2--3\% related to the limited knowledge of the detector material. The $v_{n,n}$ and $v_n$ values depend only on the shape of the $Y^{\mathrm{sub}}(\mbox{$\Delta \phi$})$ distribution and hence are not sensitive to the tracking efficiency. The analysis procedure is also validated by measuring $v_n$ values in fully simulated HIJING events~\cite{Agostinelli:2002hh,Aad:2010ah} and comparing them to those measured using the generated particles. A small but systematic difference between the two results is included in the systematic uncertainties. Nearly all of the events containing pileup are removed by the procedure described in Sec.~\ref{sec:sel}. The influence of the residual pileup is evaluated by relaxing the pileup rejection criteria and then calculating the change in the per-trigger yields and $v_n$ values. The differences are taken as an estimate of the uncertainty; they are found to be negligible in low event-activity classes and increase to 2\% for events with $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}>200$ GeV or $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}>300$. According to Table~\ref{tab:1}, the low-activity events used in the peripheral subtraction ($\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}<\eT^0=10$ GeV) correspond to 28\% of the MB-triggered events. The pair distributions for these events may contain a small genuine long-range component, leading to a reduction of the long-range correlation signal in a high-activity class via the peripheral subtraction procedure.
The influence of this over-subtraction is evaluated by varying the definition of the low-activity events from $\eT^0=5$~GeV to $\eT^0=20$~GeV. The $Y^{\mathrm{sub}}(\mbox{$\Delta \phi$})$ and $v_n$ values are calculated for each variation. The $v_n$ values are found to decrease approximately linearly with increasing $\eT^0$. The amount of over-subtraction can be estimated by extrapolating $\eT^0$ to zero. The estimated changes of $v_n$ and $Y^{\mathrm{sub}}(\mbox{$\Delta \phi$})$ are less than 1\% for $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}>100$ GeV or $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}>150$, and increase for lower event-activity classes approximately as $1.5/\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$. The relative change of $v_n$ is also found to be independent of $\pT$. As a cross-check, the analysis is repeated by defining peripheral events as $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$} <$ 20. The variation of the $v_n$ values is found to be consistent with that from varying $\eT^0$. The stability of the scale factor, $\alpha$, is evaluated by varying the $\Delta\eta$ window of the long-range region in Eq.~(\ref{eq:ana1b}). A 3--5\% uncertainty is quoted for $\alpha$ from these variations. The resulting uncertainty on $v_n$ for $n=2$--5 is within 1\% at low $\pT$ ($<$ 4 GeV), and increases to $\sim$10\% at the highest $\pT$. However, the $v_1$ extraction is directly affected by the subtraction of the recoil component, and hence the $v_1$ value is very sensitive to the uncertainty in $\alpha$. The estimated uncertainty is 8--12\% for $\pT<1$ GeV and about 20--30\% for $\pT>3$ GeV. The different sources of systematic uncertainty described above are added in quadrature to give the total systematic uncertainties for the per-trigger yields and $v_n$, which are summarized in Tables~\ref{tab:tabpty} and \ref{tab:tabvn}, respectively. The systematic uncertainty quoted for each source usually covers the maximum uncertainty over the measured $\pT$ range and event-activity range. However, since $v_1(\pT)$ changes sign within 1.5--2.0 GeV (see Fig.~\ref{fig:v1b}), the relative uncertainties are quoted for $\pT<1$ GeV and $\pT>3$ GeV. The uncertainty of the pair acceptance, which is less than $8\times 10^{-6}$ for $v_{n,n}$, is converted to percent uncertainties. This uncertainty can be significant at high $\pT$.
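The quadrature combination itself is elementary; the sketch below shows the operation with placeholder numbers rather than the entries of the tables that follow.
\begin{verbatim}
from math import sqrt

# relative uncertainties in percent (illustrative values only)
sources = {"pair acceptance": 0.5, "ZYAM procedure": 0.3,
           "tracking and material": 0.4, "MC consistency": 1.0,
           "residual pileup": 2.0, "scale factor alpha": 0.2}
total = sqrt(sum(s ** 2 for s in sources.values()))  # quadrature sum
print(f"total systematic uncertainty: {total:.1f}%")
\end{verbatim}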
\begin{table}[!h] \centering \begin{tabular}{|l|c|}\hline Residual pair acceptance [\%] & 1--2 \tabularnewline\hline ZYAM procedure [\%] & 0.2--1.5 \tabularnewline\hline Tracking efficiency \& material [\%] & 4.2 \tabularnewline\hline Residual pileup [\%] & 0--2 \tabularnewline\hline \end{tabular} \caption{\label{tab:tabpty} Summary of relative systematic uncertainties for $Y(\mbox{$\Delta \phi$})$, $Y^{\mathrm{corr}}(\mbox{$\Delta \phi$})$ and $Y^{\mathrm{sub}}(\mbox{$\Delta \phi$})$.} \end{table} \begin{table}[!h] \centering \begin{tabular}{|l|c|c|c|c|c|}\hline & $n=1$ & $n=2$ & $n=3$ & $n=4$ & $n=5$ \tabularnewline\hline Residual pair acceptance [\%] & 1.0--5.0 & $<$0.5 & 1.0--4.0 & 7.0--12 & 7.0--20 \tabularnewline\hline ZYAM procedure [\%] & 0.6 & 0.3 & 0.3 & 0.5 & 0.6 \tabularnewline\hline Tracking efficiency \& material [\%] & 1.0 & 0.4 & 0.8 & 1.2 & 2.4 \tabularnewline\hline Monte Carlo consistency [\%] & 4.0 & 1.0 & 2.0 & 4.0 & 8.0 \tabularnewline\hline Residual pileup [\%] & 0--2.0 & 0--2.0 & 0--2.0 & 0--2.0 & 0--2.0 \tabularnewline\hline Uncertainty on scale factor $\alpha$ [\%] & 8.0--30 & 0.2--10 & 0.2--12 & 0.2--14 & 1.0--14 \tabularnewline\hline \shortstack{Choice of peripheral events \\for $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$} >$ 160 or $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}>$100 GeV [\%]} & 4.0 & 1.0 & 1.0 & 2.0 & 4.0 \tabularnewline\hline \end{tabular} \caption{\label{tab:tabvn} Summary of relative systematic uncertainties on $v_n$, for $n=1$ to 5.} \end{table} \section{Results} \label{sec:result} \subsection{Correlation functions and integrated yields} \label{sec:result1} Figure~\ref{fig:res1} shows the 1-D correlation functions after the ZYAM procedure, $Y^{\mathrm{corr}}(\mbox{$\Delta \phi$})$, in various ranges of $\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$}$ for a fixed $\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}$ range of 1--3 GeV. The correlation functions are obtained in the long-range region ($|\Delta\eta|>2$) and are shown for events selected by $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}\geq220$. This event class contains a small fraction ($3\times10^{-5}$) of the minimum-bias $\mbox{$p$+Pb}$ events with the highest multiplicity. The correlation functions are compared to the distributions of the recoil component, $\alpha Y^{\mathrm{corr}}_{\mathrm{peri}}(\mbox{$\Delta \phi$})$ in Eq.~(\ref{eq:ana3}), estimated from the peripheral event class defined by $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}<10$ GeV. The scale factor $\alpha$ is chosen such that the near-side short-range yield matches between the two event classes (see Eq.~(\ref{eq:ana3}) and discussion around it). Figure~\ref{fig:res1} shows a clear near-side excess in the full $\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$}$ range studied in this analysis. An excess above the estimated recoil contribution is also observed on the away-side over the same $\pT$ range. \begin{figure}[!h] \centering \includegraphics[width=1\columnwidth]{fig_05} \caption{\label{fig:res1} The per-trigger yield distributions $Y^{\mathrm{corr}}(\mbox{$\Delta \phi$})$ and $Y^{\mathrm{recoil}}(\mbox{$\Delta \phi$})$ for events with $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}\geq220$ in the long-range region \mbox{$|\Delta\eta|>2$}. The distributions are shown for $1<\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}<3$ GeV in various $\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$}$ ranges.
They are compared to the recoil contribution estimated from a peripheral event class defined by $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}<10$ GeV using a rescaling procedure (see Eq.~(\ref{eq:ana3}) and discussion around it). The curves are Fourier fits including the first five harmonics.} \end{figure} To further quantify the properties of the long-range components, the $Y^{\mathrm{corr}}(\mbox{$\Delta \phi$})$ distributions are integrated over $|\mbox{$\Delta \phi$}|<\pi/3$ and $|\mbox{$\Delta \phi$}|>2\pi/3$, similar to the procedure used in previous analyses~\cite{Aad:2012gla,Abelev:2012ola}. The integrated yields, $\mbox{$Y_{\mathrm{int}}$}$, are obtained in several event classes and are plotted as a function of $\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$}$ in Fig.~\ref{fig:res2}. The near-side yields increase with trigger $\pT$, reach a maximum at $\pT\sim3$ GeV, and then decrease to a value close to zero at $\pT>10$ GeV. This trend is characteristic of the $\pT$ dependence of the Fourier harmonics in A+A collisions. In contrast, the away-side yields show a continuous increase across the full $\pT$ range, due to the contribution of the recoil component that mostly results from dijets. \begin{figure}[!h] \centering \includegraphics[width=0.9\columnwidth]{fig_06} \caption{\label{fig:res2} Integrated per-trigger yields $\mbox{$Y_{\mathrm{int}}$}$ as a function of $\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$}$ for $1<\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}<3$~GeV, for events in various $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ ranges on (a) the near-side and (b) the away-side. The error bars and shaded bands represent the statistical and systematic uncertainties, respectively.} \end{figure} Figure~\ref{fig:res3} shows the centrality dependence of the long-range integrated yields for the event activity based on $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ (left) and $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ (right) for particles in the $1<\mbox{$p_{\mathrm{T}}^{\mathrm{a,b}}$}<3$ GeV range. The near-side yield is close to zero in low-activity events and increases with $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ or $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$. The away-side yield shows a similar increase as a function of $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ or $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$, but it starts at a value significantly above zero. The yield difference between these two regions is found to vary slowly with $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ or $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$, indicating that the growth in the integrated yield with increasing event activity is similar on the near-side and the away-side. This behavior suggests the existence of an away-side long-range component that has a magnitude similar to the near-side long-range component. \begin{figure}[!h] \centering \includegraphics[width=0.9\columnwidth]{fig_07} \caption{\label{fig:res3} The integrated per-trigger yield, $\mbox{$Y_{\mathrm{int}}$}$, on the near-side (circles), the away-side (squares) and their difference (diamonds) as a function of (a) $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ and (b) $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ for pairs in $2<|\Delta\eta|<5$ and $1<\mbox{$p_{\mathrm{T}}^{\mathrm{a,b}}$}<3$ GeV. The yield difference is compared to the estimated recoil contribution on the away-side (solid lines).
The error bars or the shaded bands represent the combined statistical and systematic uncertainties.} \end{figure} Figure~\ref{fig:res3} also shows (solid lines) the recoil component estimated from the low event-activity class ($\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}<10$ GeV) via the rescaling procedure discussed in Sec.~\ref{sec:recoil}. The yield difference between the away-side and the near-side in this $\pT$ range is reproduced by this estimate of the recoil component. In other $\pT$ ranges, a systematic difference between the recoil component and the yield difference is observed and is attributed to the contribution of a genuine dipolar flow, $v_{1,1}$, to the correlation function (see discussion in Sec.~\ref{sec:v1}). To quantify the $\mbox{$\Delta \phi$}$ dependence of the measured long-range correlations, the first five harmonics of the correlation functions, $v_1$ to $v_5$, are extracted via the procedure described in Sec.~\ref{sec:fourier}. The following section summarizes the results for $v_2$--$v_5$, and the results for $v_1$ are discussed in Sec.~\ref{sec:v1}. \subsection{Fourier coefficients $v_2$--$v_5$} \label{sec:result2} Figure~\ref{fig:resb1} shows the $v_2$, $v_3$, and $v_4$ obtained \begin{figure}[!h] \centering \includegraphics[width=1\columnwidth]{fig_08} \caption{\label{fig:resb1} The Fourier coefficients $v_2$, $v_3$, and $v_4$ as a function of $\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$}$ extracted from the correlation functions for events with $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}\geq220$, before (denoted by $v_n^{\rm{unsub}}$) and after (denoted by $v_n$) the subtraction of the recoil component. Each panel shows the results for one harmonic. The pairs are formed from charged particles with $1<\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}<3$ GeV and $|\Delta\eta|>2$. The error bars and shaded boxes represent the statistical and systematic uncertainties, respectively.} \end{figure} using the 2PC method described in Sec.~\ref{sec:fourier} for \mbox{$1<\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}<3$~GeV}. The results are shown both before (denoted by $v_n^{\rm{unsub}}$) and after the subtraction of the recoil component (Eq.~(\ref{eq:ana3})). The recoil contribution slightly affects the $v_n$ values for trigger $\pT<3$ GeV, but becomes increasingly important for higher trigger $\pT$ and higher-order harmonics. This behavior is expected, as the dijet contribution, which dominates the recoil component, increases rapidly with $\pT$ (for example, see Fig.~\ref{fig:res1} or Ref.~\cite{Aad:2012bu}). At high $\pT$, the contribution of dijets appears as a narrow peak on the away-side, leading to $v_n^{\rm{unsub}}$ coefficients with alternating sign: $(-1)^n$~\cite{Aad:2012bu}. In contrast, the $v_n$ values after recoil subtraction are positive across the full measured $\pT$ range. Hence, the recoil subtraction is necessary for the reliable extraction of the long-range correlations, especially at high $\pT$. Figure~\ref{fig:resb1b} shows the trigger $\pT$ dependence of the $v_2$--$v_5$ in several $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ event classes. The $v_5$ measurement is available only for three event-activity classes in a limited $\pT$ range. All flow harmonics show similar trends, i.e. they increase with $\pT$ up to 3--5 GeV and then decrease, but remain positive at higher $\pT$. For all event classes, the magnitude of the $v_n$ is largest for $n=2$, and decreases quickly with increasing $n$.
The ATLAS data are compared to the measurement by the CMS experiment~\cite{Chatrchyan:2013nka} for an event-activity class in which the number of offline reconstructed tracks, $N_{\mathrm{trk}}^{\mathrm{off}}$, within $|\eta|<2.4$ and $\pT>0.4$ GeV is $220\leq N_{\mathrm{trk}}^{\mathrm{off}}<260$. This is comparable to the $220\leq\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}<260$ event class used in the ATLAS analysis. A similar recoil removal procedure, with $N_{\mathrm{trk}}^{\mathrm{off}}<20$ as the peripheral events, has been used for the CMS data. Excellent agreement is observed between the two results. \begin{figure}[!h] \centering \includegraphics[width=1\columnwidth]{fig_09} \caption{\label{fig:resb1b} The $v_n(\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$})$ with $n=2$ to 5 for six $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ event-activity classes obtained for $|\mbox{$\Delta \eta$}|>2$ and the $\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}$ range of 1--3 GeV. The error bars and shaded boxes represent the statistical and systematic uncertainties, respectively. Results in $220\leq\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}<260$ are compared to the CMS data~\cite{Chatrchyan:2013nka} obtained by subtracting the peripheral events (the number of offline tracks $N_{\mathrm{trk}}^{\mathrm{off}}<20$), shown by the solid and dashed lines.} \end{figure} The extraction of the $v_n$ from $v_{n,n}$ relies on the factorization relation in Eq.~(\ref{eq:four2}). This factorization is checked by calculating $v_n$ using different ranges of $\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}$ for events with $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}\geq220$ as shown in Fig.~\ref{fig:resb3}. The factorization behavior can also be studied via the ratio~\cite{Gardim:2012im,Heinz:2013bua} \begin{eqnarray} \label{eq:rn} r_n(\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$},\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}) = \frac{v_{n,n}(\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$},\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$})}{\sqrt{v_{n,n}(\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$},\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$})v_{n,n}(\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$},\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$})}}\;, \end{eqnarray} with $r_n=1$ for perfect factorization. The results with recoil subtraction ($r_n$) and without subtraction ($r_n^{\mathrm{unsub}}$) are summarized in Fig.~\ref{fig:rn}, and they are shown as a function of $\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}-\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$}$, because by construction the ratios equal one for $\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}=\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$}$. This second method is limited to $\mbox{$p_{\mathrm{T}}^{\mathrm{a,b}}$}\lesssim4$ GeV, since requiring both particles to be at high $\pT$ reduces the number of the available pairs for $v_{n,n}(\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$},\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$})$ or $v_{n,n}(\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$},\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$})$. In contrast, for the results shown in Fig.~\ref{fig:resb3}, using Eqs.~(\ref{eq:four2}) and (\ref{eq:four3}), the restriction applies to only one of the particles, i.e. $\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}\lesssim4$ GeV. 
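A minimal numerical sketch of the harmonic extraction and of the factorization test is given below (assuming, as in Sec.~\ref{sec:fourier}, $N=24$ $\mbox{$\Delta \phi$}$ bins over $[0,\pi]$; the closure test at the end uses a synthetic distribution, not data).
\begin{verbatim}
import numpy as np

N = 24
dphi = (np.arange(N) + 0.5) * np.pi / N  # bin centers in [0, pi]

def vnn(y_sub, n):
    # discrete Fourier coefficient v_{n,n} of a binned Y^sub(dphi)
    return np.sum(np.cos(n * dphi) * y_sub) / np.sum(y_sub)

def vn(y_ab, y_bb, n):
    # factorization: v_n(pT^a) = v_{n,n}(a,b) / sqrt(v_{n,n}(b,b))
    return vnn(y_ab, n) / np.sqrt(vnn(y_bb, n))

def rn(vnn_ab, vnn_aa, vnn_bb):
    # r_n = v_{n,n}(a,b) / sqrt(v_{n,n}(a,a) * v_{n,n}(b,b));
    # equals 1 for perfect factorization
    return vnn_ab / np.sqrt(vnn_aa * vnn_bb)

# closure test: a synthetic distribution with v_{2,2} = 0.05 is recovered
y = 1.0 + 2.0 * 0.05 * np.cos(2.0 * dphi)
assert abs(vnn(y, 2) - 0.05) < 1e-12
\end{verbatim}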
Results in Figs.~\ref{fig:resb3} and \ref{fig:rn} show that, in the region where the statistical uncertainty is small, the factorization holds to within a few percent for $v_2$ over $0.5<\mbox{$p_{\mathrm{T}}^{\mathrm{a,b}}$}<4$ GeV, within 10\% for $v_3$ over $0.5<\mbox{$p_{\mathrm{T}}^{\mathrm{a,b}}$}<3$ GeV, and within 20--30\% for $v_4$ over $0.5<\mbox{$p_{\mathrm{T}}^{\mathrm{a,b}}$}<4$ GeV (Fig.~\ref{fig:resb3} only). Furthermore, in this $\pT$ region, the differences between $r_n$ and $r_n^{\mathrm{unsub}}$ are very small ($<10\%$), as shown by Fig.~\ref{fig:rn}, consistent with the observation in Fig.~\ref{fig:resb1}. This level of factorization is similar to what was observed in peripheral Pb+Pb collisions~\cite{Aad:2012bu}. \begin{figure}[!h] \centering \includegraphics[width=0.9\columnwidth]{fig_10} \caption{\label{fig:resb3} The $v_2$ (left column), $v_3$ (middle column), and $v_4$ (right column) as a function of $\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$}$ extracted using four $\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}$ bins in the long-range region $|\mbox{$\Delta \eta$}|>2$ for events with $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}\geq220$. The ratio of the $v_n(\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$})$ in each $\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}$ bin to those obtained with the default reference $\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}$ bin of 1--3 GeV is shown in the bottom part of each column. The error bars and shaded bands represent the statistical and systematic uncertainties, respectively. } \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.9\columnwidth]{fig_11} \vspace*{-0.2cm} \caption{\label{fig:rn} The values of the factorization variable defined by Eq.~(\ref{eq:rn}) before (denoted by $r_n^{\rm{unsub}}$) and after (denoted by $r_n$) the subtraction of the recoil component. They are shown for $n=2$ (top row) and $n=3$ (bottom row) as a function of $\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}-\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$}$ in various $\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}$ ranges for events with $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}\geq220$. The solid lines represent a theoretical prediction from Ref.~\cite{Kozlov:2014fqa}. The error bars represent the total experimental uncertainties.} \end{figure} Figure~\ref{fig:rn} also compares the $r_n$ data with a theoretical calculation from a viscous hydrodynamic model~\cite{Kozlov:2014fqa}. The model predicts at most a few percent deviation of $r_n$ from one, which is attributed to $\pT$-dependent decorrelation effects associated with event-by-event flow fluctuations~\cite{Gardim:2012im}. In most cases, the data are consistent with the prediction within uncertainties. Figure~\ref{fig:resb2} shows the centrality dependence of $v_2$, $v_3$, and $v_4$ as a function of $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ and $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$. The results are obtained for $0.4<\mbox{$p_{\mathrm{T}}^{\mathrm{a,b}}$}<3$ GeV, both before and after subtraction of the recoil contribution. The difference between $\mbox{$v_{n}^{\mathrm{unsub}}$}$ and $v_n$ is very small in central collisions, up to 3--4\% for both event-activity definitions. For more peripheral collisions, the difference is larger and reaches 20--30\% for $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}\sim40$ or $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}\sim30$ GeV. The sign of the difference also alternates with $n$ (already seen in Fig.~\ref{fig:resb1}): i.e. 
$\mbox{$v_{n}^{\mathrm{unsub}}$}>v_n$ for even $n$ and $\mbox{$v_{n}^{\mathrm{unsub}}$}<v_n$ for odd $n$. This behavior is characteristic of the influence of the away-side dijet contribution to $\mbox{$v_{n}^{\mathrm{unsub}}$}$. The $v_n$ values in Fig.~\ref{fig:resb2} exhibit a modest centrality dependence. The change of $v_2$ is less than 8\% over $140<\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}<300$ (top 0.5\% of MB-triggered events) or $130<\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}<240$ GeV (top 0.05\% of MB-triggered events), covering about half of the full dynamic range. The centrality dependence of $v_3$ is stronger and exhibits a nearly linear increase with $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ and $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$. \begin{figure}[!h] \centering \includegraphics[width=1\columnwidth]{fig_12a}\\ \includegraphics[width=1\columnwidth]{fig_12b} \caption{\label{fig:resb2} The centrality dependence of $v_2$, $v_3$, and $v_4$ as a function of $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ (top row) and $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ (bottom row) for pairs with $0.4<\mbox{$p_{\mathrm{T}}^{\mathrm{a,b}}$}<3$ GeV and $|\mbox{$\Delta \eta$}|>2$. The results are obtained with (symbols) and without (lines) the subtraction of the recoil contribution. The error bars and shaded boxes on the $v_n$ data represent the statistical and systematic uncertainties, respectively, while the error bars on the $\mbox{$v_{n}^{\mathrm{unsub}}$}$ represent the combined statistical and systematic uncertainties.} \end{figure} Figure~\ref{fig:resb2} shows that the overall centrality dependence is similar for $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ and $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$. The correlation data (not the fit, Eq.~(\ref{eq:map})) in Fig.~\ref{fig:2b} are used to map the $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$-dependence in the top row of Fig.~\ref{fig:resb2} to a corresponding $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$-dependence. The $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$-dependence of $v_n$ mapped from the $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$-dependence is then compared to the directly measured $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$-dependence in Fig.~\ref{fig:resc1}. Good agreement is seen for $v_2$ and $v_3$. \begin{figure}[!h] \centering \includegraphics[width=0.8\columnwidth]{fig_13} \caption{\label{fig:resc1} The $v_2$ (left panel) and $v_3$ (right panel) as a function of $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ calculated directly for narrow ranges in $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ (open circles) or obtained indirectly by mapping from the $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$-dependence of $v_n$ using the correlation data shown in Fig.~\ref{fig:2b}(b) (solid circles). The error bars and shaded boxes represent the statistical and systematic uncertainties, respectively.} \end{figure} \subsection{First-order Fourier coefficient $v_1$} \label{sec:v1} A similar analysis is performed to extract the dipolar flow $v_1$. Figure~\ref{fig:v1a} shows the $v_{1,1}$ values as a function of $\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$}$ in various ranges of $\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}$, before and after the recoil subtraction. 
Before the recoil subtraction, $v_{1,1}^{\mathrm{unsub}}$ values are always negative and decrease nearly linearly with $\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$}$ and $\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}$, except for the $\pT$ region around 3--4 GeV where a shoulder-like structure is seen. This shoulder is very similar to that observed in A+A collisions, which is understood as a combined contribution from the negative recoil and the positive dipolar flow in this $\pT$ range~\cite{Retinskaya:2012ky,Aad:2012bu}, according to the following form~\cite{Borghini:2000cm,Borghini:2002mv}: \begin{eqnarray} \label{eq:v1} v_{1,1}^{\mathrm{unsub}}(\pT^{\mathrm a},\pT^{\mathrm b}) \approx v_1(\pT^{\mathrm a})v_1(\pT^{\mathrm b})-\frac{\pT^{\mathrm a}\pT^{\mathrm b}}{M\langle \pT^2\rangle}\;, \end{eqnarray} where $M$ and $\langle \pT^2\rangle$ are the multiplicity and average squared transverse momentum of the particles in the whole event, respectively. The negative correction term reflects the global momentum conservation contribution, which is important in low-multiplicity events and at high $\pT$. The shoulder-like structure in Fig.~\ref{fig:v1a} reflects the contribution of the dipolar flow term $v_1(\pT^{\mathrm a})v_1(\pT^{\mathrm b})$. \begin{figure}[!h] \centering \includegraphics[width=1\columnwidth]{fig_14} \caption{\label{fig:v1a} The first-order harmonic of the 2PC before recoil subtraction $v_{1,1}^{\mathrm{unsub}}$ (left panel) and after recoil subtraction $v_{1,1}$ (right panel) as a function of $\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$}$ for different $\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}$ ranges for events with $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}\geq220$. The error bars and shaded boxes represent the statistical and systematic uncertainties, respectively.} \end{figure} After the recoil subtraction, the magnitude of $v_{1,1}$ is greatly reduced, suggesting that most of the momentum conservation contribution has been removed. The resulting $v_{1,1}$ values cross each other at around $\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$}\sim1.5$--2.0~GeV. This behavior is consistent with the expectation that the $v_1(\pT)$ function crosses zero at $\pT\sim1$--2~GeV, a feature that is also observed in A+A collisions~\cite{Retinskaya:2012ky,Aad:2012bu}. The trigger $\pT$ dependence of $v_1$ is obtained via a factorization procedure very similar to that discussed in Sec.~\ref{sec:fourier}: \begin{eqnarray} \label{eq:v1b} v_1(\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$}) \equiv \frac{v_{1,1}(\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$},\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$})}{v_1(\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$})}\;, \end{eqnarray} where the dipolar flow in the associated $\pT$ bin, $v_1(\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$})$, is defined as \begin{eqnarray} \label{eq:v1c} v_{1}(\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}) = \mathrm{sign}(\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}-\pT^0)\sqrt{\left|v_{1,1}(\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$},\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$})\right|}\;, \end{eqnarray} where $\mathrm{sign}(\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}-\pT^0)$ is the sign of $v_{1}$, defined to be negative for $\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}<\pT^0=1.5$ GeV and positive otherwise. This sign convention is necessary to account for the sign change of $v_1$ at low $\pT$. To obtain the $v_1(\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$})$, three reference $\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}$ ranges, 0.5--1 GeV, 3--4 GeV and 4--5 GeV, are used to first calculate $v_{1}(\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$})$ (a schematic implementation of this two-step extraction is sketched below). 
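The two-step extraction of Eqs.~(\ref{eq:v1b}) and (\ref{eq:v1c}) can be summarized schematically as follows (Python; illustrative only, with the error propagation omitted and hypothetical helper names): \begin{verbatim}
import math

PT0 = 1.5  # GeV; assumed sign-change point of v_1, as in Eq. (v1c)

def v1_ref(v11_bb, pt_b):
    # Eq. (v1c): v_1(pTb) = sign(pTb - PT0) * sqrt(|v_{1,1}(pTb, pTb)|)
    sign = -1.0 if pt_b < PT0 else 1.0
    return sign * math.sqrt(abs(v11_bb))

def v1_trigger(v11_ab, v11_bb, pt_b):
    # Eq. (v1b): v_1(pTa) = v_{1,1}(pTa, pTb) / v_1(pTb)
    return v11_ab / v1_ref(v11_bb, pt_b)
\end{verbatim}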
These values are then inserted into Eq.~(\ref{eq:v1b}) to obtain three $v_1(\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$})$ functions. The uncertainties on the $v_1(\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$})$ values are calculated via error propagation through Eqs.~(\ref{eq:v1b}) and (\ref{eq:v1c}). The calculation is not possible for $\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}$ in the range of 1--3 GeV, where the $v_{1,1}$ values are close to zero and hence the resulting $v_{1}(\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$})$ values have large uncertainties. The results for $v_1(\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$})$ are shown in Fig.~\ref{fig:v1b} for these three reference $\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}$ bins. They are consistent with each other. The $v_1$ value is negative at low $\pT$, crosses zero at around $\pT\sim1.5$ GeV, and increases to about 0.1 at 4--6 GeV. This $\pT$-dependence is similar to the $v_1(\pT)$ measured by the ATLAS experiment in Pb+Pb collisions at $\mbox{$\sqrt{s_{\mathrm{NN}}}$}=2.76$ TeV~\cite{Aad:2012bu}, except that the $v_1$ value in Pb+Pb collisions crosses zero at lower $\pT$ ($\sim 1.1$ GeV), reflecting the fact that the $\langle\pT\rangle$ in Pb+Pb collisions at $\mbox{$\sqrt{s_{\mathrm{NN}}}$}=2.76$ TeV is smaller than that in $\mbox{$p$+Pb}$ collisions at $\mbox{$\sqrt{s_{\mathrm{NN}}}$}=5.02$ TeV. \begin{figure}[!h] \centering \includegraphics[width=0.7\columnwidth]{fig_15} \caption{\label{fig:v1b} The $\mbox{$p_{\mathrm{T}}^{\mathrm{a}}$}$ dependence of $v_{1}$ extracted using the factorization relations Eqs.~(\ref{eq:v1b}) and (\ref{eq:v1c}) in three reference $\mbox{$p_{\mathrm{T}}^{\mathrm{b}}$}$ ranges for events with $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}\geq220$. The error bars and shaded boxes represent the statistical and systematic uncertainties, respectively.} \end{figure} \subsection{Comparison of $v_n$ results between high-multiplicity p+Pb and peripheral Pb+Pb collisions} In the highest-multiplicity $\mbox{$p$+Pb}$ collisions, the charged-particle multiplicity $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ can exceed 350 in $|\eta|<2.5$, while $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ can come close to 300 GeV on the Pb-fragmentation side. This activity is comparable to that in Pb+Pb collisions at $\mbox{$\sqrt{s_{\mathrm{NN}}}$} =2.76$~TeV in the 45--50\% centrality interval, where the long-range correlation is known to be dominated by collective flow. Hence a comparison of the $v_n$ coefficients at similar event activity in the two collision systems can improve our current understanding of the origin of the long-range correlations. The left column of Fig.~\ref{fig:comp1} compares the $v_n$ values from $\mbox{$p$+Pb}$ collisions with $220\leq\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}<260$ to the $v_n$ values for Pb+Pb collisions in the 55--60\% centrality interval from Ref.~\cite{Aad:2012bu}. These two event classes are chosen to have similar efficiency-corrected multiplicities of charged particles with $\pT>0.5$~GeV and $|\eta|<2.5$, characterized by the average value (\mbox{$\langle N_{\mathrm{ch}}\rangle$}) and standard deviation ($\sigma$): $\mbox{$\langle N_{\mathrm{ch}}\rangle$}\pm\sigma\approx259\pm13$ for $\mbox{$p$+Pb}$ collisions and $\mbox{$\langle N_{\mathrm{ch}}\rangle$}\pm\sigma\approx 241\pm43$ for Pb+Pb collisions. The Pb+Pb results on $v_n$~\cite{Aad:2012bu} were obtained via an event-plane method by correlating tracks in $\eta>0$ ($\eta < 0$) with the event plane determined in the FCal in the opposite hemisphere. 
The larger $v_2$ values in Pb+Pb collisions can be attributed to the elliptic collision geometry of the Pb+Pb system, while the larger $v_4$ values are due to the non-linear coupling between $v_2$ and $v_4$ in the collective expansion~\cite{Luzum:2010ae}. The $v_3$ data for Pb+Pb collisions are similar in magnitude to those in $\mbox{$p$+Pb}$ collisions. However, the $\pT$ dependence of $v_n$ is different for the two systems. These observations are consistent with similar comparisons performed by the CMS experiment~\cite{Chatrchyan:2013nka}. Recently, Basar and Teaney~\cite{Basar:2013hea} have proposed a method to rescale the Pb+Pb data for a \begin{figure}[!h] \centering \includegraphics[width=1\columnwidth]{fig_16} \caption{\label{fig:comp1} The coefficients $v_2$ (top row), $v_3$ (middle row) and $v_4$ (bottom row) as a function of $\pT$ compared between $\mbox{$p$+Pb}$ collisions with $220\leq\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}<260$ in this analysis and Pb+Pb collisions in the 55--60\% centrality interval from Ref.~\cite{Aad:2012bu}. The left column shows the original data with their statistical (error bars) and systematic uncertainties (shaded boxes). In the right column, the same Pb+Pb data are rescaled horizontally by a constant factor of 1.25, and the $v_2$ and $v_4$ are also down-scaled by an empirical factor of 0.66 to match the $\mbox{$p$+Pb}$ data.} \end{figure} proper comparison to the $\mbox{$p$+Pb}$ data. They argue that the $v_n(\pT)$ shapes in the two collision systems are related to each other by a constant scale factor of $K=1.25$, accounting for the difference in their $\langle\pT\rangle$, and that one should therefore observe a similar $v_n$ shape after rescaling the $\pT$ measured in Pb+Pb collisions, i.e. for $v_n(\pT/K)$. The remaining difference in the overall magnitude of $v_2$ after the $\pT$-rescaling is then entirely due to the elliptic geometry of Pb+Pb collisions. In order to test this idea, the $\pT$ values for Pb+Pb collisions are rescaled by the constant factor of 1.25, and the $v_n$ values with rescaled $\pT$ are displayed in the right column of Fig.~\ref{fig:comp1}. Furthermore, the magnitudes of $v_2$ and $v_4$ are also rescaled by a common empirical factor of 0.66 to approximately match the magnitude of the corresponding $\mbox{$p$+Pb}$ $v_n$ data. The rescaled $v_n$ results are shown in the right column and compared to the $\mbox{$p$+Pb}$ $v_n$ data. They agree well with each other, in particular in the low-$\pT$ region ($\pT<2$--4 GeV) where the statistical uncertainties are small. \section{Summary} \label{sec:summary} This paper presents measurements of two-particle correlation (2PC) functions and the first five azimuthal harmonics $v_1$--$v_5$ in $\mbox{$\sqrt{s_{\mathrm{NN}}}$} =5.02$~TeV $\mbox{$p$+Pb}$ collisions with a total integrated luminosity of approximately 28~$\mathrm{nb}^{-1}$ recorded by the ATLAS detector at the LHC. The two-particle correlations and $v_n$ coefficients are obtained as a function of $\pT$ for pairs with $2<|\Delta \eta|<5$ in different intervals of event activity, defined by either $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$, the number of reconstructed tracks with $\pT>0.4$ GeV and $|\eta|<2.5$, or $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$, the total transverse energy over $-4.9<\eta<-3.2$ on the Pb-fragmentation side. 
Significant long-range correlations (extending to $|\mbox{$\Delta \eta$}|=5$) are observed for pairs on the near-side ($|\mbox{$\Delta \phi$}|<\pi/3$) over a wide range of transverse momentum ($\pT<12$ GeV) and broad ranges of $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ and $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$. A similar long-range correlation is also observed on the away-side ($|\mbox{$\Delta \phi$}|>2\pi/3$), after subtracting the recoil contribution estimated using the 2PC in low-activity events. The azimuthal structure of these long-range correlations is quantified using the Fourier coefficients $v_2$--$v_5$ as a function of $\pT$. The $v_n$ values increase with $\pT$ up to 3--4 GeV and then decrease for higher $\pT$, but remain positive in the measured $\pT$ range. The overall magnitude of $v_n(\pT)$ is observed to decrease with $n$. The magnitudes of $v_n$ also increase with both $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ and $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$. The $v_2$ values seem to saturate at large $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ or $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ values, while the $v_3$ values show a linear increase over the measured $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$ or $\mbox{$E_{\mathrm{T}}^{{\scriptscriptstyle \mathrm{Pb}}}$}$ range. The first-order harmonic $v_1$ is also extracted from the 2PC. The $v_1(\pT)$ function is observed to change sign at $\pT\approx 1.5$--2.0 GeV and to increase to about 0.1 at $\pT>4$ GeV. The extracted $v_2(\pT)$, $v_3(\pT)$ and $v_4(\pT)$ are compared to the $v_n$ coefficients in Pb+Pb collisions at $\mbox{$\sqrt{s_{\mathrm{NN}}}$} =2.76$~TeV with similar $\mbox{$N_{\mathrm{ch}}^{\mathrm{rec}}$}$. After applying a scale factor of $K=1.25$ that accounts for the difference of mean $\pT$ in the two collision systems, as suggested in Ref.~\cite{Basar:2013hea}, the shape of the $v_n(\pT/K)$ distribution in Pb+Pb collisions is found to be similar to the shape of the $v_n(\pT)$ distribution in $\mbox{$p$+Pb}$ collisions. This suggests that the long-range ridge correlations in high-multiplicity $\mbox{$p$+Pb}$ collisions and peripheral Pb+Pb collisions are driven by similar dynamics. \section*{ACKNOWLEDGEMENTS} We thank CERN for the very successful operation of the LHC, as well as the support staff from our institutions without whom ATLAS could not be operated efficiently. We acknowledge the support of ANPCyT, Argentina; YerPhI, Armenia; ARC, Australia; BMWF and FWF, Austria; ANAS, Azerbaijan; SSTC, Belarus; CNPq and FAPESP, Brazil; NSERC, NRC and CFI, Canada; CERN; CONICYT, Chile; CAS, MOST and NSFC, China; COLCIENCIAS, Colombia; MSMT CR, MPO CR and VSC CR, Czech Republic; DNRF, DNSRC and Lundbeck Foundation, Denmark; EPLANET, ERC and NSRF, European Union; IN2P3-CNRS, CEA-DSM/IRFU, France; GNSF, Georgia; BMBF, DFG, HGF, MPG and AvH Foundation, Germany; GSRT and NSRF, Greece; ISF, MINERVA, GIF, I-CORE and Benoziyo Center, Israel; INFN, Italy; MEXT and JSPS, Japan; CNRST, Morocco; FOM and NWO, Netherlands; BRF and RCN, Norway; MNiSW and NCN, Poland; GRICES and FCT, Portugal; MNE/IFA, Romania; MES of Russia and ROSATOM, Russian Federation; JINR; MSTD, Serbia; MSSR, Slovakia; ARRS and MIZ\v{S}, Slovenia; DST/NRF, South Africa; MINECO, Spain; SRC and Wallenberg Foundation, Sweden; SER, SNSF and Cantons of Bern and Geneva, Switzerland; NSC, Taiwan; TAEK, Turkey; STFC, the Royal Society and Leverhulme Trust, United Kingdom; DOE and NSF, United States of America. 
The crucial computing support from all WLCG partners is acknowledged gratefully, in particular from CERN and the ATLAS Tier-1 facilities at TRIUMF (Canada), NDGF (Denmark, Norway, Sweden), CC-IN2P3 (France), KIT/GridKA (Germany), INFN-CNAF (Italy), NL-T1 (Netherlands), PIC (Spain), ASGC (Taiwan), RAL (UK) and BNL (USA) and in the Tier-2 facilities worldwide.
\section{Introduction} All the graphs considered are undirected, simple, finite and \underline{connected}. The vertex set and edge set of a graph $G$ are denoted by $V(G)$ and $E(G)$. Let $v$ be a vertex of $G$. The \emph{open neighborhood} of $v$ is $\displaystyle N_G(v)=\{w \in V:vw \in E\}$, and the \emph{closed neighborhood} of $v$ is $N_G[v]=N(v)\cup \{v\}$. The \emph{degree} of $v$ is $\deg_G(v)=|N_G(v)|$. If $N_G[v]=V(G)$ (resp. $\deg_G(v)=1$), then $v$ is called \emph{universal} (resp. a \emph{leaf}). Let $W$ be a subset of vertices of a graph $G$. The open neighborhood of $W$ is $\displaystyle N_G(W)=\cup_{v\in W} N_G(v)$, and the closed neighborhood of $W$ is $N_G[W]=N_G(W)\cup W$. The subgraph of $G$ induced by $W$, denoted by $G[W]$, has vertex set $W$ and edge set $E(G[W]) = \{vw \in E(G) : v \in W,w \in W\}$. The \emph{complement} of $G$, denoted by $\overline{G}$, is the graph on the same vertices as $G$ such that two vertices are adjacent in $\overline{G}$ if and only if they are not adjacent in $G$. Let $G_1$, $G_2$ be two graphs having disjoint vertex sets. The (disjoint) \emph{union} $G=G_1+G_2$ is the graph such that $V(G)=V(G_1)\cup V(G_2)$ and $E(G)=E(G_1)\cup E(G_2)$. The \emph{join} $G=G_1\vee G_2$ is the graph such that $V(G)=V(G_1)\cup V(G_2)$ and $E(G)=E(G_1)\cup E(G_2)\cup \{uv:u\in V(G_1),v\in V(G_2)\} $. The distance between vertices $v,w\in V(G)$ is denoted by $d_G(v,w)$, or $d(v,w)$ if the graph $G$ is clear from the context. The diameter of $G$ is ${\rm diam}(G) = \max\{d(v,w) : v,w \in V(G)\}$. The distance between a vertex $v\in V(G)$ and a set of vertices $S\subseteq V(G)$, denoted by $d(v,S)$, is the minimum of the distances between $v$ and the vertices of $S$, that is to say, $d(v,S)=\min\{d(v,w):w\in S\}$. Undefined terminology can be found in \cite{chlezh11}. A vertex $x\in V(G)$ \emph{resolves} a pair of vertices $v,w\in V(G)$ if $d(v,x)\ne d(w,x)$. A set of vertices $S\subseteq V(G)$ is a \emph{locating set} of $G$ if every pair of distinct vertices of $G$ is resolved by some vertex in $S$. The \emph{metric dimension} $\beta(G)$ of $G$ is the minimum cardinality of a locating set. Locating sets were first defined by \cite{hararymelter} and \cite{slater}, and they have since been widely investigated (see \cite{chmppsw07,hmpsw10} and their references). Let $G=(V,E)$ be a graph of order $n$. If $\Pi=\{S_1,\ldots,S_k\}$ is a partition of $V$, we denote by $r(u|\Pi)$ the vector of distances between a vertex $u\in V$ and the elements of $\Pi$, that is, $r(u|\Pi)=(d(u,S_1),\dots ,d(u,S_k))$. The partition $\Pi$ is called a \emph{locating partition} of $G$ if, for any pair of distinct vertices $u,v\in V$, $r(u|\Pi)\neq r(v|\Pi)$. Observe that, to prove that a given partition is locating, it is enough to check that the vectors of distances of every pair of vertices belonging to the same part are different. The \emph{partition dimension} $\beta_p(G)$ of $G$ is the minimum cardinality of a locating partition of $G$. Locating partitions were introduced in \cite{ChaSaZh00}, and further studied in \cite{bada12,chagiha08,fegooe06,ferogo14,gyro10,gyjakuta14,grstramiwi14,royeku16,royele14,tom08,toim09,tojasl07}. Next, some known results involving the partition dimension are shown. \begin{thm} [\cite{ChaSaZh00}]\label{mdpd} Let $G$ be a graph of order $n\ge3$ and diameter ${\rm diam}(G)=d$. Then: \begin{enumerate} \item $\beta_p(G) \le \beta(G)+1$. \item $\beta_p(G) \le n-d+1$. Moreover, this bound is sharp. 
\item $\beta_p(G)=n-1$ if and only if $G$ is isomorphic to either the star $K_{1,n-1}$, or the complete split graph $K_{n-2} \vee \overline{K_2}$, or the graph $K_1\vee (K_1+K_{n-2})$. \end{enumerate} \end{thm} In \cite{tom08}, its author approached the characterization of the set of graphs of order $n\ge9$ having partition dimension $n-2$, presenting a collection of 23 graphs (as a matter of fact there are 22, since the so-called graphs $G_4$ and $G_6$ are isomorphic). Although employing a different notation (see Table \ref{tab.equivalencias}), the characterization given in that paper is the following. \begin{thm} [\cite{tom08}]\label{n-2 wrong} Let $G=(V,E)$ be a graph of order $n\ge9$. Then $\beta_p(G)=n-2$ if and only if it belongs either to the family $\{H_i\}_{i=1}^{15}$, except $H_7$ (see Figure \ref{taun234}), or to the family $\{F_i\}_{i=1}^{8}$ (see Figure \ref{pdn-2 wrong}). \end{thm} \begin{figure}[!hbt] \begin{center} \includegraphics[width=0.85\textwidth]{tomescu8} \caption{ The thick horizontal segment means the join operation $\vee$. For example: $F_1\cong \overline{K_{n-3}}\vee (K_2+K_1)$, $F_3\cong K_1 \vee(\overline{K_{n-3}}+K_2)$ and $F_5\cong \overline{K_{n-4}}\vee (P_3+K_1)$.}\label{pdn-2 wrong} \end{center} \end{figure} \begin{table}[h] \begin{center} \begin{tabular}{|c|cccccccc|} \hline Figure \ref{pdn-2 wrong} & $F_1$ & $F_2$ & $F_3$ & $F_4$ & $F_5$ & $F_6$ & $F_7$ & $F_8$ \\ \hline Paper \cite{tom08} & $G_5$ & $K_{2,n-2}-e$ & $K_{1,n-1}+e$ & $G_{11}$ & $K_n-E(K_{1,3}+e)$ & $G_3$ & $G_7$ & $G_{12}$ \\ \hline \end{tabular} \end{center} \caption{The second row contains the names used in \cite{tom08} for the graphs shown in Figure\,\ref{pdn-2 wrong}.} \label{tab.equivalencias} \end{table} Thus, in particular, and according to \cite{tom08}, for every $n\ge9$, all the graphs $G$ displayed in Figure \ref{pdn-2 wrong} satisfy $\beta_p(G)=n-2$. However, for all of them it holds that $\beta_p(G)=n-3$, as we will prove in this paper (see Corollaries \ref{f1234}, \ref{f56} and \ref{f78}). By way of example, we show next that $\beta_p(F_1)\le n-3$, for every $n\ge7$. First, notice that $F_1\cong \overline{K_{n-3}}\vee (K_2+K_1)$. Next, if $V(\overline{K_{n-3}})=\{v_1, \dots, v_{n-3}\}$, $V(K_2)=\{v_{n-2},v_{n-1}\}$ and $V(K_1)=\{v_n\}$, then consider the partition $\Pi=\{ \{v_1,v_{n-2}\}, \{v_2,v_{n-1}\}, \{v_3,v_{n}\}, \{v_4\}, \dots, \{v_{n-3} \} \}$. Finally, observe that $\Pi$ is a locating partition of $F_1$ since $2=d(v_i,v_{4})\neq d(v_{n+i-3},v_4)=1$, for every $i\in\{1,2,3\}$ (a computational check of this partition is sketched at the end of this section). \noindent The main contribution of this work is, after showing that the theorem of characterization presented in \cite{tom08} is far from being true, finding the correct answer to this problem. Motivated by this objective, we introduce the so-called \emph{twin number} $\tau(G)$ of a connected graph $G$, and present a list of basic properties, some of them directly related to the partition dimension $\beta_p(G)$. \noindent The rest of the paper is organized as follows. Section 2 is devoted to introducing the notions of twin class and twin number, and to showing some basic properties. In Section 3, subdivided into three subsections, a number of results involving both the twin number and the partition dimension of a graph are obtained. Finally, Section 4 includes a theorem of characterization presenting, for every $n\ge9$, which graphs $G$ of order $n$ satisfy $\beta_p(G)=n-2$. 
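The locating partition exhibited above for $F_1$ can also be verified by computer. The following self-contained Python sketch (illustrative; shown for the particular case $n=10$, with vertex labels as in the text) builds $F_1\cong \overline{K_{n-3}}\vee (K_2+K_1)$, computes the distance vectors $r(u|\Pi)$ by breadth-first search, and checks that they are pairwise distinct: \begin{verbatim}
from collections import deque

def bfs_dist(adj, src):
    # Distances from src to every vertex, by breadth-first search.
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

n = 10
stable = list(range(1, n - 2))                # v_1, ..., v_{n-3}: stable set
adj = {v: set() for v in range(1, n + 1)}
adj[n - 2].add(n - 1); adj[n - 1].add(n - 2)  # the edge of K_2
for u in stable:                              # join stable set to K_2 + K_1
    for w in (n - 2, n - 1, n):
        adj[u].add(w); adj[w].add(u)

# Pi = {{v1,v_{n-2}}, {v2,v_{n-1}}, {v3,v_n}, {v4}, ..., {v_{n-3}}}
parts = [{1, n - 2}, {2, n - 1}, {3, n}] + [{v} for v in stable[3:]]
dist = {v: bfs_dist(adj, v) for v in adj}
vecs = [tuple(min(dist[u][w] for w in S) for S in parts) for u in adj]
assert len(set(vecs)) == len(vecs)            # Pi is locating
print(len(parts))                             # n - 3 = 7 parts
\end{verbatim} Since $\Pi$ has $n-3$ parts, the successful check confirms $\beta_p(F_1)\le n-3$ for this instance.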
\section{Twin number}\label{sec.tn} A pair of vertices $u,v\in V$ of a graph $G=(V,E)$ are called \emph{twins} if they have exactly the same set of neighbors other than $u$ and $v$. A \emph{twin set} of $G$ is any set of pairwise twin vertices of $G$. If $uv\in E$, then they are called \emph{true twins}, and otherwise \emph{false twins}. It is easy to verify that the so-called \emph{twin relation} is an equivalence relation on $V$, and that every equivalence class is either a clique or a stable set. An equivalence class of the twin relation is referred to as a \emph{twin class}. \begin{defi} {\rm The \emph{twin number} of a graph $G$, denoted by $\tau (G)$, is the maximum cardinality of a twin class of $G$. Every twin set of cardinality $\tau (G)$ will be referred to as a \emph{$\tau$-set}. } \end{defi} As a direct consequence of these definitions, the following properties hold. \begin{prop}\label{twin.list} Let $G=(V,E)$ be a graph of order $n$. Let $W$ be a twin set of $G$. Then \begin{enumerate} \item[(1)] If $w_1,w_2\in W$, then $d(w_1,z)=d(w_2,z)$, for every vertex $z\in V\setminus \{w_1,w_2\}$. \item[(2)] No two vertices of $W$ can belong to the same part of any locating partition. \item[(3)] $W$ induces either a complete graph or an empty graph. \item[(4)] Every vertex not in $W$ is either adjacent to all the vertices of $W$ or non-adjacent to any vertex of $W$. \item[(5)] $W$ is a twin set of $\overline{G}$. \item[(6)] $\tau (G)=\tau(\overline{G})$. \item[(7)] $\tau(G)\le \beta_p(G)$. \item[(8)] $\tau (G)=\beta_p(G)=n$ if and only if $G$ is the complete graph $K_n$. \item[(9)] $\tau (G)=n-1$ if and only if $G$ is the star $K_{1,n-1}$. \end{enumerate} \end{prop} It is a routine exercise to check all the results shown in Table \ref{tab.cartProductPaths} (see also \cite{ChaSaZh00} and the references given in \cite{chmppsw07}). \begin{table}[h] \begin{center} \begin{tabular}{|c|cccccc|} \hline $G$&$P_n$ &$C_{n}$ & $K_{1,n-1}$ & $K_{k,k}$ & $K_{k,n-k}$ & $K_n$ \\ {\rm order} $n$ & $n\ge 4$ & $n\ge 5$ & $n\ge 3$ & $4\le n=2k$ & $2\le k < n-k$ & $n\ge 2$ \\ \hline $\beta(G)$ & 1 & 2 &$n-2$ & $n-2$ & $n-2$ & $n-1$\\ $\tau(G)$& 1 & 1 &$n-1$ & $k$ & $n-k$ & $n$ \\ $\beta_p(G)$& 2 & 3 &$n-1$ & $k+1$ & $n-k$ & $n$ \\ \hline \end{tabular} \end{center} \caption{Metric dimension $\beta$, twin number $\tau$ and partition dimension $\beta_p$ of paths, cycles, stars, bicliques and cliques.} \label{tab.cartProductPaths} \end{table} \begin{figure}[!hbt] \begin{center} \includegraphics[width=0.95\textwidth]{tau} \caption{Graphs of order $n\ge4$ such that $\tau(G)=n-2$.}\label{pdn-1} \label{bpn-2} \end{center} \end{figure} We conclude this section by characterizing the set of graphs $G$ such that $\tau (G)=n-2$. \begin{prop}\label{prop.twingran} Let $G=(V,E)$ be a graph of order $n\ge 4$. Then, $\tau (G)=n-2$ if and only if $G$ is one of the following graphs (see Figure \ref{bpn-2}): \begin{enumerate} \item[{\rm(a)}] the complete split graph $K_{n-2} \vee \overline{K_2}$, obtained by removing an edge from the complete graph $K_n$; \item[{\rm(b)}] the graph $K_1\vee (K_1+K_{n-2})$, obtained by attaching a leaf to the complete graph $K_{n-1}$; \item[{\rm(c)}] the complete bipartite graph $K_{2,n-2}$; \item[{\rm(d)}] the complete split graph $\overline{K_{n-2}} \vee K_2 $. \end{enumerate} \end{prop} \begin{proof} It is straightforward to check that the twin number of the four graphs displayed in Figure \ref{bpn-2} is $n-2$. Conversely, suppose that $G$ is a graph such that $\tau (G)=n-2$. 
Let $x,y\in V$ be such that $W=V\setminus \{x,y\}$ is the $\tau$-set of $G$. Since $G$ is connected, we may suppose without loss of generality that $W\subseteq N(x)$. We distinguish two cases. \noindent \textbf{Case 1}: $G[W] \cong K_{n-2}$. If $xy\notin E$, then $N(y)=W$, and thus $G\cong \overline{K_2}\vee K_{n-2}$. If $xy\in E$, then $N(y)=\{x\}$, as otherwise $G\cong K_{n}$, a contradiction. Thus, $G\cong K_1\vee (K_{n-2}+K_1)$. \noindent \textbf{Case 2}: $G[W] \cong \overline{K_{n-2}}$. If $xy\notin E$, then $N(y)=W$, and thus $G\cong K_{2,n-2}$. If $xy\in E$, then $W\subseteq N(y)$, as otherwise $G\cong K_{1,n-1}$, a contradiction. Hence, $G\cong \overline{K_{n-2}} \vee K_2 $. \end{proof} \section{Twin number versus partition dimension}\label{sec.tnmd} This section, consisting of three subsections, is devoted to obtaining relations between the partition dimension $\beta_p(G)$ and the twin number $\tau(G)$ of a graph $G$. In the first subsection, a realization theorem involving both parameters is presented, without any further restriction than the inequality $\tau(G) \le \beta_p(G)$. The second subsection is devoted to studying the parameter $\beta_p(G)$ when $G$ is a graph of order $n$ with ``few'' twin vertices, to be more precise, such that $\tau(G)\le \frac{n}{2}$. Finally, the last subsection examines $\beta_p(G)$ whenever $G$ is a graph for which $\tau(G)> \frac{n}{2}$. \subsection{Realization Theorem for trees} A complete $k$-ary tree of height $h$ is a rooted tree whose internal vertices have $k$ children and whose leaves are at distance $h$ from the root. Let $T(k,2)$ denote the complete $k$-ary tree of height 2. Suppose that $x$ is the root, $x_1,\dots ,x_k$ are the children of $x$, and $x_{i1},\dots ,x_{ik}$ are the children of $x_i$ for any $i\in \{ 1,\dots ,k\}$ (see Figure \ref{fig:kary}(a)). \begin{prop}\label{prop.kary} For any integer $k\ge 2$, $\tau (T(k,2))=k$ and $\beta_p(T(k,2))=k+1$. \end{prop} \begin{proof} Certainly, $\tau (T(k,2))=k$, and thus $\beta_p(T(k,2))\ge k$. Suppose that $\beta_p(T(k,2))=k$ and $\Pi=\{ S_1,\dots ,S_k \}$ is a locating partition of size $k$. In such a case, for every $i\in \{ 1,\dots ,k\}$ the vertices $x_{i1},\dots ,x_{ik}$ are twins, and thus each one belongs to a distinct part of $\Pi$. So, if $x_r,x_s\in S_i$ for some pair $r,s\in \{1,\dots ,k\}$, with $r\not=s$, then $r(x_r|\Pi)=r(x_s|\Pi)=(1,\dots,1,\underset{i)}0,1,\dots ,1)$, which is a contradiction. Hence, the vertices $x_1,\dots ,x_k$ must belong to distinct parts of $\Pi$. We may assume that $x_i\in S_i$, for every $i\in \{ 1,\dots ,k\}$. Thus, if $x$ belongs to the part $S_i$, then $r(x|\Pi)=r(x_i|\Pi)=(1,\dots,1,\underset{i)}0,1,\dots ,1)$, which is a contradiction. Hence, $\beta_p(T(k,2))\ge k+1$. Finally, consider the partition $\Pi=\{ S_1,\dots ,S_k, S_{k+1} \}$ such that $S_{k+1}=\{ x \}$ and, for any $i\in \{ 1,\dots ,k\}$, $S_i=\{ x_i,x_{1i},x_{2i},\dots ,x_{ki}\}$. 
Then, for every $u\in V(T(k,2))$ and for every $i,j,h\in \{ 1,\dots , k\}$ such that $j<i<h$: \begin{center} $r(u|\Pi) = \begin{cases} (2,\ldots,2,\overset{j)}{1}, 2,\ldots,2,\overset{i)}{0},2,\ldots,2,\overset{h)}{2},2,\ldots,2,2) & \text{if } u=x_{ij}\\ (2,\ldots,2,\overset{}{2}, \hspace{.1cm}2,\ldots,2,\overset{}{0},\hspace{.03cm}2,\ldots,2,\overset{}{2},\hspace{.07cm}2,\ldots,2,2) & \text{if } u=x_{ii} \\ (2,\ldots,2,\overset{}{2}, \hspace{.1cm}2,\ldots,2,\overset{}{0},\hspace{.03cm}2,\ldots,2,\overset{}{1},\hspace{.07cm}2,\ldots,2,2) & \text{if } u=x_{ih} \\ (1,\ldots,1,\overset{}{1}, \hspace{.1cm}1,\ldots,1,\overset{}{0},\hspace{.03cm}1,\ldots,1,\overset{}{1},\hspace{.07cm}1,\ldots,1,1) & \text{if } u=x_i \end{cases}$ \end{center} Therefore, $\Pi$ is a locating partition, implying that $\beta_p(T(k,2))= k+1$. \end{proof} \begin{figure}[ht] \begin{center} \includegraphics[width=1\textwidth]{kary2} \caption{The trees (a) $T(k,2)$ and (b) $T^*(k,h)$.} \label{fig:kary} \end{center} \end{figure} For any $k\ge 1$ and $h\ge 1$, let $T^*(k,h)$ denote the tree of order $k+2+2(h-1)^2$ defined as follows (see Figure \ref{fig:kary}(b)): $$V(T^*(k,h))= \{ x,z \} \cup \{ z_1,\dots ,z_k\} \cup \{ x_{(i,j)}: 1\le i,j\le h-1 \} \cup \{ y_{(i,j)}: 1\le i,j\le h-1 \},$$ $$E(T^*(k,h))= \{ x x_{(i,j)} : 1\le i,j\le h-1\}\cup \{ x_{(i,j)}y_{(i,j)}: 1\le i,j\le h-1 \} \cup \{ xz, zz_1,\dots ,zz_k\}.$$ \begin{prop}\label{prop.tab} Let $k,h$ be integers such that $k\ge 1$ and $h\ge k+2$. Then, $\tau (T^*(k,h))=k$ and $\beta_p(T^*(k,h))=h$. \end{prop} \begin{proof} Certainly, $\tau (T^*(k,h))=k$. Let $\beta_p(T^*(k,h))=t$. Next, we show that $t\ge h$. Let $\Pi=\{ S_1,\dots ,S_t\}$ be a locating partition of $T^*(k,h)$. If there exist two distinct pairs $(i,j)$ and $(i',j')$ such that the vertices $x_{(i,j)},x_{(i',j')}$ are in the same part of $\Pi$ and $y_{(i,j)},y_{(i',j')}$ are in the same part, then $r(x_{(i,j)}|\Pi)=r(x_{(i',j')}|\Pi)$, which is a contradiction. Notice that this tree contains $(h-1)^2$ pairs of vertices of the type $(x_{(i,j)},y_{(i,j)})$, and if $t\le h-2$, then at most $(h-2)^2$ such pairs can avoid the preceding condition. Thus, $t\ge h-1$. Moreover, if $t= h-1$, then for every pair $(m,n)\in \{ 1,\dots ,h-1\}^2$, there exists a pair $(i,j)\in \{ 1,\dots ,h-1\}^2$ such that $x_{(i,j)}\in S_m$ and $y_{(i,j)}\in S_n$. So, by symmetry, we may assume without loss of generality that $x\in S_1$. Consider the vertices $x_{(i,j)},y_{(i,j)},x_{(i',j')},y_{(i',j')}$ such that $x_{(i,j)}\in S_2$, $y_{(i,j)}\in S_1$ and $x_{(i',j')}\in S_2$, $y_{(i',j')}\in S_2$. Then $r(x_{(i,j)}|\Pi)=r(x_{(i',j')}|\Pi)=(1,0,2,\dots,2)$, which is a contradiction. Hence, $t\ge h$. To prove the equality $t= h$, consider the partition $\Pi=\{S_1,\dots ,S_h\}$ such that: $$\left. \begin{array}{ll} S_i=\{ x_{(i, m)} : 1\le m \le h-1\} \cup \{ y_{(n,i)} : 1\le n\le h-1 \}\cup \{ z_i\}, \phantom{i}\hbox{ if $1\le i \le k$}\\ S_i=\{ x_{(i, m)} : 1\le m \le h-1\} \cup \{ y_{(n,i)} : 1\le n\le h-1 \}, \phantom{xxxxxi}\hbox{ if $k< i \le h-1$} &\\ S_h=\{ x,z \}. & \end{array} \right. $$ Let $i\in \{ 1,\dots , h-1\}$. 
Then, for every $m,n\in \{ 1,\dots , h-1\}$, $m,n\not= i$: \begin{center} $r(u|\Pi) = \begin{cases} (2,\ldots,2,\overset{i)}{0},2,\ldots,2,\overset{m)}{1},2,\ldots,2,1) & \text{if } u=x_{(i,m)}\\ (2,\ldots,2,{0},2,\ldots,2,{\, 2\,},2,\dots,2,1) & \text{if } u=x_{(i,i)}\\ \end{cases}$ $r(u|\Pi) = \begin{cases} (3,\ldots,3,\overset{i)}{0},3,\ldots,3,\overset{n)}{1},3,\dots,3,2) & \text{if } u=y_{(n,i)}\\ (3,\ldots,3,{0},3,\ldots,3,{3},3,\dots,3,2) & \text{if } u=y_{(i,i)} \end{cases}$ \end{center} Therefore, $r(u|\Pi)\not= r(v|\Pi)$ if $u,v\in \{ x_{(i, m)} : 1\le m \le h-1\} \cup \{ y_{(n,i)} : 1\le n\le h-1 \}$ and $u\not= v$. Moreover, it is straightforward to check that, if $i\in \{ 1,\dots , k\}$, then for every $u\in S_i$, $u\not= z_i$, we have $$r(z_i|\Pi)=(2,\ldots,2,\overset{i)}{0},2,\ldots,\overset{k)}{2},3,\dots,3,1)\not= r(u|\Pi)\, .$$ Finally, for $x,z\in S_h$, we have $$r(x|\Pi)=(1,\ldots,\overset{k)}{1},1,\ldots,1,0)\not= (1,\ldots,\overset{k)}{1},2,\dots,2,0)=r(z|\Pi).$$ Therefore, $\Pi$ is a locating partition, implying that $\beta_p(T^*(k,h))=h$. \end{proof} \begin{thm} Let $a,b$ be integers such that $1\le a\le b$. Then, there exists a tree $T$ such that $\tau (T)=a$ and $\beta_p(T)=b$. \end{thm} \begin{proof} For $a=b=1$, the trivial graph $P_1$ satisfies $\tau (P_1)=\beta_p(P_1)=1$. For $a=b\ge 2$, consider the star $K_{1,a}$. For $a=1$ and $b=2$, take the path $P_4$. If $2\le a$ and $b=a+1$, consider the tree $T(a,2)$ studied in Proposition \ref{prop.kary}. Finally, if $a \ge 1$ and $b\ge a+2$, take the tree $T^*(a,b)$ analyzed in Proposition \ref{prop.tab}. \end{proof} \subsection{Twin number at most half the order} In this subsection, we approach the case when $G$ is a graph of order $n$ such that $\tau (G)=\tau \le \frac{n}{2}$. Concretely, we prove that, in such a case, $\beta_p(G)\le n-3$. \begin{lem}\label{lem.completempty} Let $D$ be a subset of vertices of size $k\ge 3$ of a graph $G$ such that $G[D]$ is neither complete nor empty. Then, there exist at least three different vertices $u,v,w\in D$ such that $uv\in E(G)$ and $uw\notin E(G)$. \end{lem} \begin{proof} If $G[D]$ is neither complete nor empty, then there is at least one vertex $u$ such that $1\le \deg_{G[D]}(u) \le k-2$. Let $v$ (resp. $w$) be a vertex adjacent (resp. non-adjacent) to $u$. Then, $u,v,w$ satisfy the desired condition. \end{proof} \begin{lem}\label{lem.grau} If $G$ is a nontrivial graph of order $n$ with a vertex $u$ of degree $k$, then $\beta_p(G)\le n-\min \{k,n-1-k\}$. \end{lem} \begin{proof} Let $N(u)=\{x_1,\dots ,x_k\}$, let $V(G)\setminus N[u]=\{y_1, \dots ,y_{n-1-k}\}$ and let $m=\min \{ k, n-1-k\}$. Take the partition $\Pi= \{ S_1,\dots ,S_m \} \cup \{ \{ z\} : z\notin S_1\cup \ldots \cup S_m \}$, where $S_i=\{x_i,y_i\}$ for $i=1,\dots ,m $. Observe that $\{u\}$ resolves the vertices of $S_i=\{x_i,y_i\}$ for $i=1,\dots ,m $. Therefore, $\Pi$ is a locating partition of $G$, implying that $\beta_p(G)\le |\Pi|=n-m=n-\min \{ k, n-1-k\}$. \end{proof} \begin{cor}\label{cor.grau} If $G$ is a graph of order $n\ge 7$ with at least one vertex $u$ satisfying $3\le \deg(u)\le n-4$, then $\beta_p(G)\le n-3$. \end{cor} As a direct consequence of Theorem \ref{mdpd}, we know that if $G$ is a graph such that ${\rm diam}(G)\ge4$, then $\beta_p(G)\le n-3$. Next, we study the cases ${\rm diam}(G)=3$ and ${\rm diam}(G)=2$. \begin{prop}\label{prop.twinpetitdiam3} Let $G$ be a graph of order $n\ge 9$. If $\tau(G)\le \frac{n}{2}$ and ${\rm diam}(G)=3$, then $\beta_p(G)\le n-3$. 
\end{prop} \begin{proof} By Corollary \ref{cor.grau}, and bearing in mind that $G$ has no universal vertex since its diameter is 3, we may suppose that, for every vertex $w$, $\deg (w)\in \{ 1,2,n-3,n-2\}$. Let $u$ be a vertex of eccentricity $3$. Consider the nonempty subsets $D_i=\{ v : d(u,v)=i \}$ for $i=1,2,3$. If at most one of these three subsets has exactly one vertex, then there exist five distinct vertices $x_1,x_2,x_3,y_{1},y_{2}$ such that, for $i=1,2,3$, $x_i\in D_i$, and such that the vertices $y_1$ and $y_2$ belong to two different sets $D_i$. Consider the partition $\Pi= \{ S_1, S_2 \} \cup \{ \{ z\} : z\notin S_1\cup S_2 \}$, where $S_1=\{ x_1,x_2,x_3\}$ and $S_2=\{y_{1},y_{2}\}$. Then, $\{ u \}$ resolves every pair of vertices in $S_1$ and the vertices in $S_2$. Therefore, $\Pi$ is a locating partition, implying that $\beta_p(G)\le n-3$. Next, suppose that $|D_{i_0}|=n-3$ for exactly one value $i_0\in \{ 1,2,3\}$ and $|D_i|=1$ for $i\not= i_0$. We distinguish two cases. \begin{enumerate}[(1)] \item $G[D_{i_0}]$ is neither complete nor empty. Then, by Lemma \ref{lem.completempty}, there exist vertices $r,s,t\in D_{i_0}$ such that $rs\in E(G)$ and $rt\notin E(G)$. Consider the sets $S_1=\{ s,t \}$ and $S_2=\{x_1,x_2,x_3\}$, where $x_i\in D_i$ for $i=1,2,3$, with the additional condition $S_2\cap \{ r,s,t \}=\emptyset$, which is possible since $|D_{i_0}|\ge 4$. Take the partition $\Pi= \{ S_1, S_2 \} \cup \{ \{ z\} : z\notin S_1\cup S_2 \}$. Observe that $\{ r \}$ resolves the vertices in $S_1$ and $\{ u \}$ resolves every pair of vertices in $S_2$. Therefore, $\Pi$ is a locating partition, implying that $\beta_p(G)\le n-3$. \item $G[D_{i_0}]$ is either complete or empty. We distinguish three cases, depending on for which $i_0\in \{ 1,2,3\}$ we have $|D_{i_0}|=n-3$. \begin{enumerate}[(a)] \item $|D_3|=n-3$. Then, $D_3$ is a twin set with $n-3$ vertices, a contradiction as $n\ge9$. \item $|D_1|=n-3$. Let $v$ be the (unique) vertex of $D_2$. Then $D_1\cap N(v)$ and $D_1\cap \overline{N(v)}$ are twin sets. If $\deg(v)=2$, then $|D_1\cap \overline{N(v)}|=n-4$, a contradiction. If $n-3 \le \deg(v) \le n-2$, then $|D_1\cap N(v)|\ge n-4$, again a contradiction. \item $|D_2|=n-3$. Let $v$ be the (unique) vertex of $D_3$. Then, both $N(v)$ and $D_2 \setminus N(v)$ are twin sets. Notice that $\deg (v)\in \{ 1,2,n-3 \}$. We distinguish cases. \begin{enumerate}[(c.i)] \item If $\deg (v)=1$ (resp. $\deg (v)=n-3$), then $|D_2 \setminus N(v)|=n-4$ (resp. $|N(v)|=n-3$), a contradiction. \item If $\deg (v)=2$, then $|D_2 \setminus N(v)|=n-5$. Let $N(v)=\{a_1,a_2\}$, $D_2\setminus N(v)=\{b_1,\ldots,b_{n-5}\}$ and $D_1=\{x\}$. Take the partition $\Pi= \{ S_1, S_2, S_3 \} \cup \{ \{ z\} : z\notin S_1\cup S_2\cup S_3 \}$, where $S_1=\{a_1,b_1\}$, $S_2=\{a_2,b_2\}$ and $S_3=\{x,b_3\}$. Observe that $\{ v \}$ resolves the vertices in $S_1$ and $S_2$, and $\{ u \}$ resolves the vertices in $S_3$. Therefore, $\Pi$ is a locating partition, implying that $\beta_p(G)\le n-3$. \end{enumerate} \end{enumerate} \end{enumerate} \vspace{-1.0cm}\end{proof} \begin{prop}\label{prop.twinpetitdiam2} Let $G$ be a graph of order $n\ge 9$. If $\tau(G)\le \frac{n}{2}$ and ${\rm diam}(G)=2$, then $\beta_p(G)\le n-3$. \end{prop} \begin{proof} By Corollary \ref{cor.grau}, we may suppose that, for every vertex $w\in V(G)$, $\deg (w)\in \{ 1,2,n-3,n-2,n-1\}$. We distinguish three cases. \begin{enumerate}[(i)] \item \emph{There exists a vertex $u$ of degree $2$.} Consider the subsets $D_1=N(u)=\{ x_1,x_2\}$ and $D_2=\{ v : d(u,v)=2\}$. 
We distinguish two cases. \begin{enumerate}[(1)] \item $G[D_{2}]$ is neither complete nor empty. Then, by Lemma \ref{lem.completempty}, there exist three different vertices $r,s,t\in D_2$ such that $rs\in E(G)$ and $rt\notin E(G)$. Consider two different vertices $y_1,y_2\in D_2\setminus \{ r,s,t \}$ and let $S_1=\{ x_1,y_1\}$, $S_2=\{ x_2,y_2 \}$, $S_3=\{ s,t \}$. Then, $\{ u \}$ resolves the vertices in $S_1$ and in $S_2$, and $\{r \}$ resolves the vertices in $S_3$. Hence, $\Pi = \{ S_1,S_2,S_3 \}\cup \{ \{ z \} : z\notin S_1\cup S_2 \cup S_3 \}$ is a locating partition of $G$. \item $G[D_{2}]$ is either complete or empty. Then the subsets of $D_2$ given by $A=N(x_1)\cap\overline{N(x_2)}\cap D_2$, $B=N(x_1)\cap N(x_2)\cap D_2$ and $C=\overline{N(x_1)}\cap N(x_2)\cap D_2$ are twin sets. We distinguish cases. \begin{enumerate}[(a)] \item If either $\deg(x_1)\le 2$ or $\deg(x_2)\le 2$, then either $|C|\ge n-4$ or $|A|\ge n-4$, in both cases a contradiction as $n\ge9$. \item If both $x_1$ and $x_2$ have degree at least $n-3$, then $0 \le |A| \le 2$, $0 \le |C| \le 2$ and $n-7 \le |B| \le n-3$. We distinguish cases, depending on the size of $B$. \begin{enumerate}[(b.1)] \item $|B| \ge n-4$. Then, $\tau(G) \ge n-4$, a contradiction as $n\ge9$. \item $|B| = n-5 \ge 4$. If $D_2 \cong \overline {K_{n-3}}$, then $\tau(G)=n-4$, as $B\cup \{u\}$ is a (maximum) twin set of $G$, again a contradiction as $n\ge9$. Suppose then that $D_2 \cong K_{n-3}$. Let $A\cup C=\{y_1,y_2\}$ and $\{b_1,b_2,b_3,b_4\}\subseteq B$. Consider the partition $\Pi= \{ S_1, S_2, S_3 \} \cup \{ \{ z\} : z\notin S_1\cup S_2\cup S_3 \}$, where $S_1=\{b_1,y_1\}$, $S_2=\{b_2,y_2\}$ and $S_3=\{b_3,u\}$. Observe that either $\{ x_1\}$ or $\{ x_2\}$ resolves the vertices of $S_1$ and $S_2$. Notice also that $\{ b_4\}$ resolves the vertices in $S_3$. Hence, $\Pi$ is a locating partition of $G$. \item $2 \le n-7 \le |B| \le n-6$. We may assume without loss of generality that $|A|=2$ and $1 \le |C| \le 2$. Let $A=\{a_1,a_2\}$, $\{b_1,b_2\}\subseteq B$ and $\{c_1\}\subseteq C$. Consider the partition $\Pi= \{ S_1, S_2, S_3 \} \cup \{ \{ z\} : z\notin S_1\cup S_2\cup S_3 \}$, where $S_1=\{a_1,b_1\}$, $S_2=\{a_2,b_2\}$ and $S_3=\{x_1,c_1\}$. Observe that $\{ x_2\}$ resolves the vertices of $S_1$ and $S_2$. Notice also that $\{ u\}$ resolves the vertices in $S_3$. Hence, $\Pi$ is a locating partition of $G$. \end{enumerate} \end{enumerate} \end{enumerate} \item \emph{There exists at least one vertex $u$ of degree $1$ and there is no vertex of degree 2.} In this case, the neighbor $v$ of $u$ is a universal vertex. Let $\Omega$ be the set of vertices different from $v$ that are not leaves. Notice that there are at most two vertices of degree 1 in $G$, as otherwise all vertices in $\Omega$ would have degree between $3$ and $n-4$, contradicting the assumption made at the beginning of the proof. If there are exactly two vertices of degree 1, then $|\Omega|=n-3$. In such a case, $\Omega$ induces a complete graph in $G$, as otherwise the non-universal vertices in $G[\Omega]$ would have degree at most $n-4$. So, $\Omega$ is a twin set, implying that $\tau (G) =n-3 > \frac{n}{2} $, a contradiction. Suppose thus that $u$ is the only vertex of degree 1, which means that $\Omega$ contains $n-2$ vertices, all of them of degree $n-3$ or $n-2$. Consider the graph $H=\overline{G}[\Omega]$. Certainly, $H$ has $n-2$ vertices, all of them of degree $0$ or $1$. Let $H_i$ denote the set of vertices of degree $i$ of $H$, for $i=0,1$. Observe that $|H_0|\le \frac{n}{2}$, since $H_0$ is a twin set in $G$. 
Hence, $|H_1|\ge4$, as $n\ge9$ and the size of $H_1$ must be even. We distinguish two cases, depending on the size of $H_1$. \begin{enumerate}[(a)] \item $|H_1|=4$. Notice that $|H_0|\ge3$. Let $\{y_1,y_2,y_3\}\subseteq H_0$ and $H_1=\{x_1,x_2,x_3,x_4\}$ such that $\{x_1x_2,x_3x_4\}\subseteq E(H)$. Consider the partition $ \Pi =\{ S_1,S_2,S_3\} \cup \{ \{ z \} : z\notin S_1\cup S_2 \cup S_3 \}$, where $S_1= \{x_1,y_1\}$, $S_2=\{ x_3,y_3\}$ and $S_3=\{ u,v\}$. Observe that $d_G(x_2,x_1)=2\not= 1=d_G(x_2,y_1)$, $d_G(x_4,x_3)=2\not= 1=d_G(x_4,y_3)$, and $d_G(y_2,u)=2\not= 1=d_G(y_2,v)$. Hence, $\{ x_2 \}$ resolves the vertices in $S_1$, $\{ x_4 \}$ resolves the vertices in $S_2$ and $\{ y_2 \}$ resolves the vertices in $S_3$. Therefore, $\Pi$ is a locating partition of $G$, implying that $\beta_p(G)\le n-3$. \item $|H_1|\ge6$. Let $\{x_1,x_2,x_3,x_4,x_5,x_6\}\subseteq H_1$ such that $\{x_1x_2,x_3x_4,x_5x_6\}\subseteq E(H)$. Consider the partition $ \Pi =\{ S_1,S_2,S_3\} \cup \{ \{ z \} : z\notin S_1\cup S_2 \cup S_3 \}$, where $S_1= \{u,v\}$, $S_2=\{ x_2,x_4\}$ and $S_3=\{ x_3,x_5\}$. Observe that $d_G(u,x_1)=2\not= 1=d_G(v,x_1)$, $d_G(x_2,x_1)=2\not= 1=d_G(x_4,x_1)$, and $d_G(x_5,x_6)=2\not= 1=d_G(x_3,x_6)$. Hence, $\{ x_1 \}$ resolves the vertices in $S_1$ and in $S_2$, and $\{ x_6 \}$ resolves the vertices in $S_3$. Therefore, $\Pi$ is a locating partition of $G$, implying that $\beta_p(G)\le n-3$. \end{enumerate} \item \emph{There are no vertices of degree at most 2.} In this case, all the vertices of $G$ have degree $n-3$, $n-2$ or $n-1$, that is to say, all the vertices of $\overline{G}$ have degree 0, 1 or 2. Since $G$ has at most $\frac{n}{2}$ pairwise twin vertices, there are at most $\frac{n}{2}$ vertices of degree 0 in $\overline{G}$. Let $H_i$ denote the set of vertices of degree $i$ of $\overline{G}$, for $i=0,1,2$. Let $\Gamma=H_1\cup H_2$ and $H=\overline{G}[\Gamma]$. We distinguish cases, depending on the size of $\Gamma$, showing in each of them a collection of three 2-subsets $S_1,S_2,S_3$ such that the corresponding partition $\Pi =\{ S_1,S_2,S_3 \}\cup \{ \{ z \} : z\notin S_1 \cup S_2\cup S_3 \}$ is a locating partition for $G$, implying thus that $\beta_p(G)\le n-3$. \begin{enumerate}[(a)] \item $|\Gamma|\in \{ 5, 6 \}$. Then, $|H_0|\ge 3$. Let $\{ y_1,y_2,y_3 \}\subseteq H_0$. It is easy to check that in both cases $H$ contains at least three edges either of the form (i) $x_1x_2$, $x_3x_4$, $x_4x_5$ or of the form (ii) $x_1x_2$, $x_3x_4$, $x_5x_6$. Take $S_1=\{ x_1,y_1\}$, $S_2= \{ x_3,y_2\}$, $S_3=\{ x_5,y_3\}$. Notice that, in case (i), $\{ x_2 \}$ resolves the vertices in $S_1$ and $\{ x_4\}$ resolves the vertices in $S_2$ and in $S_3$, and in case (ii), $\{ x_2 \}$ resolves the vertices in $S_1$, $\{ x_4\}$ resolves the vertices in $S_2$ and $\{ x_6\}$ resolves the vertices in $S_3$. \item $|\Gamma|=7$. Then, $|H_0|\ge 2$. Let $\Gamma=\{x_1,x_2,x_3,x_4,x_5,x_6,x_7\}$ such that $\{x_1x_2, x_2x_3, x_4x_5,x_6x_7\}\subseteq E(\overline{G})$ and $x_4x_6\notin E(\overline{G})$, and let $\{y_1,y_2\}\subseteq H_0$. Take $S_1= \{x_1,y_1\}$, $S_2=\{ x_3,y_2\}$ and $S_3=\{x_5,x_6\}$. Observe that $\{ x_2 \}$ resolves the vertices in $S_1$ and in $S_2$, and $\{ x_4 \}$ resolves the vertices in $S_3$. \item $|\Gamma| \ge 8$. Then, all the connected components of $H$ are isomorphic either to a path or to a cycle. We distinguish cases, depending on the number of components of $\overline{G}[\Gamma]$. 
\begin{enumerate}[(c.1)] \item If $H$ is connected, then $H$ contains a path $x_1x_2\dots x_8$ of length 7. Take $S_1=\{x_1,x_2\}$, $S_2=\{x_4,x_5\}$ and $S_3=\{x_7,x_8\}$. Then, $\{ x_3 \}$ resolves the vertices in $S_1$ and in $S_2$, and $\{ x_6 \}$ resolves the vertices in $S_3$. \item If $H$ has 2 connected components, say $C_1$ and $C_2$, we may assume that $|V(C_1)|\ge |V(C_2)|\ge 2$. We distinguish two cases. \begin{enumerate}[(c.2.1)] \item If one of the connected components has at most 3 vertices, then $|V(C_1)|\ge 5$ and $2\le |V(C_2)|\le 3$. If $x_1x_2x_3x_4x_5$ and $y_1y_2$ are paths contained in $C_1$ and $C_2$, respectively, then consider $S_1=\{x_1,y_1\}$, $S_2=\{x_3,y_2\}$ and $S_3=\{x_5,t\}$, where $t$ is any vertex different from $x_1,x_2,x_3,x_4,x_5,y_1,y_2$. Then, it is easy to check that $\{ x_2 \}$ resolves the vertices in $S_1$ and in $S_2$, and $\{ x_4 \}$ resolves the vertices in $S_3$. \item If both connected components have at least 4 vertices, let $x_1x_2x_3x_4$ and $y_1y_2y_3y_4$ be paths contained in $C_1$ and $C_2$, respectively. Take $S_1=\{x_1,y_1\}$, $S_2=\{x_3,y_2\}$ and $S_3=\{x_4,y_4\}$. Then, it is easy to check that $\{ x_2 \}$ resolves the vertices in $S_1$ and in $S_2$, and $\{ y_3 \}$ resolves the vertices in $S_3$. \end{enumerate} \item If $H$ has 3 connected components, say $C_1$, $C_2$ and $C_3$, then we may assume that $|V(C_1)|\ge 3$ and $|V(C_1)|\ge |V(C_2)|\ge |V(C_3)|\ge 2$. Let $x_1x_2x_3$, $y_1y_2$ and $z_1z_2$ be paths contained in $C_1$, $C_2$ and $C_3$, respectively. Take $S_1=\{x_1,y_1\}$, $S_2=\{x_3,y_2\}$ and $S_3=\{z_2,t\}$, where $t$ is a vertex in $C_1\cup C_2$ different from $x_1,x_2,x_3,y_1,y_2$, which exists because $|\Gamma|\ge 8$. Then, it is easy to check that $\{ x_2 \}$ resolves the vertices in $S_1$ and in $S_2$, and $\{ z_1 \}$ resolves the vertices in $S_3$. \item If $H$ has at least 4 connected components, say $C_1$, $C_2$, $C_3$ and $C_4$, then we may assume that $|V(C_1)|\ge |V(C_2)|\ge |V(C_3)|\ge |V(C_4)|\ge 2$. Let $x_1x_2$, $y_1y_2$, $z_1z_2$ and $t_1t_2$ be edges of $C_1$, $C_2$, $C_3$ and $C_4$, respectively. Take $S_1=\{x_1,y_1\}$, $S_2=\{y_2,z_2\}$ and $S_3=\{t_1,w\}$, where $w$ is a vertex in $C_1\cup (V\setminus \Gamma)$ different from $x_1,x_2$, which exists since $G$ has order at least $9$. Then, it is easy to check that $\{ x_2 \}$ resolves the vertices in $S_1$, $\{ z_1 \}$ resolves the vertices in $S_2$, and $\{ t_2 \}$ resolves the vertices in $S_3$. \end{enumerate} \end{enumerate} \end{enumerate} \vspace{-1.2cm}\end{proof} As a consequence of Theorem \ref{mdpd}, Proposition \ref{prop.twinpetitdiam3} and Proposition \ref{prop.twinpetitdiam2}, the following result is obtained. \begin{thm}\label{thm.twinpetit} Let $G$ be a graph of order $n\ge 9$. If $\tau(G)\le \frac{n}{2}$, then $\beta_p(G)\le n-3$. \end{thm} \subsection{Twin number greater than half the order} In this subsection, we focus our attention on the case when $G$ is a nontrivial graph of order $n$ such that $\tau (G)=\tau > \frac{n}{2}$. Notice that in such graphs there is a unique $\tau$-set $W$. Among other results, we prove that, in such a case, $\displaystyle \tau (G)\le \beta_p(G) \le \frac{n+\tau(G)}{2}$. \begin{prop}\label{cal} Let $G$ be a graph of order $n$, other than $K_n$. If $W$ is a $\tau$-set such that $G[W]\cong {K_{\tau}}$, then $\beta_p(G)\ge \tau(G) +1$. \end{prop} \begin{proof} Assume, to get a contradiction, that $\Pi=\{S_1,S_2,\ldots,S_{\tau}\}$ is a locating partition of $G$. 
If $W=\{w_1,w_2,\ldots,w_{\tau}\}$, then we can assume without loss of generality that, for every $i\in\{1,\ldots,\tau\}$, $w_i\in S_i$. Let $v$ be a vertex of $N(W)\setminus W$. Take $j\in\{1,\ldots,\tau\}$ such that $\{w_j,v\}\subseteq S_j$. Certainly, $r(v|\Pi)=(1,\dots, 1,\underset{j)}{0}, 1, \ldots, 1)=r(w_j|\Pi)$, a contradiction. \end{proof} \begin{thm}\label{kmedios} Let $G=(V,E)$ be a graph of order $n$ such that $\frac{n}{2} < \tau (G)=\tau= n-k$ and let $W$ be its $\tau$-set. If $G[W]\cong {K_{\tau}}$, then $\beta_p(G)\le n-k/2$. \end{thm} \begin{proof} Let $W=\{ w_1,\dots , w_{\tau}\}$, $W_1=N(W)\setminus W=\{ x\in V(G) : d(x,W)=1 \}$ and $W_2=V\setminus N[W]=\{ x\in V(G) : d(x,W)\ge 2 \}$, and denote $r=|W_1|$ and $t=|W_2|$. Observe that $\{ W , W_1 , W_2 \} $ is a partition of $V(G)$ and that $k=r+t$. Since $W$ is a set of twin vertices, we have that $xy\in E(G)$ for all $x\in W$ and $y\in W_1$. \begin{figure}[htb] \begin{center} \includegraphics[width=0.38\textwidth]{theorem6} \caption{In this figure, $W_1=N(W)\setminus W$ and $W_2=V\setminus N[W]$.} \label{partsofpi} \end{center} \end{figure} Consider the subsets $U_1=\{ x\in W_1 : \deg_{G[W_1]}(x)=r-1\}$ and $U_2=W_1\setminus U_1$ of $W_1$. If $x\in U_1$, then there exists at least one vertex $y\in W_2$ such that $x y\in E(G)$, as otherwise $x$ would be in $W$. Let us assign to each vertex $x\in U_1$ one vertex $y(x)\in W_2$ such that $x \, y(x)\in E(G)$ and consider the set $A_2=\{ y(x) : x\in U_1\}\subseteq W_2$. Observe that for different vertices $x,x'\in U_1$, the vertices $y(x)$, $y(x')$ are not necessarily different. By construction, $|A_2|\le |U_1|$. Next, consider the subgraph $G[U_2]$ induced by the vertices of $U_2$. If $s=|U_2|$, then by definition, this subgraph has maximum degree at most $s-2$, and hence the complement $\overline{G[U_2]}$ has minimum degree at least 1. It is well known that every graph without isolated vertices contains a dominating set of cardinality at most half the order (see \cite{ore}). Let $A_1$ be a dominating set of $\overline{G[U_2]}$ with $|A_1|\le s/2$. If $(W_1\cup W_2) \setminus (A_1\cup A_2)= \{ v_1, \dots , v_h\}$, we show that the partition $\Pi$ defined as follows is a locating partition of $G$ (see Figure \ref{partsofpi}): $$\Pi = \{ \{ x \} : x\in A_1\} \cup \{ \{ y \} : y\in A_2\} \cup \{ \{ w_i,v_i \} : 1\le i\le h\} \cup \{ \{ w_i \} : h+1 \le i \le \tau\} . $$ Observe that $\Pi$ is well defined since $h< k< \frac{n}{2} < \tau$. To prove this claim, it is sufficient to show that for every $i\in \{ 1,\dots ,h\}$ there exists a part of $\Pi$ at different distance from $w_i$ and $v_i$. We distinguish the following cases: \begin{enumerate}[i)] \item If $v_i\in U_1$, consider the vertex $y(v_i)\in A_2$ such that $v_i \, y(v_i)\in E(G)$. Then, $ d(w_i, \{ y(v_i) \})= 2\not= 1=d(v_i, \{ y(v_i) \})$. \item If $v_i\in U_2\setminus A_1$, consider a vertex $x\in A_1$ dominating $v_i$ in $\overline{G[U_2]}$, i.e., $x\, v_i\notin E(G)$. Then, $ d(w_i, \{ x \})= 1<d(v_i, \{ x \})$. \item If $v_i\in W_2\setminus A_2$, then $ d(w_i, \{ w_{\tau} \})= 1<2\le d(v_i, \{ w_{\tau} \})$. \end{enumerate} Observe that $|A_2|\le |U_1| =r-s$ and $|A_2|\le |W_2| = t$, so we can deduce that $|A_2|\le (r-s+t)/2$. Therefore, the partition dimension of $G$ satisfies $\beta_p(G) \le |\Pi|=n- |(W_1\cup W_2)\setminus (A_1\cup A_2)|=n-[(r+t)- (|A_1|+ |A_2|)]\le n-k/2$, since $|A_1|+|A_2|\le \frac{s}{2}+\frac{r-s+t}{2}=\frac{k}{2}$. \end{proof} \begin{prop}\label{cal0} Let $G=(V,E)$ be a graph of order $n$ such that $\tau (G)=\tau > \frac{n}{2}$. 
If its $\tau$-set $W$ satisfies $G[W]\cong \overline{K_{\tau}}$, then $\beta_p(G)=\tau$. \end{prop} \begin{proof} Let $W=\{ w_1,\dots ,w_{\tau}\}$, $V\setminus W=\{ v_1,\dots ,v_s\}$ and $N(W)=\{ v_1,\dots ,v_r \}$, where $1\le r\le s< \tau$. By Proposition \ref{twin.list}(7), $\beta_p(G) \ge \tau$. To prove the equality, consider the partition $\Pi =\{S_1,\dots ,S_{\tau} \}$, where $S_i=\{w_i,v_i\}$ if $1\le i\le s$, and $S_i=\{w_i\}$ if $s< i\le \tau$. Observe that for any $i,j\in \{1,\dots ,\tau\}$ with $i\not= j$, $h\in \{1,\dots ,r\}$ and $k\in \{r+1,\dots ,s\}$, $d(w_i,w_j)=2$, $d(v_h,w_j)=1$ and $d(v_k,w_j)\ge 2$. To prove that $\Pi$ is a locating partition of $G$, we distinguish cases.
\textbf{Case 1}: $1\le i\le r$. Then, $d(w_i,S_{\tau})=d(w_i,w_{\tau})=2 \neq 1= d(v_i,w_{\tau})=d(v_i,S_{\tau})$.
\textbf{Case 2}: $r<i\le s$. We distinguish two cases.
\textbf{Case 2.1}: For some $k \in \{1,\ldots, r\}$, $v_iv_k\notin E(G)$. Consider the part $S_k=\{ w_k,v_k\}$. On the one hand, $d(w_i,S_k)=1$ since $d(w_i,v_k)=1$. On the other hand, $d(v_i,S_k)\ge 2$ since $d(v_i,w_k)\ge 2$ and $d(v_i,v_k)\ge 2$. Therefore, $d(w_i,S_{k})\not= d(v_i,S_{k})$ (see Figure \ref{fig.twins}(a)). \begin{figure}[htb] \begin{center} \includegraphics[width=0.7\textwidth]{twins} \caption{In both cases, the part $S_k$ resolves the pair $w_i$, $v_i$. Solid lines hold for adjacent vertices and dashed lines, for non-adjacent vertices. }\label{fig.twins} \end{center} \end{figure}
\textbf{Case 2.2}: Vertex $v_i$ is adjacent to all vertices in $\{ v_1,\dots ,v_r\}$. As $v_i\notin W$, $v_iv_k\in E(G)$ for some $k\in \{r+1,\ldots, s\}$. Consider the part $S_k=\{ w_k,v_k\}$. On the one hand, $d(w_i,S_k)=2$ since $d(w_i,w_k)=2$ and $d(w_i,v_k)\ge 2$. On the other hand, $d(v_i,S_k)=1$ since $d(v_i,v_k)=1$. Therefore, $d(w_i,S_k)\not= d(v_i,S_k)$ (see Figure \ref{fig.twins}(b)). \end{proof}
As a direct consequence of Proposition \ref{cal}, Theorem \ref{kmedios} and Proposition \ref{cal0}, the following result is derived. \begin{thm} \label{txulisimo} Let $G$ be a graph of order $n$, other than $K_n$, such that $\tau(G)=\tau>\frac{n}{2}$. Then, $\tau \le \beta_p(G) \le \frac{n+\tau}{2}$. Moreover, if $W$ is its $\tau$-set, then \begin{enumerate} \item $\beta_p(G)=\tau$ if and only if $G[W]\cong\overline{K_{\tau}}$. \item $\tau < \beta_p(G) \le \frac{n+\tau}{2}$ if and only if $G[W]\cong K_{\tau}$. \end{enumerate} \end{thm} \begin{cor} \label{xulisimo} Let $G$ be a nontrivial graph of order $n$, other than $K_n$, such that $\beta_p(G)=n-h$ and $\tau(G)=\tau>\frac{n}{2}$. Let $W$ be its $\tau$-set. Then, $n-2h \le \tau \le n-h-1$ if and only if $G[W]\cong K_{\tau}$. \end{cor} \begin{cor} \label{f1234} For every $n\ge7$, the graphs $F_1$, $F_2$, $F_3$ and $F_4$, displayed in Figure \ref{pdn-2 wrong}, satisfy $\beta_p(F_i)=n-3$. \end{cor} \section{Partition dimension almost the order} Our aim in this section is to completely characterize the set of all graphs of order $n\ge9$ such that $\beta_p(G)=n-2$. This issue was already approached in \cite{tom08}, but, as remarked in our introductory section, the list of 23 graphs presented for every order $n\ge9$ turned out to be wrong. As was shown in Proposition \ref{twin.list}(8), it is clear that the only graphs whose partition dimension equals their order are the complete graphs.
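Characterizations of this kind can be double-checked by exhaustive computation on small orders. The following Python sketch (an illustration only, not used in our proofs; it assumes the \texttt{networkx} library) computes $\beta_p(G)$ by brute force, directly from the definition of a locating partition.
\begin{verbatim}
import networkx as nx

def partitions(elems, k):
    # all partitions of the list elems into exactly k nonempty parts
    if len(elems) == k:
        yield [[e] for e in elems]
        return
    if k == 1:
        yield [list(elems)]
        return
    first, rest = elems[0], elems[1:]
    for p in partitions(rest, k - 1):      # first forms a singleton part
        yield [[first]] + p
    for p in partitions(rest, k):          # first joins an existing part
        for i in range(len(p)):
            yield p[:i] + [[first] + p[i]] + p[i + 1:]

def is_locating(G, parts, dist):
    # a partition is locating iff the representations r(v|Pi) are distinct
    codes = set()
    for v in G:
        code = tuple(min(dist[v][u] for u in S) for S in parts)
        if code in codes:
            return False
        codes.add(code)
    return True

def partition_dimension(G):
    # brute-force beta_p for a small connected graph of order >= 2
    dist = dict(nx.all_pairs_shortest_path_length(G))
    verts = list(G)
    for k in range(2, len(verts) + 1):
        if any(is_locating(G, p, dist) for p in partitions(verts, k)):
            return k

print(partition_dimension(nx.star_graph(4)))  # K_{1,4}: prints 4
\end{verbatim}
For instance, for the star $K_{1,4}$ the sketch returns $\beta_p=4=n-1$, in accordance with the value $\beta_p(K_{1,n-1})=n-1$ proved in \cite{ChaSaZh00}.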
The next result, along with Proposition \ref{twin.list}(9) and Proposition \ref{prop.twingran}, allows us to characterize, in a pretty simple way, all connected graphs of order $n$ with partition dimension $n-1$, a result already proved in \cite{ChaSaZh00} for every $n\ge3$. \begin{prop}\label{n-1iff} Let $G$ be a graph of order $n\ge 9$ and twin number $\tau$, and let $W$ be a $\tau$-set. Then, $\beta_p(G)=n-1$ if and only if $G$ satisfies one of the following conditions: \begin{enumerate}[(i)] \item $\tau=n-1$. \item $\tau=n-2$ and $G[W]\cong K_{n-2}$. \end{enumerate} \end{prop} \begin{proof} Suppose that $\beta_p(G)=n-1$. Then, by Theorem \ref{thm.twinpetit}, $\tau>\frac{n}{2}$. Thus, by Theorem \ref{txulisimo} and Corollary \ref{xulisimo}, $n-2 \le \tau \le n-1$ and, if $\tau=n-2$, then $G[W]\cong K_{n-2}$. Conversely, if $\tau=n-1$, i.e., if $G\cong K_{1,n-1}$, then $\beta_p(G)=n-1$. If $\tau=n-2$ and $G[W]\cong K_{n-2}$ then, according to Proposition \ref{cal}, $\beta_p(G)\ge \tau +1=n-1$, which means that $\beta_p(G)=n-1$, as $G$ is not the complete graph. \end{proof} \begin{cor} (\cite{ChaSaZh00})\label{pretty} Let $G$ be a graph of order $n\ge 9$. Then, $\beta_p(G)=n-1$ if and only if $G$ is one of the following graphs: \begin{enumerate} \item the star $K_{1,n-1}$. \item the complete split graph $K_{n-2} \vee \overline{K_2}$, obtained by removing an edge $e$ from the complete graph $K_n$ (see Figure \ref{bpn-2}(a)). \item the graph $K_1\vee (K_1+K_{n-2})$, obtained by attaching a leaf to the complete graph $K_{n-1}$ (see Figure \ref{bpn-2}(b)). \end{enumerate} \end{cor} Next, we approach the case $\beta_p(G)=n-2$. \begin{defi} {\rm Let $G=(V,E)$ be a graph such that $\tau(G)=\tau$. Let $W$ be a $\tau$-set of $G$ such that $G[W]\cong K_{\tau}$. A vertex $v\in V \setminus W$ is said to be a \emph{$W$-distinguishing vertex} of $G$ if and only if, for every vertex $z \in N(W) \setminus W$, $d(v,z) \neq d(v,W)$.} \end{defi} \begin{lem}\label{lema3} Let $G=(V,E)$ be a nontrivial graph of order $n$ such that $\tau(G)=\tau> \frac{n}{2}$. Suppose that its $\tau$-set $W$ satisfies $G[W]\cong K_{\tau}$. Then, the following statements hold: \begin{enumerate} \item[(a)] If $G$ contains a $W$-distinguishing vertex, then $\beta_p(G)=\tau+1$. \item[(b)] If $G[N(W)\setminus W]$ contains an isolated vertex, then $\beta_p(G)=\tau+1$. \item[(c)] If $|N(W) \setminus W|=1$, then $\beta_p(G)=\tau+1$. \item[(d)] If $G[N(W)\setminus W]$ contains a universal vertex $v$, then $v$ is adjacent to at least one vertex of $V\setminus N [W]$. \end{enumerate} \end{lem} \begin{proof} \begin{enumerate} \item[(a)] Let $v$ be a $W$-distinguishing vertex. Set $W=\{w_1,w_2,\ldots,w_{\tau}\}$ and $V\setminus W=\{v, z_1,\ldots,z_{r}\}$, where $r=n-\tau-1<\tau$. Take the partition $\Pi=\{S_1,\ldots,S_{\tau+1}\}$, where $ S_1=\{w_1,z_1\}, \ldots, S_r=\{w_r,z_r\}, S_{r+1}=\{w_{r+1}\}, \ldots, S_{\tau}=\{w_{\tau}\}, S_{\tau+1}=\{v\}$. Notice that if $z_i\in N(W)\setminus W$, then $$d(z_i,S_{\tau +1})=d(z_i,v)\not= d(v,W)=d(v,w_i)=d(w_i,S_{\tau +1}),$$ and if $z_i\notin N(W)$, then for any $j\in \{r+1,\dots,\tau \}$ (so that $S_j=\{w_j\}$) we have $$d(z_i,S_{j})=d(z_i,w_j)>1= d(w_i,w_j)=d(w_i,S_{j}).$$ Thus, $r(w_i|\Pi)\neq r(z_i|\Pi)$ for every $i\in\{1,\ldots,r\}$, and consequently $\Pi $ is a locating partition of $G$. Hence, $\beta_p(G)\le\tau+1$, and Proposition \ref{cal} gives the equality. \item[(b)] If $v$ is an isolated vertex in $G[N(W)\setminus W]$, then for every vertex $z \in N(W) \setminus W$ with $z\neq v$, $d(v,z)=2 \neq 1 = d(v,W)$.
Hence, $v$ is a $W$-distinguishing vertex of $G$ and, according to item (a), $\beta_p(G)=\tau +1$. \item[(c)] In this case, the only vertex in $N(W)\setminus W$ is isolated in $G[N(W)\setminus W]$ and, according to item (b), $\beta_p(G)=\tau +1$. \item[(d)] Notice that if $v$ is universal in $G[N(W)\setminus W]$ and has no neighbor in $V\setminus N[W]$, then $v$ would be a twin of the vertices in $W$, which is a contradiction. \end{enumerate} \vspace{-.8cm}\end{proof}
As a straightforward consequence of item (b) of the previous lemma, the following holds. \begin{cor} \label{f56} For every $n\ge 9$, the graphs $F_5$ and $F_6$, displayed in Figure \ref{pdn-2 wrong}, satisfy $\beta_p(F_i)=n-3$. \end{cor} \begin{lem}\label{lema4} Let $G=(V,E)$ be a graph of order $n\ge9$ such that $\tau(G)=\tau=n-4$ and its $\tau$-set $W$ satisfies $G[W]\cong K_{\tau}$. If $|N(W) \setminus W|=2$, then $\beta_p(G)=n-3$. \end{lem} \begin{proof} Let $W=\{w_1,\ldots,w_{n-4}\}$, $V\setminus W=\{z_1,z_2,z_3,z_4\}$ and $N(W)\setminus W=\{z_1,z_2\}$. If $z_1z_2\not\in E$, then both $z_1$ and $z_2$ are isolated vertices in $G[N(W)\setminus W]$. Hence, by Lemma \ref{lema3}(b), $\beta_p(G)=\tau(G)+1=n-3$. Suppose that $z_1z_2\in E$. According to Lemma \ref{lema3}(d), both $z_1$ and $z_2$ are adjacent to at least one vertex of $V \setminus N(W)=\{z_3,z_4\}$. If, for some $i\in\{3,4\}$, $\{z_1z_i,z_2z_i\}\subseteq E$, then $z_i$ is a $W$-distinguishing vertex and hence, by Lemma \ref{lema3}(a), $\beta_p(G)=\tau+1=n-3$. Assume thus that $\{z_1z_3,z_2z_4\}\subseteq E$ and $\{z_1z_4,z_2z_3\}\cap E =\emptyset$ (see Figure \ref{fig_lemas7y8}(a)). Notice that none of the vertices of $V\setminus W$ is $W$-distinguishing. Take the partition $\Pi=\{S_1,\ldots,S_{n-3}\}$, where $$ S_1=\{w_1,z_1\}, S_2=\{w_2,z_2\}, S_3=\{z_3,z_4\}, S_4=\{w_3\},\ldots, S_{n-3}=\{w_{n-4}\}.$$ Clearly, $\Pi$ is a locating partition of $G$, since $d(w_1,S_3)=2 \not= 1=d(z_1,S_3)$, $d(w_2,S_3)=2 \not= 1=d(z_2,S_3)$ and $d(z_3,S_1)=1 \not= 2=d(z_4,S_1)$. Finally, from Proposition \ref{cal}, we derive that $\beta_p(G)=n-3$. \end{proof}
As a straightforward consequence of this lemma, the following holds. \begin{cor} \label{f78} For every $n\ge 9$, the graphs $F_7$ and $F_8$, displayed in Figure \ref{pdn-2 wrong}, satisfy $\beta_p(F_i)=n-3$. \end{cor} \begin{figure}[t] \begin{center} \includegraphics[width=0.8\textwidth]{4vertices} \caption{In the three cases, $\tau=n-4$ and $G[W]\cong K_{n-4}$. Solid lines indicate adjacent vertices, while dashed lines are optional.} \label{fig_lemas7y8} \end{center} \end{figure} \begin{figure}[!hbt] \begin{center} \includegraphics[width=0.99\textwidth]{laverdad} \caption{These are all the graph families such that $\beta_p(G)=n-2$. If $i\in\{1,2\}$, then $\tau(H_i)=n-2$. If $i\in\{3,\ldots,10\}$, then $\tau(H_i)=n-3$. If $i\in\{11,\ldots,15\}$, then $\tau(H_i)=n-4$. }\label{taun234} \end{center} \end{figure} \begin{thm} \label{n-2 true} Let $G$ be a graph of order $n\ge9$.
Then, $\beta_p(G)=n-2$ if and only if $G$ belongs to the following family $\{H_i\}_{i=1}^{15}$ (see Figure \ref{taun234}): \begin{center} \begin{tabular}{lll} $H_1 \cong K_{2,n-2}$ & $H_2\cong \overline{K_{n-2}} \vee K_2$ & $H_3\cong K_{n-3}\vee (K_2+K_1)$ \\ $H_4\cong K_{n-3}\vee \overline{K_{3}} $ & $H_5\cong (K_{n-3}+K_1)\vee K_2 $ & $H_6\cong (K_{n-3}+K_1)\vee \overline{K_2} $ \\ $ H_7\cong H_6 - e_1 $ & $ H_8\cong (K_{n-3} + K_2) \vee K_1 $ & $ H_9\cong H_8 - e_2 $\\ $ H_{10}\cong (K_{n-3}+\overline{K_2})\vee K_1 $& $ H_{11}\cong K_{n-4}\vee C_4 $ & $ H_{12}\cong K_{n-4}\vee P_4 $ \\ $ H_{13}\cong K_{n-4}\vee 2\, K_2 $ & $ H_{14}\cong (K_{n-4} + K_1) \vee P_3 - e'$ & $ H_{15}\cong H_{14} - e_3 $ \\ \end{tabular} \end{center} \vspace{.2cm}\noindent where $e'$ is an edge joining the vertex of $K_1$ with an endpoint of $P_3$ in $(K_{n-4} + K_1)\vee P_3$; $e_1$ is an edge joining the vertex of $K_1$ with a vertex of $\overline {K_2}$ in $H_6$; $e_2$ is an edge joining the vertex of $K_1$ with a vertex of ${K_2}$ in $H_8$; and $e_3$ is an edge joining the vertex of $K_1$ with an endpoint of ${P_3}$ in $H_{14}$. \end{thm} \begin{proof} ($\Longleftarrow$) First suppose that $G$ is a graph belonging to the family $\{H_i\}_{i=1}^{15}$. We distinguish three cases.
\noindent \textbf{Case 1}: $G\in\{H_1,H_2\}$. Hence, $\tau(G)=n-2$ and its $\tau$-set $W$ satisfies $G[W]\cong \overline{K_{n-2}}$. Thus, according to Proposition \ref{cal0}, $\beta_p(G)=n-2$.
\noindent \textbf{Case 2}: $G\in\{H_i\}_{i=3}^{10}$. Hence, $\tau(G)=n-3$ and its $\tau$-set $W$ satisfies $G[W]\cong K_{n-3}$. Thus, according to Proposition \ref{cal}, $\beta_p (G) \ge \tau(G)+1=n-2$. Furthermore, from Proposition \ref{n-1iff} we deduce that $\beta_p (G)=n-2$.
\noindent \textbf{Case 3}: $G\in\{H_i\}_{i=11}^{15}$. Clearly, for all these graphs ${\rm diam}(G)=2$, $\tau(G)=n-4$ and the $\tau$-set $W$ satisfies $G[W]\cong K_{n-4}$. According to Proposition \ref{cal} and Corollary \ref{pretty}, $n-3 \le \beta_p(G) \le n-2$. Suppose that there exists a locating partition $\Pi=\{S_1, \dots, S_{n-3}\}$ of cardinality $n-3$. If $W=\{w_1, \dots, w_{n-4}\}$, assume that, for every $i\in\{1,\ldots,n-4\}$, $w_i\in S_i$. We distinguish two cases.
\textbf{Case 3.1}: $G \in \{H_{11},H_{12},H_{13}\}$. Note that $N(W)=V(G)$ and in all cases there is a labelling $V(G) \setminus W=\{a_1,a_2,b_1,b_2\}$ such that $d(a_1,a_2)=1$, $d(b_1,b_2)=1$, $d(a_1,b_1)=2$ and $d(a_2,b_2)=2$ (see Figure \ref{fig_lemas7y8}(b)). Observe that $\vert S_{n-3}\vert=1$, since $r(z,\Pi)=(1, \dots,1,0)$ for every $z\in\{a_1,a_2,b_1,b_2\}\cap S_{n-3}$. Notice also that $\vert S_{i}\vert\leq2$ for $i\in \{1, \dots ,n-4\}$, as for every $x\in S_i$, we have $r(x,\Pi)=(1,\ldots,1,\overset{i)}{0},1,\ldots,1,h)$, with $h\in\{1,2\}$. Hence, there are exactly three sets of $\Pi$ of cardinality 2. We can suppose without loss of generality that $S_1=\{w_1,x\}$, $S_2=\{w_2,y\}$, $S_3=\{w_3,z\}$ and $S_{n-3}=\{t\}$, where $\{x,y,z,t\}=\{a_1,a_2,b_1,b_2\}$. Since $d(w_1,t)=1$ and $r(x,\Pi)\neq r(w_1,\Pi)$, we must have $d(t,x)=2$, and analogously $d(t,y)=d(t,z)=2$. This is a contradiction, since $t$ is adjacent to one of the vertices $x,y,z$.
\textbf{Case 3.2}: $G \in \{H_{14},H_{15}\}$. Note that $\vert N(W) \setminus W\vert =3$ and that there is a labelling $V(G)\setminus W=\{a,b,c,z\}$ such that $N(W) \setminus W=\{a,b,c\}$, $d(a,b)=d(b,c)=d(b,z)=1$, $d(c,a)=d(c,z)=2$ and $d(a,z)\in\{1,2\}$ (see Figure \ref{fig_lemas7y8}(c)). Notice that $\vert S_{n-3}\vert\le 2$, since at most one vertex of $\{a,b,c\}$ can lie in $S_{n-3}$, any such vertex $x$ having $r(x,\Pi)=(1, \dots,1,0)$.
Moreover, $b\notin S_{n-3}$: otherwise, $a,c\notin S_{n-3}$ (their representation would equal $r(b,\Pi)=(1,\ldots,1,0)$), so $a$ would lie in some part $S_i$ together with $w_i$, and then $r(a,\Pi)=r(w_i,\Pi)$, since $d(a,b)=d(w_i,b)=1$ and both vertices are at distance 1 from every other part, a contradiction. So, we can assume without loss of generality that $\{w_1,b\}\subseteq S_1$. If $\{a,c\}\cap S_{n-3}\neq \emptyset$, then $r(w_1,\Pi)=r(b,\Pi)$, since both vertices are at distance 1 from every part other than $S_1$, a contradiction. Consequently, $S_{n-3}=\{z\}$, and $c$ belongs to some part $S_i$ together with $w_i$. Since $d(c,z)=d(w_i,z)=2$ and both vertices are at distance 1 from every other part, we get $r(c,\Pi)=r(w_i,\Pi)$, a contradiction.
($\Longrightarrow$) Now assume that $G$ is a graph such that $\beta_p(G)=n-2$. By Theorem \ref{thm.twinpetit}, $\tau(G)>\frac{n}{2}$, and according to Corollary \ref{xulisimo}, we have $n-4 \le \tau(G) \le n-2$. We distinguish three cases, depending on the value of $\tau(G)$.
\noindent \textbf{Case 1}: $\tau(G)=n-2$. Thus, according to Proposition \ref{prop.twingran} and Corollary \ref{pretty}, $G\in\{H_1,H_2\}$.
\noindent \textbf{Case 2}: $\tau(G)=n-3$. In this case, from Proposition \ref{cal0} we deduce that the $\tau$-set $W$ satisfies $G[W]\cong K_{n-3}$. We distinguish three cases, depending on the cardinality of $N(W)\setminus W$.
\textbf{Case 2.1}: $|N(W)\setminus W|=3$. In this case, $G[N(W)\setminus W] \in \{K_3, P_3, K_2+K_1, \overline{K_3} \}$. If $G[N(W)\setminus W]$ is $K_3$ or $P_3$, then $\tau (G)\ge n-2$, a contradiction. If $G[N(W)\setminus W]\cong K_2+K_1$, then $G\cong H_3$, and if $G[N(W)\setminus W]\cong\overline{K_3}$, then $G\cong H_4$.
\textbf{Case 2.2}: $|N(W)\setminus W|=2$. In this case, $G[N(W)\setminus W] \in \{K_2, \overline{K_2} \}$ and $|V\setminus N(W)|=1$. Let $z$ be the vertex in $V\setminus N(W)$. If $G[N(W)\setminus W]\cong K_2$ and $\deg(z)=1$, then $\tau(G)=n-2$, a contradiction. If $G[N(W)\setminus W]\cong K_2$ and $\deg(z)=2$, then $G\cong H_5$. If $G[N(W)\setminus W]\cong \overline{K_2}$ and $\deg(z)=2$, then $G\cong H_6$. Finally, if $G[N(W)\setminus W]\cong \overline{K_2}$ and $\deg(z)=1$, then $G\cong H_7$.
\textbf{Case 2.3}: $|N(W)\setminus W|=1$. Let $N(W)\setminus W=\{x\}$ and $V\setminus N(W)=\{y,z\}$. If $\deg(y)=\deg(z)=2$, then $G\cong H_8$. If $\{\deg(y),\deg(z)\}=\{1,2\}$, then $G\cong H_{9}$. If $\deg(y)=\deg(z)=1$, then $G\cong H_{10}$.
\noindent \textbf{Case 3}: $\tau(G)=n-4$. In this case, from Proposition \ref{cal0} we deduce that the $\tau$-set $W$ satisfies $G[W]\cong K_{n-4}$. Moreover, from Lemmas \ref{lema3} and \ref{lema4}, we deduce that $G$ does not contain any $W$-distinguishing vertex and that $|N(W)\setminus W|\ge 3$. Hence, $3\le |N(W)\setminus W|\le 4$. We distinguish two cases, depending on the cardinality of $N(W)\setminus W$.
\textbf{Case 3.1}: $|N(W)\setminus W|=4$. According to Lemma \ref{lema3}, all vertices of $G[N(W)\setminus W]$ have degree either 1 or 2. Thus, $G[N(W)\setminus W]$ is isomorphic to either $C_4$ or $P_4$ or $2\,K_2$. Hence, $G$ is isomorphic to either $H_{11}$ or $H_{12}$ or $H_{13}$.
\textbf{Case 3.2}: $|N(W)\setminus W|=3$. According to Lemma \ref{lema3}, $G[N(W)\setminus W]$ is either a $C_3$ or a $P_3$. Suppose that $G[N(W)\setminus W]$ is $C_3$. Then, by Lemma \ref{lema3}(d), every vertex of $N(W)\setminus W$ is adjacent to the unique vertex $z$ of $V\setminus N(W)$, a contradiction, since in this case $z$ would be a $W$-distinguishing vertex. Thus, $G[N(W)\setminus W]$ is $P_3$. According to Lemma \ref{lema3}(d), the central vertex $w$ of $P_3$ is adjacent to the unique vertex $z$ of $V\setminus N(W)$. Observe also that one of the remaining two vertices of this path may be adjacent to vertex $z$, but not both, since in this case $z$ would be a $W$-distinguishing vertex.
Hence, $G$ is isomorphic to either $H_{14}$ or $H_{15}$. \end{proof} \section*{Acknowledgements} \vspace{-.3cm} Research partially supported by grants MINECO MTM2015-63791-R, Gen. Cat. DGR 2014SGR46 and MTM2014-60127-P.
\section{Introduction} Studies of the problem of optimally arranging points on a sphere date back more than one hundred years, to when Thomson attempted to explain the periodic table in terms of the ``plum pudding'' model of the atom. Since then, a variety of related problems have been proposed, and some of them remain unsolved \cite{croft:1}. In general, these problems involve finding configurations of points on the surface of a sphere that maximize or minimize some given quantity; some of them are directly relevant to physics or chemistry, where stable configurations tend to minimize some form of energy expression. The problem has the following general form. Let $x_1, x_2, \ldots, x_n$ be points on the unit sphere $S^{m-1}$ of the Euclidean space $\mathbb{R}^m$, and denote \begin{equation} \label{vxn} V(X_n,m,\lambda) = \sum\limits_{1 \leq i < j \leq n} \left| x_i - x_j \right|^\lambda, \end{equation} where $X_n=(x_1,x_2,\ldots,x_n)$, and $\left| x_i-x_j \right|$ denotes the Euclidean distance between $x_i$ and $x_j$. For $\lambda < 0$, denote \begin{equation} \label{vnn} V_1(n,m,\lambda) = \min\limits_{X_n \subset S^{m-1}} V(X_n,m,\lambda), \end{equation} and for $\lambda = 0$, \begin{equation} \label{vn0} V_1(n,m,0)= \min\limits_{X_n \subset S^{m-1}} \sum\limits_{1 \leq i < j \leq n} \log \frac{1}{\left| x_i - x_j \right|}. \end{equation} When $m=3$, this is the 7th problem listed by Steve Smale in \textit{Mathematical Problems for the Next Century} \cite{smale:1, smale:2}. For $\lambda > 0$, denote \begin{equation} \label{vnp} V_2(n,m,\lambda) = \max\limits_{X_n \subset S^{m-1}} V(X_n,m,\lambda). \end{equation} So far as we know, G. P\'{o}lya and G. Szeg\"{o} \cite{polya:1} first studied problems of this type in the 1930s; since then, a number of results about $V_2(n,m,\lambda)$ have been derived. For example, L. Fejes T\'{o}th proved results for the cases $m=2, \lambda=1$ and $n=m+1, \lambda=1$ \cite{toth:1}. E. Hille considered the asymptotic behavior of $V_2(n,m,\lambda)$ as $n \to \infty$ for fixed $m$ and $\lambda$, and gave some results \cite{hille:1}. K. B. Stolarsky proved bounds on $V_2(n,m,\lambda)$ for fixed $m$ and $\lambda$ in \cite{stolarsky:1, stolarsky:2}, and gave some properties of the point distributions attaining $V_2(n,m,\lambda)$ for $m = 2$ and $m=3$ in \cite{stolarsky:3, stolarsky:4, stolarsky:5}. R. Alexander also proved bounds on $V_2(n,3,1)$ in \cite{alexander:1}, and discussed some generalized sums of distances in \cite{alexander:2, alexander:3}. G. D. Chakerian and M. S. Klamkin proved bounds on $V_2(n,m,1)$ in \cite{chakerian:1}. J. Berman and K. Hanes proved a property of the point distribution attaining $V_2(n,3,1)$, and deduced some numerical results in \cite{berman:1}. G. Harman, J. Beck and T. Amdeberhan proved bounds on $V_2(n,m,\lambda)$ in \cite{harman:1, beck:1, amdeberhan:1}. Similar problems were also discussed in \cite{furedi:1, ali:1, saff:1, minghuijiang:1}. For $V_2(5,3,1)$, numerical computations give evidence for the conjecture that it is attained when the 5 points form a bipyramid configuration, in which two points are at the two poles of $S^2$ while the other three points are uniformly distributed on the equator. In this paper, we study this problem via interval arithmetic, and prove the conjecture by computer in a comparatively short time. This provides a different approach to related problems. The main idea of our proof is as follows.
Firstly, we express $V(X_5,3,1)$ as a function under a suitable coordinate system; secondly, we treat a neighborhood of the coordinates of the bipyramid configuration, on which the bipyramid configuration is proved to be the unique maximum of $V(X_5,3,1)$; lastly, we subdivide the remaining domain and prove that the function values on these subdomains are less than the maximum obtained before. This completes the proof of the conjecture. \section{Mathematical descriptions of the problem} \label{5points:math} \subsection{Spherical coordinate system} We choose the spherical coordinate system as shown in Fig. \ref{coordsys}. A point $P$ on $S^2$ is identified by $(1,\phi,\theta)$, where $\phi \in \left[-\frac{\pi}{2},\frac{\pi}{2}\right]$ is the angle from the vector $\overrightarrow{OH}$, i.e., the projection of the vector $\overrightarrow{OP}$ onto the $xoy$-plane, to the vector $\overrightarrow{OP}$, taken positive if the $z$-coordinate of $P$ is positive, and $\theta \in [-\pi, \pi)$ is the angle from the $x$-axis to the vector $\overrightarrow{OH}$, taken positive if the $y$-coordinate of $P$ is positive. \begin{figure}[htbp] \centering \includegraphics[width=0.60\textwidth]{images/coordinate.eps} \caption{The spherical coordinate system} \label{coordsys} \end{figure} According to these definitions, we have the following formulas transforming the spherical coordinates $(1,\phi,\theta)$ into the Cartesian coordinates $(x,y,z)$: \begin{equation} \left\{ \begin{array}{rcl} x & = & \cos(\phi) \cos(\theta), \\ y & = & \cos(\phi) \sin(\theta), \\ z & = & \sin(\phi). \end{array} \right. \end{equation} Considering the spherical symmetry, we can choose the spherical coordinates of the 5 points as follows: \begin{equation} A(1,0,0),B(1,\phi_1,\pi),C(1,\phi_2,\theta_2),D(1,\phi_3,\theta_3),E(1,\phi_4,\theta_4). \end{equation} Thus the sum of the mutual distances of these points is \begin{equation} \begin{split} & f(\phi_1, \phi_2, \theta_2, \phi_3, \theta_3, \phi_4, \theta_4) \\ = & \sqrt {2+ 2\,\cos \left( \phi_{{1}} \right)}+\sqrt {2-2\,\cos \left( \phi_{{2}} \right) \cos \left( \theta_{{2}} \right) }\\ &+\sqrt {2-2\,\cos\left( \phi_{{3}} \right) \cos \left( \theta_{{3}} \right) }+\sqrt {2 -2\,\cos \left( \phi_{{4}} \right) \cos \left( \theta_{{4}} \right) }\\ &+\sqrt {2\,\cos \left( \phi_{{1}} \right) \cos \left( \phi_{{2}} \right) \cos \left( \theta_{{2}} \right) +2-2\,\sin \left( \phi_{{1}} \right) \sin \left( \phi_{{2}} \right) }\\ &+\sqrt {2\,\cos \left( \phi_{{1}} \right) \cos \left( \phi_{{3}} \right) \cos \left( \theta_{{3}} \right) +2-2\,\sin \left( \phi_{{1}} \right) \sin \left( \phi_{{3}} \right) }\\ & +\sqrt {2\,\cos \left( \phi_{{1}} \right) \cos \left( \phi_{{4}} \right) \cos \left( \theta_{{4}} \right) +2-2\,\sin \left( \phi_{{1}} \right) \sin \left( \phi_{{4}} \right) }\\ &+\sqrt {-2\,\cos \left( \phi_{{3}} \right) \cos \left( \phi_{{2}} \right)\cos \left( \theta_{{2}}-\theta_{{3}} \right) +2-2\,\sin \left( \phi_{{2}} \right) \sin \left( \phi_{{3}} \right) }\\ &+\sqrt {-2\,\cos \left( \phi_{{2}} \right) \cos \left( \phi_{{4}} \right)\cos \left( \theta_{{2}}- \theta_{{4}} \right) +2-2\,\sin \left( \phi_{{2}} \right) \sin \left( \phi_{{4}} \right) }\\ &+\sqrt {-2\,\cos \left( \phi_{{3}} \right) \cos \left( \phi_{{4}} \right) \cos \left( \theta_{{3}}-\theta_{{4}} \right) +2-2\, \sin \left( \phi_{{3}} \right) \sin \left( \phi_{{4}} \right) }.
\end{split} \end{equation} \subsection{Bipyramid distribution} \label{sec:bipyramid} The spherical coordinates of 5 points forming a bipyramid distribution are not unique, but the following 5 points do form a bipyramid configuration, \begin{equation} \label{bipycoords} A(1,0,0),B(1,-\dfrac{\pi}{3},\pi),C(1,\dfrac{\pi}{3},\pi),D(1,0,-\dfrac{\pi}{2}), E(1,0,\dfrac{\pi}{2}), \end{equation} as shown in Fig. \ref{bipydistrib}. \begin{figure}[htbp] \centering \includegraphics[width=0.60\textwidth]{images/bipyramid.eps} \caption{The bipyramid distribution} \label{bipydistrib} \end{figure} Denote the corresponding values of $(\phi_1, \phi_2, \theta_2, \phi_3, \theta_3, \phi_4, \theta_4)$ by $$ \Theta_{bp} = (-\dfrac{\pi}{3}, \dfrac{\pi}{3}, \pi, 0, -\dfrac{\pi}{2}, 0, \dfrac{\pi}{2}); $$ then the corresponding value of the function $f$ is \begin{equation} \label{fmax} \begin{split} fmax & = f(\Theta_{bp}) \\ & = 3\,\sqrt {3}+6\,\sqrt {2}+2\\ & \approx 15.68143380, \end{split} \end{equation} and the Hessian matrix of $f$ at $\Theta_{bp}$ is {\footnotesize \begin{equation} \left( \begin {array}{ccccccc} {\dfrac { -\sqrt {3} } { 2 }}&{\dfrac { \sqrt {3} }{ 4 }}&0&{\dfrac { -\sqrt {2} }{ 4 }}&{\dfrac { \sqrt {6} }{ 4 }}&{\dfrac { -\sqrt {2} }{ 4 }}&{\dfrac { -\sqrt {6} }{ 4 }}\\\noalign{\medskip}{\dfrac { \sqrt {3} }{ 4 }}&{\dfrac { -\sqrt {3} }{ 2 }}&0&{\dfrac { -\sqrt {2} }{ 4 }}&{\dfrac { -\sqrt {6} }{ 4 }}&{\dfrac { -\sqrt {2} }{ 4 }}&{\dfrac { \sqrt {6} }{ 4 }}\\\noalign{\medskip}0&0&{\dfrac { -2\,\sqrt {3}-3\,\sqrt {2} }{ 24 }}&{\dfrac { -\sqrt {6} }{ 16 }}&{\dfrac { \sqrt {2} }{ 16 }}&{\dfrac { \sqrt {6} }{ 16 }}&{\dfrac { \sqrt {2} }{ 16 }}\\\noalign{\medskip}{\dfrac { -\sqrt {2} }{ 4 }}&{\dfrac { -\sqrt {2} }{ 4 }}&{\dfrac { -\sqrt {6} }{ 16 }}&{\dfrac { -3\,\sqrt {2}-4 }{ 8 }}&0&{\dfrac { -1 }{ 2 }}&0\\\noalign{\medskip}{ \dfrac { \sqrt {6} }{ 4 }}&{\dfrac { -\sqrt {6} }{ 4 }}&{\dfrac { \sqrt {2} }{ 16 }}&0&{\dfrac { -3\,\sqrt {2}-4 }{ 8 }}&0&{\dfrac { 1 }{ 2 }}\\\noalign{\medskip}{ \dfrac { -\sqrt {2} }{ 4 }}&{\dfrac { -\sqrt {2} }{ 4 }}&{\dfrac { \sqrt {6} }{ 16 }}&{\dfrac { -1 }{ 2 }}&0&{\dfrac { -3\,\sqrt {2}-4 }{ 8 }}&0\\\noalign{\medskip}{ \dfrac { -\sqrt {6} }{ 4 }}&{\dfrac { \sqrt {6} }{ 4 }}&{\dfrac { \sqrt {2} }{ 16 }}&0&{\dfrac { 1 }{ 2 }}&0&{\dfrac { -3\,\sqrt {2}-4 }{ 8 }}\end {array} \right). \end{equation} } This matrix is negative definite, so the bipyramid distribution corresponds to a local maximum of the function $f$. \subsection{Inequality form} As a matter of fact, what we are to prove is the following inequality: \begin{equation} \label{inequality} f(\phi_1, \phi_2, \theta_2, \phi_3, \theta_3, \phi_4, \theta_4) \leq fmax, \quad (\phi_1, \phi_2, \theta_2, \phi_3, \theta_3, \phi_4, \theta_4) \in \mathcal{D}, \end{equation} where $$ \mathcal{D}= \left( [-\frac{\pi}{2},\frac{\pi}{2}], [-\frac{\pi}{2},\frac{\pi}{2}], [-\pi, \pi), [-\frac{\pi}{2},\frac{\pi}{2}], [-\pi, \pi), [-\frac{\pi}{2},\frac{\pi}{2}], [-\pi, \pi) \right), $$ and the equality holds if and only if $(\phi_1, \phi_2, \theta_2, \phi_3, \theta_3, \phi_4, \theta_4)=\Theta_{bp}$. In the remainder of this paper, we prove this inequality according to the following steps. \begin{enumerate} \item Give some restricted conditions and results demonstrating that we only need to prove the inequality over a subdomain of $\mathcal{D}$, i.e., $\mathcal{D}^{(1)} \cup \mathcal{D}^{(2)}$ (see Eq. \eqref{eqn:bipyramid}).
\item Analyze interval Hessian matrices (Theorems \ref{th:pstv}, \ref{th:posdefeig} and \ref{th:extremepoint}) to prove that the equality holds only at $\Theta_{bp}$ over a subdomain of $\mathcal{D}^{(1)} \cup \mathcal{D}^{(2)}$, i.e., $\mathcal{D}_{bp}$ (see Proposition \ref{prop:bipyramid}). \item Analyze interval Hessian matrices (Theorems \ref{th:nonpos} and \ref{th:notextremepoint}) to prove that the corresponding strict inequality holds over a subdomain of $\mathcal{D}^{(1)} \cup \mathcal{D}^{(2)}$, i.e., $\mathcal{D}_{p}$ (see Proposition \ref{prop:pyramid}). \item Make use of interval arithmetic (\S\,\ref{intarith}) to prove that the corresponding strict inequality holds over the remaining domain, i.e., $(\mathcal{D}^{(1)} \cup \mathcal{D}^{(2)}) \backslash (\mathcal{D}_{bp} \cup \mathcal{D}_{p})$ (see Eq. \eqref{strictinequality}). \end{enumerate} \section{Restricted conditions and verification domain} \subsection{Some results} What we are to prove is in fact that no distribution of 5 points other than the bipyramid distribution corresponds to a distance sum larger than $fmax$. We need the following results to simplify this problem. \begin{proposition} \label{prop:phi1} If some configuration of 5 points corresponds to a larger value of $f$ than the bipyramid configuration, and $AB$ is the second largest distance among the ${5 \choose 2} = 10$ distances, then $\phi_1$ must satisfy \begin{equation} \phi_1 \geq -2\,\arccos ( \sqrt {3}/6 +\sqrt {2}/3 ). \end{equation} \end{proposition} \begin{proof} From Equation \eqref{fmax}, and since the largest distance is at most 2 while each of the other 9 distances is at most the second largest one, in order to attain a distance sum larger than the one the bipyramid configuration corresponds to, the second largest distance must be at least $$ ( ( 3\,\sqrt{3} + 6\,\sqrt{2} + 2 ) - 2 ) /9 = \sqrt {3}/3 + 2\,\sqrt {2}/3. $$ Under the condition that $AB$ is the second largest distance, since $|AB|=\sqrt{2+2\cos(\phi_1)}$, the required result follows immediately. \end{proof} \begin{proposition} \label{prop:halfsphere} If the 5 points lie on a common half sphere, then $f$ cannot attain its maximum. \end{proposition} \begin{proof} Without loss of generality, suppose that the $z$-coordinates of the 5 points are all nonpositive. If the $z$-coordinate of some point is negative, we move it to the symmetric position with respect to the $xoy$-plane, and we obtain a larger distance sum. If the 5 points are all distributed on the $xoy$-plane, the maximal distance sum \cite{toth:1} is $5\,\cot \frac{\pi}{10}$ (attained when the 5 points form a regular pentagon), which is obviously smaller than the mutual distance sum corresponding to the bipyramid configuration (see \S\,\ref{sec:bipyramid}). \end{proof} \begin{proposition} \label{prop:parder} If a partial derivative of the function $f$ does not change sign in a domain, then there exists no stationary point of $f$ in this domain. \end{proposition} \begin{theorem} \cite{berman:1} \label{th:gradient} Let $p_1,\ldots,p_n$ be points on the unit sphere $S^2$ in $\mathbb{R}^3$. Let $f:\, S^2 \to \mathbb{R}$ be defined by $f(x)=\sum\limits_{i=1}^n \left| x-p_i \right|$. If $f$ has a maximum at $p$, then $p=q/|q|$, where $q=\sum\limits_{i=1}^n (p-p_i)/\left| p-p_i \right|$. \end{theorem} \begin{theorem} \cite{stolarsky:1} \label{th:mindis} Suppose the 5 points are placed so that the function $f$ is maximal. Then no distance between two points can be less than $\frac{2}{15}$. \end{theorem} \subsection{Some restricted conditions} \label{conditions} Due to the above results, we can consider the problem under the following restricted conditions.
\begin{condition} \label{as:secondlength} $AB$ is the second largest distance among the ${5 \choose 2} = 10$ distances. \end{condition} \begin{condition} \label{as:pointc} $D$ is on the left half sphere, $C,E$ are on the right half sphere, and $C$ is above $E$. \end{condition} \begin{condition}[by Proposition \ref{prop:phi1}] \label{as:phi1} $\phi_1 \geq -2\,\arccos ( \sqrt {3}/6 +\sqrt {2}/3 )$. \end{condition} \begin{condition}[by Proposition \ref{prop:halfsphere}] \label{as:halfsphere} The five points do not lie on a common half sphere. \end{condition} \begin{condition}[by Theorem \ref{th:mindis}] \label{as:mindis} The distance between any two points is not less than $\frac{2}{15}$. \end{condition} \subsection{Domain subdivision} Under these conditions, the bipyramid configuration (corresponding to the conjectured maximal distance sum) and the pyramid configuration (corresponding to another stationary point of the function $f$) each correspond to exactly one coordinate representation. Furthermore, we can divide the domain in which we need to verify that no distribution of points corresponds to a larger distance sum into the following two subdomains: \begin{enumerate} \item $D$ is on the upper half sphere (denote this domain by $\mathcal{D}^{(1)}$): $ \begin{array}{l} \phi_{{1}}\in[-2\,\arccos \left( \sqrt {3}/6 +\sqrt {2}/3 \right),0],\\ \phi_{{2}}\in[-\pi/2,0],\\ \theta_{{2}}\in[0,\pi],\\ \phi_{{3}}\in[0,\pi/2],\\ \theta_{{3}}\in[-\pi,0],\\ \phi_{{4}}\in[-\pi/2,0],\\ \theta_{{4}}\in[0,\pi ]. \end{array} $ \item $D$ is on the lower half sphere, $C$ is on the upper half sphere (denote this domain by $\mathcal{D}^{(2)}$): $ \begin{array}{l} \phi_{{1}}\in[-2\,\arccos \left( \sqrt{3}/6 + \sqrt{2}/3 \right),0],\\ \phi_{{2}}\in[0,\pi/2],\\ \theta_{{2}}\in[0,\pi],\\ \phi_{{3}}\in[-\pi/2,\pi/2],\\ \theta_{{3}}\in[-\pi,0],\\ \phi_{{4}}\in[-\pi/2,\pi/2],\\ \theta_{{4}}\in[0,\pi]. \end{array} $ \end{enumerate} Now, we are to prove that, under Conditions \ref{as:secondlength}--\ref{as:mindis}, the function $f$ attains its maximum in $\mathcal{D}^{(1)}$ and $\mathcal{D}^{(2)}$ only at the point corresponding to the bipyramid distribution of $A,B,C,D,E$, i.e., \begin{equation} \label{eqn:bipyramid} f(\phi_1, \phi_2, \theta_2, \phi_3, \theta_3, \phi_4, \theta_4) \leq fmax, \quad (\phi_1, \phi_2, \theta_2, \phi_3, \theta_3, \phi_4, \theta_4) \in \mathcal{D}^{(1)} \cup \mathcal{D}^{(2)}, \end{equation} where the equality holds if and only if $(\phi_1, \phi_2, \theta_2, \phi_3, \theta_3, \phi_4, \theta_4) = \Theta_{bp}$. In the following parts of this paper, we illustrate the domain verification methods, the detailed steps, and the results. \section{Domain near coordinates corresponding to the bipyramid distribution} \subsection{Interval methods} We first briefly introduce the interval methods used in our proof. \subsubsection{Interval arithmetic} \label{intarith} We define an interval as a set \cite{volker:1}: \begin{equation} X=[a,b]=\{ x: a \leq x \leq b \}, \end{equation} where $a,b \in \mathbb{R}$. $\underline{X}$ and $\overline{X}$ denote the left and right vertexes of the interval $X$, respectively. For intervals $X$ and $Y$, if $x > y$ for each $x \in X$ and each $y \in Y$, we say that $X > Y$. Other interval relations are understood the same way. An $n$-dimensional ``interval vector'' is an $n$-tuple of intervals $\mathbf{X}=(X_1,\ldots,X_n)$, which is used to denote a rectangular domain in $\mathbb{R}^n$. Let $\mathbb{IR}$ be the set of intervals over $\mathbb{R}$, and $\mathbb{IR}^n$ be the set of $n$-dimensional ``interval vectors''.
We can define an embedding from $\mathbb{R}$ to $\mathbb{IR}$ as follows: $$\mu(x)=[x,x];$$ thus numbers in $\mathbb{R}$ can also be considered as intervals. We define interval arithmetic over $\mathbb{IR}$ as $$X \circ Y = \{x \circ y: x \in X, y \in Y \},$$ where $\circ$ is $``+", ``-", ``*"$ or $``/"$. Furthermore, for an elementary function $f$, we define a corresponding elementary mapping as $$f(X) = \{ f(x): x \in X \}.$$ When the operands of arithmetic operations or the arguments of elementary functions are intervals, the underlying computations are understood to be the interval computations defined above, and an interval computation has the same precedence as the corresponding arithmetic computation. Under the above definitions, an arbitrary elementary function $f:\mathbb{R}^n \to \mathbb{R}$ can be extended to a mapping from $\mathbb{IR}^n$ to $\mathbb{IR}$: \begin{equation} \tilde{f}(\mathbf{X}) =f(\mathbf{X}). \end{equation} Through such an $\tilde{f}$, we can get an interval which contains the range of $f$ over the rectangular domain $\mathbf{X}$; this is the key point of our solution of the problem. As a matter of fact, there are existing programs for interval arithmetic; for example, the Maple procedure \textbf{evalr} can be used to implement interval arithmetic without errors. In practice, however, it may not be necessary to implement errorless interval arithmetic, because what we get from interval arithmetic are just intervals containing the ranges of function values. Another problem is that performing such errorless interval arithmetic is very time-consuming, so it cannot meet our needs. Considering both efficiency and accuracy, we wrote an interval arithmetic package \textbf{IntervalArithmetic} based on the Maple system. The package uses rational numbers as interval vertexes, and performs computations with controllable errors. In fact, the result it computes for $f(\mathbf{X})$ is a larger interval containing $\tilde{f}(\mathbf{X})$, and the difference can be reduced to zero as the intervals of $\mathbf{X}$ shrink to points. For the detailed code, see Appendix \ref{intervalarithmetic}. \subsubsection{Interval matrices} Relations between real matrices of the same order are understood componentwise. An interval matrix is defined as the following set of matrices: $$([\underline{a}_{ij},\overline{a}_{ij}])=[\underline{A},\overline{A}] =\{A \in \mathbb{R}^{n\, \times \, n}: \underline{A} \leq A \leq \overline{A}\},$$ where $$\underline{A}=(\underline{a}_{ij}),\overline{A}=(\overline{a}_{ij}).$$ When $\underline{A}$ and $\overline{A}$ are symmetric, we call the set of symmetric matrices in $[\underline{A},\overline{A}]$ a symmetric interval matrix, which is also denoted by $[\underline{A},\overline{A}]$. For an interval matrix $[\underline{A},\overline{A}]$, denote its midpoint matrix by $A_c=\dfrac{\underline{A}+\overline{A}}{2}$ and its radius matrix by $A_\delta=\dfrac{\overline{A}-\underline{A}}{2}$. For a real symmetric matrix $A$, it is well known that all its eigenvalues are real; we denote them in decreasing order by $\lambda_1(A) \geq \lambda_2(A) \geq \cdots \geq \lambda_n(A)$, and denote the spectral radius of $A$ (i.e., the maximum eigenvalue modulus) by $\varrho(A)$.
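To illustrate the interval computations of \S\,\ref{intarith} concretely, the following Python sketch mimics the design of \textbf{IntervalArithmetic} on a toy scale (it is only an illustration, not the Maple code used in our computations): endpoints are exact rational numbers, and an irrational operation such as the square root is enclosed by rounding its floating-point value outward by one ulp, so that the computed interval always contains the exact one.
\begin{verbatim}
from fractions import Fraction
import math  # math.nextafter requires Python >= 3.9

class Interval:
    """Closed interval [lo, hi] with exact rational endpoints."""
    def __init__(self, lo, hi=None):
        lo = Fraction(lo)
        hi = lo if hi is None else Fraction(hi)
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def sqrt(self):
        # sqrt is increasing, so bounding the images of the two endpoints
        # suffices; one outward ulp absorbs the floating-point error
        lo = math.nextafter(math.sqrt(self.lo), -math.inf)
        hi = math.nextafter(math.sqrt(self.hi), math.inf)
        return Interval(Fraction(max(lo, 0.0)), Fraction(hi))

# Enclose |AB| = sqrt(2 + 2 cos(phi_1)) for phi_1 in [-pi/2, pi/2],
# where the interval c = [0, 1] encloses cos(phi_1):
c = Interval(0, 1)
d = (Interval(2) + Interval(2) * c).sqrt()
print(float(d.lo), float(d.hi))  # approximately 1.41421 and 2.0
\end{verbatim}
A usable implementation additionally needs enclosures of $\sin$ and $\cos$ on arbitrary intervals (obtained by splitting at their extrema) and outward-rounded division, which we omit here.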
For bounds on the eigenvalues of the matrices in an interval matrix, the following can be deduced directly from the Wielandt-Hoffman theorem \cite{golub:1}. \begin{theorem} \label{th:wielandt-hoffman} For a symmetric interval matrix $[\underline{A},\overline{A}]$, the set $$ \{ \lambda_i(A) : A \in [\underline{A},\overline{A}]\} $$ is a compact interval. Denote this compact interval by $$ [\underline{\lambda}_i([\underline{A},\overline{A}]), \overline{\lambda}_i([\underline{A},\overline{A}])], \, 1 \leq i \leq n; $$ then $$ [\underline{\lambda}_i([\underline{A},\overline{A}]), \overline{\lambda}_i([\underline{A},\overline{A}])] \subseteq [\lambda_i(A_c)-\varrho(A_\delta),\lambda_i(A_c)+\varrho(A_\delta)], \, i=1,\ldots,n. $$ \end{theorem} In fact, $\overline{\lambda}_1([\underline{A},\overline{A}])$ and $\underline{\lambda}_n([\underline{A},\overline{A}])$ can be computed explicitly \cite{rohn:1}; that is, \begin{theorem} \label{th:exteig} A real symmetric interval matrix $$ ([\underline{a}_{ij},\overline{a}_{ij}]) =\{A \in \mathbb{R}^{n\, \times \, n}: \underline{A} \leq A \leq \overline{A},\underline{A} =(\underline{a}_{ij}),\overline{A}=(\overline{a}_{ij})\} $$ corresponds to the following $2^{n-1}$ vertex matrices: $$A_k=(a_{kij}), 0 \leq k \leq 2^{n-1}-1,$$ where we denote the binary representation of $k$ by $k=(k_1 k_2 \cdots k_n)_2$, and $$ a_{kij}=\frac{1}{2}(\underline{a}_{ij}+\overline{a}_{ij}+(-1)^{k_i + k_j} (\underline{a}_{ij}-\overline{a}_{ij})). $$ Over the matrices in this symmetric interval matrix, the minimal (resp. maximal) eigenvalue attains its minimum (resp. maximum) at some vertex matrix $A_k$. \end{theorem} For a real symmetric interval matrix $[\underline{A},\overline{A}]$, we say that it is positive (semi)definite if $A$ is positive (semi)definite for each $A \in [\underline{A},\overline{A}]$, and that it is nonpositive (semi)definite if $A$ is not positive (semi)definite for each $A \in [\underline{A},\overline{A}]$. Definitions such as negative (semi)definiteness and nonnegative (semi)definiteness of $[\underline{A},\overline{A}]$ are understood in a similar way. Now we introduce the results for verifying the positive definiteness and the nonpositive semidefiniteness of symmetric interval matrices, from which criteria for negative definiteness, nonnegative definiteness, etc., can be deduced directly. Rohn has given the following theorem \cite{rohn:1}, which is an improvement on results in \cite{shi:1}; we state it in a form adapted to algorithmic use. \begin{theorem} \label{th:pstv} The real symmetric interval matrix $$([\underline{a}_{ij},\overline{a}_{ij}]) =\{A \in \mathbb{R}^{n\, \times \, n}: \underline{A} \leq A \leq \overline{A},\underline{A} =(\underline{a}_{ij}),\overline{A}=(\overline{a}_{ij})\}$$ is positive definite if and only if the following $2^{n-1}$ vertex matrices are all positive definite: $$A_k=(a_{kij}), 0 \leq k \leq 2^{n-1}-1,$$ where we denote the binary representation of $k$ by $k=(k_1 k_2 \cdots k_n)_2$, and $$ a_{kij}=\frac{1}{2}(\underline{a}_{ij}+\overline{a}_{ij}+(-1)^{k_i + k_j} (\underline{a}_{ij}-\overline{a}_{ij})). $$ \end{theorem} Theorem \ref{th:exteig} in fact implies Theorem \ref{th:pstv}, because the positive definiteness of a symmetric interval matrix is equivalent to the minimum of the minimal eigenvalues of the matrices in it being positive. So from Theorem \ref{th:wielandt-hoffman} we can also get a sufficient condition for determining the positive definiteness of a symmetric interval matrix.
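Theorems \ref{th:exteig} and \ref{th:pstv} translate directly into an algorithm: enumerate the $2^{n-1}$ vertex matrices and examine their extreme eigenvalues. The following Python sketch (assuming \texttt{numpy}) illustrates this, together with the cheaper sufficient test derived from Theorem \ref{th:wielandt-hoffman} that is stated formally below. Unlike the procedure \textbf{isdef} of our Maple package, it relies on floating-point eigenvalues and therefore does not control rounding errors, so it serves only as an illustration.
\begin{verbatim}
import numpy as np

def vertex_matrices(A_lo, A_hi):
    """Yield Rohn's 2^(n-1) vertex matrices A_k of [A_lo, A_hi]."""
    n = A_lo.shape[0]
    Ac, Ad = (A_lo + A_hi) / 2, (A_hi - A_lo) / 2
    for k in range(2 ** (n - 1)):
        # sign vector s with s_i = (-1)^{k_i}; the first sign can be
        # fixed to +1, which is why 2^(n-1) vertex matrices suffice
        s = np.array([1] + [1 - 2 * ((k >> i) & 1) for i in range(n - 1)])
        yield Ac - np.outer(s, s) * Ad  # entries (A_c)_ij - s_i s_j (A_delta)_ij

def is_pos_def(A_lo, A_hi):
    """Necessary and sufficient criterion via the vertex matrices."""
    return all(np.linalg.eigvalsh(Ak)[0] > 0
               for Ak in vertex_matrices(A_lo, A_hi))

def is_pos_def_sufficient(A_lo, A_hi):
    """Cheaper sufficient test: lambda_min(A_c) - rho(A_delta) > 0."""
    Ac, Ad = (A_lo + A_hi) / 2, (A_hi - A_lo) / 2
    return np.linalg.eigvalsh(Ac)[0] > np.abs(np.linalg.eigvalsh(Ad)).max()

A_lo = np.array([[2.0, -0.1], [-0.1, 2.0]])
A_hi = np.array([[2.5,  0.1], [ 0.1, 2.5]])
print(is_pos_def(A_lo, A_hi), is_pos_def_sufficient(A_lo, A_hi))  # True True
\end{verbatim}
Negative definiteness of $[\underline{A},\overline{A}]$ is checked by applying the same tests to $[-\overline{A},-\underline{A}]$.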
\begin{theorem} \label{th:posdefeig} The symmetric interval matrix $[\underline{A},\overline{A}]$ is positive definite if $$ \lambda_n(A_c)-\varrho(A_\delta) > 0, $$ where $A_c$ and $A_\delta$ are the midpoint matrix and the radius matrix, respectively. \end{theorem} Similarly, the nonpositive semidefiniteness of a symmetric interval matrix is equivalent to the maximum of the minimal eigenvalues of the matrices in it being negative, i.e., \begin{theorem} \label{th:nonpos} The symmetric interval matrix $[\underline{A},\overline{A}]$ is nonpositive semidefinite if $$ \lambda_n(A_c)+\varrho(A_\delta) < 0, $$ where $A_c$ and $A_\delta$ are the midpoint matrix and the radius matrix, respectively. \end{theorem} The procedure \textbf{isdef} in the package \textbf{IntervalArithmetic} implements the above algorithms to verify properties of symmetric interval matrices such as positive definiteness, negative definiteness, and so on. With the help of the above theorems, we can use the following elementary results to determine the extreme points of a function in a domain. \begin{theorem} \label{th:extremepoint} If $g \in C^2(D)$, $X_0 \in D$ is a stationary point of $g$, and the Hessian matrix of $g$ over $D$ varies in a positive definite real symmetric interval matrix, then $X_0$ is the minimum point of $g$ in $D$. \end{theorem} \begin{theorem} \label{th:notextremepoint} If $g \in C^2(D)$, and the Hessian matrix of $g$ over $D$ varies in a nonpositive semidefinite real symmetric interval matrix, then no inner point of $D$ is the minimum point of $g$. \end{theorem} \subsection{Domain excluded near coordinates corresponding to the bipyramid distribution} Now we introduce a perturbation $\left[-\frac{\pi}{377},\frac{\pi}{377}\right]$ of the coordinates corresponding to the bipyramid distribution, and obtain a rectangular domain, i.e., \begin{equation} \label{bpyva:pi} \left( \begin {array}{c} \phi_{{1}}\\\noalign{\medskip}\phi_{{2}} \\\noalign{\medskip}\theta_{{2}}\\\noalign{\medskip}\phi_{{3}} \\\noalign{\medskip}\theta_{{3}}\\\noalign{\medskip}\phi_{{4}} \\\noalign{\medskip}\theta_{{4}}\end {array} \right) \in \left( \begin {array}{c} \left[-{\frac {380}{1131}}\,\pi ,-{\frac {374}{1131}}\,\pi \right]\\\noalign{\medskip}\left[{\frac {374}{1131}}\,\pi ,{\frac {380}{1131}}\,\pi \right]\\\noalign{\medskip}\left[{\frac {376}{377}}\,\pi ,{\frac {378}{377}}\,\pi \right]\\\noalign{\medskip}\left[-{\frac {1}{377}}\,\pi ,{\frac {1}{377}}\,\pi \right]\\\noalign{\medskip}\left[-{\frac {379}{754}}\,\pi ,-{\frac {375}{754}}\,\pi \right]\\\noalign{\medskip}\left[-{\frac {1}{377}}\,\pi ,{\frac {1}{377}}\,\pi \right]\\\noalign{\medskip}\left[{\frac {375}{754}}\,\pi ,{\frac {379}{754}}\,\pi \right]\end {array} \right). \end{equation} In this domain, $\theta_2$ varies in $\left[{\frac {376}{377}}\,\pi ,{\frac {378}{377}}\,\pi \right]$, which exceeds the bound we prescribed for $\theta_2$; but by the periodicity of the function $f$, this causes no error.
In fact, interval vertexes are represented by rational numbers in the Maple package \textbf{IntervalArithmetic}, so these intervals whose vertexes contain $\pi$ are enlarged to their rational representations, that is, the rectangular domain we actually obtain is \begin{equation} \mathcal{D}_{bp} = \left( \begin {array}{c} \left[-{\frac {1055530689}{1000000000}},-{\frac { 1038864413}{1000000000}}\right]\\\noalign{\medskip}\left[{\frac {1038864413}{ 1000000000}},{\frac {1055530689}{1000000000}}\right]\\\noalign{\medskip}\left[{ \frac {783314879}{250000000}},{\frac {98435181}{31250000}}\right] \\\noalign{\medskip}\left[-{\frac {10416421265218147343}{ 1250000000000000000000}},{\frac {10416421265218147343}{ 1250000000000000000000}}\right]\\\noalign{\medskip}\left[-{\frac {315825893}{ 200000000}},-{\frac {1562463189}{1000000000}}\right]\\\noalign{\medskip}\left[-{ \frac {10416421265218147343}{1250000000000000000000}},{\frac { 10416421265218147343}{1250000000000000000000}}\right]\\\noalign{\medskip}\left[{ \frac {1562463189}{1000000000}},{\frac {315825893}{200000000}}\right] \end {array} \right). \end{equation} The interval Hessian matrix of $f$ over $\mathcal{D}_{bp}$ can be calculated by interval arithmetic: \begin{equation} \mathcal{V} = \left(V_1,V_2,V_3,V_4,V_5,V_6,V_7\right), \end{equation} where $V_i(i=1,\ldots,7)$ are vectors as follows:\\[10pt] {\tiny \raggedright $ V_{{1}}= \left( \begin {array}{c} \left[-{\dfrac {9073071021}{10000000000}}, -{\dfrac {257887557}{312500000}}\right]\\\noalign{\medskip}\left[{\dfrac { 4158136493}{10000000000}},{\dfrac {1126402921}{2500000000}}\right] \\\noalign{\medskip}\left[-{\dfrac {2503191333}{1000000000000}},{\dfrac { 2503191333}{1000000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {910889963}{ 2500000000}},-{\dfrac {428510269}{1250000000}}\right]\\\noalign{\medskip}\left[{ \dfrac {6038013477}{10000000000}},{\dfrac {1552383121}{2500000000}}\right] \\\noalign{\medskip}\left[-{\dfrac {72871197}{200000000}},-{\dfrac {685616431 }{2000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {3104766243}{5000000000}}, -{\dfrac {6038013477}{10000000000}}\right]\end {array} \right), V_{{2}}= \left( \begin {array}{c} \left[{\dfrac {4158136493}{10000000000}},{ \dfrac {1126402921}{2500000000}}\right]\\\noalign{\medskip}\left[-{\dfrac { 2283824451}{2500000000}},-{\dfrac {8191611957}{10000000000}}\right] \\\noalign{\medskip}\left[-{\dfrac {23364423}{781250000}},{\dfrac {747661531} {25000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {114837637}{312500000}},-{ \dfrac {3397366803}{10000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac { 1559063347}{2500000000}},-{\dfrac {240452637}{400000000}}\right] \\\noalign{\medskip}\left[-{\dfrac {183740219}{500000000}},-{\dfrac { 424670851}{1250000000}}\right]\\\noalign{\medskip}\left[{\dfrac {6011315931}{ 10000000000}},{\dfrac {779531673}{1250000000}}\right]\end {array} \right) $ \\[10pt] $ V_{{3}}= \left( \begin {array}{c} \left[-{\dfrac {2503191333}{1000000000000} },{\dfrac {2503191333}{1000000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac { 23364423}{781250000}},{\dfrac {747661531}{25000000000}}\right] \\\noalign{\medskip}\left[-{\dfrac {3523502821}{10000000000}},-{\dfrac { 2901860369}{10000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {1628135751}{ 10000000000}},-{\dfrac {359058763}{2500000000}}\right]\\\noalign{\medskip}\left[{ \dfrac {1556247929}{20000000000}},{\dfrac {9916175537}{100000000000}}\right] \\\noalign{\medskip}\left[{\dfrac {718117527}{5000000000}},{\dfrac { 
1628135749}{10000000000}}\right]\\\noalign{\medskip}\left[{\dfrac {7781239691}{100000000000}},{\dfrac {2479043873}{25000000000}}\right]\end {array} \right), V_{{4}}= \left( \begin {array}{c} \left[-{\dfrac {910889963}{2500000000}},-{\dfrac {428510269}{1250000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {114837637}{312500000}},-{\dfrac {3397366803}{10000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {1628135751}{10000000000}},-{\dfrac {359058763}{2500000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {214384921}{200000000}},-{\dfrac {9891691177}{10000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {483991657}{20000000000}},{\dfrac {97153283}{4000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {10001389}{20000000}},-{\dfrac {4999305593}{10000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {1041678267}{10000000000000}},{\dfrac {1041678267}{10000000000000}}\right]\end {array} \right) $ \\[10pt] $ V_{{5}}= \left( \begin {array}{c} \left[{\dfrac {6038013477}{10000000000}},{\dfrac {1552383121}{2500000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {1559063347}{2500000000}},-{\dfrac {240452637}{400000000}}\right]\\\noalign{\medskip}\left[{\dfrac {1556247929}{20000000000}},{\dfrac {9916175537}{100000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {483991657}{20000000000}},{\dfrac {97153283}{4000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {1058713931}{1000000000}},-{\dfrac {1002296947}{1000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {1041678267}{10000000000000}},{\dfrac {1041678267}{10000000000000}}\right]\\\noalign{\medskip}\left[{\dfrac {312434903}{625000000}},{\dfrac {1250173619}{2500000000}}\right]\end {array} \right), V_{{6}}= \left( \begin {array}{c} \left[-{\dfrac {72871197}{200000000}},-{\dfrac {685616431}{2000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {183740219}{500000000}},-{\dfrac {424670851}{1250000000}}\right]\\\noalign{\medskip}\left[{\dfrac {718117527}{5000000000}},{\dfrac {1628135749}{10000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {10001389}{20000000}},-{\dfrac {4999305593}{10000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {1041678267}{10000000000000}},{\dfrac {1041678267}{10000000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {1071924603}{1000000000}},-{\dfrac {2472922799}{2500000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {607208011}{25000000000}},{\dfrac {302494783}{12500000000}}\right]\end {array} \right) $ \\[10pt] $ V_{{7}}= \left( \begin {array}{c} \left[-{\dfrac {3104766243}{5000000000}},-{\dfrac {6038013477}{10000000000}}\right]\\\noalign{\medskip}\left[{\dfrac {6011315931}{10000000000}},{\dfrac {779531673}{1250000000}}\right]\\\noalign{\medskip}\left[{\dfrac {7781239691}{100000000000}},{\dfrac {2479043873}{25000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {1041678267}{10000000000000}},{\dfrac {1041678267}{10000000000000}}\right]\\\noalign{\medskip}\left[{\dfrac {312434903}{625000000}},{\dfrac {1250173619}{2500000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {607208011}{25000000000}},{\dfrac {302494783}{12500000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {1058713929}{1000000000}},-{\dfrac {20045939}{20000000}}\right]\end {array} \right) $ \\[10pt] } Through Theorem \ref{th:pstv}, we can judge that the symmetric interval matrix $\mathcal{V}$ is negative definite, and then, by Theorem \ref{th:extremepoint} applied to $-f$, the conjectured configuration indeed corresponds to the maximum of $f$ in $\mathcal{D}_{bp}$.
That is, \begin{proposition} \label{prop:bipyramid} The bipyramid distribution of 5 points represented by Eq. \eqref{bipycoords} is the only distribution attaining the maximal distance sum in the domain $\mathcal{D}_{bp}$, i.e., \begin{equation} \label{eqn:bipyramidbp} f(\phi_1, \phi_2, \theta_2, \phi_3, \theta_3, \phi_4, \theta_4) \leq fmax, \quad (\phi_1, \phi_2, \theta_2, \phi_3, \theta_3, \phi_4, \theta_4) \in \mathcal{D}_{bp}, \end{equation} where the equality holds if and only if $(\phi_1, \phi_2, \theta_2, \phi_3, \theta_3, \phi_4, \theta_4)=\Theta_{bp}$. \end{proposition} \section{Domain near coordinates corresponding to the pyramid distribution} \label{sec:pyramid} Under the conditions in \S\,\ref{conditions}, the coordinates representing the pyramid distribution are unique; they correspond to a stationary point of the function $f$, and the function value at this point is very close to $fmax$. Therefore, we discuss this domain separately. \subsection{Pyramid distribution} The spherical coordinates corresponding to the pyramid distribution are \begin{equation} \label{bycoords} \begin{array}{l} A(1,0,0),\\[5pt] B\left(1, -2\,\omega_1, \pi\right),\\[5pt] C\left(1, \dfrac{\pi}{2} -\omega_1, \pi\right),\\[5pt] D\left(1, \omega_2, -\omega_3 \right),\\[5pt] E\left(1, \omega_2, \omega_3 \right), \end{array} \end{equation} where {\footnotesize \begin{equation} \begin{split} \omega_1 & = \arcsin \left( -\dfrac{3}{4} + \dfrac{\sqrt{2}}{2}+ \dfrac{\sqrt {41-28\,\sqrt {2}}} {4} \right),\\ \omega_2 & = -\arcsin \left( \left( -\dfrac{3}{4} + \dfrac{\sqrt{2}}{2}+ \dfrac{\sqrt {41-28\,\sqrt {2}}} {4} \right) \sqrt{1- \left( -\dfrac{3}{4} + \dfrac{\sqrt{2}}{2}+ \dfrac{\sqrt {41-28\,\sqrt {2}}} {4} \right) ^2 } \right),\\ \omega_3 & = \arccot \left( \frac{ \left( -\dfrac{3}{4} + \dfrac{\sqrt{2}}{2}+ \dfrac{\sqrt {41-28\,\sqrt {2}}} {4} \right) ^2 } { \sqrt{1- \left( -\dfrac{3}{4} + \dfrac{\sqrt{2}}{2}+ \dfrac{\sqrt {41-28\,\sqrt {2}}} {4} \right) ^2 } } \right), \end{split}\nonumber \end{equation} }\\ as shown in Fig. \ref{pyconfig}. \begin{figure}[htbp] \centering \includegraphics[width=0.60\textwidth]{images/pyramid.eps} \caption{The pyramid configuration} \label{pyconfig} \end{figure} Denote the corresponding values of $(\phi_1, \phi_2, \theta_2, \phi_3, \theta_3, \phi_4, \theta_4)$ by $$ \Theta_p = \left( -2\,\omega_1, \dfrac{\pi}{2} -\omega_1, \pi, \omega_2, -\omega_3, \omega_2, \omega_3 \right); $$ then the corresponding value of the function $f$ is \begin{equation} \label{eqn:pyfmax} \begin{split} pyfmax & = f(\Theta_p)\\ & \approx 15.67482117.
\end{split} \end{equation} \subsection{Domain excluded near coordinates corresponding to the pyramid distribution} Similarly to the method adopted near the bipyramid distribution, we introduce a perturbation $\left[-\frac{\pi}{791},\frac{\pi}{791}\right]$ of the coordinates corresponding to the pyramid distribution, and finally obtain the rectangular domain \begin{equation} \mathcal{D}_{p} = \left( \begin {array}{c} \left[-{\frac {5157880419}{10000000000}},-{\frac {203137879}{400000000}}\right]\\\noalign{\medskip}\left[{\frac {1310916469}{1000000000}},{\frac {263771963}{200000000}}\right]\\\noalign{\medskip}\left[{\frac {156881049}{50000000}},{\frac {3145564327}{1000000000}}\right]\\\noalign{\medskip}\left[-{\frac {39276321}{156250000}},-{\frac {2434251099}{10000000000}}\right]\\\noalign{\medskip}\left[-{\frac {1508635943}{1000000000}},-{\frac {375173149}{250000000}}\right]\\\noalign{\medskip}\left[-{\frac {39276321}{156250000}},-{\frac {2434251099}{10000000000}}\right]\\\noalign{\medskip}\left[{\frac {375173149}{250000000}},{\frac {1508635943}{1000000000}}\right]\end {array} \right). \end{equation} The interval Hessian matrix of $f$ over $\mathcal{D}_{p}$ can be calculated by interval arithmetic, i.e., \begin{equation} \mathcal{W} = \left(W_1,W_2,W_3,W_4,W_5,W_6,W_7\right), \end{equation} where $W_i\,(i=1,\ldots,7)$ are the following vectors:\\[10pt] {\tiny \raggedright $ W_{{1}}= \left( \begin {array}{c} \left[-{\dfrac {68555671}{80000000}},-{\dfrac {8085653887}{10000000000}}\right]\\\noalign{\medskip}\left[{\dfrac {241020403}{625000000}},{\dfrac {4060471777}{10000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {5396514777}{10000000000000}},{\dfrac {5396514777}{10000000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {665453073}{1000000000}},-{\dfrac {3261193771}{5000000000}}\right]\\\noalign{\medskip}\left[{\dfrac {323160259}{1250000000}},{\dfrac {2727361243}{10000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {6654530721}{10000000000}},-{\dfrac {6522387553}{10000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {545472247}{2000000000}},-{\dfrac {16158013}{62500000}}\right]\end {array} \right), W_{{2}}= \left( \begin {array}{c} \left[{\dfrac {241020403}{625000000}},{\dfrac {4060471777}{10000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {567279649}{500000000}},-{\dfrac {544204613}{500000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {609173849}{50000000000}},{\dfrac {1218347717}{100000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {68583739}{400000000}},-{\dfrac {1584190467}{10000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {1486861643}{2500000000}},-{\dfrac {5875949419}{10000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {428648367}{2500000000}},-{\dfrac {792095233}{5000000000}}\right]\\\noalign{\medskip}\left[{\dfrac {2937974709}{5000000000}},{\dfrac {5947446571}{10000000000}}\right]\end {array} \right) $ \\[10pt] $ W_{{3}}= \left( \begin {array}{c} \left[-{\dfrac {5396514777}{10000000000000}},{\dfrac {5396514777}{10000000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {609173849}{50000000000}},{\dfrac {1218347717}{100000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {8085854539}{100000000000}},-{\dfrac {1541389841}{25000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {9959577971}{100000000000}},-{\dfrac {9385595363}{100000000000}}\right]\\\noalign{\medskip}\left[{\dfrac {116759761}{5000000000}},{\dfrac {109700733}{4000000000}}\right]\\\noalign{\medskip}\left[{\dfrac {9385595361}{100000000000}},{\dfrac {9959577971}{100000000000}}\right]\\\noalign{\medskip}\left[{\dfrac {2335195223}{100000000000}},{\dfrac {2742518293}{100000000000}}\right]\end {array} \right),
{9959577971}{100000000000}}\right]\\\noalign{\medskip}\left[{\dfrac { 2335195223}{100000000000}},{\dfrac {2742518293}{100000000000}}\right] \end {array} \right), W_{{4}}= \left( \begin {array}{c} \left[-{\dfrac {665453073}{1000000000}},-{ \dfrac {3261193771}{5000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {68583739 }{400000000}},-{\dfrac {1584190467}{10000000000}}\right]\\\noalign{\medskip}\left[ -{\dfrac {9959577971}{100000000000}},-{\dfrac {9385595363}{100000000000} }\right]\\\noalign{\medskip}\left[-{\dfrac {8828398133}{10000000000}},-{\dfrac { 1678302277}{2000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {585736263}{ 5000000000}},-{\dfrac {2143830967}{25000000000}}\right]\\\noalign{\medskip}\left[- {\dfrac {4897161727}{10000000000}},-{\dfrac {4822526949}{10000000000}}\right] \\\noalign{\medskip}\left[{\dfrac {616963279}{100000000000}},{\dfrac { 1002395789}{100000000000}}\right]\end {array} \right) $ \\[10pt] $ W_{{5}}= \left( \begin {array}{c} \left[{\dfrac {323160259}{1250000000}},{ \dfrac {2727361243}{10000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac { 1486861643}{2500000000}},-{\dfrac {5875949419}{10000000000}}\right] \\\noalign{\medskip}\left[{\dfrac {116759761}{5000000000}},{\dfrac {109700733 }{4000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {585736263}{5000000000}},- {\dfrac {2143830967}{25000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac { 36387877}{31250000}},-{\dfrac {225679327}{200000000}}\right] \\\noalign{\medskip}\left[-{\dfrac {1002395789}{100000000000}},-{\dfrac { 616963279}{100000000000}}\right]\\\noalign{\medskip}\left[{\dfrac {4813603601}{ 10000000000}},{\dfrac {1215175283}{2500000000}}\right]\end {array} \right), W_{{6}}= \left( \begin {array}{c} \left[-{\dfrac {6654530721}{10000000000}}, -{\dfrac {6522387553}{10000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac { 428648367}{2500000000}},-{\dfrac {792095233}{5000000000}}\right] \\\noalign{\medskip}\left[{\dfrac {9385595361}{100000000000}},{\dfrac { 9959577971}{100000000000}}\right]\\\noalign{\medskip}\left[-{\dfrac {4897161727}{ 10000000000}},-{\dfrac {4822526949}{10000000000}}\right]\\\noalign{\medskip}\left[ -{\dfrac {1002395789}{100000000000}},-{\dfrac {616963279}{100000000000}} \right]\\\noalign{\medskip}\left[-{\dfrac {8828398113}{10000000000}},-{\dfrac { 4195755701}{5000000000}}\right]\\\noalign{\medskip}\left[{\dfrac {171506479}{ 2000000000}},{\dfrac {1171472521}{10000000000}}\right]\end {array} \right) $ \\[10pt] $ W_{{7}}= \left( \begin {array}{c} \left[-{\dfrac {545472247}{2000000000}},-{ \dfrac {16158013}{62500000}}\right]\\\noalign{\medskip}\left[{\dfrac {2937974709}{ 5000000000}},{\dfrac {5947446571}{10000000000}}\right]\\\noalign{\medskip}\left[{ \dfrac {2335195223}{100000000000}},{\dfrac {2742518293}{100000000000}}\right] \\\noalign{\medskip}\left[{\dfrac {616963279}{100000000000}},{\dfrac { 1002395789}{100000000000}}\right]\\\noalign{\medskip}\left[{\dfrac {4813603601}{ 10000000000}},{\dfrac {1215175283}{2500000000}}\right]\\\noalign{\medskip}\left[{ \dfrac {171506479}{2000000000}},{\dfrac {1171472521}{10000000000}}\right] \\\noalign{\medskip}\left[-{\dfrac {582206031}{500000000}},-{\dfrac { 564198319}{500000000}}\right]\end {array} \right) $ \\[10pt] } Through Theorem \ref{th:nonpos}, we can conclude that $\mathcal{W}$ is negative semidefinite, and this remains true when the disturbance is enlarged slightly.
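For readers who wish to reproduce this kind of test outside of Maple, the following is a minimal Python sketch of one standard sufficient condition; it is illustrative only (it is not the \textbf{fivepoints} implementation, and it uses ordinary floating-point arithmetic rather than the directed rounding required for a rigorous proof). By Weyl's inequality, every symmetric matrix $H$ contained entrywise in an interval matrix satisfies $\lambda_{\max}(H)\le\lambda_{\max}(H_{mid})+\|H_{rad}\|_2$, where $H_{mid}$ and $H_{rad}$ are the midpoint and radius matrices.
\begin{verbatim}
import numpy as np

def interval_negative_semidefinite(H_lo, H_hi):
    # Sufficient test: every symmetric H with H_lo <= H <= H_hi
    # (entrywise) is negative semidefinite if
    #     lambda_max(H_mid) + ||H_rad||_2 <= 0,
    # where H_mid = (H_lo + H_hi)/2 and H_rad = (H_hi - H_lo)/2.
    H_mid = 0.5 * (H_lo + H_hi)
    H_rad = 0.5 * (H_hi - H_lo)
    lam_max = np.linalg.eigvalsh(H_mid)[-1]      # largest eigenvalue
    return lam_max + np.linalg.norm(H_rad, 2) <= 0.0
\end{verbatim}
A rigorous version must of course control rounding errors, which is what the verified interval arithmetic used in \textbf{fivepoints} provides.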
Hence, by Theorem \ref{th:notextremepoint}, the values of $f$ cannot attain the maximum in $\mathcal{D}_{p}$, i.e., \begin{proposition} \label{prop:pyramid} The maximum of the function $f$ cannot be attained in the domain $\mathcal{D}_{p}$, i.e., \begin{equation} \label{eqn:pyramid} f(\phi_1, \phi_2, \theta_2, \phi_3, \theta_3, \phi_4, \theta_4) < fmax, \quad (\phi_1, \phi_2, \theta_2, \phi_3, \theta_3, \phi_4, \theta_4) \in \mathcal{D}_{p}. \end{equation} \end{proposition} \section{Other domains} We now prove the following strict inequality: \begin{equation} \label{strictinequality} \begin{split} & f(\phi_1, \phi_2, \theta_2, \phi_3, \theta_3, \phi_4, \theta_4) < fmax,\\ & \mbox{where } (\phi_1, \phi_2, \theta_2, \phi_3, \theta_3, \phi_4, \theta_4) \in (\mathcal{D}^{(1)} \cup \mathcal{D}^{(2)}) \backslash (\mathcal{D}_{bp} \cup \mathcal{D}_{p}). \end{split} \end{equation} The algorithms in this section are implemented as procedures in the Maple package \textbf{fivepoints}; for the code, see the appendix. \subsection{Branch and bound strategies} \label{5points:bb} We examine the domains over which the variables range using interval methods. More precisely, we compute, through interval arithmetic, the interval values of the interval mappings corresponding to certain functions; the properties of these intervals may show that, when the variables take values in a given domain, the function $f$ has no stationary point, or that its maximum is less than the value attained by the bipyramid configuration, or that the domain need not be considered because of symmetry. In each case, the function values in the domain cannot exceed $fmax$ (these verification methods are implemented by the procedure \textbf{ischecked} in the package \textbf{fivepoints}). The following methods are used to exclude domains contained in $\mathcal{D}^{(1)}$ and $\mathcal{D}^{(2)}$. \begin{enumerate} \item \label{pointc} (by Condition \ref{as:pointc}) Verify that $C$ is below $E$. \item \label{halfsphere} (by Condition \ref{as:halfsphere}) Verify that the 5 points lie in the same half sphere. \item \label{secondlength} (by Condition \ref{as:secondlength}) Verify that $AB$ is not the second largest distance. \item \label{mindis} (by Condition \ref{as:mindis}) Verify that the distance between some pair of points is less than $\dfrac{2}{15}$. \item \label{totaldis} Verify that the upper bound of the function values is less than $fmax$ (see Eq. \eqref{fmax}). \item \label{derivative} (by Proposition \ref{prop:parder}) Verify that some partial derivative of $f$ does not change sign on this domain. \item \label{totaldefver} Compute the interval Hessian matrix corresponding to this domain, and establish its negative definiteness through Theorem \ref{th:pstv}. \item \label{totaldefeig} Compute the interval Hessian matrix corresponding to this domain, and establish its negative definiteness through Theorem \ref{th:posdefeig}. \item \label{totalnotdef} Compute the interval Hessian matrix corresponding to this domain, and establish its negative semidefiniteness through Theorem \ref{th:nonpos}. \item \label{gradient} Show, through Theorem \ref{th:gradient}, that no maximal point exists in this domain.
\end{enumerate} Different methods are appropriate in different domains. For example, methods \ref{totaldefver} and \ref{totaldefeig} should be used first near the points corresponding to the bipyramid distribution, after which the others (\ref{gradient}, \ref{totaldis}, \ref{derivative}) can be used; method \ref{totalnotdef} should be used first near the points corresponding to the pyramid distribution, after which the others (\ref{gradient}, \ref{totaldis}, \ref{derivative}) can be used; and for generic domains, the methods can be applied in turn as \ref{pointc}, \ref{halfsphere}, \ref{mindis}, \ref{secondlength}, \ref{totaldis}, \ref{gradient}, \ref{derivative}. For each domain to be verified, we choose appropriate verification methods and a verification order. If the verifications do not succeed, we bisect the interval of maximal width into two equal intervals and verify the two subdomains recursively. We also fix a positive threshold: if the largest interval width of a domain obtained in this process is less than the threshold, we stop subdividing that domain and record it, since it might contain point distributions whose distance sums exceed the conjectured maximum. The process terminates when all domains have been examined. If all domains are verified successfully and the record list is empty, then the conjecture is in fact proved. The complete algorithm is described below (implemented as the procedure \textbf{spchecked} in the package \textbf{fivepoints}): \begin{algorithm}[H] \scriptsize \dontprintsemicolon \linesnumbered \caption{CheckDomain} \KwIn{intervallist, methods, notcheckbipyramid, notcheckpyramid} \KwOut{true/false} \Begin{ checkbipyramid := not notcheckbipyramid; checkpyramid := not notcheckpyramid; checkmethods := methods\; \If{checkbipyramid}{ \If( \;\#\textit{bipyramidintervallist is the rectangular domain we excluded first near the bipyramid distribution.} ) {intervallist is contained in domain bipyramidintervallist}{ add $[-1]$ to checkprocess\; \#\textit{record the checking process}\; \Return{true} } \If{some variable interval in intervallist is disjoint from the corresponding variable interval in bipyramidintervallist} {checkbipyramid := false} } \If{checkpyramid}{ \If(\;\#\textit{pyramidintervallist is the rectangular domain we excluded first near the pyramid distribution.} ) {intervallist is contained in domain pyramidintervallist}{ add $[0]$ to checkprocess; \Return{true} } \If{some variable interval in intervallist is disjoint from the corresponding variable interval in pyramidintervallist} {checkpyramid := false} } \ForAll{$methods_i \in methods$}{ \If{domain intervallist is verified successfully by method $methods_i$}{ add $[i]$ to checkprocess; \Return{true} } } dim := the widest interval position in interval vector intervallist\; \If{the width of the dim-th interval in intervallist $<$ $\frac{1}{1000}$}{ add intervallist to notchecked\; \#\textit{notchecked records domains that cannot be verified successfully}\; add $[-4]$ to checkprocess; \Return{false} } add dim to checkprocess\; \#\textit{dim designates the variable to be subdivided, and is recorded in the checking process}\; subintervallist := subdivide the domain intervallist in the dim-th interval\; cur := true\; \ForAll{$subintervallist_i \in subintervallist$}{ \If{not CheckDomain($subintervallist_i$, checkmethods, not checkbipyramid, not checkpyramid)}{cur := false} } \If{cur}{add $[-2]$ to checkprocess} \Else{add $[-3]$ to checkprocess} \Return{cur} } \end{algorithm} \subsection{Verification process}
In order to subdivide the domains into appropriate widths, we first performed some experiments; in the end we subdivide as follows. Each of the domains $\mathcal{D}^{(1)}$ and $\mathcal{D}^{(2)}$ defined in \S\,\ref{5points:math} is subdivided by trisecting each of its intervals, so we get $3^7=2187$ subdomains each, denoted respectively by: $$\mathcal{D}^{(1)}_1,\mathcal{D}^{(1)}_2,\ldots,\mathcal{D}^{(1)}_{2187},$$ and $$\mathcal{D}^{(2)}_1,\mathcal{D}^{(2)}_2,\ldots,\mathcal{D}^{(2)}_{2187}.$$ If some of these subdomains are difficult to verify, we subdivide them again in the same way. In practice, the following domains needed to be subdivided again: \begin{equation} \label{eqn:subdivision} \begin{split} &\mathcal{D}^{(1)}_{62}, \mathcal{D}^{(1)}_{158}, \mathcal{D}^{(1)}_{239}, \mathcal{D}^{(1)}_{863}, \mathcal{D}^{(1)}_{1102}, \mathcal{D}^{(1)}_{1105}, \mathcal{D}^{(1)}_{1106}, \mathcal{D}^{(1)}_{2114}, \mathcal{D}^{(1)}_{2132},\\ &\mathcal{D}^{(1)}_{1105-1101}, \mathcal{D}^{(1)}_{1106-834}, \mathcal{D}^{(1)}_{1106-861}, \mathcal{D}^{(1)}_{1106-1099}, \mathcal{D}^{(1)}_{1106-1100},\\ &\mathcal{D}^{(1)}_{1105-1101-1100}, \mathcal{D}^{(1)}_{1106-834-725}, \mathcal{D}^{(1)}_{1106-834-726},\\ &\mathcal{D}^{(1)}_{1106-834-725-1752}, \mathcal{D}^{(1)}_{1106-834-726-1507}, \mathcal{D}^{(1)}_{1106-834-726-1750}, \end{split} \end{equation} where $\mathcal{D}^{(1)}_{1105-1101}$ denotes the 1101st of the 2187 subdomains of $\mathcal{D}^{(1)}_{1105}$; the other notations of this kind are understood analogously. \subsection{Algorithm implementations} The Maple package \textbf{fivepoints} implements the algorithms described in the sections above. For the detailed code, see Appendix \ref{fivepoints}. \section{Conclusion} The following are the verification times for the various domains (measured on a computer with a Pentium IV 3.0 GHz CPU and 1 GB of RAM; times will differ on other machines): \begin{enumerate} \item Time used to verify domain $\mathcal{D}^{(1)}$: $782534.203$ seconds. \item Time used to verify domain $\mathcal{D}^{(2)}$: $8797.600$ seconds. \item Total time: $791331.803$ seconds. \end{enumerate} This completes the proof for the problem of the spherical distribution of 5 points.
\section{Introduction} Relativistic jets are considered in various contexts in high-energy astrophysics, such as active galactic nuclei (AGNs) \citep{up95,ferrari98}, microquasars \citep{mirabel99}, and potentially gamma-ray bursts (GRBs) \citep{piran05,mes06}. The interaction between fast moving jets (the relevant Lorentz factors are $\gamma_{jet} \sim 10$--$20$ in AGNs and $\gamma_{jet} \gtrsim 10^2$ in GRBs) and the surrounding medium is very important for understanding the global dynamics of the jet system, because it is related to the mass, momentum, and energy transport across the boundary layers. In this context, the development of velocity shear instabilities has been of interest (\citet{tur76,bp76,ferrari80,birk91,bodo04,osm08} and references therein). Moreover, a relativistic jet-medium boundary is a potential site of high-energy particle acceleration as well \citep{ost00,so02}. Recently, it has been reported that the jet-medium interaction is more complex than previously thought, even in the simplest one-dimensional (1D) case. Posing a Riemann problem in relativistic hydrodynamics (RHD), \citet{aloy06} showed that the tangential hydrodynamic velocity and the relevant Lorentz factor ($\gamma_{BL}$) in the boundary layer are anomalously accelerated ($\gamma_{BL}>\gamma_{jet}$) when the jet is over-pressured. \citet{mizuno08} studied relativistic magnetohydrodynamic (RMHD) effects, and reported that a perpendicular magnetic field enhances the boost effect. \citet{kom09b} discussed a similar tangential boost in their RMHD simulation of a collapsar jet. Such an anomalous boost effect may be responsible for increasing the jet's Lorentz factor \citep{aloy06,mizuno08} and for modulating the radiative signature of the jet \citep{aloy08}. However, its physical mechanism has remained unclear, and therefore no quantitative analysis has been performed. In this paper, we study the mechanism of the anomalous boost by using RHD/RMHD simulations and an analytic theory. In Section 2, we describe the problem setup. In Section 3, we present the simulation results. In Section 4, we construct an RHD/RMHD theory of the problem. In Section 5, we additionally discuss kinetic aspects. The last section, Section 6, contains the discussion and summary. \section{Problem setup} Following earlier works \citep{aloy06,mizuno08}, we study a 1D Riemann problem in a jet-like configuration, which is schematically illustrated in Figure \ref{fig:jet}. A jet travels upward in the $+z$-direction in a stationary ambient medium. The interaction between the jet and the medium is considered in the $x$-direction, and we assume $\partial_y=\partial_z=0$. Initially the two regions are separated by a discontinuity, and we study the time evolution of this 1D system. We employ the following ideal RMHD equations \citep{anile89}. For convenience we set $c=1$ and employ Lorentz--Heaviside units such that all $({4\pi})^{1/2}$ factors disappear.
\begin{eqnarray} && \partial_t(\gamma \rho) + \div (\gamma \rho \vec{v}) = 0 \label{eq:rmhda} \\ && \partial_t \vec{m} + \div ( \gamma^2 w_t \vec{v}\vec{v} - \vec{b}\vec{b} + p_t \vec{I} ) = 0 \label{eq:rmhdb} \\ && \partial_t \mathcal{E} + \div \vec{m} = 0 \label{eq:rmhdc} \\ && \partial_t \vec{B} + \div ( \vec{v}\vec{B} - \vec{B}\vec{v} ) = 0 \label{eq:rmhdd} \\ && \vec{E} + \vec{v} \times \vec{B} = 0 \label{eq:rmhde} \end{eqnarray} \begin{eqnarray} & \left\{ \begin{array}{ccl} \vec{m} &=& \gamma^2 w_t\vec{v} - b_0\vec{b} = \gamma^2 \rho h \vec{v} + (\vec{E}\times\vec{B}) \\ \mathcal{E} &=& \gamma^2w_t - b_0 b_0 - p_t \\ \vec{b} &=& ({\vec{B}}/{\gamma}) + \gamma (\vec{v}\cdot\vec{B})\vec{v} \\ b_0 &=& \gamma(\vec{v}\cdot\vec{B}) \\ w_t &=& \rho h + b^2 = \rho+{\Gamma p_g}/({\Gamma-1}) + b^2 \\ p_t &=& p_g + \frac{1}{2} b^2 \\ p_g &=& \rho T \end{array} \right. \end{eqnarray} In the above equations, $\gamma$ is the Lorentz factor, $\rho$ is the proper mass density, $\vec{v}$ is the velocity, $\vec{m}$ is the momentum density, $\mathcal{E}$ is the energy density, $w_t$ is the total enthalpy, $h$ is the specific enthalpy, $p_t$ is the total pressure, $p_g$ is the gas pressure, $T$ is the gas temperature (with the Boltzmann constant absorbed), and $b^{\alpha} = ( b_0, \vec{b} )$ is the covariant magnetic field. Note that $b^2=b^{\alpha}b_{\alpha}=B^2/\gamma^2+(\vec{v}\cdot\vec{B})^2=(B^2-E^2)$ is a Lorentz invariant. We use an equation of state with a constant polytropic index of $\Gamma=4/3$. \begin{figure}[htbp] \begin{center} \ifjournal \plotone{f1.eps} \else \plotone{f1.pdf} \fi \caption{ Our jet geometry. We consider the jet on the left side ($L$) and the ambient medium on the right side ($R$). \label{fig:jet}} \end{center} \end{figure} We developed an RMHD code to numerically solve the problem. We employ a relativistic HLLD scheme \citep{mig09,miyoshi05}, which considers multiple states inside the Riemann fan in order to resolve discontinuities better. We interpolate the spatial profile by a monotonized central limiter \citep{mc} and solve the temporal evolution by the second-order total variation diminishing (TVD) Runge--Kutta method. Relativistic primitive variables are recovered by \citet{mig07b}'s inversion scheme. The model parameters are presented in Table \ref{table}. The subscripts $L$ and $R$ denote the properties in the two regions ($L$ for the left side or the jet, and $R$ for the right side or the ambient medium). The Lorentz factor of the jet is set to $\gamma_{jet}=7$. We initially set $B_{x}=0$. In our 1D configuration this automatically means $B_x=0$ at all times. The condition $B_{x}=0$ allows us to simplify the numerical scheme, because a five-wave HLLD problem is reduced to a three-wave problem (see \citet{mig09}, Section 3.4.1). The first model H1 has no magnetic fields (RHD). The other two models contain magnetic fields inside the jet: the jet-aligned magnetic field ($B_z$: model M1) and the out-of-plane magnetic field ($B_y$: model M2). Importantly, the total pressure $p_{t,L}$ is set to the same value in all models. These RMHD models are analogous to the ``poloidal'' (M1) and ``toroidal'' (M2) cases in \citet{mizuno08}. The spatial domain of $-0.2 \le x \le 0.2$ is resolved by 6400 grid cells. All simulation results are checked against the analytic solver of \citet{giac06}.
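For concreteness, the auxiliary quantities defined above can be evaluated directly from a primitive state $(\rho, p_g, \vec{v}, \vec{B})$. The following minimal Python sketch is illustrative only (it is not part of our simulation code); as a check, it reproduces the tabulated total pressure for the left state of model H1 in Table \ref{table}.
\begin{verbatim}
import numpy as np

GAMMA = 4.0 / 3.0                  # polytropic index used in this paper

def derived(rho, pg, v, B):
    # Auxiliary RMHD quantities from a primitive state, with c = 1
    # and Lorentz-Heaviside units (no 4*pi factors).
    v, B = np.asarray(v, float), np.asarray(B, float)
    gam = 1.0 / np.sqrt(1.0 - v @ v)            # Lorentz factor
    vB = v @ B
    b0 = gam * vB                               # b^0 = gamma (v.B)
    bvec = B / gam + gam * vB * v               # spatial part of b^alpha
    b2 = B @ B / gam**2 + vB**2                 # b^2 = B^2 - E^2
    wt = rho + GAMMA * pg / (GAMMA - 1.0) + b2  # total enthalpy
    pt = pg + 0.5 * b2                          # total pressure
    m = gam**2 * wt * v - b0 * bvec             # momentum density
    En = gam**2 * wt - b0**2 - pt               # energy density
    return gam, b2, wt, pt, m, En

# left state of model H1: rho = 0.1, p_g = 10, v_z = 0.99, B = 0
gam, b2, wt, pt, m, En = derived(0.1, 10.0, [0, 0, 0.99], [0, 0, 0])
print(gam, pt)   # gamma ~ 7.09 (tabulated as gamma_jet = 7), p_t = 10
\end{verbatim}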
\ifjournal \begin{deluxetable}{l|rccccccccc|rcccccccc} \rotate \tablewidth{0pt} \tabletypesize{\scriptsize} \else \begin{deluxetable*}{l|rccccccccc|rcccccccc} \fi \tablecaption{\label{table} List of Simulation Models} \tablehead{ \colhead{Model} & \multicolumn{10}{c|}{Left} & \multicolumn{9}{c}{Right} \\ \colhead{} & \colhead{$\rho_L$} & \colhead{$p_{g,L}$} & \colhead{$v_{x,L}$} & \colhead{$v_{y,L}$} & \colhead{$v_{z,L}$} & \colhead{$\gamma_{jet}$} & \colhead{$B_{x,L}$} & \colhead{$B_{y,L}$} & \colhead{$B_{z,L}$} & \colhead{$p_{t,L}$} & \colhead{$\rho_R$} & \colhead{$p_{g,R}$} & \colhead{$v_{x,R}$} & \colhead{$v_{y,R}$} & \colhead{$v_{z,R}$} & \colhead{$B_{x,R}$} & \colhead{$B_{y,R}$} & \colhead{$B_{z,R}$} & \colhead{$p_{t,R}$} } \startdata H1 (RHD) & 0.1 & 10 & 0 & 0 &0.99 & 7 & 0 & 0 & 0 & 10 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ M1 (RMHD) & 0.1 & 2 & 0 & 0 &0.99 & 7 & 0 & 0 & 4 & 10 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ M2 (RMHD) & 0.1 & 2 & 0 & 0 &0.99 & 7 & 0 & 28 & 0 & 10 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ \enddata \tablecomments{ Models and parameters for the Riemann problems. The subscript $L$ denotes the jet (left side) properties and $R$ the ambient medium (right side). In addition, two parameter surveys are performed by changing $\rho_{L}=(10^{-2},10^{-3})$ and $p_{g,R}=(3, 0.3, 0.1)$. } \ifjournal \end{deluxetable} \else \end{deluxetable*} \fi \section{Results} Shown in Figure \ref{fig:profile}{\itshape a} are the simulation results of model H1 at $t=0.2$. One can recognize a three-wave structure: (1) a leftward rarefaction wave ($x \sim -0.02$), (2) a contact discontinuity that separates the jet and the ambient medium ($x \sim 0.02$), and (3) a right-going forward shock ($x \sim 0.12$). The system exhibits a self-similar evolution as these waves propagate in time. Numerical errors are negligible, thanks to the high resolution and the stable numerical scheme. In the rarefaction region between (1) and (2), the Lorentz factor of the fluid gradually increases from $\gamma_{jet} = 7$, and reaches its maximum ($\sim 11.7$) immediately to the left of the contact discontinuity. This is consistent with the anomalous boost demonstrated in previous works. Hereafter, we refer to this boosted region as the ``boundary layer'' and denote the Lorentz factor in its flat part by $\gamma_{BL}$. The tangential velocity increases there, as shown in the small box in Figure \ref{fig:profile}{\itshape a}. \begin{figure}[bhtp] \begin{center} \ifjournal \includegraphics[width={0.8\columnwidth},clip]{f2.eps} \else \includegraphics[width={\columnwidth},clip]{f2.pdf} \fi \caption{ (Color online) (a) Simulation result of model H1 at $t=0.2$. The fluid Lorentz factor $\gamma$, the gas pressure $p_{g}$, the normal velocity $v_x$, and the tangential velocity $v_z$ are presented. The tangential velocity $v_z$ in the boosted region is also shown magnified in the small box. (b) The total pressure $p_t$ ({\itshape solid lines}) and the gas pressure $p_g$ ({\itshape dashed lines}) in models H1 ({\itshape black}), M1 ({\itshape red thin line}), and M2 ({\itshape blue thick line}) at $t=0.2$. (c) The Lorentz factor $\gamma$ in the three models at $t=0.2$. The small numbers indicate the Lorentz factors in the relevant flat regions. \label{fig:profile}} \end{center} \end{figure} The RMHD models evolve similarly to the RHD model H1.
Figure \ref{fig:profile}{\itshape b} compares the pressure profiles of the three models, and Figure \ref{fig:profile}{\itshape c} shows the profiles of the Lorentz factor. Since the jet contains a magnetic field in the RMHD cases, the rarefaction wave fronts propagate faster than in the RHD case, because the Alfv\'{e}n speeds ($\sim c$ in the proper frame) exceed the sound speed ($c_s \sim {c}/{\sqrt{3}}$ in the proper frame). One can also see the tangential discontinuities between the jet and the ambient medium ($x \sim 0.03$ in model M1, $x \sim 0.01$ in M2), where the magnetic pressure disappears and the gas pressure suddenly increases to maintain the total pressure. The anomalous boost similarly takes place on the jet side of those discontinuities. The forward shocks are just outside the plotted range in the RMHD cases. As reported by \citet{mizuno08}, model M2 with a perpendicular magnetic field ($B_y$) exhibits a stronger boost ($\gamma_{BL} \sim 16.2$) than model M1 with a parallel magnetic field ($B_z$) ($\gamma_{BL} \sim 11.1$). \section{Analytic theory} \subsection{RHD theory} In this section we study the mechanics of the anomalous boost. First we examine the RHD case. Combining the momentum equation (Equation \ref{eq:rmhdb}) and the energy equation (Equation \ref{eq:rmhdc}) \citep{sakai}, \begin{eqnarray*} \partial_t (\gamma^2\rho h\vec{v}) + \vec{v} \Big( \div ( {\gamma^2 \rho h \vec{v}} ) \Big) + {\gamma^2 \rho h} (\vec{v} \cdot \nabla) \vec{v} + \nabla p_g = 0 , \\ \vec{v} \partial_t (\gamma^2\rho h) - \vec{v} \partial_t p_g + \vec{v} \Big( \div ( {\gamma^2 \rho h \vec{v}} ) \Big) = 0 , \end{eqnarray*} we obtain \begin{eqnarray} \label{eq:dpdt} \gamma^2 \rho h \frac{D \vec{v}}{Dt} = -\nabla p_g - \vec{v} \frac{\partial p_g}{\partial t}. \end{eqnarray} Since $\partial_z=0$, the anomalous boost evidently comes from the last term, $\gamma^2 \rho h (D/Dt) v_z \sim - \partial_t p_g$. This term has no Newtonian counterpart; it is a purely relativistic effect. In usual contexts, the term slows down the bulk fluid acceleration in the high-temperature regime ($p_g \gtrsim \rho$), as if the relativistic pressure increased the inertia. In the present case, since the pressure decreases in the rarefaction region, the force in the last term boosts the fluid in the $z$-direction until the fluid element reaches the constant-pressure region. We see that the term converts excess internal energy into the energy of the bulk motion. Next, we arrange the momentum equation (Equation \ref{eq:rmhdb}) in the following way. \begin{eqnarray*} \label{eq:p_force} \gamma\rho (\partial_t + \vec{v} \cdot \nabla) ( \gamma h \vec{v} ) + \Big[ \partial_t ( \gamma\rho ) + \div ( \gamma \rho \vec{v} ) \Big] \gamma h \vec{v} = - \nabla p_g . \end{eqnarray*} Using Equation \ref{eq:rmhda}, we obtain \begin{eqnarray} \label{eq:constP} \gamma\rho \frac{D}{Dt} ( \gamma h v_z ) = 0. \end{eqnarray} Thus, the specific momentum (the momentum density per unit lab-frame gas density) remains constant, {\itshape as it should}. This is because no external forces accelerate the fluid, and because the ideal-fluid assumption does not allow momentum transport in its own frame. We confirmed that $\gamma h v_z$ is well conserved on both sides in the simulation. In model H1, the jet velocity is initially relativistic ($v_{z,L}\sim 1$), and we therefore expect \begin{eqnarray} \label{eq:rh} \gamma h \sim {\rm const.} \end{eqnarray} in the rarefaction region.
The behavior of Equation \ref{eq:rh} is controlled by the gas temperature, $T=(p_g/\rho)$. When the gas is cold ($T \ll 1$), both the specific enthalpy $h \sim 1$ and the Lorentz factor $\gamma$ remain constant; no boost occurs. When the gas is relativistically hot ($T \gg 1$), $h \sim 4T$ becomes a function of $T$. In this limit, we find \begin{eqnarray} \label{eq:rT} \gamma T = \gamma ({p_g}/{\rho}) \sim {\rm const.} \end{eqnarray} We see that the Lorentz factor increases when the relativistic temperature decreases. Physically, this reflects the temporal decrease of the pressure (Equation \ref{eq:dpdt}). Combining with the polytropic law (${p_g}{\rho^{-\Gamma}} = {\rm const.}$), we obtain the following relations, \begin{eqnarray} \label{eq:rho} \gamma \rho^{\Gamma-1} \sim {\rm const.} \\ \label{eq:p} \gamma p_g^{(\Gamma-1)/\Gamma} \sim {\rm const.} \end{eqnarray} Using these relations, we can estimate the boosted Lorentz factor $\gamma_{BL}$. Inside the rarefaction region, the gas pressure decreases to that at the contact discontinuity ($p_{g,D}$). Since $p_{g,D} \gtrsim p_{g,R}$, we immediately obtain an upper bound on $\gamma_{BL}$, \begin{eqnarray} \label{eq:max} \gamma_{BL} \sim \gamma_{jet} \Big(\frac{p_{g,L}}{p_{g,D}}\Big)^{(\Gamma-1)/\Gamma} \lesssim \gamma_{jet} \Big(\frac{p_{g,L}}{p_{g,R}}\Big)^{1/4} . \end{eqnarray} It is interesting that $\gamma_{BL}$ is controlled by the external pressure $p_{t,R}$. The over-pressured jet pushes the discontinuity outward, and the external pressure terminates the boost by stopping the further development of the rarefaction structure; the external pressure does no mechanical work on the jet fluid. Note that the boost does not operate once the jet-side pressure becomes nonrelativistic ($T \lesssim 1$). We have another restriction from Equation \ref{eq:rT}, \begin{eqnarray} \label{eq:max2} \gamma_{BL} \ll \gamma_{jet} \Big(\frac{p_{g,L}}{\rho_{L}}\Big). \end{eqnarray} This replaces Equation \ref{eq:max} when the external pressure is very low ($p_{g,R}\rightarrow 0$). We also examine the energy equation. Inside the over-pressured ($p_{g}\gg \rho$) and relativistically moving ($4\gamma_{jet}^2 \gg 1$) jet, the fluid energy density is \begin{eqnarray} \label{eq:ene1} \mathcal{E} = ( \gamma^2 w_t - p_{g} ) \sim (\gamma \rho) \gamma h . \end{eqnarray} Substituting Equation \ref{eq:ene1} into Equation \ref{eq:rmhdc}, we obtain the same condition as Equation \ref{eq:rh}: \begin{eqnarray} \label{eq:ene2} \gamma\rho (\partial_t + \vec{v} \cdot \nabla) ( \gamma h ) + \Big[ \partial_t ( \gamma\rho ) + \div ( \gamma \rho \vec{v} ) \Big] \gamma h \nonumber \\ = \gamma\rho \frac{D}{Dt} ( \gamma h ) = 0 . \end{eqnarray} Equations \ref{eq:ene1} and \ref{eq:ene2} tell us that the specific energy density (the energy density per unit lab-frame gas density) is conserved along the fluid motion. This is because the total energy flow ($\gamma^2w_t\vec{v}\sim 4\gamma^2p_g\vec{v}$) is much larger than the work done to expand the jet outward ($p_g\vec{v}$), and because the ideal fluid involves no heat transfer in its proper frame. \subsection{RMHD theory} Let us consider the effect of the jet-aligned magnetic field, $\vec{B}_L=(0,0,B_z)$. After some algebra in Equation \ref{eq:rmhdb}, we find that both the $z$-momentum and the $xz$ component of the stress-energy tensor are unchanged from their hydrodynamic forms. Therefore we can utilize Equations \ref{eq:constP} and \ref{eq:rh}.
We further consider flux conservation, \begin{eqnarray} \frac{B_z}{\gamma \rho} = {\rm const.} \end{eqnarray} Combining this with Equation \ref{eq:rho}, we obtain \begin{eqnarray} \label{eq:B} \gamma B_z^{({\Gamma-1})/({2-\Gamma})} \sim {\rm const.} \end{eqnarray} When $v_z \sim 1$, as in the boosted rarefaction region, the magnetic pressure approximates $\frac{1}{2}b^2 \sim \frac{1}{2}B^2_{z}$. From Equations \ref{eq:p} and \ref{eq:B}, we construct the pressure condition across the tangential discontinuity, \begin{eqnarray} p_{g,L} \Big( \frac{\gamma_{jet}}{\gamma_{BL}} \Big)^{\frac{\Gamma}{\Gamma-1}} + \frac{B^2_{z,L}}{2} \Big( \frac{\gamma_{jet}}{\gamma_{BL}} \Big)^{\frac{2(2-\Gamma)}{\Gamma-1}} \sim p_{t,D} \gtrsim p_{t,R}. \end{eqnarray} The power indices are both $4$ when $\Gamma=4/3$. Therefore we obtain a generalized upper bound, \begin{eqnarray} \label{eq:max_para} \gamma_{BL} \lesssim \gamma_{jet} \Big(\frac{p_{t,L}}{p_{t,R}}\Big)^{1/4} . \end{eqnarray} Note that the total pressure $p_t$ replaces the gas pressure $p_g$ in Equation \ref{eq:max}. In the case of the perpendicular magnetic field, $\vec{B}_L=(0,B_y,0)$, the initial choice of $v_y=0$ simplifies the equations (e.g., $b_0=b_x=0$), because both $v_y$ and $B_z$ remain zero \citep{romero05}. In this case, the boost comes from the temporal decrease of the total pressure, $\gamma^2 w_t (D/Dt) v_z \sim - \partial_t p_t$. From Equations \ref{eq:rmhda} and \ref{eq:rmhdb}, we can similarly derive the conservation law, \begin{eqnarray} \label{eq:by_const} \gamma\rho \frac{D}{Dt} \Big( \gamma \frac{w_t}{\rho} v_z \Big) &=& \gamma\rho \frac{D}{Dt} \Big( \gamma ( h+ \frac{b^2}{\rho}) v_z \Big) = 0 . \end{eqnarray} For simplicity, we consider the magnetically dominated limit of $b^2/\rho \gg h$ (or $b^2 \gg 4p_g$). On the jet side ($v_z\sim 1$) we expect $\gamma b^2 / \rho \sim {\rm const.}$ Combining this with the flux conservation \begin{eqnarray} \label{eq:by} \Big( \frac{B_y}{\gamma\rho} \Big)^2 = \frac{b^2}{\rho^2} = {\rm const.} , \end{eqnarray} we expect \begin{eqnarray} \gamma^2 b^2 \sim {\rm const.} \end{eqnarray} The condition across the discontinuity leads to an upper bound on $\gamma_{BL}$, \begin{eqnarray} p_{t,L} \Big( \frac{\gamma_{jet}}{\gamma_{BL}} \Big)^{2} \sim \frac{b_L^2}{2}\Big( \frac{\gamma_{jet}}{\gamma_{BL}} \Big)^{2} \sim p_{t,D} \gtrsim p_{t,R} ,\\ \label{eq:max_perp} \gamma_{BL} \lesssim \gamma_{jet} \Big(\frac{p_{t,L}}{p_{t,R}}\Big)^{1/2}. \end{eqnarray} Furthermore, from the polytropic law and Equation \ref{eq:by}, we see that the magnetic pressure decays more rapidly than the gas pressure, \begin{eqnarray} b^2 \propto p_g^{2/\Gamma} \sim p_g^{3/2}. \end{eqnarray} Consequently, the system behaves similarly to the hydrodynamic case once the gas contribution and the magnetic contribution become comparable. Therefore, we usually expect intermediate results between Equations \ref{eq:max_para} and \ref{eq:max_perp}. Of the two RMHD cases, the boost is more significant in the perpendicular case than in the parallel case \citep{mizuno08}. This is because more electromagnetic energy and momentum are available per unit of gas --- the jet initially contains larger field energy $\frac{1}{2}(B^2+E^2)$ and carries additional upward momentum in the form of a Poynting flux ($\vec{E}\times\vec{B}$). We also recall that the boost process is related to the pressure decrease, and that the magnetic pressure acts preferentially in the directions perpendicular to the field.
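Before comparing these bounds with the simulations, we note that Equations \ref{eq:max}, \ref{eq:max_para}, and \ref{eq:max_perp} are trivial to evaluate. A short illustrative script (assuming the reference parameters of Table \ref{table}):
\begin{verbatim}
gamma_jet = 7.0
pt_ratio = 10.0                # p_{t,L}/p_{t,R} of the reference models

for label, s in [("hydro/parallel", 0.25), ("perpendicular", 0.5)]:
    bound = gamma_jet * pt_ratio**s          # upper bound on gamma_BL
    print(label, bound, bound / gamma_jet)
# hydro/parallel: gamma_BL <~ 12.45, ratio 1.78
# perpendicular:  gamma_BL <~ 22.14, ratio 3.16
\end{verbatim}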
\begin{figure}[hbtp] \begin{center} \ifjournal \includegraphics[width={\columnwidth},clip]{f3.eps} \else \includegraphics[width={\columnwidth},clip]{f3.pdf} \fi \caption{ Anomalous boost ($\gamma_{BL}/\gamma_{jet}$) as a function of the total pressure ratio ($p_{t,L}/p_{t,R}$). Three models (H1, M1, and M2) are compared with the theories: Equations \ref{eq:max} and \ref{eq:max_para} ({\itshape solid line}) and Equation \ref{eq:max_perp} ({\itshape dotted line}). \label{fig:max}} \end{center} \end{figure} \subsection{Numerical Tests} In order to verify the scaling theory, we carry out a series of parameter surveys by varying the external pressure, $p_{t,R}=p_{g,R}$ (Table \ref{table}). Figure \ref{fig:max} shows the boosted Lorentz factors ($\gamma_{BL}$) in our RMHD simulations as a function of ($p_{t,L}/p_{t,R}$). These values are checked against analytic solutions \citep{giac06}. For example, in the reference cases ($p_{t,L}/p_{t,R}=10$), the theory predicts $\gamma_{BL}/\gamma_{jet} \lesssim 1.78$ (Equations \ref{eq:max} and \ref{eq:max_para}) and $\gamma_{BL}/\gamma_{jet} \lesssim 3.16$ (Equation \ref{eq:max_perp}), while we obtain $\gamma_{BL}/\gamma_{jet}=1.67$ (H1), $1.59$ (M1), and $2.31$ (M2) (see also Figure \ref{fig:profile}{\itshape c}). In general, the scaling laws are in excellent agreement with the boost amplitudes in the H1 and M1 series. The M1 cases are slightly affected by another limitation (e.g., Equation \ref{eq:max2}), due to the lower initial temperature $(p_{g,L}/\rho_L)$ in the jet. In the M2 series, Equation \ref{eq:max_perp} works as a looser upper limit. Since the theory is valid when the magnetic pressure dominates in the jet, $p_{t,L}\sim b_L^2/2$, it is reasonable that we obtain intermediate results in these specific cases. We perform another parameter survey by reducing the jet-side density $\rho_{L}$ (Table \ref{table}). The results are very similar. Since $p_{L} \gg \rho_{L}$, we obtain even better agreement with the theory in the M1 series. \section{Relevance for kinetic models} In this section, we examine the problem from the viewpoint of kinetic theory. For brevity, we assume that the gas moves in the $+z$-direction with a speed of $\beta=v_z$, and we set the particle rest mass to $m=1$. Although the RHD theory does not assume a specific distribution function, a drifting Maxwellian \citep{jut11,synge} is the best starting point: \begin{eqnarray} \label{eq:js} f (\vec{p}) d\vec{p} ~ \propto ~ \exp\Big[ -\frac{ \gamma (p_0 - \beta p_z) }{ T } \Big] ~d\vec{p} , \end{eqnarray} where $\vec{p}$ is the particle momentum, $p_0=[{1+(\vec{p}\cdot\vec{p})}]^{1/2}$ is the particle energy, and $\gamma,\beta$ are the bulk fluid properties. Shown in Figure \ref{fig:PDF} are momentum-space profiles of sample distribution functions. Two samples are generated from Equation \ref{eq:js}: (1) $T=100$ and $\gamma=7$ and (2) $T=70$ and $\gamma=10$, such that they satisfy Equation \ref{eq:rT}. They are intended to mimic (1) the initial condition in the jet and (2) the evolved population in the rarefaction region, in model H1. The lab-frame density $\gamma \rho$ is set to the same value. The $p_x$-profiles (Figure \ref{fig:PDF}{\itshape a}) are clearly different, reflecting the different thermal spreads. In contrast, the $p_z$-profiles (Figure \ref{fig:PDF}{\itshape b}), which extend far in the $+p_z$ direction, look quite similar.
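These $p_z$-profiles can be reproduced directly from Equation \ref{eq:js}, because the integral over the transverse momenta has a closed form: with $k=\gamma/T$ and $m_z=(1+p_z^2)^{1/2}$, one finds $F(p_z)\propto 2\pi\,(m_z/k+1/k^2)\,e^{-k(m_z-\beta p_z)}$ up to normalization. A minimal Python sketch (illustrative only, with $m=c=1$ as in the text):
\begin{verbatim}
import numpy as np

def F_pz(pz, T, gam):
    # p_z-profile of the drifting Maxwellian, Eq. (js), integrated
    # in closed form over (p_x, p_y); k = gamma/T, m_z = sqrt(1+pz^2).
    beta = np.sqrt(1.0 - 1.0 / gam**2)
    k = gam / T
    m_z = np.sqrt(1.0 + pz**2)
    return 2*np.pi * (m_z/k + 1.0/k**2) * np.exp(-k*(m_z - beta*pz))

pz = np.linspace(-3.0e3, 4.0e4, 400001)
for T, gam in [(100.0, 7.0), (70.0, 10.0)]:   # the two samples
    F = F_pz(pz, T, gam)
    F /= F.sum()                    # equal lab-frame densities
    print(T, gam, (F * pz).sum())   # mean p_z ~ gamma*h*beta
\end{verbatim}
The two normalized profiles nearly coincide on the $+p_z$ side and carry nearly the same mean momentum per particle, in accordance with Equation \ref{eq:constP}.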
On the left side of the $p_z$-space ($p_z \ll 0$), from Equation \ref{eq:js} and $p_z \approx -p_0$, the asymptotic slope index $s$ of the distribution $F(p_z) \propto e^{s p_z}$ yields \begin{eqnarray} s \sim \frac{\gamma (1+\beta)}{T} \sim \frac{2\gamma}{T}. \end{eqnarray} We see that the population on this side is quite limited when $\gamma$ is large. On the right side, the index $s$ is \begin{eqnarray} \label{eq:s} s \sim -\frac{\gamma (1 - \beta)}{T} \sim -\frac{1}{2\gamma T} \sim {\rm const.} \end{eqnarray} Therefore the $p_z$-profile on this side remains similar even when the ``fluid'' velocity changes. In addition, since the right-side population carries most of the momentum and energy, the two distributions carry nearly the same momentum and energy density per unit lab-frame density, as implied by Equations \ref{eq:constP} and \ref{eq:ene2}. The relative differences are 0.3\% in momentum and 0.6\% in energy, respectively. An important implication of Equation \ref{eq:s} is that the typical momentum spread in the $+p_z$-direction is $|s|^{-1} \sim 2\gamma T$. Recalling the effective boost condition $T \gg 1$, we see that the thermal spread is much larger (by a factor of $2T$) than the bulk Lorentz factor $\gamma$ in the relativistic momentum space. \begin{figure}[thbp] \begin{center} \ifjournal \includegraphics[width={\columnwidth},clip]{f4.eps} \else \includegraphics[width={\columnwidth},clip]{f4.pdf} \fi \caption{ (Color online) (a) Ideal gas distribution functions for (1) $T=100$ and $\gamma=7$ ({\itshape solid line}) and (2) $T=70$ and $\gamma=10$ ({\itshape dotted line}) in the $p_x$-space. (b) The same, but in the $p_z$-space. \label{fig:PDF}} \end{center} \end{figure} \section{Discussion and Summary} As shown in Equation \ref{eq:dpdt}, the anomalous bulk boost comes from the temporal decrease of the relativistic pressure. From the energy viewpoint, the term transports the internal energy to that of the bulk motion ($p_g \Rightarrow \gamma$), as mentioned by \citet{aloy08}. The internal-to-bulk energy transport is somewhat counter-intuitive; however, it is a logical consequence of the relativistic fluid formalism. The site of the boost is the rarefaction region. The rarefaction wave involves the temporal pressure decrease behind its wave front, and there is room for the convective fluid motion (Equation \ref{eq:p_force}). In contrast, neither condition is satisfied around the shocks. The anomalous boost does not occur on the other side of the contact/tangential discontinuity, nor will it occur when another shock replaces the rarefaction wave. Therefore, the transition from the shock regime to the rarefaction wave regime \citep{rez02} would be a critical condition for the problem. A similar boost in the normal direction has recently been reported in magnetically dominated rarefaction regions as well \citep{mizuno09}. Another explanation is a relativistic free expansion in the jet frame \citep{kom09b}. When the relativistically strong pressure pushes the gas outward against the external medium, the lateral expansion can be relativistic in the jet frame. The Lorentz factor in the observer frame then becomes $\gamma_{BL} \sim \gamma_{jet} (1-v'^2)^{-1/2}$, where $v'$ is the expansion speed in the jet frame. We expect that the term $-\vec{v}'\partial_{t'} p_t$ enhances such expansion in the rarefaction region, and that the relevant boost is projected into the tangential boost in the observer frame.
Strictly speaking, a 1D problem in the observer frame is no longer identical to that in the jet frame, because a 1D expansion of the discontinuity front in the $+x$-direction is projected onto an oblique direction in the jet frame. The two problems start differently, and therefore the situation is more complicated. A potential limitation is that multi-dimensional instabilities may modulate the 1D evolution. In particular, relativistic Kelvin--Helmholtz (KH) instabilities will be relevant. In the regime of our interest, the increasing Lorentz factor \citep{tur76,bp76,bodo04} and the flow-aligned magnetic field \citep{osm08} suppress the KH mode; for instance, if we employ \citet{bodo04}'s stability condition of $\gamma_{jet} > ( 1+2 \cos^{-2}{\theta} )$ in our RHD jet ($\gamma_{jet}=7$), where $\theta$ is the angle between the jet flow and the wavevector, the instability is allowed only in the quasi-transverse direction. On the other hand, shear layers with density asymmetry are known to be substantially KH-unstable. Once the KH vortex develops, the subsequent turbulence is likely to smooth out the sharp lateral structure. While 1D-like signatures have been found in some three-dimensional RHD \citep{aloy05} and two-dimensional RMHD simulations \citep{mizuno08,tch09,kom09b}, the interference with the KH and other instabilities needs further investigation. In addition, we need to keep in mind that the entire process depends on the ideal-fluid assumption. In order to justify it, collisional or other scattering processes have to relax the gas much faster than the dynamical timescale. These are demanding conditions, especially on the jet side, where the physical processes appear even slower owing to the relativistic effect. In Section 5, we showed that the bulk momentum of the fluid is considerably smaller than the wide thermal spread in the momentum profile when the boost operates. We suspect that the counter-intuitive force may simply be enforced by the ideal-fluid assumption; i.e., the anomalous fluid acceleration may be an artifact of the assumed isotropic fluid velocity distribution. In the real world, we expect that non-ideal effects such as heat flow play a role. In fact, the system involves large gradients of the pressure and the temperature in the rarefaction regions and around the discontinuities. In the high-temperature regime of $T \gg 1$, the energy and momentum balances are mainly controlled by the pressure parts (the internal energy or the enthalpy flux), which can be sensitive to the local gas distribution functions. In summary, we examined the 1D anomalous relativistic boost \citep{aloy06,mizuno08} at the lateral boundary of relativistic jets. We numerically and theoretically confirmed that the anomalous boost occurs in the RHD and RMHD regimes. We further derived simple scaling laws for the accelerated Lorentz factor, \begin{eqnarray} \gamma_{BL} \lesssim \gamma_{jet} \Big(\frac{p_{t,L}}{p_{t,R}}\Big)^s \left\{ \begin{array}{cl} s=1/4 & ~~({\rm hydro,~parallel})\\ s=1/2 & ~~({\rm perpendicular}) \\ \end{array} \right. \nonumber \end{eqnarray} We also note that the process operates in an {\itshape ideal} fluid. Non-ideal effects (heat flow, etc.) as well as multi-dimensional effects are left for future work. We hope that this work will serve as a basic reference for boundary problems in relativistic jets and for the relevant simulations. \begin{acknowledgments} The authors express their gratitude to Tadas Nakamura, Karl Schindler, Yosuke Matsumoto, and Masha Kuznetsova for helpful comments. S.Z.
gratefully acknowledges support from the NASA Postdoctoral Program. \end{acknowledgments} \ifjournal \clearpage \fi
\section{ 1.\hspace{0.5cm}Introduction} It is known that a charged particle in a uniform magnetic field exhibits circular motion with a specific frequency of revolution, called the cyclotron frequency, and a specific radius that depends on the charge and mass of the particle and the magnitude of the magnetic field [1]. Quantum mechanically, a charged particle in a uniform magnetic field exhibits quantised energy levels, called Landau levels [2]. In the classical arena, non-uniform magnetic fields of different kinds have different effects on the dynamics of a charged particle. For instance, a charged particle in a magnetic field with a non-zero gradient undergoes grad-$B$ drift [3]. A charged particle moving along a curved magnetic field line experiences a centrifugal force perpendicular to the magnetic field, and hence drifts in its motion; this kind of drift is called curvature drift [3]. There are several applications of these effects. One of the most important applications is magnetic mirrors, which are used to confine plasmas [4]. The motion of an electron in a magnetic field of constant gradient has been analyzed by Seymour \textit{et al.}, who derived the $x$ and $y$ coordinates of the electron's trajectory in terms of elliptic integrals [5]. The non-uniform magnetic field required for an electron to exhibit a trochoidal trajectory has been calculated, and it turns out to be a function of the $x$ coordinate only [6]. The quantum mechanical treatment of a charged particle in a class of non-uniform magnetic fields has been studied using isospectral Hamiltonians and supersymmetric quantum mechanics [7]. This analysis gives the same Landau level spectrum [8] as in the case of a uniform magnetic field. Although the classical trajectory of a charged particle cannot be solved for the most general non-uniform magnetic field, \textit{i.e.,} one non-uniform in all three coordinates, it can still be solved for some classes of non-uniform magnetic fields. It should be noted that, throughout the paper, only spatially varying magnetic fields without time dependence are considered. We use the Landau gauge to restrict the vector potential to just one component for a constant magnetic field along the $z$ direction. To incorporate non-uniformity in the magnetic field, we introduce a function into the vector potential. Using elementary classical mechanics, we obtain an integral equation which can be solved to get the $x$ and $y$ coordinates of the particle's trajectory. Lastly, we examine the supersymmetry structure of the non-uniform Hamiltonians and observe that an exponentially decaying magnetic field with radial dependence breaks supersymmetry. \section{2.\hspace{0.5cm}Charged Particle in Uniform Magnetic Field} First, we use the quadrature to compute the trajectory of a charged particle in a uniform magnetic field, which is known to be circular. Let us assume a constant magnetic field in the $z$ direction, \textit{i.e.} $\vec{B}=B\hat k$. The vector potential for a constant magnetic field is given by \begin{equation} \vec{A}=-\frac{1}{2}(\vec{r}\times \vec{B}) \label{Eq.1} \end{equation} A direct calculation for the above magnetic field gives $A_x=-yB/2$ and $A_y=xB/2$, while the $z$ component is $0$. We can choose the Landau gauge to reduce $\vec{A}$ to one component such that either $A_x=-yB$ or $A_y=xB$. Let us take $A_x=-yB$. Thus the Lagrangian for the system can be written as \begin{center} $\mathcal{L}=\frac{1}{2}m\sum_{i}\dot q_i^2+q\vec{A}\cdot\dot{\vec{q}}$ \end{center} where the symbols have their usual meanings. Here, we can assume no motion in the $z$ direction.
Expanding the Lagrangian then gives the following expression. \begin{equation} \mathcal{L}=\frac{1}{2}m(\dot x^2+\dot y^2)-qyB\dot x \label{Eq.2} \end{equation} From Eq. \ref{Eq.2} it is evident that $x$ is a cyclic coordinate, so $p_x=\frac{\partial \mathcal{L}}{\partial \dot x}$ is conserved [1]. Thus we have $p_x=\frac{\partial \mathcal{L}}{\partial \dot x}=m\dot x-qBy=c$, or \begin{equation} \dot x=k_1+k_2y \label{Eq.3} \end{equation} where $k_1=c/m$ and $k_2=qB/m$. Since the Lagrangian has no explicit time dependence, the total energy of the system is also a constant of motion. We can get the Hamiltonian from the expression \begin{equation} \mathcal{H}=\sum_i(p_i\dot q_i-\mathcal{L}) \label{Eq.4} \end{equation} where $p_i=\frac{\partial \mathcal{L}}{\partial \dot q_i}$. After putting $\mathcal{L}$ in Eq. \ref{Eq.4}, we have \begin{equation} \mathcal{H}=\frac{1}{2}m(\dot x^2+\dot y^2) \label{Eq.5} \end{equation} From Eq. \ref{Eq.3} and Eq. \ref{Eq.5} we get \begin{equation} \dot y=\pm \sqrt{k_3-k_1^2-k_2^2y^2-2k_1k_2y} \label{Eq.6} \end{equation} where $k_3=2\mathcal{H}/m$. Thus we have \begin{equation} \pm\frac{dy}{\sqrt{\alpha+\beta y+\gamma y^2}}=dt \label{Eq.7} \end{equation} where $\alpha=k_3-k_1^2$, $\beta=-2k_1k_2$ and $\gamma=-k_2^2$. Integrating both sides with appropriate limits (the integral can be found in standard tables), we get \begin{equation} \frac{1}{\sqrt{-\gamma}}\cos^{-1}[-(\beta+2\gamma y)/\sqrt{q}]=\pm t \label{Eq.8} \end{equation} with $q=\beta^2-4\alpha\gamma$. Putting in all the values and simplifying, we get \begin{equation} y=\frac{\sqrt{k_3}}{k_2}\cos k_2t-\frac{k_1}{k_2} \label{Eq.9} \end{equation} From Eq. \ref{Eq.3} we can integrate and calculate $x$ to get: \begin{equation} x=\frac{\sqrt{k_3}}{k_2}\sin k_2t \label{Eq.10} \end{equation} Eqs. \ref{Eq.9} and \ref{Eq.10} define a circle (as can be checked easily by squaring and adding) with a definite frequency of revolution, the cyclotron frequency $\omega=k_2=qB/m$. The radius of the orbit can be calculated from Eq. \ref{Eq.9} and Eq. \ref{Eq.10}: we get $r=\frac{\sqrt{k_3}}{k_2}=mv/qB$, noting that if the energy of the system is conserved then $\mathcal{H}=mv^2/2$, where $v$ is the initial speed of the particle. \section{3.\hspace{0.5cm}Charged particle in a non-uniform magnetic field } We now treat the case of a non-uniform magnetic field. We introduce a function depending on $y$ into the Landau gauge as $A_x=-yBf(y)$, which gives us a non-uniform magnetic field. The magnetic field can be calculated easily using $\vec{B}=\nabla\times \vec{A}$, which gives \begin{equation} \vec{B}=(yBf'(y)+Bf(y))\hat k \label{Eq.11} \end{equation} where $f'(y)=\frac{df(y)}{dy}$. Eq. \ref{Eq.11} represents a special kind of non-uniform magnetic field; by specifying $f(y)$ we obtain different classes of non-uniform magnetic fields. We can again assume no motion along the $z$ axis. The Lagrangian for this case can be written as \begin{equation} \mathcal{L}=\frac{1}{2}m(\dot x^2+\dot y^2)-qyBf(y)\dot x \label{Eq.12} \end{equation} Considering the symmetry of the Lagrangian along $x$, we have $p_x=\frac{\partial \mathcal{L}}{\partial \dot x}=m\dot x-qByf(y)=c$, or \begin{equation} \dot x=k_1+k_2yf(y) \label{Eq.13} \end{equation} where $k_1=c/m$ and $k_2=qB/m$. The Hamiltonian for the system can be calculated using Eq. \ref{Eq.4}, and a simple calculation again gives Eq. \ref{Eq.5}. The Hamiltonian is a constant of motion, as the Lagrangian has no explicit time dependence. From Eqs.
\ref{Eq.5} and \ref{Eq.13} we have \begin{equation} \dot y=\pm\sqrt{k_3-k_1^2-k_2^2y^2f^2(y)-2k_1k_2yf(y)} \label{Eq.14} \end{equation} Eq. \ref{Eq.14} gives us the desired integral equation, which can be solved for different $f(y)$, and in turn for different classes of non-uniform magnetic fields, to get $y$ as a function of $t$. This $y$ can then be substituted in Eq. \ref{Eq.13} to get $x$ as a function of $t$; these two equations define the trajectory of the particle. The integral equation is: \begin{equation} \boxed {\int \frac{dy}{\sqrt{k_3-k_1^2-2k_1k_2 yf(y)-k_2^2y^2f^2(y)}}=\pm\int dt +K } \label{Eq.15} \end{equation} where $K$ is the constant of integration. \section{3.1\hspace{0.5cm}Special Cases} We can check that Eq. \ref{Eq.15} yields correct results in some special cases.\\ {\bfseries Case 1:} $f(y)$=1\\ In this case we get a constant magnetic field along the $z$ direction, as can be checked by putting $f(y)=1$ in Eq. \ref{Eq.11}. Putting $f(y)=1$ in Eq. \ref{Eq.15} gives back Eq. \ref{Eq.7}, which we have already solved. Thus we get a circular trajectory for a constant magnetic field.\\ {\bfseries Case 2:} $f(y)=1/y$\\ For this case, we can calculate that $\vec{B}=0$. This means the particle must go undeviated, \textit{i.e.,} the trajectory should be a straight line. Putting $f(y)=1/y$ in Eq. \ref{Eq.15} gives \begin{center} $\int \frac{dy}{\sqrt{k_3-k_1^2-2k_1k_2-k_2^2}}=\pm t +K$ \end{center} The denominator is just a constant (say $a$), so we get \begin{center} $y=at+K'$ where $K'=aK$. \end{center} It can be noted that both signs, $+$ and $-$, give the same trajectory. We can calculate $x$ from Eq. \ref{Eq.13}, which comes out to be \begin{center} $x=bt+D$, where $b=k_1+k_2$ and $D$ is a constant of integration. \end{center} $x$ and $y$ indeed define a straight line, and we can check that $\dot x^2+\dot y^2=v^2$, where $v$ is the initial speed of the particle. We can extend the same procedure to a non-uniform magnetic field depending on the $x$ coordinate. In that case we use the Landau gauge in which $A_x=0=A_z$ and $A_y=xBf(x)$. The magnetic field in this case takes the form \begin{equation} \vec{B}=(xBf'(x)+Bf(x))\hat k \label{Eq.16} \end{equation} Accordingly, the Lagrangian assumes the form \begin{equation} \mathcal{L}=\frac{1}{2}m(\dot x^2+\dot y^2)+qxBf(x)\dot y \label{Eq.17} \end{equation} Now $y$ is a cyclic coordinate, so that $p_y=\frac{\partial \mathcal{L}}{\partial \dot y}=m\dot y+qBxf(x)=c$, or \begin{equation} \dot y=k_1-k_2xf(x) \label{Eq.18} \end{equation} where $k_1=c/m$ and $k_2=qB/m$. The Hamiltonian of the system remains the same. Invoking energy conservation and using Eqs. \ref{Eq.18} and \ref{Eq.5}, we obtain \begin{equation} \dot x=\pm\sqrt{k_3-k_1^2-k_2^2x^2f^2(x)+2k_1k_2xf(x)} \label{Eq.19} \end{equation} This gives us another integral equation, which can be solved to obtain the particle's $x$ coordinate as a function of time; substituting this into Eq. \ref{Eq.18} then yields the particle's $y$ coordinate, and thus the trajectory. The integral equation is: \begin{equation} \boxed {\int \frac{dx}{\sqrt{k_3-k_1^2+2k_1k_2 xf(x)-k_2^2x^2f^2(x)}}=\pm\int dt +K } \label{Eq.20} \end{equation} where $K$ is the constant of integration. The special cases above confirm that the quadrature reproduces the expected results. \section{ 4.\hspace{0.5cm}Exponentially decaying Magnetic field} Although Eq. \ref{Eq.15} can be solved for various kinds of non-uniform magnetic fields, let us consider an exponentially decaying magnetic field; a numerical sketch of the general quadrature is given below, after which we treat this case in closed form.
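The quadrature in Eq. \ref{Eq.15} is straightforward to evaluate numerically for any given $f(y)$. The following minimal Python sketch is illustrative only (the constants are arbitrary); for $f(y)=1$ and $k_1=0$ it reproduces the uniform-field branch through $y=0$ of Section 2, $t=\frac{1}{k_2}\sin^{-1}\!\big(k_2y/\sqrt{k_3}\big)$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

k1, k2, k3 = 0.0, 0.1, 1.0      # example constants; k1 = 0, unit speed

def f(y):
    return 1.0                   # f = 1: uniform field (circular orbit)

def ydot_sq(y):
    g = y * f(y)
    return k3 - k1**2 - 2*k1*k2*g - (k2*g)**2

def t_of_y(y0, y):
    # Eq. (15) along a branch with ydot > 0 (between turning points)
    return quad(lambda u: 1.0 / np.sqrt(ydot_sq(u)), y0, y)[0]

print(t_of_y(0.0, 5.0))                        # 5.2360...
print(np.arcsin(k2 * 5.0 / np.sqrt(k3)) / k2)  # same: 10*arcsin(0.5)
\end{verbatim}
Replacing \texttt{f} by a profile for which Eq. \ref{Eq.15} has no closed form gives the trajectory by numerical quadrature between turning points.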
Let the magnetic field be given by \begin{equation} \vec{B}=Be^{-y}\hat k \label{Eq.21} \end{equation} where $B$ is a constant. From Eq. \ref{Eq.11} we can solve for $f(y)$. Thus we have \begin{center} $Be^{-y}=yBf'(y)+Bf(y)$ \end{center} This is an ordinary differential equation, whose solution is given by \begin{equation} f(y)=\frac{c}{y}-\frac{e^{-y}}{ y} \label{Eq.22} \end{equation} We can fix $c=1$ so that \begin{equation} f(y)=\frac{1}{y}(1-e^{-y}) \label{Eq.23} \end{equation} and the magnetic field remains the same. Putting this in Eq. \ref{Eq.15} gives \begin{equation} {\int \frac{dy}{\sqrt{k_3-k_1^2-2k_1k_2 (1-e^{-y})-k_2^2(1-e^{-y})^2}}=\pm\int dt +K } \label{Eq.24} \end{equation} Let us put $a=k_3-k_1^2$, $b=k_2^2$ and $c=2k_1k_2$; then the integral has the solution given by \begin{equation} \dfrac{\arcsin\left(\frac{2\left(c+b-a\right)\mathrm{e}^{y}-c-2b}{\sqrt{c^2+4ab}}\right)}{(\sqrt{c+b-a})}+K'=\pm t+K \label{Eq.25} \end{equation} To make the equations look simpler, let us put $c+b-a=\alpha^2$ and $c+2b=\beta$; by calculation, $\sqrt{c^2+4ab}=2k_2\sqrt{k_3}$. The constants of integration can be chosen such that $K'=K$, so after rearranging the terms the solution reads \begin{equation} y=\log\bigg(\frac{\sqrt{k_3}k_2}{\alpha^2}\sin (\pm\alpha t)+\frac{\beta}{2\alpha^2}\bigg) \label{Eq.26} \end{equation} From Eq. \ref{Eq.13} we can calculate $x$ as \begin{equation} x=\int \bigg(k_1+k_2-\frac{k_2}{l\sin(\pm\alpha t)+m}\bigg)dt \label{Eq.27} \end{equation} where $l=\frac{\sqrt{k_3}k_2}{\alpha^2}$, $m=\frac{\beta}{2\alpha^2}$. The solution of this equation is \begin{equation} x=(k_1+k_2)t-k_2\dfrac{2\arctan\left(\frac{m\tan\left(\frac{\alpha t}{2}\right)\pm l}{\sqrt{m^2-l^2}}\right)}{\alpha\sqrt{m^2-l^2}} \label{Eq.28} \end{equation} Noting that $m^2-l^2=k_2^2/\alpha^2$, so that $\alpha\sqrt{m^2-l^2}=k_2$, a simple calculation gives \begin{equation} x=(k_1+k_2)t-2\arctan\left(\frac{(m/l)\tan\left(\frac{\alpha t}{2}\right)\pm 1}{\sqrt{(m/l)^2-1}}\right) \label{Eq.29} \end{equation} or, in terms of the original constants, \begin{equation} x=(k_1+k_2)t-2\arctan\left(\frac{(k_1+k_2)\tan\left(\frac{\alpha t}{2}\right)\pm \sqrt{k_3}}{\sqrt{(k_1+k_2)^2-k_3}}\right) \label{Eq.30} \end{equation} Eqs. \ref{Eq.26} and \ref{Eq.30} define the trajectory of the particle. It is worth noting that a particle with zero kinetic energy in a magnetic field must remain stationary, since the magnetic force vanishes for a particle at rest. This is indeed the case here: taking zero energy gives $k_3=0$, and a bit of calculation shows that $x=0$ and $y=\log\big(k_2/(k_1+k_2)\big)=$ constant. Thus the particle remains stationary at the point $\big(0, \log\big(k_2/(k_1+k_2)\big)\big)$. Now, to analyze the particle's trajectory in this field, let us assume the following: $k_3=1$, $k_2=0.1$ (we have taken the specific charge of the particle to be $1$ and assumed a strong magnetic field of $0.1$\,T!) and $k_1+k_2=1.118$, so that $\alpha=1/2$. This gives us \begin{center} $x=1.12t-2\tan^{-1}(2.24\tan(t/4)+2)$;\\ $y=\log(0.4\sin(t/2)+0.447)$ \end{center} We plot these parametric equations for $t\in (0,50)$. In Fig. \ref{fig.1} and the figures that follow, the horizontal axis is the $x$-axis and the vertical axis is the $y$-axis. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{trajectory.pdf} \caption{Particle's trajectory in the exponentially decaying magnetic field $\vec{B}=Be^{-y}\hat k$} \label{fig.1} \end{figure} The particle shows periodic motion, as can be inferred from the trajectory. We plot trajectory curves for several values of the constants. There are several forms of $f(y)$ for which a solution to Eq.
\ref{Eq.15} exists, and thus such classes of non-uniform magnetic fields can be analyzed easily. \begin{figure}[h] \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.8\linewidth]{figure_1.pdf} \caption{$k_1=1 , k_2=0.1 , k_3=4$} \label{fig.2a} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.8\linewidth]{figure_1-1.pdf} \caption{$k_1=2 , k_2=0.2 , k_3=8$} \label{fig.2b} \end{subfigure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.8\linewidth]{figure_1-2.pdf} \caption{$k_1=2 , k_2=2 , k_3=8$} \label{fig.2c} \end{subfigure}\begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.8\linewidth]{figure_1-3.pdf} \caption{$k_1=2 , k_2=4 , k_3=8$} \label{fig.2d} \end{subfigure} \caption{Plots of trajectory curves for different values of the constants} \end{figure} \section{5.\hspace{0.5cm}Supersymmetry in Uniform and Non-Uniform \\Magnetic Field} Now we turn to the quantum mechanical treatment of the problem. The Pauli Hamiltonian for a charged particle moving in two dimensions in a magnetic field [9] is given by, \begin{equation} 2H = (p_x+A_x)^2+(p_y+A_y)^2+(\nabla \times A)_z\sigma_z \label{Eq.31} \end{equation} where $\sigma_z$ is the Pauli $z$ matrix. We have used natural units with $\hbar=1=m$. Suppose we choose $A_x=-Byf(r)$ and $A_y=Bxf(r)$, where $r=\sqrt{x^2+y^2}$; then Eq. \ref{Eq.31} takes the following form, \begin{equation} 2H=-\Big(\frac{d^2}{dx^2}+\frac{d^2}{dy^2}\Big)+B^2r^2f^2-2BfL_z+(2Bf+Brf'(r))\sigma_z \label{Eq.32} \end{equation} where $L_z$ is the $z$-component of the orbital angular momentum operator. We use cylindrical coordinates $(r, \phi)$ to solve the corresponding Schr\"{o}dinger equation. The wave function $\psi(r, \phi)$ can be factored as, \begin{equation} \psi(r,\phi)=R(r)e^{im\phi} \label{Eq.33} \end{equation} where $m=0,\pm 1,\pm 2,\pm 3, \dots$ are the eigenvalues of the operator $L_z$. On substituting Eq. \ref{Eq.33} into Eq. \ref{Eq.32} we obtain the following equation, \begin{equation} \frac{d^2R}{dr^2}+\frac{1}{r}\frac{dR}{dr}-\Big[B^2r^2f^2+\frac{m^2}{r^2}+2Bmf+ (2Bf+ Brf'(r))\sigma_z\Big]R(r)=-2ER(r) \label{Eq.34} \end{equation} If we further substitute $R(r)=\sqrt{r}A(r)$ into Eq. \ref{Eq.34} and choose the lower eigenvalue of $\sigma_z$, we obtain: \begin{equation} \Bigg[-\frac{d^2}{dr^2}+\Bigg(B^2r^2f^2-2Bf+2Bmf-Brf'(r)+\frac{m^2-\frac{1}{4}}{r^2}\Bigg)\Bigg]A(r)=2EA(r) \label{Eq.35} \end{equation} We can write the left hand side in the form $a^{\dagger}a$, where \begin{equation} a= \frac{d}{dr}+Brf-\frac{|m|+\frac{1}{2}}{r} \label{Eq.36} \end{equation} For $m\le0$ the decomposition holds, and thus we have $E_0\ge0$ [10]. $E_0=0$ occurs if and only if the solution of the equation $a\psi_0(r)=0$, \textit{i.e.,} \begin{equation} \Bigg[\frac{d}{dr}+Brf-\frac{|m|+\frac{1}{2}}{r}\Bigg]\psi_0(r)=0 \label{Eq.37} \end{equation} is square integrable [10]. In that case supersymmetry (SUSY) remains unbroken. Now consider the following cases: \begin{enumerate} \item \textbf{Uniform magnetic field}\\ If we choose $f(r)=1$ above, we end up with a uniform magnetic field. In this case, we can solve Eq. \ref{Eq.37} to get: \begin{equation} \frac{d\psi_0(r)}{dr}=\Bigg(\frac{|m|+\frac{1}{2}}{r}-Br\Bigg)\psi_0(r) \label{Eq.38} \end{equation} \begin{equation} \frac{d\psi_0(r)}{\psi_0(r)}=\Bigg(\frac{|m|+\frac{1}{2}}{r}-Br\Bigg)dr \label{Eq.39} \end{equation} The solution to Eq.
\ref{Eq.39} is given by \begin{equation} \psi_0(r)=N_0r^{|m|+\frac{1}{2}}\exp\Big(-\frac{1}{2}Br^2\Big) \label{Eq.40} \end{equation} where $N_0$ is the normalization factor. It is easy to see that $\psi_0(r)$ is square integrable, as the polynomial factor is dominated by the exponential and the overall integral is convergent. Thus in this case the SUSY remains unbroken. It is also known that in a uniform magnetic field the energy spectrum is the same as that of a harmonic oscillator oscillating at the cyclotron frequency. \item \textbf{Non-uniform magnetic field}\\ There are several forms of $f(r)$ for which SUSY remains unbroken. For instance, the function $f(r)=\frac{(r-a)(r-b)}{r^2}$ keeps the SUSY unbroken. Now consider the function \begin{equation} f(r)=\frac{1}{r}(1-e^{-r}) \label{Eq.41} \end{equation} For this function, first note that $\vec{B}=\nabla\times \vec{A}=(rBf'(r)+Bf(r))\hat k=Be^{-r}\hat k$. Thus for this function we obtain an exponentially decaying magnetic field, this time with radial decay, which we considered in the previous section for the classical treatment. Now we deal with the peculiarity of this form. We can solve Eq. \ref{Eq.37} with this function, and a bit of computation gives \begin{equation} \psi_0(r)=N_0r^{|m|+\frac{1}{2}}\exp\big[-B(e^{-r}+r)\big] \label{Eq.42} \end{equation} Again, we can easily see that $\psi_0(r)$ is not square integrable, as the integral diverges. Thus we have $E_0>0$, and thus \textit{SUSY is broken}. The energy spectrum still remains discretely quantised [10] and can be determined using \begin{equation} \bigintsss_0^{\infty}\sqrt{2m[E_n-W^2(r)]}\,dr=\Big(n+\frac{1}{2}\Big)\hbar\pi, \quad n=0,1,2,3,\dots \label{Eq.43} \end{equation} where $W(r)$ is given by \begin{equation} W(r)=-\frac{\hbar}{\sqrt{2m}}\frac{\psi_0'(r)}{\psi_0(r)} \label{Eq.44} \end{equation} Thus we see that the exponentially decaying magnetic field is really peculiar, as it breaks supersymmetry. Further experiments may be designed to detect this kind of symmetry breaking. \end{enumerate} \section{6.\hspace{0.5cm}Discussion} The analysis presented in this paper provides a recipe for finding the trajectory of a charged particle in a spatially varying magnetic field for some special classes of non-uniform magnetic field. The more general case of solving for the trajectory still remains an open problem. Nonetheless, the example presented provides a way to solve for the trajectory in an exponentially varying magnetic field. We also observe that the exponentially decaying magnetic field shows special properties, in the sense that the ground state of the isospectral partner of the non-uniform Hamiltonian with exponential non-uniformity has non-zero ground state energy and thus breaks supersymmetry. Experiments may be designed to detect this feature of the exponentially decaying magnetic field. We also note that the results of this paper may be used in various other settings. A physical situation where a non-uniform magnetic field appears is that of the Earth itself. New insights can be gained by studying the trajectory of charged particles released from the sun in the Earth's magnetic field. The Van Allen belts can also be studied on the basis of this theory. \section{7.\hspace{0.5cm}Conclusion } In conclusion, we have presented a method to analyze the motion of a charged particle in various classes of non-uniform magnetic fields. We also presented a specific non-uniform magnetic field with peculiar properties both classically and quantum mechanically.
Although the integral equations presented do not have closed-form solutions for many forms of $f(y)$ (or, equivalently, $f(x)$), numerical integration can be used in such cases to find the trajectory. Many forms do admit exact solutions, and thus it is worthwhile to study this method further and generalize it to other forms. \section{8.\hspace{0.5cm}Acknowledgements} This work was carried out at Panjab University Chandigarh. The author is indebted to Prof. C. N. Kumar, whose guidance helped to complete this work. The author also thanks Ms. Harneet Kaur, Dr. Amit Goyal, and Mr. Shivam Pal for useful discussions. \section{9.\hspace{0.5cm}References } [1] Goldstein, H.; Poole, C. P. and Safko, J. L. \textit{Classical Mechanics} (3rd ed.). Addison-Wesley (2001)\\ {[2]} Landau, L. D. and Lifschitz, E. M. \textit{Quantum Mechanics: Non-relativistic Theory}. Course of Theoretical Physics, Vol. 3 (3rd ed. London: Pergamon Press). ISBN 0750635398 (1977)\\ {[3]} Baumjohann, W. and Treumann, R. \textit{Basic Space Plasma Physics}. ISBN 978-1-86094-079-8 (1997)\\ {[4]} Krall, N. \textit{Principles of Plasma Physics}. Page 267 (1973)\\ {[5]} Seymour, P. W. \textit{Aust. Jour. Phys.} 12, 309-14\\ {[6]} Mathams, R. F. \textit{Aust. Jour. Phys.} 17(4), 547-552\\ {[7]} Cooper, F.; Khare, A. and Sukhatme, U. ``Supersymmetry in quantum mechanics", \textit{Phys. Rep.} 251, 267-385 (1995)\\ {[8]} Khare, A. and Kumar, C. N. \textit{Mod. Phys. Lett.} A 8, 523-529 (1993)\\ {[9]} Khare, A. and Maharana, J. \textit{Nucl. Phys.} B224, 409 (1984)\\ {[10]} Cooper, F.; Khare, A. and Sukhatme, U. \textit{Phys. Rep.} 251, 267-385 (1995) \end{document}
\section{Introduction} \IEEEPARstart{I}{n} the wake of the rapid growth of multi-modality data (e.g., image, text, and video) on social media and the internet, cross-modal information retrieval has attracted much attention for storing and searching items in the large-scale data environment. Hashing-based methods \cite{proceeding7, proceeding10, proceeding27, jour13, yan2020deep, yan2021task, yan2021precise, yan2021age, ma2018global, ma2020discriminative, ma2021learning, ma2020correlation, ma2017manifold, ma2017learning} have shown their superiority in the approximate nearest neighbor retrieval task because of their low storage consumption and fast search speed. This superiority is achieved by a compact binary code representation, which aims to construct compact binary codes in Hamming space that preserve the semantic affinity information across modalities as much as possible. According to whether label information is utilized, cross-modal hashing (CMH) methods can be categorized into two subclasses: unsupervised \cite{proceeding17, proceeding14, proceeding15, proceeding16, jour5, jour23} and supervised \cite{proceeding18, proceeding19, proceeding20, proceeding21, proceeding22} methods. The details are given in Section \ref{Sec2}. Besides, most existing cross-modal hashing methods preserve the intra- and inter-modal affinities separately to learn the corresponding binary codes, failing to consider the fusion semantic affinity among multi-modality data. However, such fusion affinity is the key to capturing the cross-modal semantic similarity, since data from different modalities can complement each other. Recently, some works have considered utilizing multiple similarities \cite{proceeding35, jour24}, but only a few works capture such fusion semantic similarity with binary codes. A representative achievement is Fusion Similarity Hashing (FSH) \cite{proceeding17}. FSH builds an asymmetrical fusion graph to capture the intrinsic relations across heterogeneous data and directly preserves the fusion similarity in a Hamming space. However, some limitations remain to be addressed. Firstly, most existing CMH methods take graphs, which are always predefined separately in each modality, as input to model the distribution of the data. These methods fail to consider the correlation of graph structure among multiple modalities, and their cross-modal retrieval results rely heavily on the quality of the predefined affinity graphs \cite{proceeding17, proceeding14, proceeding16}. Secondly, most existing CMH methods deal with the preservation of intra- and inter-modal affinity separately when learning the binary codes, neglecting the fusion affinity among multi-modality data, which contains complementary information \cite{proceeding14, proceeding15, proceeding16}. Thirdly, most existing CMH methods relax the discrete constraints to solve the optimization objective, which can significantly degrade the retrieval performance \cite{proceeding14, proceeding18, proceeding19}. Besides, there are also some works \cite{quan2016object, yan2020depth} that try to adaptively learn the optimal graph affinity matrix from the data and the learning tasks. Graph Optimized-Flexible Manifold Ranking (GO-FMR) \cite{quan2016object} obtains the optimal affinity matrix via direct $L_2$ regression under the guidance of a human-established affinity matrix to infer the final predicted labels more accurately.
In our model, we also learn an intrinsic anchor graph via direct $L_2$ regression, as in \cite{quan2016object}, based on the constructed anchor graph structure fusion matrix; in addition, we simultaneously tune the structure of the intrinsic anchor graph so that the learned graph has exactly $C$ connected components. Hence, the vertices in each connected component of the graph can be categorized into one cluster. In addition, GO-FMR needs $O(N^2)$ in computational complexity to learn the optimal affinity matrix ($N$ is the number of vertices in the graph). With an increasing amount of data, this becomes intractable for off-line learning. Compared with GO-FMR, the proposed method only needs $O(PN)$ in computational complexity to learn the optimal anchor affinity matrix for constructing the corresponding graph ($P$ is the number of anchors in the graph and $P \ll N$). As a result, the storage space and the computational complexity for learning the graph are decreased. \captionsetup{labelfont=it, textfont={bf,it}} \begin{figure*}[t] \centering \includegraphics[width=0.8\textwidth]{AGSFH_flowchart} \caption{The flowchart of AGSFH. We utilize bi-modal data as an example. Firstly, in part I, we construct anchor graphs for the image and text modalities and calculate the anchor graph structure fusion matrix by the Hadamard product. Secondly, part II shows intrinsic anchor graph learning with graph clustering by best approximating the anchor graph structure fusion matrix. Based on this process, training instances can be clustered into semantic space. Thirdly, in part III, binary code learning based on the intrinsic anchor graph is realized by best approximating the Hamming similarity matrix to the intrinsic anchor graph. This process guarantees that the binary data preserve the semantic relationship of the training instances in semantic space.} \label{figure1} \end{figure*} To address the above limitations, in this paper, we propose the Anchor Graph Structure Fusion Hashing (AGSFH) method for effective and efficient large-scale information retrieval across modalities. The flowchart of the proposed model is shown in Fig. \ref{figure1}. By constructing the anchor graph structure fusion matrix from the different anchor graphs of multiple modalities with the Hadamard product, AGSFH can fully exploit the geometric property of the underlying data structure, leading to robustness to noise in the multi-modal relevance among data instances. The key idea of AGSFH is to directly preserve the fusion anchor affinity, which carries complementary information among multi-modality data, in the common binary Hamming space. At the same time, the structure of the intrinsic anchor graph is adaptively tuned by a well-designed objective function so that the number of components of the intrinsic graph is exactly equal to the number of clusters. Based on this process, training instances can be clustered into semantic space, and the binary data can preserve the semantic relationship of the training instances. Besides, a discrete optimization framework is designed to learn the unified binary codes across modalities. Extensive experimental results on three public social datasets demonstrate the superiority of AGSFH in cross-modal retrieval. We highlight the main contributions of AGSFH below. \begin{enumerate}[1.] \item We develop an anchor graph structure fusion affinity for large-scale data to directly preserve the anchor fusion affinity, with its complementary information among multi-modality data, in the common binary Hamming space.
The fusion affinity yields robustness to noise in multi-modal relevance and a substantial performance improvement in cross-modal information retrieval. \item The structure of the intrinsic anchor graph is learned by preserving similarity across modalities and adaptively tuning the clustering structure in a unified framework. The learned intrinsic anchor graph can fully exploit the geometric property of the underlying data structure across multiple modalities and directly cluster training instances in semantic space. \item An alternating algorithm is designed to solve the optimization problem in AGSFH. Based on this algorithm, the binary codes are learned without relaxation, avoiding a large quantization error. \item Extensive experimental results on three public social datasets demonstrate the superiority of AGSFH in cross-modal retrieval. \end{enumerate} \section{Related works} \label{Sec2} Cross-modal hashing has obtained much attention due to its utility and efficiency. Current CMH methods are mainly divided into two categories: unsupervised ones and supervised ones. Unsupervised CMH methods \cite{proceeding17, proceeding14, proceeding15, proceeding16, jour5, jour23} mainly map heterogeneous multi-modal data instances into common compact binary codes by preserving the intra- and inter-modal relevance of the training data, and learn the hash functions during this process. Fusion Similarity Hashing (FSH) \cite{proceeding17} learns binary hash codes by preserving the fusion similarity of multiple modalities in an undirected asymmetric graph. Collaborative Subspace Graph Hashing (CSGH) \cite{proceeding14} constructs the unified hash codes by a two-stage collaborative learning framework. Joint Coupled-Hashing Representation (JCHR) \cite{proceeding15} learns the unified hash codes via embedding heterogeneous data into their corresponding binary spaces. Hypergraph-based Discrete Hashing (BGDH) \cite{proceeding16} obtains the binary codes by learning the hypergraph and the binary codes simultaneously. Robust and Flexible Discrete Hashing (RFDH) \cite{jour5} directly produces the hash codes via discrete matrix decomposition. Joint and Individual Matrix Factorization Hashing (JIMFH) \cite{jour23} learns unified hash codes with joint matrix factorization and individual hash codes with individual matrix factorization. Different from the unsupervised ones, CVH \cite{proceeding18}, SCM \cite{proceeding19}, SePH \cite{proceeding20}, FDCH \cite{proceeding21}, and ADCH \cite{proceeding22} are representative supervised cross-modal hashing methods. Cross View Hashing (CVH) extends single-modal spectral hashing to multiple modalities and relaxes the minimization problem for learning the hash codes \cite{proceeding18}. Semantic Correlation Maximization (SCM) learns the hash functions by approximating the semantic affinity of label information in large-scale data \cite{proceeding19}. Semantics-Preserving Hashing (SePH) builds a probability distribution from the semantic similarity of the data and minimizes the Kullback-Leibler divergence to obtain the binary hash codes \cite{proceeding20}. Fast Discrete Cross-modal Hashing (FDCH) learns the hash codes and hash functions by regressing from class labels to binary codes \cite{proceeding21}. Asymmetric Discrete Cross-modal Hashing (ADCH) obtains the common latent representations across the modalities by the collective matrix factorization technique to learn the hash codes, and constructs hash functions by a series of binary classifiers \cite{proceeding22}.
Besides, some deep CMH models have been developed using deep end-to-end architectures because of their superiority in extracting semantic information from data points. Some representative works are Deep Cross-Modal Hashing (DCMH) \cite{proceeding23}, Dual Deep Neural Networks Cross-Modal Hashing (DDCMH) \cite{proceeding24}, and Deep Binary Reconstruction (DBRC) \cite{jour14}. Deep Cross-Modal Hashing (DCMH) learns features and hash codes in the same framework with deep neural networks, one for each modality, performing feature learning from scratch \cite{proceeding23}. Dual Deep Neural Networks Cross-Modal Hashing (DDCMH) generates hash codes for different modalities by two deep networks, making full use of inter-modal information to obtain high-quality binary codes \cite{proceeding24}. Deep Binary Reconstruction (DBRC) simultaneously learns the correlation across modalities and the binary hash codes; it also proposes both linear and nonlinear scaling methods to generate efficient codes after training the network \cite{jour14}. In comparison with traditional hashing methods, deep cross-modal models have shown outstanding performance. \section{The proposed method} In this section, we give a detailed introduction to our proposed AGSFH. \subsection{Notations and Definitions} Our AGSFH belongs to the unsupervised hashing approaches, achieving state-of-the-art performance in cross-modal semantic similarity search. There are $N$ instances $O = \left\{o_1, o_2, \ldots, o_N\right\}$ in the database set, and each instance $o_i = (x_i^1, x_i^2, \ldots, x_i^M)$ has $M$ feature vectors from the $M$ modalities, respectively. The database matrix $X^m=\left[x_1^m, x_2^m, \ldots , x_N^m \right]\in R^{d_m \times N}$ denotes the feature representations for the $m$th modality, and the feature vector $x_i^m$ is the $i$th datum of $X^m$ with dimension $d_m$. Besides, we assume that the training data set $O$ has $C$ clusters, i.e., $O$ has $C$ semantic categories. Given the training data set $O$, the proposed AGSFH aims to learn a set of hash functions $H^m(x^m)=\left\{h_1^m(x^m), h_2^m(x^m), \ldots, h_K^m(x^m)\right\}$ for the $m$th modal data. At the same time, a common binary code matrix $B=\left[b_1, b_2, \ldots , b_N \right]\in \left\{-1, 1\right\}^{K \times N}$ is constructed, where the binary vector $b_i \in \left\{-1, 1\right\}^{K}$ is the $K$-bit code for instance $o_i$. For the $m$th modality data, the hash function can be written as: \begin{eqnarray}\label{GSFH1} h_k^m(x^m)=sgn(f_k^m(x^m)), (k=1, 2, \ldots, K), \end{eqnarray} where $sgn(\cdot)$ is the sign function, which returns $1$ if $f_k^m(\cdot) > 0$ and $-1$ otherwise, and $f_k^m(\cdot)$ is the linear or non-linear mapping function for data of the $m$th modality. For simplicity, we define our hash function for the $m$th modality as $H^m(x^m)=sgn(W_m^Tx^m)$. \subsection{Anchor Graph Structure Fusion Hashing} In AGSFH, we first construct anchor graphs for the multiple modalities and calculate the anchor graph structure fusion matrix by the Hadamard product. Next, AGSFH jointly learns the intrinsic anchor graph and preserves the anchor fusion affinity in the common binary Hamming space. In the intrinsic anchor graph learning, the structure of the intrinsic anchor graph is adaptively tuned by a well-designed objective function so that the number of components of the intrinsic graph is exactly equal to the number of clusters. Based on this process, training instances can be clustered into semantic space.
Binary code learning based on the intrinsic anchor graph can guarantee that the binary data preserve the semantic relationship of the training instances in semantic space. \subsubsection{Anchor Graph Learning} Inspired by the idea of GSF \cite{jour15}, it is straightforward to consider our anchor graph structure fusion similarity as follows. In general, building a $k$-nearest-neighbor ($k$-NN) graph using all $N$ points from the database needs $O(N^2)$ in computational complexity. Furthermore, learning an intrinsic graph among all $N$ points also takes $O(N^2)$ in computational complexity. Hence, with an increasing amount of data, off-line learning becomes intractable. To address these computationally intensive problems, inspired by Anchor Graph Hashing (AGH) \cite{proceeding1}, for multi-modal data we construct different $k$-NN anchor graph affinity matrices $\hat{A}^{m} (m=1,2, \ldots, M)$ for the $M$ modalities, respectively. Besides, we further propose an intrinsic anchor graph learning strategy, which learns the intrinsic anchor graph $\hat{S}$ of the intrinsic graph $S$. In particular, we are given an anchor set $T=\left\{t_1, t_2, \ldots, t_P\right\}$, where $t_i = (t_i^1, t_i^2, \ldots, t_i^M)$ is the $i$th anchor across the $M$ modalities; the anchors are randomly sampled from the original data set $O$ or are obtained by performing a clustering algorithm over $O$. The anchor matrix $T^m=\left[t_1^m, t_2^m, \ldots , t_P^m \right]\in R^{d_m \times P}$ denotes the feature representations of the anchors for the $m$th modality, and the feature vector $t_i^m$ is the $i$th anchor of $T^m$. Then, the anchor graph $\hat{A}^{mT}=[\hat{a}_1^m, \hat{a}_2^m, \ldots, \hat{a}_N^m] \in R^{P \times N}$ between all $N$ data points and the $P$ anchors at the $m$th modality can be computed as follows. Using the raw data points from $O$ and $T$, we define the pairwise distance $b_{ij}=\left \| x_i^m - t_j^m\right \|_F^2$ and construct the anchor graph $\hat{A}^{mT}$ by an initial graph learning similar to that of GSF \cite{jour15}. Therefore, we can assign $k$ neighbor anchors to each data instance by the following Eq.(\ref{GSFH14}), where for each instance $i$ the distances $b_{i,1}\leq b_{i,2}\leq \cdots \leq b_{i,k+1}$ are sorted in ascending order. \begin{equation}\label{GSFH14} \hat{a}_{ij}^{m\star}= \begin{cases} \frac{b_{i,k+1}-b_{i,j}}{kb_{i,k+1}-\sum_{j'=1}^k b_{i,j'}}, & j \leq k \\ 0, & otherwise. \end{cases} \end{equation} Since $P \ll N$, the size of the anchor graph is much smaller than that of the traditional graph, so the storage space and the computational complexity for constructing the graph are decreased. We can obtain different anchor graph affinity matrices $\hat{A}^{m} (m=1, 2, \ldots, M)$ for the $M$ modalities, respectively. Similar to \cite{jour15}, we use the Hadamard product to extract the intrinsic edges in the multiple anchor graphs, fusing the different anchor graph structures $\hat{A}^{m}$ into one anchor affinity graph $\hat{A}$ by \begin{eqnarray}\label{GSFH15} \hat{A} = \prod_{m=1}^{M} \hat{A}^{m}, \end{eqnarray} which reduces the storage space and the computational complexity of graph structure fusion. Given the fused anchor affinity matrix $\hat{A}$, we learn an anchor similarity matrix $\hat{S}$ so that the corresponding graph $S \simeq \bar{S} = \hat{S}\hat{S}^T $ has exactly $C$ connected components and the vertices in each connected component of the graph can be categorized into one cluster.
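To make the construction above concrete, Eqs.(\ref{GSFH14}) and (\ref{GSFH15}) can be sketched in a few lines of NumPy. This is a minimal sketch; the function names and the brute-force distance computation are our own illustrative choices.
\begin{verbatim}
import numpy as np

def anchor_graph(X, T, k):
    # X: d x N data, T: d x P anchors of one modality; returns the
    # N x P anchor graph of Eq. (14), whose rows sum to one.
    b = ((X.T[:, None, :] - T.T[None, :, :]) ** 2).sum(axis=2)
    N, P = b.shape
    A = np.zeros((N, P))
    order = np.argsort(b, axis=1)          # anchors sorted by distance
    for i in range(N):
        nn = order[i, :k]                  # k nearest anchors
        d_next = b[i, order[i, k]]         # (k+1)-th smallest distance
        A[i, nn] = (d_next - b[i, nn]) / (k * d_next - b[i, nn].sum())
    return A

def fused_anchor_graph(Xs, Ts, k):
    # Hadamard-product fusion across modalities, Eq. (15).
    A = anchor_graph(Xs[0], Ts[0], k)
    for X, T in zip(Xs[1:], Ts[1:]):
        A = A * anchor_graph(X, T, k)
    return A
\end{verbatim}
Only edges supported by every modality survive the elementwise product, which is the source of the robustness to noisy single-modality affinities discussed above.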
For the similarity matrix $S \simeq \bar{S}=\hat{S}\hat{S}^T \geq 0$, there is a theorem \cite{book2} about its Laplacian matrix $L$ \cite{jour17}: \newtheorem{thm}{\bf Theorem} \begin{thm}\label{thm1} The number $C$ of connected components of the graph $S$ is equal to the multiplicity of zero as an eigenvalue of its Laplacian matrix $L$. \end{thm} The proof of Theorem \ref{thm1} can be found in \cite{jour18, jour19}. As is well known, $L$ is a positive semi-definite matrix. Hence, $L$ has $N$ non-negative eigenvalues $0 = \lambda_1 \leq \lambda_2 \leq \ldots \leq \lambda_N$. Theorem \ref{thm1} tells us that if the constraint $\sum_{c=1}^C \lambda_c =0$ is satisfied, the graph $S$ has an ideal neighbor assignment and the data instances are already clustered into $C$ clusters. According to Fan's theorem \cite{jour21}, we can obtain an objective function, \begin{eqnarray}\label{GSFH3} && \sum_{c=1}^C \lambda_c = \min_{U} \left \langle UU^T, L \right \rangle \nonumber \\ \mathrm{s.t.} && U \in R^{N \times C}, U^TU=I_C, \end{eqnarray} where $\left \langle \cdot \right \rangle$ denotes the Frobenius inner product of two matrices, $U^T = [u_1, u_2, \ldots, u_N]$, $L=D-\hat{S}\hat{S}^T$ is the Laplacian matrix, $I_C \in R^{C \times C}$ is an identity matrix, and $D$ is a diagonal matrix whose elements are the column sums of $\hat{S}\hat{S}^T$. Furthermore, for the intrinsic anchor graph, $\bar{S}$ can be normalized as $\tilde{S} = \hat{S}\Lambda^{-1}\hat{S}^T$, where $\Lambda= diag(\hat{S}^T1) \in R^{P \times P}$. The approximate intrinsic graph matrix $\tilde{S}$ has a key property: it has unit row and column sums. Hence, the graph Laplacian of the intrinsic anchor graph is $L=I-\tilde{S}$, so the required $C$ graph Laplacian eigenvectors $U$ in Eq.(\ref{GSFH3}) are also eigenvectors of $\tilde{S}$, but associated with the eigenvalue $1$ (the eigenvalue $1$ corresponding to the eigenvalue $0$ of $L$). One can easily verify that $\tilde{S}=\hat{S}\Lambda^{-1}\hat{S}^T$ has the same non-zero eigenvalues as $E=\Lambda^{-1/2}\hat{S}^T \hat{S}\Lambda^{-1/2}$, so that $L=I-\tilde{S}$ has the same multiplicity of the eigenvalue $0$ as $\hat{L}=I-E$. Hence, similar to Eq.(\ref{GSFH3}), we have an objective function for anchor graph learning as follows. \begin{eqnarray}\label{GSFH37} && \sum_{c=1}^C \lambda_c = \min_{V} \left \langle VV^T, \hat{L} \right \rangle \nonumber \\ \mathrm{s.t.} && V \in R^{P \times C}, V^TV=I_C, \end{eqnarray} where $\hat{S}^T = [\hat{s}_1, \hat{s}_2, \ldots, \hat{s}_N]$ and $V^T = [v_1, v_2, \ldots, v_P]$. Because the graph $\hat{A}$ contains the edges of the intrinsic structure, we need $\hat{S}$ to best approximate $\hat{A}$; we therefore optimize the following objective function, \begin{eqnarray}\label{GSFH4} & & \max_{\hat{S}} \left \langle \hat{A}, \hat{S} \right \rangle \nonumber \\ \mathrm{s.t.} & & \forall j, \hat{s}_j \geq 0, 1^T\hat{s}_j=1, \end{eqnarray} where we constrain $1^T\hat{s}_j=1$ so that it has unit row sum. By combining Eq. (\ref{GSFH37}) with Eq. (\ref{GSFH4}), we have, \begin{eqnarray}\label{GSFH6} & & \min_{V,\hat{S}} \left \langle VV^T, \hat{L} \right \rangle - \gamma_1 \left \langle \hat{A}, \hat{S} \right \rangle + \gamma_2 \left \| \hat{S} \right \|_F^2 \nonumber \\ \mathrm{s.t.} & & V \in R^{P \times C}, V^TV=I_C, \nonumber \\ & & \forall j, \hat{s}_j \geq 0, 1^T\hat{s}_j=1, \end{eqnarray} where $\gamma_1$ is the weight controller parameter and $\gamma_2$ is the regularization parameter.
To avoid a trivial solution when optimizing the objective with respect to $\hat{s}_j$ in Eq.(\ref{GSFH6}), we add an $L_2$-norm regularization to smooth the elements of $\hat{S}$. We tune the structure of $\hat{S}$ adaptively so that the condition $\sum_{c=1}^C \lambda_c =0$ is achieved, obtaining an anchor graph $\hat{S}$ with exactly $C$ connected components. As opposed to pre-computing affinity graphs, in Eq.(\ref{GSFH6}) the affinity of the adaptive anchor graph $\hat{S}$, i.e., $\hat{s}_{ij}$, is learned by modeling the fused anchor graph $\hat{A}$ from multiple modalities. The learning procedures of the multiple modalities are mutually beneficial and reciprocal. \subsubsection{The Proposed AGSFH Scheme} Ideally, if the instances $o_i$ and $o_j$ are similar, the Hamming distance between their binary codes should be minimal, and vice versa. We achieve this by maximizing the approximation between the learned intrinsic similarity matrix $\bar{S}$ and the Hamming similarity matrix $H=B^TB$, which can be written as $\max_{B} \left \langle \bar{S}, B^TB \right \rangle=Tr(B\bar{S}B^T)=Tr(B\hat{S}\hat{S}^TB^T)$. Then, we can take $B_s = sgn(B\hat{S}) \in \left\{-1, +1\right\}^{K \times P}$ to be the binary anchors. To learn the binary codes, the objective function can be written as: \begin{eqnarray}\label{GSFH17} & & \max_{B,B_s} Tr(B\hat{S}B_s^T), \nonumber \\ \mathrm{s.t.} & & B \in \left\{-1, +1\right\}^{K \times N}, B_s \in \left\{-1, +1\right\}^{K \times P}. \end{eqnarray} Intuitively, we learn the $m$th-modality hash function $H^m(x^m)$ by minimizing the error $\left \| B- H^m(x^m) \right \|_F^2$ between the binary codes and the linear hash function in Eq.(\ref{GSFH1}). Such hash function learning can be easily integrated into the overall cross-modality similarity preserving, which is rewritten as: \begin{eqnarray}\label{GSFH18} & & \min_{B,B_s,W_{m}} -Tr(B\hat{S}B_s^T)+\lambda \sum_{m=1}^M \left \| B- W_{m}^TX^{m} \right \|_F^2, \nonumber \\ \mathrm{s.t.} & & B \in \left\{-1, +1\right\}^{K \times N}, B_s \in \left\{-1, +1\right\}^{K \times P}, \end{eqnarray} where $\lambda$ is a tradeoff parameter to control the weights between minimizing the binary quantization and maximizing the approximation. Therefore, by combining Eq.(\ref{GSFH6}) with Eq.(\ref{GSFH18}), we have the overall objective as follows. \begin{eqnarray}\label{GSFH19} & & \min_{V,\hat{S},B,B_s,W_{m}} Tr(V^T\hat{L}V) - \gamma_1 Tr(\hat{A}^T \hat{S}) + \gamma_2 \left \| \hat{S} \right \|_F^2 \nonumber \\ && -\gamma_3 Tr(B\hat{S}B_s^T)+\lambda\sum_{m=1}^M \left \| B- W_{m}^TX^{m} \right \|_F^2 \nonumber \\ \mathrm{s.t.} & & V \in R^{P \times C}, V^TV=I_C, \nonumber \\ & & \forall j, \hat{s}_j \geq 0, 1^T\hat{s}_j=1, \nonumber\\ & & B \in \left\{-1, +1\right\}^{K \times N}, B_s \in \left\{-1, +1\right\}^{K \times P}, \end{eqnarray} where $\gamma_3$ is a weight controller parameter, $\hat{L}=I-\Lambda^{-1/2}\hat{S}^T \hat{S}\Lambda^{-1/2}$, and $\Lambda= diag(\hat{S}^T1)$. \subsection{Algorithm Design} Objective Eq.(\ref{GSFH19}) is a mixed binary program and is non-convex in the variables $V$, $\hat{S}$, $B$, $B_s$, and $W_{m}$ jointly. To solve this issue, an alternating optimization framework is developed, where only one variable is optimized with the other variables fixed at each step. The details of the alternating scheme are as follows. \begin{enumerate}[1.] \setlength{\listparindent}{2em} \item $\textbf{$\hat{S}$ step}$.
\par\setlength\parindent{2em} By fixing $V$, $\Lambda$, $B$, $B_s$, and $W_{m}$, optimizing problem (\ref{GSFH19}) becomes: \begin{eqnarray}\label{GSFH20} && \min_{\hat{S}} Tr(V^T\hat{L}V) - \gamma_1 Tr(\hat{A}^T \hat{S}) + \gamma_2 \left \| \hat{S} \right \|_F^2 \nonumber \\ && -\gamma_3 Tr(B\hat{S}B_s^T) \nonumber \\ \mathrm{s.t.} & & \forall j, \hat{s}_j \geq 0, 1^T\hat{s}_j=1. \nonumber\\ \end{eqnarray} Note that problem Eq.(\ref{GSFH20}) decouples across the different $j$; we then have, \begin{eqnarray}\label{GSFH21} && \min_{\hat{s}_j} f(\hat{s}_j) = \hat{s}_j^T (\tilde{V}\tilde{V}^T + \gamma_2 I) \hat{s}_j \nonumber \\ && - (\gamma_1 \hat{a}_j^T + \gamma_3 b_j^T B_s) \hat{s}_j, \nonumber \\ \mathrm{s.t.} & & \hat{s}_j \geq 0, 1^T\hat{s}_j=1, \nonumber\\ \end{eqnarray} where $\tilde{V}=\Lambda^{-1/2}V$. The constraints in problem (\ref{GSFH21}) form a simplex, which can indeed lead to a sparse solution $\hat{s}_j$ and has empirical success in various applications (because $\left \| \hat{s}_j \right \|_1 = 1^T\hat{s}_j=1$). In order to solve Eq.(\ref{GSFH21}) for large $P$, it is more appropriate to apply first-order methods. In this paper, we use Nesterov's accelerated projected gradient method to optimize Eq.(\ref{GSFH21}). We present the details of the optimization as follows. One can verify that the objective function (\ref{GSFH21}) is convex, that its gradient is Lipschitz continuous, and that the Lipschitz constant is $Lp=2 \left \| \tilde{V}\tilde{V}^T + \gamma_2 I \right \|_2$ (i.e., the largest singular value of $2(\tilde{V}\tilde{V}^T + \gamma_2 I)$). The detailed proofs of these results are given in Theorem \ref{thm2} and Theorem \ref{thm3}, respectively. According to these results, problem (\ref{GSFH21}) can be efficiently solved by Nesterov's optimal gradient method (OGM) \cite{jour22}. \begin{thm}\label{thm2} The objective function $f(\hat{s}_j)$ is convex. \end{thm} \begin{proof} Given any two vectors $\hat{s}_j^1, \hat{s}_j^2 \in R^{P \times 1}$ and a positive number $\mu \in (0,1)$, we have \begin{eqnarray}\label{GSFH45} && f(\mu \hat{s}_j^1 + (1-\mu)\hat{s}_j^2) - (\mu f(\hat{s}_j^1) + (1-\mu)f(\hat{s}_j^2)) \nonumber \\ && = (\mu \hat{s}_j^1 + (1-\mu)\hat{s}_j^2)^T (\tilde{V}\tilde{V}^T + \gamma_2 I) (\mu \hat{s}_j^1 + (1-\mu)\hat{s}_j^2) \nonumber \\ && - (\gamma_1 \hat{a}_j^T + \gamma_3 b_j^T B_s) (\mu \hat{s}_j^1 + (1-\mu)\hat{s}_j^2) \nonumber \\ && - \mu (\hat{s}_j^{1T} (\tilde{V}\tilde{V}^T + \gamma_2 I) \hat{s}_j^1 - (\gamma_1 \hat{a}_j^T + \gamma_3 b_j^T B_s) \hat{s}_j^1) \nonumber \\ && - (1-\mu) (\hat{s}_j^{2T} (\tilde{V}\tilde{V}^T + \gamma_2 I) \hat{s}_j^2 \nonumber \\ && - (\gamma_1 \hat{a}_j^T + \gamma_3 b_j^T B_s) \hat{s}_j^2) \end{eqnarray} By some algebra, (\ref{GSFH45}) is equivalent to \begin{eqnarray}\label{GSFH38} && f(\mu \hat{s}_j^1 + (1-\mu)\hat{s}_j^2) - (\mu f(\hat{s}_j^1) + (1-\mu)f(\hat{s}_j^2)) \nonumber \\ && = \mu(\mu-1)(\hat{s}_j^1 - \hat{s}_j^2)^T (\tilde{V}\tilde{V}^T + \gamma_2 I)(\hat{s}_j^1 - \hat{s}_j^2) \nonumber \\ && = \mu(\mu-1)(\left \| \tilde{V}^T (\hat{s}_j^1 - \hat{s}_j^2) \right \|_F^2 + \gamma_2\left \|\hat{s}_j^1 - \hat{s}_j^2\right \|_F^2 ) \nonumber \\ && \leq 0. \end{eqnarray} Therefore, we have \begin{eqnarray}\label{GSFH39} && f(\mu \hat{s}_j^1 + (1-\mu)\hat{s}_j^2) \leq \mu f(\hat{s}_j^1) + (1-\mu)f(\hat{s}_j^2). \end{eqnarray} According to the definition of a convex function, $f(\hat{s}_j)$ is convex. This completes the proof.
\end{proof} \begin{thm}\label{thm3} The gradient of the objective function $f(\hat{s}_j)$ is Lipschitz continuous and the Lipschitz constant is $Lp=2 \left \| \tilde{V}\tilde{V}^T + \gamma_2 I \right \|_2$ (i.e., the largest singular value of $2(\tilde{V}\tilde{V}^T + \gamma_2 I)$). \end{thm} \begin{proof} According to (\ref{GSFH21}), we can obtain the gradient of $f(\hat{s}_j)$ \begin{eqnarray}\label{GSFH40} && \nabla f(\hat{s}_j) = 2(\tilde{V}\tilde{V}^T + \gamma_2 I) \hat{s}_j - (\gamma_1 \hat{a}_j + \gamma_3 B_s^T b_j). \end{eqnarray} For any two vectors $\hat{s}_j^1, \hat{s}_j^2 \in R^{P \times 1}$, we have \begin{eqnarray}\label{GSFH41} && \left \| \nabla f(\hat{s}_j^1) - \nabla f(\hat{s}_j^2) \right \|_F^2 \nonumber \\ && = \left \| 2(\tilde{V}\tilde{V}^T + \gamma_2 I)(\hat{s}_j^1 - \hat{s}_j^2) \right \|_F^2 \nonumber \\ && = Tr((U\Sigma U^T(\hat{s}_j^1 - \hat{s}_j^2))^T(U\Sigma U^T(\hat{s}_j^1 - \hat{s}_j^2))), \end{eqnarray} where $U\Sigma U^T$ is the SVD of $2(\tilde{V}\tilde{V}^T + \gamma_2 I)$ and the singular values $\left \{ \sigma_1, \ldots, \sigma_P \right \}$ are listed in descending order. By some algebra, (\ref{GSFH41}) is equivalent to \begin{eqnarray}\label{GSFH42} && \left \| \nabla f(\hat{s}_j^1) - \nabla f(\hat{s}_j^2) \right \|_F^2 \nonumber \\ && = Tr(U^T(\hat{s}_j^1 - \hat{s}_j^2)(\hat{s}_j^1 - \hat{s}_j^2)^T U\Sigma^2) \nonumber \\ && \leq \sigma_1^2 Tr(U^T(\hat{s}_j^1 - \hat{s}_j^2)(\hat{s}_j^1 - \hat{s}_j^2)^T U) \nonumber \\ && = \sigma_1^2 \left \| \hat{s}_j^1 - \hat{s}_j^2 \right \|_F^2, \end{eqnarray} where $\sigma_1$ is the largest singular value, and the last two equations come from the fact that $U^TU=UU^T=I_P$. From (\ref{GSFH42}), we have \begin{eqnarray}\label{GSFH43} && \left \| \nabla f(\hat{s}_j^1) - \nabla f(\hat{s}_j^2) \right \|_F \nonumber \\ && \leq Lp \left \| \hat{s}_j^1 - \hat{s}_j^2 \right \|_F \end{eqnarray} Therefore, $\nabla f(\hat{s}_j)$ is Lipschitz continuous and the Lipschitz constant is the largest singular value of $2(\tilde{V}\tilde{V}^T + \gamma_2 I)$, i.e., $Lp=\left \| 2(\tilde{V}\tilde{V}^T + \gamma_2 I) \right \|_2 = 2 \left \| \tilde{V}\tilde{V}^T + \gamma_2 I\right \|_2$. This completes the proof. \end{proof} In particular, we construct two sequences, i.e., $\hat{s}_j^t$ and $z_j^t$, and alternately update them in each iteration round. For convenience of notation, we use $\mathcal{C}$ to represent the associated constraints in Eq.(\ref{GSFH21}). At iteration $t$, the two sequences are \begin{eqnarray}\label{GSFH22} &\hat{s}_j^t = \arg \min_{\hat{s}_j \in \mathcal{C}} \phi (\hat{s}_j, z_j^{t-1}) = f(z_j^{t-1}) \nonumber \\ & + (\hat{s}_j-z_j^{t-1})^T \nabla f(z_j^{t-1}) + \frac{Lp}{2} \left \| \hat{s}_j-z_j^{t-1} \right \|_2^2 , \end{eqnarray} and \begin{eqnarray}\label{GSFH23} z_j^t = \hat{s}_j^t + \frac{c_t - 1}{c_{t+1}} (\hat{s}_j^t - \hat{s}_j^{t-1}), \end{eqnarray} where $\phi (\hat{s}_j, z_j^{t-1})$ is the proximal function of $f(\hat{s}_j)$ at $z_j^{t-1}$, $\hat{s}_j^t$ is the approximate solution obtained by minimizing the proximal function over $\hat{s}_j$, and $z_j^t$ stores the search point that is constructed by linearly combining the latest two approximate solutions, i.e., $\hat{s}_j^t$ and $\hat{s}_j^{t-1}$. According to \cite{jour22}, the combination coefficient is updated in each iteration round as \begin{eqnarray}\label{GSFH24} c_{t+1} = \frac{1+\sqrt{4c_t^2+1}}{2}.
\end{eqnarray} Dropping terms independent of $\hat{s}_j$, we can write the objective function in Eq.(\ref{GSFH22}) in a more compact form as follows: \begin{eqnarray}\label{GSFH25} \hat{s}_j^t = \arg \min_{\hat{s}_j \in \mathcal{C}} \frac{Lp}{2} \left \| \hat{s}_j-(z_j^{t-1}-\frac{1}{Lp} \nabla f(z_j^{t-1})) \right \|_2^2 . \end{eqnarray} Eq.(\ref{GSFH25}) is a Euclidean projection problem onto the simplex. According to the Karush-Kuhn-Tucker condition \cite{book1}, it can be verified that the optimal solution $\hat{s}_j^{t}$ is \begin{eqnarray}\label{GSFH26} \hat{s}_j^{t}=(z_j^{t-1}-\frac{1}{Lp} \nabla f(z_j^{t-1})+\eta\mathbf{1})_+, \end{eqnarray} where $(\cdot)_+=\max(\cdot,0)$ acts elementwise and the scalar $\eta$ is chosen such that $1^T\hat{s}_j^{t}=1$. By alternately updating $\hat{s}_j^t$, $z_j^t$ and $c_{t+1}$ with (\ref{GSFH22}), (\ref{GSFH23}) and (\ref{GSFH24}) until convergence, the optimal solution can be obtained. Note that recent results \cite{jour20, jour22} show that gradient-based methods with smooth optimization can achieve the optimal convergence rate $O(\frac{1}{t^2})$, where $t$ is the number of iterations. Here the convergence criterion is that the relative change of $\left \| \hat{s}_j \right \|_2$ is less than $10^{-4}$. We initialize $\hat{s}_j^0$ by solving Eq.(\ref{GSFH21}) without considering the constraints. We take the partial derivative of the objective (\ref{GSFH21}) with respect to $\hat{s}_j$. By setting this partial derivative to zero, a closed-form solution for $\hat{s}_j^0$ is acquired, \begin{eqnarray}\label{GSFH27} \hat{s}_j^{0}= \frac{1}{2}(\tilde{V}\tilde{V}^T + \gamma_2 I)^{-1}(\gamma_1 \hat{a}_j + \gamma_3 B_s^T b_j). \end{eqnarray} The full OGM algorithm is summarized in Algorithm \ref{alg:OGM}. \begin{algorithm}[h] \caption{Optimal Gradient Method (OGM).} \label{alg:OGM} \begin{algorithmic}[1] \Require $\tilde{V}$, $B$, $B_s$, $\hat{A}$, $\gamma_1$, $\gamma_2$, $\gamma_3$; maximum iteration number $T_{OGM}$. \Ensure $\hat{S}$. \State Initialize $j=1$. \Repeat \State Initialize $\hat{s}_j^0$ by Eq.(\ref{GSFH27}), $z_j^0 = \hat{s}_j^0$, $t=1$, $c_1=1$. \Repeat \State $\hat{s}_j^{t}=(z_j^{t-1}-\frac{1}{Lp} \nabla f(z_j^{t-1})+\eta\mathbf{1})_+$. \State $c_{t+1} = \frac{1+\sqrt{4c_t^2+1}}{2}$. \State $z_j^t = \hat{s}_j^t + \frac{c_t - 1}{c_{t+1}} (\hat{s}_j^t - \hat{s}_j^{t-1})$. \Until Convergence criterion is satisfied or the maximum iteration number is reached. \State $\hat{s}_j=\hat{s}_j^{t}$. \State $j=j+1$. \Until $j$ is greater than $N$. \State $\hat{S}^T = [\hat{s}_1, \hat{s}_2, \ldots, \hat{s}_N]$. \end{algorithmic} \end{algorithm} \item $\textbf{$\Lambda$ step}$. By fixing $V$, $\hat{S}$, $B$, $B_s$, and $W_{m}$, it is easy to solve for $\Lambda$ by \begin{eqnarray}\label{GSFH28} \Lambda= diag(\hat{S}^T1). \end{eqnarray} \item $\textbf{$V$ step}$. By fixing $\hat{S}$, $\Lambda$, $B$, $B_s$, and $W_{m}$, Eq.(\ref{GSFH19}) becomes \begin{eqnarray}\label{GSFH29} && \min_{V} Tr(V^T \hat{L} V) \nonumber \\ \mathrm{s.t.} & & V \in R^{P \times C}, V^TV=I_C. \end{eqnarray} The optimal $V$ for Eq.(\ref{GSFH29}) is formed by the $C$ eigenvectors corresponding to the $C$ smallest eigenvalues of the normalized Laplacian matrix $\hat{L}$. \item $\textbf{$B$ step}$.
\label{sub1} By fixing $V$, $\hat{S}$, $\Lambda$, $B_s$, and $W_{m}$, the corresponding sub-problem is: \begin{eqnarray}\label{GSFH30} & & \min_{B} -\gamma_3 Tr(B\hat{S}B_s^T)+\lambda\sum_{m=1}^M \left \| B- W_{m}^TX^{m} \right \|_F^2 \nonumber \\ \mathrm{s.t.} & & B \in \left\{-1, +1\right\}^{K \times N}, \end{eqnarray} which can be expanded into: \begin{eqnarray}\label{GSFH31} & & \min_{B} - Tr(B(\gamma_3 \hat{S}B_s^T + 2 \lambda \sum_{m=1}^M X^{mT}W_{m} )) \nonumber \\ \mathrm{s.t.} & & B \in \left\{-1, +1\right\}^{K \times N}. \end{eqnarray} This sub-problem can be solved by the following update: \begin{eqnarray}\label{GSFH32} B = sgn(\gamma_3 B_s \hat{S}^T + 2 \lambda \sum_{m=1}^M W_{m}^T X^{m}). \end{eqnarray} \item $\textbf{$B_s$ step}$. By fixing $V$, $\hat{S}$, $\Lambda$, $B$, and $W_{m}$, the update of $B_s$ reduces to: \begin{eqnarray}\label{GSFH33} & & \min_{B_s} - \gamma_3 Tr(B\hat{S}B_s^T) \nonumber \\ \mathrm{s.t.} & & B_s \in \left\{-1, +1\right\}^{K \times P}. \end{eqnarray} With the same scheme as for sub-problem (\ref{sub1}), this sub-problem can be solved as follows: \begin{eqnarray}\label{GSFH34} B_s = sgn(B\hat{S}), \end{eqnarray} which is consistent with the assumption made before Eq.(\ref{GSFH17}). \item $\textbf{$W_{m}$ step}$. By fixing $V$, $\hat{S}$, $\Lambda$, $B$, and $B_s$, this sub-problem finds the best mapping coefficients $W_{m}$ by minimizing $\left \| B- W_{m}^TX^{m} \right \|_F^2 $ with traditional linear regression. Therefore, we update $W_{m}$ as: \begin{eqnarray}\label{GSFH35} W_{m} = (X^{m}X^{mT})^{-1}X^{m}B^T. \end{eqnarray} \end{enumerate} The full AGSFH algorithm is summarized in Algorithm \ref{alg:GSFH}. \begin{algorithm}[h] \caption{Anchor Graph Structure Fusion Hashing (AGSFH).} \label{alg:GSFH} \begin{algorithmic}[1] \Require feature matrices $X^{m} (m=1,2,\ldots,M)$; code length $K$, the number of anchor points $P$, the cluster number $C$, the number of neighbor points $k$, maximum iteration number $T_{iter}$; parameters $\gamma_1$, $\gamma_2$, $\gamma_3$, $\lambda$. \Ensure The hash codes $B$ for the training instances $O$ and the projection coefficient matrices $W_{m} (m=1,2,\ldots,M)$. \State Uniformly and randomly select $P$ sample pairs from the training instances as the anchors $T$. \State Construct the anchor graphs $\hat{A}^{m} (m=1,2,\ldots,M)$ from the data matrix $X^{m}$ and the anchor matrix $T^{m}$ by a $k$-NN graph algorithm. \State Calculate the anchor graph structure fusion matrix $\hat{A}$ by Eq.(\ref{GSFH15}). \State Initialize $V$ by the $C$ eigenvectors corresponding to the $C$ smallest eigenvalues of the Laplacian matrix $\hat{L}=I-D^{-1/2}\hat{A}^T \hat{A}D^{-1/2}$, with $D= diag(\hat{A}^T1)$. \State Initialize $\Lambda = I_P$. \State Initialize $W_{m} (m=1,2,\ldots,M)$ randomly. \State Initialize the hash codes $B$ and $B_s$ randomly, such that $-1$ and $1$ in each bit are balanced. \Repeat \State Update $\hat{S}$ by Algorithm \ref{alg:OGM}. \State Update $\Lambda$ by Eq.(\ref{GSFH28}). \State Update $V$ by Eq.(\ref{GSFH29}), i.e., $V$ is formed by the $C$ eigenvectors corresponding to the $C$ smallest eigenvalues of $\hat{L}=I-\Lambda^{-1/2}\hat{S}^T \hat{S}\Lambda^{-1/2}$. \State Update $B$ by Eq.(\ref{GSFH32}). \State Update $B_s$ by Eq.(\ref{GSFH34}). \State Update $W_{m} (m=1,2,\ldots,M)$ by Eq.(\ref{GSFH35}). \Until The objective function of Eq.(\ref{GSFH19}) converges or the maximum number of iterations is reached.
\end{algorithmic} \end{algorithm} \subsection{Convergence Analysis} The original problem Eq.(\ref{GSFH19}) is not jointly convex in $\hat{S}$, $V$, $B$, $B_s$, and $W_{m}$. Hence, we may not obtain a global solution. We divide the original problem into six subproblems, i.e., Eqs.(\ref{GSFH21}), (\ref{GSFH28}), (\ref{GSFH29}), (\ref{GSFH30}), (\ref{GSFH33}) and (\ref{GSFH35}). Since Eq.(\ref{GSFH21}) is a constrained quadratic minimization, Eq.(\ref{GSFH28}) is a linear equation, $\hat{L}$ in Eq.(\ref{GSFH29}) is positive semi-definite, Eqs.(\ref{GSFH30}) and (\ref{GSFH33}) are constrained linear minimizations, and Eq.(\ref{GSFH35}) is a quadratic minimization, each subproblem can be solved optimally in its own step. The six subproblems are solved alternately, so AGSFH converges to a local solution. In Section \ref{Convergences}, we show the convergence curves. \subsection{Computational Analysis} The complexity of the proposed AGSFH mainly consists of six parts: 1) updating the intrinsic anchor graph $\hat{S}$, 2) updating $\Lambda$, 3) calculating the $C$ eigenvectors of $\hat{L}$, 4) updating the hash codes $B$, 5) updating the anchor hash codes $B_s$, and 6) updating the projection coefficients $W_{m} (m=1,2,\ldots,M)$. These six parts are repeated until the convergence condition is met, and they take $O (T_{OGM}(P+CP+KP+P^2+CP^2+P^3)N)$, $O (NP)$, $O (CP^2+P^2N)$, $O (K(P+1+\sum_{m=1}^M d_m)N)$, $O (KPN)$, and $O (\sum_{m=1}^M (d_m^2+Kd_m)N+\sum_{m=1}^M (d_m^2+d_m^3))$, respectively. Thus, the total complexity of Eq.(\ref{GSFH19}) is \begin{eqnarray}\label{GSFH36} &O (T_{iter}(T_{OGM}(P+CP+KP+P^2+CP^2+P^3) \nonumber \\ &+K(P+1+\sum_{m=1}^M d_m)+\sum_{m=1}^M (d_m^2+Kd_m) \nonumber \\ &+P+P^2+KP)N), \end{eqnarray} where $T_{iter}$ is the total number of iterations. We can see that AGSFH has a training time complexity that is linear in the number of training samples.
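For concreteness, one round of the closed-form updates in Eqs.(\ref{GSFH32}), (\ref{GSFH34}), and (\ref{GSFH35}) can be written in a few lines. The following is a minimal NumPy sketch; the function name and the shape conventions are our own illustrative choices, and the $\hat{S}$, $\Lambda$, and $V$ updates of Algorithm \ref{alg:GSFH} (Algorithm \ref{alg:OGM} and the eigen-decomposition) are omitted.
\begin{verbatim}
import numpy as np

def closed_form_round(S_hat, Xs, Bs, Ws, gamma3, lam):
    # S_hat: N x P intrinsic anchor graph; Xs[m]: d_m x N features;
    # Bs: K x P anchor codes; Ws[m]: d_m x K projections.
    # B step, Eq. (32).
    B = np.sign(gamma3 * Bs @ S_hat.T
                + 2.0 * lam * sum(W.T @ X for W, X in zip(Ws, Xs)))
    B[B == 0] = 1                    # keep codes strictly in {-1, +1}
    # B_s step, Eq. (34).
    Bs = np.sign(B @ S_hat)
    Bs[Bs == 0] = 1
    # W_m step, Eq. (35): least-squares regression from X^m to B
    # (assuming X X^T is invertible; add a small ridge otherwise).
    Ws = [np.linalg.solve(X @ X.T, X @ B.T) for X in Xs]
    return B, Bs, Ws
\end{verbatim}
Because every update has a closed form, the discrete constraints on $B$ and $B_s$ are handled without relaxation, which is how the large quantization error of relaxed methods is avoided.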
\section{Experiments} \renewcommand{\arraystretch}{1.0} \begin{table}[htb] \centering \footnotesize \setlength{\belowcaptionskip}{10pt} \caption{Comparison of MAP for Two Cross-modal Retrieval Tasks on the Wiki Benchmark.} \label{Table.1} \resizebox{0.45\textwidth}{!}{ \begin{tabular}{ccccccc} \hline \multirow{2}{*}{Tasks}& \multirow{2}{*}{Methods} & \multicolumn{4}{c}{Wiki}\cr\cline{3-6} &&16 bits&32 bits&64 bits&128 bits\cr \hline \multirow{6}{*}{I$\rightarrow$T} &CSGH &0.2065 &0.2131 &0.1985 &0.1983 \cr &BGDH &0.1815 &0.1717 &0.1717 &0.1717 \cr &FSH &0.2426 &0.2609 &0.2622 &{\bf 0.2710} \cr &RFDH &0.2443 &0.2455 &0.2595 &0.2616 \cr &JIMFH &0.2384 &0.2501 &0.2472 &0.2542 \cr &{\bf AGSFH} &{\bf 0.2548} &{\bf 0.2681} &{\bf 0.2640} &0.2680 \cr \hline \multirow{6}{*}{T$\rightarrow$I} &CSGH &0.2130 &0.2389 &0.2357 &0.2380 \cr &BGDH &0.1912 &0.1941 &0.2129 &0.2129 \cr &FSH &0.4150 &0.4359 &0.4753 &0.4956 \cr &RFDH &0.4185 &0.4438 &0.4633 &0.4922 \cr &JIMFH &0.3653 &0.4091 &0.4270 &0.4456 \cr &{\bf AGSFH} &{\bf 0.5782} &{\bf 0.6005} &{\bf 0.6175} &{\bf 0.6214} \cr \hline \end{tabular}} \end{table} \renewcommand{\arraystretch}{1.0} \begin{table}[htb] \centering \footnotesize \setlength{\belowcaptionskip}{10pt} \caption{Comparison of MAP for Two Cross-modal Retrieval Tasks on the MIRFlickr25K Benchmark.} \label{Table.2} \resizebox{0.45\textwidth}{!}{ \begin{tabular}{ccccccc} \hline \multirow{2}{*}{Tasks}& \multirow{2}{*}{Methods} & \multicolumn{4}{c}{MIRFlickr25K}\cr\cline{3-6} &&16 bits&32 bits&64 bits&128 bits\cr \hline \multirow{6}{*}{I$\rightarrow$T} &CSGH &0.5240 &0.5238 &0.5238 &0.5238 \cr &BGDH &0.5244 &0.5248 &0.5248 &0.5244 \cr &FSH &0.6347 &0.6609 &0.6630 &0.6708 \cr &RFDH &0.6525 &0.6601 &0.6659 &0.6659 \cr &JIMFH &{\bf 0.6563} &{\bf 0.6703} &0.6737 &0.6813 \cr &{\bf AGSFH} &0.6509 &0.6650 &{\bf 0.6777} &{\bf 0.6828} \cr \hline \multirow{6}{*}{T$\rightarrow$I} &CSGH &0.5383 &0.5381 &0.5382 &0.5379 \cr &BGDH &0.5360 &0.5360 &0.5360 &0.5360 \cr &FSH &0.6229 &0.6432 &0.6505 &0.6532 \cr &RFDH &0.6389 &0.6405 &0.6417 &0.6438 \cr &JIMFH &0.6432 &0.6570 &0.6605 &0.6653 \cr &{\bf AGSFH} &{\bf 0.6565} &{\bf 0.6862} &{\bf 0.7209} &{\bf 0.7505} \cr \hline \end{tabular}} \end{table} \renewcommand{\arraystretch}{1.0} \begin{table}[htb] \centering \footnotesize \setlength{\belowcaptionskip}{10pt} \caption{Comparison of MAP for Two Cross-modal Retrieval Tasks on the NUS-WIDE Benchmark.} \label{Table.3} \resizebox{0.45\textwidth}{!}{ \begin{tabular}{ccccccc} \hline \multirow{2}{*}{Tasks}& \multirow{2}{*}{Methods} & \multicolumn{4}{c}{NUS-WIDE}\cr\cline{3-6} &&16 bits&32 bits&64 bits&128 bits\cr \hline \multirow{6}{*}{I$\rightarrow$T} &CSGH &0.4181 &0.4550 &0.4551 &0.4652 \cr &BGDH &0.4056 &0.4056 &0.4056 &0.4056 \cr &FSH &{\bf 0.5021} &0.5200 &0.5398 &{\bf 0.5453} \cr &RFDH &0.4701 &0.4699 &0.4611 &0.4772 \cr &JIMFH &0.4952 &{\bf 0.5334} &0.5223 &0.5334 \cr &{\bf AGSFH} &0.4856 &0.5189 &{\bf 0.5401} &0.5433 \cr \hline \multirow{6}{*}{T$\rightarrow$I} &CSGH &0.4505 &0.5132 &0.5201 &0.5121 \cr &BGDH &0.3856 &0.3851 &0.3851 &0.3850 \cr &FSH &0.4743 &0.4953 &0.5114 &0.5327 \cr &RFDH &0.4701 &0.4713 &0.4651 &0.4626 \cr &JIMFH &0.4613 &0.4757 &0.5107 &0.5179 \cr &{\bf AGSFH} &{\bf 0.5152} &{\bf 0.5834} &{\bf 0.6144} &{\bf 0.6362} \cr \hline \end{tabular}} \end{table} \begin{figure*} [ht] \centering \subfigure[ ]{ \includegraphics[width=.3\textwidth]{P_NImageQTextB_Nuswide}} \hfill \centering \subfigure[ ]{ \includegraphics[width=.3\textwidth]{P_NTextQImageB_Nuswide}} \hfill \centering \subfigure[ ]{
\includegraphics[width=.3\textwidth]{P_NImageQTextB_MirFlickr25K}} \centering \subfigure[ ]{ \includegraphics[width=.3\textwidth]{P_NTextQImageB_MirFlickr25K}} \hfill \centering \subfigure[ ]{ \includegraphics[width=.3\textwidth]{P_NImageQTextB_wiki}} \hfill \centering \subfigure[ ]{ \includegraphics[width=.3\textwidth]{P_NTextQImageB_wiki}} \caption{TopN-precision curves @ $32$ bits on three cross-modal benchmark datasets. (a) and (b) are NUS-WIDE. (c) and (d) are MIRFlickr25K. (e) and (f) are Wiki.} \label{figure3} \end{figure*} \begin{figure*} [ht] \centering \subfigure[ ]{ \includegraphics[width=.3\textwidth]{P_RImageQTextB_Nuswide}} \hfill \centering \subfigure[ ]{ \includegraphics[width=.3\textwidth]{P_RTextQImageB_Nuswide}} \hfill \centering \subfigure[ ]{ \includegraphics[width=.3\textwidth]{P_RImageQTextB_MirFlickr25K}} \centering \subfigure[ ]{ \includegraphics[width=.3\textwidth]{P_RTextQImageB_MirFlickr25K}} \hfill \centering \subfigure[ ]{ \includegraphics[width=.3\textwidth]{P_RImageQTextB_wiki}} \hfill \centering \subfigure[ ]{ \includegraphics[width=.3\textwidth]{P_RTextQImageB_wiki}} \caption{Precision-recall curves @ $32$ bits on three cross-modal benchmark datasets. (a) and (b) are NUS-WIDE. (c) and (d) are MIRFlickr25K. (e) and (f) are Wiki.} \label{figure4} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=1.0\textwidth]{visualize_retrieval_I2T} \caption{Examples of image-based text retrieval on Wiki using AGSFH. For each image query (first column), the top $10$ retrieved texts are shown in columns $2$-$11$: the top row shows the retrieved texts, the middle row their corresponding images, and the bottom row their corresponding class labels. Retrieved results irrelevant to the query in semantic category are marked with a red box.} \label{figure32} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=1.0\textwidth]{visualize_retrieval_T2I} \caption{Examples of text-based image retrieval on Wiki using AGSFH. For each text query (first column), the top $10$ retrieved images are shown in the third column: the top row shows the retrieved images, and the bottom row their corresponding class labels. The second column shows the ground-truth image corresponding to the text query. Retrieved results irrelevant to the query in semantic category are marked with a red box.} \label{figure31} \end{figure*} In this section, we show the comparison results of AGSFH with other state-of-the-art CMH approaches on three multi-modal benchmark databases. All the results are obtained on a 64-bit Windows PC with a 2.20GHz i7-8750H CPU and 16.0GB RAM. \subsection{Experimental Settings} \subsubsection{Datasets} We perform experiments on three multi-modal benchmark datasets: Wiki \cite{jour9}, MIRFlickr25K \cite{jour10}, and NUS-WIDE \cite{jour11}. The details are given as follows. The Wiki dataset \cite{jour9} comprises $2,866$ image-text pairs sampled from Wikipedia. The dataset contains ten semantic categories, such that each pair belongs to one of these categories. Each image is represented by a $128$-dimensional bag-of-visual-words vector, and each text is represented by a $10$-dimensional word vector. We randomly select $2,173$ image-text pairs from the original dataset as the training set (which is also used as the database in the retrieval task), and the remaining $693$ image-text pairs are used as the query set. MIRFlickr25K \cite{jour10} is a dataset containing $25,000$ image-text pairs in $24$ semantic categories.
In our experiment, we remove pairs whose textual tags appear fewer than $20$ times. Accordingly, we get $20,015$ pairs in total, in which a $150$-dimensional histogram vector is used to represent each image, and a $500$-dimensional latent semantic vector is used to represent each text. We randomly select $2,000$ image-tag pairs as the query set and the remaining $18,015$ pairs as the database in the retrieval task. We also randomly select $5,000$ image-text pairs from the remaining $18,015$ pairs as the training set to learn the hash model. NUS-WIDE \cite{jour11} is a multi-label image dataset that contains $269,648$ images with $5,018$ unique tags in $81$ categories. Similar to \cite{jour11}, we sample the ten largest categories with the corresponding $186,577$ images. We then randomly split this dataset into a database set with $184,577$ image-text pairs and a query set with $2,000$ image-text pairs. We should note that each image is represented by a $500$-dimensional bag-of-words visual vector, and the texts are represented by a $1,000$-dimensional index vector. Besides, we randomly select $5,000$ image-text pairs as the training set, which is used to learn the hash model. \subsubsection{Compared Methods} Our model is a shallow unsupervised CMH learning model. In order to conduct a fair comparison, we conduct several experiments and compare the AGSFH method with other shallow unsupervised CMH baselines: CSGH \cite{proceeding14}, BGDH \cite{proceeding16}, FSH \cite{proceeding17}, RFDH \cite{jour5}, and JIMFH \cite{jour23}. To implement the baselines, we use and modify their provided source codes. We note that the results are averaged over five runs. To further show the superior performance of our proposal, we compare our work with some deep learning methods, i.e., DCMH \cite{proceeding23}, DDCMH \cite{proceeding24}, and DBRC \cite{jour14}, on MIRFlickr25K in Section \ref{sec4.5}. \subsubsection{Implementation Details} We utilize the public codes with the parameters fixed as in the corresponding papers to run the baseline CMH methods. Besides, in our experiments, the trade-off hyper-parameter $\lambda$ is set to $300$ on all three datasets. The weight controller hyper-parameters $\gamma_1$ and $\gamma_3$ are both set to $0.01$. The regularization hyper-parameter $\gamma_2$ is set to $10$ for better performance. The number of clusters $C$ is set to $60$ in all our experiments. For training efficiency, the number of anchor points $P$ is set to $900$ for all the datasets. The number of neighbor points $k$ is set to $45$. \subsubsection{Evaluation Criteria} We adopt three standard metrics to show the retrieval performance of AGSFH: mean average precision (MAP), topN-precision curves, and precision-recall curves. The MAP metric is calculated over the top $50$ returned retrieval samples. A more detailed introduction to these evaluation criteria is given in \cite{proceeding30}. Two typical cross-modal retrieval tasks are evaluated for our approach: the image-query-text task (i.e., I$\rightarrow$T) and the text-query-image task (i.e., T$\rightarrow$I).
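As a reference for the first metric, MAP over the top $50$ returned neighbours can be computed as in the following sketch. This is our own illustrative implementation, assuming single-label ground truth as in Wiki; for the multi-label datasets (MIRFlickr25K and NUS-WIDE), relevance is instead defined by sharing at least one label.
\begin{verbatim}
import numpy as np

def map_at_n(Bq, Bd, yq, yd, n=50):
    # Bq: K x Nq query codes, Bd: K x Nd database codes, in {-1, +1};
    # yq, yd: integer class labels of queries and database items.
    K = Bq.shape[0]
    dist = (K - Bq.T @ Bd) / 2.0   # Hamming distance via inner product
    aps = []
    for i in range(Bq.shape[1]):
        top = np.argsort(dist[i])[:n]            # top-n retrieved items
        rel = (yd[top] == yq[i]).astype(float)   # relevance indicators
        if rel.sum() == 0:
            aps.append(0.0)
            continue
        prec = np.cumsum(rel) / np.arange(1, n + 1)
        aps.append(float((prec * rel).sum() / rel.sum()))
    return float(np.mean(aps))
\end{verbatim}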
\subsection{Experimental Analysis} \renewcommand{\arraystretch}{1.0} \begin{table*}[htb] \centering \tiny \setlength{\tabcolsep}{5pt} \caption{Training Time (in Seconds) of Different Hashing Methods on MIRFlickr25K and NUS-WIDE.} \label{Table.4} \resizebox{0.9\textwidth}{!}{ \begin{tabular}{ccccccccc} \hline \multirow{2}{*}{Methods} &\multicolumn{4}{c}{MIRFlickr25K} &\multicolumn{4}{c}{NUS-WIDE}\cr \cmidrule(lr){2-5} \cmidrule(lr){6-9} &16 bits&32 bits&64 bits&128 bits&16 bits&32 bits&64 bits&128 bits\cr \hline CSGH &23.4576 &22.1885 &22.6214 &23.2040 &434.8411 &438.4742 &446.0972 &455.3075 \cr BGDH &0.6640 &0.7100 &1.0135 &1.4828 &8.8514 &10.0385 &12.6808 &16.6654 \cr FSH &5.2739 &5.2588 &5.6416 &6.2217 &75.6872 &77.9825 &83.4182 &89.5166 \cr RFDH &61.9264 &125.8577 &296.4683 &801.0725 &2132.1634 &4045.4016 &8995.0049 &22128.4188 \cr JIMFH &2.1706 &2.3945 &3.0242 &4.3020 &23.3772 &26.9233 &32.0088 &43.6795 \cr {\bf AGSFH} &79.2261 &80.3913 &136.8426 &79.4166 &638.8646 &871.4929 &794.8452 &961.3188 \cr \hline \end{tabular}} \end{table*} \begin{table*}[htb] \centering \tiny \setlength{\belowcaptionskip}{5pt} \caption{Testing Time (in Seconds) of Different Hashing Methods on MIRFlickr25K and NUS-WIDE.} \label{Table.5} \resizebox{0.9\textwidth}{!}{ \begin{tabular}{cccccccccc} \hline \multirow{2}{*}{Tasks}& \multirow{2}{*}{Methods} & \multicolumn{4}{c}{MIRFlickr25K} &\multicolumn{4}{c}{NUS-WIDE}\cr \cmidrule(lr){3-6} \cmidrule(lr){7-10} &&16 bits&32 bits&64 bits&128 bits&16 bits&32 bits&64 bits&128 bits\cr \hline \multirow{6}{*}{I$\rightarrow$T} &CSGH &0.1935 &0.2320 &0.3626 &1.0183 &3.2961 &4.4186 &6.8955 &12.4145 \cr &BGDH &0.1662 &0.2322 &0.3610 &1.0342 &3.3980 &4.4255 &6.9406 &12.4911 \cr &FSH &0.1984 &0.2464 &0.3555 &1.0110 &3.4221 &4.5401 &7.1110 &12.6618 \cr &RFDH &0.1941 &0.2522 &0.3672 &1.0303 &3.4753 &4.4805 &6.9003 &12.6593 \cr &JIMFH &0.1782 &0.2405 &0.3742 &1.0076 &3.2559 &4.5116 &7.0505 &12.6736 \cr &{\bf AGSFH} &0.1528 &0.2403 &0.3784 &0.9411 &3.9483 &4.5114 &6.9818 &12.6628 \cr \hline \multirow{6}{*}{T$\rightarrow$I} &CSGH &0.1562 &0.2133 &0.3670 &1.0157 &3.2620 &4.4325 &6.9195 &12.3532 \cr &BGDH &0.1442 &0.2194 &0.3464 &1.0355 &3.4354 &4.4582 &6.8954 &12.3665 \cr &FSH &0.1533 &0.2130 &0.3639 &1.0101 &3.4172 &4.4222 &6.9067 &12.422 \cr &RFDH &0.1455 &0.2263 &0.3584 &1.0281 &3.3462 &4.4351 &6.9336 &12.4925 \cr &JIMFH &0.1527 &0.2101 &0.3444 &1.0105 &3.4214 &4.4370 &6.9354 &12.4270 \cr &{\bf AGSFH} &0.1371 &0.2224 &0.3532 &0.9485 &3.4478 &4.4860 &6.9820 &12.4307 \cr \hline \end{tabular}} \end{table*} \subsubsection{Retrieval Performance} Tables \ref{Table.1}, \ref{Table.2}, and \ref{Table.3} report the MAP results on the three datasets, i.e., Wiki, MIRFlickr25K, and NUS-WIDE. From these tables, for both cross-modal tasks (image-query-text and text-query-image), AGSFH achieves significantly better results than all comparison methods on Wiki and MIRFlickr25K. On NUS-WIDE, AGSFH achieves performance comparable to JIMFH and FSH on the image-query-text task, outperforms the remaining comparison methods, and performs significantly better than all comparison methods on the text-query-image task. The superiority of AGSFH can be attributed to its capability to reduce the effect of information loss: it directly learns the intrinsic anchor graph, which exploits the geometric property of the underlying data structure across multiple modalities, and it avoids large quantization error.
Besides, AGSFH adaptively learns the structure of the intrinsic anchor graph so that training instances are clustered in the semantic space, and it preserves the anchor fusion affinity in the common binary Hamming space, which guarantees that the binary codes preserve the semantic relationships of the training instances. These observations demonstrate the effectiveness of the proposed AGSFH. We further observe that the average improvement on the text-query-image task is larger than on the image-query-text task. This is because images contain more noise and outliers than texts. The topN-precision curves with a code length of $32$ bits on the three datasets are shown in Fig. \ref{figure3}. The topN-precision results are consistent with the MAP values: AGSFH performs better than the comparison methods on Wiki and MIRFlickr25K, and on NUS-WIDE it is comparable to JIMFH and FSH on the image-query-text task, better than the remaining comparison methods, and significantly better than all comparison methods on the text-query-image task. In a retrieval system, users pay most attention to the top-ranked items of the returned list; in this sense, AGSFH achieves better performance on all retrieval tasks. From Tables \ref{Table.1}, \ref{Table.2}, and \ref{Table.3} and Figs. \ref{figure3}-\ref{figure4}, AGSFH usually shows large performance margins over the other methods on Wiki and MIRFlickr25K, while exhibiting the behavior on NUS-WIDE described above. We consider two possible reasons for this. Firstly, AGSFH directly learns the intrinsic anchor graph, which exploits the geometric property of the underlying data structure across multiple modalities and avoids large quantization error; this makes it robust to outliers and noise in the image and text data and leads to the performance improvement. Secondly, AGSFH adaptively adjusts the structure of the intrinsic anchor graph so that training instances are clustered in the semantic space, and it preserves the anchor fusion affinity in the common binary Hamming space. This process extracts the high-level hidden semantic relationships between images and texts, so AGSFH finds common semantic clusters that reflect the semantic properties more precisely. Consequently, under the guidance of these common semantic clusters, AGSFH achieves better performance on cross-modal retrieval tasks. The precision-recall curves with a code length of $32$ bits are shown in Fig. \ref{figure4}. Comparing the areas under the precision-recall curves, AGSFH outperforms the comparison methods on Wiki and MIRFlickr25K; on NUS-WIDE, it is comparable to JIMFH and FSH on the image-query-text task, better than the remaining comparison methods, and significantly better than all comparison methods on the text-query-image task. Moreover, we present qualitative examples of the image-query-text and text-query-image retrieval tasks on the Wiki dataset using AGSFH.
The results are shown in Fig. \ref{figure32} and Fig. \ref{figure31}. Fig. \ref{figure32} shows that the classes of the second, fourth, sixth, and ninth retrieved texts differ from that of the image query, which belongs to the music category (first row); the visual appearance of this query is very similar to that of the incorrectly retrieved results. The example of text-based image retrieval is displayed in Fig. \ref{figure31}: the classes of the first, second, fourth, fifth, and sixth retrieved images differ from that of the text query, which belongs to the warfare category. Again, the images corresponding to the text query are visually very similar to the incorrectly retrieved images. \subsubsection{Training Time} We measure the training time of all compared CMH methods on the two large benchmark datasets, MIRFlickr25K and NUS-WIDE, to assess the efficiency of AGSFH. Table \ref{Table.4} reports the comparison. Compared with CSGH, BGDH, FSH, and JIMFH, AGSFH takes somewhat longer to train, but it is much faster than RFDH at all code lengths from $16$ to $128$ bits. Moreover, the training time of AGSFH remains nearly constant, and acceptable, as the code length grows from $16$ to $128$ bits, which shows the advantage of AGSFH in time complexity. \subsubsection{Testing Time} Table \ref{Table.5} compares the testing time of all compared CMH methods. All methods, including AGSFH, take nearly identical time for retrieval. \begin{table}[htb] \centering \tiny \setlength{\belowcaptionskip}{5pt} \caption{MAP Comparison of AGSFH and Deep Cross-modal Hashing Methods on MIRFlickr25K.} \label{Table.6} \resizebox{0.48\textwidth}{!}{ \begin{tabular}{ccccc} \hline \multirow{2}{*}{Tasks}& \multirow{2}{*}{Methods} & \multicolumn{3}{c}{MIRFlickr25K} \cr\cline{3-5} &&16 bits&32 bits&64 bits\cr \hline \multirow{4}{*}{I$\rightarrow$T} &DCMH &0.7410 &0.7465 &0.7485 \cr &DDCMH &{\bf 0.8208} &{\bf 0.8434} &{\bf 0.8551} \cr &DBRC &0.5922 &0.5922 &0.5854 \cr &{\bf AGSFH} &0.7239 &0.7738 &0.8135 \cr \hline \multirow{4}{*}{T$\rightarrow$I} &DCMH &{\bf 0.7827} &0.7900 &0.7932 \cr &DDCMH &0.7731 &0.7766 &0.7905 \cr &DBRC &0.5938 &0.5952 &0.5938 \cr &{\bf AGSFH} &0.7505 &{\bf 0.8009} &{\bf 0.8261} \cr \hline \end{tabular}} \end{table} \subsubsection{Comparison with Deep Cross-modal Hashing} \label{sec4.5} We further compare AGSFH with three state-of-the-art deep CMH methods, i.e., DCMH \cite{proceeding23}, DDCMH \cite{proceeding24}, and DBRC \cite{jour14}, on the MIRFlickr25K dataset. For AGSFH, we use deep image features extracted by the CNN-F network \cite{proceeding31} with the same parameters as in \cite{proceeding23}, together with the original text features. The results of DCMH, DDCMH, and DBRC (whose code is not publicly available) are taken from the corresponding papers. The experimental results are shown in Table \ref{Table.6}. From Table \ref{Table.6}, AGSFH outperforms DCMH and DBRC on both the image-to-text and text-to-image tasks; it is inferior to DDCMH on the image-to-text task but better than it on the text-to-image task.
From these observations, we conclude that although AGSFH is not a deep hashing model, it can outperform some state-of-the-art deep cross-modal hashing methods, i.e., DCMH and DBRC. While DDCMH shows significantly better results than AGSFH on the image-to-text task, AGSFH retains several practical strengths over DDCMH, such as a simple structure, ease of implementation, time efficiency, and strong interpretability. This further verifies the effectiveness of the proposed model AGSFH. \subsection{Empirical Analysis} \begin{figure}[ht] \centering \includegraphics[width=0.5\columnwidth]{objectvalue.pdf} \caption{Convergence analysis.} \label{figure5} \end{figure} \begin{figure*}[ht] \centering \subfigure[ ]{ \includegraphics[width=0.45\columnwidth]{MAPImageQTextB_loglambda.pdf}} \centering \subfigure[ ]{ \includegraphics[width=0.45\columnwidth]{MAPTextQImageB_loglambda.pdf}} \centering \subfigure[ ]{ \includegraphics[width=0.45\columnwidth]{MAPImageQTextB_loggamma1.pdf}} \centering \subfigure[ ]{ \includegraphics[width=0.45\columnwidth]{MAPTextQImageB_loggamma1.pdf}} \caption{MAP values versus hyper-parameters. (a) and (b) are $\lambda$. (c) and (d) are $\gamma_1$.} \label{figure6} \end{figure*} \begin{figure*}[ht] \centering \subfigure[ ]{ \includegraphics[width=0.45\columnwidth]{MAPImageQTextB_loggamma2.pdf}} \centering \subfigure[ ]{ \includegraphics[width=0.45\columnwidth]{MAPTextQImageB_loggamma2.pdf}} \centering \subfigure[ ]{ \includegraphics[width=0.45\columnwidth]{MAPImageQTextB_loggamma3.pdf}} \centering \subfigure[ ]{ \includegraphics[width=0.45\columnwidth]{MAPTextQImageB_loggamma3.pdf}} \caption{MAP values versus hyper-parameters. (a) and (b) are $\gamma_2$. (c) and (d) are $\gamma_3$.} \label{figure7} \end{figure*} \begin{figure*}[ht] \centering \subfigure[ ]{ \includegraphics[width=0.45\columnwidth]{MAPImageQTextB_c.pdf}} \centering \subfigure[ ]{ \includegraphics[width=0.45\columnwidth]{MAPTextQImageB_c.pdf}} \centering \subfigure[ ]{ \includegraphics[width=0.45\columnwidth]{MAPImageQTextB_k.pdf}} \centering \subfigure[ ]{ \includegraphics[width=0.45\columnwidth]{MAPTextQImageB_k.pdf}} \caption{MAP values versus hyper-parameters. (a) and (b) are $C$. (c) and (d) are $k$.} \label{figure8} \end{figure*} \begin{figure}[ht] \centering \subfigure{ \includegraphics[width=0.45\columnwidth]{MAPImageQTextB_P.pdf}} \centering \subfigure{ \includegraphics[width=0.45\columnwidth]{MAPTextQImageB_P.pdf}} \caption{MAP values versus hyper-parameter $P$.} \label{figure9} \end{figure} \subsubsection{Convergence} \label{Convergences} We empirically validate the convergence of the proposed AGSFH. Specifically, we analyze the objective value on all three datasets with the hash code length fixed at $64$ bits. Fig. \ref{figure5} shows the convergence curves, where the objective value is normalized by the number of training instances and by its maximum value. From Fig. \ref{figure5}, the objective value of AGSFH decreases sharply within fewer than $40$ iterations on all three datasets and changes little thereafter, which shows the fast convergence of Algorithm \ref{alg:GSFH}.
\subsubsection{Ablation Experiments Analysis} \begin{table}[htb] \centering \tiny \setlength{\belowcaptionskip}{5pt} \caption{Ablation Results of AGSFH on MIRFlickr25K.} \label{Table.7} \resizebox{0.48\textwidth}{!}{ \begin{tabular}{cccccc} \hline \multirow{2}{*}{Tasks}& \multirow{2}{*}{Methods} & \multicolumn{4}{c}{MIRFlickr25K} \cr\cline{3-6} &&16 bits&32 bits&64 bits&128 bits\cr \hline \multirow{3}{*}{I$\rightarrow$T} &AGSFH\_appro &0.6249 &{\bf 0.6531} &0.6660 &0.6688 \cr &AGSFH\_reg &0.6397 &0.6353 &0.6609 &{\bf 0.6709} \cr &{\bf AGSFH} &{\bf 0.6421} &0.6509 &{\bf 0.6702} &0.6701 \cr \hline \multirow{3}{*}{T$\rightarrow$I} &AGSFH\_appro &0.6419 &0.6720 &0.6990 &0.7095 \cr &AGSFH\_reg &0.6496 &0.6612 &0.7013 &0.7026 \cr &{\bf AGSFH} &{\bf 0.6676} &{\bf 0.6745} &{\bf 0.7101} &{\bf 0.7163} \cr \hline \end{tabular}} \end{table} To gain deeper insight into AGSFH, we perform an ablation study on the MIRFlickr25K dataset with two variants of AGSFH: $1)$ AGSFH\_appro and $2)$ AGSFH\_reg. Compared with AGSFH, AGSFH\_appro drops the second term of Eq.(\ref{GSFH19}), which requires the learned adaptive anchor graph $\hat{S}$ to best approximate the fused anchor graph $\hat{A}$. In contrast, AGSFH\_reg discards the third term of Eq.(\ref{GSFH19}), the $L_2$-norm regularization that smooths the elements of the learned anchor graph $\hat{S}$. The MAP results of AGSFH and its variants are shown in Table \ref{Table.7}, from which we observe the following. \begin{enumerate}[(1)] \setlength{\listparindent}{2em} \item AGSFH outperforms AGSFH\_appro and AGSFH\_reg in most cases, which shows the benefit of the second and third terms of Eq.(\ref{GSFH19}). \item AGSFH\_appro and AGSFH\_reg perform approximately on par with each other. This indicates that the second and third terms of Eq.(\ref{GSFH19}) are both useful for performance and contribute differently to the improvement of AGSFH. \end{enumerate} \subsubsection{Parameter Sensitivity Analysis} In this part, we analyze the parameter sensitivity of the two cross-modal retrieval tasks under various experimental settings on all datasets. AGSFH involves seven hyper-parameters: the trade-off hyper-parameter $\lambda$, the weight-controller hyper-parameters $\gamma_1$ and $\gamma_3$, the regularization hyper-parameter $\gamma_2$, the number of clusters $C$, the number of anchor points $P$, and the number of neighbor points $k$. We fix the code length to $32$ bits and vary one hyper-parameter at a time while keeping the others fixed, to verify the superiority and stability of AGSFH over a wide parameter range. Figs. \ref{figure6}, \ref{figure7}, \ref{figure8}, and \ref{figure9} present the MAP results of AGSFH. The performance is relatively stable and superior over a large range of parameter values, verifying the robustness of AGSFH to parameter variations. \section{Conclusion} In this paper, we propose a novel cross-modal hashing approach for efficient retrieval tasks across modalities, termed Anchor Graph Structure Fusion Hashing (AGSFH). AGSFH directly learns an intrinsic anchor fusion graph, where the structure of the intrinsic anchor graph is adaptively tuned so that the number of connected components of the intrinsic graph is exactly equal to the number of clusters; through this process, training instances are clustered in the semantic space.
Besides, AGSFH directly preserves the anchor fusion affinity, which carries complementary information among multi-modal data, in the common binary Hamming space, so that the hash codes capture the intrinsic similarity and structure across modalities. A discrete optimization framework is designed to learn the unified binary codes across modalities. Extensive experimental results on three public social datasets demonstrate the superiority of AGSFH in cross-modal retrieval. \bibliographystyle{IEEEtran} \section{Introduction} \IEEEPARstart{I}{n} the wake of the rapid growth of multi-modal data (e.g., image, text, and video) on social media and the internet, cross-modal information retrieval has attracted much attention for storing and searching items in large-scale data environments. Hashing-based methods \cite{proceeding7, proceeding10, proceeding27, jour13, yan2020deep, yan2021task, yan2021precise, yan2021age, ma2018global, ma2020discriminative, ma2021learning, ma2020correlation, ma2017manifold, ma2017learning} have shown their superiority in the approximate nearest neighbor retrieval task because of their low storage consumption and fast search speed. This superiority is achieved by the compact binary code representation, which aims to construct binary codes in the Hamming space that preserve the semantic affinity across modalities as much as possible. Depending on whether label information is utilized, cross-modal hashing (CMH) methods can be categorized into two subclasses: unsupervised \cite{proceeding17, proceeding14, proceeding15, proceeding16, jour5, jour23} and supervised \cite{proceeding18, proceeding19, proceeding20, proceeding21, proceeding22} methods; the details are given in Section \ref{Sec2}. Most existing cross-modal hashing methods preserve the intra- and inter-modal affinities separately when learning the binary codes, neglecting the fusion semantic affinity among multi-modal data. However, such fusion affinity is the key to capturing cross-modal semantic similarity, since data from different modalities can complement each other. Recently, some works have considered utilizing multiple similarities \cite{proceeding35, jour24}, but few capture such fusion semantic similarity with binary codes. A representative achievement is Fusion Similarity Hashing (FSH) \cite{proceeding17}, which builds an asymmetrical fusion graph to capture the intrinsic relations across heterogeneous data and directly preserves the fusion similarity in a Hamming space. However, some limitations remain to be addressed. Firstly, most existing CMH methods take graphs, which are always predefined separately in each modality, as input to model the distribution of the data. These methods neglect the correlation of graph structure among multiple modalities, and their cross-modal retrieval results rely highly on the quality of the predefined affinity graphs \cite{proceeding17, proceeding14, proceeding16}. Secondly, most existing CMH methods handle the preservation of intra- and inter-modal affinities separately when learning the binary codes, neglecting the fusion affinity among multi-modal data that contains complementary information \cite{proceeding14, proceeding15, proceeding16}. Thirdly, most existing CMH methods relax the discrete constraints when solving the optimization objective, which can significantly degrade the retrieval performance \cite{proceeding14, proceeding18, proceeding19}.
Besides, some works \cite{quan2016object, yan2020depth} try to adaptively learn the optimal graph affinity matrix from the data and the learning task. Graph Optimized-Flexible Manifold Ranking (GO-FMR) \cite{quan2016object} obtains the optimal affinity matrix via direct $L_2$ regression under the guidance of a human-established affinity matrix to infer the final predicted labels more accurately. In our model, we also learn an intrinsic anchor graph via direct $L_2$ regression, as in \cite{quan2016object}, based on the constructed anchor graph structure fusion matrix, but we simultaneously tune the structure of the intrinsic anchor graph so that the learned graph has exactly $C$ connected components; the vertices in each connected component can then be categorized into one cluster. In addition, GO-FMR needs $O(N^2)$ computation to learn the optimal affinity matrix ($N$ is the number of vertices in the graph), which becomes intractable for off-line learning as the amount of data increases. In contrast, the proposed method needs only $O(PN)$ computation to learn the optimal anchor affinity matrix of the corresponding graph ($P$ is the number of anchors in the graph, with $P \ll N$). As a result, both the storage space and the computational cost of learning the graph are reduced. \captionsetup{labelfont=it, textfont={bf,it}} \begin{figure*}[t] \centering \includegraphics[width=0.8\textwidth]{AGSFH_flowchart} \caption{The flowchart of AGSFH, illustrated with bi-modal data. Firstly, in part I, we construct anchor graphs for the image and text modalities and compute the anchor graph structure fusion matrix via the Hadamard product. Secondly, part II shows intrinsic anchor graph learning with graph clustering by best approximating the anchor graph structure fusion matrix; through this process, training instances are clustered in the semantic space. Thirdly, in part III, binary codes are learned from the intrinsic anchor graph by best approximating the Hamming similarity matrix to the intrinsic anchor graph, which guarantees that the binary codes preserve the semantic relationships of the training instances in the semantic space.} \label{figure1} \end{figure*} To address the above limitations, in this paper we propose the Anchor Graph Structure Fusion Hashing (AGSFH) method for effective and efficient large-scale information retrieval across modalities. The flowchart of the proposed model is shown in Fig. \ref{figure1}. By constructing the anchor graph structure fusion matrix from the different anchor graphs of multiple modalities via the Hadamard product, AGSFH can fully exploit the geometric property of the underlying data structure, which makes it robust to noise in the multi-modal relevance among data instances. The key idea of AGSFH is to directly preserve the fusion anchor affinity, which carries complementary information among multi-modal data, in the common binary Hamming space. At the same time, the structure of the intrinsic anchor graph is adaptively tuned by a well-designed objective function so that the number of connected components of the intrinsic graph is exactly equal to the number of clusters. Through this process, training instances are clustered in the semantic space, and the binary codes preserve their semantic relationships. Besides, a discrete optimization framework is designed to learn the unified binary codes across modalities.
Extensive experimental results on three public social datasets demonstrate the superiority of AGSFH in cross-modal retrieval. We highlight the main contributions of AGSFH below. \begin{enumerate}[1.] \item We develop an anchor graph structure fusion affinity for large-scale data that directly preserves the anchor fusion affinity, with its complementary information among multi-modal data, in the common binary Hamming space. This fusion affinity yields robustness to noise in the multi-modal relevance and a clear performance improvement in cross-modal information retrieval. \item The structure of the intrinsic anchor graph is learned by preserving similarity across modalities and adaptively tuning the clustering structure in a unified framework. The learned intrinsic anchor graph fully exploits the geometric property of the underlying data structure across multiple modalities and directly clusters the training instances in the semantic space. \item An alternating algorithm is designed to solve the optimization problem in AGSFH. With this algorithm, the binary codes are learned without relaxation, avoiding large quantization error. \item Extensive experimental results on three public social datasets demonstrate the superiority of AGSFH in cross-modal retrieval. \end{enumerate} \section{Related works} \label{Sec2} Cross-modal hashing has attracted much attention due to its utility and efficiency. Current CMH methods are mainly divided into two categories: unsupervised and supervised ones. Unsupervised CMH methods \cite{proceeding17, proceeding14, proceeding15, proceeding16, jour5, jour23} map heterogeneous multi-modal data instances into common compact binary codes by preserving the intra- and inter-modal relevance of the training data, and learn the hash functions during this process. Fusion Similarity Hashing (FSH) \cite{proceeding17} learns binary hash codes by preserving the fusion similarity of multiple modalities in an undirected asymmetric graph. Collaborative Subspace Graph Hashing (CSGH) \cite{proceeding14} constructs unified hash codes through a two-stage collaborative learning framework. Joint Coupled-Hashing Representation (JCHR) \cite{proceeding15} learns unified hash codes by embedding heterogeneous data into their corresponding binary spaces. Hypergraph-based Discrete Hashing (BGDH) \cite{proceeding16} obtains binary codes by learning the hypergraph and the binary codes simultaneously. Robust and Flexible Discrete Hashing (RFDH) \cite{jour5} directly produces hash codes via discrete matrix decomposition. Joint and Individual Matrix Factorization Hashing (JIMFH) \cite{jour23} learns unified hash codes with joint matrix factorization and individual hash codes with individual matrix factorization. Different from the unsupervised ones, CVH \cite{proceeding18}, SCM \cite{proceeding19}, SePH \cite{proceeding20}, FDCH \cite{proceeding21}, and ADCH \cite{proceeding22} are representative supervised cross-modal hashing methods. Cross-View Hashing (CVH) extends single-modal spectral hashing to multiple modalities and relaxes the minimization problem for learning the hash codes \cite{proceeding18}. Semantic Correlation Maximization (SCM) learns the hash functions by approximating the semantic affinity of label information on large-scale data \cite{proceeding19}. Semantics-Preserving Hashing (SePH) builds a probability distribution from the semantic similarity of the data and minimizes the Kullback-Leibler divergence to obtain the binary hash codes \cite{proceeding20}.
Fast Discrete Cross-modal Hashing (FDCH) learns the hash codes and hash functions by regressing class labels to binary codes \cite{proceeding21}. Asymmetric Discrete Cross-modal Hashing (ADCH) obtains common latent representations across the modalities with the collective matrix factorization technique to learn the hash codes, and constructs the hash functions with a series of binary classifiers \cite{proceeding22}. Besides, several deep CMH models have been developed with end-to-end architectures, owing to their strength in extracting semantic information from data points. Representative works are Deep Cross-Modal Hashing (DCMH) \cite{proceeding23}, Dual Deep Neural Networks Cross-Modal Hashing (DDCMH) \cite{proceeding24}, and Deep Binary Reconstruction (DBRC) \cite{jour14}. DCMH learns features and hash codes in the same framework with deep neural networks, one per modality, performing feature learning from scratch \cite{proceeding23}. DDCMH generates hash codes for the different modalities with two deep networks, making full use of inter-modal information to obtain high-quality binary codes \cite{proceeding24}. DBRC simultaneously learns the correlation across modalities and the binary hash codes, and proposes both linear and nonlinear scaling methods to generate efficient codes after training the network \cite{jour14}. Compared with traditional hashing methods, deep cross-modal models have shown outstanding performance. \section{The proposed method} In this section, we give a detailed introduction to the proposed AGSFH. \subsection{Notations and Definitions} AGSFH is an unsupervised hashing approach that achieves state-of-the-art performance in cross-modal semantic similarity search. There are $N$ instances $O = \left\{o_1, o_2, \ldots, o_N\right\}$ in the database set, and each instance $o_i = (x_i^1, x_i^2, \ldots, x_i^M)$ has $M$ feature vectors, one per modality. The database matrix $X^m=\left[x_1^m, x_2^m, \ldots , x_N^m \right]\in R^{d_m \times N}$ denotes the feature representations for the $m$th modality, where the feature vector $x_i^m$ is the $i$th column of $X^m$ and has dimension $d_m$. Besides, we assume that the training data set $O$ has $C$ clusters, i.e., $O$ has $C$ semantic categories. Given the training data set $O$, the proposed AGSFH aims to learn a set of hash functions $H^m(x^m)=\left\{h_1^m(x^m), h_2^m(x^m), \ldots, h_K^m(x^m)\right\}$ for the $m$th modality. At the same time, a common binary code matrix $B=\left[b_1, b_2, \ldots , b_N \right]\in \left\{-1, 1\right\}^{K \times N}$ is constructed, where the binary vector $b_i \in \left\{-1, 1\right\}^{K}$ is the $K$-bit code of instance $o_i$. For the $m$th modality, the hash function can be written as: \begin{eqnarray}\label{GSFH1} h_k^m(x^m)=sgn(f_k^m(x^m)), (k=1, 2, \ldots, K), \end{eqnarray} where $sgn(\cdot)$ is the sign function, which returns $1$ if $f_k^m(\cdot) > 0$ and $-1$ otherwise, and $f_k^m(\cdot)$ is a linear or non-linear mapping function for the data of the $m$th modality. For simplicity, we define our hash function for the $m$th modality as $H^m(x^m)=sgn(W_m^Tx^m)$.
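As a concrete illustration of this out-of-sample encoding rule, the following minimal Python sketch binarizes modality-$m$ features with a learned projection. The function and variable names are ours, and mapping $sgn(0)$ to $+1$ is an implementation choice of the sketch, not specified by the model.
\begin{verbatim}
import numpy as np

def encode(X_m, W_m):
    """H^m(x) = sgn(W_m^T x) applied column-wise.
    X_m: (d_m, N) features; W_m: (d_m, K) projection.
    Returns B in {-1, +1}^{K x N}."""
    B = np.sign(W_m.T @ X_m)
    B[B == 0] = 1  # break ties: sgn(0) mapped to +1
    return B
\end{verbatim}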
\subsection{Anchor Graph Structure Fusion Hashing} In AGSFH, we first construct anchor graphs for the multiple modalities and compute the anchor graph structure fusion matrix via the Hadamard product. Next, AGSFH jointly learns the intrinsic anchor graph and preserves the anchor fusion affinity in the common binary Hamming space. In intrinsic anchor graph learning, the structure of the intrinsic anchor graph is adaptively tuned by a well-designed objective function so that the number of connected components of the intrinsic graph is exactly equal to the number of clusters; through this process, training instances are clustered in the semantic space. Binary code learning based on the intrinsic anchor graph then guarantees that the binary codes preserve the semantic relationships of the training instances in the semantic space. \subsubsection{Anchor Graph Learning} Inspired by the idea of GSF \cite{jour15}, it is straightforward to define our anchor graph structure fusion similarity as follows. In general, building a $k$-nearest-neighbor ($k$-NN) graph over all $N$ points of the database requires $O(N^2)$ computation, and learning an intrinsic graph among all $N$ points also takes $O(N^2)$; with an increasing amount of data, this is intractable for off-line learning. To address this computational bottleneck, inspired by Anchor Graph Hashing (AGH) \cite{proceeding1}, we construct $k$-NN anchor graph affinity matrices $\hat{A}^{m} (m=1,2, \ldots, M)$ for the $M$ modalities respectively, and we further propose an intrinsic anchor graph learning strategy that learns the intrinsic anchor graph $\hat{S}$ of the intrinsic graph $S$. In particular, we are given an anchor set $T=\left\{t_1, t_2, \ldots, t_P\right\}$, where $t_i = (t_i^1, t_i^2, \ldots, t_i^M)$ is the $i$th anchor across the $M$ modalities; the anchors are randomly sampled from the original data set $O$ or obtained by running a clustering algorithm over $O$. The anchor matrix $T^m=\left[t_1^m, t_2^m, \ldots , t_P^m \right]\in R^{d_m \times P}$ denotes the feature representations of the anchors for the $m$th modality, and the feature vector $t_i^m$ is the $i$th anchor of $T^m$. The anchor graph $\hat{A}^{mT}=[\hat{a}_1^m, \hat{a}_2^m, \ldots, \hat{a}_N^m] \in R^{P \times N}$ between all $N$ data points and the $P$ anchors of the $m$th modality is computed as follows. Let $b_{i,j}=\left \| x_i^m - t_j^m\right \|_2^2$ denote the pairwise distance between data point $x_i^m$ and anchor $t_j^m$, where the anchors are sorted for each $x_i^m$ in ascending order of distance. Following the initial graph construction of GSF \cite{jour15}, we assign $k$ neighbor anchors to each data instance by Eq.(\ref{GSFH14}): \begin{equation}\label{GSFH14} \hat{a}_{ij}^{m\star}= \begin{cases} \frac{b_{i,k+1}-b_{i,j}}{kb_{i,k+1}-\sum_{j'=1}^k b_{i,j'}}, & j \leq k \\ 0, & otherwise. \end{cases} \end{equation} Since $P \ll N$, the anchor graph is much smaller than the traditional graph, so both the storage space and the computational cost of constructing the graph are reduced. In this way, we obtain the anchor graph affinity matrices $\hat{A}^{m} (m=1, 2, \ldots, M)$ of the $M$ modalities. Similar to \cite{jour15}, we use the Hadamard product to extract the intrinsic edges of the multiple anchor graphs, fusing the different anchor graph structures $\hat{A}^{m}$ into one anchor affinity graph $\hat{A}$ by \begin{eqnarray}\label{GSFH15} \hat{A} = \prod_{m=1}^{M} \hat{A}^{m}, \end{eqnarray} which further reduces the storage space and the computational cost of graph structure fusion.
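The following Python sketch illustrates the anchor graph construction of Eq.(\ref{GSFH14}) and the Hadamard-product fusion of Eq.(\ref{GSFH15}). It is a direct, unoptimized transcription for small data (the distance computation is dense), and the function names are ours.
\begin{verbatim}
import numpy as np

def anchor_graph(X, T, k):
    """k-NN anchor graph of Eq. (14). X: (d, N) data; T: (d, P) anchors.
    Returns A-hat of shape (N, P): unit row sums, k nonzeros per row."""
    d2 = ((X[:, :, None] - T[:, None, :]) ** 2).sum(axis=0)  # (N, P) squared dists
    order = np.argsort(d2, axis=1)          # anchors by ascending distance
    A = np.zeros_like(d2)
    for i in range(d2.shape[0]):
        nn = order[i, :k]
        b = d2[i, nn]                        # b_{i,1} <= ... <= b_{i,k}
        b_k1 = d2[i, order[i, k]]            # b_{i,k+1}
        A[i, nn] = (b_k1 - b) / max(k * b_k1 - b.sum(), 1e-12)
    return A

def fuse(graphs):
    """Structure fusion by the elementwise (Hadamard) product, Eq. (15)."""
    A = graphs[0].copy()
    for Am in graphs[1:]:
        A *= Am
    return A
\end{verbatim}
Note that the Hadamard product keeps only edges supported by every modality, which is exactly how the fused graph retains the intrinsic structure.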
Given the fused anchor affinity matrix $\hat{A}$, we learn an anchor similarity matrix $\hat{S}$ such that the corresponding graph $S \simeq \bar{S} = \hat{S}\hat{S}^T$ has exactly $C$ connected components, so that the vertices in each connected component of the graph can be categorized into one cluster. For the similarity matrix $S \simeq \bar{S}=\hat{S}\hat{S}^T \geq 0$, there is a classical result \cite{book2} about its Laplacian matrix $L$ \cite{jour17}: \newtheorem{thm}{\bf Theorem} \begin{thm}\label{thm1} The number $C$ of connected components of the graph $S$ is equal to the multiplicity of zero as an eigenvalue of its Laplacian matrix $L$. \end{thm} A proof of Theorem \ref{thm1} can be found in \cite{jour18, jour19}. As is well known, $L$ is positive semi-definite, so $L$ has $N$ non-negative eigenvalues $0 = \lambda_1 \leq \lambda_2 \leq \ldots \leq \lambda_N$. Theorem \ref{thm1} tells us that if the constraint $\sum_{c=1}^C \lambda_c =0$ is satisfied, the graph $S$ has an ideal neighbor assignment and the data instances are already clustered into $C$ clusters. According to Fan's theorem \cite{jour21}, we obtain the objective function \begin{eqnarray}\label{GSFH3} && \sum_{c=1}^C \lambda_c = \min_{U} \left \langle UU^T, L \right \rangle \nonumber \\ \mathrm{s.t.} && U \in R^{N \times C}, U^TU=I_C, \end{eqnarray} where $\left \langle \cdot \right \rangle$ denotes the Frobenius inner product of two matrices, $U^T = [u_1, u_2, \ldots, u_N]$, $L=D-\hat{S}\hat{S}^T$ is the Laplacian matrix, $I_C \in R^{C \times C}$ is an identity matrix, and $D$ is a diagonal matrix whose elements are the column sums of $\hat{S}\hat{S}^T$. Furthermore, for the intrinsic anchor graph, $\bar{S}$ can be normalized as $\tilde{S} = \hat{S}\Lambda^{-1}\hat{S}^T$, where $\Lambda= diag(\hat{S}^T1) \in R^{P \times P}$. The normalized intrinsic graph matrix $\tilde{S}$ has the key property of unit row and column sums. Hence, the graph Laplacian of the intrinsic anchor graph is $L=I-\tilde{S}$, and the required $C$ graph Laplacian eigenvectors $U$ in Eq.(\ref{GSFH3}) are also eigenvectors of $\tilde{S}$, associated with the eigenvalue $1$ (the eigenvalue $1$ of $\tilde{S}$ corresponds to the eigenvalue $0$ of $L$). One can easily verify that $\tilde{S}=\hat{S}\Lambda^{-1}\hat{S}^T$ has the same nonzero eigenvalues as $E=\Lambda^{-1/2}\hat{S}^T \hat{S}\Lambda^{-1/2}$, so $L=I-\tilde{S}$ has the same multiplicity of the eigenvalue $0$ as $\hat{L}=I-E$. Hence, analogously to Eq.(\ref{GSFH3}), we have the following objective function for anchor graph learning: \begin{eqnarray}\label{GSFH37} && \sum_{c=1}^C \lambda_c = \min_{V} \left \langle VV^T, \hat{L} \right \rangle \nonumber \\ \mathrm{s.t.} && V \in R^{P \times C}, V^TV=I_C, \end{eqnarray} where $\hat{S}^T = [\hat{s}_1, \hat{s}_2, \ldots, \hat{s}_N]$ and $V^T = [v_1, v_2, \ldots, v_P]$. Because the graph $\hat{A}$ contains the edges of the intrinsic structure, we require $\hat{S}$ to best approximate $\hat{A}$ and optimize the following objective function: \begin{eqnarray}\label{GSFH4} & & \max_{\hat{S}} \left \langle \hat{A}, \hat{S} \right \rangle \nonumber \\ \mathrm{s.t.} & & \forall j, \hat{s}_j \geq 0, 1^T\hat{s}_j=1, \end{eqnarray} where we constrain $1^T\hat{s}_j=1$ so that $\hat{S}$ has unit row sums. By combining Eq. (\ref{GSFH37}) with Eq.
(\ref{GSFH4}), we have \begin{eqnarray}\label{GSFH6} & & \min_{V,\hat{S}} \left \langle VV^T, \hat{L} \right \rangle - \gamma_1 \left \langle \hat{A}, \hat{S} \right \rangle + \gamma_2 \left \| \hat{S} \right \|_F^2 \nonumber \\ \mathrm{s.t.} & & V \in R^{P \times C}, V^TV=I_C, \nonumber \\ & & \forall j, \hat{s}_j \geq 0, 1^T\hat{s}_j=1, \end{eqnarray} where $\gamma_1$ is a weight-controller parameter and $\gamma_2$ is a regularization parameter. To avoid a trivial solution when optimizing the objective with respect to $\hat{s}_j$ in Eq.(\ref{GSFH6}), we add the $L_2$-norm regularization to smooth the elements of $\hat{S}$. We tune the structure of $\hat{S}$ adaptively so that the condition $\sum_{c=1}^C \lambda_c =0$ is achieved, which yields an anchor graph $\hat{S}$ with exactly $C$ connected components. As opposed to pre-computed affinity graphs, in Eq.(\ref{GSFH6}) the affinity of the adaptive anchor graph $\hat{S}$, i.e., $\hat{s}_{ij}$, is learned by modeling the fused anchor graph $\hat{A}$ from multiple modalities; the learning procedures of the multiple modalities are mutually beneficial and reciprocal. \subsubsection{The Proposed AGSFH Scheme} Ideally, if instances $o_i$ and $o_j$ are similar, the Hamming distance between their binary codes should be minimal, and vice versa. We achieve this by maximizing the agreement between the learned intrinsic similarity matrix $\bar{S}$ and the Hamming similarity matrix $H=B^TB$, which can be written as $\max_{B} \left \langle \bar{S}, B^TB \right \rangle=Tr(B\bar{S}B^T)=Tr(B\hat{S}\hat{S}^TB^T)$. We can then take $B_s = sgn(B\hat{S}) \in \left\{-1, +1\right\}^{K \times P}$ to be the binary anchor codes. To learn the binary codes, the objective function can be written as: \begin{eqnarray}\label{GSFH17} & & \max_{B,B_s} Tr(B\hat{S}B_s^T), \nonumber \\ \mathrm{s.t.} & & B \in \left\{-1, +1\right\}^{K \times N}, B_s \in \left\{-1, +1\right\}^{K \times P}. \end{eqnarray} Intuitively, we learn the $m$th-modality hash function $H^m(x^m)$ by minimizing the error between the binary codes and the linear hash function of Eq.(\ref{GSFH1}), i.e., $\left \| B- W_m^TX^{m} \right \|_F^2$. Such hash function learning can be easily integrated into the overall cross-modality similarity preserving, which is rewritten as: \begin{eqnarray}\label{GSFH18} & & \min_{B,B_s,W_{m}} -Tr(B\hat{S}B_s^T)+\lambda \sum_{m=1}^M \left \| B- W_{m}^TX^{m} \right \|_F^2, \nonumber \\ \mathrm{s.t.} & & B \in \left\{-1, +1\right\}^{K \times N}, B_s \in \left\{-1, +1\right\}^{K \times P}, \end{eqnarray} where $\lambda$ is a trade-off parameter that balances minimizing the binary quantization error and maximizing the approximation. Therefore, by combining Eq.(\ref{GSFH6}) with Eq.(\ref{GSFH18}), we have the overall objective: \begin{eqnarray}\label{GSFH19} & & \min_{V,\hat{S},B,B_s,W_{m}} Tr(V^T\hat{L}V) - \gamma_1 Tr(\hat{A}^T \hat{S}) + \gamma_2 \left \| \hat{S} \right \|_F^2 \nonumber \\ && -\gamma_3 Tr(B\hat{S}B_s^T)+\lambda\sum_{m=1}^M \left \| B- W_{m}^TX^{m} \right \|_F^2 \nonumber \\ \mathrm{s.t.} & & V \in R^{P \times C}, V^TV=I_C, \nonumber \\ & & \forall j, \hat{s}_j \geq 0, 1^T\hat{s}_j=1, \nonumber\\ & & B \in \left\{-1, +1\right\}^{K \times N}, B_s \in \left\{-1, +1\right\}^{K \times P}, \end{eqnarray} where $\gamma_3$ is a weight-controller parameter, $\hat{L}=I-\Lambda^{-1/2}\hat{S}^T \hat{S}\Lambda^{-1/2}$, and $\Lambda= diag(\hat{S}^T1)$.
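Since $\hat{L}=I-E$ with $E=\Lambda^{-1/2}\hat{S}^T\hat{S}\Lambda^{-1/2}$ of size $P \times P$, the eigenvectors $V$ required in Eq.(\ref{GSFH37}) (and in the $V$ step of the algorithm below) can be computed at a cost independent of $N$. A minimal Python sketch of this computation, assuming $\hat{S}$ is stored as an $N \times P$ array and with names of our choosing:
\begin{verbatim}
import numpy as np

def update_V(S_hat, C):
    """C eigenvectors of L-hat = I - E with the smallest eigenvalues,
    obtained from the C largest eigenvalues of the P x P matrix E."""
    lam = np.maximum(S_hat.sum(axis=0), 1e-12)  # Lambda = diag(S_hat^T 1)
    inv_sqrt = 1.0 / np.sqrt(lam)
    E = inv_sqrt[:, None] * (S_hat.T @ S_hat) * inv_sqrt[None, :]
    w, U = np.linalg.eigh(E)                    # eigenvalues in ascending order
    return U[:, -C:]                            # largest C of E = smallest C of I - E
\end{verbatim}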
\subsection{Algorithms Design} The objective in Eq.(\ref{GSFH19}) is a mixed binary program and is non-convex in the variables $V$, $\hat{S}$, $B$, $B_s$, and $W_{m}$ jointly. To solve it, we develop an alternating optimization framework in which one variable is optimized at each step while the others are kept fixed. The details of the alternating scheme are as follows. \begin{enumerate}[1.] \setlength{\listparindent}{2em} \item $\textbf{$\hat{S}$ step}$. \par\setlength\parindent{2em} By fixing $V$, $\Lambda$, $B$, $B_s$, and $W_{m}$, problem (\ref{GSFH19}) becomes: \begin{eqnarray}\label{GSFH20} && \min_{\hat{S}} Tr(V^T\hat{L}V) - \gamma_1 Tr(\hat{A}^T \hat{S}) + \gamma_2 \left \| \hat{S} \right \|_F^2 \nonumber \\ && -\gamma_3 Tr(B\hat{S}B_s^T) \nonumber \\ \mathrm{s.t.} & & \forall j, \hat{s}_j \geq 0, 1^T\hat{s}_j=1. \end{eqnarray} Note that problem Eq.(\ref{GSFH20}) is independent across the different columns $j$, so we have \begin{eqnarray}\label{GSFH21} && \min_{\hat{s}_j} f(\hat{s}_j) = \hat{s}_j^T (\tilde{V}\tilde{V}^T + \gamma_2 I) \hat{s}_j \nonumber \\ && - (\gamma_1 \hat{a}_j^T + \gamma_3 b_j^T B_s) \hat{s}_j, \nonumber \\ \mathrm{s.t.} & & \hat{s}_j \geq 0, 1^T\hat{s}_j=1, \end{eqnarray} where $\tilde{V}=\Lambda^{-1/2}V$. The constraint set of problem (\ref{GSFH21}) is a simplex, which leads to sparse solutions $\hat{s}_j$ (because $\left \| \hat{s}_j \right \|_1 = 1^T\hat{s}_j=1$) and has had empirical success in various applications. To solve Eq.(\ref{GSFH21}) for large $P$, it is most appropriate to apply first-order methods; in this paper, we use Nesterov's accelerated projected gradient method to optimize Eq.(\ref{GSFH21}). We present the details of the optimization as follows. The objective function (\ref{GSFH21}) is convex, its gradient is Lipschitz continuous, and the Lipschitz constant is $Lp=2 \left \| \tilde{V}\tilde{V}^T + \gamma_2 I \right \|_2$ (i.e., the largest singular value of $2(\tilde{V}\tilde{V}^T + \gamma_2 I)$); the detailed proofs of these results are given in Theorem \ref{thm2} and Theorem \ref{thm3}, respectively. According to these results, problem (\ref{GSFH21}) can be efficiently solved by Nesterov's optimal gradient method (OGM) \cite{jour22}. \begin{thm}\label{thm2} The objective function $f(\hat{s}_j)$ is convex.
\end{thm} \begin{proof} Given any two vectors $\hat{s}_j^1, \hat{s}_j^2 \in R^{P \times 1}$ and a number $\mu \in (0,1)$, we have \begin{eqnarray}\label{GSFH44} && f(\mu \hat{s}_j^1 + (1-\mu)\hat{s}_j^2) - (\mu f(\hat{s}_j^1) + (1-\mu)f(\hat{s}_j^2)) \nonumber \\ && = (\mu \hat{s}_j^1 + (1-\mu)\hat{s}_j^2)^T (\tilde{V}\tilde{V}^T + \gamma_2 I) (\mu \hat{s}_j^1 + (1-\mu)\hat{s}_j^2) \nonumber \\ && - (\gamma_1 \hat{a}_j^T + \gamma_3 b_j^T B_s) (\mu \hat{s}_j^1 + (1-\mu)\hat{s}_j^2) \nonumber \\ && - \mu (\hat{s}_j^{1T} (\tilde{V}\tilde{V}^T + \gamma_2 I) \hat{s}_j^1 - (\gamma_1 \hat{a}_j^T + \gamma_3 b_j^T B_s) \hat{s}_j^1) \nonumber \\ && - (1-\mu) (\hat{s}_j^{2T} (\tilde{V}\tilde{V}^T + \gamma_2 I) \hat{s}_j^2 \nonumber \\ && - (\gamma_1 \hat{a}_j^T + \gamma_3 b_j^T B_s) \hat{s}_j^2). \end{eqnarray} By some algebra, (\ref{GSFH44}) is equivalent to \begin{eqnarray}\label{GSFH38} && f(\mu \hat{s}_j^1 + (1-\mu)\hat{s}_j^2) - (\mu f(\hat{s}_j^1) + (1-\mu)f(\hat{s}_j^2)) \nonumber \\ && = \mu(\mu-1)(\hat{s}_j^1 - \hat{s}_j^2)^T (\tilde{V}\tilde{V}^T + \gamma_2 I)(\hat{s}_j^1 - \hat{s}_j^2) \nonumber \\ && = \mu(\mu-1)(\left \| \tilde{V}^T (\hat{s}_j^1 - \hat{s}_j^2) \right \|_F^2 + \gamma_2\left \|\hat{s}_j^1 - \hat{s}_j^2\right \|_F^2 ) \nonumber \\ && \leq 0. \end{eqnarray} Therefore, we have \begin{eqnarray}\label{GSFH39} && f(\mu \hat{s}_j^1 + (1-\mu)\hat{s}_j^2) \leq \mu f(\hat{s}_j^1) + (1-\mu)f(\hat{s}_j^2). \end{eqnarray} By the definition of a convex function, $f(\hat{s}_j)$ is convex. This completes the proof. \end{proof} \begin{thm}\label{thm3} The gradient of the objective function $f(\hat{s}_j)$ is Lipschitz continuous, and the Lipschitz constant is $Lp=2 \left \| \tilde{V}\tilde{V}^T + \gamma_2 I \right \|_2$ (i.e., the largest singular value of $2(\tilde{V}\tilde{V}^T + \gamma_2 I)$). \end{thm} \begin{proof} According to (\ref{GSFH21}), the gradient of $f(\hat{s}_j)$ is \begin{eqnarray}\label{GSFH40} && \nabla f(\hat{s}_j) = 2(\tilde{V}\tilde{V}^T + \gamma_2 I) \hat{s}_j - (\gamma_1 \hat{a}_j + \gamma_3 B_s^T b_j). \end{eqnarray} For any two vectors $\hat{s}_j^1, \hat{s}_j^2 \in R^{P \times 1}$, we have \begin{eqnarray}\label{GSFH41} && \left \| \nabla f(\hat{s}_j^1) - \nabla f(\hat{s}_j^2) \right \|_F^2 \nonumber \\ && = \left \| 2(\tilde{V}\tilde{V}^T + \gamma_2 I)(\hat{s}_j^1 - \hat{s}_j^2) \right \|_F^2 \nonumber \\ && = Tr((U\Sigma U^T(\hat{s}_j^1 - \hat{s}_j^2))^T(U\Sigma U^T(\hat{s}_j^1 - \hat{s}_j^2))), \end{eqnarray} where $U\Sigma U^T$ is the SVD of $2(\tilde{V}\tilde{V}^T + \gamma_2 I)$ and the singular values $\left \{ \sigma_1, \ldots, \sigma_u \right \}$ are listed in descending order. By some algebra, (\ref{GSFH41}) is equivalent to \begin{eqnarray}\label{GSFH42} && \left \| \nabla f(\hat{s}_j^1) - \nabla f(\hat{s}_j^2) \right \|_F^2 \nonumber \\ && = Tr(U^T(\hat{s}_j^1 - \hat{s}_j^2)(\hat{s}_j^1 - \hat{s}_j^2)^T U\Sigma^2) \nonumber \\ && \leq \sigma_1^2 Tr(U^T(\hat{s}_j^1 - \hat{s}_j^2)(\hat{s}_j^1 - \hat{s}_j^2)^T U) \nonumber \\ && = \sigma_1^2 \left \| \hat{s}_j^1 - \hat{s}_j^2 \right \|_F^2, \end{eqnarray} where $\sigma_1$ is the largest singular value; the last two equalities use the facts that $U^TU=I_u$ and $UU^T=I_P$.
From (\ref{GSFH42}), we have \begin{eqnarray}\label{GSFH43} && \left \| \nabla f(\hat{s}_j^1) - \nabla f(\hat{s}_j^2) \right \|_F \nonumber \\ && \leq Lp \left \| \hat{s}_j^1 - \hat{s}_j^2 \right \|_F. \end{eqnarray} Therefore, $\nabla f(\hat{s}_j)$ is Lipschitz continuous, and the Lipschitz constant is the largest singular value of $2(\tilde{V}\tilde{V}^T + \gamma_2 I)$, i.e., $Lp=\left \| 2(\tilde{V}\tilde{V}^T + \gamma_2 I) \right \|_2 = 2 \left \| \tilde{V}\tilde{V}^T + \gamma_2 I\right \|_2$. This completes the proof. \end{proof} In particular, we construct two sequences, $\hat{s}_j^t$ and $z_j^t$, and update them alternately in each iteration round. For notational convenience, we use $\mathcal{C}$ to denote the constraint set of Eq.(\ref{GSFH21}). At iteration $t$, the two sequences are \begin{eqnarray}\label{GSFH22} &\hat{s}_j^t = \arg \min_{\hat{s}_j \in \mathcal{C}} \phi (\hat{s}_j, z_j^{t-1}) = f(z_j^{t-1}) \nonumber \\ & + (\hat{s}_j-z_j^{t-1})^T \nabla f(z_j^{t-1}) + \frac{Lp}{2} \left \| \hat{s}_j-z_j^{t-1} \right \|_2^2 , \end{eqnarray} and \begin{eqnarray}\label{GSFH23} z_j^t = \hat{s}_j^t + \frac{c_t - 1}{c_{t+1}} (\hat{s}_j^t - \hat{s}_j^{t-1}), \end{eqnarray} where $\phi (\hat{s}_j, z_j^{t-1})$ is the proximal function of $f(\hat{s}_j)$ at $z_j^{t-1}$, $\hat{s}_j^t$ is the approximate solution obtained by minimizing the proximal function over $\hat{s}_j$, and $z_j^t$ is the search point constructed by linearly combining the latest two approximate solutions, i.e., $\hat{s}_j^t$ and $\hat{s}_j^{t-1}$. Following \cite{jour22}, the combination coefficient is updated in each iteration round as \begin{eqnarray}\label{GSFH24} c_{t+1} = \frac{1+\sqrt{4c_t^2+1}}{2}. \end{eqnarray} Dropping the terms independent of $\hat{s}_j$, we can write the objective function in Eq.(\ref{GSFH22}) in a more compact form: \begin{eqnarray}\label{GSFH25} \hat{s}_j^t = \arg \min_{\hat{s}_j \in \mathcal{C}} \frac{Lp}{2} \left \| \hat{s}_j-(z_j^{t-1}-\frac{1}{Lp} \nabla f(z_j^{t-1})) \right \|_2^2 . \end{eqnarray} Eq.(\ref{GSFH25}) is a Euclidean projection problem on the simplex, of the same form as Eq.(\ref{GSFH7}). According to the Karush-Kuhn-Tucker conditions \cite{book1}, it can be verified that the optimal solution $\hat{s}_j^{t}$ is \begin{eqnarray}\label{GSFH26} \hat{s}_j^{t}=(z_j^{t-1}-\frac{1}{Lp} \nabla f(z_j^{t-1})+\eta1)_+, \end{eqnarray} where the multiplier $\eta$ is chosen such that $1^T\hat{s}_j^{t}=1$. By alternately updating $\hat{s}_j^t$, $z_j^t$, and $c_{t+1}$ with (\ref{GSFH22}), (\ref{GSFH23}), and (\ref{GSFH24}) until convergence, the optimal solution is obtained. Note that recent results \cite{jour20, jour22} show that gradient-based methods with smooth optimization can achieve the optimal convergence rate $O(\frac{1}{t^2})$, where $t$ is the number of iterations. Here the convergence criterion is that the relative change of $\left \| \hat{s}_j \right \|_2$ falls below $10^{-4}$. We initialize $\hat{s}_j^0$ by solving Eq.(\ref{GSFH21}) without the constraints: taking the partial derivative of the objective (\ref{GSFH21}) with respect to $\hat{s}_j$ and setting it to zero yields the closed-form solution \begin{eqnarray}\label{GSFH27} \hat{s}_j^{0}= \frac{1}{2}(\tilde{V}\tilde{V}^T + \gamma_2 I)^{-1}(\gamma_1 \hat{a}_j + \gamma_3 B_s^T b_j). \end{eqnarray} The full OGM algorithm is summarized in Algorithm \ref{alg:OGM}.
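For reference, the following Python sketch implements the projection of Eqs.(\ref{GSFH25})-(\ref{GSFH26}) with the standard sorting-based simplex-projection routine (where $\eta$ is computed exactly) and the OGM iteration of Eqs.(\ref{GSFH22})-(\ref{GSFH24}) for a single column $\hat{s}_j$. It is our illustrative transcription under the notation above, not released code.
\begin{verbatim}
import numpy as np

def project_simplex(v):
    """Project v onto {s : s >= 0, 1^T s = 1}; eta below plays the role of
    the KKT multiplier in Eq. (26), found exactly by the sorting method."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    j = np.arange(1, v.size + 1)
    rho = np.nonzero(u + (1.0 - css) / j > 0)[0][-1]
    eta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + eta, 0.0)

def ogm_column(Vt, a_j, Bsb_j, g1, g2, g3, max_iter=100, tol=1e-4):
    """Nesterov's OGM (Eqs. (21)-(27)) for one column s_j of S-hat.
    Vt = Lambda^{-1/2} V is (P, C); a_j and Bsb_j = B_s^T b_j are (P,)."""
    P = Vt.shape[0]
    A = Vt @ Vt.T + g2 * np.eye(P)              # quadratic part of f
    lin = g1 * a_j + g3 * Bsb_j                 # linear part of f
    Lp = 2.0 * np.linalg.norm(A, 2)             # Lipschitz constant (Thm. 3)
    s = project_simplex(0.5 * np.linalg.solve(A, lin))   # init, Eq. (27)
    z, s_prev, c = s.copy(), s.copy(), 1.0
    for _ in range(max_iter):
        grad = 2.0 * A @ z - lin                # Eq. (40)
        s = project_simplex(z - grad / Lp)      # Eqs. (25)-(26)
        c_next = 0.5 * (1.0 + np.sqrt(4.0 * c * c + 1.0))  # Eq. (24)
        z = s + ((c - 1.0) / c_next) * (s - s_prev)        # Eq. (23)
        # stop when the relative change of s_j is small, as in the text
        if np.linalg.norm(s - s_prev) <= tol * max(np.linalg.norm(s_prev), 1.0):
            break
        s_prev, c = s, c_next
    return s
\end{verbatim}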
\begin{algorithm}[h] \caption{Optimal Gradient Method (OGM).} \label{alg:OGM} \begin{algorithmic}[1] \Require $\tilde{V}$, $B$, $B_s$, $\hat{A}$, $\gamma_1$, $\gamma_2$, $\gamma_3$; maximum iteration number $T_{OGM}$. \Ensure $\hat{S}$. \State Initialize $j=1$. \Repeat \State Initialize $\hat{s}_j^0$ by Eq.(\ref{GSFH27}), $z_j^0 = \hat{s}_j^0$, $t=1$, $c_1=1$. \Repeat \State $\hat{s}_j^{t}=(z_j^{t-1}-\frac{1}{Lp} \nabla f(z_j^{t-1})+\eta1)_+$. \State $c_{t+1} = \frac{1+\sqrt{4c_t^2+1}}{2}$. \State $z_j^t = \hat{s}_j^t + \frac{c_t - 1}{c_{t+1}} (\hat{s}_j^t - \hat{s}_j^{t-1})$. \Until the convergence criterion is satisfied or the maximum iteration number is reached. \State $\hat{s}_j=\hat{s}_j^{t}$. \State $j=j+1$. \Until $j > N$. \State $\hat{S}^T = [\hat{s}_1, \hat{s}_2, \ldots, \hat{s}_N]$. \end{algorithmic} \end{algorithm} \item $\textbf{$\Lambda$ step}$. By fixing $V$, $\hat{S}$, $B$, $B_s$, and $W_{m}$, $\Lambda$ is easily obtained as \begin{eqnarray}\label{GSFH28} \Lambda= diag(\hat{S}^T1). \end{eqnarray} \item $\textbf{$V$ step}$. By fixing $\hat{S}$, $\Lambda$, $B$, $B_s$, and $W_{m}$, Eq.(\ref{GSFH19}) becomes \begin{eqnarray}\label{GSFH29} && \min_{V} Tr(V^T \hat{L} V) \nonumber \\ \mathrm{s.t.} & & V \in R^{P \times C}, V^TV=I_C. \end{eqnarray} The optimal $V$ for Eq.(\ref{GSFH29}) is formed by the $C$ eigenvectors corresponding to the $C$ smallest eigenvalues of the normalized Laplacian matrix $\hat{L}$. \item $\textbf{$B$ step}$. \label{sub1} By fixing $V$, $\hat{S}$, $\Lambda$, $B_s$, and $W_{m}$, the corresponding subproblem is: \begin{eqnarray}\label{GSFH30} & & \min_{B} -\gamma_3 Tr(B\hat{S}B_s^T)+\lambda\sum_{m=1}^M \left \| B- W_{m}^TX^{m} \right \|_F^2 \nonumber \\ \mathrm{s.t.} & & B \in \left\{-1, +1\right\}^{K \times N}. \end{eqnarray} Since $\left \| B \right \|_F^2$ is constant for binary $B$, this subproblem can be expanded into: \begin{eqnarray}\label{GSFH31} & & \min_{B} - Tr(B(\gamma_3 \hat{S}B_s^T + 2 \lambda \sum_{m=1}^M X^{mT}W_{m} )) \nonumber \\ \mathrm{s.t.} & & B \in \left\{-1, +1\right\}^{K \times N}, \end{eqnarray} which is solved by the following closed-form update: \begin{eqnarray}\label{GSFH32} B = sgn(\gamma_3 B_s \hat{S}^T + 2 \lambda \sum_{m=1}^M W_{m}^T X^{m}). \end{eqnarray} \item $\textbf{$B_s$ step}$. By fixing $V$, $\hat{S}$, $\Lambda$, $B$, and $W_{m}$, the update of $B_s$ follows from: \begin{eqnarray}\label{GSFH33} & & \min_{B_s} - \gamma_3 Tr(B\hat{S}B_s^T) \nonumber \\ \mathrm{s.t.} & & B_s \in \left\{-1, +1\right\}^{K \times P}. \end{eqnarray} With the same scheme used for subproblem \ref{sub1}, this subproblem is solved as follows: \begin{eqnarray}\label{GSFH34} B_s = sgn(B\hat{S}), \end{eqnarray} which is consistent with the definition of the binary anchors given before Eq.(\ref{GSFH17}). \item $\textbf{$W_{m}$ step}$. By fixing $V$, $\hat{S}$, $\Lambda$, $B$, and $B_s$, this subproblem finds the best mapping coefficients $W_{m}$ by minimizing $\left \| B- W_{m}^TX^{m} \right \|_F^2 $, a standard linear regression. Therefore, we update $W_{m}$ as: \begin{eqnarray}\label{GSFH35} W_{m} = (X^{m}X^{mT})^{-1}X^{m}B^T. \end{eqnarray} \end{enumerate} The full AGSFH algorithm is summarized in Algorithm \ref{alg:GSFH}. \begin{algorithm}[h] \caption{Anchor Graph Structure Fusion Hashing (AGSFH).} \label{alg:GSFH} \begin{algorithmic}[1] \Require feature matrices $X^{m} (m=1,2,\ldots,M)$; code length $K$, the number of anchor points $P$, the cluster number $C$, the number of neighbor points $k$, maximum iteration number $T_{iter}$; parameters $\gamma_1$, $\gamma_2$, $\gamma_3$, $\lambda$.
\Ensure The hash codes $B$ for the training instances $O$ and the projection coefficient matrices $W_{m} (m=1,2,\ldots,M)$. \State Uniformly at random, select $P$ sample pairs from the training instances as the anchors $T$. \State Construct the anchor graphs $\hat{A}^{m} (m=1,2,\ldots,M)$ from the data matrices $X^{m}$ and the anchor matrices $T^{m}$ by a $k$-NN graph algorithm. \State Calculate the anchor graph structure fusion matrix $\hat{A}$ by Eq.(\ref{GSFH15}). \State Initialize $V$ with the $C$ eigenvectors corresponding to the $C$ smallest eigenvalues of the Laplacian matrix $\hat{L}=I-D^{-1/2}\hat{A}^T \hat{A}D^{-1/2}$, where $D= diag(\hat{A}^T1)$. \State Initialize $\Lambda = I_P$. \State Initialize $W_{m} (m=1,2,\ldots,M)$ randomly. \State Initialize the hash codes $B$ and $B_s$ randomly, such that $-1$ and $1$ are balanced in each bit. \Repeat \State Update $\hat{S}$ by Algorithm \ref{alg:OGM}. \State Update $\Lambda$ by Eq.(\ref{GSFH28}). \State Update $V$ by Eq.(\ref{GSFH29}), i.e., $V$ is formed by the $C$ eigenvectors with the $C$ smallest eigenvalues of $\hat{L}=I-\Lambda^{-1/2}\hat{S}^T \hat{S}\Lambda^{-1/2}$. \State Update $B$ by Eq.(\ref{GSFH32}). \State Update $B_s$ by Eq.(\ref{GSFH34}). \State Update $W_{m} (m=1,2,\ldots,M)$ by Eq.(\ref{GSFH35}). \Until the objective function of Eq.(\ref{GSFH19}) converges or the maximum iteration number is reached. \end{algorithmic} \end{algorithm} \subsection{Convergence Analysis} The original problem Eq.(\ref{GSFH19}) is not jointly convex in $\hat{S}$, $V$, $B$, $B_s$, and $W_{m}$, so we cannot expect a global solution. We divide the original problem into six subproblems, i.e., Eqs.(\ref{GSFH21}), (\ref{GSFH28}), (\ref{GSFH29}), (\ref{GSFH30}), (\ref{GSFH33}), and (\ref{GSFH35}). Eq.(\ref{GSFH21}) is a convex constrained quadratic minimization, Eq.(\ref{GSFH28}) is a linear equation, $\hat{L}$ in Eq.(\ref{GSFH29}) is positive semi-definite, Eqs.(\ref{GSFH30}) and (\ref{GSFH33}) are linear objectives under binary constraints that admit closed-form optimal solutions, and Eq.(\ref{GSFH35}) is a convex quadratic minimization. Each subproblem is therefore solved optimally at its step, so the objective value is non-increasing; since the six subproblems are solved alternately, AGSFH converges to a local solution. Section \ref{Convergences} shows the convergence curves. \subsection{Computational Analysis} The complexity of the proposed AGSFH mainly consists of six parts: 1) updating the intrinsic anchor graph $\hat{S}$, 2) updating $\Lambda$, 3) calculating the $C$ eigenvectors of $\hat{L}$, 4) updating the hash codes $B$, 5) updating the anchor hash codes $B_s$, and 6) updating the projection coefficients $W_{m} (m=1,2,\ldots,M)$. These six parts are repeated until the convergence condition is met, and they take $O (T_{OGM}(P+CP+KP+P^2+CP^2+P^3)N)$, $O (NP)$, $O (CP^2+P^2N)$, $O (K(P+1+\sum_{m=1}^M d_m)N)$, $O (KPN)$, and $O (\sum_{m=1}^M (d_m^2+Kd_m)N+\sum_{m=1}^M (d_m^2+d_m^3))$, respectively. Thus, the total complexity of Eq.(\ref{GSFH19}) is \begin{eqnarray}\label{GSFH36} &O (T_{iter}(T_{OGM}(P+CP+KP+P^2+CP^2+P^3) \nonumber \\ &+K(P+1+\sum_{m=1}^M d_m)+\sum_{m=1}^M (d_m^2+Kd_m) \nonumber \\ &+P+P^2+KP)N), \end{eqnarray} where $T_{iter}$ is the total number of iterations. Hence the training time of AGSFH is linear in the number of training samples.
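Putting the pieces together, one possible NumPy transcription of Algorithm \ref{alg:GSFH} is sketched below, reusing the helper functions from the earlier sketches (anchor\_graph, fuse, update\_V, ogm\_column). The small ridge added in the $W_m$ step is a numerical-stability device of ours and is not part of Eq.(\ref{GSFH35}); initialization details are likewise simplified.
\begin{verbatim}
import numpy as np

def agsfh_train(X_list, K, P, C, k, g1, g2, g3, lam, iters=20, seed=0):
    """Illustrative sketch of Algorithm 2; relies on anchor_graph, fuse,
    update_V and ogm_column defined in the earlier sketches."""
    rng = np.random.default_rng(seed)
    N = X_list[0].shape[1]
    idx = rng.choice(N, size=P, replace=False)           # anchor sampling
    A_hat = fuse([anchor_graph(X, X[:, idx], k) for X in X_list])  # Eq. (15)
    V = update_V(A_hat, C)                               # init V from fused graph
    lam_diag = np.ones(P)                                # Lambda = I_P
    B = np.sign(rng.standard_normal((K, N)))             # roughly balanced codes
    Bs = np.sign(rng.standard_normal((K, P)))
    W_list = [0.01 * rng.standard_normal((X.shape[0], K)) for X in X_list]
    for _ in range(iters):
        Vt = V / np.sqrt(lam_diag)[:, None]              # Lambda^{-1/2} V
        S_hat = np.stack([ogm_column(Vt, A_hat[j], Bs.T @ B[:, j], g1, g2, g3)
                          for j in range(N)])            # S-hat step (Algorithm 1)
        lam_diag = np.maximum(S_hat.sum(axis=0), 1e-12)  # Lambda step, Eq. (28)
        V = update_V(S_hat, C)                           # V step, Eq. (29)
        B = np.sign(g3 * Bs @ S_hat.T + 2.0 * lam *
                    sum(W.T @ X for W, X in zip(W_list, X_list)))  # Eq. (32)
        B[B == 0] = 1
        Bs = np.sign(B @ S_hat)                          # Eq. (34)
        Bs[Bs == 0] = 1
        W_list = [np.linalg.solve(X @ X.T + 1e-6 * np.eye(X.shape[0]), X @ B.T)
                  for X in X_list]                       # Eq. (35) + small ridge
    return B, Bs, W_list
\end{verbatim}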
\section{Experiments} \renewcommand{\arraystretch}{1.0} \begin{table}[htb] \centering \footnotesize \setlength{\belowcaptionskip}{10pt} \caption{Comparison of MAP with Two Cross-modal Retrieval Tasks on Wiki Benchmark.} \label {Table.1} \resizebox{0.45\textwidth}{!}{ \begin{tabular}{ccccccc} \hline \multirow{2}{*}{Tasks}& \multirow{2}{*}{Methods} & \multicolumn{4}{c}{Wiki}\cr\cline{3-6} &&16 bits&32 bits&64 bits&128 bits\cr \hline \multirow{6}{*}{I$\rightarrow$T} &CSGH &0.2065 &0.2131 &0.1985 &0.1983 \cr &BGDH &0.1815 &0.1717 &0.1717 &0.1717 \cr &FSH &0.2426 &0.2609 &0.2622 &{\bf 0.2710} \cr &RFDH &0.2443 &0.2455 &0.2595 &0.2616 \cr &JIMFH &0.2384 &0.2501 &0.2472 &0.2542 \cr &{\bf AGSFH} &{\bf 0.2548} &{\bf 0.2681} &{\bf 0.2640} &0.2680 \cr \hline \multirow{6}{*}{T$\rightarrow$I} &CSGH &0.2130 &0.2389 &0.2357 &0.2380 \cr &BGDH &0.1912 &0.1941 &0.2129 &0.2129 \cr &FSH &0.4150 &0.4359 &0.4753 &0.4956 \cr &RFDH &0.4185 &0.4438 &0.4633 &0.4922 \cr &JIMFH &0.3653 &0.4091 &0.4270 &0.4456 \cr &{\bf AGSFH} &{\bf 0.5782} &{\bf 0.6005} &{\bf 0.6175} &{\bf 0.6214} \cr \hline \end{tabular}} \end{table} \renewcommand{\arraystretch}{1.0} \begin{table}[htb] \centering \footnotesize \setlength{\belowcaptionskip}{10pt} \caption{Comparison of MAP with Two Cross-modal Retrieval Tasks on MIRFlickr25K Benchmark.} \label {Table.2} \resizebox{0.45\textwidth}{!}{ \begin{tabular}{ccccccc} \hline \multirow{2}{*}{Tasks}& \multirow{2}{*}{Method} & \multicolumn{4}{c}{MIRFlickr25K}\cr\cline{3-6} &&16 bits&32 bits&64 bits&128 bits\cr \hline \multirow{6}{*}{I$\rightarrow$T} &CSGH &0.5240 &0.5238 &0.5238 &0.5238 \cr &BGDH &0.5244 &0.5248 &0.5248 &0.5244 \cr &FSH &0.6347 &0.6609 &0.6630 &0.6708 \cr &RFDH &0.6525 &0.6601 &0.6659 &0.6659 \cr &JIMFH &{\bf 0.6563} &{\bf 0.6703} &0.6737 &0.6813 \cr &{\bf AGSFH} &0.6509 &0.6650 &{\bf 0.6777} &{\bf 0.6828} \cr \hline \multirow{6}{*}{T$\rightarrow$I} &CSGH &0.5383 &0.5381 &0.5382 &0.5379 \cr &BGDH &0.5360 &0.5360 &0.5360 &0.5360 \cr &FSH &0.6229 &0.6432 &0.6505 &0.6532 \cr &RFDH &0.6389 &0.6405 &0.6417 &0.6438 \cr &JIMFH &0.6432 &0.6570 &0.6605 &0.6653 \cr &{\bf AGSFH} &{\bf 0.6565} &{\bf 0.6862} &{\bf 0.7209} &{\bf 0.7505} \cr \hline \end{tabular}} \end{table} \renewcommand{\arraystretch}{1.0} \begin{table}[htb] \centering \footnotesize \setlength{\belowcaptionskip}{10pt} \caption{Comparison of MAP with Two Cross-modal Retrieval Tasks on NUS-WIDE Benchmark.} \label {Table.3} \resizebox{0.45\textwidth}{!}{ \begin{tabular}{ccccccc} \hline \multirow{2}{*}{Tasks}& \multirow{2}{*}{Method} & \multicolumn{4}{c}{NUS-WIDE}\cr\cline{3-6} &&16 bits&32 bits&64 bits&128 bits\cr \hline \multirow{6}{*}{I$\rightarrow$T} &CSGH &0.4181 &0.4550 &0.4551 &0.4652 \cr &BGDH &0.4056 &0.4056 &0.4056 &0.4056 \cr &FSH &{\bf 0.5021} &0.5200 &0.5398 &{\bf 0.5453} \cr &RFDH &0.4701 &0.4699 &0.4611 &0.4772 \cr &JIMFH &0.4952 &{\bf 0.5334} &0.5223 &0.5334 \cr &{\bf AGSFH} &0.4856 &0.5189 &{\bf 0.5401} &0.5433 \cr \hline \multirow{6}{*}{T$\rightarrow$I} &CSGH &0.4505 &0.5132 &0.5201 &0.5121 \cr &BGDH &0.3856 &0.3851 &0.3851 &0.3850 \cr &FSH &0.4743 &0.4953 &0.5114 &0.5327 \cr &RFDH &0.4701 &0.4713 &0.4651 &0.4626 \cr &JIMFH &0.4613 &0.4757 &0.5107 &0.5179 \cr &{\bf AGSFH} &{\bf 0.5152} &{\bf 0.5834} &{\bf 0.6144} &{\bf 0.6362} \cr \hline \end{tabular}} \end{table} \begin{figure*} [ht] \centering \subfigure[ ]{ \includegraphics[width=.3\textwidth]{P_NImageQTextB_Nuswide}} \hfill \centering \subfigure[ ]{ \includegraphics[width=.3\textwidth]{P_NTextQImageB_Nuswide}} \hfill \centering \subfigure[ ]{ 
\includegraphics[width=.3\textwidth]{P_NImageQTextB_MirFlickr25K}} \centering \subfigure[ ]{ \includegraphics[width=.3\textwidth]{P_NTextQImageB_MirFlickr25K}} \hfill \centering \subfigure[ ]{ \includegraphics[width=.3\textwidth]{P_NImageQTextB_wiki}} \hfill \centering \subfigure[ ]{ \includegraphics[width=.3\textwidth]{P_NTextQImageB_wiki}} \caption{TopN-precision curves @ $32$ bits on the three cross-modal benchmark datasets. (a) and (b): NUS-WIDE. (c) and (d): MIRFlickr25K. (e) and (f): Wiki.} \label{figure3} \end{figure*}
\begin{figure*} [ht] \centering \subfigure[ ]{ \includegraphics[width=.3\textwidth]{P_RImageQTextB_Nuswide}} \hfill \centering \subfigure[ ]{ \includegraphics[width=.3\textwidth]{P_RTextQImageB_Nuswide}} \hfill \centering \subfigure[ ]{ \includegraphics[width=.3\textwidth]{P_RImageQTextB_MirFlickr25K}} \centering \subfigure[ ]{ \includegraphics[width=.3\textwidth]{P_RTextQImageB_MirFlickr25K}} \hfill \centering \subfigure[ ]{ \includegraphics[width=.3\textwidth]{P_RImageQTextB_wiki}} \hfill \centering \subfigure[ ]{ \includegraphics[width=.3\textwidth]{P_RTextQImageB_wiki}} \caption{Precision-recall curves @ $32$ bits on the three cross-modal benchmark datasets. (a) and (b): NUS-WIDE. (c) and (d): MIRFlickr25K. (e) and (f): Wiki.} \label{figure4} \end{figure*}
\begin{figure*}[t] \centering \includegraphics[width=1.0\textwidth]{visualize_retrieval_I2T} \caption{Examples of image-based text retrieval on Wiki using AGSFH. For each image query (first column), we show the top $10$ retrieved texts (columns $2$-$11$): the top row shows the retrieved texts, the middle row their corresponding images, and the bottom row their class labels. Retrieved results that are semantically irrelevant to the query are marked with red boxes.} \label{figure32} \end{figure*}
\begin{figure*}[t] \centering \includegraphics[width=1.0\textwidth]{visualize_retrieval_T2I} \caption{Examples of text-based image retrieval on Wiki using AGSFH. For each text query (first column), the top $10$ retrieved images are shown in the third column: the top row shows the retrieved images and the bottom row their class labels. The second column shows the ground-truth image corresponding to the text query. Retrieved results that are semantically irrelevant to the query are marked with red boxes.} \label{figure31} \end{figure*}
In this section, we compare AGSFH with other state-of-the-art CMH approaches on three multi-modal benchmark databases. All the results were obtained on a 64-bit Windows PC with a 2.20GHz i7-8750H CPU and 16.0GB RAM.
\subsection{Experimental Settings} \subsubsection{Datasets} We perform experiments on three multi-modal benchmark datasets: Wiki \cite{jour9}, MIRFlickr25K \cite{jour10}, and NUS-WIDE \cite{jour11}. The details are given as follows. Wiki \cite{jour9} is comprised of $2,866$ image-text pairs sampled from Wikipedia. The dataset contains ten semantic categories such that each pair belongs to one of these categories. Each image is represented by a $128$-dimensional bag-of-words visual vector, and each text is represented by a $10$-dimensional word vector. We randomly select $2,173$ image-text pairs from the original dataset as the training set (which is used as the database in the retrieval task), and the remaining $693$ image-text pairs are used as the query set. MIRFlickr25K \cite{jour10} is a dataset containing $25,000$ image-text pairs in $24$ semantic categories.
In our experiment, we remove pairs whose textual tags appear less than $20$ times. Accordingly, we obtain $20,015$ pairs in total, in which a $150$-dimensional histogram vector is used to represent each image, and a $500$-dimensional latent semantic vector is used to represent each text. We randomly select $2,000$ image-tag pairs as the query set and the remaining $18,015$ pairs as the database in the retrieval task. We also randomly select $5,000$ image-text pairs from the remaining $18,015$ pairs as the training set to learn the hash model. NUS-WIDE \cite{jour11} is a multi-label image dataset that contains $269,648$ images with $5,018$ unique tags in $81$ categories. Similar to \cite{jour11}, we sample the ten largest categories with the corresponding $186,577$ images. We then randomly split this dataset into a training set with $184,577$ image-text pairs and a query set with $2,000$ image-text pairs. We note that each image is represented by a $500$-dimensional bag-of-words visual vector, and each text is represented by a $1,000$-dimensional index vector. Besides, we randomly select $5,000$ image-text pairs as the training set used to learn the hash model.
\subsubsection{Compared Methods} Our model is a shallow unsupervised CMH learning model. In order to conduct a fair comparison, we compare AGSFH with other shallow unsupervised CMH baselines: CSGH \cite{proceeding14}, BGDH \cite{proceeding16}, FSH \cite{proceeding17}, RFDH \cite{jour5}, and JIMFH \cite{jour23}. To implement the baselines, we use and modify their provided source codes. We note that the results are averaged over five runs. To further demonstrate the superior performance of our proposal, we compare our work with some deep learning methods, i.e., DCMH \cite{proceeding23}, DDCMH \cite{proceeding24}, and DBRC \cite{jour14}, on MirFlickr25K in Section \ref{sec4.5}.
\subsubsection{Implementation Details} We run the baseline CMH methods using their public codes with the parameters fixed as in the corresponding papers. In our experiments, the trade-off hyper-parameter $\lambda$ is set to $300$ on all three datasets. The weight-controller hyper-parameters $\gamma_1$ and $\gamma_3$ are both set to $0.01$. The regularization hyper-parameter $\gamma_2$ is set to $10$ for better performance. The number of clusters $C$ is set to $60$ in all our experiments. For training efficiency, the number of anchor points $P$ is set to $900$ for all the datasets. The number of neighbor points $k$ is set to $45$.
\subsubsection{Evaluation Criteria} We adopt three standard metrics to assess the retrieval performance of AGSFH: mean average precision (MAP), topN-precision curves, and precision-recall curves. The MAP metric is calculated on the top $50$ returned retrieval samples. A more detailed introduction to these evaluation criteria is contained in \cite{proceeding30}. Two typical cross-modal retrieval tasks are evaluated for our approach: the image-query-text task (I$\rightarrow$T) and the text-query-image task (T$\rightarrow$I).
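For reference, the following is a minimal Python sketch of the MAP metric computed on the top $50$ retrieved samples, as used above. It assumes relevance is defined by a shared semantic label, and the input \texttt{rankings} (a hypothetical name) lists the database labels sorted by Hamming distance to each query.
\begin{verbatim}
# Minimal sketch of MAP@50; relevance = shared semantic label.
import numpy as np

def average_precision(query_label, ranked_labels, top_n=50):
    """AP@top_n for one query: mean of precision at each hit."""
    hits, precisions = 0, []
    for rank, label in enumerate(ranked_labels[:top_n], start=1):
        if label == query_label:      # relevant iff labels match
            hits += 1
            precisions.append(hits / rank)
    return float(np.mean(precisions)) if precisions else 0.0

def mean_average_precision(query_labels, rankings, top_n=50):
    """MAP: average AP over all queries; rankings[i] holds the
    database labels sorted by Hamming distance to query i."""
    return float(np.mean([average_precision(q, r, top_n)
                          for q, r in zip(query_labels, rankings)]))
\end{verbatim}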
\subsection{Experimental Analysis} \renewcommand{\arraystretch}{1.0} \begin{table*}[htb] \centering \tiny \setlength{\tabcolsep}{5pt} \caption{Training Time (in Seconds) of Different Hashing Methods on MIRFlickr25K and NUS-WIDE.} \label{Table.4} \resizebox{0.9\textwidth}{!}{ \begin{tabular}{cccccccccc} \hline \multirow{2}{*}{Methods} &\multicolumn{4}{c}{MIRFlickr25K} &\multicolumn{4}{c}{NUS-WIDE}\cr \cmidrule(lr){2-5} \cmidrule(lr){6-9} &16 bits&32 bits&64 bits&128 bits&16 bits&32 bits&64 bits&128 bits\cr \hline CSGH &23.4576 &22.1885 &22.6214 &23.2040 &434.8411 &438.4742 &446.0972 &455.3075 \cr BGDH &0.6640 &0.7100 &1.0135 &1.4828 &8.8514 &10.0385 &12.6808 &16.6654 \cr FSH &5.2739 &5.2588 &5.6416 &6.2217 &75.6872 &77.9825 &83.4182 &89.5166 \cr RFDH &61.9264 &125.8577 &296.4683 &801.0725 &2132.1634 &4045.4016 &8995.0049 &22128.4188 \cr JIMFH &2.1706 &2.3945 &3.0242 &4.3020 &23.3772 &26.9233 &32.0088 &43.6795 \cr {\bf AGSFH} &79.2261 &80.3913 &136.8426 &79.4166 &638.8646 &871.4929 &794.8452 &961.3188 \cr \hline \end{tabular}} \end{table*}
\begin{table*}[htb] \centering \tiny \setlength{\belowcaptionskip}{5pt} \caption{Testing Time (in Seconds) of Different Hashing Methods on MIRFlickr25K and NUS-WIDE.} \label{Table.5} \resizebox{0.9\textwidth}{!}{ \begin{tabular}{ccccccccccc} \hline \multirow{2}{*}{Tasks}& \multirow{2}{*}{Methods} & \multicolumn{4}{c}{MIRFlickr25K} &\multicolumn{4}{c}{NUS-WIDE}\cr \cmidrule(lr){3-6} \cmidrule(lr){7-10} &&16 bits&32 bits&64 bits&128 bits&16 bits&32 bits&64 bits&128 bits\cr \hline \multirow{6}{*}{I$\rightarrow$T} &CSGH &0.1935 &0.2320 &0.3626 &1.0183 &3.2961 &4.4186 &6.8955 &12.4145 \cr &BGDH &0.1662 &0.2322 &0.3610 &1.0342 &3.3980 &4.4255 &6.9406 &12.4911 \cr &FSH &0.1984 &0.2464 &0.3555 &1.0110 &3.4221 &4.5401 &7.1110 &12.6618 \cr &RFDH &0.1941 &0.2522 &0.3672 &1.0303 &3.4753 &4.4805 &6.9003 &12.6593 \cr &JIMFH &0.1782 &0.2405 &0.3742 &1.0076 &3.2559 &4.5116 &7.0505 &12.6736 \cr &{\bf AGSFH} &0.1528 &0.2403 &0.3784 &0.9411 &3.9483 &4.5114 &6.9818 &12.6628 \cr \hline \multirow{6}{*}{T$\rightarrow$I} &CSGH &0.1562 &0.2133 &0.3670 &1.0157 &3.2620 &4.4325 &6.9195 &12.3532 \cr &BGDH &0.1442 &0.2194 &0.3464 &1.0355 &3.4354 &4.4582 &6.8954 &12.3665 \cr &FSH &0.1533 &0.2130 &0.3639 &1.0101 &3.4172 &4.4222 &6.9067 &12.422 \cr &RFDH &0.1455 &0.2263 &0.3584 &1.0281 &3.3462 &4.4351 &6.9336 &12.4925 \cr &JIMFH &0.1527 &0.2101 &0.3444 &1.0105 &3.4214 &4.4370 &6.9354 &12.4270 \cr &{\bf AGSFH} &0.1371 &0.2224 &0.3532 &0.9485 &3.4478 &4.4860 &6.9820 &12.4307 \cr \hline \end{tabular}} \end{table*}
\subsubsection{Retrieval Performance} In Tables \ref{Table.1}, \ref{Table.2} and \ref{Table.3}, the MAP evaluation results are exhibited for all three datasets, i.e., Wiki, MIRFlickr25K, and NUS-WIDE. From these tables, for both cross-modal tasks (i.e., image-query-text and text-query-image), AGSFH achieves significantly better results than all comparison methods on Wiki and MIRFlickr25K. Besides, on NUS-WIDE, AGSFH achieves performance comparable to JIMFH and FSH on the image-query-text task, outperforming the remaining comparison methods, and shows significantly better performance than all comparison methods on the text-query-image task. The superiority of AGSFH can be attributed to its capability to reduce the effect of information loss: it directly learns the intrinsic anchor graph to exploit the geometric properties of the underlying data structure across multiple modalities, while avoiding large quantization error.
Besides, AGSFH learns the structure of the intrinsic anchor graph adaptively so that the training instances are clustered into the semantic space, and it preserves the anchor fusion affinity in the common binary Hamming space, guaranteeing that the binary codes preserve the semantic relationships of the training instances. The above observations show the effectiveness of the proposed AGSFH. We can also observe that the average improvement on the text-query-image retrieval task is larger than that on the image-query-text retrieval task. This is because images contain more noise and outliers than text. The topN-precision curves with code length $32$ bits on all three datasets are shown in Fig. \ref{figure3}. The topN-precision results are consistent with the MAP evaluation values: AGSFH performs better than the comparison methods on the cross-modal hashing search tasks on Wiki and MIRFlickr25K. Furthermore, on NUS-WIDE, AGSFH shows performance comparable to JIMFH and FSH on the image-query-text task, outperforming the remaining comparison methods, and performs significantly better than all comparison methods on the text-query-image task. In a retrieval system, we care more about the front items in the retrieved list returned by the search algorithm. Hence, AGSFH achieves better performance on all retrieval tasks in this sense. From Tables \ref{Table.1}, \ref{Table.2}, \ref{Table.3} and Figs. \ref{figure3}-\ref{figure4}, AGSFH usually shows large performance margins compared with the other methods on the cross-modal hashing search tasks on Wiki and MIRFlickr25K. At the same time, on NUS-WIDE, AGSFH exhibits performance comparable to JIMFH and FSH on the image-query-text task, better than the remaining comparison methods, and significantly better than all comparison methods on the text-query-image task. We consider two possible reasons for this phenomenon. Firstly, AGSFH directly learns the intrinsic anchor graph to exploit the geometric properties of the underlying data structure across multiple modalities, and it avoids large quantization error, which makes it robust to outliers and noise in the image and text data. Thus, AGSFH achieves a performance improvement. Secondly, AGSFH adjusts the structure of the intrinsic anchor graph adaptively so that the training instances are clustered into the semantic space, and it preserves the anchor fusion affinity in the common binary Hamming space. This process can extract the high-level hidden semantic relationships in the images and texts. Therefore, AGSFH can find common semantic clusters that reflect the semantic properties more precisely. Consequently, under the guidance of the common semantic clusters, AGSFH achieves better performance on cross-modal retrieval tasks. The precision-recall curves with code length $32$ bits are shown in Fig. \ref{figure4}. By calculating the area under the precision-recall curves, we can see that AGSFH outperforms the comparison methods on the cross-modal hashing search tasks on Wiki and MIRFlickr25K. In addition, on NUS-WIDE, AGSFH has performance comparable to JIMFH and FSH on the image-query-text task, better than the remaining comparison methods, and significantly better than all comparison methods on the text-query-image task. Moreover, two examples of the image-query-text and text-query-image retrieval tasks on the Wiki dataset performed by AGSFH are presented.
The results are shown in Fig. \ref{figure32} and Fig. \ref{figure31}. Fig. \ref{figure32} shows that the classes of the second, fourth, sixth, and ninth retrieved texts differ from that of the image query, which belongs to the music category (first row). This is because the visual appearance of the image query in the music category is very similar to that of the incorrect images in the retrieved results. The example of text-based image retrieval is displayed in Fig. \ref{figure31}. As we can see, the classes of the first, second, fourth, fifth, and sixth images retrieved by AGSFH differ from that of the text query, which belongs to the warfare category. Again, the visual appearance of the images corresponding to the text query is very similar to that of the incorrectly retrieved images.
\subsubsection{Training Time} We examine the training-phase time cost of all the compared CMH methods on the two large benchmark datasets MIRFlickr25K and NUS-WIDE, showing the efficiency of our approach AGSFH. Table \ref{Table.4} illustrates the training time comparison. From the table, our AGSFH takes somewhat longer than CSGH, BGDH, FSH, and JIMFH, but it is much faster than RFDH for all code lengths from $16$ bits to $128$ bits. On the other hand, the training time of AGSFH remains nearly constant and acceptable as the code length grows from $16$ bits to $128$ bits, showing the superiority of AGSFH in time complexity.
\subsubsection{Testing Time} We compare the testing time of all the compared CMH methods in Table \ref{Table.5}. From this table, we observe that all the compared cross-modal hashing methods, as well as our AGSFH, take nearly identical time for retrieval.
\begin{table}[htb] \centering \tiny \setlength{\belowcaptionskip}{5pt} \caption{MAP Comparison of AGSFH and Deep Cross-modal Hashing Methods on MirFlickr25K.} \label{Table.6} \resizebox{0.48\textwidth}{!}{ \begin{tabular}{cccccc} \hline \multirow{2}{*}{Tasks}& \multirow{2}{*}{Methods} & \multicolumn{3}{c}{MirFlickr25K} \cr\cline{3-5} &&16 bits&32 bits&64 bits\cr \hline \multirow{4}{*}{I$\rightarrow$T} &DCMH &0.7410 &0.7465 &0.7485 \cr &DDCMH &{\bf 0.8208} &{\bf 0.8434} &{\bf 0.8551} \cr &DBRC &0.5922 &0.5922 &0.5854 \cr &{\bf AGSFH} &0.7239 &0.7738 &0.8135 \cr \hline \multirow{4}{*}{T$\rightarrow$I} &DCMH &{\bf 0.7827} &0.7900 &0.7932 \cr &DDCMH &0.7731 &0.7766 &0.7905 \cr &DBRC &0.5938 &0.5952 &0.5938 \cr &{\bf AGSFH} &0.7505 &{\bf 0.8009} &{\bf 0.8261} \cr \hline \end{tabular}} \end{table}
\subsubsection{Comparison with Deep Cross-modal Hashing} \label{sec4.5} We further compare AGSFH with three state-of-the-art deep CMH methods, i.e., DCMH \cite{proceeding23}, DDCMH \cite{proceeding24}, and DBRC \cite{jour14}, on the MirFlickr25K dataset. We utilize the deep image features extracted by the CNN-F network \cite{proceeding31} with the same parameters as in \cite{proceeding23}, together with the original text features, for evaluating the MAP results of AGSFH. In addition, the performance of DCMH, DDCMH, and DBRC (whose source code is not public) is quoted from the corresponding papers. The results are shown in Table \ref{Table.6}. From Table \ref{Table.6}, we find that AGSFH outperforms DCMH and DBRC on both the Image-to-Text and Text-to-Image tasks, while AGSFH is inferior to DDCMH on the Image-to-Text task and better than it on the Text-to-Image task.
From these observations, we draw the conclusion that although AGSFH is not a deep hashing model, it can outperform some state-of-the-art deep cross-modal hashing methods, i.e., DCMH and DBRC. While DDCMH shows significantly better results than the proposed AGSFH on the Image-to-Text task, AGSFH retains several strengths over DDCMH, such as a simple structure, ease of implementation, time efficiency, and strong interpretability. This further verifies the effectiveness of the proposed model AGSFH.
\subsection{Empirical Analysis} \begin{figure}[ht] \centering \includegraphics[width=0.5\columnwidth]{objectvalue.pdf} \caption{Convergence analysis.} \label{figure5} \end{figure}
\begin{figure*}[ht] \centering \subfigure[ ]{ \includegraphics[width=0.45\columnwidth]{MAPImageQTextB_loglambda.pdf}} \centering \subfigure[ ]{ \includegraphics[width=0.45\columnwidth]{MAPTextQImageB_loglambda.pdf}} \centering \subfigure[ ]{ \includegraphics[width=0.45\columnwidth]{MAPImageQTextB_loggamma1.pdf}} \centering \subfigure[ ]{ \includegraphics[width=0.45\columnwidth]{MAPTextQImageB_loggamma1.pdf}} \caption{MAP values versus hyper-parameters. (a) and (b) are $\lambda$. (c) and (d) are $\gamma_1$.} \label{figure6} \end{figure*}
\begin{figure*}[ht] \centering \subfigure[ ]{ \includegraphics[width=0.45\columnwidth]{MAPImageQTextB_loggamma2.pdf}} \centering \subfigure[ ]{ \includegraphics[width=0.45\columnwidth]{MAPTextQImageB_loggamma2.pdf}} \centering \subfigure[ ]{ \includegraphics[width=0.45\columnwidth]{MAPImageQTextB_loggamma3.pdf}} \centering \subfigure[ ]{ \includegraphics[width=0.45\columnwidth]{MAPTextQImageB_loggamma3.pdf}} \caption{MAP values versus hyper-parameters. (a) and (b) are $\gamma_2$. (c) and (d) are $\gamma_3$.} \label{figure7} \end{figure*}
\begin{figure*}[ht] \centering \subfigure[ ]{ \includegraphics[width=0.45\columnwidth]{MAPImageQTextB_c.pdf}} \centering \subfigure[ ]{ \includegraphics[width=0.45\columnwidth]{MAPTextQImageB_c.pdf}} \centering \subfigure[ ]{ \includegraphics[width=0.45\columnwidth]{MAPImageQTextB_k.pdf}} \centering \subfigure[ ]{ \includegraphics[width=0.45\columnwidth]{MAPTextQImageB_k.pdf}} \caption{MAP values versus hyper-parameters. (a) and (b) are $C$. (c) and (d) are $k$.} \label{figure8} \end{figure*}
\begin{figure}[ht] \centering \subfigure{ \includegraphics[width=0.45\columnwidth]{MAPImageQTextB_P.pdf}} \centering \subfigure{ \includegraphics[width=0.45\columnwidth]{MAPTextQImageB_P.pdf}} \caption{MAP values versus hyper-parameter $P$.} \label{figure9} \end{figure}
\subsubsection{Convergence} \label{Convergences} We empirically validate the convergence property of the proposed AGSFH. Specifically, we analyze the objective value on all three datasets with the hash code length fixed to $64$ bits. Fig. \ref{figure5} illustrates the convergence curves, where we have normalized the objective value by the number of training data and the maximum objective value. From Fig. \ref{figure5}, we can easily see that the objective value of AGSFH decreases sharply within fewer than $40$ iterations on all three datasets and changes little afterwards. This result shows the fast convergence of Algorithm \ref{alg:GSFH}.
\subsubsection{Ablation Study} \begin{table}[htb] \centering \tiny \setlength{\belowcaptionskip}{5pt} \caption{Ablation Results of AGSFH on MirFlickr25K.} \label{Table.7} \resizebox{0.48\textwidth}{!}{ \begin{tabular}{ccccccc} \hline \multirow{2}{*}{Tasks}& \multirow{2}{*}{Methods} & \multicolumn{4}{c}{MirFlickr25K} \cr\cline{3-6} &&16 bits&32 bits&64 bits&128 bits\cr \hline \multirow{3}{*}{I$\rightarrow$T} &AGSFH\_appro &0.6249 &{\bf 0.6531} &0.6660 &0.6688 \cr &AGSFH\_reg &0.6397 &0.6353 &0.6609 &{\bf 0.6709} \cr &{\bf AGSFH} &{\bf 0.6421} &0.6509 &{\bf 0.6702} &0.6701 \cr \hline \multirow{3}{*}{T$\rightarrow$I} &AGSFH\_appro &0.6419 &0.6720 &0.6990 &0.7095 \cr &AGSFH\_reg &0.6496 &0.6612 &0.7013 &0.7026 \cr &{\bf AGSFH} &{\bf 0.6676} &{\bf 0.6745} &{\bf 0.7101} &{\bf 0.7163} \cr \hline \end{tabular}} \end{table}
To obtain deeper insight into AGSFH, we further perform an ablation study on the MIRFlickr25K dataset, considering two variations of AGSFH: $1)$ AGSFH\_appro and $2)$ AGSFH\_reg. Compared with AGSFH, AGSFH\_appro drops the second term in Eq.(\ref{GSFH19}); this term requires the learned adaptive anchor graph $\hat{S}$ to best approximate the fused anchor graph $\hat{A}$. In contrast, AGSFH\_reg discards the third term in Eq.(\ref{GSFH19}), which adds an $L_2$-norm regularization smoothing the elements of the learned anchor graph $\hat{S}$. The MAP results of AGSFH and its variations are shown in Table \ref{Table.7}. From this table, we find the following. \begin{enumerate}[(1)] \setlength{\listparindent}{2em} \item AGSFH outperforms AGSFH\_appro and AGSFH\_reg in most settings, showing the contribution of the second and third terms in Eq.(\ref{GSFH19}). \item AGSFH\_appro and AGSFH\_reg achieve approximately equivalent performance. This indicates that the second and third terms in Eq.(\ref{GSFH19}) are both useful for performance, each contributing a different improvement to AGSFH. \end{enumerate}
\subsubsection{Parameter Sensitivity Analysis} In this part, we analyze the parameter sensitivity of the two cross-modal retrieval tasks under various experimental settings on all datasets. Our AGSFH involves seven model hyper-parameters: the trade-off hyper-parameter $\lambda$, the weight-controller hyper-parameters $\gamma_1$ and $\gamma_3$, the regularization hyper-parameter $\gamma_2$, the number of clusters $C$, the number of anchor points $P$, and the number of neighbor points $k$. We fix the length of the binary codes to $32$ bits and vary one hyper-parameter at a time while keeping the others fixed, to verify the superiority and stability of our approach AGSFH over a wide parameter range. Figs. \ref{figure6}, \ref{figure7}, \ref{figure8}, and \ref{figure9} present the MAP results of AGSFH. These figures show relatively stable and superior performance over a large range of parameter values, verifying the robustness of AGSFH to parameter variations.
\section{Conclusion} In this paper, we propose a novel cross-modal hashing approach for efficient retrieval tasks across modalities, termed Anchor Graph Structure Fusion Hashing (AGSFH). AGSFH directly learns an intrinsic anchor fusion graph, where the structure of the intrinsic anchor graph is adaptively tuned so that the number of components of the intrinsic graph is exactly equal to the number of clusters. Based on this process, training instances can be clustered into the semantic space.
Besides, AGSFH attempts to directly preserve the anchor fusion affinity, which carries complementary information from the multi-modal data, in the common binary Hamming space, so that the hash codes capture the intrinsic similarity and structure across modalities. A discrete optimization framework is designed to learn the unified binary codes across modalities. Extensive experimental results on three public social datasets demonstrate the superiority of AGSFH in cross-modal retrieval. \bibliographystyle{IEEEtran}
\section{Introduction} In this article we construct (in a rigorous mathematical way) interacting quantum field theories over a $p$-adic spacetime in an arbitrary dimension. We provide a large family of energy functionals $E(\varphi,J)$ admitting natural discretizations in finite-dimensional vector spaces such that the partition function
\begin{equation}
Z^{\text{phys}}(J)=\int D(\varphi)e^{-\frac{1}{K_{B}T}E(\varphi,J)} \label{Eq_0}
\end{equation}
can be defined rigorously as the limit of the mentioned discretizations. Our main result is the construction of a measure on a function space such that (\ref{Eq_0}) makes mathematical sense, and the calculations of the $n$-point correlation functions can be carried out using perturbation expansions via functional derivatives, in a rigorous mathematical way. Our results include $\varphi^{4}$-theories. In this case, $E(\varphi,J)$ can be interpreted as a Landau-Ginzburg functional of a continuous Ising model (i.e. $\varphi\in\mathbb{R}$) with external magnetic field $J$. If $J=0$, then $E(\varphi,0)$ is invariant under $\varphi\rightarrow-\varphi$. We show that the systems attached to discrete versions of $E(\varphi,0)$ exhibit spontaneous symmetry breaking when the temperature $T$ is less than the critical temperature.
From now on $p$ denotes a fixed prime number different from $2$. A $p$-adic number is a series of the form
\begin{equation}
x=x_{-k}p^{-k}+x_{-k+1}p^{-k+1}+\ldots+x_{0}+x_{1}p+\ldots,\text{ with }x_{-k}\neq0\text{,} \label{p-adic-number}
\end{equation}
where the $x_{j}$s are $p$-adic digits, i.e. numbers in the set $\left\{0,1,\ldots,p-1\right\}$. The set of all possible series of the form (\ref{p-adic-number}) constitutes the field of $p$-adic numbers $\mathbb{Q}_{p}$. There are natural field operations, sum and multiplication, on series of the form (\ref{p-adic-number}), see e.g. \cite{Koblitz}. There is also a natural norm in $\mathbb{Q}_{p}$ defined as $\left\vert x\right\vert_{p}=p^{k}$, for a nonzero $p$-adic number of the form (\ref{p-adic-number}). The field of $p$-adic numbers with the distance induced by $\left\vert\cdot\right\vert_{p}$ is a complete ultrametric space. The ultrametric (or non-Archimedean) property refers to the fact that $\left\vert x-y\right\vert_{p}\leq\max\left\{\left\vert x-z\right\vert_{p},\left\vert z-y\right\vert_{p}\right\}$ for any $x$, $y$, $z\in\mathbb{Q}_{p}$. We denote by $\mathbb{Z}_{p}$ the unit ball, which consists of all series with expansions of the form (\ref{p-adic-number}) with $-k\geq0$. We extend the $p$-adic norm to $\mathbb{Q}_{p}^{N}$ by taking $||x||_{p}=\max_{1\leq i\leq N}|x_{i}|_{p}$, for $x=(x_{1},\dots,x_{N})\in\mathbb{Q}_{p}^{N}$.
A fundamental scientific problem is the understanding of the structure of space-time at the level of the Planck scale, and the construction of physical-mathematical models of it. This problem occurs naturally when trying to unify general relativity and quantum mechanics. In the 1930s Bronstein showed that general relativity and quantum mechanics imply that the uncertainty $\Delta x$ of any length measurement satisfies $\Delta x\geq L_{\text{Planck}}:=\sqrt{\frac{\hbar G}{c^{3}}}$, where $L_{\text{Planck}}$ is the Planck length ($L_{\text{Planck}}\approx10^{-33}$ $cm$). This implies that space-time is not an infinitely divisible continuum (mathematically speaking, the spacetime must be a totally disconnected topological space at the level of the Planck scale).
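As a minimal illustration of the definitions given above (an illustrative sketch only, restricted to rational inputs, which sit densely inside $\mathbb{Q}_{p}$), the $p$-adic order and norm can be computed in a few lines of Python:
\begin{verbatim}
# Minimal sketch: p-adic order and norm of a nonzero rational
# x = a/b, with |x|_p = p^{-ord_p(x)} as in the definitions above.
from fractions import Fraction

def ord_p(x: Fraction, p: int) -> int:
    """p-adic order: net exponent of p in the factorization of x."""
    assert x != 0
    k, a, b = 0, x.numerator, x.denominator
    while a % p == 0:
        a //= p; k += 1
    while b % p == 0:
        b //= p; k -= 1
    return k

def norm_p(x: Fraction, p: int) -> float:
    return 0.0 if x == 0 else float(p) ** (-ord_p(x, p))

# e.g. with p = 3: |9/2|_3 = 1/9, |2/9|_3 = 9, |5|_3 = 1
print(norm_p(Fraction(9, 2), 3), norm_p(Fraction(2, 9), 3))
\end{verbatim}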
Bronstein's inequality has motivated the development of several different physical theories. At any rate, this inequality implies the need of using non-Archimedean mathematics in models dealing with the Planck scale. In the 1980s, Volovich proposed the conjecture that the space-time at the Planck scale is non-Archimedean, see \cite{Volovich1}. This conjecture has propelled a wide variety of investigations in cosmology, quantum mechanics, string theory, QFT, etc., and the influence of this conjecture is still relevant nowadays, see e.g. \cite{Abdesselam}, \cite{Bocardo-Zuniga}-\cite{GC-Zuniga}, \cite{Gubser et al.}-\cite{Harlow et al}, \cite{Kochubei et al}-\cite{KKZuniga}, \cite{LM89}-\cite{Mis2}, \cite{V-V-Z}-\cite{Zuniga-LNM-2016}.
The space $\mathbb{Q}_{p}^{N}$ has a very rich mathematical structure. The axiomatic quantum field theory can be extended to $\mathbb{Q}_{p}^{N}$. In \cite{Mendoza-Zuniga}, we construct a family of quantum scalar fields over a $p$-adic spacetime which satisfy $p$-adic analogues of the G\aa rding--Wightman axioms. Since the space of test functions on $\mathbb{Q}_{p}^{N}$ is nuclear, the techniques of white noise calculus are available in the $p$-adic setting, see e.g. \cite{Ber-Kon}, \cite{Gelfand-Vilenkin}, \cite{Huang-Yang}, \cite{Hida et al}. This implies that a rigorous functional integral approach is available in the $p$-adic framework, see e.g. \cite{Glimm-Jaffe}, \cite{Simon-0}, \cite{Simon-1}. In \cite{Zuniga-JFAA}, see also \cite[Chapter 11]{KKZuniga}, \cite{Albeverio-et-al-3}-\cite{Albeverio-et-al-4}, we introduced a class of non-Archimedean massive Euclidean fields, in arbitrary dimension, which are constructed as solutions of certain covariant $p$-adic stochastic pseudodifferential equations, by using techniques of white noise calculus. In \cite{Arroyo-Zuniga}, we construct a large class of interacting Euclidean quantum field theories, over a $p$-adic spacetime, by using white noise calculus. These quantum fields fulfill all the Osterwalder-Schrader axioms, except reflection positivity. In all these theories the time is a $p$-adic variable. Since $\mathbb{Q}_{p}$ is not an ordered field, there is no notion of past and future. In certain theories, it is possible to introduce a quadratic form whose orthogonal group plays the role of the Lorentz group. In any case, there is no light-cone structure, and thus this type of theory is also acausal, see \cite{Mendoza-Zuniga}. The relevant feature is that the vacuum of all these theories performs fluctuations.
In the case of $\varphi^{4}$-theories the energy functional $E(\varphi,0)$ takes the form
\begin{align}
E(\varphi,0;\delta,\gamma,\alpha_{2},\alpha_{4}) &=\frac{\gamma}{2}\int\limits_{\mathbb{Q}_{p}^{N}}\varphi\left(x\right)\boldsymbol{W}\left(\partial,\delta\right)\varphi\left(x\right)d^{N}x+\frac{\alpha_{2}}{2}\int\limits_{\mathbb{Q}_{p}^{N}}\varphi^{2}\left(x\right)d^{N}x\nonumber\\
&+\frac{\alpha_{4}}{2}\int\limits_{\mathbb{Q}_{p}^{N}}\varphi^{4}\left(x\right)d^{N}x, \label{Eq_energy}
\end{align}
where $\varphi:\mathbb{Q}_{p}^{N}\rightarrow\mathbb{R}$ is a test function ($\varphi\in\mathcal{D}_{\mathbb{R}}\left(\mathbb{Q}_{p}^{N}\right)$), $\delta>N$, $\gamma>0$, $\alpha_{2}\geq0$, $\alpha_{4}\geq0$, and $\boldsymbol{W}\left(\partial,\delta\right)\varphi\left(x\right)=\mathcal{F}_{\kappa\rightarrow x}^{-1}(A_{w_{\delta}}(\left\Vert\kappa\right\Vert_{p})\mathcal{F}_{x\rightarrow\kappa}\varphi)$ is a pseudodifferential operator whose symbol has a singularity at the origin. An interesting observation is that the one-dimensional Vladimirov operator is a special case of the operators $\boldsymbol{W}\left(\partial,\delta\right)$; in this case the action $E(\varphi,0;\delta,\gamma,0,0)$ appeared in $p$-adic string theory, see \cite{Spokoiny}, \cite{Zhang}, \cite{Zabrodin}, see also \cite{GC-Zuniga} and the references therein.
In order to make sense of the partition function attached to $E(\varphi,0;\delta,\gamma,\alpha_{2},\alpha_{4})$, see (\ref{Eq_0}), we discretize the fields as in classical QFT. As fields we use test functions $\varphi\in\mathcal{D}_{\mathbb{R}}\left(\mathbb{Q}_{p}^{N}\right)$, which are locally constant with compact support. We have $\mathcal{D}_{\mathbb{R}}\left(\mathbb{Q}_{p}^{N}\right)=\cup_{l=1}^{\infty}\mathcal{D}_{\mathbb{R}}^{l}\left(\mathbb{Q}_{p}^{N}\right)$, where $\mathcal{D}_{\mathbb{R}}^{l}\left(\mathbb{Q}_{p}^{N}\right)\simeq\mathbb{R}^{\#G_{l}}$ is a real, finite-dimensional vector space consisting of test functions supported in the ball $B_{l}^{N}=\left\{x\in\mathbb{Q}_{p}^{N};\left\Vert x\right\Vert_{p}\leq p^{l}\right\}$ having the form
\begin{equation}
\varphi\left(x\right)=\sum\limits_{\boldsymbol{i}\in G_{l}}\varphi\left(\boldsymbol{i}\right)\Omega\left(p^{l}\left\Vert x-\boldsymbol{i}\right\Vert_{p}\right)\text{, \ }\varphi\left(\boldsymbol{i}\right)\in\mathbb{R}\text{,} \label{Eq_0_1}
\end{equation}
where $G_{l}$ is a finite set of indices and $\Omega\left(p^{l}\left\Vert x-\boldsymbol{i}\right\Vert_{p}\right)$ is the characteristic function of the ball $B_{-l}^{N}\left(\boldsymbol{i}\right)=\left\{x\in\mathbb{Q}_{p}^{N};\left\Vert x-\boldsymbol{i}\right\Vert_{p}\leq p^{-l}\right\}$. Now a natural discretization of the partition function, $\mathcal{Z}^{\left(l\right)}$, is obtained by restricting the fields to $\mathcal{D}_{\mathbb{R}}^{l}\left(\mathbb{Q}_{p}^{N}\right)\simeq\mathbb{R}^{\#G_{l}}$ as follows. By identifying $\varphi$ with the column vector $\left[\varphi\left(\boldsymbol{i}\right)\right]_{\boldsymbol{i}\in G_{l}}$, one obtains that
\[
E(\varphi,0;\delta,\gamma,\alpha_{2},0)=\sum\limits_{\boldsymbol{i},\boldsymbol{j}\in G_{l}}p^{-lN}U_{\boldsymbol{i},\boldsymbol{j}}(l)\varphi\left(\boldsymbol{i}\right)\varphi\left(\boldsymbol{j}\right)
\]
is a quadratic form in $\left[\varphi\left(\boldsymbol{i}\right)\right]_{\boldsymbol{i}\in G_{l}}$, cf.
Lemma \ref{Lemma6}. Thus, taking $K_{B}T=1$, it is natural to propose that
\[
\mathcal{Z}^{\left(l\right)}=\int D_{l}(\varphi)e^{-E(\varphi,0;\delta,\gamma,\alpha_{2},0)}\overset{\text{def.}}{=}\mathcal{N}_{l}\int\limits_{\mathbb{R}^{\#G_{l}}}e^{-\sum\limits_{\boldsymbol{i},\boldsymbol{j}\in G_{l}}p^{-lN}U_{\boldsymbol{i},\boldsymbol{j}}(l)\varphi\left(\boldsymbol{i}\right)\varphi\left(\boldsymbol{j}\right)}\prod\limits_{\boldsymbol{i}\in G_{l}}d\varphi\left(\boldsymbol{i}\right),
\]
where $\mathcal{N}_{l}$ is a normalization constant and $\prod\nolimits_{\boldsymbol{i}\in G_{l}}d\varphi\left(\boldsymbol{i}\right)$ is the Lebesgue measure of $\mathbb{R}^{\#G_{l}}$; this is a finite-dimensional Gaussian integral. We denote the corresponding Gaussian measure by $\mathbb{P}_{l}$. The next step is to show the existence of a probability measure $\mathbb{P}$ such that $\mathbb{P}=\lim_{l\rightarrow\infty}\mathbb{P}_{l}$ `in some sense'. This requires passing to momentum space and using the Lizorkin space $\mathcal{L}_{\mathbb{R}}\left(\mathbb{Q}_{p}^{N}\right)\subset\mathcal{D}_{\mathbb{R}}\left(\mathbb{Q}_{p}^{N}\right)$, resp. $\mathcal{L}_{\mathbb{R}}^{l}\left(\mathbb{Q}_{p}^{N}\right)\subset\mathcal{D}_{\mathbb{R}}^{l}\left(\mathbb{Q}_{p}^{N}\right)$. The key point is that the operator
\[
\frac{\gamma}{2}\boldsymbol{W}\left(\partial,\delta\right)+\frac{\alpha_{2}}{2}:\mathcal{L}_{\mathbb{R}}\left(\mathbb{Q}_{p}^{N}\right)\rightarrow\mathcal{L}_{\mathbb{R}}\left(\mathbb{Q}_{p}^{N}\right)
\]
has an inverse in $\mathcal{L}_{\mathbb{R}}\left(\mathbb{Q}_{p}^{N}\right)$ for any $\alpha_{2}\geq0$. The construction of the measure $\mathbb{P}$ proceeds in two steps. In the first step, by using Kolmogorov's consistency theorem, one shows the existence of a unique probability measure $\mathbb{P}$ in $\mathbb{R}^{\infty}\cup\left\{\text{point}\right\}$ such that any linear functional $f\rightarrow\int_{\mathcal{L}_{\mathbb{R}}^{l}\left(\mathbb{Q}_{p}^{N}\right)}fd\mathbb{P}_{l}$, where $f$ is a continuous bounded function on $\mathcal{L}_{\mathbb{R}}^{l}\left(\mathbb{Q}_{p}^{N}\right)$, has a unique extension of the form $\int_{\mathcal{L}_{\mathbb{R}}^{l}\left(\mathbb{Q}_{p}^{N}\right)}fd\mathbb{P}_{l}=\int_{\mathcal{L}_{\mathbb{R}}^{l}\left(\mathbb{Q}_{p}^{N}\right)}fd\mathbb{P}$, cf. Lemma \ref{Lemma11}. In the second step, by using the Gel'fand triple $\mathcal{L}_{\mathbb{R}}\left(\mathbb{Q}_{p}^{N}\right)\hookrightarrow L_{\mathbb{R}}^{2}\left(\mathbb{Q}_{p}^{N}\right)\hookrightarrow\mathcal{L}_{\mathbb{R}}^{\prime}\left(\mathbb{Q}_{p}^{N}\right)$, where $\mathcal{L}_{\mathbb{R}}^{\prime}\left(\mathbb{Q}_{p}^{N}\right)$ is the topological dual of $\mathcal{L}_{\mathbb{R}}\left(\mathbb{Q}_{p}^{N}\right)$, and the Bochner-Minlos theorem, there exists a probability measure $\mathbb{P}$ on $\left(\mathcal{L}_{\mathbb{R}}^{\prime}\left(\mathbb{Q}_{p}^{N}\right),\mathcal{B}\right)$ that coincides with the probability measure constructed in the first step, cf. Theorem \ref{Theorem1}.
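As an illustration of the discretized measure $\mathbb{P}_{l}$, the following Python sketch draws samples from the finite-dimensional Gaussian distribution with density proportional to $e^{-\sum_{\boldsymbol{i},\boldsymbol{j}}p^{-lN}U_{\boldsymbol{i},\boldsymbol{j}}(l)\varphi(\boldsymbol{i})\varphi(\boldsymbol{j})}$; the positive definite matrix \texttt{U} below is a hypothetical stand-in for the matrix $U(l)$ of Lemma \ref{Lemma6}.
\begin{verbatim}
# Minimal sketch: sampling the Gaussian measure P_l. The density is
# prop. to exp(-phi^T (p^{-lN} U) phi), so the covariance matrix is
# (2 p^{-lN} U)^{-1}. U here is a hypothetical SPD matrix.
import numpy as np

rng = np.random.default_rng(0)
p, l, N = 3, 1, 1
dim = p ** (2 * l * N)       # #G_l = p^{2lN}, the dimension of D_R^l

M = rng.standard_normal((dim, dim))
U = M @ M.T + dim * np.eye(dim)   # symmetric positive definite

cov = np.linalg.inv(2.0 * p ** (-l * N) * U)
samples = rng.multivariate_normal(np.zeros(dim), cov, size=10_000)
# each row is a field [phi(i)]_{i in G_l}, i.e. a sample from P_l
\end{verbatim}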
For an interaction energy $E_{\text{int}}(\varphi)$ satisfying $\exp\left(-E_{\text{int}}(\varphi)\right)\leq1$, one verifies that
\[
\int\nolimits_{\mathcal{L}_{\mathbb{R}}^{l}\left(\mathbb{Q}_{p}^{N}\right)}e^{-E_{\text{int}}\left(\varphi\right)}d\mathbb{P}_{l}=\int\nolimits_{\mathcal{L}_{\mathbb{R}}^{l}\left(\mathbb{Q}_{p}^{N}\right)}e^{-E_{\text{int}}\left(\varphi\right)}d\mathbb{P}\rightarrow\int\nolimits_{\mathcal{L}_{\mathbb{R}}\left(\mathbb{Q}_{p}^{N}\right)}e^{-E_{\text{int}}\left(\varphi\right)}d\mathbb{P}
\]
as $l\rightarrow\infty$. Then a $\mathcal{P}\left(\varphi\right)$-theory is given by a cylinder probability measure of the form
\begin{equation}
\frac{1_{\mathcal{L}_{\mathbb{R}}}\left(\varphi\right)e^{-E_{\text{int}}\left(\varphi\right)}d\mathbb{P}}{\int\nolimits_{\mathcal{L}_{\mathbb{R}}\left(\mathbb{Q}_{p}^{N}\right)}e^{-E_{\text{int}}\left(\varphi\right)}d\mathbb{P}} \label{measure_1}
\end{equation}
in the space of fields $\mathcal{L}_{\mathbb{R}}\left(\mathbb{Q}_{p}^{N}\right)$. It is important to mention that we do not require the Wick regularization of $e^{-E_{\text{int}}\left(\varphi\right)}$ because we restrict the fields to be test functions. Here we consider polynomial interactions. For more general interactions the Wick calculus is necessary, see \cite{Arroyo-Zuniga}, \cite{GS1999}. The advantage of the approach presented here is that all the perturbation calculations can be carried out in the standard way using functional derivatives, but in a mathematically rigorous way, see Theorem \ref{Theorem2}. The mathematical framework presented here allows the construction of complex-valued measures of the type
\[
\frac{1_{\mathcal{L}_{\mathbb{R}}}\left(\varphi\right)\exp\sqrt{-1}\left\{\frac{\alpha_{4}}{2}\int\limits_{\mathbb{Q}_{p}^{N}}\varphi^{4}\left(x\right)d^{N}x-\int\limits_{\mathbb{Q}_{p}^{N}}J(x)\varphi\left(x\right)d^{N}x\right\}}{\int\nolimits_{\mathcal{L}_{\mathbb{R}}\left(\mathbb{Q}_{p}^{N}\right)}\exp\sqrt{-1}\left\{\frac{\alpha_{4}}{2}\int\limits_{\mathbb{Q}_{p}^{N}}\varphi^{4}\left(x\right)d^{N}x\right\}d\mathbb{P}}d\mathbb{P}\text{.}
\]
Furthermore, all the corresponding perturbation expansions can be carried out in the standard form. These measures are obtained from measures of type (\ref{measure_1}) by performing a Wick rotation of type $\varphi\rightarrow\sqrt{-1}\varphi$, see Section \ref{Section_Wick_rotation}. The novelty is that this Wick rotation is not performed in the spacetime, and thus all these quantum field theories are acausal. More precisely, special relativity is not valid in the spacetime of these theories. However, the vacuum of all these theories performs thermal (resp. quantum) fluctuations, because the Feynman rules are valid, at least formally, in these theories. The energy functional $E(\varphi,J;\delta,\gamma,\alpha_{2},\alpha_{4})$, $\varphi\in\mathcal{D}_{\mathbb{R}}^{l}\left(\mathbb{Q}_{p}^{N}\right)$, see (\ref{Eq_energy}), can be interpreted as the Hamiltonian of a continuous Ising model in the ball $B_{l}^{N}$ with an external magnetic field $J$. The Landau-Ginzburg energy functional $E(\varphi,0;\delta,\gamma,\alpha_{2},\alpha_{4})$ is non-local, i.e. only long-range interactions occur; it has translational symmetries and $\boldsymbol{Z}_{2}$ symmetry ($\varphi\rightarrow-\varphi$), see Section \ref{Section_Landau_Ginzburg}.
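Continuing the sampling sketch above, a Monte Carlo estimate of the normalization $\int e^{-E_{\text{int}}\left(\varphi\right)}d\mathbb{P}_{l}$ is immediate for the quartic interaction $E_{\text{int}}(\varphi)=\frac{\alpha_{4}}{2}\int\varphi^{4}(x)d^{N}x$: on $\mathcal{D}_{\mathbb{R}}^{l}$ this interaction reduces to $\frac{\alpha_{4}}{2}\sum_{\boldsymbol{i}}p^{-lN}\varphi(\boldsymbol{i})^{4}$, since $\varphi$ is constant on balls of volume $p^{-lN}$. The value of \texttt{alpha4} below is a hypothetical choice.
\begin{verbatim}
# Monte Carlo estimate of int exp(-E_int) dP_l, reusing p, l, N and
# `samples` from the previous sketch; alpha4 is a hypothetical value.
import numpy as np

alpha4 = 0.1
E_int = 0.5 * alpha4 * p ** (-l * N) * np.sum(samples ** 4, axis=1)
Z_int = float(np.mean(np.exp(-E_int)))
# since alpha4 >= 0, exp(-E_int) <= 1 and the estimate lies in (0, 1]
print(Z_int)
\end{verbatim}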
We obtain the equation of motion for a system with free energy $E(\varphi,0;\delta,\gamma,\alpha_{2},\alpha_{4})$, see Theorem \ref{Theorem3}. By using this result we show that below the critical temperature the system must pick one of the two states $+\varphi_{0}$ or $-\varphi_{0}$ (which are constant solutions of the equation of motion), which means that there is spontaneous symmetry breaking. Finally, all the results presented in this article remain valid if $\mathbb{Q}_{p}$ is replaced by any non-Archimedean local field.
\section{\label{Section1} Basic facts on $p$-adic analysis} In this section we fix the notation and collect some basic results on $p$-adic analysis that we will use throughout the article. For a detailed exposition on $p$-adic analysis the reader may consult \cite{A-K-S}, \cite{Taibleson}, \cite{V-V-Z}.
\subsection{The field of $p$-adic numbers} Throughout this article $p$ will denote a prime number. Since we have to deal with quadratic forms, for the sake of simplicity, we assume that $p\geq3$. The field of $p$-adic numbers $\mathbb{Q}_{p}$ is defined as the completion of the field of rational numbers $\mathbb{Q}$ with respect to the $p$-adic norm $|\cdot|_{p}$, which is defined as
\[
|x|_{p}=\begin{cases} 0 & \text{if }x=0\\ p^{-\gamma} & \text{if }x=p^{\gamma}\dfrac{a}{b}, \end{cases}
\]
where $a$ and $b$ are integers coprime with $p$. The integer $\gamma=ord_{p}(x):=ord(x)$, with $ord(0):=+\infty$, is called the \textit{$p$-adic order of} $x$. We extend the $p$-adic norm to $\mathbb{Q}_{p}^{N}$ by taking
\[
||x||_{p}:=\max_{1\leq i\leq N}|x_{i}|_{p},\qquad\text{for }x=(x_{1},\dots,x_{N})\in\mathbb{Q}_{p}^{N}.
\]
We define $ord(x)=\min_{1\leq i\leq N}\{ord(x_{i})\}$; then $||x||_{p}=p^{-ord(x)}$. The metric space $\left(\mathbb{Q}_{p}^{N},||\cdot||_{p}\right)$ is a complete ultrametric space. As a topological space $\mathbb{Q}_{p}$ is homeomorphic to a Cantor-like subset of the real line, see e.g. \cite{A-K-S}, \cite{V-V-Z}. Any $p$-adic number $x\neq0$ has a unique expansion of the form
\[
x=p^{ord(x)}\sum_{j=0}^{\infty}x_{j}p^{j},
\]
where $x_{j}\in\{0,1,2,\dots,p-1\}$ and $x_{0}\neq0$. By using this expansion, we define \textit{the fractional part} $\{x\}_{p}$ \textit{of} $x\in\mathbb{Q}_{p}$ as the rational number
\[
\{x\}_{p}=\begin{cases} 0 & \text{if }x=0\text{ or }ord(x)\geq0\\ p^{ord(x)}\sum_{j=0}^{-ord(x)-1}x_{j}p^{j} & \text{if }ord(x)<0. \end{cases}
\]
In addition, any $x\in\mathbb{Q}_{p}^{N}\smallsetminus\left\{0\right\}$ can be represented uniquely as $x=p^{ord(x)}v\left(x\right)$ where $\left\Vert v\left(x\right)\right\Vert_{p}=1$.
\subsection{Topology of $\mathbb{Q}_{p}^{N}$} For $r\in\mathbb{Z}$, denote by $B_{r}^{N}(a)=\{x\in\mathbb{Q}_{p}^{N};||x-a||_{p}\leq p^{r}\}$ \textit{the ball of radius }$p^{r}$ \textit{with center at} $a=(a_{1},\dots,a_{N})\in\mathbb{Q}_{p}^{N}$, and take $B_{r}^{N}(0):=B_{r}^{N}$. Note that $B_{r}^{N}(a)=B_{r}(a_{1})\times\cdots\times B_{r}(a_{N})$, where $B_{r}(a_{i}):=\{x_{i}\in\mathbb{Q}_{p};|x_{i}-a_{i}|_{p}\leq p^{r}\}$ is the one-dimensional ball of radius $p^{r}$ with center at $a_{i}\in\mathbb{Q}_{p}$. The ball $B_{0}^{N}$ equals the product of $N$ copies of $B_{0}=\mathbb{Z}_{p}$, \textit{the ring of }$p$-\textit{adic integers}. We also denote by $S_{r}^{N}(a)=\{x\in\mathbb{Q}_{p}^{N};||x-a||_{p}=p^{r}\}$ \textit{the sphere of radius }$p^{r}$ \textit{with center at} $a=(a_{1},\dots,a_{N})\in\mathbb{Q}_{p}^{N}$, and take $S_{r}^{N}(0):=S_{r}^{N}$.
We notice that $S_{0}^{1}=\mathbb{Z}_{p}^{\times}$ (the group of units of $\mathbb{Z}_{p}$), but $\left(\mathbb{Z}_{p}^{\times}\right)^{N}\subsetneq S_{0}^{N}$. The balls and spheres are both open and closed subsets in $\mathbb{Q}_{p}^{N}$. In addition, two balls in $\mathbb{Q}_{p}^{N}$ are either disjoint or one is contained in the other. As a topological space $\left(\mathbb{Q}_{p}^{N},||\cdot||_{p}\right)$ is totally disconnected, i.e. the only connected subsets of $\mathbb{Q}_{p}^{N}$ are the empty set and the points. A subset of $\mathbb{Q}_{p}^{N}$ is compact if and only if it is closed and bounded in $\mathbb{Q}_{p}^{N}$, see e.g. \cite[Section 1.3]{V-V-Z}, or \cite[Section 1.8]{A-K-S}. The balls and spheres are compact subsets. Thus $\left(\mathbb{Q}_{p}^{N},||\cdot||_{p}\right)$ is a locally compact topological space. Since $(\mathbb{Q}_{p}^{N},+)$ is a locally compact topological group, there exists a Haar measure $d^{N}x$, which is invariant under translations, i.e. $d^{N}(x+a)=d^{N}x$. If we normalize this measure by the condition $\int_{\mathbb{Z}_{p}^{N}}d^{N}x=1$, then $d^{N}x$ is unique.
\begin{notation} We will use $\Omega\left(p^{-r}||x-a||_{p}\right)$ to denote the characteristic function of the ball $B_{r}^{N}(a)$. For more general sets, we will use the notation $1_{A}$ for the characteristic function of a set $A$. \end{notation}
\subsection{The Bruhat-Schwartz space} A complex-valued function $\varphi$ defined on $\mathbb{Q}_{p}^{N}$ is \textit{called locally constant} if for any $x\in\mathbb{Q}_{p}^{N}$ there exists an integer $l(x)\in\mathbb{Z}$ such that
\begin{equation}
\varphi(x+x^{\prime})=\varphi(x)\text{ for any }x^{\prime}\in B_{l(x)}^{N}. \label{local_constancy}
\end{equation}
A function $\varphi:\mathbb{Q}_{p}^{N}\rightarrow\mathbb{C}$ is called a \textit{Bruhat-Schwartz function (or a test function)} if it is locally constant with compact support. Any test function can be represented as a linear combination, with complex coefficients, of characteristic functions of balls. The $\mathbb{C}$-vector space of Bruhat-Schwartz functions is denoted by $\mathcal{D}(\mathbb{Q}_{p}^{N}):=\mathcal{D}$. We denote by $\mathcal{D}_{\mathbb{R}}(\mathbb{Q}_{p}^{N}):=\mathcal{D}_{\mathbb{R}}$ the $\mathbb{R}$-vector space of Bruhat-Schwartz functions. For $\varphi\in\mathcal{D}(\mathbb{Q}_{p}^{N})$, the largest number $l=l(\varphi)$ satisfying (\ref{local_constancy}) is called \textit{the exponent of local constancy (or the parameter of constancy) of} $\varphi$. We denote by $\mathcal{D}_{m}^{l}(\mathbb{Q}_{p}^{N})$ the finite-dimensional space of test functions from $\mathcal{D}(\mathbb{Q}_{p}^{N})$ having supports in the ball $B_{m}^{N}$ and with parameters of constancy $\geq l$. We now define a topology on $\mathcal{D}$ as follows. We say that a sequence $\left\{\varphi_{j}\right\}_{j\in\mathbb{N}}$ of functions in $\mathcal{D}$ converges to zero, if the two following conditions hold: (1) there are two fixed integers $k_{0}$ and $m_{0}$ such that each $\varphi_{j}\in\mathcal{D}_{m_{0}}^{k_{0}}$; (2) $\varphi_{j}\rightarrow0$ uniformly. $\mathcal{D}$ endowed with the above topology becomes a topological vector space.
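The following Python sketch (an illustration only; \texttt{Fraction} values stand in for $p$-adic numbers with finite expansions, not a full $p$-adic arithmetic library) evaluates a test function given as a finite linear combination of characteristic functions of disjoint balls, as in the definitions above, for $N=1$.
\begin{verbatim}
# Minimal sketch: a locally constant, compactly supported function
# phi(x) = sum_i phi(i) * Omega(p^l |x - i|_p), for N = 1.
from fractions import Fraction

def ord_p(x, p):
    """p-adic order of a rational; ord(0) = +infinity."""
    if x == 0:
        return float("inf")
    k, a, b = 0, x.numerator, x.denominator
    while a % p == 0: a //= p; k += 1
    while b % p == 0: b //= p; k -= 1
    return k

def omega(x, i, l, p):
    """Omega(p^l |x - i|_p): indicator of the ball i + p^l Z_p,
    i.e. |x - i|_p <= p^{-l}, i.e. ord(x - i) >= l."""
    return 1.0 if ord_p(x - i, p) >= l else 0.0

def test_function(x, coeffs, l, p):
    """coeffs maps ball centers i to real values phi(i)."""
    return sum(v * omega(x, i, l, p) for i, v in coeffs.items())

# phi = 2.5 on 0 + 3Z_3 and -1.0 on 1/3 + 3Z_3, with p = 3, l = 1
phi = {Fraction(0): 2.5, Fraction(1, 3): -1.0}
print(test_function(Fraction(9), phi, 1, 3))  # 9 in 0+3Z_3 -> 2.5
\end{verbatim}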
\subsection{$L^{\rho}$ spaces} Given $\rho\in\lbrack1,\infty)$, we denote by $L^{\rho}:=L^{\rho}\left(\mathbb{Q}_{p}^{N}\right):=L^{\rho}\left(\mathbb{Q}_{p}^{N},d^{N}x\right)$ the $\mathbb{C}$-vector space of all the complex-valued functions $g$ satisfying $\int_{\mathbb{Q}_{p}^{N}}\left\vert g\left(x\right)\right\vert^{\rho}d^{N}x<\infty$. The corresponding $\mathbb{R}$-vector spaces are denoted as $L_{\mathbb{R}}^{\rho}:=L_{\mathbb{R}}^{\rho}\left(\mathbb{Q}_{p}^{N}\right)=L_{\mathbb{R}}^{\rho}\left(\mathbb{Q}_{p}^{N},d^{N}x\right)$, $1\leq\rho<\infty$. If $U$ is an open subset of $\mathbb{Q}_{p}^{N}$ and $\mathcal{D}(U)$ denotes the space of test functions with supports contained in $U$, then $\mathcal{D}(U)$ is dense in
\[
L^{\rho}\left(U\right)=\left\{\varphi:U\rightarrow\mathbb{C};\left\Vert\varphi\right\Vert_{\rho}=\left\{\int_{U}\left\vert\varphi\left(x\right)\right\vert^{\rho}d^{N}x\right\}^{\frac{1}{\rho}}<\infty\right\},
\]
where $d^{N}x$ is the normalized Haar measure on $\left(\mathbb{Q}_{p}^{N},+\right)$, for $1\leq\rho<\infty$, see e.g. \cite[Section 4.3]{A-K-S}. We denote by $L_{\mathbb{R}}^{\rho}\left(U\right)$ the real counterpart of $L^{\rho}\left(U\right)$.
\subsection{The Fourier transform} Set $\chi_{p}(y)=\exp(2\pi i\{y\}_{p})$ for $y\in\mathbb{Q}_{p}$. The map $\chi_{p}(\cdot)$ is an additive character on $\mathbb{Q}_{p}$, i.e. a continuous map from $\left(\mathbb{Q}_{p},+\right)$ into $S$ (the unit circle considered as a multiplicative group) satisfying $\chi_{p}(x_{0}+x_{1})=\chi_{p}(x_{0})\chi_{p}(x_{1})$, $x_{0},x_{1}\in\mathbb{Q}_{p}$. The additive characters of $\mathbb{Q}_{p}$ form an Abelian group which is isomorphic to $\left(\mathbb{Q}_{p},+\right)$. The isomorphism is given by $\kappa\rightarrow\chi_{p}(\kappa x)$, see e.g. \cite[Section 2.3]{A-K-S}. Given $\kappa=(\kappa_{1},\dots,\kappa_{N})$ and $x=(x_{1},\dots,x_{N})\in\mathbb{Q}_{p}^{N}$, we set $\kappa\cdot x:=\sum_{j=1}^{N}\kappa_{j}x_{j}$. The Fourier transform of $\varphi\in\mathcal{D}(\mathbb{Q}_{p}^{N})$ is defined as
\[
(\mathcal{F}\varphi)(\kappa)=\int_{\mathbb{Q}_{p}^{N}}\chi_{p}(\kappa\cdot x)\varphi(x)d^{N}x\quad\text{for }\kappa\in\mathbb{Q}_{p}^{N},
\]
where $d^{N}x$ is the normalized Haar measure on $\mathbb{Q}_{p}^{N}$. The Fourier transform is a linear isomorphism from $\mathcal{D}(\mathbb{Q}_{p}^{N})$ onto itself satisfying
\begin{equation}
(\mathcal{F}(\mathcal{F}\varphi))(\kappa)=\varphi(-\kappa), \label{Eq_FFT}
\end{equation}
see e.g. \cite[Section 4.8]{A-K-S}. We will also use the notation $\mathcal{F}_{x\rightarrow\kappa}\varphi$ and $\widehat{\varphi}$ for the Fourier transform of $\varphi$. The Fourier transform extends to $L^{2}$. If $f\in L^{2}$, its Fourier transform is defined as
\[
(\mathcal{F}f)(\kappa)=\lim_{k\rightarrow\infty}\int_{||x||_{p}\leq p^{k}}\chi_{p}(\kappa\cdot x)f(x)d^{N}x,\quad\text{for }\kappa\in\mathbb{Q}_{p}^{N},
\]
where the limit is taken in $L^{2}$. We recall that the Fourier transform is unitary on $L^{2}$, i.e. $||f||_{L^{2}}=||\mathcal{F}f||_{L^{2}}$ for $f\in L^{2}$, and that (\ref{Eq_FFT}) is also valid in $L^{2}$, see e.g. \cite[Chapter III, Section 2]{Taibleson}.
\subsection{Distributions} The $\mathbb{C}$-vector space $\mathcal{D}^{\prime}\left(\mathbb{Q}_{p}^{n}\right):=\mathcal{D}^{\prime}$ of all continuous linear functionals on $\mathcal{D}(\mathbb{Q}_{p}^{n})$ is called the \textit{Bruhat-Schwartz space of distributions}.
Every linear functional on $\mathcal{D}$ is continuous, i.e. $\mathcal{D}^{\prime}$ agrees with the algebraic dual of $\mathcal{D}$, see e.g. \cite[Chapter 1, VI.3, Lemma]{V-V-Z}. We denote by $\mathcal{D}_{\mathbb{R}}^{\prime}\left(\mathbb{Q}_{p}^{n}\right):=\mathcal{D}_{\mathbb{R}}^{\prime}$ the dual space of $\mathcal{D}_{\mathbb{R}}$. We endow $\mathcal{D}^{\prime}$ with the weak topology, i.e. a sequence $\left\{T_{j}\right\}_{j\in\mathbb{N}}$ in $\mathcal{D}^{\prime}$ converges to $T$ if $\lim_{j\rightarrow\infty}T_{j}\left(\varphi\right)=T\left(\varphi\right)$ for any $\varphi\in\mathcal{D}$. The map
\[
\begin{array}[c]{lll}
\mathcal{D}^{\prime}\times\mathcal{D} & \rightarrow & \mathbb{C}\\
\left(T,\varphi\right) & \rightarrow & T\left(\varphi\right)
\end{array}
\]
is a bilinear form which is continuous in $T$ and $\varphi$ separately. We call this map the pairing between $\mathcal{D}^{\prime}$ and $\mathcal{D}$. From now on we will use $\left(T,\varphi\right)$ instead of $T\left(\varphi\right)$. Every $f$ in $L_{loc}^{1}$ defines a distribution $f\in\mathcal{D}^{\prime}\left(\mathbb{Q}_{p}^{n}\right)$ by the formula
\[
\left(f,\varphi\right)=\int\limits_{\mathbb{Q}_{p}^{n}}f\left(x\right)\varphi\left(x\right)d^{n}x.
\]
Such distributions are called \textit{regular distributions}. Notice that for $f\in L_{\mathbb{R}}^{2}$, $\left(f,\varphi\right)=\left\langle f,\varphi\right\rangle$, where $\left\langle\cdot,\cdot\right\rangle$ denotes the scalar product in $L_{\mathbb{R}}^{2}$.
\begin{remark} \label{Nota_Nuclear}Let $B(\psi,\varphi)$ be a bilinear functional, $\psi\in\mathcal{D}\left(\mathbb{Q}_{p}^{n}\right)$, $\varphi\in\mathcal{D}\left(\mathbb{Q}_{p}^{m}\right)$. Then there exists a unique distribution $T\in\mathcal{D}^{\prime}\left(\mathbb{Q}_{p}^{n}\times\mathbb{Q}_{p}^{m}\right)$ such that
\[
\left(T,\psi\left(x\right)\varphi\left(y\right)\right)=B(\psi,\varphi)\text{, for }\psi\in\mathcal{D}\left(\mathbb{Q}_{p}^{n}\right),\varphi\in\mathcal{D}\left(\mathbb{Q}_{p}^{m}\right),
\]
cf. \cite[Chapter 1, VI.7, Theorem]{V-V-Z}. \end{remark}
\subsection{The Fourier transform of a distribution} The Fourier transform $\mathcal{F}\left[T\right]$ of a distribution $T\in\mathcal{D}^{\prime}\left(\mathbb{Q}_{p}^{n}\right)$ is defined by
\[
\left(\mathcal{F}\left[T\right],\varphi\right)=\left(T,\mathcal{F}\left[\varphi\right]\right)\text{ for all }\varphi\in\mathcal{D}(\mathbb{Q}_{p}^{n})\text{.}
\]
The Fourier transform $T\rightarrow\mathcal{F}\left[T\right]$ is a linear (and continuous) isomorphism from $\mathcal{D}^{\prime}\left(\mathbb{Q}_{p}^{n}\right)$ onto $\mathcal{D}^{\prime}\left(\mathbb{Q}_{p}^{n}\right)$. Furthermore, $T=\mathcal{F}\left[\mathcal{F}\left[T\right]\left(-\xi\right)\right]$.
\section{$\boldsymbol{W}_{\delta}$ operators and their discretizations}
\subsection{The $\boldsymbol{W}_{\delta}$ operators} Take $\mathbb{R}_{+}:=\left\{x\in\mathbb{R};x\geq0\right\}$, and fix a function
\[
w_{\delta}:\mathbb{Q}_{p}^{N}\rightarrow\mathbb{R}_{+}
\]
satisfying the following properties: \noindent(i) $w_{\delta}\left(y\right)$ is radial, i.e.
$w_{\delta}(y)=w_{\delta}(\left\Vert y\right\Vert _{p})$;

\noindent(ii) $w_{\delta}(\left\Vert y\right\Vert _{p})$ is a continuous and increasing function of $\left\Vert y\right\Vert _{p}$;

\noindent(iii) $w_{\delta}\left( y\right) =0$ if and only if $y=0$;

\noindent(iv) there exist constants $C_{0},C_{1}>0$ and $\delta>N$ such that
\begin{equation}
C_{0}\left\Vert y\right\Vert _{p}^{\delta}\leq w_{\delta}(\left\Vert y\right\Vert _{p})\leq C_{1}\left\Vert y\right\Vert _{p}^{\delta}\text{, for }y\in\mathbb{Q}_{p}^{N}\text{.} \label{Eq_1}
\end{equation}

We now define the operator
\begin{equation}
\boldsymbol{W}_{\delta}\varphi(x)={\int\limits_{\mathbb{Q}_{p}^{N}}}\frac{\varphi\left( x-y\right) -\varphi\left( x\right) }{w_{\delta}\left( \Vert y\Vert_{p}\right) }d^{N}y\text{, for }\varphi\in\mathcal{D}\left( \mathbb{Q}_{p}^{N}\right) \text{.} \label{EQ_oper_W_def}
\end{equation}
The operator $\boldsymbol{W}_{\delta}$ is pseudodifferential; more precisely, if
\begin{equation}
A_{w_{\delta}}\left( \kappa\right) :={\int\limits_{\mathbb{Q}_{p}^{N}}}\frac{1-\chi_{p}\left( y\cdot\kappa\right) }{w_{\delta}\left( \Vert y\Vert_{p}\right) }d^{N}y, \label{Eq_Kernel}
\end{equation}
then
\begin{equation}
\boldsymbol{W}_{\delta}\varphi\left( x\right) =-\mathcal{F}_{\kappa\rightarrow x}^{-1}\left[ A_{w_{\delta}}\left( \kappa\right) \mathcal{F}_{x\rightarrow\kappa}\varphi\right] =:-\boldsymbol{W}\left( \partial,\delta\right) \varphi\left( x\right) \text{, for }\varphi\in\mathcal{D}\left( \mathbb{Q}_{p}^{N}\right) \text{.} \label{EQ_oper_W_pseudo}
\end{equation}
The function $A_{w_{\delta}}\left( \kappa\right) $ is radial (so we use the notation $A_{w_{\delta}}\left( \kappa\right) =A_{w_{\delta}}\left( \Vert\kappa\Vert_{p}\right) $), continuous, non-negative, $A_{w_{\delta}}\left( 0\right) =0$, and it satisfies
\[
C_{0}^{\prime}\left\Vert \kappa\right\Vert _{p}^{\delta-N}\leq A_{w_{\delta}}(\left\Vert \kappa\right\Vert _{p})\leq C_{1}^{\prime}\left\Vert \kappa\right\Vert _{p}^{\delta-N}\text{, for }\kappa\in\mathbb{Q}_{p}^{N}\text{,}
\]
cf. \cite[Lemmas 4, 5, 8]{Zuniga-LNM-2016}. The operator $\boldsymbol{W}\left( \partial,\delta\right) $ extends to an unbounded and densely defined operator in $L^{2}\left( \mathbb{Q}_{p}^{N}\right) $ with domain
\begin{equation}
Dom(\boldsymbol{W}\left( \partial,\delta\right) )=\left\{ \varphi\in L^{2};A_{w_{\delta}}(\left\Vert \kappa\right\Vert _{p})\mathcal{F}\varphi\in L^{2}\right\} . \label{Dom_W}
\end{equation}
In addition:

\noindent(i) $\left( \boldsymbol{W}\left( \partial,\delta\right) ,Dom(\boldsymbol{W}\left( \partial,\delta\right) )\right) $ is a self-adjoint and positive operator;

\noindent(ii) $-\boldsymbol{W}\left( \partial,\delta\right) $ is the infinitesimal generator of a contraction $C_{0}$-semigroup, cf. \cite[Proposition 7]{Zuniga-LNM-2016}.

A relevant fact is that the evolution equation
\[
\frac{\partial u\left( x,t\right) }{\partial t}+\boldsymbol{W}\left( \partial,\delta\right) u(x,t)=0\text{, \ \ \ }x\in\mathbb{Q}_{p}^{N}\text{, }t\geq0\text{,}
\]
is a $p$-adic heat equation, which means that the corresponding semigroup is attached to a Markov stochastic process, see \cite[Theorem 16]{Zuniga-LNM-2016}.
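Formula (\ref{EQ_oper_W_pseudo}) follows from (\ref{EQ_oper_W_def})-(\ref{Eq_Kernel}) by a one-line computation: since $\mathcal{F}_{x\rightarrow\kappa}\left[ \varphi\left( x-y\right) \right] =\chi_{p}\left( \kappa\cdot y\right) \widehat{\varphi}\left( \kappa\right) $, we have
\[
\mathcal{F}_{x\rightarrow\kappa}\left[ \boldsymbol{W}_{\delta}\varphi\right] =\widehat{\varphi}\left( \kappa\right) {\int\limits_{\mathbb{Q}_{p}^{N}}}\frac{\chi_{p}\left( \kappa\cdot y\right) -1}{w_{\delta}\left( \Vert y\Vert_{p}\right) }d^{N}y=-A_{w_{\delta}}\left( \kappa\right) \widehat{\varphi}\left( \kappa\right) \text{, for }\varphi\in\mathcal{D}\left( \mathbb{Q}_{p}^{N}\right) \text{.}
\]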
\begin{example}
An important example of a $\boldsymbol{W}\left( \partial,\delta\right) $ operator is the Taibleson-Vladimirov operator, which is defined as
\[
\boldsymbol{D}^{\beta}\phi\left( x\right) =\frac{1-p^{\beta}}{1-p^{-\beta-N}}\int\limits_{\mathbb{Q}_{p}^{N}}\frac{\phi\left( x-y\right) -\phi\left( x\right) }{\left\Vert y\right\Vert _{p}^{\beta+N}}d^{N}y=\mathcal{F}_{\kappa\rightarrow x}^{-1}\left( \left\Vert \kappa\right\Vert _{p}^{\beta}\mathcal{F}_{x\rightarrow\kappa}\phi\right) \text{,}
\]
where $\beta>0$ and $\phi\in\mathcal{D}\left( \mathbb{Q}_{p}^{N}\right) $, see \cite[Section 2.2.7]{Zuniga-LNM-2016}.
\end{example}

The $\boldsymbol{W}_{\delta}$ operators were introduced by Chac\'{o}n-Cort\'{e}s and Z\'{u}\~{n}iga-Galindo, see \cite{Zuniga-LNM-2016} and the references therein. They are a generalization of the Vladimirov and Taibleson operators.

\subsection{Discretization of $\boldsymbol{W}_{\delta}$ operators}
For $l\geq1$, we set $G_{l}:=p^{-l}\mathbb{Z}_{p}^{N}/p^{l}\mathbb{Z}_{p}^{N}$ and denote by $\mathcal{D}_{\mathbb{R}}^{l}(\mathbb{Q}_{p}^{N}):=\mathcal{D}_{\mathbb{R}}^{l}$ the $\mathbb{R}$-vector space of all test functions of the form
\begin{equation}
\varphi\left( x\right) ={\textstyle\sum\limits_{\boldsymbol{i}\in G_{l}}}\varphi\left( \boldsymbol{i}\right) \Omega\left( p^{l}\left\Vert x-\boldsymbol{i}\right\Vert _{p}\right) \text{, \ }\varphi\left( \boldsymbol{i}\right) \in\mathbb{R}\text{,} \label{Eq_repre}
\end{equation}
where $\boldsymbol{i}$ runs through a fixed system of representatives of $G_{l}$, and $\Omega\left( p^{l}\left\Vert x-\boldsymbol{i}\right\Vert _{p}\right) $ is the characteristic function of the ball $\boldsymbol{i}+p^{l}\mathbb{Z}_{p}^{N}$. Notice that $\varphi$ is supported on $p^{-l}\mathbb{Z}_{p}^{N}$ and that $\mathcal{D}_{\mathbb{R}}^{l}$ is a finite dimensional vector space spanned by the basis
\begin{equation}
\left\{ \Omega\left( p^{l}\left\Vert x-\boldsymbol{i}\right\Vert _{p}\right) \right\} _{\boldsymbol{i}\in G_{l}}. \label{Basis}
\end{equation}
Then we will identify $\varphi\in\mathcal{D}_{\mathbb{R}}^{l}$ with the column vector $\left[ \varphi\left( \boldsymbol{i}\right) \right] _{\boldsymbol{i}\in G_{l}}$. Furthermore, $\mathcal{D}_{\mathbb{R}}^{l}\hookrightarrow\mathcal{D}_{\mathbb{R}}^{l+1}$ (continuous embedding), and $\mathcal{D}_{\mathbb{R}}=\underrightarrow{\lim}\mathcal{D}_{\mathbb{R}}^{l}=\cup_{l=1}^{\infty}\mathcal{D}_{\mathbb{R}}^{l}$.

\begin{remark}
We set
\[
d\left( l,w_{\delta}\right) :={\int\limits_{\mathbb{Q}_{p}^{N}\setminus B_{-l}^{N}}}\frac{d^{N}y}{w_{\delta}\left( \Vert y\Vert_{p}\right) }.
\]
By (\ref{Eq_1}), $d\left( l,w_{\delta}\right) <\infty$. Furthermore, we have
\begin{equation}
\frac{p^{\left( \delta-N\right) l}}{C_{1}}{\int\limits_{\mathbb{Q}_{p}^{N}\setminus\mathbb{Z}_{p}^{N}}}\frac{d^{N}z}{\Vert z\Vert_{p}^{\delta}}\leq d\left( l,w_{\delta}\right) \leq\frac{p^{\left( \delta-N\right) l}}{C_{0}}{\int\limits_{\mathbb{Q}_{p}^{N}\setminus\mathbb{Z}_{p}^{N}}}\frac{d^{N}z}{\Vert z\Vert_{p}^{\delta}}, \label{Eq_1A}
\end{equation}
which implies that $d\left( l,w_{\delta}\right) \geq Cp^{\left( \delta-N\right) l}$ for some positive constant $C$. In particular, $d\left( l,w_{\delta}\right) \rightarrow\infty$ as $l\rightarrow\infty$.
\end{remark}

We denote by $\boldsymbol{W}_{\delta}^{\left( l\right) }$ the restriction $\boldsymbol{W}_{\delta}:\mathcal{D}_{\mathbb{R}}\left( B_{l}^{N}\right) \rightarrow\mathcal{D}_{\mathbb{R}}\left( B_{l}^{N}\right) $.
Take $\varphi\in\mathcal{D}_{\mathbb{R}}\left( B_{l}^{N}\right) $; then
\begin{gather}
\boldsymbol{W}_{\delta}^{\left( l\right) }\varphi(x)={\int\limits_{\mathbb{Q}_{p}^{N}}}\frac{\varphi\left( x-y\right) -\varphi\left( x\right) }{w_{\delta}\left( \Vert y\Vert_{p}\right) }d^{N}y={\int\limits_{B_{l}^{N}}}\frac{\varphi\left( x-y\right) -\varphi\left( x\right) }{w_{\delta}\left( \Vert y\Vert_{p}\right) }d^{N}y+\label{Eq_D_l}\\
{\int\limits_{\mathbb{Q}_{p}^{N}\smallsetminus B_{l}^{N}}}\frac{\varphi\left( x-y\right) -\varphi\left( x\right) }{w_{\delta}\left( \Vert y\Vert_{p}\right) }d^{N}y={\int\limits_{B_{l}^{N}}}\frac{\varphi\left( x-y\right) -\varphi\left( x\right) }{w_{\delta}\left( \Vert y\Vert_{p}\right) }d^{N}y-\left( {\int\limits_{\mathbb{Q}_{p}^{N}\setminus B_{l}^{N}}}\frac{d^{N}y}{w_{\delta}\left( \Vert y\Vert_{p}\right) }\right) \varphi\left( x\right) .\nonumber
\end{gather}

\begin{notation}
The cardinality of a finite set $A$ is denoted as $\#A$.
\end{notation}

We set
\begin{equation}
A_{\boldsymbol{i},\boldsymbol{j}}\left( l\right) :=\left\{
\begin{array}[c]{lll}
\frac{p^{-lN}}{w_{\delta}\left( \left\Vert \boldsymbol{i}-\boldsymbol{j}\right\Vert _{p}\right) } & \text{if} & \boldsymbol{i}\neq\boldsymbol{j}\\
& & \\
0 & \text{if} & \boldsymbol{i}=\boldsymbol{j}\text{,}
\end{array}
\right. \label{Eq_Matrix_A}
\end{equation}
and $A:=\left[ A_{\boldsymbol{i},\boldsymbol{j}}\left( l\right) \right] _{\boldsymbol{i},\boldsymbol{j}\in G_{l}}$. We denote by $\mathbb{I}$ the identity matrix of size $\#G_{l}\times\#G_{l}$.

\begin{lemma}
\label{Lemma4}The restriction $\boldsymbol{W}_{\delta}^{\left( l\right) }:\mathcal{D}_{\mathbb{R}}^{l}\rightarrow\mathcal{D}_{\mathbb{R}}^{l}$ is a well-defined linear operator. Furthermore, the following formula holds true:
\[
\boldsymbol{W}_{\delta}^{\left( l\right) }\varphi(x)={\textstyle\sum\limits_{\boldsymbol{i}\in G_{l}}}\left\{ {\textstyle\sum\limits_{\boldsymbol{j}\in G_{l}}}A_{\boldsymbol{i},\boldsymbol{j}}\left( l\right) \varphi\left( \boldsymbol{j}\right) -\varphi\left( \boldsymbol{i}\right) d\left( l,w_{\delta}\right) \right\} \Omega\left( p^{l}\left\Vert x-\boldsymbol{i}\right\Vert _{p}\right) \text{,}
\]
which implies that $A-d\left( l,w_{\delta}\right) \mathbb{I}$ is the matrix of the operator $\boldsymbol{W}_{\delta}^{\left( l\right) }$ in the basis (\ref{Basis}).
\end{lemma}

\begin{proof}
For $x\in\boldsymbol{i}+p^{l}\mathbb{Z}_{p}^{N}$ and for $\varphi\left( x\right) $ of the form (\ref{Eq_repre}), we have
\begin{gather*}
\boldsymbol{W}_{\delta}^{\left( l\right) }\varphi(x)={\int\limits_{\mathbb{Q}_{p}^{N}}}\frac{\varphi\left( y\right) -\varphi\left( x\right) }{w_{\delta}\left( \Vert y-x\Vert_{p}\right) }d^{N}y={\int\limits_{\mathbb{Q}_{p}^{N}}}\frac{{\textstyle\sum\limits_{\boldsymbol{j}\in G_{l}}}\varphi\left( \boldsymbol{j}\right) \Omega\left( p^{l}\left\Vert y-\boldsymbol{j}\right\Vert _{p}\right) -\varphi\left( \boldsymbol{i}\right) \Omega\left( p^{l}\left\Vert x-\boldsymbol{i}\right\Vert _{p}\right) }{w_{\delta}\left( \Vert y-x\Vert_{p}\right) }d^{N}y\\
={\textstyle\sum\limits_{\substack{\boldsymbol{j}\in G_{l}\\\boldsymbol{j}\neq\boldsymbol{i}}}}\text{ }{\int\limits_{\mathbb{Q}_{p}^{N}}}\frac{\varphi\left( \boldsymbol{j}\right) \Omega\left( p^{l}\left\Vert y-\boldsymbol{j}\right\Vert _{p}\right) }{w_{\delta}\left( \Vert y-x\Vert_{p}\right) }d^{N}y+{\int\limits_{\mathbb{Q}_{p}^{N}}}\frac{\varphi\left( \boldsymbol{i}\right) \left\{ \Omega\left( p^{l}\left\Vert y-\boldsymbol{i}\right\Vert _{p}\right) -\Omega\left( p^{l}\left\Vert x-\boldsymbol{i}\right\Vert _{p}\right) \right\} }{w_{\delta}\left( \Vert y-x\Vert_{p}\right) }d^{N}y\\
={\textstyle\sum\limits_{\substack{\boldsymbol{j}\in G_{l}\\\boldsymbol{j}\neq\boldsymbol{i}}}}A_{\boldsymbol{i},\boldsymbol{j}}\left( l\right) \varphi\left( \boldsymbol{j}\right) +{\int\limits_{\mathbb{Q}_{p}^{N}\setminus\left( \boldsymbol{i}+p^{l}\mathbb{Z}_{p}^{N}\right) }}\frac{\varphi\left( \boldsymbol{i}\right) \left\{ \Omega\left( p^{l}\left\Vert y-\boldsymbol{i}\right\Vert _{p}\right) -1\right\} }{w_{\delta}\left( \Vert y-x\Vert_{p}\right) }d^{N}y.
\end{gather*}
Now
\begin{align*}
{\int\limits_{\mathbb{Q}_{p}^{N}\setminus\left( \boldsymbol{i}+p^{l}\mathbb{Z}_{p}^{N}\right) }}\frac{\varphi\left( \boldsymbol{i}\right) \left\{ \Omega\left( p^{l}\left\Vert y-\boldsymbol{i}\right\Vert _{p}\right) -1\right\} }{w_{\delta}\left( \Vert y-x\Vert_{p}\right) }d^{N}y & ={\int\limits_{\mathbb{Q}_{p}^{N}\setminus p^{l}\mathbb{Z}_{p}^{N}}}\frac{\varphi\left( \boldsymbol{i}\right) \left\{ \Omega\left( p^{l}\left\Vert z\right\Vert _{p}\right) -1\right\} }{w_{\delta}\left( \Vert z+\left( \boldsymbol{i}-x\right) \Vert_{p}\right) }d^{N}z\\
& =-\varphi\left( \boldsymbol{i}\right) {\int\limits_{\mathbb{Q}_{p}^{N}\setminus p^{l}\mathbb{Z}_{p}^{N}}}\frac{d^{N}z}{w_{\delta}\left( \Vert z\Vert_{p}\right) }=-\varphi\left( \boldsymbol{i}\right) d\left( l,w_{\delta}\right) ,
\end{align*}
where we used that $\left\Vert z+\left( \boldsymbol{i}-x\right) \right\Vert _{p}=\left\Vert z\right\Vert _{p}$, since $\left\Vert z\right\Vert _{p}>p^{-l}\geq\left\Vert \boldsymbol{i}-x\right\Vert _{p}$. This gives the announced formula.
\end{proof}

\section{Energy functionals}

\subsection{Energy functionals in the coordinate space}
For $\varphi\in\mathcal{D}_{\mathbb{R}}(\mathbb{Q}_{p}^{N})$, and $\delta>N$, $\gamma>0$, $\alpha_{2}>0$, we define the energy functional
\begin{equation}
E_{0}(\varphi):=E_{0}(\varphi;\delta,\gamma,\alpha_{2})=\frac{\gamma}{4}\text{ }{\textstyle\iint\limits_{\mathbb{Q}_{p}^{N}\times\mathbb{Q}_{p}^{N}}}\text{ }\frac{\left\{ \varphi\left( x\right) -\varphi\left( y\right) \right\} ^{2}}{w_{\delta}\left( \left\Vert x-y\right\Vert _{p}\right) }d^{N}xd^{N}y+\frac{\alpha_{2}}{2}\int\limits_{\mathbb{Q}_{p}^{N}}\varphi^{2}\left( x\right) d^{N}x\geq0. \label{Energy_Functioal_E_0}
\end{equation}
Then $E_{0}$ is a well-defined real-valued functional on $\mathcal{D}_{\mathbb{R}}$. Notice that $E_{0}(\varphi)=0$ if and only if $\varphi=0$. The restriction of $E_{0}$ to $\mathcal{D}_{\mathbb{R}}^{l}$ (denoted as $E_{0}^{\left( l\right) }$) provides a natural discretization of $E_{0}$.
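The matrix description of $\boldsymbol{W}_{\delta}^{\left( l\right) }$ in Lemma \ref{Lemma4} is straightforward to implement numerically. The following minimal sketch (illustrative only, and not part of the formal development) assumes $N=1$, the Taibleson weight $w_{\delta}\left( \Vert y\Vert_{p}\right) =\Vert y\Vert_{p}^{\delta}$, for which $d\left( l,w_{\delta}\right) $ can be evaluated in closed form, and hypothetical parameter values:
\begin{verbatim}
# Minimal sketch (illustrative only): the matrix A - d(l, w_delta) I of
# Lemma 4 for N = 1 and the Taibleson weight w_delta(|y|_p) = |y|_p^delta.
# Representatives of G_l = p^{-l} Z_p / p^l Z_p are m p^{-l}, 0 <= m < p^{2l}.
import numpy as np

p, l, delta = 3, 1, 2.5          # hypothetical values, delta > N = 1

def vp(m):                       # p-adic valuation of a nonzero integer
    v = 0
    while m % p == 0:
        m, v = m // p, v + 1
    return v

def norm(m):                     # |m p^{-l}|_p for m not divisible by p^{2l}
    return float(p) ** (l - vp(m))

M = p ** (2 * l)                 # #G_l
# For this weight, d(l, w_delta) = (1 - 1/p) p^{(1-l)(1-delta)} / (1 - p^{1-delta}).
d = (1 - 1 / p) * p ** ((1 - l) * (1 - delta)) / (1 - p ** (1 - delta))

A = np.array([[0.0 if i == j else p ** (-l) / norm(i - j) ** delta
               for j in range(M)] for i in range(M)])
W_l = A - d * np.eye(M)          # matrix of W_delta^{(l)} in the basis (Basis)

# -W_delta is a positive operator, so d I - A must be positive semidefinite:
assert np.all(np.linalg.eigvalsh(d * np.eye(M) - A) >= -1e-12)
\end{verbatim}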
\begin{remark}
\label{Nota_discretization}The functional
\[
E_{m}^{\prime}(\varphi):=\int\limits_{\mathbb{Q}_{p}^{N}}\varphi^{m}\left( x\right) d^{N}x\text{ for }m\in\mathbb{N}\smallsetminus\left\{ 0\right\} \text{, }\varphi\in\mathcal{D}_{\mathbb{R}}^{l}\text{,}
\]
discretizes as
\[
E_{m}^{\prime}(\varphi)=p^{-lN}{\textstyle\sum\limits_{\boldsymbol{i}\in G_{l}}}\varphi^{m}\left( \boldsymbol{i}\right) \text{.}
\]
\end{remark}

\begin{lemma}
\label{Lemma5}For $\varphi\in\mathcal{D}_{\mathbb{R}}^{l}$, the following formula holds true:
\[
E_{0}^{\left( l\right) }(\varphi)=p^{-lN}\left( \frac{\gamma}{2}d\left( l,w_{\delta}\right) +\frac{\alpha_{2}}{2}\right) {\textstyle\sum\limits_{\boldsymbol{i}\in G_{l}}}\varphi^{2}\left( \boldsymbol{i}\right) -\frac{\gamma}{2}p^{-lN}{\textstyle\sum\limits_{\boldsymbol{i},\boldsymbol{j}\in G_{l}}}A_{\boldsymbol{i},\boldsymbol{j}}(l)\varphi\left( \boldsymbol{i}\right) \varphi\left( \boldsymbol{j}\right) .
\]
\end{lemma}

\begin{proof}
We set
\begin{align*}
E_{0}^{\prime}(\varphi) & :=\frac{\gamma}{4}\text{ }{\textstyle\iint\limits_{\mathbb{Q}_{p}^{N}\times\mathbb{Q}_{p}^{N}}}\text{ }\frac{\left\{ \varphi\left( x\right) -\varphi\left( y\right) \right\} ^{2}}{w_{\delta}\left( \left\Vert x-y\right\Vert _{p}\right) }d^{N}xd^{N}y\\
& =\frac{\gamma}{4}\text{ }{\textstyle\iint\limits_{\mathbb{Q}_{p}^{N}\times\mathbb{Q}_{p}^{N}}}\text{ }\frac{\left\{ {\textstyle\sum\limits_{\boldsymbol{i}\in G_{l}}}\varphi\left( \boldsymbol{i}\right) \left[ \Omega\left( p^{l}\left\Vert x-\boldsymbol{i}\right\Vert _{p}\right) -\Omega\left( p^{l}\left\Vert y-\boldsymbol{i}\right\Vert _{p}\right) \right] \right\} ^{2}}{w_{\delta}\left( \left\Vert x-y\right\Vert _{p}\right) }d^{N}xd^{N}y.
\end{align*}
Now, by using that for $\boldsymbol{i}\neq\boldsymbol{j}$ the balls $\boldsymbol{i}+p^{l}\mathbb{Z}_{p}^{N}$ and $\boldsymbol{j}+p^{l}\mathbb{Z}_{p}^{N}$ are disjoint, so that $\Omega\left( p^{l}\left\Vert x-\boldsymbol{i}\right\Vert _{p}\right) \Omega\left( p^{l}\left\Vert x-\boldsymbol{j}\right\Vert _{p}\right) =0$, and relabeling the indices in the symmetric double sum, we get that
\begin{gather*}
\left\{ {\textstyle\sum\limits_{\boldsymbol{i}\in G_{l}}}\varphi\left( \boldsymbol{i}\right) \left[ \Omega\left( p^{l}\left\Vert x-\boldsymbol{i}\right\Vert _{p}\right) -\Omega\left( p^{l}\left\Vert y-\boldsymbol{i}\right\Vert _{p}\right) \right] \right\} ^{2}=\\
{\textstyle\sum\limits_{\boldsymbol{i}\in G_{l}}}\varphi^{2}\left( \boldsymbol{i}\right) \left[ \Omega\left( p^{l}\left\Vert x-\boldsymbol{i}\right\Vert _{p}\right) -\Omega\left( p^{l}\left\Vert y-\boldsymbol{i}\right\Vert _{p}\right) \right] ^{2}-2{\textstyle\sum\limits_{\substack{\boldsymbol{i},\boldsymbol{j}\in G_{l}\\\boldsymbol{i}\neq\boldsymbol{j}}}}\varphi\left( \boldsymbol{i}\right) \varphi\left( \boldsymbol{j}\right) \Omega\left( p^{l}\left\Vert x-\boldsymbol{i}\right\Vert _{p}\right) \Omega\left( p^{l}\left\Vert y-\boldsymbol{j}\right\Vert _{p}\right) .
\end{gather*}
Therefore
\[
E_{0}^{\prime}(\varphi)=\frac{\gamma}{4}\text{ }{\textstyle\sum\limits_{\boldsymbol{i}\in G_{l}}}E_{\boldsymbol{i}}^{\left( 1\right) }(\varphi)-\frac{\gamma}{2}\text{ }{\textstyle\sum\limits_{\substack{\boldsymbol{i},\boldsymbol{j}\in G_{l}\\\boldsymbol{i}\neq\boldsymbol{j}}}}E_{\boldsymbol{i},\boldsymbol{j}}^{(2)}(\varphi),
\]
where
\begin{gather*}
E_{\boldsymbol{i}}^{\left( 1\right) }(\varphi):=\varphi^{2}\left( \boldsymbol{i}\right) {\textstyle\iint\limits_{\mathbb{Q}_{p}^{N}\times\mathbb{Q}_{p}^{N}}}\text{ }\frac{\left[ \Omega\left( p^{l}\left\Vert x-\boldsymbol{i}\right\Vert _{p}\right) -\Omega\left( p^{l}\left\Vert y-\boldsymbol{i}\right\Vert _{p}\right) \right] ^{2}}{w_{\delta}\left( \left\Vert x-y\right\Vert _{p}\right) }d^{N}xd^{N}y=\\
\varphi^{2}\left( \boldsymbol{i}\right) \int\limits_{\left\Vert x\right\Vert _{p}>p^{-l}}\text{ }\int\limits_{\left\Vert y\right\Vert _{p}\leq p^{-l}}\text{ }\frac{d^{N}xd^{N}y}{w_{\delta}\left( \left\Vert x-y\right\Vert _{p}\right) }+\varphi^{2}\left( \boldsymbol{i}\right) \int\limits_{\left\Vert x\right\Vert _{p}\leq p^{-l}}\text{ }\int\limits_{\left\Vert y\right\Vert _{p}>p^{-l}}\text{ }\frac{d^{N}xd^{N}y}{w_{\delta}\left( \left\Vert x-y\right\Vert _{p}\right) }=\\
2\varphi^{2}\left( \boldsymbol{i}\right) \int\limits_{\left\Vert x\right\Vert _{p}>p^{-l}}\text{ }\int\limits_{\left\Vert y\right\Vert _{p}\leq p^{-l}}\text{ }\frac{d^{N}xd^{N}y}{w_{\delta}\left( \left\Vert x-y\right\Vert _{p}\right) }=2p^{-lN}\varphi^{2}\left( \boldsymbol{i}\right) d\left( l,w_{\delta}\right) .
\end{gather*}
And for $\boldsymbol{i},\boldsymbol{j}\in G_{l}$, with $\boldsymbol{i}\neq\boldsymbol{j}$,
\begin{align*}
E_{\boldsymbol{i},\boldsymbol{j}}^{(2)}(\varphi) & :=\varphi\left( \boldsymbol{i}\right) \varphi\left( \boldsymbol{j}\right) {\textstyle\iint\limits_{\mathbb{Q}_{p}^{N}\times\mathbb{Q}_{p}^{N}}}\frac{\Omega\left( p^{l}\left\Vert x-\boldsymbol{i}\right\Vert _{p}\right) \Omega\left( p^{l}\left\Vert y-\boldsymbol{j}\right\Vert _{p}\right) }{w_{\delta}\left( \left\Vert x-y\right\Vert _{p}\right) }d^{N}xd^{N}y\\
& =\frac{p^{-2lN}}{w_{\delta}\left( \left\Vert \boldsymbol{i}-\boldsymbol{j}\right\Vert _{p}\right) }\varphi\left( \boldsymbol{i}\right) \varphi\left( \boldsymbol{j}\right) .
\end{align*}
Consequently,
\begin{align}
E_{0}^{\prime}(\varphi) & =\frac{\gamma}{2}p^{-lN}d\left( l,w_{\delta}\right) {\textstyle\sum\limits_{\boldsymbol{i}\in G_{l}}}\varphi^{2}\left( \boldsymbol{i}\right) -\frac{\gamma}{2}{\textstyle\sum\limits_{\substack{\boldsymbol{i},\boldsymbol{j}\in G_{l}\\\boldsymbol{i}\neq\boldsymbol{j}}}}\text{ }\frac{p^{-2lN}}{w_{\delta}\left( \left\Vert \boldsymbol{i}-\boldsymbol{j}\right\Vert _{p}\right) }\varphi\left( \boldsymbol{i}\right) \varphi\left( \boldsymbol{j}\right) \nonumber\\
& =\frac{\gamma}{2}p^{-lN}d\left( l,w_{\delta}\right) {\textstyle\sum\limits_{\boldsymbol{i}\in G_{l}}}\varphi^{2}\left( \boldsymbol{i}\right) -\frac{\gamma}{2}p^{-lN}{\textstyle\sum\limits_{\boldsymbol{i},\boldsymbol{j}\in G_{l}}}A_{\boldsymbol{i},\boldsymbol{j}}(l)\varphi\left( \boldsymbol{i}\right) \varphi\left( \boldsymbol{j}\right) . \label{Eq_S_phi}
\end{align}
The announced formula follows from (\ref{Eq_S_phi}) by using Remark \ref{Nota_discretization}.
\end{proof}

We now set $U\left( l\right) :=U=\left[ U_{\boldsymbol{i},\boldsymbol{j}}(l)\right] _{\boldsymbol{i},\boldsymbol{j}\in G_{l}}$, where
\[
U_{\boldsymbol{i},\boldsymbol{j}}(l):=\left( \frac{\gamma}{2}d\left( l,w_{\delta}\right) +\frac{\alpha_{2}}{2}\right) \delta_{\boldsymbol{i},\boldsymbol{j}}-\frac{\gamma}{2}A_{\boldsymbol{i},\boldsymbol{j}}(l),
\]
where $\delta_{\boldsymbol{i},\boldsymbol{j}}$ denotes the Kronecker delta. Notice that $U=\left( \frac{\gamma}{2}d\left( l,w_{\delta}\right) +\frac{\alpha_{2}}{2}\right) \mathbb{I}-\frac{\gamma}{2}A$ is the matrix of the operator
\[
-\frac{\gamma}{2}\boldsymbol{W}_{\delta}+\frac{\alpha_{2}}{2}
\]
acting on $\mathcal{D}_{\mathbb{R}}^{l}$, in the basis (\ref{Basis}), cf. Lemma \ref{Lemma4}.

\begin{lemma}
\label{Lemma6}With the above notation the following formula holds true:
\begin{equation}
E_{0}^{\left( l\right) }(\varphi)=\left[ \varphi\left( \boldsymbol{i}\right) \right] _{\boldsymbol{i}\in G_{l}}^{T}p^{-lN}U(l)\left[ \varphi\left( \boldsymbol{i}\right) \right] _{\boldsymbol{i}\in G_{l}}={\textstyle\sum\limits_{\boldsymbol{i},\boldsymbol{j}\in G_{l}}}p^{-lN}U_{\boldsymbol{i},\boldsymbol{j}}(l)\varphi\left( \boldsymbol{i}\right) \varphi\left( \boldsymbol{j}\right) \geq0, \label{Eq_formula_E_0}
\end{equation}
for $\varphi\in\mathcal{D}_{\mathbb{R}}^{l}$, where $U$ is a symmetric, positive definite matrix. Consequently $p^{-lN}U(l)$ is a diagonalizable and invertible matrix.
\end{lemma}

\subsection{A motion equation in $\mathcal{D}_{\mathbb{R}}^{l}$}
Given $J\in\mathcal{D}_{\mathbb{R}}(\mathbb{Q}_{p}^{N})$, we set
\[
E_{0}\left( \varphi,J\right) :=E_{0}\left( \varphi,J;\delta,\gamma,\alpha_{2}\right) =E_{0}\left( \varphi\right) -{\textstyle\int\limits_{\mathbb{Q}_{p}^{N}}}J\left( x\right) \varphi\left( x\right) d^{N}x,
\]
for $\varphi\in\mathcal{D}_{\mathbb{R}}$. Notice that there exists a positive integer $l_{0}$ such that $J\in\mathcal{D}_{\mathbb{R}}^{l}$ for $l\geq l_{0}$. We denote by $E_{0}^{\left( l\right) }\left( \varphi,J\right) $ the restriction of $E_{0}\left( \varphi,J\right) $ to $\mathcal{D}_{\mathbb{R}}^{l}$.

\begin{lemma}
\label{Lemma7}Take $l\geq l_{0}$. Then the functional $E_{0}^{\left( l\right) }\left( \varphi,J\right) $ has a minimizer satisfying
\[
\left( -\gamma\boldsymbol{W}_{\delta}^{\left( l\right) }+\alpha_{2}\right) \varphi(x)=J(x).
\]
\end{lemma}

\begin{proof}
We identify $\mathcal{D}_{\mathbb{R}}^{l}\simeq\left( \mathbb{R}^{\#G_{l}},\left\vert \cdot\right\vert \right) $, where $\left\vert \left( t_{1},\ldots,t_{\#G_{l}}\right) \right\vert =\max_{1\leq j\leq\#G_{l}}\left\vert t_{j}\right\vert $.
Then by Lemmas \ref{Lemma5} and \ref{Lemma6}, $E_{0}^{\left( l\right) }\left( \varphi,J\right) $ is a function of $\left[ \varphi\left( \boldsymbol{i}\right) \right] _{\boldsymbol{i}\in G_{l}}$; more precisely,
\begin{align*}
E_{0}^{\left( l\right) }\left( \varphi,J\right) & =p^{-lN}\left( \frac{\gamma}{2}d\left( l,w_{\delta}\right) +\frac{\alpha_{2}}{2}\right) {\textstyle\sum\limits_{\boldsymbol{i}\in G_{l}}}\varphi^{2}\left( \boldsymbol{i}\right) -\frac{\gamma}{2}p^{-lN}{\textstyle\sum\limits_{\boldsymbol{i},\boldsymbol{j}\in G_{l}}}A_{\boldsymbol{i},\boldsymbol{j}}(l)\varphi\left( \boldsymbol{i}\right) \varphi\left( \boldsymbol{j}\right) -p^{-lN}{\textstyle\sum\limits_{\boldsymbol{i}\in G_{l}}}J(\boldsymbol{i})\varphi\left( \boldsymbol{i}\right) \\
& =\left[ \varphi\left( \boldsymbol{i}\right) \right] _{\boldsymbol{i}\in G_{l}}^{T}p^{-lN}U\left[ \varphi\left( \boldsymbol{i}\right) \right] _{\boldsymbol{i}\in G_{l}}-p^{-lN}\left[ \varphi\left( \boldsymbol{i}\right) \right] _{\boldsymbol{i}\in G_{l}}^{T}\left[ J\left( \boldsymbol{i}\right) \right] _{\boldsymbol{i}\in G_{l}}\text{.}
\end{align*}
Since $U$ is a positive definite matrix, the function $E_{0}^{\left( l\right) }\left( \varphi,J\right) $ has a minimizer, which satisfies
\[
\frac{\partial E_{0}^{\left( l\right) }\left( \varphi,J\right) }{\partial\varphi\left( \boldsymbol{i}\right) }=2p^{-lN}\left( \frac{\gamma}{2}d\left( l,w_{\delta}\right) +\frac{\alpha_{2}}{2}\right) \varphi\left( \boldsymbol{i}\right) -\gamma p^{-lN}{\textstyle\sum\limits_{\boldsymbol{j}\in G_{l}}}A_{\boldsymbol{i},\boldsymbol{j}}(l)\varphi\left( \boldsymbol{j}\right) -p^{-lN}J(\boldsymbol{i})=0\text{,}
\]
for all $\boldsymbol{i}\in G_{l}$ (the factor $\gamma$, rather than $\frac{\gamma}{2}$, comes from the symmetry of $A$), i.e.
\[
-\gamma\left\{ {\textstyle\sum\limits_{\boldsymbol{j}\in G_{l}}}A_{\boldsymbol{i},\boldsymbol{j}}(l)\varphi\left( \boldsymbol{j}\right) -d\left( l,w_{\delta}\right) \varphi\left( \boldsymbol{i}\right) \right\} +\alpha_{2}\varphi\left( \boldsymbol{i}\right) =J(\boldsymbol{i}).
\]
By using Lemma \ref{Lemma4} we get
\[
-\gamma\boldsymbol{W}_{\delta}^{\left( l\right) }\varphi\left( x\right) +\alpha_{2}\varphi\left( x\right) =J(x).
\]
\end{proof}

\subsection{The Fourier transform in $\mathcal{D}^{l}(\mathbb{Q}_{p}^{N})$}
We denote by $\mathcal{D}^{l}(\mathbb{Q}_{p}^{N}):=\mathcal{D}^{l}$ the $\mathbb{C}$-vector space of the test functions $\varphi\in\mathcal{D}(\mathbb{Q}_{p}^{N})$ having the form $\varphi\left( x\right) =\sum_{\boldsymbol{i}\in G_{l}}\varphi\left( \boldsymbol{i}\right) \Omega\left( p^{l}\left\Vert x-\boldsymbol{i}\right\Vert _{p}\right) $, $\varphi\left( \boldsymbol{i}\right) \in\mathbb{C}$. Alternatively, $\mathcal{D}^{l}$ is the $\mathbb{C}$-vector space of the test functions $\varphi\in\mathcal{D}(\mathbb{Q}_{p}^{N})$ satisfying:
\begin{enumerate}
\item supp $\varphi\subseteq B_{l}^{N}$;

\item for any $x\in B_{l}^{N}$, $\varphi\mid_{x+p^{l}\mathbb{Z}_{p}^{N}}=\varphi\left( x\right) $.
\end{enumerate}
Then by using that $\mathcal{F}_{x\rightarrow\kappa}\left( \Omega\left( p^{l}\left\Vert x-\boldsymbol{i}\right\Vert _{p}\right) \right) =p^{-lN}\chi_{p}\left( \boldsymbol{i\cdot}\kappa\right) \Omega\left( p^{-l}\left\Vert \kappa\right\Vert _{p}\right) $, we get that
\begin{equation}
\widehat{\varphi}\left( \kappa\right) =p^{-lN}\Omega\left( p^{-l}\left\Vert \kappa\right\Vert _{p}\right) \sum_{\boldsymbol{i}\in G_{l}}\varphi\left( \boldsymbol{i}\right) \chi_{p}\left( \boldsymbol{i\cdot}\kappa\right) . \label{Eq_10}
\end{equation}
By using the identity $\Omega\left( p^{-l}\left\Vert \kappa\right\Vert _{p}\right) =\sum_{\boldsymbol{j}\in G_{l}}\Omega\left( p^{l}\left\Vert \kappa-\boldsymbol{j}\right\Vert _{p}\right) $ in (\ref{Eq_10}),
\begin{align}
\widehat{\varphi}\left( \kappa\right) & =\sum_{\boldsymbol{j}\in G_{l}}\left\{ p^{-lN}\sum_{\boldsymbol{i}\in G_{l}}\varphi\left( \boldsymbol{i}\right) \chi_{p}\left( \boldsymbol{i\cdot j}\right) \right\} \Omega\left( p^{l}\left\Vert \kappa-\boldsymbol{j}\right\Vert _{p}\right) \nonumber\\
& =:\sum_{\boldsymbol{j}\in G_{l}}\widehat{\varphi}\left( \boldsymbol{j}\right) \Omega\left( p^{l}\left\Vert \kappa-\boldsymbol{j}\right\Vert _{p}\right) . \label{Eq_11}
\end{align}
Conversely,
\begin{align}
\varphi\left( x\right) & =\sum_{\boldsymbol{j}\in G_{l}}\left\{ p^{-lN}\sum_{\boldsymbol{i}\in G_{l}}\widehat{\varphi}\left( \boldsymbol{i}\right) \chi_{p}\left( -\boldsymbol{i\cdot j}\right) \right\} \Omega\left( p^{l}\left\Vert x-\boldsymbol{j}\right\Vert _{p}\right) \nonumber\\
& =\sum_{\boldsymbol{j}\in G_{l}}\varphi\left( \boldsymbol{j}\right) \Omega\left( p^{l}\left\Vert x-\boldsymbol{j}\right\Vert _{p}\right) . \label{Eq_11A}
\end{align}
It follows from (\ref{Eq_11})-(\ref{Eq_11A}) that the Fourier transform is an automorphism of the $\mathbb{C}$-vector space $\mathcal{D}^{l}$.

\begin{remark}
\label{Nota1}(i) For $\varphi\in\mathcal{D}_{\mathbb{R}}^{l}(\mathbb{Q}_{p}^{N})$, $\overline{\widehat{\varphi}\left( \kappa\right) }=\widehat{\varphi}\left( -\kappa\right) $ and
\begin{equation}
\left\vert \widehat{\varphi}\left( \kappa\right) \right\vert ^{2}=\sum_{\boldsymbol{i}\in G_{l}}\left\vert \widehat{\varphi}\left( \boldsymbol{i}\right) \right\vert ^{2}\Omega\left( p^{l}\left\Vert \kappa-\boldsymbol{i}\right\Vert _{p}\right) . \label{Eq_14}
\end{equation}
(ii) The formulae
\begin{equation}
\widehat{\varphi}\left( \boldsymbol{j}\right) =p^{-lN}\sum_{\boldsymbol{i}\in G_{l}}\varphi\left( \boldsymbol{i}\right) \chi_{p}\left( \boldsymbol{i\cdot j}\right) \text{,}\ \ \varphi\left( \boldsymbol{j}\right) =p^{-lN}\sum_{\boldsymbol{i}\in G_{l}}\widehat{\varphi}\left( \boldsymbol{i}\right) \chi_{p}\left( -\boldsymbol{i\cdot j}\right) \label{Eq_14A}
\end{equation}
give the discrete Fourier transform and its inverse on the additive group $G_{l}$.
\end{remark}

\subsection{Lizorkin spaces of second kind}
The space
\[
\mathcal{L}:=\mathcal{L}(\mathbb{Q}_{p}^{N})=\left\{ \varphi\in\mathcal{D}(\mathbb{Q}_{p}^{N});{\textstyle\int\limits_{\mathbb{Q}_{p}^{N}}}\varphi\left( x\right) d^{N}x=0\right\}
\]
is called \textit{the }$p$\textit{-adic Lizorkin space of second kind}. The real Lizorkin space of second kind is $\mathcal{L}_{\mathbb{R}}:=\mathcal{L}_{\mathbb{R}}(\mathbb{Q}_{p}^{N})=\mathcal{L}(\mathbb{Q}_{p}^{N})\cap\mathcal{D}_{\mathbb{R}}(\mathbb{Q}_{p}^{N})$.
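For instance:
\begin{example}
The function $\varphi\left( x\right) =\Omega\left( \left\Vert x\right\Vert _{p}\right) -p^{N}\Omega\left( p\left\Vert x\right\Vert _{p}\right) $ belongs to $\mathcal{L}_{\mathbb{R}}(\mathbb{Q}_{p}^{N})$: indeed, $\int_{\mathbb{Q}_{p}^{N}}\Omega\left( \left\Vert x\right\Vert _{p}\right) d^{N}x=1$, while $\Omega\left( p\left\Vert x\right\Vert _{p}\right) $ is the characteristic function of $p\mathbb{Z}_{p}^{N}$, whose volume is $p^{-N}$, so that $\int_{\mathbb{Q}_{p}^{N}}\varphi\left( x\right) d^{N}x=1-p^{N}p^{-N}=0$.
\end{example}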
If
\[
\mathcal{FL}:=\mathcal{FL}(\mathbb{Q}_{p}^{N})=\left\{ \widehat{\varphi}\in\mathcal{D}(\mathbb{Q}_{p}^{N});\widehat{\varphi}\left( 0\right) =0\right\} ,
\]
then the Fourier transform gives rise to an isomorphism of $\mathbb{C}$-vector spaces from $\mathcal{L}$ onto $\mathcal{FL}$. The topological dual $\mathcal{L}^{\prime}:=\mathcal{L}^{\prime}(\mathbb{Q}_{p}^{N})$ of the space $\mathcal{L}$ is called \textit{the }$p$\textit{-adic Lizorkin space of distributions of second kind.} The real version is denoted as $\mathcal{L}_{\mathbb{R}}^{\prime}:=\mathcal{L}_{\mathbb{R}}^{\prime}(\mathbb{Q}_{p}^{N})$.

Let $\boldsymbol{A}(\partial)$ be a pseudodifferential operator defined as
\[
\boldsymbol{A}(\partial)\varphi\left( x\right) =\mathcal{F}_{\kappa\rightarrow x}^{-1}(A(\left\Vert \kappa\right\Vert _{p})\mathcal{F}_{x\rightarrow\kappa}\varphi)\text{, for }\varphi\in\mathcal{D}_{\mathbb{R}}(\mathbb{Q}_{p}^{N}),
\]
where $A(\left\Vert \kappa\right\Vert _{p})$ is a real-valued and radial function satisfying
\[
A(\left\Vert \kappa\right\Vert _{p})=0\text{ if and only if }\kappa=0\text{.}
\]
Then the Lizorkin space $\mathcal{L}_{\mathbb{R}}$ is invariant under $\boldsymbol{A}(\partial)$. For further details about Lizorkin spaces and pseudodifferential operators, the reader may consult \cite[Sections 7.3, 9.2]{A-K-S}.

We now define, for $l\in\mathbb{N}\smallsetminus\left\{ 0\right\} $,
\[
\mathcal{L}^{l}:=\mathcal{L}^{l}(\mathbb{Q}_{p}^{N})=\left\{ \varphi\left( x\right) =\sum_{\boldsymbol{i}\in G_{l}}\varphi\left( \boldsymbol{i}\right) \Omega\left( p^{l}\left\Vert x-\boldsymbol{i}\right\Vert _{p}\right) ,\varphi\left( \boldsymbol{i}\right) \in\mathbb{C};p^{-lN}\sum_{\boldsymbol{i}\in G_{l}}\varphi\left( \boldsymbol{i}\right) =0\right\} ,
\]
resp. $\mathcal{L}_{\mathbb{R}}^{l}:=\mathcal{L}_{\mathbb{R}}^{l}(\mathbb{Q}_{p}^{N})=\mathcal{L}^{l}\cap\mathcal{D}_{\mathbb{R}}^{l}$, and
\[
\mathcal{FL}^{l}:=\mathcal{FL}^{l}(\mathbb{Q}_{p}^{N})=\left\{ \widehat{\varphi}\left( \kappa\right) =\sum_{\boldsymbol{i}\in G_{l}}\widehat{\varphi}\left( \boldsymbol{i}\right) \Omega\left( p^{l}\left\Vert \kappa-\boldsymbol{i}\right\Vert _{p}\right) ,\widehat{\varphi}\left( \boldsymbol{i}\right) \in\mathbb{C};\widehat{\varphi}\left( \boldsymbol{0}\right) =0\right\} .
\]
By the formulae (\ref{Eq_14A}), the Fourier transform $\mathcal{F}:\mathcal{L}^{l}\rightarrow\mathcal{FL}^{l}$ is an isomorphism of $\mathbb{C}$-vector spaces. The multiplication by the function $A(\left\Vert \kappa\right\Vert _{p})$ gives rise to a linear transformation from $\mathcal{FL}^{l}$ onto itself. Consequently, $\boldsymbol{A}(\partial):\mathcal{L}^{l}\rightarrow\mathcal{L}^{l}$ is a well-defined linear operator.

\subsection{Energy functionals in the momenta space}
By using (\ref{EQ_oper_W_def})-(\ref{EQ_oper_W_pseudo}), for $\varphi\in\mathcal{D}_{\mathbb{R}}$, we have
\[
{\textstyle\iint\limits_{\mathbb{Q}_{p}^{N}\times\mathbb{Q}_{p}^{N}}}\text{ }\frac{\left\{ \varphi\left( x\right) -\varphi\left( y\right) \right\} ^{2}}{w_{\delta}\left( \left\Vert x-y\right\Vert _{p}\right) }d^{N}xd^{N}y=2{\textstyle\int\limits_{\mathbb{Q}_{p}^{N}}}\text{ }\varphi\left( x\right) \left( -\boldsymbol{W}_{\delta}\right) \varphi\left( x\right) d^{N}x.
\]
Then
\begin{align*}
E_{0}(\varphi) & =\frac{\gamma}{2}\text{ }{\textstyle\int\limits_{\mathbb{Q}_{p}^{N}}}\text{ }\varphi\left( x\right) \left( -\boldsymbol{W}_{\delta}\right) \varphi\left( x\right) d^{N}x+\frac{\alpha_{2}}{2}{\textstyle\int\limits_{\mathbb{Q}_{p}^{N}}}\varphi^{2}\left( x\right) d^{N}x\\
& =\frac{\gamma}{2}\text{ }{\textstyle\int\limits_{\mathbb{Q}_{p}^{N}}}\text{ }\varphi\left( x\right) \boldsymbol{W}\left( \partial,\delta\right) \varphi\left( x\right) d^{N}x+\frac{\alpha_{2}}{2}{\textstyle\int\limits_{\mathbb{Q}_{p}^{N}}}\varphi^{2}\left( x\right) d^{N}x\\
& =\frac{\gamma}{2}\text{ }{\textstyle\int\limits_{\mathbb{Q}_{p}^{N}}}\text{ }A_{w_{\delta}}(\left\Vert \kappa\right\Vert _{p})\left\vert \widehat{\varphi}\left( \kappa\right) \right\vert ^{2}d^{N}\kappa+\frac{\alpha_{2}}{2}{\textstyle\int\limits_{\mathbb{Q}_{p}^{N}}}\left\vert \widehat{\varphi}\left( \kappa\right) \right\vert ^{2}d^{N}\kappa\\
& ={\textstyle\int\limits_{\mathbb{Q}_{p}^{N}}}\text{ }\left( \frac{\gamma}{2}A_{w_{\delta}}(\left\Vert \kappa\right\Vert _{p})+\frac{\alpha_{2}}{2}\right) \left\vert \widehat{\varphi}\left( \kappa\right) \right\vert ^{2}d^{N}\kappa.
\end{align*}
Now by using (\ref{Eq_14}), for $\varphi\in\mathcal{D}_{\mathbb{R}}^{l}$, we have
\begin{align*}
E_{0}(\varphi) & =p^{-lN}{\textstyle\sum\limits_{\boldsymbol{j}\in G_{l}\smallsetminus\left\{ \boldsymbol{0}\right\} }}\text{ }\left( \frac{\gamma}{2}A_{w_{\delta}}(\left\Vert \boldsymbol{j}\right\Vert _{p})+\frac{\alpha_{2}}{2}\right) \left\vert \widehat{\varphi}\left( \boldsymbol{j}\right) \right\vert ^{2}\\
& +\left\vert \widehat{\varphi}\left( \boldsymbol{0}\right) \right\vert ^{2}\left\{ {\textstyle\int\limits_{p^{l}\mathbb{Z}_{p}^{N}}}\text{ }\left( \frac{\gamma}{2}A_{w_{\delta}}(\left\Vert z\right\Vert _{p})+\frac{\alpha_{2}}{2}\right) d^{N}z\right\} ,
\end{align*}
where $\widehat{\varphi}\left( \boldsymbol{j}\right) =\widehat{\varphi_{1}}\left( \boldsymbol{j}\right) +\sqrt{-1}\widehat{\varphi_{2}}\left( \boldsymbol{j}\right) \in\mathbb{C}$. Here we use the alternative notation $\widehat{\varphi_{1}}\left( \boldsymbol{j}\right) =\operatorname{Re}\left( \widehat{\varphi}\left( \boldsymbol{j}\right) \right) $, $\widehat{\varphi_{2}}\left( \boldsymbol{j}\right) =\operatorname{Im}\left( \widehat{\varphi}\left( \boldsymbol{j}\right) \right) $, which is more convenient for us.

\begin{remark}
Notice that
\[
\mathcal{FL}_{\mathbb{R}}^{l}=\left\{ \widehat{\varphi}\left( \kappa\right) =\sum_{\boldsymbol{i}\in G_{l}}\widehat{\varphi}\left( \boldsymbol{i}\right) \Omega\left( p^{l}\left\Vert \kappa-\boldsymbol{i}\right\Vert _{p}\right) ,\widehat{\varphi}\left( \boldsymbol{i}\right) \in\mathbb{C};\widehat{\varphi}\left( 0\right) =0,\text{ }\overline{\widehat{\varphi}\left( \kappa\right) }=\widehat{\varphi}\left( -\kappa\right) \right\} ,
\]
and that the condition $\overline{\widehat{\varphi}\left( \kappa\right) }=\widehat{\varphi}\left( -\kappa\right) $ implies that $\widehat{\varphi}_{1}\left( -\boldsymbol{i}\right) =\widehat{\varphi}_{1}\left( \boldsymbol{i}\right) $ and $\widehat{\varphi}_{2}\left( -\boldsymbol{i}\right) =-\widehat{\varphi}_{2}\left( \boldsymbol{i}\right) $ for any $\boldsymbol{i}\in G_{l}$. This implies that $\mathcal{FL}_{\mathbb{R}}^{l}$ is an $\mathbb{R}$-vector space of dimension $\#G_{l}-1$.
\end{remark}

\begin{remark}
\label{Nota_Basis}We set $G_{l}\smallsetminus\left\{ \boldsymbol{0}\right\} :=G_{l}^{+}\bigsqcup G_{l}^{-}$, where the subsets $G_{l}^{+}$, $G_{l}^{-}$ satisfy that
\[
\begin{array}[c]{lll}
G_{l}^{+} & \rightarrow & G_{l}^{-}\\
\boldsymbol{i} & \rightarrow & -\boldsymbol{i}
\end{array}
\]
is a bijection. We recall here that $G_{l}$ is a finite additive group. Since $\#G_{l}^{+}=\#G_{l}^{-}$, necessarily $\#\left( G_{l}\smallsetminus\left\{ \boldsymbol{0}\right\} \right) =p^{2Nl}-1$ is even, and thus $p\geq3$. Then any function from $\mathcal{FL}_{\mathbb{R}}^{l}$ can be uniquely represented as
\[
\widehat{\varphi}\left( \kappa\right) =\sum_{\boldsymbol{i}\in G_{l}^{+}}\widehat{\varphi}_{1}\left( \boldsymbol{i}\right) \Omega_{+}\left( p^{l}\left\Vert \kappa-\boldsymbol{i}\right\Vert _{p}\right) +\widehat{\varphi}_{2}\left( \boldsymbol{i}\right) \Omega_{-}\left( p^{l}\left\Vert \kappa-\boldsymbol{i}\right\Vert _{p}\right) ,
\]
where
\[
\Omega_{+}\left( p^{l}\left\Vert \kappa-\boldsymbol{i}\right\Vert _{p}\right) :=\Omega\left( p^{l}\left\Vert \kappa-\boldsymbol{i}\right\Vert _{p}\right) +\Omega\left( p^{l}\left\Vert \kappa+\boldsymbol{i}\right\Vert _{p}\right) ,
\]
and
\[
\Omega_{-}\left( p^{l}\left\Vert \kappa-\boldsymbol{i}\right\Vert _{p}\right) :=\sqrt{-1}\left\{ \Omega\left( p^{l}\left\Vert \kappa-\boldsymbol{i}\right\Vert _{p}\right) -\Omega\left( p^{l}\left\Vert \kappa+\boldsymbol{i}\right\Vert _{p}\right) \right\} .
\]
\end{remark}

We take $\varphi\in\mathcal{L}_{\mathbb{R}}^{l}$; then $\widehat{\varphi}\left( 0\right) =0$, and
\begin{align*}
E_{0}^{\left( l\right) }(\varphi) & =p^{-lN}{\textstyle\sum\limits_{\boldsymbol{j}\in G_{l}\smallsetminus\left\{ \boldsymbol{0}\right\} }}\text{ }\left( \frac{\gamma}{2}A_{w_{\delta}}(\left\Vert \boldsymbol{j}\right\Vert _{p})+\frac{\alpha_{2}}{2}\right) \left( \widehat{\varphi_{1}}^{2}\left( \boldsymbol{j}\right) +\widehat{\varphi_{2}}^{2}\left( \boldsymbol{j}\right) \right) \\
& =2p^{-lN}{\textstyle\sum\limits_{r\in\left\{ 1,2\right\} }}\text{ }{\textstyle\sum\limits_{\boldsymbol{j}\in G_{l}^{+}}}\text{ }\left( \frac{\gamma}{2}A_{w_{\delta}}(\left\Vert \boldsymbol{j}\right\Vert _{p})+\frac{\alpha_{2}}{2}\right) \widehat{\varphi_{r}}^{2}\left( \boldsymbol{j}\right) .
\end{align*}
By using that $\mathcal{L}_{\mathbb{R}}^{l}\simeq\mathcal{FL}_{\mathbb{R}}^{l}$ we get that $E_{0}^{\left( l\right) }$ is a real-valued functional defined on $\mathcal{FL}_{\mathbb{R}}^{l}\simeq\mathbb{R}^{\#G_{l}-1}$.

We now define the diagonal matrix $B^{\left( r\right) }=\left[ B_{\boldsymbol{i},\boldsymbol{j}}^{\left( r\right) }\right] _{\boldsymbol{i},\boldsymbol{j}\in G_{l}^{+}}$, $r=1$, $2$, where
\[
B_{\boldsymbol{i},\boldsymbol{j}}^{\left( r\right) }:=\left\{
\begin{array}[c]{lll}
\frac{\gamma}{2}A_{w_{\delta}}(\left\Vert \boldsymbol{j}\right\Vert _{p})+\frac{\alpha_{2}}{2} & \text{if} & \boldsymbol{i}=\boldsymbol{j}\\
& & \\
0 & \text{if} & \boldsymbol{i}\neq\boldsymbol{j}.
\end{array}
\right.
\]
Notice that $B_{\boldsymbol{i},\boldsymbol{j}}^{\left( 1\right) }=B_{\boldsymbol{i},\boldsymbol{j}}^{\left( 2\right) }$. We set
\begin{equation}
B(l):=B(l,\delta,\gamma,\alpha_{2})=\left[
\begin{array}[c]{ll}
B^{\left( 1\right) } & \boldsymbol{0}\\
\boldsymbol{0} & B^{\left( 2\right) }
\end{array}
\right] . \label{Eq_Matrix_B_1}
\end{equation}
The matrix $B=\left[ B_{\boldsymbol{i},\boldsymbol{j}}\right] $ is diagonal, of size $2\left( \#G_{l}^{+}\right) \times2\left( \#G_{l}^{+}\right) $.
In addition, the indices $\boldsymbol{i},\boldsymbol{j}$ run through two disjoint copies of $G_{l}^{+}$. Then we have the following result:

\begin{lemma}
\label{Lemma10}Assume that $\alpha_{2}>0$. With the above notation the following formula holds true:
\begin{equation}
E_{0}^{\left( l\right) }(\varphi):=E_{0}^{\left( l\right) }\left( \widehat{\varphi_{1}}\left( \boldsymbol{j}\right) ,\widehat{\varphi_{2}}\left( \boldsymbol{j}\right) ;\boldsymbol{j}\in G_{l}^{+}\right) =\left[
\begin{array}[c]{l}
\left[ \widehat{\varphi_{1}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}\\
\left[ \widehat{\varphi_{2}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}
\end{array}
\right] ^{T}2p^{-lN}B(l)\left[
\begin{array}[c]{l}
\left[ \widehat{\varphi_{1}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}\\
\left[ \widehat{\varphi_{2}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}
\end{array}
\right] \geq0, \label{Eq_E_0_Momenta}
\end{equation}
for $\varphi\in\mathcal{L}_{\mathbb{R}}^{l}\simeq\mathcal{FL}_{\mathbb{R}}^{l}\simeq\mathbb{R}^{\#G_{l}-1}$, where $2p^{-lN}B(l)$ is a diagonal, positive definite, invertible matrix.
\end{lemma}

\section{Gaussian measures}
We recall that we are taking $\delta>N$, $\gamma>0$, $\alpha_{2}>0$. The partition function attached to the energy functional $E_{0}$ is given by
\[
\mathcal{Z}:=\mathcal{Z}(\delta,\gamma,\alpha_{2})=\int\limits_{\mathcal{FL}_{\mathbb{R}}(\mathbb{Q}_{p}^{N})}D(\varphi)e^{-E_{0}\left( \varphi\right) },
\]
where $D(\varphi)$ is a \textquotedblleft spurious measure\textquotedblright\ on $\mathcal{FL}_{\mathbb{R}}(\mathbb{Q}_{p}^{N})$.

\begin{definition}
We set
\begin{gather*}
\mathcal{Z}^{\left( l\right) }=\mathcal{Z}^{\left( l\right) }(\delta,\gamma,\alpha_{2})=\int\limits_{\mathcal{FL}_{\mathbb{R}}^{l}(\mathbb{Q}_{p}^{N})}D_{l}\left( \varphi\right) e^{-E_{0}\left( \varphi\right) }\\
=:\mathcal{N}_{l}\int\limits_{\mathbb{R}^{\left( p^{2lN}-1\right) }}\exp\left( -\left[
\begin{array}[c]{l}
\left[ \widehat{\varphi_{1}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}\\
\left[ \widehat{\varphi_{2}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}
\end{array}
\right] ^{T}2p^{-lN}B(l)\left[
\begin{array}[c]{l}
\left[ \widehat{\varphi_{1}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}\\
\left[ \widehat{\varphi_{2}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}
\end{array}
\right] \right) \prod\limits_{\boldsymbol{i}\in G_{l}^{+}}d\widehat{\varphi_{1}}\left( \boldsymbol{i}\right) d\widehat{\varphi_{2}}\left( \boldsymbol{i}\right) ,
\end{gather*}
where $\mathcal{N}_{l}$ is a normalization constant, and ${\textstyle\prod\nolimits_{\boldsymbol{i}\in G_{l}^{+}}}d\widehat{\varphi_{1}}\left( \boldsymbol{i}\right) d\widehat{\varphi_{2}}\left( \boldsymbol{i}\right) $ is the Lebesgue measure of $\mathbb{R}^{\left( p^{2lN}-1\right) }$.
\end{definition}

The integral $\mathcal{Z}^{\left( l\right) }$ is the natural discretization of $\mathcal{Z}$. From a classical point of view, one should expect that $\mathcal{Z}=\lim_{l\rightarrow\infty}\mathcal{Z}^{\left( l\right) }$ in some sense. The goal of this section is to study these matters in a rigorous mathematical way. Our main result is the construction of a rigorous mathematical version of the spurious measure $D(\varphi)$.
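The evaluation of $\mathcal{Z}^{\left( l\right) }$ below rests on the classical Gaussian integral: for a symmetric, positive definite matrix $M$ of size $n\times n$,
\[
\int\limits_{\mathbb{R}^{n}}e^{-x^{T}Mx}d^{n}x=\frac{\pi^{\frac{n}{2}}}{\sqrt{\det M}},
\]
which is applied here with $M=2p^{-lN}B(l)$ and $n=p^{2lN}-1$.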
By Lemma \ref{Lemma10}, $\mathcal{Z}^{\left( l\right) }$ is a Gaussian integral; then
\[
\mathcal{Z}^{\left( l\right) }=\mathcal{N}_{l}\frac{\left( 2\pi\right) ^{\frac{\left( p^{2lN}-1\right) }{2}}}{\sqrt{\det\left( 4p^{-lN}B(l)\right) }}=\mathcal{N}_{l}\left( \frac{\pi}{2}\right) ^{\frac{\left( p^{2lN}-1\right) }{2}}\frac{p^{\frac{lN\left( p^{2lN}-1\right) }{2}}}{\sqrt{\det B}}.
\]
We set
\[
\mathcal{N}_{l}=\frac{\left( \frac{2}{\pi}\right) ^{\frac{\left( p^{2lN}-1\right) }{2}}\sqrt{\det B}}{p^{\frac{lN\left( p^{2lN}-1\right) }{2}}}.
\]

\begin{definition}
We define the following family of Gaussian measures:
\begin{gather}
d\mathbb{P}_{l}\left( \left[
\begin{array}[c]{l}
\left[ \widehat{\varphi_{1}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}\\
\left[ \widehat{\varphi_{2}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}
\end{array}
\right] ;\delta,\gamma,\alpha_{2}\right) :=d\mathbb{P}_{l}\left( \left[
\begin{array}[c]{l}
\left[ \widehat{\varphi_{1}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}\\
\left[ \widehat{\varphi_{2}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}
\end{array}
\right] \right) \nonumber\\
=\mathcal{N}_{l}\exp\left( -\left[
\begin{array}[c]{l}
\left[ \widehat{\varphi_{1}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}\\
\left[ \widehat{\varphi_{2}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}
\end{array}
\right] ^{T}2p^{-lN}B(l)\left[
\begin{array}[c]{l}
\left[ \widehat{\varphi_{1}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}\\
\left[ \widehat{\varphi_{2}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}
\end{array}
\right] \right) \prod\limits_{\boldsymbol{i}\in G_{l}^{+}}d\widehat{\varphi_{1}}\left( \boldsymbol{i}\right) d\widehat{\varphi_{2}}\left( \boldsymbol{i}\right) \label{Eq_P_L}
\end{gather}
in $\mathcal{FL}_{\mathbb{R}}^{l}\simeq\mathbb{R}^{\left( p^{2lN}-1\right) }$, for $l\in\mathbb{N}\smallsetminus\left\{ 0\right\} $.
\end{definition}

Thus for any Borel subset $A$ of $\mathbb{R}^{\left( p^{2lN}-1\right) }\simeq\mathcal{FL}_{\mathbb{R}}^{l}$ and any continuous and bounded function $f:\mathcal{FL}_{\mathbb{R}}^{l}\rightarrow\mathbb{R}$ the integral
\[
{\textstyle\int\limits_{A}}f\left( \left[
\begin{array}[c]{l}
\left[ \widehat{\varphi_{1}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}\\
\left[ \widehat{\varphi_{2}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}
\end{array}
\right] \right) d\mathbb{P}_{l}\left( \left[
\begin{array}[c]{l}
\left[ \widehat{\varphi_{1}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}\\
\left[ \widehat{\varphi_{2}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}
\end{array}
\right] \right) ={\textstyle\int\limits_{A}}f\left( \widehat{\varphi}\right) d\mathbb{P}_{l}\left( \widehat{\varphi}\right)
\]
is well-defined.

We define $\mathcal{I}=\cup_{l=1}^{\infty}G_{l}^{+}$. Notice that any finite subset of $\mathcal{I}$ is contained in $G_{l}^{+}$ for some $l\in\mathbb{N}\setminus\left\{ 0\right\} $.
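Since $2p^{-lN}B(l)$ is diagonal, the measure (\ref{Eq_P_L}) factors into one-dimensional centered Gaussians: the coordinate $\widehat{\varphi_{r}}\left( \boldsymbol{j}\right) $ has variance $\frac{p^{lN}}{4}\left( \frac{\gamma}{2}A_{w_{\delta}}(\left\Vert \boldsymbol{j}\right\Vert _{p})+\frac{\alpha_{2}}{2}\right) ^{-1}$. The following minimal sampling sketch (illustrative only) assumes $N=1$, the exact power-law symbol $A_{w_{\delta}}(\left\Vert \kappa\right\Vert _{p})=\left\Vert \kappa\right\Vert _{p}^{\delta-1}$ of the Taibleson case, and hypothetical parameter values:
\begin{verbatim}
# Minimal sampling sketch (illustrative only) for the Gaussian measure P_l
# of (Eq_P_L), with N = 1 and the power-law symbol A(|k|_p) = |k|_p^(delta-1).
import numpy as np

p, l, delta, gamma, alpha2 = 3, 1, 2.5, 1.0, 1.0   # hypothetical values
rng = np.random.default_rng(1)

def vp(m):                       # p-adic valuation of a nonzero integer
    v = 0
    while m % p == 0:
        m, v = m // p, v + 1
    return v

# Representatives m p^{-l} of G_l \ {0}; G_l^+ keeps one element of each
# pair {j, -j}, i.e. one of {m, p^{2l} - m}.
M = p ** (2 * l)
G_plus = [m for m in range(1, M) if m < M - m]
norms = np.array([float(p) ** (l - vp(m)) for m in G_plus])   # |j|_p

# Density ~ exp(-x^T (2 p^{-lN} B) x), hence Var = p^{lN} / (4 B_jj).
B_diag = gamma / 2 * norms ** (delta - 1) + alpha2 / 2
sigma = np.sqrt(p ** l / (4 * B_diag))

phi_hat_1 = rng.normal(0.0, sigma)   # Re(phi_hat(j)), j in G_l^+
phi_hat_2 = rng.normal(0.0, sigma)   # Im(phi_hat(j)), j in G_l^+
# Setting phi_hat(-j) = conj(phi_hat(j)) and phi_hat(0) = 0 reconstructs a
# sample of the discretized field on all of G_l.
\end{verbatim}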
To each $G_{l}^{+}$ we attach a collection of Gaussian random variables $\left[
\begin{array}[c]{l}
\left[ \widehat{\varphi_{1}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}\\
\left[ \widehat{\varphi_{2}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}
\end{array}
\right] $ having joint probability distribution $\mathbb{P}_{l}\left( \left[
\begin{array}[c]{l}
\left[ \widehat{\varphi_{1}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}\\
\left[ \widehat{\varphi_{2}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}
\end{array}
\right] \right) $. The family of Gaussian measures $\left\{ \mathbb{P}_{l}\left( \left[
\begin{array}[c]{l}
\left[ \widehat{\varphi_{1}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}\\
\left[ \widehat{\varphi_{2}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}
\end{array}
\right] \right) ;l\in\mathbb{N}\smallsetminus\left\{ 0\right\} \right\} $ is consistent, i.e. $\mathbb{P}_{l}(A)=\mathbb{P}_{m}(A\times\mathbb{R}^{\#G_{m}-\#G_{l}})$, for $m>l$, see e.g. \cite[Chapter IV, Section 3.1, Lemma 1]{Gelfand-Vilenkin}. We now apply Kolmogorov's consistency theorem and its proof, see e.g. \cite[Theorem 2.1]{Simon-1}, to obtain the following result:

\begin{lemma}
\label{Lemma11}There exists a probability measure space $\left( X,\mathcal{F},\mathbb{P}\right) $ and random variables
\[
\left[
\begin{array}[c]{l}
\left[ \widehat{\varphi_{1}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}\\
\left[ \widehat{\varphi_{2}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}
\end{array}
\right] \text{, for }l\in\mathbb{N}\smallsetminus\left\{ 0\right\} \text{,}
\]
such that $\mathbb{P}_{l}$ is the joint probability distribution of $\left[
\begin{array}[c]{l}
\left[ \widehat{\varphi_{1}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}\\
\left[ \widehat{\varphi_{2}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}
\end{array}
\right] $. The space $\left( X,\mathcal{F},\mathbb{P}\right) $ is unique up to isomorphisms of probability measure spaces. Furthermore, for any bounded continuous function $f$ supported in $\mathcal{FL}_{\mathbb{R}}^{l}$, we have
\[
{\textstyle\int\limits_{\mathcal{FL}_{\mathbb{R}}^{l}}}f\left( \widehat{\varphi}\right) d\mathbb{P}_{l}\left( \widehat{\varphi}\right) ={\textstyle\int\limits_{\mathcal{FL}_{\mathbb{R}}^{l}}}f\left( \widehat{\varphi}\right) d\mathbb{P}\left( \widehat{\varphi}\right) .
\]
\end{lemma}

\subsection{A quick detour into the $p$-adic noise calculus}
In this section we introduce a Gel'fand triple and construct some Gaussian measures in the non-Archimedean setting.

\subsubsection{A bilinear form in $\mathcal{D}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) $}
For $\delta>N$, $\gamma>0$, $\alpha_{2}>0$, we define the operator
\[
\begin{array}[c]{lll}
\mathcal{D}\left( \mathbb{Q}_{p}^{N}\right) & \rightarrow & L^{2}\left( \mathbb{Q}_{p}^{N}\right) \\
& & \\
\varphi & \rightarrow & \left( \frac{\gamma}{2}\boldsymbol{W}\left( \partial,\delta\right) +\frac{\alpha_{2}}{2}\right) ^{-1}\varphi,
\end{array}
\]
where $\left( \frac{\gamma}{2}\boldsymbol{W}\left( \partial,\delta\right) +\frac{\alpha_{2}}{2}\right) ^{-1}\varphi\left( x\right) :=\mathcal{F}_{\kappa\rightarrow x}^{-1}\left( \frac{\mathcal{F}_{x\rightarrow\kappa}\varphi}{\frac{\gamma}{2}A_{w_{\delta}}(\left\Vert \kappa\right\Vert _{p})+\frac{\alpha_{2}}{2}}\right) $.
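The following observation, which also underlies the estimate (\ref{Eq_19}) below, is worth recording.
\begin{remark}
Since $A_{w_{\delta}}\geq0$ and $\alpha_{2}>0$, the symbol satisfies $\frac{\gamma}{2}A_{w_{\delta}}(\left\Vert \kappa\right\Vert _{p})+\frac{\alpha_{2}}{2}\geq\frac{\alpha_{2}}{2}$; hence, by the Plancherel identity, $\left( \frac{\gamma}{2}\boldsymbol{W}\left( \partial,\delta\right) +\frac{\alpha_{2}}{2}\right) ^{-1}$ extends to a bounded operator on $L^{2}\left( \mathbb{Q}_{p}^{N}\right) $ with norm at most $\frac{2}{\alpha_{2}}$.
\end{remark}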
We define the distribution
\[
G(x):=G(x;\delta,\gamma,\alpha_{2})=\mathcal{F}_{\kappa\rightarrow x}^{-1}\left( \frac{1}{\frac{\gamma}{2}A_{w_{\delta}}(\left\Vert \kappa\right\Vert _{p})+\frac{\alpha_{2}}{2}}\right) \in\mathcal{D}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) .
\]
By using the fact that $\frac{1}{\frac{\gamma}{2}A_{w_{\delta}}(\left\Vert \kappa\right\Vert _{p})+\frac{\alpha_{2}}{2}}$ is radial and $(\mathcal{F}(\mathcal{F}\varphi))(\kappa)=\varphi(-\kappa)$, one verifies that
\[
G(x)\in\mathcal{D}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) .
\]
Now we define the following bilinear form $\mathbb{B}:=\mathbb{B}(\delta,\gamma,\alpha_{2})$:
\[
\begin{array}[c]{lll}
\mathbb{B}:\mathcal{D}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) \times\mathcal{D}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) & \rightarrow & \mathbb{R}\\
& & \\
\left( \varphi,\theta\right) & \rightarrow & \left\langle \varphi,\left( \frac{\gamma}{2}\boldsymbol{W}\left( \partial,\delta\right) +\frac{\alpha_{2}}{2}\right) ^{-1}\theta\right\rangle ,
\end{array}
\]
where $\left\langle \cdot,\cdot\right\rangle $ denotes the scalar product in $L^{2}\left( \mathbb{Q}_{p}^{N}\right) $.

\begin{lemma}
\label{Lemma12}$\mathbb{B}$ is a positive, continuous bilinear form from $\mathcal{D}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) \times\mathcal{D}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) $ into $\mathbb{R}$.
\end{lemma}

\begin{proof}
We first notice that for $\varphi\in\mathcal{D}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) $, we have
\[
\mathbb{B}(\varphi,\varphi)={\textstyle\int\limits_{\mathbb{Q}_{p}^{N}}}\frac{\left\vert \widehat{\varphi}\left( \kappa\right) \right\vert ^{2}d^{N}\kappa}{\frac{\gamma}{2}A_{w_{\delta}}(\left\Vert \kappa\right\Vert _{p})+\frac{\alpha_{2}}{2}}\geq0.
\]
Then $\mathbb{B}(\varphi,\varphi)=0$ implies that $\varphi$ is zero almost everywhere. Since $\varphi$ is a locally constant function, $\mathbb{B}(\varphi,\varphi)=0$ if and only if $\varphi=0$.

For $\left( \varphi,\theta\right) \in\mathcal{D}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) \times\mathcal{D}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) $, the Cauchy-Schwarz inequality implies that
\begin{equation}
\left\vert \mathbb{B}\left( \varphi,\theta\right) \right\vert \leq\left\Vert \varphi\right\Vert _{2}\left( {\textstyle\int\limits_{\mathbb{Q}_{p}^{N}}}\frac{\left\vert \widehat{\theta}\left( \kappa\right) \right\vert ^{2}d^{N}\kappa}{\left( \frac{\gamma}{2}A_{w_{\delta}}(\left\Vert \kappa\right\Vert _{p})+\frac{\alpha_{2}}{2}\right) ^{2}}\right) ^{\frac{1}{2}}\leq\frac{2}{\alpha_{2}}\left\Vert \varphi\right\Vert _{2}\left\Vert \theta\right\Vert _{2}. \label{Eq_19}
\end{equation}
Now take two sequences in $\mathcal{D}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) $ such that $\varphi_{n}\ \underrightarrow{\mathcal{D}_{\mathbb{R}}}\ \varphi$ and $\theta_{n}\ \underrightarrow{\mathcal{D}_{\mathbb{R}}}\ \theta$ with $\varphi$, $\theta\in\mathcal{D}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) $. We recall that the convergence of these sequences means that there is a positive integer $l$ such that $\varphi_{n}$, $\varphi$, $\theta_{n}$, $\theta\in\mathcal{D}_{\mathbb{R}}^{l}$, and
\[
\varphi_{n}-\varphi\text{ }\underrightarrow{\text{unif.}}\text{ }0\text{ \ and \ }\theta_{n}-\theta\text{ }\underrightarrow{\text{unif.}}\text{ }0\text{ in }p^{-l}\mathbb{Z}_{p}^{N}.
\]
Then
\begin{align*}
\varphi_{n}\left( x\right) -\varphi\left( x\right) & ={\textstyle\sum\limits_{\boldsymbol{i}\in G_{l}}}\left( \varphi_{n}\left( \boldsymbol{i}\right) -\varphi\left( \boldsymbol{i}\right) \right) \Omega\left( p^{l}\left\Vert x-\boldsymbol{i}\right\Vert _{p}\right) \text{, and}\\
\theta_{n}\left( x\right) -\theta\left( x\right) & ={\textstyle\sum\limits_{\boldsymbol{i}\in G_{l}}}\left( \theta_{n}\left( \boldsymbol{i}\right) -\theta\left( \boldsymbol{i}\right) \right) \Omega\left( p^{l}\left\Vert x-\boldsymbol{i}\right\Vert _{p}\right) ,
\end{align*}
and by (\ref{Eq_19}),
\begin{gather*}
\left\vert \mathbb{B}\left( \varphi_{n}-\varphi,\theta_{n}-\theta\right) \right\vert \leq\frac{2}{\alpha_{2}}\left\Vert \varphi_{n}-\varphi\right\Vert _{2}\left\Vert \theta_{n}-\theta\right\Vert _{2}\\
\leq\frac{2p^{-lN}}{\alpha_{2}}\sqrt{{\textstyle\sum\limits_{\boldsymbol{i}\in G_{l}}}\left\vert \varphi_{n}\left( \boldsymbol{i}\right) -\varphi\left( \boldsymbol{i}\right) \right\vert ^{2}}\sqrt{{\textstyle\sum\limits_{\boldsymbol{i}\in G_{l}}}\left\vert \theta_{n}\left( \boldsymbol{i}\right) -\theta\left( \boldsymbol{i}\right) \right\vert ^{2}}\\
\leq\frac{2p^{-lN}\#G_{l}}{\alpha_{2}}\left( \max_{\boldsymbol{i}\in G_{l}}\left\vert \varphi_{n}\left( \boldsymbol{i}\right) -\varphi\left( \boldsymbol{i}\right) \right\vert \right) \left( \max_{\boldsymbol{i}\in G_{l}}\left\vert \theta_{n}\left( \boldsymbol{i}\right) -\theta\left( \boldsymbol{i}\right) \right\vert \right) \rightarrow0
\end{gather*}
as $n\rightarrow\infty$. This fact, together with the bilinearity of $\mathbb{B}$, implies the continuity of $\mathbb{B}$ in $\mathcal{D}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) \times\mathcal{D}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) $.
\end{proof}

In the next sections we only use the restriction of $\mathbb{B}$ to $\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) \times\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) $.

\begin{lemma}
\label{Lemma13}For $\varphi\in\mathcal{L}_{\mathbb{R}}^{l}\simeq\mathcal{FL}_{\mathbb{R}}^{l}$,
\[
\mathbb{B}_{l}(\varphi,\varphi):=\mathbb{B}(\varphi,\varphi)=\left[
\begin{array}[c]{l}
\left[ \widehat{\varphi_{1}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}\\
\left[ \widehat{\varphi_{2}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}
\end{array}
\right] ^{T}2p^{-lN}B^{-1}(l)\left[
\begin{array}[c]{l}
\left[ \widehat{\varphi_{1}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}\\
\left[ \widehat{\varphi_{2}}\left( \boldsymbol{j}\right) \right] _{\boldsymbol{j}\in G_{l}^{+}}
\end{array}
\right] ,
\]
where $B(l)$ is the matrix defined in (\ref{Eq_Matrix_B_1}).
\end{lemma}

\begin{proof}
The proof is similar to the proof of Lemma \ref{Lemma10}. We first notice that
\[
\mathbb{B}(\varphi,\varphi)={\textstyle\int\limits_{\mathbb{Q}_{p}^{N}}}\text{ }\frac{\left\vert \widehat{\varphi}\left( \kappa\right) \right\vert ^{2}}{\frac{\gamma}{2}A_{w_{\delta}}(\left\Vert \kappa\right\Vert _{p})+\frac{\alpha_{2}}{2}}d^{N}\kappa.
\]
By using (\ref{Eq_14}), we get that
\begin{equation}
\mathbb{B}_{l}(\varphi,\varphi)=2p^{-lN}{\textstyle\sum\limits_{r\in\left\{ 1,2\right\} }}\ {\textstyle\sum\limits_{\boldsymbol{j}\in G_{l}^{+}}}\ \frac{\widehat{\varphi_{r}}^{2}\left( \boldsymbol{j}\right) }{\frac{\gamma}{2}A_{w_{\delta}}(\left\Vert \boldsymbol{j}\right\Vert _{p})+\frac{\alpha_{2}}{2}}. \label{Eq_18}
\end{equation}
Now, the announced formula follows from (\ref{Eq_18}).
\end{proof}

Given a finite dimensional subspace $\mathcal{Y}\subset\mathcal{L}_{\mathbb{R}}(\mathbb{Q}_{p}^{N})$, we denote by $\mathbb{B}_{\mathcal{Y}}$ the restriction of $\mathbb{B}$ to $\mathcal{Y}\times\mathcal{Y}$. In the case $\mathcal{Y}=\mathcal{L}_{\mathbb{R}}^{l}$, we use the notation $\mathbb{B}_{l}$, which agrees with the notation introduced in Lemma \ref{Lemma13}.

\begin{lemma}
\label{Lemma14}Given a finite dimensional subspace $\mathcal{Y}\subset\mathcal{L}_{\mathbb{R}}(\mathbb{Q}_{p}^{N})$, there is a positive integer $l=l(\mathcal{Y})$ such that $\mathcal{Y}\subset\mathcal{L}_{\mathbb{R}}^{l}\simeq\mathcal{FL}_{\mathbb{R}}^{l}$, and there is a subset $J=J(\mathcal{Y})\subset G_{l}^{+}$ such that
\begin{equation}
\mathbb{B}_{\mathcal{Y}}(\varphi,\varphi)=2p^{-lN}{\textstyle\sum\limits_{r\in\left\{ 1,2\right\} }}\ {\textstyle\sum\limits_{\boldsymbol{j}\in J}}\ \frac{\widehat{\varphi}_{r}^{2}\left( \boldsymbol{j}\right) }{\frac{\gamma}{2}A_{w_{\delta}}(\left\Vert \boldsymbol{j}\right\Vert _{p})+\frac{\alpha_{2}}{2}}. \label{Eq_22}
\end{equation}
Furthermore,
\begin{equation}
\mathbb{B}_{\mathcal{Y}}=\mathbb{B}_{l}\mid_{\left\{ \widehat{\varphi}_{1}\left( \boldsymbol{j}\right) =0,\widehat{\varphi}_{2}\left( \boldsymbol{j}\right) =0;\boldsymbol{j}\notin J\right\} }. \label{Eq_23}
\end{equation}
\end{lemma}

\begin{proof}
Since $\mathcal{L}_{\mathbb{R}}=\cup_{l=1}^{\infty}\mathcal{L}_{\mathbb{R}}^{l}$ and $\mathcal{L}_{\mathbb{R}}^{l}\subset\mathcal{L}_{\mathbb{R}}^{m}$ for $m>l$, there is a positive integer $l=l(\mathcal{Y})$ such that $\mathcal{Y}\subset\mathcal{L}_{\mathbb{R}}^{l}$. Then there is a subset $J\subset G_{l}^{+}$ such that $\left\{ \Omega_{\pm}\left( p^{l}\left\Vert x-\boldsymbol{i}\right\Vert _{p}\right) \right\} _{\boldsymbol{i}\in J}$ is a basis of $\mathcal{Y}$, and so the formula (\ref{Eq_22}) holds. The assertion (\ref{Eq_23}) follows from (\ref{Eq_18}).
\end{proof}

\begin{corollary}
\label{Cor1}The collection $\left\{ \mathbb{B}_{\mathcal{Y}};\mathcal{Y}\text{ finite dimensional subspace of }\mathcal{L}_{\mathbb{R}}\right\} $ is completely determined by the collection $\left\{ \mathbb{B}_{l};l\in\mathbb{N}\smallsetminus\left\{ 0\right\} \right\} $, in the sense that given any $\mathbb{B}_{\mathcal{Y}}$ there are an integer $l$ and a subset $J\subset G_{l}^{+}$ (the case $J=\emptyset$ is included) such that $\mathbb{B}_{\mathcal{Y}}=\mathbb{B}_{l}\mid_{\left\{ \widehat{\varphi}_{1}\left( \boldsymbol{j}\right) =0,\widehat{\varphi}_{2}\left( \boldsymbol{j}\right) =0;\boldsymbol{j}\notin J\right\} }$.
\end{corollary}

\subsubsection{Gaussian measures in the non-Archimedean framework}
We recall that $\mathcal{D}(\mathbb{Q}_{p}^{N})$ is a nuclear space, cf. \cite[Section 4]{Bruhat}, and thus $\mathcal{L}_{\mathbb{R}}(\mathbb{Q}_{p}^{N})$ is a nuclear space, since any subspace of a nuclear space is also nuclear, see e.g. \cite[Proposition 50.1]{Treves}. The spaces
\[
\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) \hookrightarrow L_{\mathbb{R}}^{2}\left( \mathbb{Q}_{p}^{N}\right) \hookrightarrow\mathcal{L}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right)
\]
form a Gel'fand triple, that is, $\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) $ is a nuclear space which is densely and continuously embedded in $L_{\mathbb{R}}^{2}$ (see \cite[Theorem 7.4.3]{A-K-S}) and $\left\Vert g\right\Vert _{2}^{2}=\left\langle g,g\right\rangle $ for $g\in\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) $.
We denote by $\mathcal{B}:=\mathcal{B}(\mathcal{L}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) )$ the $\sigma$-algebra generated by the cylinder subsets of $\mathcal{L}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) $. The mapping
\[
\begin{array}[c]{cccc}
\mathcal{C}: & \mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right)  & \rightarrow & \mathbb{C}\\
 & f & \rightarrow & e^{-\frac{1}{2}\mathbb{B}(f,f)}
\end{array}
\]
defines a characteristic functional, i.e. $\mathcal{C}$ is continuous, positive definite and $\mathcal{C}\left( 0\right) =1$. The continuity follows from Lemma \ref{Lemma12}. The fact that $\mathbb{B}$ defines an inner product in $L^{2}\left( \mathbb{Q}_{p}^{N}\right) $ implies that the functional $\mathcal{C}$ is positive definite.

\begin{definition}
\label{Def_white_noise_space}By the Bochner-Minlos theorem, see e.g. \cite{Ber-Kon}, \cite{Hida et al}, \cite{Huang-Yang}, there exists a probability measure $\mathbb{P}:=\mathbb{P}\left( \delta,\gamma,\alpha_{2}\right) $, called \textit{the canonical Gaussian measure}, on $\left( \mathcal{L}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) ,\mathcal{B}\right) $, given by its characteristic functional as
\begin{equation}
\int\limits_{\mathcal{L}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) }e^{\sqrt{-1}\langle W,f\rangle}d\mathbb{P}(W)=e^{-\frac{1}{2}\mathbb{B}(f,f)}\text{,}\ \ f\in\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) \text{.} \label{Eq_Char_func}
\end{equation}
\end{definition}

We set $\left( L_{\mathbb{R}}^{\rho}\right) :=L^{\rho}\left( \mathcal{L}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) ,\mathbb{P}\right) $, $\rho\in\left[ 1,\infty\right) $, to denote the real vector space of measurable functions $\Psi:\mathcal{L}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) \rightarrow\mathbb{R}$ satisfying
\[
\left\Vert \Psi\right\Vert _{\left( L_{\mathbb{R}}^{\rho}\right) }^{\rho}=\int\limits_{\mathcal{L}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) }\left\vert \Psi\left( W\right) \right\vert ^{\rho}d\mathbb{P}(W)<\infty\text{.}
\]

\subsubsection{\label{Section_Further_Remarks}Further remarks on the cylinder measure $\mathbb{P}$}

We set $\mathbb{L}\left( \varphi\right) =\exp\frac{-1}{2}\mathbb{B}(\varphi,\varphi)$, for $\varphi\in\mathcal{L}_{\mathbb{R}}$. The functional $\mathbb{L}$ is positive definite, continuous and $\mathbb{L}(0)=1$. By taking the restriction of $\mathbb{L}$ to a finite dimensional subspace $\mathcal{Y}$ of $\mathcal{L}_{\mathbb{R}}$, one obtains a positive definite, continuous functional $\mathbb{L}_{\mathcal{Y}}(\varphi)$ on $\mathcal{Y}$. By the Bochner theorem, see e.g. \cite[Chapter II, Section 3.2]{Gelfand-Vilenkin}, this function is the Fourier transform of a probability measure $\mathbb{P}_{\mathcal{Y}}$ defined in the dual space $\mathcal{Y}^{\prime}\subset\mathcal{L}_{\mathbb{R}}^{\prime}$ of $\mathcal{Y}$. By identifying $\mathcal{Y}^{\prime}$ with $\mathcal{L}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) /\mathcal{Y}^{0}$, where $\mathcal{Y}^{0}$ consists of all linear functionals $T$ which vanish on $\mathcal{Y}$, we get that $\mathbb{P}_{\mathcal{Y}}$ is a probability measure in the finite dimensional space $\mathcal{L}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) /\mathcal{Y}^{0}$.
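The finite dimensional picture just described can be probed numerically. The following sketch is an illustrative aside only, not part of the construction: it checks, by Monte Carlo sampling, the finite dimensional instance of the characteristic functional identity (\ref{Eq_Char_func}). The covariance matrix \texttt{Sigma} is a generic positive definite stand-in for the covariance $2p^{-lN}B^{-1}(l)$ of the measures considered below; it is not computed from the symbol $A_{w_{\delta}}$.
\begin{verbatim}
# Monte Carlo check of E[exp(i<W,f>)] = exp(-(1/2) B(f,f)) in a finite
# dimensional restriction. Sigma is a generic positive definite stand-in
# for 2 p^{-lN} B^{-1}(l); it is NOT computed from the symbol A_{w_delta}.
import numpy as np

rng = np.random.default_rng(0)
n = 4                                  # dimension of the restriction
A = rng.standard_normal((n, n))
Sigma = A @ A.T + n * np.eye(n)        # positive definite covariance
f = 0.3 * rng.standard_normal(n)       # a fixed "test function"

W = rng.multivariate_normal(np.zeros(n), Sigma, size=200_000)
mc = np.mean(np.exp(1j * W @ f))       # left-hand side, by sampling
exact = np.exp(-0.5 * f @ Sigma @ f)   # right-hand side, B(f,f) = f^T Sigma f
print(mc.real, exact)                  # agree up to Monte Carlo error
\end{verbatim}
Up to the Monte Carlo error, the two printed numbers coincide, which is precisely the content of (\ref{Eq_Char_func}) after restriction to a finite dimensional subspace.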
The measure $\mathbb{P}$ is constructed from the family of probability measures $\left\{ \mathbb{P}_{\mathcal{Y}};\mathcal{Y}\subset\mathcal{L}_{\mathbb{R}}\text{, finite dimensional space}\right\} $. These measures are compatible and satisfy a suitable continuity condition, and they give rise to a cylinder measure $\mathbb{P}$ in $\mathcal{L}_{\mathbb{R}}^{\prime}$. Since $\mathcal{L}_{\mathbb{R}}$ is a nuclear space, this cylinder measure is countably additive. For further details about the construction of the measure $\mathbb{P}$, the reader may consult \cite[Chapter IV, Section 4.2, proof of Theorem 1]{Gelfand-Vilenkin}.

Now, by using the formula
\[
\mathbb{L}\left( \varphi\right) =\int\limits_{\mathcal{L}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) /\mathcal{Y}^{0}}e^{\sqrt{-1}\left\langle W,\varphi\right\rangle }d\mathbb{P}_{\mathcal{Y}}\left( W\right) \text{ for }\varphi\in\mathcal{Y}\text{,}
\]
see \cite[Chapter IV, Section 4.1]{Gelfand-Vilenkin}, and the fact that $\mathbb{L}\left( \varphi\right) =\exp\frac{-1}{2}\mathbb{B}(\varphi,\varphi)$, for $\varphi\in\mathcal{Y}$, one gets that $\mathbb{P}_{\mathcal{Y}}$ is a Gaussian probability measure in $\mathcal{Y}$, with mean zero and correlation function $\mathbb{B}$, i.e. if $\mathcal{Y}$ has dimension $n$, then
\[
\mathbb{P}_{\mathcal{Y}}\left( A\right) =\frac{1}{\left( 2\pi\right) ^{\frac{n}{2}}}\int\limits_{A}e^{-\frac{1}{2}\mathbb{B}(\psi,\psi)}d\psi\text{,}
\]
where $d\psi$ is the Lebesgue measure in $\mathcal{Y}$ corresponding to the scalar product $\mathbb{B}$, and $A\subset\mathcal{Y}$ is a measurable subset.

In conclusion, the cylinder measure $\mathbb{P}$ is uniquely determined by the family of Gaussian measures
\[
\left\{ \mathbb{P}_{\mathcal{Y}};\mathcal{Y}\subset\mathcal{L}_{\mathbb{R}}\text{, finite dimensional space}\right\} ,
\]
or equivalently by the sequence
\begin{equation}
\left\{ \mathbb{B}_{\mathcal{Y}};\mathcal{Y}\subset\mathcal{L}_{\mathbb{R}}\text{, finite dimensional space}\right\} , \label{Eq_sequence_B}
\end{equation}
where $\mathbb{B}_{\mathcal{Y}}$ denotes the restriction of the scalar product $\mathbb{B}$ to $\mathcal{Y}$. This is a consequence of the fact that any finite dimensional Gaussian measure with mean zero is completely determined by its correlation matrix.

\subsection{Main result}

\begin{theorem}
\label{Theorem1}Assume that $\delta>N$, $\gamma>0$, $\alpha_{2}>0$.

(i) The cylinder probability measure $\mathbb{P}=\mathbb{P}\left( \delta,\gamma,\alpha_{2}\right) $ is uniquely determined by the sequence $\mathbb{P}_{l}=\mathbb{P}_{l}\left( \delta,\gamma,\alpha_{2}\right) $, $l\in\mathbb{N}\smallsetminus\left\{ 0\right\} $, of Gaussian measures.

(ii) Let $f:\mathcal{FL}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) \rightarrow\mathbb{R}$ be a continuous and bounded function. Then
\[
\lim_{l\rightarrow\infty}\int\limits_{\mathcal{FL}_{\mathbb{R}}^{l}\left( \mathbb{Q}_{p}^{N}\right) }f\left( \widehat{\varphi}\right) d\mathbb{P}_{l}\left( \widehat{\varphi}\right) =\int\limits_{\mathcal{FL}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }f\left( \widehat{\varphi}\right) d\mathbb{P}\left( \widehat{\varphi}\right) .
\]
\end{theorem}

\begin{proof}
(i) We use the notation and results given in Section \ref{Section_Further_Remarks}.
By Corollary \ref{Cor1}, the sequence (\ref{Eq_sequence_B}) is completely determined by the sequence $\left\{ 2p^{-lN}\mathbb{B}_{l};l\in\mathbb{N}\smallsetminus\left\{ 0\right\} \right\} $, i.e. by the sequence $\left\{ \mathbb{P}_{l};l\in\mathbb{N}\smallsetminus\left\{ 0\right\} \right\} $. Notice that the covariance matrix of $\mathbb{P}_{l}$ is $2p^{-lN}B^{-1}(l)=2p^{-lN}\mathbb{B}_{l}$, cf. Lemma \ref{Lemma13}. Then the cylinder measure $\mathbb{P}$ is exactly the probability measure announced in Lemma \ref{Lemma11}.

(ii) By using the formula given in Lemma \ref{Lemma11}, for any bounded continuous function $f$ supported in $\mathcal{FL}_{\mathbb{R}}^{l}$, we have
\begin{equation}
\int\limits_{\mathcal{FL}_{\mathbb{R}}^{l}}f\left( \widehat{\varphi}\right) d\mathbb{P}_{l}\left( \widehat{\varphi}\right) =\int\limits_{\mathcal{FL}_{\mathbb{R}}^{l}}f\left( \widehat{\varphi}\right) d\mathbb{P}\left( \widehat{\varphi}\right) . \label{Eq_25}
\end{equation}
By the uniqueness of the probability space $\left( X,\mathcal{F};\mathbb{P}\right) $ in Lemma \ref{Lemma11}, we can identify the $\sigma$-algebra $\mathcal{F}$ with $\mathcal{B}(\mathcal{L}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) )$, the $\sigma$-algebra generated by the cylinder subsets of $\mathcal{L}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) $. Then $\mathcal{FL}_{\mathbb{R}}^{l}$ belongs to $\mathcal{B}(\mathcal{L}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) )$, and $\mathcal{FL}_{\mathbb{R}}=\cup_{l}\mathcal{FL}_{\mathbb{R}}^{l}$ also belongs to $\mathcal{B}(\mathcal{L}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) )$. Now by taking the limit $l\rightarrow\infty$ in (\ref{Eq_25}), we get the announced formula.
\end{proof}

\subsection{Further comments on Theorem \ref{Theorem1}}

By identifying $\mathcal{D}_{\mathbb{R}}^{l}$ with $\mathbb{R}^{\#G_{l}}$ and using that
\[
\mathcal{L}_{\mathbb{R}}^{l}=\left\{ \varphi\in\mathcal{D}_{\mathbb{R}}^{l};\varphi\left( x\right) =\sum_{\boldsymbol{i}\in G_{l}}\varphi\left( \boldsymbol{i}\right) \Omega\left( p^{l}\left\Vert x-\boldsymbol{i}\right\Vert _{p}\right) \text{, }\sum_{\boldsymbol{i}\in G_{l}}\varphi\left( \boldsymbol{i}\right) =0\right\} ,
\]
we conclude that $\mathcal{L}_{\mathbb{R}}^{l}$ is the hyperplane $\mathcal{H}^{\left( l\right) }:=\left\{ \sum_{\boldsymbol{i}\in G_{l}}\varphi\left( \boldsymbol{i}\right) =0\right\} $ in $\mathbb{R}^{\#G_{l}}$. We denote by $\prod\nolimits_{\boldsymbol{i}\in G_{l}}d\varphi\left( \boldsymbol{i}\right) $ the Lebesgue measure of $\mathbb{R}^{\#G_{l}}$ as before. Then, the induced Lebesgue measure on $\mathcal{H}^{\left( l\right) }$ is $\delta\left( \sum_{\boldsymbol{i}\in G_{l}}\varphi\left( \boldsymbol{i}\right) \right) \prod\nolimits_{\boldsymbol{i}\in G_{l}}d\varphi\left( \boldsymbol{i}\right) $, where $\delta$ is the Dirac delta function.
This means that
\[
\int\limits_{\mathbb{R}^{\#G_{l}}}f\left( \varphi\left( \boldsymbol{i}\right) ;\boldsymbol{i}\in G_{l}\right) \delta\left( \sum_{\boldsymbol{i}\in G_{l}}\varphi\left( \boldsymbol{i}\right) \right) \prod\nolimits_{\boldsymbol{i}\in G_{l}}d\varphi\left( \boldsymbol{i}\right) =\int\limits_{\mathcal{H}^{\left( l\right) }}f\text{ }\left\vert \omega\right\vert ,
\]
where $\left\vert \omega\right\vert $ denotes the measure induced by the differential form $\omega$ of degree $\#G_{l}-1$, which satisfies
\[
\bigwedge\limits_{\boldsymbol{i}\in G_{l}}d\varphi\left( \boldsymbol{i}\right) =\omega\wedge d\left( \sum_{\boldsymbol{i}\in G_{l}}\varphi\left( \boldsymbol{i}\right) \right) ,
\]
see \cite[Chapter III, Section 1.2]{Gelfand-Shilov}.

Now the image of the Gaussian measure $\mathbb{P}_{l}$, see (\ref{Eq_P_L}), under the isomorphism $\mathcal{FL}_{\mathbb{R}}^{l}\rightarrow\mathcal{L}_{\mathbb{R}}^{l}$ is
\begin{gather*}
d\mathbb{P}_{l}\left( \left[ \varphi\left( \boldsymbol{i}\right) \right] _{\boldsymbol{i}\in G_{l}};\delta,\gamma,\alpha_{2}\right) :=d\mathbb{P}_{l}\left( \left[ \varphi\left( \boldsymbol{i}\right) \right] _{\boldsymbol{i}\in G_{l}}\right) \\
=\mathcal{N}_{l}^{\prime}\exp(-\left[ \varphi\left( \boldsymbol{i}\right) \right] _{\boldsymbol{i}\in G_{l}}^{T}p^{-lN}U(l)\left[ \varphi\left( \boldsymbol{i}\right) \right] _{\boldsymbol{i}\in G_{l}})\delta\left( \sum_{\boldsymbol{i}\in G_{l}}\varphi\left( \boldsymbol{i}\right) \right) \prod\limits_{\boldsymbol{i}\in G_{l}}d\varphi\left( \boldsymbol{i}\right) ,
\end{gather*}
where $\mathcal{N}_{l}^{\prime}$ is a normalization constant, see Lemma \ref{Lemma6}. The sequence
\[
\left\{ \mathbb{P}_{l}\left( \left[ \varphi\left( \boldsymbol{i}\right) \right] _{\boldsymbol{i}\in G_{l}}\right) \right\} _{l\in\mathbb{N}\smallsetminus\left\{ 0\right\} }
\]
uniquely determines the cylinder probability measure $\mathbb{P}$.

\section{Partition functions and generating functionals}

In this section we introduce a family of $\mathcal{P}(\varphi)$-theories, where
\begin{equation}
\mathcal{P}(X)=a_{3}X^{3}+a_{4}X^{4}+\ldots+a_{2D}X^{2D}\in\mathbb{R}\left[ X\right] \text{, with }D\geq2\text{,} \label{Poly_interactions}
\end{equation}
satisfying $\mathcal{P}(\alpha)\geq0$ for any $\alpha\in\mathbb{R}$. Notice that this implies that for $\varphi\in\mathcal{D}_{\mathbb{R}}^{l}$ and $\alpha_{4}>0$, $\exp\left( -\frac{\alpha_{4}}{2}\int\mathcal{P}(\varphi)d^{N}x\right) \leq1$. This fact follows from Remark \ref{Nota_discretization}. Each of these theories corresponds to a thermally fluctuating field which is defined by means of a functional integral representation of the partition function. All the thermodynamic quantities and correlation functions of the system can be obtained by functional differentiation from a generating functional, as in the classical case, see e.g. \cite{Kleinert et al}, \cite{Mussardo}. In this section, we provide mathematically rigorous definitions of all these objects.

\subsection{Partition functions}

We assume that $\varphi\in\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) $ represents a field that performs thermal fluctuations. We also assume that in the normal phase the expectation value of the field $\varphi$ is zero. Then the fluctuations take place around zero.
The size of these fluctuations is controlled by the energy functional
\[
E(\varphi):=E_{0}(\varphi)+E_{\text{int}}(\varphi),
\]
where the first term is defined in (\ref{Energy_Functioal_E_0}), and the second term,
\[
E_{\text{int}}(\varphi):=\frac{\alpha_{4}}{4}\int\limits_{\mathbb{Q}_{p}^{N}}\mathcal{P}\left( \varphi\left( x\right) \right) d^{N}x\text{, }\alpha_{4}\geq0\text{,}
\]
corresponds to the interaction energy. All the thermodynamic properties of the system attached to the field $\varphi$ are described by the partition function of the fluctuating field, which is given classically by a functional integral
\[
\mathcal{Z}^{\text{phys}}=\int D\left( \varphi\right) e^{-\frac{E(\varphi)}{K_{B}T}},
\]
where $D\left( \varphi\right) $ is a `spurious measure' on the space of fields, $K_{B}$ is Boltzmann's constant and $T$ is the temperature. We use the normalization $K_{B}T=1$. When the coupling constant $\alpha_{4}=0$, $\mathcal{Z}^{\text{phys}}$ reduces to the free-field partition function
\[
\mathcal{Z}_{0}^{\text{phys}}=\int D\left( \varphi\right) e^{-E_{0}(\varphi)}.
\]
It is more convenient to use the normalized partition function $\frac{\mathcal{Z}^{\text{phys}}}{\mathcal{Z}_{0}^{\text{phys}}}$.

\begin{definition}
Assume that $\delta>N$, and $\gamma$, $\alpha_{2}>0$. The free partition function is defined as
\[
\mathcal{Z}_{0}=\mathcal{Z}_{0}(\delta,\gamma,\alpha_{2})=\int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }d\mathbb{P}\left( \varphi\right) .
\]
The discrete free partition function is defined as
\[
\mathcal{Z}_{0}^{\left( l\right) }=\mathcal{Z}_{0}^{\left( l\right) }(\delta,\gamma,\alpha_{2})=\int\limits_{\mathcal{L}_{\mathbb{R}}^{l}\left( \mathbb{Q}_{p}^{N}\right) }d\mathbb{P}_{l}\left( \varphi\right)
\]
for $l\in\mathbb{N}\smallsetminus\left\{ 0\right\} $.
\end{definition}

By Lemma \ref{Lemma11}, $\lim_{l\rightarrow\infty}\mathcal{Z}_{0}^{\left( l\right) }=\mathcal{Z}_{0}$. Notice that the term $e^{-E_{0}(\varphi)}$ is used to construct the measure $\mathbb{P}\left( \varphi\right) $.

\begin{definition}
Assume that $\delta>N$, and $\gamma$, $\alpha_{2}$, $\alpha_{4}>0$. The partition function is defined as
\[
\mathcal{Z}=\mathcal{Z}(\delta,\gamma,\alpha_{2},\alpha_{4})=\int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }e^{-E_{\text{int}}\left( \varphi\right) }d\mathbb{P}\left( \varphi\right) .
\]
The discrete partition functions are defined as
\[
\mathcal{Z}^{\left( l\right) }=\mathcal{Z}^{\left( l\right) }(\delta,\gamma,\alpha_{2},\alpha_{4})=\int\limits_{\mathcal{L}_{\mathbb{R}}^{l}\left( \mathbb{Q}_{p}^{N}\right) }e^{-E_{\text{int}}\left( \varphi\right) }d\mathbb{P}_{l}\left( \varphi\right) ,
\]
for $l\in\mathbb{N}\smallsetminus\left\{ 0\right\} $.
\end{definition}

Notice that $e^{-E_{\text{int}}\left( \varphi\right) }$ is bounded and (sequentially) continuous in $\mathcal{L}_{\mathbb{R}}$, and consequently in $\mathcal{L}_{\mathbb{R}}^{l}$ for any $l$. Indeed, take $\varphi_{n}\ \underrightarrow{\mathcal{D}_{\mathbb{R}}}\ 0$; recall that $\mathcal{L}_{\mathbb{R}}$ is endowed with the topology of $\mathcal{D}_{\mathbb{R}}$. Then there is $l$ such that $\varphi_{n}\in\mathcal{L}_{\mathbb{R}}^{l}$ for every $n$, and $\varphi_{n}\ \underrightarrow{\text{unif.}}\ 0$, i.e.
\[
\varphi_{n}(x)=\sum\limits_{\boldsymbol{i}\in G_{l}}\varphi^{\left( n\right) }\left( \boldsymbol{i}\right) \Omega\left( p^{l}\left\Vert x-\boldsymbol{i}\right\Vert _{p}\right) \text{, and }\max_{\boldsymbol{i}\in G_{l}}\left\vert \varphi^{\left( n\right) }\left( \boldsymbol{i}\right) \right\vert \rightarrow0\text{ as }n\rightarrow\infty.
\]
This implies that $E_{\text{int}}\left( \varphi_{n}\right) \rightarrow0$. Again by Lemma \ref{Lemma11}, $\lim_{l\rightarrow\infty}\mathcal{Z}^{\left( l\right) }=\mathcal{Z}$.

\subsection{Correlation functions}

From a mathematical perspective, a $\mathcal{P}\left( \varphi\right) $-theory is given by a cylinder probability measure of the form
\begin{equation}
\frac{1_{\mathcal{L}_{\mathbb{R}}}\left( \varphi\right) e^{-E_{\text{int}}\left( \varphi\right) }d\mathbb{P}}{\int\nolimits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }e^{-E_{\text{int}}\left( \varphi\right) }d\mathbb{P}}=\frac{1_{\mathcal{L}_{\mathbb{R}}}\left( \varphi\right) e^{-E_{\text{int}}\left( \varphi\right) }d\mathbb{P}}{\mathcal{Z}} \label{Eq_Measure}
\end{equation}
in the space of fields $\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) $. It is important to mention that we do not require the Wick regularization operation in $e^{-E_{\text{int}}\left( \varphi\right) }$ because we are restricting the fields to be test functions.

\begin{definition}
\label{Definition_G_m}The $m$-point correlation functions of a field $\varphi\in\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) $ are defined as
\[
G^{\left( m\right) }\left( x_{1},\ldots,x_{m}\right) =\frac{1}{\mathcal{Z}}\int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }\left( \prod\limits_{i=1}^{m}\varphi\left( x_{i}\right) \right) e^{-E_{\text{int}}\left( \varphi\right) }d\mathbb{P}.
\]
The discrete $m$-point correlation functions of a field $\varphi\in\mathcal{L}_{\mathbb{R}}^{l}\left( \mathbb{Q}_{p}^{N}\right) $ are defined as
\[
G_{l}^{\left( m\right) }\left( x_{1},\ldots,x_{m}\right) =\frac{1}{\mathcal{Z}^{\left( l\right) }}\int\limits_{\mathcal{L}_{\mathbb{R}}^{l}\left( \mathbb{Q}_{p}^{N}\right) }\left( \prod\limits_{i=1}^{m}\varphi\left( x_{i}\right) \right) e^{-E_{\text{int}}\left( \varphi\right) }d\mathbb{P}_{l},
\]
for $l\in\mathbb{N}\smallsetminus\left\{ 0\right\} $.
\end{definition}

\begin{lemma}
\label{Lemma15}The discrete $m$-point correlation functions $G_{l}^{\left( m\right) }\left( x_{1},\ldots,x_{m}\right) $ of a field $\varphi\in\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) $ are test functions in $x_{1},\ldots,x_{m}$. Furthermore,
\[
G^{\left( m\right) }\left( x_{1},\ldots,x_{m}\right) =\lim_{l\rightarrow\infty}G_{l}^{\left( m\right) }\left( x_{1},\ldots,x_{m}\right)
\]
pointwise, and $G^{\left( m\right) }\left( x_{1},\ldots,x_{m}\right) $ are test functions in $x_{1},\ldots,x_{m}$.
\end{lemma}

\begin{proof}
There is a positive integer $l=l(\varphi)$ such that $\varphi\in\mathcal{L}_{\mathbb{R}}^{l}$ and $x_{1},\ldots,x_{m}\in B_{l}^{N}$.
By using that
\begin{equation}
\varphi\left( x_{i}\right) =\sum\limits_{\boldsymbol{j}\in G_{l}}\varphi\left( \boldsymbol{j}\right) \Omega\left( p^{l}\left\Vert x_{i}-\boldsymbol{j}\right\Vert _{p}\right) , \label{Eq_phi_expansion}
\end{equation}
one gets that $\prod\nolimits_{i=1}^{m}\varphi\left( x_{i}\right) $ is a finite sum of terms of the form
\[
\prod\limits_{k=1}^{m}\varphi\left( \boldsymbol{j}_{k}\right) \Omega\left( p^{l}\left\Vert x_{k}-\boldsymbol{j}_{k}\right\Vert _{p}\right) =:F(\varphi\left( \boldsymbol{j}_{1}\right) ,\ldots,\varphi\left( \boldsymbol{j}_{m}\right) )\Theta\left( x_{1},\ldots,x_{m}\right) ,
\]
where $F(\varphi\left( \boldsymbol{j}_{1}\right) ,\ldots,\varphi\left( \boldsymbol{j}_{m}\right) )$ is a polynomial function defined in $\mathcal{L}_{\mathbb{R}}^{l}$, $\boldsymbol{j}_{k}\in G_{l}$, and $\Theta\left( x\right) =\Theta\left( x_{1},\ldots,x_{m}\right) $ is the characteristic function of the polydisc $B_{-l}^{N}(\boldsymbol{j}_{1})\times\cdots\times B_{-l}^{N}(\boldsymbol{j}_{m})$. Now, by using that $\exp\left( -E_{\text{int}}\left( \varphi\right) \right) =\exp(-\frac{\alpha_{4}}{4}p^{-lN}\sum_{k=3}^{2D}\sum_{\boldsymbol{j}\in G_{l}}a_{k}\varphi^{k}\left( \boldsymbol{j}\right) )$, the correlation function $G_{l}^{\left( m\right) }\left( x_{1},\ldots,x_{m}\right) $ is a finite sum of test functions of the form
\begin{align*}
 & \Theta\left( x\right) \int\limits_{\mathcal{L}_{\mathbb{R}}^{l}}\left\{ F(\varphi\left( \boldsymbol{j}_{1}\right) ,\ldots,\varphi\left( \boldsymbol{j}_{m}\right) )\exp(-\frac{\alpha_{4}}{4}p^{-lN}\sum_{k=3}^{2D}\sum_{\boldsymbol{j}\in G_{l}}a_{k}\varphi^{k}\left( \boldsymbol{j}\right) )\right\} d\mathbb{P}_{l}=\\
 & \Theta\left( x\right) \int\limits_{\mathcal{L}_{\mathbb{R}}^{l}}\left\{ F(\varphi\left( \boldsymbol{j}_{1}\right) ,\ldots,\varphi\left( \boldsymbol{j}_{m}\right) )\exp(-\frac{\alpha_{4}}{4}p^{-lN}\sum_{k=3}^{2D}\sum_{\boldsymbol{j}\in G_{l}}a_{k}\varphi^{k}\left( \boldsymbol{j}\right) )\right\} d\mathbb{P},
\end{align*}
where the convergence of the integrals is guaranteed by the fact that the integrands are bounded functions, cf. Lemma \ref{Lemma11}. Finally,
\[
\lim_{l\rightarrow\infty}G_{l}^{\left( m\right) }\left( x_{1},\ldots,x_{m}\right) =\frac{\lim_{l\rightarrow\infty}\int\limits_{\mathcal{L}_{\mathbb{R}}^{l}\left( \mathbb{Q}_{p}^{N}\right) }\left( \prod\limits_{i=1}^{m}\varphi\left( x_{i}\right) \right) e^{-E_{\text{int}}\left( \varphi\right) }d\mathbb{P}}{\lim_{l\rightarrow\infty}\mathcal{Z}^{\left( l\right) }}=G^{\left( m\right) }\left( x_{1},\ldots,x_{m}\right) .
\]
\end{proof}

\subsection{Generating functionals}

We now introduce a current $J(x)\in\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) $ and add to the energy functional $E(\varphi)$ a linear interaction energy of this current with the field $\varphi\left( x\right) $,
\[
E_{\text{source}}(\varphi,J):=-\int\limits_{\mathbb{Q}_{p}^{N}}\varphi\left( x\right) J(x)d^{N}x\text{;}
\]
in this way we get a new energy functional
\[
E(\varphi,J):=E(\varphi)+E_{\text{source}}(\varphi,J).
\]
Notice that $E_{\text{source}}(\varphi,J)=-\left\langle \varphi,J\right\rangle $, where $\left\langle \cdot,\cdot\right\rangle $ denotes the scalar product of $L^{2}(\mathbb{Q}_{p}^{N})$.
This scalar product extends to the pairing between $\mathcal{L}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) $ and $\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) $.

\begin{definition}
\label{Definition_Z_J}Assume that $\delta>N$, and $\gamma$, $\alpha_{2}$, $\alpha_{4}>0$. The partition function corresponding to the energy functional $E(\varphi,J)$ is defined as
\[
\mathcal{Z}(J;\delta,\gamma,\alpha_{2},\alpha_{4}):=\mathcal{Z}(J)=\frac{1}{\mathcal{Z}_{0}}\int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }e^{-E_{\text{int}}\left( \varphi\right) +\left\langle \varphi,J\right\rangle }\text{ }d\mathbb{P}\text{,}
\]
and the discrete versions as
\[
\mathcal{Z}^{(l)}(J;\delta,\gamma,\alpha_{2},\alpha_{4}):=\mathcal{Z}^{\left( l\right) }(J)=\frac{1}{\mathcal{Z}_{0}^{\left( l\right) }}\int\limits_{\mathcal{L}_{\mathbb{R}}^{l}\left( \mathbb{Q}_{p}^{N}\right) }e^{-E_{\text{int}}\left( \varphi\right) +\left\langle \varphi,J\right\rangle }\text{ }d\mathbb{P}_{l}\text{,}
\]
for $l\in\mathbb{N}\smallsetminus\left\{ 0\right\} $.
\end{definition}

\begin{remark}
\label{Nota_3}In this section, we need some functionals from the space
\[
\left( L_{\mathbb{R}}^{\rho}\right) =L^{\rho}\left( \mathcal{L}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) ,d\mathbb{P}\right) \text{, }\rho\in\left[ 1,\infty\right) ,
\]
see Definition \ref{Def_white_noise_space}. Let $F\left( X_{1},\ldots,X_{n}\right) $ be a real-valued polynomial, and let $\xi=\left( \xi_{1},\ldots,\xi_{n}\right) $, with $\xi_{i}\in\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) $ for $i=1,\ldots,n$; then the functional
\[
F_{\xi}(W):=F\left( \left\langle W,\xi_{1}\right\rangle ,\ldots,\left\langle W,\xi_{n}\right\rangle \right) \text{, }W\in\mathcal{L}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) ,
\]
belongs to $\left( L_{\mathbb{R}}^{\rho}\right) $, $\rho\in\left[ 1,\infty\right) $, see e.g. \cite[Proposition 1.6]{Hida et al}. The functional $\exp C\left\langle \cdot,\phi\right\rangle $, for $C\in\mathbb{R}$, $\phi\in\mathcal{L}_{\mathbb{R}}$, belongs to $\left( L_{\mathbb{R}}^{\rho}\right) $, $\rho\in\left[ 1,\infty\right) $, see e.g. \cite[Proposition 1.7]{Hida et al}. The $\mathbb{R}$-algebra $\mathcal{A}$ generated by the functionals $F_{\xi}$, $\exp C\left\langle \cdot,\phi\right\rangle $ is dense in $\left( L_{\mathbb{R}}^{\rho}\right) $, $\rho\in\left[ 1,\infty\right) $, see e.g. \cite[Theorem 1.9]{Hida et al}.
\end{remark}

\begin{lemma}
\label{Lemma16}Given $\varphi\in\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) $, $m\geq1$, and $e_{i}\geq0$ for $i=1,\ldots,m$, we define
\[
\mathcal{I}(\varphi)=\int\limits_{\left( \mathbb{Q}_{p}^{N}\right) ^{m}}\left( \prod\limits_{i=1}^{m}\varphi^{e_{i}}\left( x_{i}\right) \right) \prod\limits_{i=1}^{m}d^{N}x_{i}.
\]
Then $\mathcal{I}\in\mathcal{A}$.
\end{lemma}

\begin{proof}
There is an integer $l$ such that $\varphi\in\mathcal{L}_{\mathbb{R}}^{l}$.
By using (\ref{Eq_phi_expansion}), and the fact that the functions $\Omega\left( p^{l}\left\Vert x_{i}-\boldsymbol{j}\right\Vert _{p}\right) $, $\boldsymbol{j}\in G_{l}$, are orthogonal with respect to the scalar product $\left\langle \cdot,\cdot\right\rangle $ in $L_{\mathbb{R}}^{2}(\mathbb{Q}_{p}^{N})$, we have
\begin{align*}
\varphi\left( x_{i}\right)  & =\sum\limits_{\boldsymbol{j}\in G_{l}}p^{lN}\left\langle \varphi,\Omega\left( p^{l}\left\Vert \cdot-\boldsymbol{j}\right\Vert _{p}\right) \right\rangle \Omega\left( p^{l}\left\Vert x_{i}-\boldsymbol{j}\right\Vert _{p}\right) \\
 & =\sum\limits_{\boldsymbol{j}\in G_{l}}p^{lN}\left\langle W_{\boldsymbol{j}},\varphi\right\rangle \Omega\left( p^{l}\left\Vert x_{i}-\boldsymbol{j}\right\Vert _{p}\right) ,
\end{align*}
where $W_{\boldsymbol{j}}\in\mathcal{L}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) $, for $\boldsymbol{j}\in G_{l}$. Consequently,
\[
\varphi^{e_{i}}\left( x_{i}\right) =\sum\limits_{\boldsymbol{j}\in G_{l}}p^{lNe_{i}}\left\langle W_{\boldsymbol{j}},\varphi\right\rangle ^{e_{i}}\Omega\left( p^{l}\left\Vert x_{i}-\boldsymbol{j}\right\Vert _{p}\right)
\]
and $\prod\nolimits_{i=1}^{m}\varphi^{e_{i}}\left( x_{i}\right) $ is a finite sum of terms of the form
\[
\left( \prod\limits_{k=1}^{m}p^{lNe_{i_{k}}}\left\langle W_{\boldsymbol{j}_{k}},\varphi\right\rangle ^{e_{i_{k}}}\right) \prod\limits_{k=1}^{m}\Omega\left( p^{l}\left\Vert x_{k}-\boldsymbol{j}_{k}\right\Vert _{p}\right) ,
\]
where $i_{k}\in\left\{ 1,\ldots,m\right\} $, $\boldsymbol{j}_{k}\in G_{l}$. Now $\mathcal{I}(\varphi)$ is a linear combination of terms of the form
\begin{align*}
 & \left( \prod\limits_{k=1}^{m}p^{lNe_{i_{k}}}\left\langle W_{\boldsymbol{j}_{k}},\varphi\right\rangle ^{e_{i_{k}}}\right) \int\limits_{\left( \mathbb{Q}_{p}^{N}\right) ^{m}}\prod\limits_{k=1}^{m}\Omega\left( p^{l}\left\Vert x_{k}-\boldsymbol{j}_{k}\right\Vert _{p}\right) \prod\limits_{i=1}^{m}d^{N}x_{i}\\
 & =p^{-lNm}\left( \prod\limits_{k=1}^{m}p^{lNe_{i_{k}}}\left\langle W_{\boldsymbol{j}_{k}},\varphi\right\rangle ^{e_{i_{k}}}\right) \in\mathcal{A},
\end{align*}
and therefore $\mathcal{I}\in\mathcal{A}$.
\end{proof}

\begin{lemma}
\label{Lemma17}With the above notation, the following assertions hold true:

\noindent(i) $1_{\mathcal{L}_{\mathbb{R}}}\left( \varphi\right) e^{-E_{\text{int}}\left( \varphi\right) +\left\langle \varphi,J\right\rangle }\in\left( L_{\mathbb{R}}^{1}\right) $. In particular, $\mathcal{Z}(J)<\infty$;

\noindent(ii)
\[
\lim_{l\rightarrow\infty}\int\limits_{\mathcal{L}_{\mathbb{R}}^{l}\left( \mathbb{Q}_{p}^{N}\right) }e^{\left\langle \varphi,J\right\rangle }\text{ }d\mathbb{P}_{l}=\int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }e^{\left\langle \varphi,J\right\rangle }\text{ }d\mathbb{P};
\]

\noindent(iii) $\mathcal{Z}^{\left( l\right) }(J)<\infty$ for any $l\in\mathbb{N}\smallsetminus\left\{ 0\right\} $;

\noindent(iv) $\lim_{l\rightarrow\infty}\mathcal{Z}^{\left( l\right) }(J)=\mathcal{Z}(J)$.
\end{lemma}

\begin{proof}
(i) The result follows from
\[
\int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }e^{-E_{\text{int}}\left( \varphi\right) +\left\langle \varphi,J\right\rangle }d\mathbb{P}\left( \varphi\right) \leq\int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }e^{\left\langle \varphi,J\right\rangle }d\mathbb{P}\left( \varphi\right) \leq\int\limits_{\mathcal{L}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) }e^{\left\langle W,J\right\rangle }d\mathbb{P}\left( W\right) <\infty\text{,}
\]
by using Remark \ref{Nota_3}.

(ii) For each $l\in\mathbb{N}\smallsetminus\left\{ 0\right\} $, we take $\left\{ K_{n_{l}}\right\} $ to be an increasing sequence of compact subsets of $\mathcal{L}_{\mathbb{R}}^{l}\left( \mathbb{Q}_{p}^{N}\right) $ having $\mathcal{L}_{\mathbb{R}}^{l}\left( \mathbb{Q}_{p}^{N}\right) $ as its limit. Set
\[
\mathcal{I}^{\left( l,n\right) }(J):=\int\limits_{\mathcal{L}_{\mathbb{R}}^{l}\left( \mathbb{Q}_{p}^{N}\right) }1_{K_{n_{l}}}\left( \varphi\right) e^{\left\langle \varphi,J\right\rangle }\text{ }d\mathbb{P}_{l}.
\]
Since the integrand $1_{K_{n_{l}}}\left( \varphi\right) e^{\left\langle \varphi,J\right\rangle }$ is continuous and bounded, by Lemma \ref{Lemma11},
\[
\mathcal{I}^{\left( l,n\right) }(J)=\int\limits_{\mathcal{L}_{\mathbb{R}}^{l}\left( \mathbb{Q}_{p}^{N}\right) }1_{K_{n_{l}}}\left( \varphi\right) e^{\left\langle \varphi,J\right\rangle }\text{ }d\mathbb{P}.
\]
The result follows by the dominated convergence theorem, by taking first the limit $n_{l}\rightarrow\infty$, and then the limit $l\rightarrow\infty$, and using the fact that $e^{\left\langle \varphi,J\right\rangle }$ is integrable.

(iii) By Lemma \ref{Lemma11} and Remark \ref{Nota_3},
\[
\int\limits_{\mathcal{L}_{\mathbb{R}}^{l}\left( \mathbb{Q}_{p}^{N}\right) }e^{\left\langle \varphi,J\right\rangle }\text{ }d\mathbb{P}_{l}\left( \varphi\right) =\int\limits_{\mathcal{L}_{\mathbb{R}}^{l}\left( \mathbb{Q}_{p}^{N}\right) }e^{\left\langle \varphi,J\right\rangle }\text{ }d\mathbb{P}\left( \varphi\right) \leq\int\limits_{\mathcal{L}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) }e^{\left\langle W,J\right\rangle }\text{ }d\mathbb{P}\left( W\right) <\infty\text{.}
\]
We now use that
\[
\mathcal{Z}^{\left( l\right) }(J)\leq\frac{\int\limits_{\mathcal{L}_{\mathbb{R}}^{l}\left( \mathbb{Q}_{p}^{N}\right) }e^{\left\langle \varphi,J\right\rangle }\text{ }d\mathbb{P}_{l}}{\int\limits_{\mathcal{L}_{\mathbb{R}}^{l}\left( \mathbb{Q}_{p}^{N}\right) }\text{ }d\mathbb{P}_{l}}.
\]

(iv) It is sufficient to show that
\[
\lim_{l\rightarrow\infty}\int\limits_{\mathcal{L}_{\mathbb{R}}^{l}\left( \mathbb{Q}_{p}^{N}\right) }e^{-E_{\text{int}}\left( \varphi\right) +\left\langle \varphi,J\right\rangle }\text{ }d\mathbb{P}_{l}=\int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }e^{-E_{\text{int}}\left( \varphi\right) +\left\langle \varphi,J\right\rangle }\text{ }d\mathbb{P}.
\]
This identity is established by using the reasoning given in part (ii).
\end{proof}

\begin{definition}
For $\theta\in\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) $, the functional derivative ${\Huge D}_{\theta}\mathcal{Z}(J)$ of $\mathcal{Z}(J)$ is defined as
\[
{\Huge D}_{\theta}\mathcal{Z}(J)=\lim_{\epsilon\rightarrow0}\frac{\mathcal{Z}(J+\epsilon\theta)-\mathcal{Z}(J)}{\epsilon}=\left[ \frac{d}{d\epsilon}\mathcal{Z}(J+\epsilon\theta)\right] _{\epsilon=0}.
\]
\end{definition}

\begin{lemma}
\label{Lemma18}Let $\theta_{1},\ldots,\theta_{m}$ be test functions from $\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) $. The functional derivative ${\Huge D}_{\theta_{1}}\cdots{\Huge D}_{\theta_{m}}\mathcal{Z}(J)$ exists, and the following formula holds true:
\begin{equation}
{\Huge D}_{\theta_{1}}\cdots{\Huge D}_{\theta_{m}}\mathcal{Z}(J)=\frac{1}{\mathcal{Z}_{0}}\int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }e^{-E_{\text{int}}(\varphi)+\langle\varphi,J\rangle}\left( \prod\limits_{i=1}^{m}\left\langle \varphi,\theta_{i}\right\rangle \right) d\mathbb{P}(\varphi). \label{Eq_30}
\end{equation}
Furthermore, the functional derivative ${\Huge D}_{\theta_{1}}\cdots{\Huge D}_{\theta_{m}}\mathcal{Z}(J)$ can be uniquely identified with the distribution
\begin{equation}
\prod\limits_{i=1}^{m}\theta_{i}\left( x_{i}\right) \rightarrow\frac{1}{\mathcal{Z}_{0}}\idotsint\limits_{\mathbb{Q}_{p}^{N}\times\cdots\times\mathbb{Q}_{p}^{N}}\prod\limits_{i=1}^{m}\theta_{i}\left( x_{i}\right) \left\{ \int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }e^{-E_{\text{int}}(\varphi)+\langle\varphi,J\rangle}\prod\limits_{i=1}^{m}\varphi\left( x_{i}\right) d\mathbb{P}(\varphi)\right\} \prod\limits_{i=1}^{m}d^{N}x_{i} \label{Eq_31}
\end{equation}
from $\mathcal{L}_{\mathbb{R}}^{\prime}\left( \left( \mathbb{Q}_{p}^{N}\right) ^{m}\right) $.
\end{lemma}

\begin{proof}
We first compute
\[
\left[ \frac{d}{d\epsilon}\mathcal{Z}(J+\epsilon\theta_{m})\right] _{\epsilon=0}=\frac{1}{\mathcal{Z}_{0}}\lim_{\epsilon\rightarrow0}\int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }e^{-E_{\text{int}}(\varphi)+\langle\varphi,J\rangle}\left( \frac{e^{\epsilon\langle\varphi,\theta_{m}\rangle}-1}{\epsilon}\right) \ d\mathbb{P}(\varphi).
\]
We consider the case $\epsilon\rightarrow0^{+}$; the other limit is treated in a similar way. For $\epsilon>0$ sufficiently small, by using the mean value theorem,
\[
\frac{e^{\epsilon\langle\varphi,\theta_{m}\rangle}-1}{\epsilon}=\left\langle \varphi,\theta_{m}\right\rangle e^{\epsilon_{0}\langle\varphi,\theta_{m}\rangle}\text{, where }\epsilon_{0}\in\left( 0,\epsilon\right) .
\]
Then, by using $e^{-E_{\text{int}}(\varphi)}\leq1$ and Remark \ref{Nota_3},
\[
e^{-E_{\text{int}}(\varphi)+\langle\varphi,J\rangle}\left( \frac{e^{\epsilon\langle\varphi,\theta_{m}\rangle}-1}{\epsilon}\right) =\left\langle \varphi,\theta_{m}\right\rangle e^{-E_{\text{int}}(\varphi)+\langle\varphi,J+\epsilon_{0}\theta_{m}\rangle}
\]
is an integrable function. Now, by applying the dominated convergence theorem,
\begin{equation}
{\Huge D}_{\theta_{m}}\mathcal{Z}(J)=\left[ \frac{d}{d\epsilon}\mathcal{Z}(J+\epsilon\theta_{m})\right] _{\epsilon=0}=\frac{1}{\mathcal{Z}_{0}}\int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }e^{-E_{\text{int}}(\varphi)+\langle\varphi,J\rangle}\left\langle \varphi,\theta_{m}\right\rangle \ d\mathbb{P}(\varphi). \label{Eq_Der_Funt}
\end{equation}
By Remark \ref{Nota_3}, $e^{-E_{\text{int}}(\varphi)+\langle\varphi,J\rangle}\left\langle \varphi,\theta_{m}\right\rangle \in\left( L_{\mathbb{R}}^{1}\right) $; then further derivatives can be computed using (\ref{Eq_Der_Funt}). Finally, formula (\ref{Eq_31}) is obtained from (\ref{Eq_30}) by using Fubini's theorem and Remark \ref{Nota_Nuclear}:
\[
{\Huge D}_{\theta_{1}}\cdots{\Huge D}_{\theta_{m}}\mathcal{Z}(J)=\frac{1}{\mathcal{Z}_{0}}\int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }e^{-E_{\text{int}}(\varphi)+\langle\varphi,J\rangle}\left\{ \idotsint\limits_{\mathbb{Q}_{p}^{N}\times\cdots\times\mathbb{Q}_{p}^{N}}\prod\limits_{i=1}^{m}\theta_{i}\left( x_{i}\right) \varphi\left( x_{i}\right) \prod\limits_{i=1}^{m}d^{N}x_{i}\right\} d\mathbb{P}(\varphi).
\]
\end{proof}

\begin{remark}
\label{Nota_eqqui_Der}In an alternative way, one can define the functional derivative $\frac{\delta}{\delta J\left( y\right) }\mathcal{Z}(J)$ of $\mathcal{Z}(J)$ as the distribution from $\mathcal{L}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) $ satisfying
\[
\int\limits_{\mathbb{Q}_{p}^{N}}\theta\left( y\right) \left( \frac{\delta}{\delta J\left( y\right) }\mathcal{Z}(J)\right) \left( y\right) d^{N}y=\left[ \frac{d}{d\epsilon}\mathcal{Z}(J+\epsilon\theta)\right] _{\epsilon=0}.
\]
Using this notation and formula (\ref{Eq_31}), we obtain that
\[
\frac{\delta}{\delta J\left( x_{1}\right) }\cdots\frac{\delta}{\delta J\left( x_{m}\right) }\mathcal{Z}(J)=\frac{1}{\mathcal{Z}_{0}}\int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }e^{-E_{\text{int}}(\varphi)+\langle\varphi,J\rangle}\left( \prod\limits_{i=1}^{m}\varphi\left( x_{i}\right) \right) d\mathbb{P}(\varphi)\in\mathcal{L}_{\mathbb{R}}^{\prime}\left( \left( \mathbb{Q}_{p}^{N}\right) ^{m}\right) .
\]
\end{remark}

\begin{remark}
\label{Nota_10}Consider the probability measure space $\left( \mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) ,\mathcal{B}\cap\mathcal{L}_{\mathbb{R}},\frac{1}{\mathcal{Z}_{0}}\mathbb{P}\right) $, where $\mathcal{B}\cap\mathcal{L}_{\mathbb{R}}$ denotes the $\sigma$-algebra generated by the cylinder subsets of $\mathcal{L}_{\mathbb{R}}$. Given test functions $\theta_{1},\ldots,\theta_{m}$ from $\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) $, we attach to them the following random variable:
\[
\begin{array}[c]{lll}
\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right)  & \rightarrow & \mathbb{R}\\
\varphi & \rightarrow & \prod\limits_{i=1}^{m}\left\langle \varphi,\theta_{i}\right\rangle .
\end{array}
\]
The expected value of this variable is given by
\[
{\Huge D}_{\theta_{1}}\cdots{\Huge D}_{\theta_{m}}\mathcal{Z}(J)\mid_{J=0}=\frac{1}{\mathcal{Z}_{0}}\int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }e^{-E_{\text{int}}(\varphi)}\left( \prod\limits_{i=1}^{m}\left\langle \varphi,\theta_{i}\right\rangle \right) d\mathbb{P}(\varphi).
\]
An alternative description of the expected value is given by
\[
\frac{\delta}{\delta J\left( x_{1}\right) }\cdots\frac{\delta}{\delta J\left( x_{m}\right) }\mathcal{Z}(J)\mid_{J=0}=\frac{1}{\mathcal{Z}_{0}}\int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }e^{-E_{\text{int}}(\varphi)}\left( \prod\limits_{i=1}^{m}\varphi\left( x_{i}\right) \right) d\mathbb{P}(\varphi).
\]
\end{remark}

As a conclusion we have the following result:

\begin{proposition}
\label{Prop1}The correlation functions $G^{\left( m\right) }\left( x_{1},\ldots,x_{m}\right) \in\mathcal{L}_{\mathbb{R}}^{\prime}\left( \left( \mathbb{Q}_{p}^{N}\right) ^{m}\right) $ are given by
\[
G^{\left( m\right) }\left( x_{1},\ldots,x_{m}\right) =\frac{\mathcal{Z}_{0}}{\mathcal{Z}}\frac{\delta}{\delta J\left( x_{1}\right) }\cdots\frac{\delta}{\delta J\left( x_{m}\right) }\mathcal{Z}(J)\mid_{J=0}.
\]
\end{proposition}

\subsubsection{Free-field theory}

We set $\mathcal{Z}_{0}(J):=\mathcal{Z}(J;\delta,\gamma,\alpha_{2},0)$.

\begin{proposition}
\label{Prop2}$\mathcal{Z}_{0}(J)=\mathcal{N}_{0}^{\prime}\exp\left\{ \int_{\mathbb{Q}_{p}^{N}}\int_{\mathbb{Q}_{p}^{N}}J(x)G(\left\Vert x-y\right\Vert _{p})J(y)d^{N}x\text{ }d^{N}y\right\} $, where $\mathcal{N}_{0}^{\prime}$ denotes a normalization constant.
\end{proposition}

\begin{proof}
For $J\in\mathcal{L}_{\mathbb{R}}$, the equation
\[
\left( \frac{\gamma}{2}W\left( \partial,\delta\right) +\frac{\alpha_{2}}{2}\right) \varphi_{0}=J
\]
has a unique solution $\varphi_{0}\in\mathcal{L}_{\mathbb{R}}$. Indeed, $\widehat{\varphi_{0}}\left( \kappa\right) =\frac{\widehat{J}\left( \kappa\right) }{\frac{\gamma}{2}A_{w_{\delta}}(\left\Vert \kappa\right\Vert _{p})+\frac{\alpha_{2}}{2}}$ is a test function satisfying $\widehat{\varphi_{0}}\left( 0\right) =0$. Furthermore,
\[
\varphi_{0}\left( x\right) =\mathcal{F}_{\kappa\rightarrow x}^{-1}\left( \frac{1}{\frac{\gamma}{2}A_{w_{\delta}}(\left\Vert \kappa\right\Vert _{p})+\frac{\alpha_{2}}{2}}\right) \ast J(x)=G(\left\Vert x\right\Vert _{p})\ast J(x)\text{ in }\mathcal{D}_{\mathbb{R}}^{\prime}\text{.}
\]
We now change variables in $\mathcal{Z}_{0}(J)$ as $\varphi=\varphi_{0}+\varphi^{\prime}$:
\begin{align*}
\mathcal{Z}_{0}(J)  & =\frac{1}{\mathcal{Z}_{0}}\int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }e^{\left\langle \varphi,J\right\rangle }\text{ }d\mathbb{P}=\frac{e^{\left\langle \varphi_{0},J\right\rangle }}{\mathcal{Z}_{0}}\int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }e^{\left\langle \varphi^{\prime},J\right\rangle }\text{ }d\mathbb{P}^{\prime}\left( \varphi^{\prime}\right) \\
 & =\left( \frac{1}{\mathcal{Z}_{0}}\int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }e^{\left\langle \varphi^{\prime},\left( \frac{\gamma}{2}W\left( \partial,\delta\right) +\frac{\alpha_{2}}{2}\right) \varphi_{0}\right\rangle }\text{ }d\mathbb{P}^{\prime}\left( \varphi^{\prime}\right) \right) e^{\left\langle G\ast J,J\right\rangle }\\
 & =\mathcal{N}_{0}^{\prime}e^{\left\langle G\ast J,J\right\rangle }=\mathcal{N}_{0}^{\prime}\exp\left\{ \int_{\mathbb{Q}_{p}^{N}}\int_{\mathbb{Q}_{p}^{N}}J(x)G(\left\Vert x-y\right\Vert _{p})J(y)d^{N}x\text{ }d^{N}y\right\} .
\end{align*}
Furthermore, by using (\ref{Eq_Char_func}), the characteristic functional of the measure $\mathbb{P}^{\prime}$ is
\[
\int\limits_{\mathcal{L}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) }e^{\sqrt{-1}\langle T,f\rangle}d\mathbb{P}^{\prime}(T)=e^{-\sqrt{-1}\langle\varphi_{0},f\rangle-\frac{1}{2}\mathbb{B}(f,f)},\ \ f\in\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) ,
\]
which means that $\mathbb{P}^{\prime}$ is a Gaussian measure with mean functional $\langle\varphi_{0},\cdot\rangle$ and correlation functional $\mathbb{B}(\cdot,\cdot)$.
\end{proof}

The correlation functions $G_{0}^{\left( m\right) }(x_{1},\ldots,x_{m})$ of the free-field theory are obtained from the functional derivatives of $\mathcal{Z}_{0}(J)$ at $J=0$:

\begin{proposition}
\label{Prop3}
\begin{gather*}
G_{0}^{\left( m\right) }(x_{1},\ldots,x_{m})=\left[ \frac{\delta}{\delta J\left( x_{1}\right) }\cdots\frac{\delta}{\delta J\left( x_{m}\right) }\mathcal{Z}_{0}(J)\right] _{J=0}\\
=\mathcal{N}_{0}^{\prime}\text{ }\frac{\delta}{\delta J\left( x_{1}\right) }\cdots\frac{\delta}{\delta J\left( x_{m}\right) }\exp\left\{ \int_{\mathbb{Q}_{p}^{N}}\int_{\mathbb{Q}_{p}^{N}}J(x)G(\left\Vert x-y\right\Vert _{p})J(y)d^{N}x\text{ }d^{N}y\right\} \mid_{J=0}.
\end{gather*}
\end{proposition}

\begin{remark}
The random variable $\varphi\left( x_{i}\right) $ corresponds to the random variable $\left\langle W,\varphi\right\rangle $, for some $W=W(x_{i})\in\mathcal{L}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) $, see Remark \ref{Nota_10}, which is Gaussian with mean zero and variance $\left\Vert \varphi\right\Vert _{2}^{2}$, see e.g. \cite[Lemma 2.1.5]{Obata}. Then, the correlation functions $G_{0}^{\left( m\right) }(x_{1},\ldots,x_{m})$ obey Wick's theorem:
\begin{equation}
\frac{1}{\mathcal{Z}_{0}}\int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }\prod\limits_{i=1}^{m}\varphi\left( x_{i}\right) d\mathbb{P}=\left\{
\begin{array}[c]{lll}
0 & \text{if} & m\text{ is not even,}\\
 &  & \\
\sum\limits_{\text{pairings}}\mathbb{E}(\varphi\left( x_{i_{1}}\right) \varphi\left( x_{j_{1}}\right) )\cdots\mathbb{E}(\varphi\left( x_{i_{n}}\right) \varphi\left( x_{j_{n}}\right) ) & \text{if} & m=2n,
\end{array}
\right. \label{Wick-Expansion}
\end{equation}
where
\[
\mathbb{E}(\varphi\left( x_{i}\right) \varphi\left( x_{j}\right) ):=\frac{1}{\mathcal{Z}_{0}}\int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }\varphi\left( x_{i}\right) \varphi\left( x_{j}\right) d\mathbb{P}
\]
and $\sum\limits_{\text{pairings}}$ means the sum over all $\frac{\left( 2n\right) !}{2^{n}n!}$ ways of writing $1,\ldots,2n$ as $n$ distinct (unordered) pairs $(i_{1},j_{1}),\ldots,(i_{n},j_{n})$, see e.g. \cite[Proposition 1.2]{Simon-0}. For $m=2$, $G_{0}^{\left( 2\right) }$ is the free two-point function or the free propagator of the field:
\begin{align*}
G_{0}^{\left( 2\right) }\left( x_{1},x_{2}\right)   & =\mathcal{N}_{0}^{\prime}\text{ }\frac{\delta}{\delta J\left( x_{1}\right) }\frac{\delta}{\delta J\left( x_{2}\right) }\exp\left\{ \int_{\mathbb{Q}_{p}^{N}}\int_{\mathbb{Q}_{p}^{N}}J(x)G(\left\Vert x-y\right\Vert _{p})J(y)d^{N}x\text{ }d^{N}y\right\} \mid_{J=0}\\
 & =2\mathcal{N}_{0}^{\prime}\text{ }G(\left\Vert x_{1}-x_{2}\right\Vert _{p})\in\mathcal{L}_{\mathbb{R}}^{\prime}(\mathbb{Q}_{p}^{N}\times\mathbb{Q}_{p}^{N}).
\end{align*}
By using Wick's theorem, all the $2n$-point functions can be expressed as sums of products of two-point functions:
\[
G_{0}^{\left( 2n\right) }(x_{1},\ldots,x_{2n})=\sum\limits_{\text{pairings}}G(\left\Vert x_{i_{1}}-x_{j_{1}}\right\Vert _{p})\cdots G(\left\Vert x_{i_{n}}-x_{j_{n}}\right\Vert _{p}).
\]
\end{remark}

\subsection{Main result: perturbation expansions for $\varphi^{4}$-theories}

In this section we assume that $\mathcal{P}(\varphi)=\varphi^{4}$. This hypothesis allows us to provide explicit formulas which are completely similar to the classical ones, see e.g. \cite[Chapter 2]{Kleinert et al}. At any rate, the techniques presented here can be applied to polynomial interactions of type (\ref{Poly_interactions}). The existence of a convergent power series expansion for $\mathcal{Z}(J)$ (\textit{the perturbation expansion}) in the coupling parameter $\alpha_{4}$ follows from the fact that $\exp\left( -E_{\text{int}}(\varphi)+\left\langle \varphi,J\right\rangle \right) $ is an integrable function, see Lemma \ref{Lemma17} (i), by using the dominated convergence theorem. More precisely, we have
\begin{gather}
\mathcal{Z}(J)=\mathcal{Z}_{0}(J)+\frac{1}{\mathcal{Z}_{0}}\sum\limits_{m=1}^{\infty}\frac{1}{m!}\left( \frac{-\alpha_{4}}{4}\right) ^{m}\int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }\left\{ \int\limits_{\left( \mathbb{Q}_{p}^{N}\right) ^{m}}\left( \prod\limits_{i=1}^{m}\varphi^{4}\left( z_{i}\right) \right) e^{\left\langle \varphi,J\right\rangle }\prod\limits_{i=1}^{m}d^{N}z_{i}\right\} d\mathbb{P}(\varphi)\nonumber\\
=:\mathcal{Z}_{0}(J)+\sum\limits_{m=1}^{\infty}\mathcal{Z}_{m}(J), \label{Eq_35}
\end{gather}
where
\[
\mathcal{Z}_{0}(J)=\frac{1}{\mathcal{Z}_{0}}\int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }e^{\left\langle \varphi,J\right\rangle }d\mathbb{P}(\varphi).
\]
In the case $m\geq1$, by using that $\mathcal{A}$ is an algebra (see Remark \ref{Nota_3} and Lemma \ref{Lemma16}), we can apply Fubini's theorem to obtain that
\begin{align*}
\mathcal{Z}_{m}(J)  & :=\frac{1}{\mathcal{Z}_{0}\text{ }m!}\left( \frac{-\alpha_{4}}{4}\right) ^{m}\int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }\left\{ \int\limits_{\left( \mathbb{Q}_{p}^{N}\right) ^{m}}\left( \prod\limits_{i=1}^{m}\varphi^{4}\left( z_{i}\right) \right) e^{\left\langle \varphi,J\right\rangle }\prod\limits_{i=1}^{m}d^{N}z_{i}\right\} d\mathbb{P}(\varphi)\\
 & =\frac{1}{\mathcal{Z}_{0}\text{ }m!}\left( \frac{-\alpha_{4}}{4}\right) ^{m}\int\limits_{\left( \mathbb{Q}_{p}^{N}\right) ^{m}}\left\{ \int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }\left( \prod\limits_{i=1}^{m}\varphi^{4}\left( z_{i}\right) \right) e^{\left\langle \varphi,J\right\rangle }d\mathbb{P}(\varphi)\right\} \prod\limits_{i=1}^{m}d^{N}z_{i}.
\end{align*}
Then
\begin{equation}
\mathcal{Z}_{m}(0)=\frac{1}{m!}\left( \frac{-\alpha_{4}}{4}\right) ^{m}\int\limits_{\left( \mathbb{Q}_{p}^{N}\right) ^{m}}G_{0}^{\left( 4m\right) }\left( z_{1},z_{1},z_{1},z_{1},\ldots,z_{m},z_{m},z_{m},z_{m}\right) \prod\limits_{i=1}^{m}d^{N}z_{i}, \label{Eq_36}
\end{equation}
for $m\geq1$.
Therefore, from (\ref{Eq_35})-(\ref{Eq_36}) with $J=0$, and using $\mathcal{Z}=\mathcal{Z}(0)$ and $\mathcal{Z}_{m}:=\mathcal{Z}_{m}(0)$ for $m\geq1$,
\[
\mathcal{Z}=1+\sum\limits_{m=1}^{\infty}\mathcal{Z}_{m}.
\]
Now, by using Propositions \ref{Prop1}, \ref{Prop3} and (\ref{Eq_35}),
\begin{gather*}
G^{\left( n\right) }\left( x_{1},\ldots,x_{n}\right) =\frac{\mathcal{Z}_{0}}{\mathcal{Z}}\left[ \frac{\delta}{\delta J\left( x_{1}\right) }\cdots\frac{\delta}{\delta J\left( x_{n}\right) }\mathcal{Z}(J)\right] _{J=0}\\
=\frac{\mathcal{Z}_{0}}{\mathcal{Z}}\left[ \frac{\delta}{\delta J\left( x_{1}\right) }\cdots\frac{\delta}{\delta J\left( x_{n}\right) }\mathcal{Z}_{0}(J)\right] _{J=0}+\frac{\mathcal{Z}_{0}}{\mathcal{Z}}\left[ \frac{\delta}{\delta J\left( x_{1}\right) }\cdots\frac{\delta}{\delta J\left( x_{n}\right) }\sum\limits_{m=1}^{\infty}\mathcal{Z}_{m}(J)\right] _{J=0}\\
=\frac{\mathcal{Z}_{0}}{\mathcal{Z}}G_{0}^{\left( n\right) }\left( x_{1},\ldots,x_{n}\right) +\frac{\mathcal{Z}_{0}}{\mathcal{Z}}\left[ \frac{\delta}{\delta J\left( x_{1}\right) }\cdots\frac{\delta}{\delta J\left( x_{n}\right) }\sum\limits_{m=1}^{\infty}\mathcal{Z}_{m}(J)\right] _{J=0}.
\end{gather*}

\begin{lemma}
\begin{multline*}
\frac{\delta}{\delta J\left( x_{1}\right) }\cdots\frac{\delta}{\delta J\left( x_{n}\right) }\sum\limits_{m=1}^{\infty}\mathcal{Z}_{m}(J)=\\
\frac{1}{\mathcal{Z}_{0}}\sum\limits_{m=1}^{\infty}\frac{1}{m!}\left( \frac{-\alpha_{4}}{4}\right) ^{m}\int\limits_{\left( \mathbb{Q}_{p}^{N}\right) ^{m}}\left\{ \int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }\left( \prod\limits_{i=1}^{m}\varphi^{4}\left( z_{i}\right) \right) \left( \prod\limits_{i=1}^{n}\varphi\left( x_{i}\right) \right) e^{\left\langle \varphi,J\right\rangle }d\mathbb{P}(\varphi)\right\} \prod\limits_{i=1}^{m}d^{N}z_{i}.
\end{multline*}
\end{lemma}

\begin{proof}
We recall that, by the proof of Lemma \ref{Lemma16},
\[
e^{\left\langle \varphi,J\right\rangle }\mathcal{J}\left( \varphi\right) :=e^{\left\langle \varphi,J\right\rangle }\int\limits_{\left( \mathbb{Q}_{p}^{N}\right) ^{m}}\left( \prod\limits_{i=1}^{m}\varphi^{4}\left( z_{i}\right) \right) \prod\limits_{i=1}^{m}d^{N}z_{i}
\]
is a finite sum of terms of the form
\[
e^{\left\langle \varphi,J\right\rangle }\left( \prod\limits_{k=1}^{m}p^{lNe_{i_{k}}}\left\langle \varphi,W_{\boldsymbol{j}_{k}}\right\rangle ^{e_{i_{k}}}\right) \prod\limits_{k=1}^{m}\Omega\left( p^{l}\left\Vert x_{k}-\boldsymbol{j}_{k}\right\Vert _{p}\right) ;
\]
then, by the definition of $\mathcal{Z}_{m}(J)$, it is sufficient to compute
\begin{multline*}
\frac{\delta}{\delta J\left( x_{1}\right) }\cdots\frac{\delta}{\delta J\left( x_{n}\right) }\sum\limits_{m=1}^{\infty}\frac{1}{\mathcal{Z}_{0}\text{ }m!}\left( \frac{-\alpha_{4}}{4}\right) ^{m}\times\\
\int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }\left\{ \left( \prod\limits_{k=1}^{m}p^{lNe_{i_{k}}}\left\langle \varphi,W_{\boldsymbol{j}_{k}}\right\rangle ^{e_{i_{k}}}\right) e^{\left\langle \varphi,J\right\rangle }\right\} d\mathbb{P}(\varphi).
\end{multline*}
We first establish that
\begin{gather*}
{\Huge D}_{\theta_{1}}\left\{ \sum\limits_{m=1}^{\infty}\frac{1}{\mathcal{Z}_{0}m!}\left( \frac{-\alpha_{4}}{4}\right) ^{m}\int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }\left\{ \left( \prod\limits_{k=1}^{m}p^{lNe_{i_{k}}}\left\langle \varphi,W_{\boldsymbol{j}_{k}}\right\rangle ^{e_{i_{k}}}\right) e^{\left\langle \varphi,J\right\rangle }\right\} d\mathbb{P}(\varphi)\right\} \\
=\sum\limits_{m=1}^{\infty}\frac{1}{\mathcal{Z}_{0}m!}\left( \frac{-\alpha_{4}}{4}\right) ^{m}\int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }\left\{ \left( \prod\limits_{k=1}^{m}p^{lNe_{i_{k}}}\left\langle \varphi,W_{\boldsymbol{j}_{k}}\right\rangle ^{e_{i_{k}}}\right) \left\langle \varphi,\theta_{1}\right\rangle e^{\left\langle \varphi,J\right\rangle }\right\} d\mathbb{P}(\varphi),
\end{gather*}
by using the reasoning given in the proof of Lemma \ref{Lemma18}. Since
\[
\left( \prod\limits_{k=1}^{m}p^{lNe_{i_{k}}}\left\langle \varphi,W_{\boldsymbol{j}_{k}}\right\rangle ^{e_{i_{k}}}\right) \left\langle \varphi,\theta_{1}\right\rangle e^{\left\langle \varphi,J\right\rangle }\text{ is an integrable function,}
\]
cf. Remark \ref{Nota_3}, further derivatives can be calculated in the same way. Consequently,
\begin{multline*}
{\Huge D}_{\theta_{1}}\cdots{\Huge D}_{\theta_{n}}\sum\limits_{m=1}^{\infty}\mathcal{Z}_{m}(J)=\\
\frac{1}{\mathcal{Z}_{0}}\sum\limits_{m=1}^{\infty}\frac{1}{m!}\left( \frac{-\alpha_{4}}{4}\right) ^{m}\int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }\left\{ \left( \prod\limits_{i=1}^{n}\left\langle \varphi,\theta_{i}\right\rangle \right) e^{\left\langle \varphi,J\right\rangle }\mathcal{J}\left( \varphi\right) \right\} d\mathbb{P}(\varphi).
\end{multline*}
The announced formula follows from Remark \ref{Nota_eqqui_Der}.
\end{proof}

In conclusion, we have the following result:

\begin{theorem}
\label{Theorem2}Assume that $\mathcal{P}(\varphi)=\varphi^{4}$. The $n$-point correlation function of the field $\varphi$ admits the following convergent power series in the coupling constant:
\[
G^{\left( n\right) }\left( x_{1},\ldots,x_{n}\right) =\frac{\mathcal{Z}_{0}}{\mathcal{Z}}\left\{ G_{0}^{\left( n\right) }\left( x_{1},\ldots,x_{n}\right) +\sum\limits_{m=1}^{\infty}G_{m}^{\left( n\right) }\left( x_{1},\ldots,x_{n}\right) \right\} ,
\]
where
\begin{multline*}
G_{m}^{\left( n\right) }\left( x_{1},\ldots,x_{n}\right) :=\\
\frac{1}{m!}\left( \frac{-\alpha_{4}}{4}\right) ^{m}\int\limits_{\left( \mathbb{Q}_{p}^{N}\right) ^{m}}G_{0}^{\left( n+4m\right) }\left( z_{1},z_{1},z_{1},z_{1},\ldots,z_{m},z_{m},z_{m},z_{m},x_{1},\ldots,x_{n}\right) \prod\limits_{i=1}^{m}d^{N}z_{i}.
\end{multline*}
\end{theorem}

The free-field correlation functions $G_{0}^{\left( n+4m\right) }$ in the sum may now be Wick-expanded as in (\ref{Wick-Expansion}) into sums over products of propagators $G$.
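The combinatorics behind this expansion is easy to implement. The following sketch is an illustrative aside: it enumerates the $\frac{\left( 2n\right) !}{2^{n}n!}$ pairings appearing in (\ref{Wick-Expansion}) and sums the corresponding products of propagator values. The propagator \texttt{G} and the distance \texttt{dist} are supplied by the caller; the ones used in the demo are placeholders on the real line, not the Green function $G(\left\Vert x-y\right\Vert _{p})$ of this paper.
\begin{verbatim}
# Wick expansion: G_0^{(2n)}(x_1,...,x_{2n}) as a sum over all pairings
# of products of two-point functions, cf. (Wick-Expansion).
def pairings(points):
    """Yield all ways of splitting `points` (even length) into unordered pairs."""
    if not points:
        yield []
        return
    first, rest = points[0], points[1:]
    for k, partner in enumerate(rest):
        remaining = rest[:k] + rest[k + 1:]
        for tail in pairings(remaining):
            yield [(first, partner)] + tail

def free_correlator(xs, G, dist):
    """Sum over pairings of prod_k G(dist(x_{i_k}, x_{j_k})); 0 for odd m."""
    if len(xs) % 2 == 1:
        return 0.0
    total = 0.0
    for pairing in pairings(list(range(len(xs)))):
        term = 1.0
        for i, j in pairing:
            term *= G(dist(xs[i], xs[j]))
        total += term
    return total

# demo with placeholder propagator and distance (real line, not Q_p^N):
print(free_correlator([0.0, 1.0, 2.0, 3.0],
                      G=lambda r: 1.0 / (1.0 + r),
                      dist=lambda a, b: abs(a - b)))
\end{verbatim}
For four points the function sums the three pairings $(12)(34)$, $(13)(24)$ and $(14)(23)$, in agreement with the $m=4$ case of (\ref{Wick-Expansion}).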
\section{\label{Section_Landau_Ginzburg}Ginzburg-Landau phenomenology}

In this section we consider the following non-Archimedean Ginzburg-Landau free energy:
\begin{multline*}
E\left( \varphi,J\right) :=E(\varphi,J;\delta,\gamma,\alpha_{2},\alpha_{4})=\frac{\gamma(T)}{2}\iint\limits_{\mathbb{Q}_{p}^{N}\times\mathbb{Q}_{p}^{N}}\frac{\left\{ \varphi\left( x\right) -\varphi\left( y\right) \right\} ^{2}}{w_{\delta}\left( \left\Vert x-y\right\Vert _{p}\right) }d^{N}xd^{N}y\\
+\frac{\alpha_{2}(T)}{2}\int\limits_{\mathbb{Q}_{p}^{N}}\varphi^{2}\left( x\right) d^{N}x+\frac{\alpha_{4}(T)}{4}\int\limits_{\mathbb{Q}_{p}^{N}}\varphi^{4}\left( x\right) d^{N}x-\int\limits_{\mathbb{Q}_{p}^{N}}\varphi\left( x\right) J(x)d^{N}x,
\end{multline*}
where $J$, $\varphi\in\mathcal{D}_{\mathbb{R}}$, and
\[
\gamma(T)=\gamma+O(\left( T-T_{c}\right) );\text{ \ }\alpha_{2}(T)=\left( T-T_{c}\right) +O(\left( T-T_{c}\right) ^{2});\text{ \ }\alpha_{4}(T)=\alpha_{4}+O(\left( T-T_{c}\right) ),
\]
where $T$ is the temperature, $T_{c}$ is the critical temperature, and $\gamma>0$, $\alpha_{4}>0$.

In this section we consider that $\varphi\in\mathcal{D}_{\mathbb{R}}^{l}$ is the local order parameter of a continuous Ising system with `external magnetic field' $J\in\mathcal{D}_{\mathbb{R}}^{l}$. The system is contained in the ball $B_{l}^{N}$. We divide this ball into sub-balls (boxes) $B_{-l}^{N}\left( \boldsymbol{i}\right) $, $\boldsymbol{i}\in G_{l}$. The volume of each of these balls is $p^{-lN}$ and the radius is $a:=p^{-l}$. In order to compare with the classical case, the parameter $a$ is the length of the side of each box. Each $\varphi\left( \boldsymbol{i}\right) \in\mathbb{R}$ represents the `average magnetization' in the ball $B_{-l}^{N}\left( \boldsymbol{i}\right) $. We take $\varphi\left( x\right) =\sum_{\boldsymbol{i}\in G_{l}}\varphi\left( \boldsymbol{i}\right) \Omega\left( p^{l}\left\Vert x-\boldsymbol{i}\right\Vert _{p}\right) $, which is a locally constant function. Notice that the distance between two points in the ball $\boldsymbol{i}+p^{l}\mathbb{Z}_{p}^{N}$ is $\leq p^{-l}$. Then $\varphi\left( x\right) $ varies appreciably only over distances larger than $p^{-l}$. On the other hand, since $\widehat{\varphi}\left( \kappa\right) =0$ for $\left\Vert \kappa\right\Vert _{p}>p^{l}$, we get that $\Lambda=p^{l}$, which is the inverse of $a$, works as an ultraviolet cut-off. Then, considering $\varphi\left( \boldsymbol{i}\right) \in\mathbb{R}$ as the continuous spin at the site $\boldsymbol{i}\in G_{l}$, the partition function of our continuous Ising model is
\[
\mathcal{Z}^{\left( l\right) }\left( \beta\right) =\sum\limits_{\left\{ \varphi\left( \boldsymbol{i}\right) ;\text{ }\boldsymbol{i}\in G_{l}\right\} }e^{-\beta E(\varphi\left( \boldsymbol{i}\right) ,J\left( \boldsymbol{i}\right) )}.
\]

\subsection{Properties of $E$}

\subsubsection{Non-locality}

The energy functional $E$ is non-local (i.e. it encodes long range interactions) due to the presence of the non-local operator $\boldsymbol{W}_{\delta}$. In the non-Archimedean case all the known Laplacians are non-local operators. This is a central difference from the classical Ginzburg-Landau energy functionals, which depend on short range interactions.
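To make the non-locality concrete, the following sketch evaluates a naive discretization of $E(\varphi,0)$ for $N=1$ on the lattice $G_{l}$: every pair of boxes interacts, with a strength dictated by the $p$-adic distance between their centers. The following choices are assumptions made only for this illustration: the points of $G_{l}$ are encoded by the digit tuples of the rationals $\sum_{k=-l}^{l-1}a_{k}p^{k}$, $a_{k}\in\left\{ 0,\ldots,p-1\right\} $; the kernel is taken to be $w_{\delta}(r)=r^{1+\delta}$ (a Taibleson-type choice); and the double integral is restricted to $B_{l}\times B_{l}$.
\begin{verbatim}
# Discretized Ginzburg-Landau energy, N = 1, on the lattice G_l.
# ASSUMPTIONS (for illustration only): points of G_l are encoded by their
# digit tuples (a_{-l},...,a_{l-1}); kernel w_delta(r) = r^{1+delta};
# the double integral is restricted to B_l x B_l.
import itertools
import numpy as np

p, l, delta = 3, 1, 1.5
gamma, alpha2, alpha4 = 1.0, -0.5, 1.0

digits = list(itertools.product(range(p), repeat=2 * l))

def p_norm_diff(a, b):
    # ||a - b||_p = p^{-k0}, where k0 is the first exponent (starting at
    # k = -l) at which the digit tuples differ.
    for idx, (da, db) in enumerate(zip(a, b)):
        if da != db:
            return float(p) ** (-(idx - l))
    return 0.0                          # same point

def energy(phi):
    vol = float(p) ** (-l)              # volume of each box B_{-l}(i)
    e = 0.5 * alpha2 * vol * np.sum(phi ** 2)
    e += 0.25 * alpha4 * vol * np.sum(phi ** 4)
    for i in range(len(digits)):        # non-local term: ALL pairs interact
        for j in range(len(digits)):
            if i != j:
                r = p_norm_diff(digits[i], digits[j])
                e += 0.5 * gamma * vol ** 2 * (phi[i] - phi[j]) ** 2 \
                     / r ** (1 + delta)
    return e

phi = np.random.default_rng(1).standard_normal(len(digits))
print(energy(phi))
\end{verbatim}
In contrast with a classical nearest-neighbour discretization, the double loop couples every pair of sites; this is the lattice counterpart of the non-locality of $\boldsymbol{W}_{\delta}$.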
\subsubsection{Translational and rotational symmetries}

We set $GL_{N}\left( \mathbb{Z}_{p}\right) $ for the compact subgroup of $GL_{N}\left( \mathbb{Q}_{p}\right) $ consisting of all invertible matrices with entries in $\mathbb{Z}_{p}$, and define
\[
GL_{N}^{\circ}\left( \mathbb{Z}_{p}\right) =\left\{ M\in GL_{N}\left( \mathbb{Z}_{p}\right) ;\left\vert \det M\right\vert_{p}=1\right\} .
\]
This last group preserves the norm $\left\Vert \cdot\right\Vert_{p}$, i.e., $\left\Vert x\right\Vert_{p}=\left\Vert Mx\right\Vert_{p}$ for any $x\in\mathbb{Q}_{p}^{N}$ and any $M\in GL_{N}^{\circ}\left( \mathbb{Z}_{p}\right) $. If $J=0$, then $E$ is invariant under the transformations of the form
\[
x\rightarrow a+Mx\text{, \ for }a\in\mathbb{Q}_{p}^{N},M\in GL_{N}^{\circ}\left( \mathbb{Z}_{p}\right) ,
\]
i.e., $E\left( \varphi\left( x\right) \right) =E\left( \varphi\left( a+Mx\right) \right) $.

\subsubsection{$\boldsymbol{Z}_{2}$ symmetry}

If $J=0$, then $E$ is invariant under $\varphi\rightarrow-\varphi$.

\subsection{A motion equation in $\mathcal{D}_{\mathbb{R}}^{l}$}

We now consider the following minimization problem:
\begin{equation}
\left\{
\begin{array}
[c]{ll}
(1) & \min_{\varphi\in\mathcal{D}_{\mathbb{R}}^{l}}E(\varphi,0);\\
& \\
(2) & \int_{\mathbb{Q}_{p}^{N}}\varphi\left( x\right) d^{N}x=C,
\end{array}
\right. \label{Eq_Problem_1}
\end{equation}
where $C$ is a real constant. Since $\mathcal{D}_{\mathbb{R}}^{l}\simeq\left( \mathbb{R}^{\#G_{l}},\left\vert \cdot\right\vert \right) $, the problem (\ref{Eq_Problem_1}) is just a minimization problem in $\mathbb{R}^{\#G_{l}}$. We use all the notation and results given in the proof of Lemma \ref{Lemma7}. In particular,
\begin{align*}
E(\varphi,0) & =E_{0}^{\left( l\right) }\left( \varphi,0\right) +\frac{p^{-lN}\alpha_{4}}{4}\sum\limits_{\boldsymbol{i}\in G_{l}}\varphi^{4}\left( \boldsymbol{i}\right) \\
& =\left[ \varphi\left( \boldsymbol{i}\right) \right]_{\boldsymbol{i}\in G_{l}}^{T}U\left[ \varphi\left( \boldsymbol{i}\right) \right]_{\boldsymbol{i}\in G_{l}}+\frac{p^{-lN}\alpha_{4}}{4}\sum\limits_{\boldsymbol{i}\in G_{l}}\varphi^{4}\left( \boldsymbol{i}\right) ,
\end{align*}
and
\begin{equation}
\int_{\mathbb{Q}_{p}^{N}}\varphi\left( x\right) d^{N}x=p^{-lN}\sum\limits_{\boldsymbol{i}\in G_{l}}\varphi\left( \boldsymbol{i}\right) =C. \label{Eq_condition_Integral}
\end{equation}
We proceed as in the proof of Lemma \ref{Lemma7}. There is a polydisc $\left\vert \varphi\left( \boldsymbol{i}\right) \right\vert \leq D$ for all $\boldsymbol{i}\in G_{l}$ such that $E(\varphi,0)\geq0$ outside of this polydisc. Consequently, $E(\varphi,0)$ has global minima. In order to determine the minima of $E(\varphi,0)$ satisfying (\ref{Eq_condition_Integral}), we use the technique of Lagrange multipliers. We set
\[
E(\varphi,\lambda)=E(\varphi,0)+\lambda\left( p^{-lN}\sum\limits_{\boldsymbol{i}\in G_{l}}\varphi\left( \boldsymbol{i}\right) -C\right) .
\]
Then, the necessary conditions are
\begin{equation}
\frac{\partial E(\varphi,\lambda)}{\partial\varphi\left( \boldsymbol{i}\right) }=\frac{\partial E_{0}^{\left( l\right) }\left( \varphi,0\right) }{\partial\varphi\left( \boldsymbol{i}\right) }+p^{-lN}\alpha_{4}\varphi^{3}\left( \boldsymbol{i}\right) +p^{-lN}\lambda=0\text{ for all }\boldsymbol{i}\in G_{l}.
\label{Eq_contions}
\end{equation}
Consequently, $p^{lN}\frac{\partial E_{0}^{\left( l\right) }\left( \varphi,0\right) }{\partial\varphi\left( \boldsymbol{i}\right) }+\alpha_{4}\varphi^{3}\left( \boldsymbol{i}\right) $ does not depend on $\boldsymbol{i}$, nor on $l$, $N$, i.e.,
\begin{equation}
\varphi\left( \boldsymbol{i}\right) =\varphi_{0}\text{ for all }\boldsymbol{i}\in G_{l}. \label{Eq_con_solution}
\end{equation}
The conditions (\ref{Eq_contions}) can be rewritten as
\begin{equation}
-\frac{\gamma}{2}\boldsymbol{W}_{\delta}^{\left( l\right) }\varphi\left( x\right) +\left( \frac{\gamma}{2}d\left( l,w_{\delta}\right) +\alpha_{2}+\lambda\right) \varphi\left( x\right) +\alpha_{4}\varphi^{3}\left( x\right) =0. \label{Eq_motion_equation_lA}
\end{equation}
Due to (\ref{Eq_con_solution}), we look for a constant solution, i.e., for a solution of the form $\varphi\left( x\right) =\varphi_{0}\Omega\left( p^{-l}\left\Vert x\right\Vert_{p}\right) $, where $\varphi_{0}$ is a constant. Since
\[
\boldsymbol{W}_{\delta}^{\left( l\right) }\Omega\left( p^{-l}\left\Vert x\right\Vert_{p}\right) =-\left( \int\limits_{\mathbb{Q}_{p}^{N}\setminus B_{l}^{N}}\frac{d^{N}y}{w_{\delta}\left( \Vert y\Vert_{p}\right) }\right) \Omega\left( p^{-l}\left\Vert x\right\Vert_{p}\right) ,
\]
cf. (\ref{Eq_D_l}), we get from (\ref{Eq_motion_equation_lA}) that
\[
\varphi_{0}\left( \alpha_{4}\varphi_{0}^{2}+\left\{ \frac{\gamma}{2}\int\limits_{\mathbb{Q}_{p}^{N}\setminus B_{l}^{N}}\frac{d^{N}y}{w_{\delta}\left( \Vert y\Vert_{p}\right) }+\frac{\gamma}{2}d\left( l,w_{\delta}\right) +\lambda\right\} +\alpha_{2}\right) =0.
\]
Since the solution should be independent of $l$, $N$, we should take
\[
\frac{\gamma}{2}\int\limits_{\mathbb{Q}_{p}^{N}\setminus B_{l}^{N}}\frac{d^{N}y}{w_{\delta}\left( \Vert y\Vert_{p}\right) }+\frac{\gamma}{2}d\left( l,w_{\delta}\right) +\lambda=0.
\]
So we have established the following result:

\begin{theorem}
\label{Theorem3}The minimizers of the functional $E(\varphi,0)$, $\varphi\in\mathcal{D}_{\mathbb{R}}^{l}$, are constant solutions of
\begin{equation}
\left( -\frac{\gamma}{2}\boldsymbol{W}_{\delta}^{\left( l\right) }+\alpha_{2}-\frac{\gamma}{2}\int\limits_{\mathbb{Q}_{p}^{N}\setminus B_{l}^{N}}\frac{d^{N}y}{w_{\delta}\left( \Vert y\Vert_{p}\right) }\right) \varphi\left( x\right) +\alpha_{4}\varphi^{3}\left( x\right) =0, \label{Eq_motion_equation_1}
\end{equation}
i.e., solutions of
\begin{equation}
\varphi\left( \alpha_{4}\varphi^{2}+\alpha_{2}\right) =0. \label{Eq_ground_states}
\end{equation}
\end{theorem}

\begin{remark}
Notice that the non-zero solutions of (\ref{Eq_ground_states}) do not belong to $\mathcal{L}_{\mathbb{R}}^{l}$. The limit $l\rightarrow\infty$ (`in some sense') of (\ref{Eq_motion_equation_1}) is
\begin{equation}
\left( -\frac{\gamma}{2}\boldsymbol{W}_{\delta}+\alpha_{2}\right) \varphi\left( x\right) +\alpha_{4}\varphi^{3}\left( x\right) =0. \label{Eq_motion_equation_2}
\end{equation}
Then, since $\boldsymbol{W}_{\delta}C=0$ for any constant $C$, the constant solutions of (\ref{Eq_motion_equation_2}) are exactly those of (\ref{Eq_ground_states}).
\end{remark}

\subsection{Spontaneous symmetry breaking}

If $J=0$, the field $\varphi\in\mathcal{D}_{\mathbb{R}}^{l}$ is a minimum of the energy functional $E$ if it satisfies (\ref{Eq_ground_states}). When $T>T_{c}$ we have $\alpha_{2}>0$ and the ground state is $\varphi_{0}=0$. In contrast, when $T<T_{c}$, $\alpha_{2}<0$, there is a degenerate ground state $\pm\varphi_{0}$ with
\[
\varphi_{0}=\sqrt{-\frac{\alpha_{2}}{\alpha_{4}}}.
\]
This implies that below $T_{c}$ the system must pick one of the two states $+\varphi_{0}$ or $-\varphi_{0}$, which means that there is a spontaneous symmetry breaking. A central difference between the non-Archimedean Ginzburg-Landau theory and the classical one comes from the fact that the two-point correlation functions decay at infinity as a power of $\left\Vert \cdot\right\Vert_{p}$, while in the classical case they decay exponentially, see e.g. \cite[Section 11.3.1]{KKZuniga}, \cite[Section 2.8]{Koch}. In the non-Archimedean case, the connection between critical exponents and correlation functions is an open problem.

\section{\label{Section_Wick_rotation}The Wick rotation}

The classical generating functional of a $\mathcal{P}(\varphi)$-theory with Lagrangian density $E_{0}(\varphi)+E_{\text{int}}(\varphi)+E_{\text{source}}(\varphi,J)$ in Minkowski space is
\[
\mathcal{Z}^{\text{phys}}(J)=\frac{\int D(\varphi)e^{\sqrt{-1}\left\{ E_{0}(\varphi)+E_{\text{int}}(\varphi)+E_{\text{source}}(\varphi,J)\right\} }}{\int D(\varphi)e^{\sqrt{-1}\left\{ E_{0}(\varphi)+E_{\text{int}}(\varphi)\right\} }}.
\]
A natural $p$-adic analogue of this functional is
\[
\mathcal{Z}_{\mathbb{C}}(J)=\frac{\displaystyle\int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }e^{\sqrt{-1}\left\{ E_{\text{int}}(\varphi)+E_{\text{source}}(\varphi,J)\right\} }d\mathbb{P}(\varphi)}{\displaystyle\int\limits_{\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) }e^{\sqrt{-1}\left\{ E_{0}(\varphi)+E_{\text{int}}(\varphi)\right\} }d\mathbb{P}(\varphi)},
\]
in which the integrand defines a complex-valued measure. The key point is that $e^{\sqrt{-1}\left\{ E_{0}(\varphi)+E_{\text{int}}(\varphi)+E_{\text{source}}(\varphi,J)\right\} }$ is integrable, see \cite[Theorem 1.9]{Hida et al}, and then the techniques presented here can be applied to $\mathcal{Z}_{\mathbb{C}}(J)$ and its discrete version
\[
\mathcal{Z}_{\mathbb{C}}^{\left( l\right) }(J)=\frac{\displaystyle\int\limits_{\mathcal{L}_{\mathbb{R}}^{l}\left( \mathbb{Q}_{p}^{N}\right) }e^{\sqrt{-1}\left\{ E_{\text{int}}(\varphi)+E_{\text{source}}(\varphi,J)\right\} }d\mathbb{P}_{l}(\varphi)}{\displaystyle\int\limits_{\mathcal{L}_{\mathbb{R}}^{l}\left( \mathbb{Q}_{p}^{N}\right) }e^{\sqrt{-1}\left\{ E_{0}(\varphi)+E_{\text{int}}(\varphi)\right\} }d\mathbb{P}_{l}(\varphi)},\text{ \ }l\in\mathbb{N}\smallsetminus\left\{ 0\right\} .
\]
In particular, a version of Theorem \ref{Theorem2} is valid for $\mathcal{Z}_{\mathbb{C}}(J)$. To explain the connection of these constructions with the Wick rotation, we rewrite (\ref{Eq_Char_func}) as follows:
\begin{equation}
\int\limits_{\mathcal{L}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) }e^{\sqrt{-1}\lambda\langle W,f\rangle}d\mathbb{P}(W)=e^{-\frac{\left\vert \lambda\right\vert^{2}}{2}\mathbb{B}(f,f)}\text{,}\ \ f\in\mathcal{L}_{\mathbb{R}}\left( \mathbb{Q}_{p}^{N}\right) \text{, for }\lambda\in\mathbb{C}\text{.} \label{Eq_Char_func_2}
\end{equation}
This formula holds true in the case $\lambda\in\mathbb{R}$. The integral in the left-hand side of (\ref{Eq_Char_func_2}) admits an entire analytic continuation to the complex plane, see \cite[Proposition 2.4]{Hida et al}. Furthermore, this fact is exactly the Analyticity Axiom (OS0) in the Euclidean axiomatic quantum field theory presented in \cite[Chapter 6]{Glimm-Jaffe}. A field $\varphi:\mathbb{Q}_{p}^{N}\rightarrow\mathbb{R}$ is a function from the spacetime $\mathbb{Q}_{p}^{N}$ into $\mathbb{R}$ (the target space).
We perform a Wick rotation in the target space:
\[
\begin{array}
[c]{lll}
\mathbb{R} & \rightarrow & \sqrt{-1}\mathbb{R}\\
& & \\
\varphi & \rightarrow & \sqrt{-1}\varphi.
\end{array}
\]
Then
\[
\int\limits_{\mathcal{L}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) }e^{\sqrt{-1}\langle T,\sqrt{-1}\varphi\rangle}d\mathbb{P}(T)=\int\limits_{\mathcal{L}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) }e^{\sqrt{-1}\langle\sqrt{-1}T,\varphi\rangle}d\mathbb{P}(T)=e^{-\frac{1}{2}\mathbb{B}(\varphi,\varphi)}\text{.}
\]
Changing variables as $W=\sqrt{-1}T$, we get
\[
e^{-\frac{1}{2}\mathbb{B}(\varphi,\varphi)}=\int\limits_{\sqrt{-1}\mathcal{L}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) }e^{\sqrt{-1}\langle W,\varphi\rangle}d\mathbb{P}^{\prime}(W).
\]
Therefore, $\mathbb{P}^{\prime}$ is a probability measure on $\sqrt{-1}\mathcal{L}_{\mathbb{R}}^{\prime}\left( \mathbb{Q}_{p}^{N}\right) $ with correlation functional $\mathbb{B}(\cdot,\cdot)$, which can be identified with $\mathbb{P}$.
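Though the setting above is infinite-dimensional, the identity behind (\ref{Eq_Char_func_2}) can be sanity-checked in a finite-dimensional analogue, where the Gaussian measure is an ordinary multivariate normal; the covariance below is an arbitrary stand-in for $\mathbb{B}$, chosen only for illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Finite-dimensional analogue: W ~ N(0, B) on R^n plays the role of
# the Gaussian measure P, and B(f, f) = f^T B f.
n = 4
A = rng.normal(size=(n, n))
B = A @ A.T + np.eye(n)      # symmetric positive-definite covariance
f = rng.normal(size=n)

lam = 0.7                    # real lambda; the text continues it to C
W = rng.multivariate_normal(np.zeros(n), B, size=200_000)
lhs = np.mean(np.exp(1j * lam * (W @ f)))
rhs = np.exp(-0.5 * lam**2 * (f @ B @ f))
print(lhs, rhs)              # agree up to Monte Carlo error
\end{verbatim}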
\section*{Introduction} In recent years, the growing demand for fast electronics has promoted the rise of applications based on Josephson vortices, i.e., solitons~\cite{And10,Lik12,Wus18}. Nonetheless, the effective interplay between thermal transport and soliton dynamics is far from being fully explored. Indeed, the influence on the dynamics of solitons of a homogeneous temperature gradient applied along a long Josephson junction (LJJ) (namely, from one edge of the junction to the other) was studied earlier, both theoretically and experimentally, in Refs.~\cite{Log94,Gol95,Kra97}. Instead, the issue of the soliton-sustained coherent thermal transport in a LJJ as a temperature gradient is imposed across the system (namely, as the electrodes forming the device reside at different temperatures) was addressed only recently in Refs.~\cite{Gua18,GuaSolBra18}. The latter work reports the first endeavour to combine the physics of solitons and phase-coherent caloritronics~\cite{Mak65,Gia06,MarSol14,ForGia17}. This research field deals with the manipulation of heat currents in mesoscopic superconducting devices~\cite{Gia12,Mar14,Mar15,Sol16,ForBla16,Pao17,ForGia17,For17,Pao18,Tim18} by mastering the phase difference of the superconducting order parameter. In this framework, the thermal modulation induced by the external magnetic field was first demonstrated in superconducting quantum-interference devices (SQUIDs)~\cite{GiaMar12,Gia12} and then in short Josephson junctions (JJs)~\cite{Gia13,Mar14}. Moreover, hysteretic behaviours in temperature-biased Josephson devices were also recently discussed in Refs.~\cite{Gua17,GuaSol18}.
\begin{figure}[hb!!]
\center
\includegraphics[width=0.75\columnwidth]{Figure01.pdf}
\caption{\textbf{Fluxon chain in a magnetically driven, thermally biased LJJ.} An \emph{S-I-S} LJJ excited by an external in-plane magnetic field $H_{\text{ext}}(t)$. The length and the width of the junction are $L\gg\lambda_{_{\text{J}}}$ and $W \ll\lambda_{_{\text{J}}}$, respectively, where $\lambda_{_{\text{J}}}$ is the Josephson penetration depth. Moreover, the thickness $D_2 \ll\lambda_{_{\text{J}}}$ of the electrode $S_2$ is indicated. A chain of fluxons, i.e., solitons, along the junction is depicted. The incoming, i.e., $P_{\text{in}}\left ( T_1,T_2,\varphi,V \right )$, and outgoing, i.e., $P_{e\text{-ph}}\left ( T_2,T_{\text{bath}}\right )$, thermal powers in $S_2$ are also represented, for $T_1>T_2(x)>T_{\text{bath}}$.}
\label{Figure01}
\end{figure}
Although little is known about caloritronic effects in these systems, LJJs are still nowadays an active research field, both theoretically~\cite{Gul07,Mon12,Val14,Gua15,Zel15,Pan15,GuaValSpa16,GuaSol17,Hil18,Wus18} and experimentally~\cite{Ooi07,Lik12,Fed12,Mon13,Gra14,Kos14,Fed14,Vet15,Gol17}. In our work, we explore theoretically the effects of a sinusoidal magnetic drive on the thermal transport across a temperature-biased LJJ (see Fig.~\ref{Figure01}). We show that the behavior of solitons along the system is reflected in the fast evolution of both the heat power through the junction and the temperature of a cold ``thermally floating'' electrode of the device. Accordingly, we observe temperature peaks in correspondence with the magnetically induced solitons. Moreover, in sweeping back and forth the driving field, hysteretic thermal phenomena come to light.
Finally, we thoroughly discuss the application of this system as a \emph{heat oscillator}, in which the thermal flux flowing from the junction edge oscillates following the sinusoidal magnetic drive. The dynamical approach that we will address in this work is essential to establish the performance and the figures of merit of the device, especially when a ``fast'' (with respect to the intrinsic thermalization time scale of the system) magnetic drive is considered.

\section*{Results}
\label{Results}\vskip-0.2cm
{\bf Theoretical modelling. } We investigate the thermal transport in a temperature-biased long Josephson tunnel junction driven by the external magnetic field, $H_{\text{ext}}(t)$. The electrodynamics of a long and narrow Josephson tunnel junction is usually described by the perturbed sine-Gordon (SG) equation~\cite{Bar82}
\begin{equation}
\frac{\partial^2 \varphi\big(\widetilde{x},\widetilde{t}\,\big) }{\partial {\widetilde{x}}^2} -\frac{\partial^2 \varphi\big(\widetilde{x},\widetilde{t}\,\big) }{\partial {\widetilde{t}}^{2}}- \sin\Big( \varphi\big ( \widetilde{x},\widetilde{t} \,\big ) \Big )= \alpha\frac{\partial \varphi\big(\widetilde{x},\widetilde{t}\,\big) }{\partial \widetilde{t}}\label{SGeq}
\end{equation}
for the Josephson phase $\varphi$, namely, the phase difference between the wavefunctions describing the carriers in the superconducting electrodes. The time variations of $\varphi$ generate a local voltage drop according to $V(x,t)=\frac{\Phi_0}{2\pi}\frac{\partial \varphi (x,t) }{\partial t}$ (where $\Phi_0= h/2e\simeq2\times10^{-15}\; \textup{Wb}$ is the magnetic flux quantum, with $e$ and $h$ being the electron charge and the Planck constant, respectively). In the previous equations, space and time variables are normalized to the Josephson penetration depth $\lambda_{_{\text{J}}}=\sqrt{\frac{\Phi_0}{2\pi \mu_0}\frac{1}{t_d J_c}}$ and to the inverse of the Josephson plasma frequency $\omega_p=\sqrt{\frac{2\pi}{\Phi_0}\frac{J_c}{C}}$, respectively, i.e., $\widetilde{x}=x/\lambda_{_{\text{J}}}$ and $\widetilde{t}=\omega_pt$. The junction is called ``long'' just because its dimensions in units of $\lambda_{_{\text{J}}}$ are $\widetilde{L}=L/ \lambda_{_{\text{J}}}\gg1$ and $\widetilde{W}=W/ \lambda_{_{\text{J}}}\ll1$ (see Fig.~\ref{Figure01}). Here, we introduced the critical current density $J_c$, the effective magnetic thickness $t_d=\lambda_{L,1}\tanh\left ( D_1/2\lambda_{L,1} \right )+\lambda_{L,2}\tanh\left ( D_2/2\lambda_{L,2} \right )+d$~\cite{Gia13,Mar14} (where $\lambda_{L,i}$ and $D_i$ are the London penetration depth and the thickness of the electrode $S_i$, respectively, and $d$ is the insulating layer thickness), and the specific capacitance $C$ of the junction due to the sandwiching of the superconducting electrodes. The dissipation in the junction is accounted for by the damping parameter $\alpha=1/(\omega_pRC)$, with $R$ being the normal-state resistance per area of the junction~\cite{Tin04}.
The unperturbed SG equation, i.e., $\alpha=0$ in equation~\eqref{SGeq}, admits topologically stable travelling-wave solutions, called \emph{solitons}~\cite{Par93,Ust98}, corresponding to 2$\pi$-twists of the phase, which have the simple analytical expression~\cite{Bar82}
\begin{equation}
\varphi(\widetilde{x}-u\widetilde{t})=4\arctan \left [ \exp \left ( \sigma \frac{\widetilde{x}-\widetilde{x}_0-u\widetilde{t} }{\sqrt{1-u^2}} \right ) \right ],
\label{SGkink}
\end{equation}
where $\sigma=\pm1$ is the polarity of the soliton and $u$ is the soliton velocity, measured in units of the Swihart velocity $\bar{c}=\lambda_{_{\text{J}}}\omega_p$~\cite{Bar82}. A soliton has a clear physical meaning in the LJJ framework, since it carries a quantum of magnetic flux, induced by a supercurrent loop surrounding it, with the local magnetic field perpendicularly oriented with respect to the junction length. Thus, solitons in the context of LJJs are usually referred to as fluxons or Josephson vortices. The effect on the phase evolution of the driving external magnetic field is accounted for by the boundary conditions of equation~\eqref{SGeq},
\begin{equation}
\frac{d\varphi(0,t) }{d\widetilde{x}} = \frac{d\varphi(\widetilde{L},t) }{d\widetilde{x}}=2\frac{H_{\text{ext}}(t)}{H_{c,1}}= H(t).
\label{bcSGeq}
\end{equation}
The coefficient $H_{c,1}=\frac{\Phi_0}{\pi \mu_0 t_d\lambda_{_{\text{J}}}}$ is called the first critical field of a LJJ~\cite{Gold01}, since it is the threshold value above which, namely for $H_{\text{ext}}(t)>H_{c,1}$, solitons penetrate from the junction ends in the absence of a bias current and fill the system with a density depending on both the value of $H(t)$ and the length $L$ of the junction. The aim of this work is the investigation of the variations of the temperature $T_2$ of the electrode $S_2$ as the magnetic drive is properly swept. Specifically, the modulation of the temperature of the ``cold'' drain electrode is usually obtained by realizing a JJ with a large superconducting electrode, namely, $S_1$, whose temperature $T_1$ is kept fixed, and a smaller electrode, namely, $S_2$, with a small volume and, thereby, a small thermal capacity. In this way, the transferred heat significantly affects the temperature $T_2$ of the latter electrode, which is then measured. For the sake of readability, hereafter we will adopt the abbreviated notation in which the $x$ and $t$ dependences are left implicit, namely, $T_2=T_2(x,t)$, $\varphi=\varphi(x,t)$, and $V=V(x,t)$. The electrode $S_2$ can be modelled as a one-dimensional diffusive superconductor at a temperature varying along $L$, so that the evolution of the temperature $T_2$ is given by the time-dependent diffusion equation~\cite{Gua18}
\begin{equation}
\frac{\mathrm{d} }{\mathrm{d} x}\left [\kappa( T_2 ) \frac{\mathrm{d} T_2}{\mathrm{d} x} \right ]+\mathcal{P}_{\text{tot}}\left ( T_1,T_2,\varphi \right )=c_v(T_2)\frac{\mathrm{d} T_2}{\mathrm{d} t},
\label{ThermalBalanceEq}
\end{equation}
where the term
\begin{equation}
\mathcal{P}_{\text{tot}}\left ( T_1,T_2,\varphi \right )=\mathcal{P}_{\text{in}}\left ( T_1,T_2,\varphi,V \right )-\mathcal{P}_{e\text{-ph}}\left ( T_2,T_{\text{bath}}\right )
\label{TotalPower}
\end{equation}
consists of the phase-dependent incoming, i.e., $\mathcal{P}_{\text{in}}\left ( T_1,T_2,\varphi,V \right )$, and the outgoing, i.e., $\mathcal{P}_{e\text{-ph}}\left ( T_2,T_{\text{bath}}\right )$, thermal power densities in $S_2$.
Finally, in equation~\eqref{ThermalBalanceEq}, $c_v(T)=T\frac{\mathrm{d} \mathcal{S}(T)}{\mathrm{d} T}$ is the volume-specific heat capacity, with $\mathcal{S}(T)$ being the electronic entropy density of $S_2$~\cite{Sol16}, and $\kappa(T_2)$ is the electronic heat conductivity~\cite{For17}. We are assuming that the lattice phonons are very well thermalized with the substrate, which resides at $T_{\text{bath}}$, thanks to the vanishing Kapitza resistance between thin metallic films and the substrate at low temperatures~\cite{Wel94,Gia06}. The full expressions and the physical meaning of all terms and coefficients in equations~\eqref{ThermalBalanceEq} and~\eqref{TotalPower} are thoroughly discussed in the ``Methods'' section. To explore the thermal transport in this system, it only remains to include in equation~\eqref{ThermalBalanceEq} the proper phase difference $\varphi(x,t)$ for a LJJ, given by the numerical solution of equations~\eqref{SGeq} and~\eqref{bcSGeq}, with initial conditions $\varphi(\widetilde{x},0)=d\varphi(\widetilde{x},0)/d\widetilde{t}=0\quad \forall \widetilde{x}\in[0,\widetilde{L}]$.

{\bf Numerical results. } In the present study, we consider an Nb/AlO$_x$/Nb SIS LJJ characterized by a normal resistance per area $R=50\;\Omega\,\mu\text{m}^2$ and a specific capacitance $C=50\;\text{fF}/\mu \text{m}^2$. The linear dimensions of the junction are $L=100\;\mu\text{m}$ for the length, $W=0.5\;\mu\text{m}$ for the width, and $D_2=0.1\;\mu\text{m}$ and $d=1\;\text{nm}$ for the thicknesses of $S_2$ and the insulating layer, respectively. For the Nb electrodes, we assume $\lambda_{L}^0=80\;\text{nm}$, $\sigma_N=6.7\times10^6 \;\Omega^{-1}\text{m}^{-1}$, $\Sigma=3\times10^9\;\textup{W}\textup{m}^{-3}\textup{ K}^{-5}$, $N_F=10^{47}\;\textup{ J}^{-1}\textup{ m}^{-3}$, $\Delta_1(0)=\Delta_2(0)=\Delta=1.764k_BT_c$, with $T_c=9.2\;\text{K}$ being the common critical temperature of the superconductors, and $\gamma_1=\gamma_2=10^{-4}\Delta$. We impose a thermal gradient across the system: specifically, the bath resides at $T_{\text{bath}}=4.2\;\text{K}$, and $S_1$ is at a temperature $T_1=7\;\text{K}$ kept fixed throughout the computation. This value of the temperature $T_1$ ensures the maximal soliton-induced heating in $S_2$, for a bath residing at the liquid helium temperature~\cite{Gua18}. Nonetheless, the soliton-sustained local heating that we are going to discuss could be enhanced by reducing the bath temperature and correspondingly adjusting the temperature $T_1$ of the hot electrode. However, we underline that a lowering of the working temperatures could lead to significantly longer thermal response times~\cite{Gua17}. The electronic temperature $T_2(x,t)$ of the electrode $S_2$ is the key quantity to master the thermal transport across the junction, since it floats and can be driven by the external magnetic field. By including the proper temperature dependence in both the effective magnetic thickness $t_d(T_1,T_2)$ and the Josephson critical current density $J_c( T_1,T_2)$, which varies with the temperatures according to the generalized Ambegaokar and Baratoff formula~\cite{Gia05,Bos16}, we obtain $\lambda_{_{\text{J}}}\simeq7.1\;\mu\text{m}$, $\omega_p\simeq1.3\;\text{THz}$, and $H_{c,1}\simeq5.1\;\text{Oe}$. Moreover, $\alpha\simeq0.3$, corresponding to an underdamped dissipative regime. Anyway, these solitonic parameters depend only weakly on the temperature $T_2$ in the range of $T_2$ values that we will discuss.
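A minimal explicit finite-difference sketch of this numerical task, i.e., equations~\eqref{SGeq} and~\eqref{bcSGeq} in normalized units with a slow sinusoidal drive, is given below; the grid spacing, time step, and integration span are illustrative choices, not the production values used for the figures.
\begin{verbatim}
import numpy as np

L, nx = 14.0, 281              # L ~ 100 um in units of lambda_J
dx = L / (nx - 1)
dt = 0.02                      # dt < dx: stable explicit scheme
alpha = 0.3
H_max = 5.0
w_dr = 0.25e9 / 1.3e12         # drive frequency in units of omega_p

phi = np.zeros(nx)             # phi(x, 0) = 0
phi_old = np.zeros(nx)         # d phi / dt (x, 0) = 0

def step(phi, phi_old, t):
    H = H_max * np.sin(w_dr * t)          # sinusoidal normalized drive
    lap = np.empty_like(phi)
    lap[1:-1] = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx**2
    # ghost points enforce d phi/dx = H at both edges
    lap[0] = 2.0 * (phi[1] - phi[0] - H * dx) / dx**2
    lap[-1] = 2.0 * (phi[-2] - phi[-1] + H * dx) / dx**2
    vel = (phi - phi_old) / dt
    return 2.0 * phi - phi_old + dt**2 * (lap - np.sin(phi) - alpha * vel)

t = 0.0
for _ in range(820_000):       # ~ one half-period of the drive
    phi, phi_old = step(phi, phi_old, t), phi
    t += dt

# d phi/dx is proportional to the local magnetic field;
# its ripples count the trapped fluxons
print(np.round(np.gradient(phi, dx), 2))
\end{verbatim}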
\begin{figure*}[!!t]
\center
\includegraphics[width=\textwidth]{Figure02.pdf}
\caption{\textbf{Soliton configurations as a function of the magnetic drive.} Space derivative of the phase, $\frac{\partial \varphi}{\partial x}$, as a function of $x$ at those times at which the normalized driving field assumes the values $H(t)=\{0,1,2,3,4,5\}$ during the forward [solid lines, panel \textbf{a}] and backward [dashed lines, panel \textbf{b}] sweeps of the drive. In panels \textbf{c} and \textbf{d}, the evolutions of $\frac{\partial \varphi}{\partial x}$ and $H(t)$ are shown, respectively. In the latter panels, the horizontal, red solid and dashed lines indicate the times at which the curves in panels \textbf{a} and \textbf{b} are calculated, respectively.}
\label{Figure02}
\end{figure*}
We use a sinusoidal normalized driving field with frequency $\omega_{\text{dr}}$ and maximum amplitude $H_{\text{max}}$,
\begin{equation}\label{drivingfield}
H(t)=H_{\text{max}}\sin(\omega_{\text{dr}} t),
\end{equation}
so that $H(t)$ within a half period sweeps first forward from 0 to $H_{\text{max}}$ and then backward to 0. In the following, we impose $H_{\text{max}}=5$ and $\omega_{\text{dr}}=0.25\;\text{GHz}$, and we limit ourselves to investigating a single half period of the drive, corresponding to $T_{\text{dr}}/2=4\pi\;\text{ns}$. Anyway, since $\omega_{\text{dr}}\ll\omega_p$, we can regard the presented solution for the phase profile as the adiabatic solution of the system. The evolution over multiple periods of the drive can be obtained by simply repeating the presented solution. Interestingly, a driving field sweeping back and forth is expected to give intriguing hysteretic behaviours~\cite{Gua16}. By increasing the magnetic field, for $H(t)<2$, which means $H_{\text{ext}}(t)<H_{c,1}$ according to equation~\eqref{bcSGeq}, the junction is in the Meissner state, namely, the fluxon-free state of the system~\cite{Cir97,Kup06}. Instead, for a magnetic field above the critical value, i.e., for $H(t)>2$, solitons in the form of fluxons penetrate the LJJ from its ends. However, in this case at a specific value of the magnetic field several solutions, describing distinct configurations with different amounts of solitons, may concurrently exist~\cite{Kup06,Kup07,Kup10,Gua16}. The dynamical approach is essential to describe the JJ state when multiple solutions are available. In fact, when the magnetic field increases, at a certain point a configuration with more solitons can be energetically favorable and, thus, the system ``jumps'' from a metastable state to a more stable state. Therefore, the system stays in the present configuration until the following one is energetically more stable. The dynamical approach allows us to determine both when the system switches and its new stable state. The configurations of solitons are well depicted by the space derivative of the phase, $\frac{\partial \varphi(x,t)}{\partial x}$, see Fig.~\ref{Figure02}, since it is proportional to the local magnetic field according to the relation~\cite{Bar82}
\begin{equation}
H_{\text{in}}(x,t)=\frac{\Phi_0}{2\pi\mu_0t_d}\frac{\partial \varphi (x,t) }{\partial x}.
\label{localmagneticfield}
\end{equation}
The spatial distributions of $\frac{\partial \varphi}{\partial x}$ at a few values of $H$ are shown in panels \textbf{a} and \textbf{b} of Fig.~\ref{Figure02}, as the driving field is swept first forward (solid lines) and then backward (dashed lines), respectively. The ripples in these curves indicate fluxons along the junction.
For a large applied field the solitons are closely spaced, since the amount of fluxons, i.e., ripples, along the JJ increases as the magnetic field is intensified. In the forward dynamics, for $H\leq2$ the system is in the Meissner state, i.e., no ripples, meaning zero fluxons, and a decaying magnetic field penetrating the junction ends (see Fig.~\ref{Figure02}\textbf{a}). Owing to the nonlinearity of the problem, for higher fields ($H>2$) the stable solutions are not the trivial superposition of Meissner and vortex fields, but are rather solitons ``dressed'' by a Meissner field confined in the junction edges~\cite{Kup06}. Hysteresis emerges when looking at the local magnetic field during the backward sweeping of the drive, see Fig.~\ref{Figure02}\textbf{b}. In fact, we observe that the forward and backward spatial distributions of $\frac{\partial \varphi(x,t)}{\partial x}$ clearly differ, inasmuch as solitons still persist as the driving field is reduced. Nevertheless, for $H(t)=0$ no solitons actually remain within the system in both the forward and backward dynamics. This hysteretic behavior comes from the multistability of the SG model~\cite{Che94,Kup06,Kup07,Kup10,Gua16,GuaSol17}. In fact, Kuplevakhsky and Glukhov demonstrated that each solution of the SG equation, with a distinct number of solitons, is stable in a broad range of magnetic field values~\cite{Kup06,Kup07,Kup10}. Besides, they observed that these stability regions tend to overlap. Essentially, it means that at a fixed value of the magnetic field different stable solutions, with different amounts of solitons along the system, may concurrently exist. Furthermore, it was demonstrated that the longer the junction, the stronger the overlap, and that the overlap decreases with increasing magnetic field~\cite{Kup06}. This fact not only ensures that the stable solutions cover the whole field range $0\leq H<\infty$, but also proves that hysteresis is an intrinsic property of any LJJ~\cite{Kup06}. The evolution of a magnetically driven system described by the SG equation can be understood by analyzing the Gibbs free-energy functional and its minimization~\cite{Yug95,Yug99,Kup06,Kup07,Kup10,Gua16}. In fact, by sweeping the magnetic field, the system stays in its current state until the following one is energetically more stable. In this case the system ``jumps'' from a metastable state to a more stable one with a different configuration of solitons. Interestingly, the hysteresis and the sudden transitions between states with different numbers of solitons depend also slightly on the damping parameter~\cite{Gua16}. The full spatio-temporal evolution of the space derivative of $\varphi$ is displayed in the contour plot in Fig.~\ref{Figure02}\textbf{c}. Alongside, the time evolution of the magnetic drive is shown, see Fig.~\ref{Figure02}\textbf{d}. In both panels, horizontal red solid and dashed lines indicate the times at which the curves in panels \textbf{a} and \textbf{b} are calculated, respectively. In Fig.~\ref{Figure02}\textbf{c}, dark fringe patterns along $x$ indicate solitons along the junction. Furthermore, this figure discloses the transitions between different stable states, when the amount of solitons along the junction changes. As the magnetic field is increased (decreased), the solitons are shifted towards (away from) the center of the junction, until a pair of solitons is symmetrically injected into (extracted from) the junction ends.
Moreover, we observe that solitons arrange symmetrically and equidistantly along the junction, since the system is centrosymmetric and solitons with the same polarity tend to repel each other.
\begin{figure}[t!!]
\center
\includegraphics[width=0.5\columnwidth]{Figure03.pdf}
\caption{\textbf{Phase-dependent heat power.} Evolution of the heat power $P_{\text{in}}(x,t)$ at $T_1=7\;\text{K}$ and $T_{\text{bath}}=4.2\;\text{K}$, for $H_{\text{max}}=5$ and $\omega_{\text{dr}}=0.25\;\text{GHz}$. }
\label{Figure03}
\end{figure}
\begin{figure*}[b!!]
\center
\includegraphics[width=\columnwidth]{Figure04.pdf}
\caption{\textbf{Temperature evolution.} Evolution of the temperature $T_{2}(x,t)$ of the floating electrode $S_2$ at $T_1=7\;\text{K}$ and $T_{\text{bath}}=4.2\;\text{K}$, for $H_{\text{max}}=5$ and $\omega_{\text{dr}}=0.25\;\text{GHz}$. The legend refers to both panels. }
\label{Figure04}
\end{figure*}
We have seen that the investigation of the full dynamics is crucial to understand the junction behavior, which depends on the full evolution of the system~\cite{Gua16}. Therefore, it is natural to wonder whether the heat transport through the system, and hence the temperature of the junction, also changes with the history of the system, and how it is related to the soliton evolution. The time and space evolution of the heat power $P_{\text{in}}$ flowing from $S_1$ to $S_2$ is shown in the density plot in Fig.~\ref{Figure03}. On the abscissa of this figure we report the position along the LJJ, whereas the time is shown on the left and the corresponding values of the magnetic field on the right. We observe that solitons locally correspond to clear enhancements of the heat power $P_{\text{in}}$, namely, the heat current flowing through the junction is significantly supported by a magnetically excited soliton. In fact, the value of the heat power in correspondence with each soliton is $P_{\text{in}}\sim0.9\;\mu\text{W}$, namely, a value three times higher than the power $P_{\text{in}}\sim0.3\;\mu\text{W}$ flowing elsewhere. The configurations of solitons, the sudden transitions between stable states with different amounts of solitons, and the hysteretic behavior obtained by sweeping the driving field back and forth are all noticeable in Fig.~\ref{Figure03}.
\begin{figure}[t!!]
\center
\includegraphics[width=0.5\columnwidth]{Figure05.pdf}
\caption{\textbf{Thermal hysteresis.} Hysteretic behaviour of the temperature $T_2$ along the junction for $H(t)=0.5$ as the driving field is swept first forward (solid line) and then backward (dashed line).}
\label{Figure05}
\end{figure}
\begin{figure}[b!!]
\center
\includegraphics[width=0.75\columnwidth]{Figure06.pdf}
\caption{\textbf{Magnetically driven long junction operating as a Josephson heat oscillator.} Thermal fluxes in a temperature-biased LJJ driven by a sinusoidal magnetic flux, see equation~\eqref{drivingfield}, with $H_{\text{max}}=2$. Two solitons confined at the junction edges for $H=H_{\text{max}}$ are also depicted. Besides the incoming and outgoing thermal powers in $S_2$, the thermal power loss $P_{\text{loss}}$ flowing from the right side of $S_2$ is also represented. }
\label{Figure06}
\end{figure}
Finally, the behaviour of the temperature $T_2$ reflects the behavior of the thermal power $P_{\text{in}}$, as shown in Fig.~\ref{Figure04}. In panel \textbf{a} of this figure, we observe that when a transition between stable states occurs, the temperature exhibits a locally peaked behavior.
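Before examining these temperature peaks in detail, we note that the maps just described follow from integrating equation~\eqref{ThermalBalanceEq}. A stripped-down sketch of such an integration is shown below; the constant $c_v$ and $\kappa$, the normal-metal $T^5$ electron-phonon law, and the Gaussian heating profile are simplified stand-ins for the full expressions of the ``Methods'' section, with magnitudes chosen only to reproduce the scale of the reported heating.
\begin{verbatim}
import numpy as np

L, nx = 100e-6, 201
dx = L / (nx - 1)
dt = 1e-12                     # s; well below the thermal response time
T_bath = 4.2                   # K
Sigma = 3e9                    # W m^-3 K^-5
kappa0, c_v = 0.16, 1e3        # W m^-1 K^-1, J m^-3 K^-1 (stand-ins)

x = np.linspace(0.0, L, nx)
# toy soliton-induced heating density, peaked at the junction center
P_in = 1.5e11 * (1.0 + 2.0 * np.exp(-((x - L / 2) / 5e-6) ** 2))

T2 = np.full(nx, T_bath)
for _ in range(50_000):
    lap = np.empty_like(T2)
    lap[1:-1] = kappa0 * (T2[2:] - 2.0 * T2[1:-1] + T2[:-2]) / dx**2
    lap[0] = 2.0 * kappa0 * (T2[1] - T2[0]) / dx**2     # no heat flow
    lap[-1] = 2.0 * kappa0 * (T2[-2] - T2[-1]) / dx**2  # out of the edges
    P_tot = P_in - Sigma * (T2**5 - T_bath**5)          # toy P_tot
    T2 = T2 + dt * (lap + P_tot) / c_v

print(T2.min(), T2.max())      # ~ 4.23 K far from, ~ 4.29 K at, the peak
\end{verbatim}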
We observe that as a soliton sets in, it induces an intense local warming-up in $S_2$, so that the temperature of the system locally and rapidly approaches the maximum value $T_{2,\text{max}}\simeq4.29\;\text{K}$. Then, when a change in the magnetic field causes a transition to occur, the soliton positions change and the temperature adapts to this variation. In fact, the temperature peaks shift according to the new configuration of solitons, see Fig.~\ref{Figure04}\textbf{a}. In this way, for $H=H_{\text{max}}$ several peaks compose the temperature profile, one for each soliton induced by the magnetic field. The contour plot in Fig.~\ref{Figure04}\textbf{b} gives a clear image of the spatio-temporal distribution of $T_2$. In this figure, it is evident how the temperature accurately follows the solitonic dynamics. We note that, for the backward drive, for $H\lesssim1$ two temperature peaks persist, although in the Meissner state (i.e., $H<2$ during the forward sweep of the drive) the whole electrode $S_2$ is roughly thermalized at the same temperature. This thermal hysteresis is evidently highlighted in Fig.~\ref{Figure05}, for $H(t)=0.5$ as the driving field is swept first forward (solid line) and then backward (dashed line). The results discussed in this work could promptly find application in different contexts. For instance, an alternative method of fluxon imaging in extended JJs could be conceived. Heretofore, low-temperature scanning electron microscopy (LTSEM)~\cite{Hue88,Mal94,Dod97} has been confirmed to be an efficient experimental tool for studying fluxon dynamics in Josephson devices. In this technique, a narrow electron beam is used to locally heat a small portion ($\sim\mu\text{m}$) of the junction, in order to locally increase the effective damping parameter. Consequently, the I-V characteristic of the device changes, so that, by gradually moving the electron beam along the junction surface and measuring the voltage, a sort of image of the dynamical state of the LJJ can be created. In our work, we demonstrated that, by imposing a thermal gradient across the junction, the temperature profile of the floating electrode mimics the positions of magnetically induced solitons. Therefore, our findings can be effectively used for a \emph{thermal} imaging of steady solitons in LJJs through calorimetric measurements~\cite{Gas15,Gia15,Sai16,Zgi17,Wan18}. Moreover, the dynamics discussed so far embodies the thermal router application suggested in Ref.~\cite{Gua18}. In fact, we can imagine a superconducting finger attached at a specific point of $S_2$. Then, by adjusting the external magnetic field, we can induce a specific configuration of solitons along the junction, so as to magnetically excite a soliton exactly in correspondence with this finger, with the aim of routing the heat through this thermal channel. Finally, this device can be used to design a solid-state heat oscillator actively controlled by a magnetic drive. The latter application is carefully discussed in the following section.
\begin{figure*}[t]
\center
\includegraphics[width=\textwidth]{Figure07.pdf}
\caption{\textbf{Thermal oscillations as a function of the magnetic drive.} \textbf{a} Evolution of the temperature $T_{2}(x,t)$ of the floating electrode $S_2$ at $T_1=7\;\text{K}$ and $T_{\text{bath}}=4.2\;\text{K}$, for $H_{\text{max}}=2$, $\omega_{\text{dr}}=0.5\;\text{GHz}$, and $P_{\text{loss}}=0$.
\textbf{b} $T_{2}(x,t)$ as a function of $t$ for $x=L$ at a few values of $H_{\text{max}}\in[0.2-2]$ for $\omega_{\text{dr}}=0.25\;\text{GHz}$. \textbf{c}-\textbf{g} $T_{2}(x,t)$ as a function of $t$ for $x=L$, at a few values of $\omega_{\text{dr}}$ for $H_{\text{max}}=2$.}
\label{Figure07}
\end{figure*}
{\bf The Josephson heat oscillator. } We observe that the sinusoidal magnetic field causes the temperature of both sides of the junction to oscillate, and that, for $H=2$, only two solitons are penetrating the junction, with the centers of the solitons being exactly located at the junction edges at $x=\{0,L\}$ (see Fig.~\ref{Figure02}\textbf{a}). Let us now discuss the temperature response through the solitonic dynamics. For $H=2$ we have at $x=\{0,L\}$ the maximum temperature enhancement, since this is the only case in which we can definitively assume a soliton firmly set at a junction end. In fact, the situations for $0<H<2$ can be envisaged by depicting two solitons situated outside the junction, so that by increasing the value of $H$ the solitons move closer to the junction edges, until their centers are on the borders of the junction at $x=\{0,L\}$ for $H = 2$. Instead, for higher fields, i.e., $H>2$, solitons start to penetrate (leave) the junction, so that the temperature of each edge nonlinearly follows the abrupt magnetically driven processes of injection (extraction) of solitons. In light of these remarks, we conceive a heat oscillator based on a temperature-biased LJJ driven by a sinusoidal magnetic field with $H_{\text{max}}=2$. Specifically, we aim to handle the temperature of the right edge of $S_2$, i.e., at $x=L$, in order to generate and master a thermal power $P_{\text{loss}}$ flowing from the right side of $S_2$ (see Fig.~\ref{Figure06}), which oscillates according to the magnetic drive. We first discuss the effects produced by variations of both the driving amplitude and the frequency on the temperature $T_2(x=L,t)$ when a negligible loss thermal power is assumed, i.e., $P_{\text{loss}}=0$, and then how $P_{\text{loss}}$ affects this temperature. The modulation of the temperature of $S_2$ due to a sinusoidal drive with $H_{\text{max}}=2$ and $\omega_{\text{dr}}=0.5\;\text{GHz}$, for $P_{\text{loss}}=0$, is displayed in Fig.~\ref{Figure07}\textbf{a}. This figure shows that the enhancement of the temperature is restricted to the junction edges, since for $H\leq2$ there are no solitons inside the system. Moreover, the temperature we are interested in, namely, the temperature of the junction edge at $x=L$, oscillates in tune with the driving field. Specifically, $T_2(L,t)$ shows peaks for $\left | H(t) \right |=H_{\text{max}}$, namely, for $t=T_{\text{dr}}/4$ and $t=3T_{\text{dr}}/4$ (with $T_{\text{dr}}$ being the driving period), and minima for $H(t)=0$. This means that the thermal oscillation frequency is twice the driving frequency, since the thermal effects are independent of the polarity of the soliton. Accordingly, the frequency requirements of this device are less demanding. This phenomenon also allows one to discriminate in frequency the magnetic drive from the thermal oscillation. Clearly, the oscillatory behavior of the edge temperature $T_2(L,t)$ persists also when reducing $H_{\text{max}}$, see Fig.~\ref{Figure07}\textbf{b} for $\omega_{\text{dr}}=0.25\;\text{GHz}$, although the maximum value of the temperature decreases with decreasing maximum magnetic drive.
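Returning for a moment to the frequency doubling noted above: since the edge heating responds to an even function of $H(t)$, any such response carries its leading spectral weight at twice the driving frequency. The toy trace below (a synthetic signal with illustrative numbers, added here only to make the statement explicit) confirms this.
\begin{verbatim}
import numpy as np

# Toy edge temperature responding to an even function of H(t);
# the sin^2 form is the simplest such response.
f_dr = 0.25e9                              # drive frequency, Hz
t = np.linspace(0.0, 8.0 / f_dr, 4096, endpoint=False)
T2 = 4.22 + 0.056 * np.sin(2.0 * np.pi * f_dr * t) ** 2

spec = np.abs(np.fft.rfft(T2 - T2.mean()))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
print(freqs[spec.argmax()] / f_dr)         # -> 2.0: signal at 2 f_dr
\end{verbatim}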
The position of the temperature peak is, however, independent of $H_{\text{max}}$, as is well demonstrated in Fig.~\ref{Figure07}\textbf{b}. Alternatively, in Figs.~\ref{Figure07}\textbf{c}-\textbf{g} the behavior of the temperature $T_2(L,t)$ for $H_{\text{max}}=2$ at a few values of $\omega_{\text{dr}}$ is shown. We note that the temperature oscillation amplitude is drastically damped by increasing the driving frequency, even if the value around which the temperature oscillates is independent of $\omega_{\text{dr}}$. Since the coherent thermal transport is a nonlinear phenomenon, we note that the frequency purity is affected by small corrections inducing vanishingly small spectral components at the frequency $\omega_{\text{dr}}$. This effect can be observed as a small beat in the time evolution of $T_2$ (e.g., see Fig.~\ref{Figure07}\textbf{e}). The behavior of the system can be clearly outlined by the $T_2$ modulation amplitude, $\delta T_2$, defined as the difference between the maximum and the minimum values of $T_2(L,t)$ within an oscillation of the drive. In fact, we can define two relevant figures of merit of the thermal oscillator, represented by the modulation amplitude, $\delta T_{2}$, as a function of both the driving frequency $\omega_{\text{dr}}$ (see Fig.~\ref{Figure08}\textbf{a}, for $H_{\text{max}}=2$ and $P_{\text{loss}}=0$) and the maximum driving amplitude $H_{\text{max}}$ (see Fig.~\ref{Figure08}\textbf{b}, for $\omega_{\text{dr}}=0.25\;\text{GHz}$ and $P_{\text{loss}}=0$). We look first at the behaviour of $\delta T_2$ as the driving frequency $\omega_{\text{dr}}$ is varied (see Fig.~\ref{Figure08}\textbf{a}). We observe that $\delta T_2$ is roughly constant for $\omega_{\text{dr}}\lesssim1\;\text{GHz}$, specifically, $\delta T_2\simeq56\;\text{mK}$. For higher frequencies, the modulation amplitude reduces, going down linearly in the semi-log plot shown in Fig.~\ref{Figure08}\textbf{a}. In fact, as the thermal oscillation frequency becomes comparable to the inverse of the characteristic time scale of the thermal relaxation processes, the temperature is not able to follow the fast driving field. According to Ref.~\cite{GuaSol18}, this thermal response time can be defined as the characteristic time of the exponential evolution by which the temperature locally approaches its stationary value in the presence of a soliton. In Ref.~\cite{Gua18}, for a Nb-based junction at the same working temperatures used in this work, a thermal response time roughly equal to $\tau_{\text{th}}\sim0.25\;\text{ns}$ was estimated. The modulation amplitude at driving frequency $\omega_{\text{dr}}$ rolls off as $\Big ( 1+(2\omega_{\text{dr}}\tau_{\text{th}})^2 \Big )^{-1/2}$, since $\tau_{\text{th}}$ determines the time scale of the energy exchange between the ensemble and the reservoir. Then, by fitting the $\delta T_2(\omega_{\text{dr}})$ data with the curve $\delta T_{2,0}/\sqrt{ 1+(2\omega_{\text{dr}}\tau_{\text{fit}})^2}$, the parameters $\delta T_{2,0}=\left ( 55.87\pm0.08 \right )\;\text{mK}$ and $\tau_{\text{fit}}=\left ( 0.243\pm0.001 \right )\;\text{ns}$ are estimated (see the dashed curve in Fig.~\ref{Figure08}\textbf{a}). For higher frequencies, other nonlinear effects, related to the finite size of the system, can play a role. Anyway, we are dealing with a regime of tiny temperature modulations at very high frequencies, which is not so significant from a practical point of view.
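A fit of this kind takes only a few lines; since the simulated $\delta T_2(\omega_{\text{dr}})$ points are not tabulated here, the data in the sketch below are synthetic, generated from the fitted values quoted in the text.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def rolloff(w_dr, dT0, tau):
    # delta T_2 = delta T_{2,0} / sqrt(1 + (2 omega_dr tau)^2)
    return dT0 / np.sqrt(1.0 + (2.0 * w_dr * tau) ** 2)

rng = np.random.default_rng(2)
w = np.logspace(8, 11, 15)                 # driving frequencies, Hz
data = rolloff(w, 55.87e-3, 0.243e-9) \
       * (1.0 + 0.01 * rng.normal(size=w.size))

popt, pcov = curve_fit(rolloff, w, data, p0=(50e-3, 0.2e-9))
print(popt)                                # ~ (55.9e-3 K, 0.243e-9 s)
print(np.sqrt(np.diag(pcov)))              # 1-sigma uncertainties
\end{verbatim}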
In Fig.~\ref{Figure08}\textbf{b} the behavior of $\delta T_{2}$ as a function of $H_{\text{max}}$, for $\omega_{\text{dr}}=0.25\;\text{GHz}$, is shown. We observe that, by increasing the maximum drive, the $T_2$ modulation amplitude grows more than linearly. This behavior represents a sort of calibration curve for the thermal oscillator.
\begin{figure*}[t!!]
\center
\includegraphics[width=\textwidth]{Figure08.pdf}
\caption{\textbf{Figures of merit of the Josephson heat oscillator.} Relevant figures of merit of the heat oscillator, represented by the modulation amplitude, $\delta T_{2}$, as a function of the driving frequency $\omega_{\text{dr}}$ (for $H_{\text{max}}=2$ and $P_{\text{loss}}=0$), the maximum driving amplitude $H_{\text{max}}$ (for $\omega_{\text{dr}}=0.25\;\text{GHz}$ and $P_{\text{loss}}=0$), and the loss thermal conductance $\eta$ (for $H_{\text{max}}=2$ and $\omega_{\text{dr}}=0.25\;\text{GHz}$), see \textbf{a}, \textbf{b}, and \textbf{c}, respectively. In panel \textbf{a} a fitting curve is also shown.}
\label{Figure08}
\end{figure*}
Now, we assume a non-vanishing thermal power $P_{\text{loss}}$ flowing through the right side of $S_2$, whose area is $A=WD_2$. For simplicity, we speculate that this thermal power might depend linearly on the temperature $T_2(L,t)$ according to
\begin{equation}\label{Ploss}
P_{\text{loss}}=\eta \left [ T_2(L,t)-T_{\text{bath}} \right ]=\eta\Delta T_2.
\end{equation}
In this equation, $\eta$ is a thermal conductance, measured in $\text{W}/\text{K}$, that we use as a knob to emulate the thermal effectiveness of the load. Thus, another figure of merit of the heat oscillator can be delineated by the $T_2$ modulation amplitude as a function of the coupling constant $\eta$, see Fig.~\ref{Figure08}\textbf{c} for $H_{\text{max}}=2$ and $\omega_{\text{dr}}=0.25\;\text{GHz}$. From this figure we note that $\delta T_2$ is roughly constant for low values of $\eta$, and that the energy outgoing from the right side of $S_2$ affects the temperature $T_2$ only for $\eta\gtrsim1\;\text{pW}/\text{mK}$. Above this value, the temperature modulation significantly reduces, going to zero for $\eta\sim10^{3}\;\text{pW}/\text{mK}$. To estimate the threshold value of the coupling constant, $\eta_{\text{cr},1}$, above which $\delta T_2$ starts to reduce, we use a scaling argument based on the boundary condition derived from the Fourier law through the area $A$, assuming a temperature drop $\delta T_2$ along a distance $\lambda_{_{\text{J}}}/2$. Accordingly, we obtain
\begin{equation}\label{criticalcoupliconstant1}
\kappa(T_{2,\text{max}})A\frac{\delta T_2}{\lambda_{_{\text{J}}}/2}=\eta_{\text{cr},1}\Delta T_2,
\end{equation}
from which $\eta_{\text{cr},1}\simeq2.2\;\text{pW/mK}$. By contrast, the value of the critical coupling constant $\eta_{\text{cr},2}$ at which $\delta T_2\to0$ can be estimated by supposing that the incoming thermal power due to a soliton is entirely balanced by the outgoing power flowing towards both the thermal bath and the right side of $S_2$. We assume that the thermal power induced by a soliton centered at $x=L$ flows through a volume $V_{s}=A\lambda_{_{\text{J}}}/2$.
Then, at equilibrium, from equation~\eqref{Ploss}, we obtain
\begin{equation}\label{criticalcoupliconstant2}
\eta_{\text{cr},2}=V_{s}\Bigg ( \frac{\partial \mathcal{P}_{e\text{-ph}}}{\partial T_{2,s}}-\frac{\partial \mathcal{P}_{\text{in}}}{\partial T_{2,s}} \Bigg )=V_{s}\Bigg ( \mathcal{G}(T_{2,s}) -\mathcal{K}(T_{2,s}) \Bigg ),
\end{equation}
where $\mathcal{G}$ and $\mathcal{K}$ are the electron-phonon~\cite{Vir18,GuaSol18} and electron~\cite{Mar14,GuaSol18} thermal conductances, in unit volume (see the ``Methods'' section), and $T_{2,s}$ is the steady temperature at $x=L$. For $T_{2,s}=4.23\;\text{K}$, we obtain $\eta_{\text{cr},2}\simeq1\times10^3\;\text{pW/mK}$. It is worth noting that the specifications of the proposed thermal oscillator can be tuned by properly choosing the system parameters. For instance, the modulation amplitude could be enhanced by lowering the temperature of the phonon bath and accordingly adjusting the temperature of the hot electrode. Furthermore, the use of superconductors with higher $T_c$'s gives higher thermalization frequencies, and thus permits one to push forward the frequency threshold below which no attenuation of $\delta T_2$ occurs. The proposed heat oscillator could find application as a temperature controller for heat engines~\cite{Kim11,Ros14,Ros16,Cam16,Mar16}. In fact, in mesoscale and nanoscale systems the precise control of the temperature on a fast time scale is regarded as a difficult task. Thus, through this system we could be able to definitively master the temperature, which oscillates in a controlled way under a fast magnetic drive. Accordingly, we can envision building nanoscale heat motors or thermal cycles based on this Josephson heat oscillator. In summary, in this paper we have thoroughly investigated the effects produced by a time-dependent driving magnetic field on the temperature profile of a long Josephson junction, as a thermal gradient across the system is imposed. A proper magnetic drive induces Josephson vortices, i.e., solitons, along the junction. We showed that the soliton configuration is reflected first in the distribution of the heat power flowing through the system and then in the temperature of the cold electrode of the device. In fact, we demonstrated a multipeaked temperature profile, due to the local warming-up of the junction in correspondence with each magnetically excited soliton. Moreover, the study of the full evolution of the system disclosed a clear thermal hysteretic effect as a function of the magnetic drive. We explored a realistic Nb-based setup, where the temperature of the ``hot'' electrode is kept fixed and the thermal contact with a phonon bath at $T_{\text{bath}}=4.2\;\text{K}$ is taken into account. Nevertheless, the soliton-induced heating that we observed can be increased by reducing the temperature of the phonon bath and manipulated by properly controlling the magnetic drive. Finally, we discussed the implementation of a heat oscillator based on this system. In the Meissner state, $H_{\text{ext}}<H_{c,1}$, the magnetic drive affects significantly the phase at the junction edges. In these locations, a clear temperature enhancement is observed. Thus, a sinusoidal external magnetic field, with maximum value equal to $H_{c,1}$, causes the edge temperatures to oscillate at a frequency twice that of the driving field.
This phenomenon can be used to conceive a low-temperature, field-controlled heat oscillator device based on the thermal diffusion in a Josephson junction, for creating an oscillating heat flux from a spatial thermal gradient between the warm electrode and a cold reservoir, i.e., the phonon bath. The thermal oscillator may have numerous applications, inasmuch as the creation and utilization of an alternating heat flux is applicable to technical systems operating in response to periodic temperature variations, like heat engines, energy-harvesting devices, sensing devices, switching devices, or clocking devices for caloritronics circuits and thermal logic. Additionally, through proper figures of merit, we discussed the behavior of this heat oscillator by varying both the frequency and the amplitude of the driving field, and also by assuming a non-vanishing loss power flowing towards a thermal load. Especially in this context, the dynamical, i.e., fully time-dependent, approach that we used is crucial to understand how the system thermally responds to a fast magnetic drive. For instance, we observed that when the driving frequency becomes comparable to the inverse of the characteristic thermalization times~\cite{GuaSol18}, the system is no longer able to efficiently follow the drive, and the modulation range of the temperature accordingly reduces.

\section*{Methods}
\label{AppA}
\textbf{Thermal Powers. } In the adiabatic regime, the contributions, in unit volume, to the energy transport in a temperature-biased JJ read~\cite{Gol13}
\begin{equation}\label{Pqp}
\mathcal{P}_{\text{qp}}(T_1,T_2,V)=\frac{1}{e^2RD_2}\int_{-\infty}^{\infty} d\varepsilon \mathcal{N}_1 ( \varepsilon-eV ,T_1 )\mathcal{N}_2 ( \varepsilon ,T_2 )(\varepsilon-eV) [ f ( \varepsilon-eV ,T_1 ) -f ( \varepsilon ,T_2 ) ],
\end{equation}
\begin{equation} \label{Pcos}
\mathcal{P}_{\cos}( T_1,T_2,V )=\frac{1}{e^2RD_2}\int_{-\infty}^{\infty} d\varepsilon \mathcal{N}_1 ( \varepsilon-eV ,T_1 )\mathcal{N}_2( \varepsilon ,T_2 )\frac{\Delta_1(T_1)\Delta_2(T_2)}{\varepsilon}[ f ( \varepsilon-eV ,T_1 ) -f ( \varepsilon ,T_2 ) ],
\end{equation}
\begin{equation}\label{Psin}
\mathcal{P}_{\sin}(T_1,T_2,V)=\frac{eV}{2\pi e^2RD_2}\iint_{-\infty}^{\infty} d\epsilon_1d\epsilon_2 \frac{\Delta_1(T_1)\Delta_2(T_2)}{E_2}\times\left [\frac{1-f(E_1,T_1)-f(E_2,T_2)}{\left ( E_1+E_2 \right )^2-e^2V^2}+\frac{f(E_1,T_1)-f(E_2,T_2)}{\left ( E_1-E_2 \right )^2-e^2V^2}\right ],
\end{equation}
where $E_j=\sqrt{\epsilon_j^2+\Delta_j(T_j)^2}$, $f ( E ,T )=1/\left (1+e^{E/k_BT} \right )$ is the Fermi distribution function, and $\mathcal{N}_j\left ( \varepsilon ,T \right )=\left | \text{Re}\left [ \frac{ \varepsilon +i\gamma_j}{\sqrt{(\varepsilon +i\gamma_j) ^2-\Delta _j\left ( T \right )^2}} \right ] \right |$ is the reduced superconducting density of states, with $\Delta_j\left ( T_j \right )$ and $\gamma_j$ being the BCS energy gap and the Dynes broadening parameter~\cite{Dyn78} of the $j$-th electrode, respectively. These equations derive from processes, involving both Cooper pairs and quasiparticles tunneling through a JJ, predicted by Maki and Griffin~\cite{Mak65}. In fact, $\mathcal{P}_{\text{qp}}$ is the heat power density carried by quasiparticles, namely, an incoherent flow of energy through the junction from the hot to the cold electrode~\cite{Mak65,Gia06}.
Instead, the ``anomalous'' terms $\mathcal{P}_{\sin}$ and $\mathcal{P}_{\cos}$ determine the phase-dependent part of the heat transport, originating from the energy-carrying tunneling processes involving Cooper pairs and the recombination/destruction of Cooper pairs on both sides of the junction. We note that $\mathcal{P}_{\sin}$, in the temperature regimes we are taking into account, is vanishingly small with respect to both the $\mathcal{P}_{\text{qp}}$ and $\mathcal{P}_{\cos}$ contributions, and it can be, in principle, neglected. Moreover, since this term depends on the time derivative of the phase, it could be effective only when the phase changes rapidly, namely, when solitons enter, or escape, the junction. However, the timescale of the soliton evolution is definitely shorter than the timescales of the driving processes. Consequently, the soliton phase profile follows adiabatically the driving induced by the magnetic field. In this condition, if the number of trapped solitons along the junction is fixed, the time evolution of the phase is set by the driving process. Anyway, we stress that equation~\eqref{Psin} is a purely reactive contribution~\cite{Gol13,Vir17}, so that we have to neglect it in the thermal balance equation~\eqref{ThermalBalanceEq}. Therefore, the total thermal power density to include in Eq.~\eqref{TotalPower} reads
\begin{equation}\label{Pt}
\mathcal{P}_{\text{in}}( T_1,T_2,\varphi,V)=\mathcal{P}_{\text{qp}}( T_1,T_2,V)-\cos\varphi \;\mathcal{P}_{\cos}( T_1,T_2,V).
\end{equation}
The latter term on the rhs of equation~\eqref{TotalPower}, i.e., $\mathcal{P}_{e\text{-ph}}$, represents the energy exchange, in unit volume, between electrons and phonons in the superconductor and reads~\cite{Pek09}
\begin{eqnarray}\label{Qe-ph}
\mathcal{P}_{e\text{-ph}}&=&\frac{-\Sigma}{96\zeta(5)k_B^5}\int_{-\infty }^{\infty}dEE\int_{-\infty }^{\infty}d\varepsilon \varepsilon^2\textup{sign}(\varepsilon)M_{_{E,E+\varepsilon}}\\\nonumber
&\times& \Bigg\{ \coth\left ( \frac{\varepsilon }{2k_BT_{\text{bath}}}\right ) \Big [ \mathcal{F}(E,T_2)-\mathcal{F}(E+\varepsilon,T_2) \Big ]-\mathcal{F}(E,T_2)\mathcal{F}(E+\varepsilon,T_2)+1 \Bigg\},
\end{eqnarray}
where $\mathcal{F}\left ( \varepsilon ,T_2 \right )=\tanh\left ( \varepsilon/2 k_B T_2 \right )$, $M_{E,{E}'}=\mathcal{N}_2(E,T_2)\mathcal{N}_2({E}',T_2)\left [ 1-\Delta ^2(T_2)/(E{E}') \right ]$, $\Sigma$ is the electron-phonon coupling constant, and $\zeta$ is the Riemann zeta function. Finally, in equation~\eqref{ThermalBalanceEq}, $c_v(T)=T\frac{\mathrm{d} \mathcal{S}(T)}{\mathrm{d} T}$ is the volume-specific heat capacity, with $\mathcal{S}(T)$ being the electronic entropy density of $S_2$~\cite{Sol16},
\begin{eqnarray}
\mathcal{S}(T)=-4k_BN_F\int_{0}^{\infty}d\varepsilon \mathcal{N}_2(\varepsilon,T)\left\{ \left [ 1-f(\varepsilon,T) \right ] \ln\left [ 1-f(\varepsilon,T) \right ]+f(\varepsilon,T) \ln f(\varepsilon,T)\right \},\qquad \label{Entropy}
\end{eqnarray}
and $\kappa(T_2)$ is the electronic heat conductivity~\cite{For17},
\begin{equation}\label{electronicheatconductivity}
\kappa(T_2)=\frac{\sigma_N}{2e^2k_BT_2^2}\int_{-\infty}^{\infty}\mathrm{d}\varepsilon\varepsilon^2\frac{\cos^2\left \{ \text{Im} \left [\text{arctanh} \left (\frac{\Delta(T_2)}{\varepsilon+i\gamma_2} \right )\right ] \right \}}{\cosh ^2 \left (\frac{\varepsilon}{2k_BT_2} \right )},
\end{equation}
with $\sigma_N$ and $N_F$ being the electrical conductivity in the normal state and the density of states at the Fermi energy, respectively.
The first derivative of the heat power densities in equation~\eqref{TotalPower}, calculated at a steady electronic temperature $T_e$, gives the electron-phonon thermal conductance~\cite{Vir18}, per unit volume, \begin{eqnarray} \mathcal{G}(T_e)=\frac{\partial \mathcal{P}_{e\text{-ph}}}{\partial T_e} =\frac{5\Sigma}{960\zeta (5)k_B^6T_e^6}\displaystyle\iint_{-\infty}^{\infty}\frac{dEd\varepsilon E\left | \varepsilon \right |^3M^i_{E,E-\varepsilon }}{\sinh\frac{\varepsilon }{2k_BT_e}\cosh\frac{E }{2k_BT_e}\cosh\frac{E-\varepsilon }{2k_BT_e}} \label{ephconductance} \end{eqnarray} and the electron thermal conductance~\cite{Mar14}, per unit volume, \begin{eqnarray} \mathcal{K}(T_e)=\frac{\partial \mathcal{P}_{\text{in}}}{\partial T_e}=\frac{1}{2e^2k_BT_e^2RD_2}\displaystyle\int_{0}^{\infty}\frac{d\varepsilon \varepsilon^2}{\cosh^2\frac{\varepsilon }{2k_BT_e}}\Big [\mathcal{N}_1(\varepsilon,T_e)\mathcal{N}_2(\varepsilon,T_e)-\mathcal{M}_1(\varepsilon,T_e)\mathcal{M}_2(\varepsilon,T_e)\cos\varphi\Big ],\qquad \label{econductance} \end{eqnarray} where $\mathcal{M}_j\left ( \varepsilon ,T \right )=\left |\text{Im}\left [\frac{ - i\Delta _j\left ( T \right )}{\sqrt{\left (\varepsilon+ i\gamma_j \right ) ^2-\Delta _j\left ( T \right )^2}} \right ] \right |$. To estimate $\eta_{cr,2}$ through equation~\eqref{criticalcoupliconstant2}, we assume $\varphi=\pi$ in equation~\eqref{econductance}, since the center of the soliton, where $\varphi=\pi$, lies exactly at the junction edge $x=L$.
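Consistently with these definitions, the conductances can also be cross-checked numerically as finite-difference derivatives of the corresponding power densities. The sketch below assumes that functions `P_eph(Te)` and `P_in(Te, phi)` are available, e.g. implemented by quadrature along the lines of the previous sketch; both names are hypothetical.

```python
def thermal_conductance(P, Te, dT=1e-4):
    """Central finite difference dP/dT_e at the working point T_e [K]."""
    return (P(Te + dT) - P(Te - dT)) / (2.0 * dT)

# Example usage (P_eph and P_in assumed to be defined elsewhere):
# G = thermal_conductance(P_eph, Te=0.5)                         # Eq. (ephconductance)
# K = thermal_conductance(lambda T: P_in(T, phi=np.pi), Te=0.5)  # phi = pi at x = L
```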
\section{Introduction} The discovery of the Higgs boson by the ATLAS and CMS collaborations of the LHC has had a major impact on particle physics~\cite{Aad:2012tfa,Chatrchyan:2012ufa}. With it, all the elementary particles of the Standard Model (SM) with gauge symmetry $\text{SU}(3) \times \text{SU}(2) \times \text{U}(1)$ have been experimentally confirmed. Supersymmetry (SUSY) provides one of the most attractive candidates for the theory beyond the SM\@. In the Minimal Supersymmetric Standard Model (MSSM) with low-energy supersymmetry~\cite{Nilles:1983ge}, the three gauge couplings are unified at a high-energy scale and the lightest superpartner of the SM particles can be the dark matter in the Universe. Furthermore, the electroweak symmetry breaking is naturally triggered around the Fermi scale~\cite{Inoue:1982pi} and is stabilized by the cancellation of quantum corrections. We focus on two important physical quantities: the Higgs boson mass and the muon anomalous magnetic moment (the muon $g-2$). The Higgs boson mass was determined to be around 125 GeV by the LHC experiments. In the MSSM, it is well known that the Higgs mass is below the Z boson mass at tree level and can be raised by the radiative correction from the $\mathcal{O}(1)$ top Yukawa coupling~\cite{Okada:1990vk,Carena:2000dp}. The Higgs mass generally becomes large when the SUSY breaking (in the top sector) is large, since the cancellation of quantum corrections is weakened. The muon $g-2$, which is defined by $a_\mu=(g-2)_\mu/2$, shows a discrepancy between the experimental result and the SM prediction, which may indicate new physics beyond the SM\@. The discrepancy is above 3 sigma and quantified as $\Delta a_\mu \equiv a_\mu (\text{exp}) - a_\mu (\text{SM}) = (28.1 \pm 8.0) \times 10^{-10}$~\cite{Bennett:2006fi,hagiwara:2007}. The muon $g-2$ generally becomes small when the SUSY breaking (in the muon sector) is large~\cite{Lopez:1993vi,Martin:2001st} due to the decoupling property. Note that these two quantities have opposite dependences on the SUSY breaking (the masses of superpartners). This fact leads to a problem in the MSSM: the two experimental results cannot be explained simultaneously with universal SUSY-breaking parameters, e.g.\ in the minimal supergravity mediation~\cite{Endo:2011gy}. In this work, we consider the effect of extra vector-like generations on the above problem. It is known that light vector-like matter fermions are consistent with the precision electroweak measurements, while a 4th (chiral) generation alone is not~\cite{Lavoura:1992np}. Further, with (a pair of) vector-like generations, the gauge coupling unification is preserved~\cite{Maiani:1977cg,Bando:1996in}. The gauge couplings in the high-energy regime are larger than those of the MSSM, and hence the renormalization-group (RG) running is governed by the gauge sector. As a result, the ratios of Yukawa and gauge couplings converge strongly to their infrared fixed-point values. It is therefore possible to determine Yukawa (and other) couplings at low energy~\cite{Lanzagorta:1995gp} which are insensitive to high-energy initial values, thanks to this strong convergence property~\cite{Bando:1996in,Bando:1997dg}. In Ref.~\cite{Bando:1997cw}, we showed that the fixed-point behavior determines the matrix forms of the Yukawa couplings of the up-type quarks, down-type quarks and charged leptons with the vector-like generations. A notable fact in these Yukawa matrices is that the Higgs boson and the muon have sizable couplings to the vector-like generations.
In this paper, we focus on these Yukawa matrices and evaluate the contributions of vector-like generations to the Higgs boson mass and the muon $g-2$. The organization of this paper is as follows. In Section 2, we introduce vector-like generations and explain our model. The RG behaviors of gauge and Yukawa couplings are discussed and the realistic forms of Yukawa matrices are determined by the fixed-point property. In Section 3, we give the analytic formulae of the Higgs boson mass and the muon $g-2$ in the model. In Section 4, we first describe the RG property of SUSY-breaking parameters on which the Higgs mass and the muon $g-2$ depend. We then show the parameter regions where the Higgs mass and the muon $g-2$ are explained simultaneously, and compare our model with the MSSM\@. The final section is devoted to the conclusion. \bigskip \section{Model} We introduce a pair of vector-like generations (i.e.\ the 4th and 5th ones) to the MSSM\@. The (super)fields for the MSSM part are \begin{align} & Q_i, \; u_i, \; d_i, \; L_i, \; e_i, \quad (i=1,\cdots,3) \\ & H_u, \; H_d, \end{align} where $Q_i$ and $L_i$ are the SU(2) doublets of quarks and leptons, $u_i, d_i$ and $e_i$ are the SU(2) singlets of up-type, down-type quarks and charged leptons, respectively. The Higgs doublets are denoted by $H_u$ and $H_d$. The (super)fields for the vector-like generation part are \begin{align} & Q_4, \; u_4, \; d_4, \; L_4, \; e_4, \label{eq:forth} \\ & \bar{Q}, \; \bar{u}, \; \bar{d}, \; \bar{L}, \; \bar{e}, \label{eq:bar} \\ & \Phi . \label{eq:higgssec2} \end{align} The quantum charges of these superfields are summarized in Table~\ref{tb:newgeneration}. \begin{table}[t] \begin{center} \begin{tabular}{|c|c|c|} \hline ~~~~~ & $ \text{SU}(3) $ & $( \text{SU}(2),\,\text{U}(1) )$ \\ \hline \hline $Q_4$ & ${\bm 3}$ & $ ( {\bm 2},\, \frac{1}{6} ) $ \\ $u_4$ & ${\bm 3^*}$ & $ ( {\bm 1},\, \frac{-2}{3} ) $ \\ $d_4$ & ${\bm 3}^*$ & $ ( {\bm 1},\, \frac{1}{3} ) $ \\ $L_4$ & ${\bm 1}$ & $ ( {\bm 2},\, \frac{-1}{2} ) $ \\ $e_4$ & ${\bm 1}$ & $ ( {\bm 1},\, 1 ) $ \\ $\bar{Q}$ & ${\bm 3}^*$ & $ ( {\bm 2},\, \frac{-1}{6} ) $ \\ $\bar{u}$ & ${\bm 3}$ & $ ( {\bm 1},\, \frac{2}{3} ) $ \\ $\bar{d}$ & ${\bm 3}$ & $ ( {\bm 1},\, \frac{-1}{3} ) $ \\ $\bar{L}$ & ${\bm 1}$ & $ ( {\bm 2},\, \frac{1}{2} ) $ \\ $\bar{e}$ & ${\bm 1}$ & $ ( {\bm 1},\, -1 ) $ \\ $\Phi$ & ${\bm 1}$ & $ ( {\bm 1},\, 0 ) $ \\ \hline \end{tabular} \caption{The chiral superfields and their quantum numbers under the SM gauge group.} \label{tb:newgeneration} \end{center} \bigskip \end{table} The chiral superfields in (\ref{eq:forth}) have the same charges as the MSSM generations, and those in (\ref{eq:bar}) have the opposite charges. These pairs with opposite charges are called vector-like generations. The field $\Phi$ in (\ref{eq:higgssec2}) is a gauge singlet under the SM gauge transformation. The superpotential in the model is \begin{align} W & = \sum_{i, j = 1, \cdots, 4} \Big( {\bm y}_{u_{ij}} u_i Q_j H_u + {\bm y}_{d_{ij}} d_i Q_j H_d + {\bm y}_{e_{ij}} e_i L_j H_d \Big) + \mu_H H_u H_d \nonumber \\ & \hspace{20mm} + y_{\bar{u}}\, \bar{u} \bar{Q} H_d + y_{\bar{d}}\, \bar{d} \bar{Q} H_u + y_{\bar{e}}\, \bar{e} \bar{L} H_u + M \Phi^2 + Y \Phi^3 \nonumber \\ & \quad + \sum_{i = 1, \cdots, 4} \Big( Y_{Q_i} \Phi Q_i \bar{Q} + Y_{u_i} \Phi u_i \bar{u} + Y_{d_i} \Phi d_i \bar{d} + Y_{L_i} \Phi L_i \bar{L} + Y_{e_i} \Phi e_i \bar{e} \Big). \label{superpotential44bar} \end{align} The first two lines show the Yukawa interactions of five-generation matter fields. 
The interactions in the third line generate vector-like mass terms when the scalar component of $\Phi$ develops a vacuum expectation value. In addition, the soft SUSY-breaking terms are given by \begin{align} - \mathcal{L}_{\rm soft} & = \bigg[ \sum_{i,j = 1, \cdots, 4} ( {\bm a}_{u_{ij}} \tilde{u}_i \tilde{Q}_j H_u + {\bm a}_{d_{ij}} \tilde{d}_i \tilde{Q}_j H_d + {\bm a}_{e_{ij}} \tilde{e}_i \tilde{L}_j H_d ) + b_H H_u H_d \nonumber \\ & \hspace{20mm} + a_{\bar{u}}\, \tilde{\bar{u}} \tilde{\bar{Q}} H_d + a_{\bar{d}}\, \tilde{\bar{d}} \tilde{\bar{Q}} H_u + a_{\bar{e}}\, \tilde{\bar{e}} \tilde{\bar{L}} H_u + b_M \Phi^2 + A_Y \Phi^3 \nonumber \\ & \qquad + \sum_{i = 1, \cdots, 4} ( A_{Q_i} \Phi \tilde{Q}_i \tilde{\bar{Q}} + A_{u_i} \Phi \tilde{u}_i \tilde{\bar{u}} + A_{d_i} \Phi \tilde{d}_i \tilde{\bar{d}} + A_{L_i} \Phi \tilde{L}_i \tilde{\bar{L}} + A_{e_i} \Phi \tilde{e}_i \tilde{\bar{e}} ) + {\rm h.c.} \bigg] \nonumber \\ & \quad + \tilde{Q}^\dagger {\bf m}_Q^2 \tilde{Q} + \tilde{L}^\dagger {\bf m}_L^2 \tilde{L} + \tilde{u}\,{\bf m}_u^2 \tilde{u}^\dagger + \tilde{d}\,{\bf m}_d^2\,\tilde{d}^\dagger + \tilde{e}\,{\bf m}_e^2 \tilde{e}^\dagger + m^2_{H_u} H^*_u H_u + m^2_{H_d} H^*_d H_d \nonumber \\ & \quad + m^2_{\bar{Q}}\,\tilde{\bar{Q}}^* \tilde{\bar{Q}} + m^2_{\bar{L}}\tilde{\bar{L}}^* \tilde{\bar{L}} + m^2_{\bar{u}}\, \tilde{\bar{u}} \tilde{\bar{u}}^* + m^2_{\bar{d}}\,\tilde{\bar{d}} \tilde{\bar{d}}^* + m^2_{\bar{e}}\,\tilde{\bar{e}} \tilde{\bar{e}}^* + m^2_\Phi \Phi^* \Phi \notag \\ & \quad + \frac{1}{2} \big( M_3 \tilde{g} \tilde{g} + M_2 \tilde{W} \tilde{W} + M_1 \tilde{B} \tilde{B} + {\rm h.c.} \big) , \label{44barsoftbreaking} \end{align} where the fields with a tilde denote the scalar components of matter superfields, and $\tilde{g}, \tilde{W}$ and $\tilde{B}$ represent the gauginos of the SU(3), SU(2) and U(1) gauge groups, respectively. After the electroweak symmetry breaking, the mass matrices for the five-generation quarks and leptons are given by \begin{align} m_u &\, = \bordermatrix{ & u_{1L} & \cdots & u_{4L} & u_{5L} \cr u_{1R} & & & & \cr \;\;\vdots & & {\bm y}_{u_{ij}} v_u & & Y_{u_i} V \cr u_{4R} & & & & \cr u_{5R} & & Y_{Q_j} V & & y_{\bar{u}}\,v_d \cr } , \label{ufermionmassmat} \\[2mm] m_d &\, = \bordermatrix{ & d_{1L} & \cdots & d_{4L} & d_{5L} \cr d_{1R} & & & & \cr \;\;\vdots & & {\bm y}_{d_{ij}} v_d & & Y_{d_i} V \cr d_{4R} & & & & \cr d_{5R} & & Y_{Q_j} V & & y_{\bar{d}}\,v_u \cr } , \label{dfermionmassmat} \\[2mm] m_e &\, = \bordermatrix{ & e_{1L} & \cdots & e_{4L} & e_{5L} \cr e_{1R} & & & & \cr \;\;\vdots & & {\bm y}_{e_{ij}} v_d & & Y_{e_i} V \cr e_{4R} & & & & \cr e_{5R} & & Y_{L_j} V & & y_{\bar{e}}\,v_u \cr } , \label{efermionmassmat} \end{align} where $v_u$, $v_d$ and $V$ are the vacuum expectation values of the scalar components of $H_u$, $H_d$ and $\Phi$, respectively. Here and hereafter, we denote the fields in the ``5th'' generation superfields (\ref{eq:bar}) by those with the index ``5''. For example, the fermions in the superfield $\bar{Q}$ are $(u_{5R})^C$ and $(d_{5R})^C$, and that in $\bar{u}$ is $u_{5L}$ (the subscripts $L$ and $R$ mean the chirality). The corresponding scalar partners are denoted by the fields with tildes such as $\tilde{u}_{5L}$. The singlet expectation value $V$ is assumed to be a bit larger than the electroweak scale ($V \gg v_u,\, v_d$) since the vector-like generations should be heavy to evade experimental bounds such as flavor constraints.
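To make the structure of these matrices concrete, the following toy sketch assembles a $5\times5$ charged-lepton mass matrix with the block structure of (\ref{efermionmassmat}) and extracts the physical masses as its singular values. All numerical inputs are illustrative placeholders, not fitted values.

```python
import numpy as np

vu, vd, V = 170.0, 10.0, 4000.0          # GeV; tan(beta) ~ 17 and V >> vu, vd
ye  = np.diag([3e-6, 6e-4, 1e-2, 5e-1])  # toy 4x4 Yukawa matrix y_e
Ye  = np.array([0.0, 0.0, 0.0, 0.6])     # couplings Y_{e_i} to Phi
YL  = np.array([0.0, 0.0, 0.0, 0.6])     # couplings Y_{L_j} to Phi
yeb = 0.6                                # coupling y_ebar

me = np.zeros((5, 5))
me[:4, :4] = ye * vd     # ordinary 4x4 block, y_e v_d
me[:4, 4]  = Ye * V      # mixing of right-handed leptons with the 5th generation
me[4, :4]  = YL * V      # mixing of left-handed leptons with the 5th generation
me[4, 4]   = yeb * vu

masses = np.sort(np.linalg.svd(me, compute_uv=False))
print(masses)            # light SM-like states plus heavy states of order Y V
```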
Hereafter, we refer to the present model including vector-like generations as the Vector-like Matter Supersymmetric Standard Model (VMSSM). \subsection{Gauge coupling unification} The VMSSM is quite different from the MSSM with respect to the RG running of coupling constants. In particular, the one-loop RG equations for gauge couplings are given by \begin{eqnarray} \frac{d g_i}{d ( \log \mu )} = b_i \frac{ g^3_i }{16 \pi^2 }, \qquad ( b_1, b_2, b_3 ) = \begin{cases} (\frac{33}{5}, 1, -3) & ({\rm MSSM}) \\ (\frac{53}{5}, 5, 1) & ({\rm VMSSM}) \end{cases} \end{eqnarray} where $\mu$ is the renormalization scale and $g_1, g_2$ and $g_3$ are the gauge coupling constants of the U(1), SU(2) and SU(3) gauge groups, respectively. \begin{figure}[t] \begin{center} \includegraphics[width=100mm]{gaugeunification.pdf} \caption{Gauge coupling unification in the MSSM (red) and the VMSSM (blue and dashed black). The red and blue lines are the one-loop RG running of $\alpha_i = g_i^2 /4 \pi$ ($i = 1,2,3$) and the dashed ones are the two-loop running in the VMSSM\@. The horizontal axis denotes the renormalization scale $\mu$.} \label{fig:gaugecoupling} \end{center}\bigskip \end{figure} In Fig.~\ref{fig:gaugecoupling}, we show the RG running of the gauge coupling constants $\alpha_i = g_i^2/4\pi$ for the MSSM (red lines) and the VMSSM (blue ones). The two-loop RG running in the VMSSM is also shown by the dashed lines. The gauge coupling unification is preserved in the VMSSM and the unified gauge coupling constant is defined as \begin{align} \alpha_\text{GUT} \,=\, \alpha_1 (M_\text{GUT}) = \alpha_2 (M_\text{GUT}) = \alpha_3 (M_\text{GUT}) \end{align} where the scale $M_\text{GUT}$ of grand unified theory (GUT) is around $10^{16}$ GeV\@. In the figure, the unification scale in the VMSSM is found to be larger than in the MSSM due to the two-loop RG effects of gauge couplings~\cite{Bando:1996in}. Further, the unified gauge coupling constant in the VMSSM becomes larger than in the MSSM since the vector-like generations make the gauge couplings asymptotically non-free. The large coupling constants in the gauge sector govern the low-energy behavior of the other model parameters through the RG evolution. As will be seen later, this fact is important for analyzing the Yukawa and SUSY-breaking parameters. In the numerical calculation we use the two-loop RG equations for the gauge coupling constants and gaugino masses, which are summarized in Appendix~A. \subsection{Yukawa couplings at the unification scale} In this subsection, we consider possible forms of the quark and lepton Yukawa couplings at the GUT scale. Due to the strong RG effect of gauge couplings, Yukawa couplings tend to exhibit fixed-point behavior at low energy~\cite{Pendleton:1980as}. Fig.~\ref{fig:Yukawaconvergency} shows the typical RG flow of Yukawa couplings in the case where ${\bm y}_{u_{33}}$, ${\bm y}_{d_{33}}$ and ${\bm y}_{e_{33}}$ are turned on. \begin{figure}[t] \begin{center} \includegraphics[width=100mm]{yukawaconv.pdf} \caption{Typical RG flow of the 3rd generation Yukawa couplings ${\bm y}_{u_{33}}$ (black), $\,{\bm y}_{d_{33}}$ (red) and ${\bm y}_{e_{33}}$ (blue) normalized by $g_3$. The three lines for each Yukawa coupling correspond to the initial values 0.5, 1 and 2 from bottom to top.
It is found that these Yukawa couplings have the strong convergence property in the infrared regime.} \label{fig:Yukawaconvergency} \end{center}\bigskip \end{figure} In the figure, all these couplings converge to their fixed-point values and are determined independently of their initial values at high energy.\footnote{The predictions from infrared fixed points of RG equations are reliable only when the couplings indeed reach their fixed points around the electroweak scale. Such strong convergence behavior can be realized~\cite{Bando:1997dg} in asymptotically non-free gauge theories (like the present VMSSM), extra-dimensional models, etc.} By using this feature, it is possible to specify the matrix forms of the Yukawa couplings for matter fields, that is, which elements can be non-vanishing at the GUT scale. For the present purpose of calculating the Higgs boson mass and the muon $g-2$, it is sufficient to determine the Yukawa couplings except for the 1st generation. The matrix forms of the Yukawa couplings from the 2nd to 5th generations are given by \begin{eqnarray} \text{up-type quarks} & :~ & \bordermatrix{ & 2 & 3 & 4 & 5 \cr 2 & \epsilon^3 \hat{y} & & \hat{y} & \epsilon^3 \hat{Y} \cr 3 & & \hat{y} & & \cr 4 & \hat{y} & & & \hat{Y} \cr 5 & \epsilon^3 \hat{Y} & & \hat{Y} & \hat{y} \cr }, \label{ufermionmassGUT} \\ \text{down-type quarks} & :~ & \bordermatrix{ & 2 & 3 & 4 & 5 \cr 2 & \epsilon^3 \hat{y} & & \hat{y} & \epsilon^3 \hat{Y} \cr 3 & & \epsilon \hat{y} & & \cr 4 & \hat{y} & & \hat{y} & \hat{Y} \cr 5 & \epsilon^3 \hat{Y} & & \hat{Y} & \cr }, \label{dfermionmassGUT} \\ \text{charged leptons} & :~ & \bordermatrix{ & 2 & 3 & 4 & 5 \cr 2 & \epsilon^3 \hat{y} & & 3\hat{y} & \epsilon^3 \hat{Y} \cr 3 & & 3\epsilon \hat{y} & & \cr 4 & 3\hat{y} & & 3\hat{y} & \hat{Y} \cr 5 & \epsilon^3 \hat{Y} & & \hat{Y} & \cr }, \label{efermionmassGUT} \end{eqnarray} where the blank entries mean 0, and each $\hat{y}$ ($\hat{Y}$) represents an $\mathcal{O}(1)$ Yukawa coupling to the doublet (singlet) Higgs fields. The parameter $\epsilon$ is needed to reproduce the quark and lepton masses at low energy, especially for the 2nd generation~\cite{Bando:1997cw}. In the charged-lepton matrix (\ref{efermionmassGUT}), the Georgi-Jarlskog factor~\cite{Georgi:1979df} is utilized to account for the quark and lepton mass difference. We assume for simplicity that all $\hat{y}$ and $\hat{Y}$ in the matrices (\ref{ufermionmassGUT})--(\ref{efermionmassGUT}) are the same at the unification scale. With the input values listed in Table~\ref{tb:parameterset}, we find that the quark and lepton masses of the 2nd and 3rd generations are properly reproduced at the weak scale. \begin{table}[t] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline $\epsilon$ & $\hat{y}$ & $\hat{Y}$ & $\alpha_{\rm GUT}$ & $M_{\rm GUT}$ & $M_{\rm SUSY}$ & $V$ & $\tan\beta$ \\ \hline \hline 0.19 & 0.60 & 0.60 & 0.22 & $6.0\times 10^{16}$ GeV & 1.8 TeV & 4.0 TeV & 17 \\ \hline \end{tabular} \caption{The set of input values.} \label{tb:parameterset} \end{center} \end{table} The scale $M_{\text{SUSY}}$ is a typical threshold for supersymmetric particles, and the ratio of the vacuum expectation values of the Higgs doublets is defined as $\tan\beta\equiv v_u/v_d$. In later sections, we will use these input values for the numerical analysis of the Higgs boson mass and the muon $g-2$.
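As a rough numerical illustration of the running described above, the following hedged sketch integrates the one-loop gauge RGEs with the VMSSM coefficients $(b_1,b_2,b_3)$ quoted earlier, together with a single top-like Yukawa coupling run down from the GUT scale for several initial values, mimicking the infrared attraction of Fig.~\ref{fig:Yukawaconvergency}. The Yukawa beta function uses MSSM-like gauge coefficients as a stand-in; the full VMSSM system couples many more Yukawas, so this is only a caricature.

```python
import numpy as np

b = np.array([53.0 / 5.0, 5.0, 1.0])           # VMSSM one-loop gauge coefficients
c = np.array([13.0 / 15.0, 3.0, 16.0 / 3.0])   # MSSM-like top-Yukawa gauge terms

def run_down(y_gut, alpha_gut=0.22, mu_gut=6.0e16, mu_low=2.0e3, steps=20000):
    """Euler integration of one-loop RGEs in t = log(mu), from GUT down to 2 TeV."""
    g = np.full(3, np.sqrt(4.0 * np.pi * alpha_gut))
    y = y_gut
    dt = np.log(mu_low / mu_gut) / steps        # negative: running downwards
    for _ in range(steps):
        g += b * g ** 3 / (16.0 * np.pi ** 2) * dt
        y += y * (6.0 * y ** 2 - c @ g ** 2) / (16.0 * np.pi ** 2) * dt
    return y, y / g[2]

for y0 in (0.5, 1.0, 2.0):
    y, ratio = run_down(y0)
    print(f"y(GUT) = {y0}:  y(2 TeV) = {y:.3f},  y/g3 = {ratio:.3f}")
```

Despite the factor-four spread of initial values, the low-scale ratios printed by this toy run cluster close together, which is the fixed-point mechanism exploited in the text.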
There are two notable byproducts in the above forms of Yukawa matrices. First, in the matrix (\ref{ufermionmassGUT}), the up-type Yukawa couplings of the 2-4 and 4-2 elements are $\mathcal{O}(1)$, that is, the vector-like generations strongly couple to the up-type Higgs field. This means that sizable radiative corrections to the Higgs boson mass arise from these couplings in addition to the ordinary top Yukawa correction in the MSSM\@. Second, in the matrix (\ref{efermionmassGUT}), the 2-4 and 4-2 elements are $\mathcal{O}(1)$, that is, the vector-like generations strongly couple to the muon field. The radiative corrections from these couplings contribute to the muon $g-2$ in addition to the MSSM one. As mentioned in the introduction, the experimental data on the Higgs boson mass and the muon $g-2$ are difficult to explain simultaneously in the MSSM\@. The new contributions from the strongly-coupled vector-like generations are expected to ameliorate the problem. \bigskip \section{Analytic Formula} In this section, we present the analytic expressions for the one-loop radiative corrections to the Higgs boson mass and the muon $g-2$ in the VMSSM. \subsection{Higgs boson mass} The two Higgs doublets contain eight real scalar fields. Three of these are eaten to give masses to the gauge bosons and the remaining five are physical bosons: two charged scalars, one pseudoscalar, and two neutral scalars. Among them, it is well known in the MSSM that the tree-level mass of the lightest neutral scalar is below the $Z$ boson mass. This lightest scalar mass is, however, raised by including quantum corrections from the top and stop particles, so that it becomes a plausible candidate for the 125 GeV Higgs boson found by the LHC experiments. We evaluate the mass of the lightest neutral scalar in the VMSSM, denoted by $m_{h^0}$, by calculating the scalar potential of the Higgs sector up to one-loop order. The quantum corrections to the Higgs mass from vector-like generations were calculated in the literature~\cite{Moroi:1991mg,Martin:2009bg}. We use the effective potential method~\cite{Coleman:1973jx}, as is usual in the MSSM case. The one-loop correction to the Higgs potential is generally given by \begin{eqnarray} \qquad \Delta V_H = \sum_{X=u, d, e}\,\sum_{i=1}^{10} 2 N_c \left[ F( M_{\tilde{X}_i}^2 ) - F ( M_{X_i}^2 ) \right], \hspace{7mm} N_{c} = \begin{cases} 3 & (X=u,d) \\ 1 & (X=e) \end{cases} \end{eqnarray} where $M_{X_i}^2$ and $M_{\tilde{X}_i}^2$ are the squared-mass eigenvalues of fermions and scalars, respectively, which are obtained by diagonalizing (\ref{ufermionmassmat})-(\ref{efermionmassmat}) for fermions ($m_u^\dagger m_u$ and $m_um_u^\dagger$, etc.) and (\ref{uscalarmassmat})-(\ref{escalarmassmat}) for scalars. The function $F$ is defined as \begin{eqnarray} F(x) = \frac{x^2}{64\pi^2} \left[ \ln \left( \frac{x} {\mu^2} \right) - \frac{3}{2} \right] , \end{eqnarray} where $\mu$ represents the renormalization scale, which is set to $M_{\text{SUSY}}$ in evaluating the Higgs mass. The one-loop correction to the lightest neutral scalar mass, $\Delta m^2_{h^0}$, can be extracted from $\Delta V_H$ by the formula~\cite{Martin:2009bg} \begin{eqnarray} \Delta m_{h^0}^2 & = \Bigg[ \dfrac{\sin^2 \beta}{2} \left( \dfrac{\partial^2}{\partial v_u^2} - \dfrac{1}{v_u} \dfrac{\partial}{\partial v_u} \right) + \dfrac{\cos^2 \beta}{2} \left( \dfrac{\partial^2}{\partial v_d^2} - \dfrac{1}{v_d} \dfrac{\partial}{\partial v_d} \right) \nonumber \\ & \qquad + \sin\beta \cos\beta \dfrac{ \partial^2 }{\partial v_u \partial v_d} \Bigg] \Delta V_H .
\label{eq:higgsmasscal} \end{eqnarray} \subsection{Muon $g-2$} In order to evaluate the muon $g-2$, we use the mass eigenstate basis for gauginos, charged leptons, charged sleptons and neutral sleptons. First, we consider the $4\times4$ mass matrix for neutralinos consisting of the bino ($\tilde{B}$), neutral wino ($\tilde{W}^0$) and neutral higgsinos ($\tilde{H}^0_u$ and $\tilde{H}^0_d$). In the basis of $\{ \tilde{B}, \tilde{W}^0, \tilde{H}_d^0, \tilde{H}_u^0 \}$, the neutralino mass matrix $M_{\chi^0}$ is given by \begin{eqnarray} M_{\chi^0} = \left( \begin{array}{cccc} M_1 & 0 & -g_1 v_d/\sqrt{2} & g_1 v_u/\sqrt{2} \\ 0 & M_2 & g_2 v_d/\sqrt{2} & -g_2 v_u/\sqrt{2} \\ -g_1 v_d/\sqrt{2} & g_2 v_d/\sqrt{2} & 0 & -\mu_H \\ g_1 v_u/\sqrt{2} & -g_2 v_u/\sqrt{2} & -\mu_H & 0 \end{array} \right) . \label{neutralinomassmat} \end{eqnarray} Next we consider the mass matrix for charginos consisting of the charged winos ($\tilde{W}^\pm$) and charged higgsinos ($\tilde{H}^+_u$ and $\tilde{H}^-_d$), where the charged winos $\tilde{W}^\pm$ are defined as \begin{eqnarray} \tilde{W}^\pm = \frac{i}{\sqrt{2}} ( \tilde{W}^1 \mp i \tilde{W}^2 ) . \end{eqnarray} In the basis of $\{ \tilde{W}^-, \tilde{H}_d^- \}$ and $\{ \tilde{W}^+, \tilde{H}_u^+ \}$, the chargino mass matrix $M_{\chi^\pm}$ is given by \begin{eqnarray} M_{\chi^\pm} = \left( \begin{array}{cc} M_2 & \sqrt{2} g_2 v_u \\ \sqrt{2} g_2 v_d & \mu_H \end{array} \right) . \label{charginomassmat} \end{eqnarray} These mass matrices are diagonalized as follows to obtain the muon interactions in terms of mass eigenstates. We first diagonalize the neutralino mass matrix (\ref{neutralinomassmat}) by using a unitary matrix $N$ \begin{eqnarray} N M_{\chi^0} N^\dagger = {\rm diag} \big(\, m_{\chi^0_1}, m_{\chi^0_2}, m_{\chi^0_3}, m_{\chi^0_4} \big), \label{nuediagonalize} \end{eqnarray} where $m_{\chi^0_x}$ ($x=1, \ldots, 4$) are the positive mass eigenvalues, and $m_{\chi^0_x} < m_{\chi^0_y}$ if $x < y$. Similarly, the chargino mass matrix (\ref{charginomassmat}) is diagonalized by using two unitary matrices $J$ and $K$ \begin{eqnarray} J M_{\chi^\pm} K^\dagger = {\rm diag} \big( m_{\chi^\pm_1}, m_{\chi^\pm_2} \big) , \label{chardiagonalize} \end{eqnarray} where $m_{\chi_x}^\pm$ ($x = 1,2$) are the positive mass eigenvalues, and $m_{\chi^\pm_1} < m_{\chi^\pm_2}$. Finally, we define the diagonalization of the mass matrices for the lepton sector \begin{align} ( U_{e_R} m_e U_{e_L}^\dagger )_{ij} & = {m_E}_i \delta_{ij} \hspace{5mm} (i,j=1,\dots,5), \\ ( U_{\tilde{e}} M^2_{\tilde{e}} U_{\tilde{e}}^\dagger )_{ab} & = m^2_{\tilde{E}_{a}} \delta_{ab} \hspace{5mm} (a,b=1,\dots,10), \\ ( U_{\tilde{\nu}} M^2_{\tilde{\nu}} U_{\tilde{\nu}}^\dagger )_{\alpha\beta} & = m^2_{\tilde{N}_\alpha} \delta_{\alpha\beta} \hspace{5mm} (\alpha,\beta = 1,\dots,5) , \end{align} for charged leptons, charged sleptons and neutral sleptons, respectively. Here, $m_e$ is the charged lepton mass matrix in Eq.~(\ref{efermionmassmat}), and $M^2_{\tilde{e}}$ and $M^2_{\tilde{\nu}}$ are the charged slepton and neutral slepton mass matrices in (\ref{escalarmassmat}) and (\ref{nuscalarmassmat}). Further, we denote by $m_{E_i}$, $m_{\tilde{E}_a}$ and $m_{\tilde{N}_\alpha}$ the mass eigenvalues for the mass eigenstates of charged leptons $(E_i)$, charged sleptons $(\tilde{E}_a)$ and neutral sleptons $(\tilde{N}_{\alpha})$, respectively.
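For concreteness, a small numerical sketch of these diagonalizations is given below. The parameter values are placeholders chosen only to mimic the scale of the spectra discussed later, and the Takagi diagonalization of the (here real) symmetric neutralino matrix is obtained through a singular value decomposition, which returns the positive mass eigenvalues.

```python
import numpy as np

g1, g2 = 0.36, 0.65                    # illustrative gauge couplings
M1, M2, muH = 180.0, 340.0, 800.0      # GeV, placeholder values
vu, vd = 170.0, 10.0                   # GeV, tan(beta) ~ 17
s2 = np.sqrt(2.0)

Mn = np.array([[M1, 0.0, -g1 * vd / s2,  g1 * vu / s2],
               [0.0, M2,  g2 * vd / s2, -g2 * vu / s2],
               [-g1 * vd / s2,  g2 * vd / s2, 0.0, -muH],
               [ g1 * vu / s2, -g2 * vu / s2, -muH, 0.0]])
m_neut = np.sort(np.linalg.svd(Mn, compute_uv=False))  # m_chi0_1 < ... < m_chi0_4

Mc = np.array([[M2, s2 * g2 * vu],
               [s2 * g2 * vd, muH]])
m_char = np.sort(np.linalg.svd(Mc, compute_uv=False))  # m_chi+_1 < m_chi+_2

print("neutralinos [GeV]:", m_neut)
print("charginos   [GeV]:", m_char)
```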
With these diagonalized bases at hand, the interaction terms of the muon, which are needed to calculate the muon $g-2$, are given by \begin{eqnarray} \mathcal{L} & = & \sum_{a,x} \bar{E}_2 ( n^L_{a x} P_L + n^R_{a x} P_R ) \tilde{E}_a \chi^0_x + \sum_{\alpha,x} \bar{E}_2 ( c^L_{\alpha x} P_L + c^R_{\alpha x} P_R ) \tilde{N}_\alpha \chi_x^\pm \nonumber \\ && \qquad + \sum_a \bar{E}_2 ( s^L_a P_L + s^R_a P_R ) \tilde{E}_a \chi_\Phi + {\rm h.c.} , \label{eq:muonvertex} \end{eqnarray} where $P_L = (1-\gamma_5)/2$ and $P_R = (1+\gamma_5)/2$. The mass eigenstate $E_2$ corresponds to the muon field, and $\chi^0_x$ and $\chi^\pm_x$ are the neutralinos and charginos. The fermion component of the singlet superfield $\Phi$ is denoted by $\chi_\Phi$ and called the phino in this paper. The coefficients in the Lagrangian (\ref{eq:muonvertex}) are \begin{align} n^L_{a x} & = -\sum_{i,j=1}^4 {\bm y}_{e_{ij}} (U_{e_R})_{i2} (U_{\tilde{e}})_{aj} N_{x3} + y_{\bar{e}} (U_{e_R})_{52} (U_{\tilde{e}})_{a,10} N_{x 4} \nonumber \\ & \qquad -\sum_{i=1}^4 \sqrt{2} g_1 (U_{e_R})_{i2} (U_{\tilde{e}})_{a, i+5} N_{x1} -\frac{g_2}{\sqrt{2}} (U_{e_R})_{52} (U_{\tilde{e}})_{a5} N_{x2} \nonumber \\ & \qquad -\frac{g_1}{\sqrt{2}} (U_{e_R})_{52} (U_{\tilde{e}})_{a5} N_{x1} \label{nlax}, \\ n^R_{a x} & = \sum_{i,j=1}^4 {\bm y}_{e_{ij}} (U_{e_L})_{j2} (U_{\tilde{e}})_{a,i+5} N_{x3} -y_{\bar{e}} (U_{e_L})_{52} (U_{\tilde{e}})_{a5} N_{x4} \nonumber \label{nrax} \\ & \qquad +\sum_{i=1}^4 \bigg[ \frac{g_2}{\sqrt{2}} (U_{e_L})_{i2} (U_{\tilde{e}})_{ai} N_{x2} +\frac{g_1}{\sqrt{2}} (U_{e_L})_{i1} (U_{\tilde{e}})_{ai} N_{x1} \bigg] \nonumber \\[1mm] & \qquad +\sqrt{2} g_1 (U_{e_L})_{52} (U_{\tilde{e}})_{a,10} N_{x1}, \\ c^L_{ax} & = -\sum_{i,j=1}^4 {\bm y}_{e_{ij}} (U_{e_R})_{i2} (U_{\tilde{\nu}})_{aj} J_{x2} + g_2 (U_{e_R})_{52} ( U_{\tilde{\nu}} )_{a5} J_{x1}, \label{eq:cl} \\ c^R_{ax} & = y_{\bar{e}} (U_{e_L})_{52} (U_{\tilde{\nu}})_{a5} K_{x2} -\sum_{i=1}^4 g_2 (U_{e_L})_{i2} (U_{\tilde{\nu}})_{ai} K_{x1}, \label{eq:cr} \\ s_a^L & = \sum_{i=1}^4 \Big[ -{Y_e}_i (U_{e_R})_{i2} (U_{\tilde{e}})_{a,10} -{Y_L}_i (U_{e_R})_{52} (U_{\tilde{e}})_{ai} \Big], \label{eq:sla} \\ s^R_a & = \sum_{i=1}^4 \Big[ -{Y_e}_i (U_{e_L})_{52} (U_{\tilde{e}})_{a,i+5} -{Y_L}_i (U_{e_L})_{i2} (U_{\tilde{e}})_{a5} \Big]. \label{eq:sra} \end{align} In the VMSSM, the origins of the SUSY contributions to the muon $g-2$ are divided into three parts\footnote{Strictly speaking, the non-SUSY contribution from the vector-like leptons, denoted by $\Delta a_\mu^{4+\bar{4}}$, has to be taken into account. That is, in the VMSSM the new physics contribution to the muon $g-2$ should be $\Delta a_\mu^{4+\bar{4}}+\Delta a_\mu^{\text{SUSY}}$. However, we numerically find that $\Delta a_\mu^{4+\bar{4}}$ is of $\mathcal{O}(10^{-12})$ when evaluated in accordance with Ref.~\cite{Dermisek:2013gta}, and we drop it in the following analysis.}: neutralinos, charginos, and the phino. The contribution from the phino is evaluated by replacing $\chi^0$ with $\chi_\Phi$ in the neutralino diagram (with an appropriate replacement of coefficients).
We find the SUSY contribution to the muon $g-2$ in the VMSSM: \begin{eqnarray} \Delta a_\mu^{\text{SUSY}} = \Delta a_\mu^{\chi^0} + \Delta a_\mu^{\chi^\pm} + \Delta a_\mu^{\chi_\Phi}, \label{eq:amususy} \end{eqnarray} where \begin{align} \Delta a_\mu^{\chi^0} & = \sum_{a,x} \frac{1}{16\pi^2} \bigg[ \frac{m_\mu m_{\chi^0_x }}{m^2_{\tilde{E}_a}} n_{ax}^L n_{ax}^R F_2^N(r_{1ax}) -\frac{m_\mu^2}{6m^2_{\tilde{E}_a}} \big( n_{ax}^L n_{ax}^{L} + n_{ax}^R n_{ax}^R \big) F_1^N(r_{1ax}) \bigg] , \label{neu} \\ \Delta a_\mu^{\chi^\pm} & = \sum_{\alpha,x} \frac{1}{16\pi^2} \bigg[ \frac{-3 m_\mu m_{\chi_x}^\pm}{ m^2_{\tilde{N}_\alpha} } c_{\alpha x}^L c_{\alpha x}^R F_2^C(r_{2\alpha x}) +\frac{m_\mu^2}{3 m^2_{\tilde{N}_\alpha}} \big( c_{\alpha x}^L c_{\alpha x}^L + c_{\alpha x}^R c_{\alpha x}^R \big) F_1^C(r_{2\alpha x}) \bigg] , \label{char} \\ \Delta a_\mu^{\chi_\Phi} & = \sum_a \frac{1}{16\pi^2} \bigg[ \frac{m_\mu m_{\chi_\Phi}}{m^2_{\tilde{E}_a}} s_a^L s_a^R F_2^N(r_{3a}) -\frac{m_\mu^2}{6 m^2_{\tilde{E}_a}} \big( s_a^L s_a^L + s_a^R s_a^R \big) F_1^N(r_{3a}) \bigg] , \label{phi} \end{align} with $r_{1ax} = m^2_{\chi^0_x} / m^2_{\tilde{E}_a}$, $r_{2\alpha x} = m^2_{\chi^\pm_x} / m^2_{\tilde{N}_\alpha}$, $r_{3a} = m^2_{\chi_\Phi} / m^2_{\tilde{E}_a}$, and $m_\mu$ is the muon mass. The functions $F_{1,2}^N$ and $F_{1,2}^C$ are defined~\cite{Martin:2001st} by \begin{eqnarray} && F_1^N(x) = \frac{ 2 }{ (1-x)^4 } \left( 1 - 6 x + 3 x^2 + 2 x^3 - 6 x^2 \ln x \right), \\ && F_2^N(x) = \frac{ 3 }{ (1-x)^3 } \left( 1 - x^2 + 2 x \ln x \right), \\ && F_1^C(x) = \frac{ 2 }{ (1-x)^4 } \left( 2 + 3 x - 6 x^2 + x^3 + 6 x \ln x \right), \\ && F_2^C(x) = \frac{ -3 }{ 2(1-x)^3 } \left( 3 - 4 x + x^2 + 2 \ln x \right) , \end{eqnarray} each of which is normalized such that $F(1)=1$. \bigskip \section{Numerical Result} \label{sec:numerical} In the following, we calculate the Higgs boson mass $m_{h^0}$ and the correction to the muon $g-2$ by using the results (\ref{eq:higgsmasscal}) and (\ref{eq:amususy}). \subsection{SUSY-breaking parameters} As for the SUSY-breaking scenario, we use the minimal gravity mediation~\cite{Chamseddine:1982jx}, which determines the superpartner spectrum by assuming that all gaugino masses unify to $m_{1/2}$ and all scalar soft masses also unify to $m_0$ at the GUT scale. The other relevant SUSY-breaking parameter is $A_0$, the universal scalar trilinear coupling at the GUT scale. In this paper, $\tan\beta$ is fixed so that the experimental value of the muon mass is reproduced at the electroweak scale for fixed values of Yukawa couplings. Furthermore, the sign of the $\mu_H$ parameter is taken to be positive in view of the muon $g-2$ anomaly. As a result, in the following analysis we have three free SUSY-breaking parameters: $m_{1/2}$, $m_0$ and $A_0$. Once these parameters are fixed together with the input values listed in Table~\ref{tb:parameterset}, the resultant low-energy physics is determined by solving the RG equations. \begin{figure}[t] \begin{center} \includegraphics[width=100mm]{atermconv.pdf} \caption{Typical RG flow of the scalar trilinear couplings ${\bm a}_{u_{33}}$ (red), $a_{\bar{u}}$ (black) and $\aterm{e_{24}}$ (blue). The three lines for each coupling correspond to the initial values $A_0= -1.0$, $0$, $1.0$ TeV from bottom to top. It is found that these trilinear couplings have the strong convergence property in the infrared regime.} \label{fig:Ae24rge} \end{center} \end{figure}
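For reference, the loop functions defined above can be coded directly; the sketch below is an illustration, not the authors' code, and checks the normalization $F(1)=1$ slightly away from $x=1$, where the closed-form expressions suffer from numerical cancellations.

```python
import numpy as np

def F1N(x):
    return 2.0 / (1 - x) ** 4 * (1 - 6 * x + 3 * x ** 2 + 2 * x ** 3
                                 - 6 * x ** 2 * np.log(x))

def F2N(x):
    return 3.0 / (1 - x) ** 3 * (1 - x ** 2 + 2 * x * np.log(x))

def F1C(x):
    return 2.0 / (1 - x) ** 4 * (2 + 3 * x - 6 * x ** 2 + x ** 3
                                 + 6 * x * np.log(x))

def F2C(x):
    return -3.0 / (2.0 * (1 - x) ** 3) * (3 - 4 * x + x ** 2 + 2 * np.log(x))

for F in (F1N, F2N, F1C, F2C):
    assert abs(F(1.001) - 1.0) < 1e-2   # each function is normalized to F(1) = 1
print([round(F(0.25), 3) for F in (F1N, F2N, F1C, F2C)])
```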
We here comment on a typical property of the scalar trilinear couplings in the VMSSM\@. Fig.~\ref{fig:Ae24rge} shows the RG running of several trilinear couplings which are relevant to the Higgs boson mass and the muon $g-2$. The red and black lines show the energy dependence of $\aterm{u_{33}}$ and $a_{\bar{u}}$, which give sizable radiative corrections to the Higgs boson mass. The blue lines show the energy dependence of $\aterm{e_{24}}$, which contributes to the muon $g-2$. It is found that these trilinear couplings have strong infrared convergence, as in the case of the Yukawa couplings described in Fig.~\ref{fig:Yukawaconvergency}. In the MSSM, it is known that the mass of the lightest neutral scalar depends on the trilinear coupling of the stop~\cite{Okada:1990vk}. In the VMSSM, taking into account the infrared convergence behavior, one expects that the Higgs boson mass does not depend on the initial values of the trilinear couplings at high energy. Moreover, one might also consider that the same is true for the muon $g-2$. We will, however, show that the muon $g-2$ depends on the universal trilinear coupling $A_0$ in particular circumstances, and discuss the reason in detail in Section~\ref{sec:muong2}. \subsection{Parameter dependence of Higgs boson mass} \label{sec:higgs} \begin{figure}[tbp] \begin{minipage}{0.5\hsize} \begin{center} \includegraphics[width=80mm]{higgsMUm0250_500_1000_A00.pdf} \end{center} \end{minipage} \begin{minipage}{0.5\hsize} \begin{center} \includegraphics[width=80mm]{higgsMU2150_m0250_500_1000_A0.pdf} \end{center} \end{minipage} \begin{minipage}{0.5\hsize} \begin{center} \includegraphics[width=80mm]{stopvshiggs.pdf} \end{center} \end{minipage} \begin{minipage}{0.5\hsize} \begin{center} \includegraphics[width=80mm]{higgsVdepMUm0250A00.pdf} \end{center} \end{minipage} \caption{The dependence of the lightest Higgs mass $m_{h^0}$ on $m_{1/2}$ (upper left), $A_0$ (upper right), and the stop mass (lower left). The lower right panel shows $m_{h^0}$ including the radiative corrections in the VMSSM (red) and in the MSSM (black) with $m_0=250$ and $A_0=0$ GeV\@. The orange regions represent the Higgs boson with a mass between 124.7 and 126.2 GeV\@. In the upper left panel, the three lines are $m_0=250$, 500, 1000 GeV from bottom to top, and $A_0$ is fixed to 0 GeV\@. In the upper right panel, the three lines are $m_0=250$, 500, 1000 GeV from bottom to top, and $m_{1/2}$ is fixed to 2150 GeV\@.} \label{fig:gauginovshiggs} \end{figure} We first study the dependence of the Higgs boson mass $m_{h^0}$ on the SUSY-breaking parameters and the stop mass. We also compare the contribution from the vector-like generations with that from the MSSM sector. Fig.~\ref{fig:gauginovshiggs} shows our $m_{h^0}$ results in various situations. The orange regions in these panels reproduce the Higgs boson mass between 124.7 and 126.2 GeV\@. First, the upper left panel shows the dependence on the universal gaugino mass $m_{1/2}$, where the universal trilinear coupling $A_0$ is set to 0 GeV\@. The three black lines correspond to the universal scalar soft mass $m_0=250$, 500, 1000 GeV from bottom to top. As seen from this panel, the Higgs boson becomes heavier when $m_{1/2}$ is larger. This originates from the RG property that the low-energy squark masses become large when $m_{1/2}$ is set to be large. It is also noted that $m_{h^0}$ becomes large as $m_0$ increases. In the upper right panel, we show the $A_0$ dependence of $m_{h^0}$, assuming $m_{1/2}=2150$ GeV and $m_0=250$, 500, 1000 GeV from bottom to top.
As discussed in the previous subsection, the Higgs boson mass does not depend on $A_0$ due to the infrared-convergent RG behavior of the trilinear couplings. In the lower left panel, we show the dependence of $m_{h^0}$ on the stop mass, which is denoted by $m_{\tilde{u}_{3L}}$. The orange region corresponds to a stop mass from 2.0 to 2.2 TeV\@. In the MSSM, the stop mass needs to be $3-4$ TeV to explain the Higgs mass around 125 GeV~\cite{Okada:1990gg}. In the VMSSM, however, $m_{h^0}$ is explained by a lighter stop than in the MSSM\@. That is clearly seen in the lower right panel, which shows the Higgs boson mass including the radiative corrections from the VMSSM (red line) and from the MSSM sector only (black line) with $m_0$ and $A_0$ being 250 and 0 GeV, respectively. The black line is defined by taking the limit of large $V$, which corresponds to the decoupling of the vector-like generations. In the VMSSM, the Higgs boson mass turns out to be explained by a smaller value of $m_{1/2}$, that is, lighter low-energy squarks than in the MSSM\@. This is because there exist additional radiative corrections from the vector-like generations, which have sizable couplings to the Higgs fields. \subsection{Parameter dependence of muon $g-2$} \label{sec:muong2} Next we study the dependence of the SUSY contribution $\Delta a_\mu^{\text{SUSY}}$ on the SUSY-breaking parameters and the smuon mass. We also compare the contribution from the vector-like generations with that from the MSSM sector. Fig.~\ref{fig:gauginovsg2} shows our $\Delta a_\mu^{\text{SUSY}}$ results in various situations. The blue regions explain the muon $g-2$ anomaly within the $1\sigma$ level. First, the upper left panel of Fig.~\ref{fig:gauginovsg2} shows the dependence on the universal gaugino mass $m_{1/2}$, where the universal trilinear coupling $A_0$ is set to 0 GeV\@. The three black lines correspond to $m_0=250$, 500, 1000 GeV from top to bottom. As seen from this panel, the $g-2$ contribution $\Delta a_\mu^{\text{SUSY}}$ becomes smaller when $m_{1/2}$ and $m_0$ are larger. This originates from the decoupling property in Eqs.~(\ref{neu})--(\ref{phi}): the SUSY contribution becomes small when the superpartners become heavy. In the upper right panel, we show the $A_0$ dependence of $\Delta a_\mu^{\text{SUSY}}$, assuming $m_{1/2}=2150$ GeV and $m_0=250$, 500, 1000 GeV from top to bottom. An interesting $A_0$ dependence is found in this panel: when the scalar soft masses are small (e.g.\ $m_0=250$ and 500 GeV in the panel), the $g-2$ contribution becomes larger as $A_0$ increases. On the other hand, if the scalar soft masses are large ($m_0=1000$ GeV in the panel), the $g-2$ contribution is almost insensitive to $A_0$. This behavior is understood in the following way. In the charged slepton mass matrix (\ref{escalarmixing}), the off-diagonal mixing terms among the 1st to 4th generations are given by ${\bm a}_{e_{ij}} v_d - \mu_H^*{\bm y}_{e_{ij}} v_u$, where the second term dominates for large $\tan\beta$ and the first, $A$-parameter-dependent, term can be ignored. On the other hand, the off-diagonal mixing with the 5th generation takes the form $A_{e_i} V + Y_{e_i}Y^* |V|^2$, where both terms are of similar order. If $m_0$ is small, the mixing with the 5th generation becomes nearly comparable with the diagonal part at low energy, and one of the pair of vector-like particles becomes light after the diagonalization.
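This level-repulsion mechanism can be illustrated with a toy $2\times2$ block of the charged-slepton mass-squared matrix (all entries below are placeholders, not values from the paper): when the off-diagonal entry $A_{e}V + Y_{e}Y^*|V|^2$ is comparable to the diagonal ones, the lighter eigenstate is pushed down to $\mathcal{O}(100)$ GeV.

```python
import numpy as np

m_soft2 = 600.0 ** 2      # diagonal soft mass squared [GeV^2], placeholder
m_vec2  = 1800.0 ** 2     # diagonal vector-like mass squared [GeV^2]
mix     = 1.0e6           # ~ A_e V + Y_e Y |V|^2 [GeV^2], placeholder

M2 = np.array([[m_soft2, mix],
               [mix, m_vec2]])
print(np.sqrt(np.linalg.eigvalsh(M2)))  # one O(100) GeV and one heavy state
```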
\begin{figure}[tbp] \begin{minipage}{0.5\hsize} \begin{center} \includegraphics[width=80mm]{g2MUm0250_500_1000A00.pdf} \end{center} \end{minipage} \begin{minipage}{0.5\hsize} \begin{center} \includegraphics[width=80mm]{g2MU2150_m0250_500_1000_A0.pdf} \end{center} \end{minipage} \begin{minipage}{0.5\hsize} \begin{center} \includegraphics[width=80mm]{smuonvsg2} \end{center} \end{minipage} \begin{minipage}{0.5\hsize} \begin{center} \includegraphics[width=80mm]{g2graph_Vdep.pdf} \end{center} \end{minipage} \caption{The dependence of the muon $g-2$ contribution $\Delta a_\mu^\text{SUSY}$ on $m_{1/2}$ (upper left), $A_0$ (upper right) and the smuon mass (lower left). The lower right panel shows the muon $g-2$ in the VMSSM (red) and in the MSSM (black) with $m_0=250$ and $A_0=0$ GeV\@. The blue regions explain the deviation between the SM prediction and the experimental result within $1\sigma$. In the upper left panel, the three lines are $m_0=250$, 500, 1000 GeV from top to bottom, and $A_0$ is fixed to 0 GeV\@. In the upper right panel, the three lines are $m_0=250$, 500, 1000 GeV from top to bottom, and $m_{1/2}$ is fixed to 2150 GeV\@.} \label{fig:gauginovsg2} \end{figure} In the lower left panel, we show the dependence on the smuon mass, which is denoted by $m_{\tilde{e}_{2L}}$. The blue region is explained if the smuon mass is around 1 TeV\@. In the MSSM, the smuon mass needs to be $\mathcal{O}(100)\,\text{GeV}$ to explain the anomaly. In the VMSSM, however, the deviation of the muon $g-2$ is explained by a heavier smuon than in the MSSM\@. This is clearly seen in the lower right panel, which shows the muon $g-2$ contribution from the VMSSM (red line) and from the MSSM sector only (black line). The black line is defined by the large-$V$ limit, as before. In the VMSSM, the muon $g-2$ anomaly turns out to be explained by a larger value of $m_{1/2}$, that is, a heavier low-energy smuon than in the MSSM\@. This is because there exists an additional contribution from the vector-like generations, which have sizable couplings to the muon field. \begin{figure}[tbp] \begin{center} \includegraphics[width=100mm]{g2PartCont.pdf} \caption{The blue and green lines represent the muon $g-2$ contributions from neutralinos and charginos, respectively. The blue regions explain the deviation between the SM prediction and the experimental result within $1\sigma$.} \label{fig:PartCont} \end{center} \end{figure} We also show in Fig.~\ref{fig:PartCont} each $g-2$ contribution from the SUSY particles. The blue and green lines represent the contributions from neutralinos and charginos, respectively. The phino contribution is negligible ($\Delta a_\mu^{\chi_\Phi}\sim10^{-11}$) in this figure, mainly because the gauge couplings are absent in (\ref{eq:sla}) and (\ref{eq:sra}). In the VMSSM with the parameter set of Table~\ref{tb:parameterset}, it is found that the neutralino contribution tends to dominate over the chargino one. This stems from the superpartner spectrum: a charged slepton of $\mathcal{O}(100)\,\text{GeV}$ exists, while the neutral sleptons are $\mathcal{O}(1)\,\text{TeV}$\@. The concrete mass spectra are listed in the next subsection. \subsection{Higgs boson mass and muon $g-2$ in the VMSSM} \label{sec:higgsandg2} Before estimating the Higgs boson mass and the muon $g-2$, we comment on the experimental bounds on the masses of the vector-like generations and superparticles.
For the fermion masses of the vector-like generations, the experimental lower bounds for quarks and charged leptons are roughly given by 700 GeV and 100 GeV, respectively~\cite{Agashe:2014kda}. In the VMSSM, the 4-5 and 5-4 components of the Yukawa couplings in (\ref{ufermionmassGUT})-(\ref{efermionmassGUT}) are $\mathcal{O}(1)$ and the expectation value $V$ is set to $4000$ GeV\@. Thus the quarks and leptons of the vector-like generations become $\mathcal{O}(1)$ TeV and about 200 GeV, respectively, which satisfies the experimental bounds. \begin{figure}[t] \begin{center} \includegraphics[width=100mm]{gauginomass.pdf} \caption{The mass parameters of bino ($M_1$), wino ($M_2$) and gluino ($M_3$) at $M_{\text{SUSY}}$ for the universal gaugino mass $m_{1/2}$ at $M_{\rm GUT}$. The dashed line (800 GeV) represents a rough experimental lower bound of the gluino mass.} \label{fig:allgauginomass} \end{center} \end{figure} The gaugino sector, especially the gluino, gives an important experimental bound on the superpartner mass spectrum. This is because in the VMSSM the gauge couplings are asymptotically non-free and the universal gaugino mass is much larger than the low-energy gaugino masses. The low-energy values of the gaugino mass parameters are shown in Fig.~\ref{fig:allgauginomass}, where the horizontal dashed line marks 800 GeV, a rough experimental lower bound on the gluino mass~\cite{Agashe:2014kda}. It is found that the universal gaugino mass $m_{1/2}$ should be taken above 1.9 TeV\@. The squarks of the first two generations are roughly excluded for masses below 1100 GeV, and the superpartners of the top and bottom quarks should be heavier than 95 GeV and 89 GeV, respectively~\cite{Agashe:2014kda}. The lower left panel of Fig.~\ref{fig:gauginovshiggs} implies that the stop mass in the VMSSM is about 2 TeV for the Higgs boson mass being around 125 GeV\@. Since the RG evolution of the squark mass parameters is governed by the strong gauge coupling, the other squark masses are of the same order as the stop mass. Thus the parameter regions appropriate for the Higgs boson mass and the muon $g-2$ are allowed by the squark mass bounds in the VMSSM\@. For the charged and neutral sleptons, the experimental mass bound is roughly given by 80 GeV~\cite{Agashe:2014kda}. As seen from Fig.~\ref{fig:gauginovsg2}, the muon $g-2$ anomaly is explained if the smuon mass is $\mathcal{O}(1)$ TeV\@. The other soft mass parameters for sleptons have similar RG behavior to the smuon, and hence all sleptons, except for the charged sleptons of the vector-like generations, are $\mathcal{O}(1)$ TeV at low energy, satisfying the experimental mass bound. As mentioned previously, one of the charged sleptons of the vector-like generations becomes $\mathcal{O}(100)$ GeV when the universal gaugino mass $m_{1/2}$ and/or the soft scalar mass $m_0$ are small. We take into account this slepton mass bound as well as the above gluino mass bound in the following analysis. In Fig.~\ref{fig:result1}, we plot the contours of the Higgs boson mass and the muon $g-2$ in the $m_{1/2}$--$m_0$ plane. The other mass parameter $A_0$ is fixed to 0 GeV (left panel) and 1000 GeV (right panel). The orange region of parameters reproduces the Higgs boson mass from 124.7 to 126.2 GeV\@. The blue and green regions explain the muon $g-2$ anomaly within the 1$\sigma$ and 2$\sigma$ levels, respectively. The black and gray regions are excluded by the experimental mass bounds of the gluino and the charged sleptons of the vector-like generations, respectively.
It is found from the comparison between $A_0=0$ and $A_0=1000$ GeV that the muon $g-2$ is sensitive to $A_0$ whereas the Higgs boson mass is not. This is the parameter dependence mentioned in Subsections~\ref{sec:higgs} and \ref{sec:muong2}. \begin{figure}[tbp] \begin{minipage}{0.5\hsize} \begin{center} \includegraphics[width=\hsize]{regionA00.pdf} \end{center} \end{minipage} \begin{minipage}{0.5\hsize} \begin{center} \includegraphics[width=\hsize]{regionA01000.pdf} \end{center} \end{minipage} \caption{The Higgs boson mass and the muon $g-2$ anomaly in the VMSSM\@. The universal trilinear coupling $A_0$ is set to 0 (left) and 1000 GeV (right). The orange region reproduces the Higgs boson mass, and the blue and green regions explain the muon $g-2$ anomaly. The black and gray regions are excluded by the mass bounds of the gluino and the charged sleptons of the vector-like generations. See the text for details. The cross marks (1)-(3) in the figures are the sample points whose mass parameters and spectra are summarized in Table~\ref{tb:samplepoint}.} \label{fig:result1} \end{figure} We discuss how the vector-like generations contribute to the Higgs boson mass and the muon $g-2$ by comparing the VMSSM results with the MSSM\@. \begin{table}[t] \begin{center} \begin{tabular}{|c|c|c|c|} \hline & Point (1) & Point (2) & Point (3) \\ \hline $m_{1/2}$ & 2150 & 2000 & 2080 \\ $m_0$ & 130 & 400 & 450 \\ $A_0$ & 0 & 0 & 1000 \\ \hline $M_3$ & 900.0 & 837.1 & 864 \\ $m_{\chi_1^0}$ & 185.5 & 172.6 & 177.6 \\ $m_{\chi_1^\pm}$ & 340.8 & 317.1 & 325.9 \\ $m_{\tilde{u}_{3L}}$, $m_{\tilde{u}_{3H}}$ & 1926, \ 2433 & 1811, \ 2385 & 1898, \ 2383 \\ $m_{\tilde{u}_{4L,4H,5L,5H}}$ & $2715-3973$ & $2641-3874$ & $2691-3926$ \\ $m_{\tilde{e}_{2L}}$, $m_{\tilde{e}_{2H}}$ & 952.2, \ 1221 & 922.4, \ 1181 & 921.3, \ 1220 \\ $m_{\tilde{e}_{4L}}$ & 107.4 & 302.5 & 150.5 \\ $m_{\tilde{e}_{4H,5L,5H}}$ & $1129-1860$ & $1112-1808$ & $1119-1862$ \\ $m_{\tilde{\nu}_2}$ & 1227 & 1186 & 1223 \\ $m_{\tilde{\nu}_{4,5}}$ & 816, \ 1773 & 821, \ 1718 & 815, \ 1771 \\ \hline $m_{h^0}$ & 126.0 & 125.1 & 125.6 \\ $\Delta a_\mu^{\text{SUSY}}$ & $26.1\times 10^{-10}$ & $12.1\times 10^{-10}$ & $21.1\times 10^{-10}$ \\ \hline \end{tabular} \caption{The sample points in the VMSSM\@. All the mass parameters are given in units of GeV\@.} \label{tb:samplepoint} \end{center} \end{table} Several patterns of model parameters and mass spectra are listed in Table~\ref{tb:samplepoint}. We choose three benchmark points which simultaneously explain the Higgs boson mass and the muon $g-2$ anomaly: $(m_{1/2}, m_0, A_0)=(2150, 130, 0)$, $(2000, 400, 0)$, $(2080, 450, 1000)$ as Point (1), (2), (3), respectively. These points are marked by (1)-(3) in Fig.~\ref{fig:result1}. We show in the table the mass eigenvalues of the scalar superpartners which are related to the quantum corrections to the Higgs boson mass and the muon $g-2$: the stop ($m_{\tilde{u}_{3L,3H}}$), the smuon ($m_{\tilde{e}_{2L,2H}}$), the sneutrino of the 2nd generation ($m_{\tilde{\nu}_2}$), the vector-like up-type squarks ($m_{\tilde{u}_{4L, 4H, 5L,5H}}$), the vector-like charged sleptons ($m_{\tilde{e}_{4L, 4H, 5L, 5H}}$), and the vector-like neutral sleptons ($m_{\tilde{\nu}_{4,5}}$). The labels $L$ and $H$ of the stop and smuon mean that $L$ is lighter than $H$\@. For the vector-like generations, the mass eigenvalues are arranged in order of increasing mass as 4L, 4H, 5L, 5H for squarks and charged sleptons, and as 4, 5 for neutral sleptons.
In the MSSM, the stop masses roughly need to be $3-4$ TeV to account for the Higgs mass, though this depends on other parameters. In the VMSSM with universal SUSY-breaking parameters, the Higgs mass is reproduced by lighter stop masses. This is because we have extra up-type (s)quarks that strongly couple to the Higgs fields, namely, the Yukawa couplings of the vector-like quarks remain $\mathcal{O}(1)$ at low energy. That gives an additional quantum correction to the Higgs boson mass and relaxes the requirement of large stop masses in the MSSM\@. As for the muon $g-2$, the anomaly can be explained by 1~TeV smuon masses in the VMSSM\@. In this paper, we fix $\tan\beta=17$ to obtain the quark and lepton masses of the 2nd and 3rd generations. If the muon $g-2$ is evaluated with $\tan\beta=17$ in the MSSM, the smuon masses need to be light, $\mathcal{O}(100)$ GeV, to explain the deviation~\cite{Lopez:1993vi,Martin:2001st}. In the VMSSM, one of the charged sleptons in the vector-like generations, $m_{\tilde{e}_{4L}}$ in Table~\ref{tb:samplepoint}, becomes $\mathcal{O}(100)$ GeV\@. Moreover, the Yukawa couplings between the 2nd and vector-like generations, the 2-4 and 4-2 elements in (\ref{efermionmassGUT}), are nonzero. These facts together mean that the muon couplings (\ref{nlax})-(\ref{eq:cr}) give sizable $g-2$ contributions from the extra generations. In the end, the deviation between the SM theoretical values and the experimental measurement can be explained even if the smuon masses are much heavier than in the MSSM. Finally, we comment on the flavor constraints in the VMSSM\@. Notice that the following is a tentative analysis which depends on how the generation mixing, including (the Yukawa couplings of) the first generation, is realized. A typical experimental bound in the quark sector comes from the unitarity of the generation mixing matrix, which is numerically confirmed to be satisfied with heavy vector-like generations. In the lepton sector, the muon has sizable couplings to the vector-like generations, which would induce flavor-changing rare processes.\footnote{For the neutrino physics with low-scale vector-like generations, see \cite{Bando:1998ww} for example.} The flavor mixing involving the third generation ($\tau$) is expected to be small, since $\mu$-$\tau$ has no coupling in (\ref{efermionmassGUT}) and it is only radiatively induced. On the other hand, the muon decay, especially $\mu\to e\gamma$, would be induced through the slepton mixing, which is represented by the product of couplings between the first two generations and the vector-like ones. A previous analysis shows that the branching ratio of $\mu\to e\gamma$ becomes $\mathcal{O}(10^{-13})$, almost the same order as the experimental bound~\cite{Adam:2013mnn}, when the above product of couplings is $\mathcal{O}(10^{-1})$~\cite{Kitano:2000zw}. In the present model, the muon coupling to the vector-like generations is found to be $\mathcal{O}(10^{-1})$ at low energy and that of the electron is expected to be smaller. Thus, within the region where the Higgs boson mass and the muon $g-2$ are explained simultaneously, the $\mu\to e\gamma$ decay should remain below the experimental bound. \bigskip \section{Conclusion} In this paper, we have studied the Higgs boson mass and the muon $g-2$ anomaly in an extension of the MSSM obtained by introducing one pair of vector-like generations. Compared with the MSSM, the Higgs mass is reproduced with a lighter stop, while the muon $g-2$ is fitted by a heavier smuon.
As a result, we found that these two experimental values can be explained simultaneously in wide regions of the SUSY-breaking parameter space. The model parameters are controlled by the strong gauge coupling (and the gluino mass) through the infrared convergence of the RG evolution. Due to this feature, the quark and lepton Yukawa couplings at high energy and the SUSY mass spectrum at low energy are highly restricted, which leads to distinctive physical predictions. Among them, the gluino mass becomes around 900 GeV and the lightest neutralino and chargino are $\mathcal{O}(100)$ GeV when the Higgs mass and the muon $g-2$ are realized. These mass regions would be measurable in near-future experiments. We comment on the phenomenology of the singlet superfield $\Phi$, whose fermionic component is called the phino in this paper. The phino mass is read off from the superpotential and is given by $YV$, where $Y$ is the coefficient of the cubic term of the gauge singlet $\Phi$. The RG running of $Y$ is governed only by the Yukawa couplings involving $\Phi$ and does not involve any gauge couplings, which means that $Y$ is pushed down during the RG evolution. As a result, $Y$ becomes $\mathcal{O}(10^{-2})$ at low energy and the phino mass is around or below 100 GeV\@. Since the lightest neutralino is around 200 GeV (see Table~\ref{tb:samplepoint}), the phino may be the lightest superparticle, which implies that the neutral, non-baryonic phino can be a reasonable candidate for the dark matter in the universe and may also give characteristic collider signatures. That should be investigated in detail, together with the phenomenology of the scalar component of $\Phi$, e.g., a recent analysis of the 750 GeV diphoton excess \cite{diphoton} with $\Phi$ and vector-like generations \cite{Hall:2016swn}. \bigskip \subsection*{Acknowledgments} The authors thank Tetsutaro Higaki, Naoki Yamamoto, and Ryo Yokokura for useful discussions and comments. This work was supported in part by the KLL PhD Program Research Grant from Keio University. \newpage
\section{Introduction} \label{sec:introduction} Maximum likelihood estimation in Gaussian graphical models can be carried out via generic optimization algorithms, Newton--Raphson iteration, iterative proportional scaling, other alternating algorithms \citep{speed:kiiveri:86}, or the graphical lasso with zero penalty \citep[p.\ 631 ff.]{hastie:etal:16}. Iterative proportional scaling is a provably convergent algorithm, but it may be slow when used with many variables, as it involves repeated matrix inversion. In this paper we describe a faster version of the standard algorithm. The increased speed comes, alas, at the expense of a slightly increased storage demand. \section{Gaussian graphical models} \label{sec:setting} \subsection{Likelihood equations for Gaussian graphical models} \label{sec:likel-equat-ggms} Let $X=(X_v, v\in V)$ be a $d$-dimensional random vector, i.e.\ $|V|=d$, normally distributed as $X \sim N_d(0, \Sigma)$. The focus is on the pattern of zeros in the inverse covariance matrix, i.e.\ in the concentration matrix $K=\Sigma\inv$. If $K_{uv}=0$ then $X_u$ and $X_v$ are conditionally independent given $X_{V\setminus \{u,v \}}$. The pattern of zeros in $K$ may be represented by an undirected graph $\graf=(V,E)$ with vertices $V$ and edges $E$. A Gaussian graphical model is then defined by demanding $K_{uv}=0$ unless there is an edge $uv\in E$. For further details, we refer to \citet[Ch.\ 4]{lauritzen:96}. Let $\graf=(V,E)$ be a graph and let $S$ denote the empirical covariance matrix obtained from a sample $X^1=x^1,\ldots, X^n=x^n$. The likelihood equations for estimating the covariance matrix $\Sigma$ in an undirected Gaussian graphical model are \citep[p.\ 133]{lauritzen:96}: \begin{eqnarray} \hat\sigma_{vv}-s_{vv}&=& 0\;\mbox{ for all $v\in V$}, \label{eq:diag} \\ \hat\sigma_{uv}-s_{uv}&=& 0\;\mbox{ for all $uv\in E$},\label{eq:edge}\\ \hat K_{uv}=(\hat \Sigma^{-1})_{uv}&=& 0\;\mbox{ for all $uv \not \in E$},\label{eq:slackness} \end{eqnarray} where the last equation represents the model restrictions. \subsection{Computational issues of updating margins} Iterative proportional scaling cycles through relevant margins $c\subseteq V$ of variables, updating the estimate by keeping the parameters of the conditional distribution $X_{V\setminus c}\cd X_c$ fixed, whereas the parameters of the marginal distribution of $X_c$ are updated to maximize the objective function under that restriction. The updates have the form \[f(x)\; \leftarrow \;f(x) \frac{f(x_c; S_c)}{f(x_c; \Sigma_c)}\] so that the densities are scaled proportionally, whence the name of the algorithm. The algorithm is provably convergent when started at a point satisfying the model restrictions if the likelihood function is bounded, i.e.\ if the maximum likelihood estimate exists \citep[Thm.\ 5.4]{lauritzen:96}. Let $c\subseteq V$ and $a=V\setminus c$ where $c$ is a complete subset of $V$ in $\graf$. The standard update for $c$ takes the form \citep[p.\ 134]{lauritzen:96} \begin{equation}\label{eq:kupdate1} K_{cc} \seteq \tildeK + L, \end{equation} whereas $K_{ac}, K_{aa}, K_{ca}$ are unchanged. There are essentially two alternatives for calculating $L$: \begin{eqnarray} L &=&K_{ca}(K_{aa})\inv K_{ac} \label{eq:Lupd1} \\ &=& K_{cc}-(\Sigma_{cc})\inv. \label{eq:Lupd2} \end{eqnarray} Calculating $L$ as in (\ref{eq:Lupd1}) gives what is referred to in this paper as the standard version and has the advantage that $\Sigma= K\inv$ is not needed, so inversion of $K$ is avoided and $\Sigma$ need not be stored.
This is efficient if $a$ is small and $c$ is large. Calculating $L$ as in (\ref{eq:Lupd2}) gives what is referred to in this paper as the fast version of the algorithm. Expression (\ref{eq:Lupd2}) has the advantage that $(K_{aa})\inv$ need not be computed; this computation could be expensive if $a$ is large. On the other hand, $\Sigma$ needs to be stored and calculated. The main contribution of this paper is that we show how to update $\Sigma$ along with $K$, avoiding computing $\Sigma$ by inversion of $K$. This makes expression (\ref{eq:Lupd2}) feasible to use in practice and speeds up the computation considerably, see Section~\ref{sec:study}. \subsection{Updating $\Sigma$ without inverting $K$} \label{sec:updat-sigma-with} There is a simple formula for updating $\Sigma$ without inverting the entire matrix $K$. This is based on the following formula given by \citet[eq.\ (3.6)]{harville:77} modifying what is known as Woodbury's matrix identity. If we let $\Delta$ denote the difference between the updated and old $K_{cc}$, i.e.\ \begin{equation}\label{eq:delta1} \Delta=\{ \tildeK +L\}- K_{cc}=\tildeK-(\Sigma_{cc})\inv, \end{equation} we can use Harville's expression \begin{eqnarray} (K + U\Delta U\transp)\inv &=& K\inv - K\inv U \Delta(I + U\transp K\inv U \Delta)\inv U\transp K\inv \nonumber \\ &=&\Sigma -\Sigma U \Delta(I + U\transp \Sigma U \Delta)\inv U\transp \Sigma \label{eq:harville1}. \end{eqnarray} This expression has the advantage of being suitable also when $\Delta$ is singular, in contrast to the original form of Woodbury's identity. For our purposes we want to use $U\transp=(I_c:0_a)$ and realize that then $U\Delta U\transp$ simply patches up $\Delta$ to a $V\times V$ matrix by inserting zeroes for all indices not in $c$, whereas $U\transp A U=A_{cc}$ picks out the $cc$ block of $A$. Letting $Q=I + U\transp \Sigma U \Delta$ and $H=\Delta Q\inv$ we may rewrite (\ref{eq:harville1}) as \begin{equation} \label{eq:harville2} (K + U\Delta U\transp)\inv = \Sigma -\Sigma U H U\transp \Sigma= \Sigma -(\Sigma U) H (\Sigma U)\transp. \end{equation} We may without loss of generality assume that $c$ corresponds to the first rows and columns of $K$ and $a$ to the remaining. Thus the updated $K$ in (\ref{eq:kupdate1}) is \begin{equation}\label{eq:kupdate3} K_{\textrm{updated}}=\begin{pmatrix}(K_{cc}+\Delta) &K_{ca}\\K_{ac}&K_{aa}\end{pmatrix}=K + U\Delta U\transp\end{equation} and therefore \begin{equation} \label{eq:Sigma_update} \Sigma_{\textrm{updated}}=(K + U\Delta U\transp)\inv =\Sigma -(\Sigma U) H (\Sigma U)\transp. \end{equation} Now realise that $\Sigma U=(\Sigma_{cc}:\Sigma_{ca})\transp$ and $H$ is equal to \begin{eqnarray} H &=&\Delta Q\inv = \Delta (I + U\transp \Sigma U \Delta)\inv = \Delta [I + \Sigma_{cc} \{\tildeK-(\Sigma_{cc})\inv\}]\inv \nonumber\\ &=& \Delta\{\Sigma_{cc}\tildeK\}\inv = \Delta S_{cc}(\Sigma_{cc})\inv \nonumber\\ &=&(\Sigma_{cc})\inv -(\Sigma_{cc})\inv S_{cc}(\Sigma_{cc})\inv. \label{eq:harville_red} \end{eqnarray} Let $|c|$ denote the number of elements in $c$. The advantage of updating $K$ and $\Sigma$ in this way is that (\ref{eq:delta1}), (\ref{eq:kupdate3}), and (\ref{eq:harville_red}) only involve $|c|\times |c|$ matrices and (\ref{eq:Sigma_update}) uses matrix multiplication involving no more than $O(|c||V|^2)$ operations. Compare this result with the standard update (\ref{eq:Lupd1}) which requires $O((|V|-|c|)^3)$ operations because a $(|V|-|c|) \times (|V|-|c|)$ matrix must be inverted. A minimal code sketch of the fast update is given below. 
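To make this concrete, the following is a minimal NumPy sketch of one fast update of a margin $c$. It is our own illustration with a hypothetical interface, not the implementation used in Section~\ref{sec:study}; the function name and calling convention are ours.
\begin{verbatim}
import numpy as np

def fast_ips_update(K, Sigma, S, c):
    # One fast update of margin c (a list of indices), modifying K and
    # Sigma in place. Only |c| x |c| matrices are inverted; Sigma is
    # corrected by a rank-|c| update costing O(|c||V|^2).
    cc = np.ix_(c, c)
    Scc_inv = np.linalg.inv(S[cc])          # may be precomputed per margin
    Sigma_cc_inv = np.linalg.inv(Sigma[cc])
    Delta = Scc_inv - Sigma_cc_inv          # Delta = tilde(K) - (Sigma_cc)^{-1}
    # H = (Sigma_cc)^{-1} - (Sigma_cc)^{-1} S_cc (Sigma_cc)^{-1}
    H = Sigma_cc_inv - Sigma_cc_inv @ S[cc] @ Sigma_cc_inv
    SU = Sigma[:, c]                        # Sigma U; fancy indexing copies
    K[cc] += Delta                          # update of the cc block of K
    Sigma -= SU @ H @ SU.T                  # update of Sigma
\end{verbatim}
In exact arithmetic the updated $\Sigma$ remains the inverse of the updated $K$, so a full inversion of $K$ is never needed.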
In particular, when $c=\{u,v\}$ has only two elements, the update goes from having complexity $O(|V|^3)$ for the standard version to just $O(|V|^2)$ for the fast version. \subsection{Updating the likelihood function} \label{sec:updat-value-likel} Consider again a sample $X^1=x^1, \dots, X^n=x^n$ where $X^\nu \sim N_d(0, \Sigma)$ and let $S$ denote the sample covariance matrix. The log-likelihood function (ignoring constants) is \begin{displaymath} \ell (K) = \frac n 2 \log \det(K) - \frac n 2 \trace(KS). \end{displaymath} The likelihood function can be updated as follows. We have $$ \det K_{\textrm{updated}} =\det\{K_{cc}+\Delta-K_{ca}(K_{aa})^{-1}K_{ac}\}\det K_{aa}= \det (S_{cc})^{-1}\det K_{aa}.$$ But $\det K = \det (\Sigma_{cc})^{-1} \det K_{aa}$. Hence, if we let $A= (\Sigma_{cc})^{-1}S_{cc}$ we have $$\log \det K_{\textrm{updated}} = \log\det K - \log \det A.$$ For the trace term we get from (\ref{eq:delta1}) \begin{eqnarray*}\trace(K_{\textrm{updated}}S)&=&\trace(KS)+\trace(\Delta S_{cc})\\&=&\trace(KS)+|c|-\trace\{(\Sigma_{cc})^{-1} S_{cc}\}=\trace(KS)+|c|-\trace(A)\end{eqnarray*} where $|c|=\trace\{(S_{cc})^{-1}S_{cc}\}$ is the size of the margin $c$. We thus get the expression $$ \ell (K_{\textrm{updated}})= \ell(K) -\frac n 2|c|-\frac n 2\log \det A+\frac n 2\trace (A) $$ and note that $A$ has dimension $|c|$, so the adjustment is easily calculated if $c$ is small. \subsection{Convergence of algorithms} \label{sec:precision-algorithms} Convergence of the algorithm can be assessed by investigating whether the likelihood equations (\ref{eq:diag}) and (\ref{eq:edge}) are satisfied within a small numerical threshold, since (\ref{eq:slackness}) remains exactly satisfied at all times. This requires that $\Sigma$ is available, which is the case for the fast but not the standard version. Commonly used but less stringent convergence criteria involve monitoring whether changes in the log-likelihood or changes in parameter values between successive iterations are small. However, such changes may be small even though the likelihood equations are far from being satisfied, so we warn against relying on these criteria. \subsection{Space and time considerations} We consider a Gaussian graphical model with graph $\graf=(V,E)$ having $|V|$ vertices and $|E|$ edges. \paragraph{Space needs: } If data are standardized so that each variable has zero mean and variance one, the only data needed are the $|E|$ pairwise empirical correlations $r_{uv}$ for all $uv\in E$. The only places where data are needed in the fast algorithm are in (\ref{eq:delta1}) and (\ref{eq:harville_red}). With respect to $K$, the corresponding $|E|$ entries $K_{uv}$ must be stored together with the $|V|$ diagonal entries $K_{uu}$. For the fast algorithm, also $\Sigma=K\inv$ must be stored. Even if $K$ is sparse, $\Sigma$ is typically not. Since $\Sigma$ is symmetric, $|V|(|V|+1)/2$ values must additionally be stored. This is the extra space requirement for the fast algorithm. A brute-force approach would be to store $S$, $K$ and $\Sigma$ without exploiting symmetry and sparseness. That would require storing $3 d^2$ numbers. To place these numbers in perspective, a model with $d=10,000$ variables would require about $2$ GB of storage for these three matrices. This is not a scary figure at the time of writing, when standard laptops come with at least $8$ GB of memory. \paragraph{Computing time: } Suppose the margin $c\subset V$ is to be updated. 
For both versions of the algorithm, $S_{cc}$ must be inverted, cf.\ (\ref{eq:delta1}), but this inversion needs to be done only once. The standard algorithm requires inversion of a $(|V|-|c|)\times (|V|-|c|)$ matrix in (\ref{eq:Lupd1}), so this step of the algorithm has complexity $O(|V|^3)$. The critical part of the fast version using (\ref{eq:Lupd2}) is the update of $\Sigma$ in (\ref{eq:harville2}), which has complexity $O(|c||V|^2)$. The computing time for a large margin $c$ will therefore be comparable for the two algorithms, while the fast version may have considerably higher speed when $c$ is small. \paragraph{Choice of margin:} For a given graph $\graf=(V,E)$, two default choices of margins $c\subset V$ to be updated are immediate. Take the sets $c$ to be the smallest complete subsets of $\graf$, i.e.\ the edges; or take $c$ to be the largest complete subsets of $\graf$, i.e.\ the cliques. Typically a graph will have considerably fewer cliques than edges. Fitting a clique has the advantage of fitting many parameters in one update, but that may come at a cost: An edge may be in several cliques, so after iterating over all cliques, the same edge may have been visited several times. On the other hand, looping over the edges only keeps all updates of low dimension. To this it must be added that finding the cliques of a general graph from the edges is an NP-complete task, and using edges in the updates avoids this problem. \section{Empirical study} \label{sec:study} \subsection{Implementation of the algorithms} Both versions of the algorithm have been implemented in R \citep[version 4.1.1]{r}. We have made an implementation based on C++ using the \pkg{RcppArmadillo} package \citep[version 0.10.6.0.0]{rcpparmadillo:14}. The implementation is naive in the sense that we store the full matrices $K$ and not just the non-zero elements. On the other hand, we store and calculate $S_{cc}$ and its inverse $(S_{cc})^{-1}$ for all relevant margins $c$ once and for all to use for updating $K$ in (\ref{eq:kupdate1}), so the empirical covariance matrix $S$ is itself not needed. Computations were performed on a standard laptop under Windows. \subsection{Comparing the fast and standard algorithms} \label{sec:increase-dens} We investigated the computing time for the fast and standard algorithm for random graphs of varying density as well as for a fixed grid. Model fitting was based on taking edges or cliques as margins. The algorithms were applied to a data set representing 102 samples of the expression of 6033 genes associated with prostate cancer, originating from \cite{prostate_data} and published in the R package \pkg{spls} \citep{spls}. In addition, artificial data sets were produced with 102 samples of the relevant number of variables, all entries simulated from the standard normal $N(0,1)$ distribution. In all cases the algorithm was run until the likelihood equations were satisfied with an average absolute error less than $10^{-4}$. The median computing time in milliseconds is displayed in Table~\ref{tab:increase-dens}. 
\begin{table}[h] \def~{\hphantom{0}} \tbl{ Median computing time in milliseconds for the standard and fast version, applied cliquewise or edgewise to twenty random graphs on 48 vertices and varying edge densities} \begin{tabular}{ccccccccc} \\ &\multicolumn{4}{c}{Prostate data}&\multicolumn{4}{c}{Simulated data}\\ &\multicolumn{2}{c}{Cliquewise}&\multicolumn{2}{c}{Edgewise}&\multicolumn{2}{c}{Cliquewise}&\multicolumn{2}{c}{Edgewise}\\ Density&Fast&Standard&Fast&Standard&Fast&Standard&Fast&Standard\\ 10\%&2&20&4 &31&0&8&0&10\\ 30\%&17&82&50 &465&7&20&8&51\\ 50\%&62&175&219&2,190&34&80&19&136\\ 70\%&597&809&908&9,281&430&593&41&337 \end{tabular}} \label{tab:increase-dens} \end{table} The general picture in Table~\ref{tab:increase-dens} is that the fast version is about five to ten times faster than the standard algorithm when models are updated edgewise. This effect is less obvious when fitting a dense model by cliquewise updating. There are at least three reasons for this: Firstly, for sparse graphs many cliques would be pairs and hence there is little difference between edgewise and cliquewise updating. Secondly, when the model is dense, the cliques will be relatively large, so updating $\Sigma$ as in (\ref{eq:Sigma_update}) and (\ref{eq:harville_red}) will be time consuming. Finally, a random dense graph will typically have many large cliques sharing many variables. This means that the same edges are updated several times during each iteration. Computing times seem to be systematically shorter when the algorithms are applied to the simulated data. This is most likely a reflection of the fact that the empirical covariance matrix would tend to fit any of the models investigated and therefore be closer to the final estimate from the outset, hence demanding fewer iterations in the fitting procedure. In any case, at this size, even the slowest of the algorithms converges in less than ten seconds for the prostate data. \subsection{Comparison with the graphical lasso} \label{sec:comp-with-graph} As a second experiment, we compared edgewise fast iterative proportional scaling to the popular graphical lasso \citep{yuan:lin:07,banerjee:etal:08,friedman:etal:08}. The graphical lasso procedure maximizes a penalized log-likelihood $$\ell_{\mbox{pen}}(K)=\log\det K -\trace(KS) - \trace(\Lambda|K|)$$ where $|K|_{uv}=|K_{uv}|$ and $\Lambda$ is a symmetric penalty matrix with non-negative elements, possibly combined with a pre-specification of zero elements in $K$. The R package \pkg{glasso} implements the algorithm \citep{glasso:19}. If $\Lambda=0$ there is no penalty and hence the graphical lasso provides an alternative to iterative proportional scaling for maximizing the likelihood. It is therefore relevant to compare computing time to that of \pkg{glasso} applied without penalty. The algorithms are not fully comparable because of differences in convergence criteria for the implementation of the algorithms. The documentation of \pkg{glasso} states that ``Iterations stop when average absolute parameter change is less than a threshold times the average of the off-diagonal elements of the empirical covariance matrix''. This is potentially dangerous as the algorithm in principle could begin to move slowly before convergence has been achieved. In contrast, our default convergence criterion is to stop when the average absolute difference between $s_{uv}$ and $\hat\sigma_{uv}$ is smaller than a threshold, i.e.\ when the likelihood equations are satisfied. 
Here $uv$ refers to an edge if $u\neq v$ and a vertex if $u=v$; that is, we stop when the likelihood equations (\ref{eq:diag}) and (\ref{eq:edge}) are solved. To ensure that results were broadly comparable, we checked at the end of the iterations that the values of the maximized log-likelihood functions were identical up to the first decimal point. We used the same data as in the previous section for comparison, and the results are displayed in Table~\ref{tab:fips_glasso}. \begin{table}[h] \def~{\hphantom{0}} \tbl{ Median computing time in milliseconds for edgewise fast iterative proportional scaling and the graphical lasso over twenty random graphs} \begin{tabular}{ccccccccc}\\ &\multicolumn{4}{c}{Prostate data}&\multicolumn{4}{c}{Simulated data}\\ &\multicolumn{2}{c}{48 variables}&\multicolumn{2}{c}{96 variables}&\multicolumn{2}{c}{48 variables}&\multicolumn{2}{c}{96 variables}\\ Density&Fast scaling&\pkg{glasso}&Fast scaling &\pkg{glasso}&Fast scaling&\pkg{glasso}&Fast scaling &\pkg{glasso}\\ 10\%&2&9&134&57&0 &0&22&20\\ 30\%&51&14&2,773 &448&8&7 &126&40\\ 50\%&248&44&28,397&2,569&20& 9&421 &67\\ 70\%&997& 91&156,804& 15,301&47 &12&1,574 &116\\ \end{tabular}} \label{tab:fips_glasso} \end{table} From Table~\ref{tab:fips_glasso} we note that the speed of the fast scaling algorithm applied edgewise is comparable to the graphical lasso for edge densities less than 30\%. Also here we see a marked difference between the speed of the algorithm as applied to the prostate data and to the simulated data, the latter being much faster, probably for the same reasons as mentioned above. When graphs get dense, the edgewise updating can become computationally expensive, although even at a density of 70\%, the edgewise fast scaling converges in less than three minutes with 96 variables. For random graphs with more than 100 vertices, experiments become complicated as there is a risk that the maximum likelihood estimate does not exist. So to extend the above comparisons to larger scale and a higher degree of sparsity, we investigate the behaviour for grid graphs. For the grid graph, the estimate exists with probability one with just three observations \citep[Corollary 3.8]{gross:sullivant:18}, and hence 102 observations are plenty. For the grid graph there is no difference in updating edgewise or cliquewise, as all cliques consist of exactly one edge. Computing times for the fast scaling algorithm and various grid sizes are displayed in Table~\ref{tab:gridexp}. \begin{table}[h] \def~{\hphantom{0}} \tbl{ Computing time in milliseconds for fast iterative proportional scaling and the graphical lasso over a regular grid } \begin{tabular}{cccccccc}\\ &&&&\multicolumn{2}{c}{Prostate data}&\multicolumn{2}{c}{Simulated data}\\ Grid size&\# of variables&\# of edges&Density&Fast scaling&\pkg{glasso}&Fast scaling&\pkg{glasso}\\ $12\times 8$&96&172&3.8\%&18&64&11&9\\ $12\times 16$ &192 &356&1.9\%&109 &825 &42 &81\\ $24\times 16$& 384&728&1\%&1,209&8,107&236&494\\ $24\times 32$&768&1,480&0.5\%&7,431&82,396&2,129&4,052\\ $48\times 32$&1,536&2,992&0.25\%&68,126&884,388&18,390&34,873 \end{tabular}} \label{tab:gridexp} \end{table} Table~\ref{tab:gridexp} indicates that the fast scaling algorithm is faster than the graphical lasso at this level of sparsity; it fits the model of a $48\times32$ grid to the prostate data in about a minute. Again, the speed is higher for the simulated data, for the same reasons as given above. 
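For concreteness, the edgewise procedure with the convergence criterion described above can be sketched as follows, assuming the function \texttt{fast\_ips\_update} from the sketch in Section~\ref{sec:updat-sigma-with} is available; again, this is a simplified illustration, not the R/C++ implementation used in the experiments.
\begin{verbatim}
def fit_edgewise(S, edges, tol=1e-4, max_cycles=1000):
    # Edgewise fast iterative proportional scaling, started at the
    # identity, which satisfies the model restrictions. Vertices not
    # covered by any edge would need additional singleton updates.
    d = S.shape[0]
    K, Sigma = np.eye(d), np.eye(d)
    for _ in range(max_cycles):
        for (u, v) in edges:
            fast_ips_update(K, Sigma, S, [u, v])
        # stop when the likelihood equations hold on average
        err  = [abs(Sigma[v, v] - S[v, v]) for v in range(d)]
        err += [abs(Sigma[u, v] - S[u, v]) for (u, v) in edges]
        if sum(err) / len(err) < tol:
            break
    return K, Sigma
\end{verbatim}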
\section{Discussion} \label{sec:discussion} We have described a fast version of iterative proportional scaling for fitting Gaussian graphical models. The increase in speed compared to the standard algorithm is particularly remarkable when graphs are sparse. An empirical study indicates that the fast scaling algorithm matches or outperforms the graphical lasso in terms of speed when applied edgewise to large, sparse graphs. In addition, identification of graph cliques is avoided; this could be problematic for the standard scaling algorithm, as clique identification is an NP-complete problem. The \pkg{golazo} algorithm developed recently in \cite{lauritzen2021locally} provides another promising alternative for fitting graphical models, but this algorithm demands a feasible starting point, which cannot easily be found when the number of observations is smaller than the number of variables. We also note that the fast scaling algorithm may easily be modified when used edgewise to add convex penalties and positivity constraints, as described in \cite{LUZ}. We refrain from providing the details here. \input{arxiv_fips.bbl} \end{document}
\section{Introduction} \label{sec:intro} Socially assistive robots (SARs), which assist humans mentally and physically via social interaction with them, have proven to be very successful in boosting the outcomes of therapeutic and educational assistance for humans \citep{Feil-Seifer2005,Tapus2008,Ros2019a,Scassellati2018ImprovingRobot,Kidd2008RobotsInteraction}. The majority of SAR applications, however, involve short-term interactions (i.e., one or very few therapeutic or educational sessions that overall last less than a month) with humans that exclude in-depth analysis and understanding of every human's specific cognitive procedures and fixed personality, behavioral, and decision making traits \citep{Leite2013}. The more prominent the presence of intelligent machines, including SARs, becomes in our lives, the more essential it becomes for machines to sustain long-term, meaningful, and engaging interactions with humans by accounting for their personal cognitive dynamics. State-of-the-art SARs currently face serious challenges regarding maintaining such long-term engaging interactions with humans \citep{Kidd2008RobotsInteraction,Scassellati2018ImprovingRobot,Leite2013,Ros2019a}. In particular, the rudimentary nature of the social skills (including understanding and analysis of the humans they interact with) displayed by SARs in their interactions with humans significantly degrades the effectiveness and engagement level of these interactions for humans \citep{Leite2013, Scassellati2018ImprovingRobot, Kidd2008RobotsInteraction}. Although some behavioural, non-verbal cues (e.g., joint-attention\footnote{Joint-attention is a non-verbal skill exhibited by social agents. It implies that an agent draws the attention of another agent to an object by looking or pointing at the object.} \citep{Scassellati2018ImprovingRobot}, eye contact \citep{Kidd2008RobotsInteraction}, facial expressions \citep{Ros2019a}) have been implemented for SARs, due to a lack of deep cognitive understanding of humans, these robots often fail to recognise \emph{when} in an interaction each cue is best displayed \citep{Leite2013}. Additionally, while personalising the SAR's behavior with respect to every human \citep{Tapus2008, Ros2019a, Scassellati2018ImprovingRobot} is crucial for sustaining long-term meaningful interactions, due to the missing cognitive models of humans, personalisation of SARs so far has been task-specific and unsystematic. More specifically, personalisation has mainly been simplified to the SAR learning interactive behaviours that maximise the score or performance of humans in specific therapeutic or educational tasks, rather than to explicitly learning about the personal behaviours and characteristics of a human. Such task-oriented decision making approaches assigned to SARs result in interactions that are perceived as less natural, attractive, and engaging for humans, especially in the long term. In order to act as similarly as possible to humans, SARs need to understand the dynamics of human cognition and interactions \citep{Mataric2016,Leite2013}. So far, the control approaches used to steer the behaviour of SARs are mostly based on model-free methods, e.g., reinforcement learning \citep{Tapus2008,Ros2019a} and model-free rule-based decision making \citep{Scassellati2018ImprovingRobot,Kidd2008RobotsInteraction}. 
According to the theory of mind (\ac{ToM}) \citep{Scassellati2002}, the key to success in humans' interactions is their capability of (partially) modelling and being aware of the cognitive procedures and state-of-mind of other humans, and of making decisions according to such models. Ignoring this capability is a main bottleneck for developing intelligent machines, e.g., SARs, that can act and interact as closely as possible to humans. Therefore, this paper is focused on developing such cognitive models for analysis and decision making of intelligent machines. The rest of the paper is structured as follows. \autoref{sec:previous_Work} gives an overview of previous work on proposed cognitive models of humans. \autoref{sec:contributions} formulates the main contributions of the paper. \autoref{sec:agent-model} presents the formalisation and network representation of human-like cognitive procedures. \autoref{section:model_formulation} formulates the proposed cognitive models mathematically and according to a new extended version of fuzzy cognitive maps. In \autoref{sec:model_implementation} the proposed cognitive models are implemented and assessed via real-life experiments with $15$ volunteer human participants. Finally, \autoref{sec:conclusions} concludes the paper and presents topics for future research. \section{Previous Work} \label{sec:previous_Work} Understanding and representing the cognitive procedures involved in human interactions as well as in the individual cognitive dynamics of every person are of utmost importance for both cognitive psychology and cognitive computing (see \cite{Guest2021,computational_modeling_cognition_and_behavior,Mareschal2007,computational_modeling_in_cognition,Scassellati2018ImprovingRobot,Kidd2008RobotsInteraction}). According to the theory of mind (ToM) \cite{Scassellati2002}, the key to success in interactions of humans is the human's capability in modelling the cognitive procedures and state-of-mind of other humans and making decisions according to such models. State-of-the-art cognitive computing approaches, however, fail to maintain meaningful interactions with humans due to a lack of awareness of the dynamic procedures in the cognition of humans and the personalised aspects of these procedures (see \cite{Feil-Seifer2005,Tapus2008,Ros2019a,Leite2013}). Therefore, this paper is focused on developing such cognitive models for analysis and decision making in cognitive computing. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Figures/Baker2012-recreated} \caption{Cognitive model proposed by Baker in \cite{Baker2012} for rational agents: The model explains the behaviour of rational agents based on the principles of rational action and rational belief.} \label{fig:Baker2012-PhD} \end{figure} Theory of mind (ToM) has been used to develop computational frameworks based on the principles of rational belief and rational action, i.e., assuming that according to their \emph{rational} beliefs and goals, humans act such that they maximise their outcomes/minimise their losses \cite{Dennett1987TheStance, Baker2011, Jara-Ettinger2016ThePsychology, Saxe2017}. Baker in \cite{Baker2012} proposes and implements a Bayesian ToM model based on the network representation given in \autoref{fig:Baker2012-PhD}. 
Using partially observable Markov decision processes and Bayesian inference, the model proposed in \cite{Baker2012} makes forward and inverse inferences of the actions and the beliefs and goals of rational agents assuming that the principles of rational belief and rational action hold. The model was implemented in experiments where rational agents moved in two-dimensional spaces and the results exhibited close similarities with the inferences made by humans. These results also acknowledged that goals and beliefs should be inferred simultaneously as independent variables. However, the experiments are significantly simpler than real-life situations that involve interactions of humans. For instance, the model and thus the experimental scenarios in \cite{Baker2012} do not include the influence of varying emotions and distinct personality traits of humans in their cognitive procedures. Additionally, although the model in \cite{Baker2012} recognises the influence of general world knowledge and general preferences on beliefs and goals (see \autoref{fig:Baker2012-PhD}), these influences are not included in the experiments. Finally, the principles of rational action and rational belief are idealised conceptions that ignore personal aspects of decision making procedures of humans, and do not account for biases that frequently occur in their cognition. Although the model proposed in \cite{Baker2012} offers a highly promising framework for computational representation of ToM, to the best of our knowledge, there has been no follow-up research on extension of the model to cover more realistic and complex cognitive procedures of humans. Thus the model shown in \autoref{fig:Baker2012-PhD} is our main inspiration for proposing a more precise and realistic cognitive model of humans.% \section{Main Aims and Contributions of the Paper} \label{sec:contributions} In this paper we develop mathematical cognitive models that estimate the state-of-mind of humans and predict their behavior. The main contributions of this paper include: \begin{itemize} \item A precise formalisation of human's cognitive procedures, based on comprehensive realistic examples, represented as a network, which in addition to beliefs and goals includes dynamic emotions, and systematically incorporates the influence of general world knowledge, general preferences, and personality traits of rational agents. These latter three elements play an important role in personalisation and precision of the resulting cognitive model. % \item Including the influence of biases, which may be boosted by emotions, personality traits, and goals, instead of proposing a universal cognitive representation based on the principle of rational belief, as well as defining a new agent-specific concept called \emph{rational action selection} instead of considering the universal principle of rational action. \item Proposing an extended version of fuzzy cognitive maps, in order to formulate mathematical models for the proposed network representation within a standard state-space framework. A main advantage of this mathematical representation is that the resulting cognitive models can directly be embedded within various existing model-based control approaches (e.g., model predictive control) to steer the decision making and interactions of cognitive intelligent machines. \end{itemize} The resulting cognitive models are identified and personalised for $15$ human participants and are implemented for realistic scenarios for the participants and for simulated observed agents. 
The results of these experiments show that the proposed cognitive models precisely estimate and predict the current and future state-of-mind and behaviors in a personalised user-specific way. The proposed cognitive models will contribute to the research on \ac{ToM}, and to developing analyses and model-based control approaches that yield long-term engaging human-machine interactions and more realistic \emph{human-like} machine behaviors and machine-machine interactions. \section{Cognitive Models of Observed Agents: Formalisation} \label{sec:agent-model} In the rest of the paper, a rational\footnote{Note that although we use the term \emph{rational} agent, we mainly refer to cognitive procedures that follow some realistic level of rationalism that may vary from agent to agent. However, we do not limit our discussions to idealistic cases where rationality implies maximising the outcomes (or minimising the losses).} agent that makes inference about the state-of-mind and actions or behaviours of another rational agent is called an \emph{observer agent}. The other rational agent is called an \emph{observed agent}. In the following discussions we formalise cognitive models that describe how an observer agent makes inference about an observed agent.% The state-of-mind and thus actions and behaviours of an observed agent are influenced by its fixed (more precisely, invariant over longer terms) characteristics including the agent's general world knowledge, general preferences, and personality traits, as well as by its dynamic (more precisely, varying more frequently in time) inner state variables including beliefs, goals, and emotions. Moreover, the environmental data of an observed agent is perceived by the agent in a personalised way. Next we explain how the proposed cognitive models incorporate these characteristics.% \subsection{Variables and Dynamics of Cognitive Models} \label{subsec:variables-description} We propose a network representation (see \autoref{fig:model3-with-emotions}) of human's cognitive procedures that is composed of elements connected via directed links, which represent the inter-dependencies and influences of these elements. \autoref{app:examples} gives several representative examples based on real-life scenarios used to develop the network representation, which will be discussed in detail. The main elements of the network shown in \autoref{fig:model3-with-emotions} correspond to: \begin{enumerate} \item External (uncontrolled) inputs to the cognitive model, such as environmental factors (e.g., weather conditions) that may influence the state-of-mind and thus actions and behaviours of observed agents. \item State variables of the cognitive model, including the state-of-mind variables (beliefs, goals, emotions) of an observed agent. \item Fixed parameters of the cognitive model, including general world knowledge, general preferences, and personality traits of observed agents. \item Dynamic processes, which are functions that receive the fixed parameters and current external inputs and state variables and update the next-step state variables of the observed agent or predict its behaviours. \end{enumerate} \subsubsection{Elements internal and external to an observed agent} \begin{figure} \centering \includegraphics[width=\textwidth]{Figures/Model-4-with-bias.eps} \caption{Proposed network representation of human's cognitive procedures including emotions, personality traits, and biases. 
Oval-shaped elements show (input, output, and state) variables and rectangular elements correspond to processes or functions.} \label{fig:model3-with-emotions} \end{figure} \begin{comment} \begin{figure} \includegraphics[width=0.8\textwidth]{Figures/Model-2-with-observation-and-reasoning.eps} \caption{Proposed cognitive model incorporating input and output variables, unobservable state variables, \textit{personalised} perception and reasoning and rational action selection, and their inter-dependencies and connections. Variables and processes are represented by, respectively, ovals and rectangles. Internal elements (unobservable for an observer agent), external elements (observable for an observer agent), and partially external elements (partially observable for an observer agent) are illustrated inside, outside, and on the border of the \textit{agent} box. } \label{fig:model2} \end{figure} \end{comment} While \cite{Baker2012} distinguishes the elements in the cognitive network representation of \autoref{fig:Baker2012-PhD} based on whether or not they depend on situations, we differentiate the elements of our proposed cognitive network representation (\autoref{fig:model3-with-emotions}) considering whether or not they are external to an observed agent. The main advantages of our proposed approach include: \begin{itemize} \item The developed mathematical cognitive models correspond to the existing standard (e.g., state space) frameworks of mathematical modelling and systems theory, allowing them to be used directly with various model-based decision making approaches. \item This categorisation allows us to personalise the rationalisation and action selection per observed agent. \item Since the internal elements of an observed agent are not observable for the observer agent, their inference is in general personalised to an observer agent (although out of the scope of this paper, our framework is then easily expandable for cases where second-order inferences, i.e., inference about the inference of an observer agent about the observed agent \cite{Baker2008}, are of interest). \end{itemize} In addition to fully internal (i.e., unobservable) and fully external (i.e., observable) elements, our representation can incorporate partially external (i.e., partially observable) elements. In particular, the perceptual access of an observed agent is partially external, given that this process is influenced by external inputs, and is partially internal, since it is shaped by internal characteristics of the individual. This dual influence on perceptual access is explained in detail in \autoref{subsec:observation-reasoning}. \subsubsection{Fast-dynamics and slow-dynamics state variables} The state variables of the proposed cognitive model are distinguished according to their relevance for the duration (i.e., short-term or long-term) of interactions between two rational agents and according to the frequency of their dynamics. Consequently, two categories of state variables are defined: (i) Fast-dynamics state variables, which may constantly vary (with a time scale in the range of seconds or minutes) as a response to specific situations the observed agent faces. Goals, beliefs, and emotions in \autoref{fig:model3-with-emotions} are fast-dynamics state variables. (ii) Slow-dynamics state variables, which vary on large time scales (months or years). General world knowledge, general preferences, and personality traits in \autoref{fig:model3-with-emotions} are slow-dynamics state variables. 
Fast-dynamics state variables are more relevant for short-term interactions, while slow-dynamics state variables become more relevant throughout long-term interactions, when the fixed or repetitive patterns of cognitive procedures resulting from these slow-dynamics state variables provide extra information for the observer agent to make more precise estimates and predictions. Goals are immediate desires and needs of rational agents, such as finding food or reaching a location. General preferences of rational agents build up over long periods and remain invariant for a long time, and in order to identify them several interactions with a rational agent are needed. Examples of general preferences include favourite tastes, friends, and hobbies of a rational agent. Beliefs correspond to \emph{temporary} knowledge or interpretations of rational agents from their world, while general world knowledge consists of \emph{persistent} rationally perceived knowledge, which remains unchanged or is rarely updated. For example, a rational agent believes that a friend who left an hour ago to fetch a medicine from the drugstore is now in the city center, whereas the exact location of the drugstore is the agent's general world knowledge. Next we explain these elements and their mutual influences with respect to the other elements in the proposed cognitive network representation.% \begin{remark} \label{remark:slow-dynamics-not-changed} Slow-dynamics state variables may influence the evolution of fast-dynamics state variables (see \autoref{example:belief_guess}-\autoref{example:belief_personalitytraits} in \autoref{app:examples}), while the opposite is not necessarily true (especially in the short term). The main aim of this paper is to formalise and formulate the evolution of fast-dynamics state variables. Modelling the evolution of slow-dynamics state variables is out of the scope of this paper. Thus slow-dynamics state variables are mainly considered as fixed parameters in the proposed cognitive models. \end{remark} \subsubsection{Emotions and personality traits} \label{subsec:emotions-and-personality-traits} In interactions between rational agents, identifying the emotions of observed agents is of utmost importance to enable observer agents to make an overall inference about the state-of-mind of observed agents \citep{Kwon2008, Saxe2017} and to select accordingly the most appropriate behaviour \citep{Tapus2008,Zaki2013, Lee2019}. Moreover, the personality traits of observed agents act as regulators of their emotions \citep{Bono2007PersonalitySelf-monitoring}. Therefore, by incorporating both the emotions and personality traits in cognitive models, more genuine, engaging, and human-like interactions will be possible for machines that use these models \citep{Tapus2008,Leite2013}. Personality traits are slow-dynamics state variables; therefore we are interested in understanding their potential influences on the fast-dynamics state variables. Beliefs may (indirectly) be affected by personality traits through generated biases. Goals may directly be affected by personality traits (see \autoref{fig:model3-with-emotions}): for instance, while the goal of an introvert rational agent is to avoid strangers, the goal of an extrovert rational agent is to make new friends. 
Finally, emotions are influenced directly by the personality traits; this influence, together with the potential influences of other state variables, is discussed next. \paragraph{State variables that influence emotions:} Emotions of a rational agent may be stimulated by its beliefs that are generated by the rationally perceived knowledge (see \autoref{fig:model3-with-emotions} and \autoref{example:belief_generating_emotions} in \autoref{app:examples}). Note that emotions are not directly generated by external inputs (i.e., real-life data received by a rational agent), but by how the rational agent internalises and interprets these inputs. \begin{comment} \begin{example} \label{example:belief_generating_emotions} While walking on the street, Elisa's wallet falls out of her purse (\textit{real-life data}). Later on in a shop Elisa reaches for her wallet and realises that it is not in her purse (\textit{perceptual access}). She reasons that she has lost the wallet (\textit{rationally perceived knowledge}). She then supposes that she has lost her wallet (\textit{inference of a belief based on the rationally perceived knowledge}). This belief makes her anxious (\textit{stimulation of emotions}). \end{example} In the given example, before Elisa notices that her wallet is missing (i.e., without \textit{perceptual access}) and reasons that she has lost it (i.e., without \textit{rational reasoning}), she was not anxious (no stimulation of \textit{emotions}). In a different situation, for the same perceptual access that causes the same perceived data, i.e., a missing wallet, Elisa may reason and believe that she has left her wallet on the dining table at home (\textit{different rational reasoning and hence different rationally perceived knowledge}). Therefore, Elisa will not be anxious (no stimulated \textit{emotions}). In summary, independent of what the real-life data is (e.g., the wallet has fallen on the street or is at home) the emotions of a rational agent may be moderated by the perceptual access of the agent to that data and by the reasoning it applies to the perceived data. In other words, the emotions of a rational agent depend on its beliefs rather than on real-life data directly.% \end{comment} Both goals and general preferences, alongside a belief, can impact emotions (see \autoref{fig:model3-with-emotions}). On the one hand, when a rational agent follows a goal and develops a belief that is in line with the fulfilment of that goal, positive emotions may be stimulated. On the other hand, when a rational agent follows a goal and develops a belief that hinders the chances of fulfilling that goal, negative emotions may be stimulated. Similarly, when a general preference is supported by a developed belief, positive emotions can be generated, while beliefs that conflict with the general preferences may result in negative emotions (see \autoref{example:goal_to_emotion} in \autoref{app:examples}). General preferences mostly influence the emotions of a rational agent indirectly, via generating a goal that, alongside a belief, stimulates emotions (see \autoref{example:general_preference_to_emotions} in \autoref{app:examples}), but in some cases the influence is direct (see \autoref{fig:model3-with-emotions}). \begin{comment} The next two examples show, respectively, the effect of goals and general preferences on the emotions \begin{example} \label{example:goal_to_emotion} Frank is exploring a new city for the first time and wants to buy an ice cream (\textit{goal}). 
While walking he notices a few people across the street who are eating ice cream (\textit{perceived data}). Correspondingly, he reasons and believes that there should be an ice cream shop close by (\textit{rationally perceived knowledge} transformed into a \emph{belief}), which makes him feel satisfied (stimulated \textit{emotions}) \end{example} This example shows a case where a belief by itself does not stimulate emotions, but the belief together with a goal does. In other words, if Frank did not want to eat an ice cream, the belief that an ice cream shop is nearby would not influence his emotional status. The next example shows a case where general preferences alongside beliefs directly stimulate emotions. \begin{example} \label{example:general_preference_to_emotions} Grace is afraid of dogs (\textit{general preference}). While walking in a park, she notices the footprints of a dog (\textit{perceptual access}) and correspondingly reasons and believes that there should be a dog nearby (\textit{rationally perceived knowledge} transformed into a \emph{belief}). This belief makes her anxious (stimulated \textit{emotion}) \end{example} \end{comment} Any direct influence from the general world knowledge on the emotions is negligible. In practice, general world knowledge is transformed into beliefs, which influence the emotions as discussed before. Personality traits of rational agents may determine the extent to which certain beliefs affect their emotions: for instance, while both an introvert and an extrovert rational agent become happy when receiving a birthday present, the extrovert one experiences more excitement. Personality traits do not generate emotions by themselves, which is in line with the fact that emotions are fast-dynamics state variables that are temporary and event-triggered, whereas personality traits constantly exist. In other words, if personality traits directly generated emotions, a rational agent would have to experience those emotions continuously. Personality traits instead boost or hinder the emotions and may be seen as regulators of the emotions. \begin{remark} \label{remark:emotion-triggers} In summary, beliefs (either alone or supported by generated goals or by general preferences), boosted or hindered by personality traits, trigger the emotions. For the sake of brevity of the notations, we use the following terminology: \textbf{emotion trigger 1} for a combination of beliefs and general preferences, \textbf{emotion trigger 2} for solely beliefs, and \textbf{emotion trigger 3} for a combination of beliefs and goals. \end{remark} \paragraph{State variables that are influenced by emotions:} Studies show that emotions can affect the immediate goals and desires\footnote{In this paper, the concepts of goals, desires, wishes, and needs of a rational agent are used interchangeably.} of rational agents \citep{Raghunathan1999, Andrade2009, Lerner2015, George2016}. More specifically, emotions may result in the development of a goal that contradicts the general preferences of a rational agent or in the change of a goal that was previously made by the rational agent. For instance, gratitude can galvanise rational agents into helping others \citep{Lerner2015}, or anxiety may trigger rational agents to avoid stressful situations \citep{Raghunathan1999}. The influence of emotions over goals is introduced into the proposed cognitive model shown in \autoref{fig:model3-with-emotions} via a directed link (also see \autoref{example:emotions_to_change_goals} in \autoref{app:examples}). 
\begin{comment} The following example illustrates the influence of emotions on the developed goals of a rational agent.% \begin{example} \label{example:emotions_to_change_goals} Hailey has planned to go to a party tonight (original \textit{goal}). In the afternoon, she receives bad news that make her sad (stimulated \textit{emotion}). As a consequence, she decides not to go to the party anymore (\textit{change in the goal due to the emotions}).% \end{example} This example shows how triggered emotions can affect an already developed goal of a rational agent. Similarly, if Hailey's general preference is to participate in social events, but just before she hears about the party she gets upset by some bad news, she may make a goal (i.e., skipping the party) that contradicts her general preferences.% \end{comment} While emotions do not directly influence beliefs, they can affect the processes that result in judgements or beliefs of rational agents \citep{Raghunathan1999, Andrade2009}. More specifically, positive emotions may introduce optimistic biases into the process of generation of new beliefs, whereas negative emotions may lead to the formation of overly pessimistic beliefs \citep{Lerner2015}. Therefore, the influence of emotions on the development of beliefs will be introduced into the proposed cognitive model. \subsubsection{Perceptual access and rational reasoning} \label{subsec:observation-reasoning} General world knowledge and beliefs are acquired by rational agents through the same procedures. In the model proposed in \cite{Baker2012} these procedures are represented as a single element called the principle of rational belief (see \autoref{fig:Baker2012-PhD}). This simplification was shown to be sufficient to explain the relationship between the environmental inputs and the inferred beliefs in the simple environments and scenarios considered in \cite{Baker2012}. In real-life scenarios, however, a more complicated procedure occurs before a belief or a piece of general world knowledge is developed based on the raw real-life data. Data that is perceived by rational agents may differ from the real-life data. More specifically, rational agents may deliberately or inadvertently access and perceive only a portion of the real-life data in each interaction with their environment. On the one hand, perception depends on rational agents, i.e., when located in the same environment different rational agents may notice different types of data (e.g., one may perceive a sound that is inadvertently heard while another agent filters the sound out). On the other hand, rational agents may receive only partial real-life data due to external factors (e.g., missing visual data due to occlusion). Therefore, rational agents may hold false or inaccurate beliefs (cf.\ the Sally-Anne experiment \cite{Baron-Cohen1985Does}), which is essential for observer agents to recognise \cite{Wellman2001Meta-analysisBelief,Rabinowitz2018}.% To address these aspects, the process that transforms real-life data into beliefs and general world knowledge is decomposed into smaller, well-defined sub-processes: \emph{perceptual access} and \emph{rational reasoning} (see \autoref{fig:model3-with-emotions}). Real-life data from the environment is perceived via \emph{perceptual access}, which, as explained above, is partially personalised and partially depends on the environment. Thus the corresponding rectangular element in \autoref{fig:model3-with-emotions} is located at the border of the agent box. 
The perceived data is then processed via \emph{rational reasoning}, which, as opposed to the principle of rational belief applied in \cite{Baker2012}, is specific to a rational agent. Accordingly the rational agent makes a judgement, i.e., the \emph{rationally perceived knowledge} (see \autoref{fig:model3-with-emotions}), which may be transformed into a belief or general world knowledge by the rational agent. In particular, \autoref{example:personalized_perception} and \autoref{example:decomposing_rational_perception} in \autoref{app:examples} illustrate the importance of personalising the perception procedure and of decomposing the procedure of developing rationally perceived knowledge into the proposed sub-procedures.% \subsection{Inverse Inference of Emotions from Actions} \label{subsec:emotion-inference-from-actions} \begin{figure} \centering \includegraphics[width=\textwidth]{Figures/Emotions-Inference-from-action-model.eps} \caption{Inference of emotions from actions: Whenever emotions exist, the observed actions of a rational agent may differ from those that are predicted based on cognitive models that include beliefs and goals only (left-hand side plots). In such cases, the agent's goal or belief (top right and bottom right plots respectively) or both have been influenced by the agent's emotions.} \label{fig:emotion-inference} \end{figure} According to the principle of rational action, actions of rational agents are a direct consequence of their beliefs and goals \cite{Dennett1987TheStance, Baker2011, Jara-Ettinger2016ThePsychology, Saxe2017}. The authors in \cite{Baker2007, Baker2011} discuss inverse inference of beliefs and goals from actions. By introducing two new elements, emotion and bias, and by changing the universal principle of rational action to rational action selection in our proposed cognitive models, we must take into account the influence of these elements on the observed actions and also the inverse inference of these elements from the observed actions. Contrary to beliefs and goals, emotions do not directly generate actions, although they contribute to the generation of goals and beliefs and thus indirectly to the observed actions of rational agents. In particular, estimation of the emotions (and thus biases) based on observed actions is done via our proposed model running alongside a simplified version of this model that considers the belief-goal pair only (see \autoref{fig:emotion-inference}). Observing an action different from the action predicted by such a model implies that the belief, the goal, or both have been different from those considered by the cognitive model. This means that a belief (see \autoref{remark:emotion-triggers}) from one or more simulation steps earlier has triggered emotions that have resulted in goals and beliefs for the current simulation step different from those estimated by the simplified model (see \autoref{example:inverse_inference_belief} and \autoref{example:inverse_inference_goal} in \autoref{app:examples}). Thus for inverse inference of the emotions, knowing or inferring the beliefs and goals of an observed agent for at least two simulation steps is needed. A forward inference of the belief-goal pair leads to the prediction of an expected action. Next, comparing the observed and predicted actions of the observed agent, the underlying emotions triggered in the previous simulation steps and influencing the belief, goal, or both in the current simulation step are inferred. A minimal sketch of this comparison step is given below. 
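The comparison step itself is simple; the following sketch is purely illustrative, with a hypothetical \texttt{predict\_action} standing in for a belief--goal-only model such as the one in \autoref{fig:emotion-inference}.
\begin{verbatim}
def emotions_suspected(predict_action, belief, goal, observed_action):
    # Forward inference: predict the action from the belief-goal pair.
    expected = predict_action(belief, goal)
    # A mismatch suggests that emotions triggered in earlier steps have
    # altered the current belief, the goal, or both, and starts the
    # inverse inference of those emotions.
    return expected != observed_action
\end{verbatim}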
Such a combination of forward and inverse inferences was previously mentioned by \cite{Saxe2017}, although no specific framework for implementing it was proposed or discussed. Note that the estimations obtained via a forward inference may be analysed together with the estimations obtained via an inverse inference from the observed actions within two parallel computation modules to provide more accurate estimations and predictions.% \section{Cognitive Models of Observed Agents: Formulation} \label{section:model_formulation} \begin{figure} \centering \includegraphics[width=0.65\textwidth]{Figures/ConnectionBetweenModules.eps} \caption{Different modules of the proposed modelling framework: \textbf{Model core} includes the \textit{internal variables} that influence the fast-dynamics state variables. \textbf{Input processes} include the perceptual access and rational reasoning. \textbf{Output processes} include the rational action selection block. \textbf{World model} formulates the influence of the agent's actions on real-life data. Inverse inference of emotions from actions is represented in a module parallel to the model core.} \label{fig:model_formulation-division} \end{figure} In order to facilitate the formulation of the proposed cognitive network representation, it has been broken into five sub-modules (see \autoref{fig:model_formulation-division}): (1) \emph{model core}, including the internal variables that play a role in the dynamic evolution of the fast-dynamics state variables, (2) \emph{input processes}, including the perceptual access and rational reasoning functions and the auxiliary variable perceived data, (3) \emph{output processes}, including rational action selection, (4) \emph{world model}, which formulates the influence of the rational agent's actions on real-life data, and (5) \emph{parallel inverse inference module}, which inversely infers the emotions of the rational agent according to the observed actions as discussed in \autoref{subsec:emotion-inference-from-actions}.% Since the main aim of this paper is modelling the dynamic evolution of fast-dynamics state variables, we focus on the mathematical formulation of the model core. \autoref{fig:model_modules} shows the elements of the model core, where the model's state variables (i.e., belief, goal, and emotion) and auxiliary variables (perceived knowledge and bias) are represented in white and the model's inputs and fixed parameters (i.e., rationally perceived knowledge, general world knowledge, general preferences, and personality traits) are illustrated in grey.% \begin{figure} \centering \includegraphics[width=.72\textwidth]{Figures/Model-formulation} \caption{Model core: Elements that are state and auxiliary variables for the model core are represented in white, while elements that are inputs of the model core are represented in grey.} \label{fig:model_modules} \end{figure} Previous works \citep{Baker2011,Baker2017,Jara-Ettinger2016ThePsychology,Saxe2017,Lee2019} mainly use Bayes' theorem to describe human's cognition. This basically implies that humans reason in terms of probabilities, developing connections between different premises and inferring the likelihood of various random events based on their prior knowledge \citep{Johnson-Laird1994MentalThinking}. Using these assumptions corresponds to considering the cognitive network representation as a Bayesian network \citep{Heckerman2008}. 
Alternatively, fuzzy cognitive maps (FCMs) \citep{FCM:BartKosko} represent concepts and variables that correspond to complex and/or uncertain systems and their interlinks and interactions. Contrary to Bayesian networks, FCMs support cyclic connections \citep{Stylios2004ModelingMaps}, which is very relevant for modelling cognitive procedures of humans (cf.\ \autoref{fig:model_modules}). Moreover, concepts or variables in an FCM can be represented mathematically by fuzzy variables, which fit well the concepts involved in human cognitive procedures (e.g., beliefs, goals, and emotions). Therefore, in this paper, based on the idea of FCMs, we propose an extended FCM representation and use it to mathematically formulate our proposed cognitive models.% \begin{figure} \centering \begin{subfigure}{0.4\textwidth} \centering \includegraphics[width=\textwidth]{Figures/FCM_simple_linkage.eps} \caption{Simple linkage} \label{fig:FCM_simple_linkage} \end{subfigure} \hspace{5ex} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Figures/FCM_complex_linkage.eps} \caption{Complex linkage} \label{fig:FCM_complex_linkage} \end{subfigure} \caption{Simple and complex linkage} \label{FCM:linkage&sidelinkage} \end{figure} In an FCM the elements or variables that explain the evolution of the system are called \textit{concepts}. The $i^{\textrm{th}}$ concept of the system is denoted by $C_i$. Mathematically, $C_i$ (e.g., emotion) can be represented as a fuzzy variable, with $A_i$ a possible realisation (e.g., happy) of it. We define $\mathbb{C}$ as the set of all concepts and $\mathbb{A}$ as the set of all possible realisations of these concepts. The directed influence of concept $C_i$ over concept $C_j$ in an FCM is represented by a directed line called a \textit{linkage} (see \autoref{fig:FCM_simple_linkage}). Every linkage is characterised by a weight $w_{ij} \in [-1,1]$ that reflects the level of influence of concept $C_i$ over concept $C_j$. Whenever $w_{ij}$ is positive (negative), an increase in realisation $A_i$ of $C_i$ implies an increase (a decrease) in realisation $A_j$ of $C_j$ (the larger the absolute value of $w_{ij}$, the larger the influence of $C_i$ over $C_j$). Whenever $w_{ij}$ is null, changes in realisation $A_i$ of $C_i$ do not influence realisation $A_j$ of $C_j$.% In FCMs the weights $w_{ij}$ are considered to be constant. However, in order to accurately model most real-world systems with FCMs, variable weights may be required \citep{Carvalho2001RuleDynamics,Mourhir2016AAssessment}. For instance, in rule-based FCM \citep{Carvalho2001RuleDynamics} the values of weights depend on the realised value $A_i$ of the causing variable or concept $C_i$. In our proposed cognitive network representation, in some cases the value of weight $w_{ij}$ for a given simulation step $k$ may depend on the realised values $A_i$, $A_j$, or $A_\ell$ of the causing concept $C_i$, affected concept $C_j$, or another intermediate concept $C_\ell$ corresponding to that simulation step. 
To address these requirements, we consider weights that may be a function of the causing, affected, or intermediate concepts and accordingly define \emph{simple linkages}, \emph{side linkages}, and \emph{complex linkages} and introduce their mathematical representations.%
The linkages $(i,j)$ that connect two concepts $C_i$ and $C_j$ directly and are not influenced by an intermediate concept (see \autoref{fig:FCM_simple_linkage}) are called \textit{simple linkages}. A \textit{side linkage} $(\ell,i,j)$ corresponds to the directed influence of an intermediate concept $C_\ell$ over a linkage $(i,j)$ that connects concepts $C_i$ and $C_j$ (see the dashed arrow in \autoref{fig:FCM_complex_linkage}). The collection of a linkage that is influenced by one or several side linkages and all those side linkages is called a \textit{complex linkage} (see \autoref{fig:FCM_complex_linkage}). The set of all ordered pairs $(i,j)$ corresponding to simple linkages is given by $\mathbb{L}$ and the set of all ordered trios\footnote{In our cognitive network representation, complex linkages include no more than one side linkage.} $(\ell,i,j)$ that correspond to complex linkages is given by $\overline{\mathbb{L}}$. The weight of a simple and a complex linkage for simulation step $k$ is computed via, respectively, function $f:\mathbb{A}^2 \rightarrow [-1,1]$ and function $g:\mathbb{A}^3 \rightarrow [-1,1]$. We have:
\begin{align} \begin{array}{ll} w_{ij} (k) = f \left( C_i(k) , C_j(k)\right), \qquad & \forall i,j \quad \textrm{for which}\quad (i,j) \in \mathbb{L}\\ w_{ij} (k) = g \left( C_\ell(k) , C_i(k), C_j(k)\right),\qquad & \forall i,j \quad \textrm{for which}\quad \exists \ell \quad \textrm{such that} \quad (\ell,i,j) \in \overline{\mathbb{L}} \end{array} \end{align}
for all $k\in\{1,2,\ldots\}$, $C_\ell,C_i,C_j \in \mathbb{C}$, and $C_\ell(k), C_i(k), C_j(k)\in \mathbb{A}$. The dynamic equation for updating a concept $C_j\in\mathbb{C}$ that evolves per simulation step $k$ within the proposed extended FCM is formulated by:
\begin{align} \label{eq:update-FCM} C_j(k+1) = h \Biggl( \sum_{\forall i | (i,j) \in \mathbb{L}} f\Big(C_i(k),C_j(k)\Big) C_i(k) + \sum_{ \forall i ;\exists \ell | (\ell,i,j) \in \overline{\mathbb{L}}} g\Big(C_\ell(k),C_i(k),C_j(k)\Big) C_i(k) + \alpha_{j} C_j(k) \Biggr) \end{align}
where $h(\cdot)$ is in general a threshold function that constrains the evolving concept $C_j$ to remain within its admissible set $\mathbb{A}_j \subseteq \mathbb{A}$ and $\alpha_j$ determines the influence of the realised value of concept $C_j$ for simulation step $k$ on its value for simulation step $k+1$.
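For concreteness, the update rule \eqref{eq:update-FCM} can be sketched in a few lines of Python; the clipping threshold $h$ and the data structures below are our own illustrative choices and are not the functions identified for the models discussed in \autoref{sec:model_implementation}.
\begin{verbatim}
import numpy as np

def h(x, lo=-1.0, hi=1.0):
    # Threshold function keeping the evolving concept in its admissible set.
    return float(np.clip(x, lo, hi))

def update_concept(j, C, simple, complex_, f, g, alpha):
    """One step of the extended-FCM update for concept j.

    C        -- dict {index: realisation A_i in [-1, 1]}
    simple   -- iterable of pairs (i, j) in L (simple linkages)
    complex_ -- iterable of trios (l, i, j) in L-bar (complex linkages)
    f, g     -- weight functions f(A_i, A_j) and g(A_l, A_i, A_j) into [-1, 1]
    alpha    -- self-influence coefficient alpha_j
    """
    total = alpha * C[j]
    for (i, jj) in simple:
        if jj == j:
            total += f(C[i], C[j]) * C[i]
    for (l, i, jj) in complex_:
        if jj == j:
            total += g(C[l], C[i], C[j]) * C[i]
    return h(total)
\end{verbatim}
Updating every fast-dynamics concept once per simulation step and iterating this map yields the trajectories reported in the implementation section below.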
Note that for the proposed cognitive model, concept $C_j$ in \eqref{eq:update-FCM} should correspond to one of the fast-dynamics state variables, while concept $C_i$ may be another fast-dynamics state variable, an auxiliary variable, a slow-dynamics state variable, or an input variable (see \autoref{fig:model_modules}). In order to define the functions $h(\cdot)$, $f(\cdot)$, and $g(\cdot)$ for the FCM corresponding to the proposed cognitive models, different approaches may be used, such as using crisp mathematical functions or describing them via fuzzy inference systems (see, e.g., \citep{Carvalho2001RuleDynamics}).
\section{Cognitive Models of Observed Agents: Implementation} \label{sec:model_implementation}
The proposed cognitive model was implemented in MATLAB and was used to simulate the expected cognitive procedures of human-like computer-based rational agents and human participants in several real-life scenarios. For qualitative assessment of the cognitive model (i.e., for validating the trends of the dynamic evolution of the fast-dynamics state variables beliefs, goals, and emotions), computer-based rational agents simulated according to a generalised knowledge base (built upon intuitive data provided by human participants and literature) were used. Next, $15$ human participants were asked to fill in online surveys, where they provided personalised answers regarding their state-of-mind for several real-life scenarios. The developed cognitive model was used to estimate the state-of-mind of the participants within the same scenarios. These estimations were compared to the answers directly provided by the participants to assess the models.
\subsubsection*{Implementation Setup} \label{subsec:implementation-setup}
\begin{figure} \centering \begin{subfigure}{\textwidth} \centering \includegraphics[width=\textwidth]{Figures/FCM_situation1.eps} \caption{Network representation for \textit{real-life scenario 1}, illustrating auxiliary variables (emotion triggers 2 and 3) and excluding slow-dynamics state variables (general world knowledge, general preferences, personality traits).} \label{fig:situation1} \end{subfigure} \begin{subfigure}{\textwidth} \centering \includegraphics[width=\textwidth]{Figures/FCM_situation2.eps} \caption{Network representation for \textit{real-life scenario 2}, illustrating auxiliary variables (emotion triggers 1, 2, and 3).} \label{fig:situation2} \end{subfigure} \caption{Proposed network representation of human's cognitive procedures: Oval-shaped and rectangular elements show, respectively, (input, output, and state) variables and processes/functions.
} \label{FCM:real-life-situations} \end{figure}
\begin{comment} \begin{table} \centering \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \textbf{\begin{tabular}[c]{@{}c@{}}Linguistic \\ Term\end{tabular}} & Null & \begin{tabular}[c]{@{}c@{}}Very Weak\\ Pos/Neg\end{tabular} & \begin{tabular}[c]{@{}c@{}}Weak\\ Pos/Neg\end{tabular} & \begin{tabular}[c]{@{}c@{}}Average\\ Pos/Neg\end{tabular} & \begin{tabular}[c]{@{}c@{}}Strong\\ Pos/Neg\end{tabular} & \begin{tabular}[c]{@{}c@{}}Very Strong\\ Pos/Neg\end{tabular} & \begin{tabular}[c]{@{}c@{}}Direct\\ Pos/Neg\end{tabular} \\ \hline \textbf{Weight} & $0$ & $\pm 0.1$ & $\pm 0.25$ & $\pm 0.5$ & $\pm 0.75$ & $\pm 0.9$ & $\pm 1.0$ \\ \hline \end{tabular} \caption{Linguistic terms used to describe the FCM weights (\textit{Pos} for positive and \textit{Neg} for negative).} \label{tbl:weights-equivalence} \end{table} \end{comment}
\begin{table} \centering \begin{tabular}{|l|l|l|l|l|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Concept}}} & \multicolumn{1}{c|}{\multirow{2}{*}{\textbf{Scenario}}} & \multicolumn{3}{c|}{\textbf{Linguistic term}} \\ \cline{3-5} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{\textbf{Min realisation $-1$}} & \multicolumn{1}{c|}{\textbf{Median realisation $0$}} & \multicolumn{1}{c|}{\textbf{Max realisation $1$}} \\ \hline \textbf{Belief} & 1, 2 & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}There will be heavy rain\end{tabular}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}} No information about the weather\end{tabular}} & It will be very sunny\\ \hline \textbf{Goal} & 1, 2 & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}Agent does not want to do \\ the outdoor activity\end{tabular}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}Agent does not have a preference \\ about the outdoor activity\end{tabular}} & \begin{tabular}[c]{@{}l@{}}Agent wants to do \\ the outdoor activity\end{tabular} \\ \hline \textbf{Emotion} & 1, 2 & \multicolumn{1}{l|}{Very sad} & \multicolumn{1}{l|}{No emotion} & Very happy \\ \hline \textbf{\begin{tabular}[c]{@{}l@{}}Emotion \\ trigger 2\end{tabular}} & 1, 2 & \multicolumn{1}{l|}{Very low trigger} & \multicolumn{1}{l|}{No emotion trigger} & Very high trigger\\ \hline \textbf{\begin{tabular}[c]{@{}l@{}}Emotion \\ trigger 3\end{tabular}} & 1, 2 & \multicolumn{1}{l|}{Very low trigger} & \multicolumn{1}{l|}{No emotion trigger} & Very high trigger \\ \hline \textbf{Bias} & 1, 2 & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}There will be heavy rain\end{tabular}} & \multicolumn{1}{l|}{No bias} & It will be very sunny \\ \hline \textbf{\begin{tabular}[c]{@{}l@{}}Rationally \\ perceived \\ knowledge\end{tabular}} & 1, 2 & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}There will be heavy rain\end{tabular}} & \multicolumn{1}{l|}{No information} & It will be very sunny \\ \hline \textbf{\begin{tabular}[c]{@{}l@{}}General \\ world \\ knowledge\end{tabular}} & 2 & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}Weather prediction \\ is very inaccurate\end{tabular}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}Weather prediction is \\ mildly accurate\end{tabular}} & \begin{tabular}[c]{@{}l@{}}Weather prediction is\\ very accurate\end{tabular} \\ \hline \textbf{\begin{tabular}[c]{@{}l@{}}General\\ preferences\end{tabular}} & 2 & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}} Agent strongly dislikes \\ the outdoor activity\end{tabular}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}} Agent does not have a preference \\ about the outdoor activity\end{tabular}} & \begin{tabular}[c]{@{}l@{}}Agent strongly likes \\ the outdoor activity\end{tabular} \\ \hline \textbf{\begin{tabular}[c]{@{}l@{}}Emotion\\ trigger 1\end{tabular}} & 2 & \multicolumn{1}{l|}{Very low trigger} & \multicolumn{1}{l|}{No emotion trigger} & Very high trigger \\ \hline \end{tabular} \caption{Description of the concepts in real-life scenarios 1 and 2. The second column shows in which real-life scenario the concept is present.} \label{tbl:concepts-real-life-situation} \end{table}
For the implementations, the model core (see \autoref{fig:model_modules}) was considered, supposing that the rationally perceived knowledge is exactly the same as the real-life data. Similarly to how humans express their state-of-mind, the intensities of beliefs, goals, emotions, and biases were expressed via linguistic terms. These terms corresponded to fuzzy values with realisations in $[-1,1]$. The links that connect the elements of the network representations were also described by linguistic terms (according to the expected influence of every pair of connected concepts, gathered from the intuitive knowledge base from humans) and were quantified according to the rules of fuzzy values.%
The following two real-life scenarios, illustrated in \autoref{FCM:real-life-situations}, were considered. \emph{Real-life scenario 1}, where the rational agent holds a given level of preference for doing an outdoor activity (intensity of the \emph{goal}), then checks the weather forecast (\emph{rationally perceived knowledge}), and develops a \emph{belief} (that may be biased) about the upcoming weather conditions. The agent's belief about the upcoming weather conditions alone, or together with the agent's goal (see \emph{emotion trigger 2} and \emph{emotion trigger 3} in \autoref{fig:situation1}), may trigger emotions in the agent. \emph{Real-life scenario 2}, where, compared to real-life scenario 1, general world knowledge, general preferences, and the influence of personality traits are included. The rational agent has a \emph{general preference} about the outdoor activity and some \emph{general world knowledge} regarding the reliability of the source of the weather forecast. When real-life scenario 2 is personalised for human participants, the influence of personality traits is also included in the model identification. A combination of the agent's general preference and belief (see \emph{emotion trigger 1} in \autoref{fig:situation2}) may trigger emotions in the rational agent. The definitions used for the concepts in real-life scenarios 1 and 2 are given in \autoref{tbl:concepts-real-life-situation}.
\subsubsection*{Cognitive Model Validation} The following three hypotheses were assessed and validated for the developed cognitive models:
\begin{hyp}\label{hyp:first} Formulating the weights of the linkages of the network representations in \autoref{FCM:real-life-situations} generally as functions of the concepts that correspond to that linkage, rather than considering fixed weights, is essential for accurate estimations of the state-of-mind variables of rational agents. \end{hyp} \vspace{-2ex}
\begin{hyp}\label{hyp:second} Incorporation of general world knowledge and general preferences is essential for accurate estimations of the state-of-mind variables of rational agents.
\end{hyp} \vspace{-2ex}
\begin{hyp}\label{hyp:third} Personalising the weights of the linkages of the network representations for each individual (i.e., incorporating the personality traits) is essential for accurate estimations of the state-of-mind variables of rational agents. \end{hyp}
\paragraph{Qualitative assessment based on computer simulations:} In order to evaluate Hypothesis~\ref{hyp:first}, weight $w_{12}$ corresponding to the simple linkage between the belief, $C_1$, and the goal, $C_2$, (see \autoref{fig:situation1}) is formulated as a function of the belief, i.e.:
\begin{equation} \label{eq:piecewise-function} w_{12}(C_1(k)) = \left\{ \begin{array}{ll} w_{12}^- & \quad C_1(k) < 0 \\ 0 & \quad C_1(k) = 0 \\ w_{12}^+ & \quad C_1(k) > 0 \end{array} \right. \end{equation}
where $k$ is the simulation step and $w_{12}^- > w_{12}^+$ (for computer-based simulations, we consider $w_{12}^- = 0.5 $ and $ w_{12}^+ = 0.1$, and for human participants these parameters are personalised per participant). More specifically, intuitive human data (also see \cite{Shapiro2007}) revealed that, compared to a positive belief, a negative belief usually has a more significant influence on the (intensity of the) goal developed by rational agents, particularly when initially (i.e., before developing a belief) rational agents hold no specific preference regarding their goal. For instance, according to the data collected, when a person who initially had no specific preference regarding doing or not doing an outdoor activity developed the belief that the weather conditions were going to be very bad, they were very likely to develop the goal of strongly avoiding outdoor activities. After developing the belief that the weather conditions were going to be very good, however, the developed goal was much less likely to be as strong as in the first case (although some humans still developed the goal of doing an outdoor activity). This effect was boosted when humans also had a general preference for staying inside.%
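In code, the piecewise weight \eqref{eq:piecewise-function} reads as follows (a direct transcription; the default values are the computer-simulation settings quoted above):
\begin{verbatim}
def w12(belief, w_minus=0.5, w_plus=0.1):
    """Belief-dependent weight of the belief -> goal linkage, following
    the piecewise definition above. For human participants, w_minus and
    w_plus are identified per participant."""
    if belief < 0:
        return w_minus
    if belief > 0:
        return w_plus
    return 0.0
\end{verbatim}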
\begin{figure} \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Figures/testing/H1a_cte_rpk=1.eps} \caption{Initial conditions: Null intensity for initial goal ($A_2=0$ for $k=0$) and maximum rationally perceived knowledge ($A_7=1$).} \label{fig:h1a-cte-rpk=1} \end{subfigure} \hspace{0.15cm} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Figures/testing/H1a_cte_rpk=-1.eps} \caption{Initial conditions: Null intensity for initial goal ($A_2=0$ for $k=0$) and minimum rationally perceived knowledge ($A_7=-1$).} \label{fig:h1a-cte-rpk=-1} \end{subfigure} \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Figures/testing/H1a_var_rpk=1.eps} \caption{Initial conditions: Null intensity for initial goal ($A_2=0$ for $k=0$) and maximum rationally perceived knowledge ($A_7=1$).} \label{fig:h1a-var-rpk=1} \end{subfigure} \hspace{0.15cm} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Figures/testing/H1a_var_rpk=-1.eps} \caption{Initial conditions: Null intensity for initial goal ($A_2=0$ for $k=0$) and minimum rationally perceived knowledge ($A_7=-1$).} \label{fig:h1a-var-rpk=-1} \end{subfigure} \caption{Evolution of the model variables over $30$ simulation steps for real-life scenario 1: In the first two cases the weights are constant and in the last two cases weight $w_{12}$ varies according to \eqref{eq:piecewise-function}.} \label{fig:h1a-var} \end{figure}
Two sets of simulations, for real-life scenarios 1 and 2, were considered, each with a null intensity for the initial goal. Each simulation set contained two cases with constant weights for all the linkages and two cases with the same initial values but with weight $w_{12}$ varying according to \eqref{eq:piecewise-function}. For real-life scenario 1 (see \autoref{fig:situation1}) the rationally perceived knowledge was set once to its maximum (i.e., $A_7=1$) and once to its minimum (i.e., $A_7=-1$), implying that the weather conditions were going to be very good and very bad, respectively. The evolution of the state and auxiliary variables for these simulations is shown in \autoref{fig:h1a-var}.
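Before turning to the results, the expected asymmetry can be illustrated with a toy two-concept chain (rationally perceived knowledge feeding the belief, the belief feeding the goal). All coupling constants except $w_{12}^{\pm}$ are invented for illustration and are not the identified model parameters:
\begin{verbatim}
def clip(x):
    return max(-1.0, min(1.0, x))

def w12(belief):
    return 0.5 if belief < 0 else (0.1 if belief > 0 else 0.0)

def simulate(rpk, steps=30, alpha=0.5):
    belief = goal = 0.0                 # null initial intensities
    for _ in range(steps):
        belief = clip(alpha * belief + 0.5 * rpk)         # knowledge -> belief
        goal = clip(alpha * goal + w12(belief) * belief)  # belief -> goal
    return belief, goal

print(simulate(rpk=1.0))   # very good weather: belief -> 1, goal -> 0.2
print(simulate(rpk=-1.0))  # very bad weather: belief -> -1, goal -> -1
\end{verbatim}
In this toy run the converged goal is five times stronger in magnitude for the negative belief, which is the qualitative behaviour discussed next.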
While for the network representation with constant weights the converged intensity of the goal is the same for both very bad and very good weather conditions (see Figures~\ref{fig:h1a-cte-rpk=-1} and \ref{fig:h1a-cte-rpk=1}), for the network representation where $w_{12}$ varies according to \eqref{eq:piecewise-function} based on the developed belief, the converged intensity of the goal for a negative belief (\autoref{fig:h1a-var-rpk=-1}) is larger than for a positive belief (\autoref{fig:h1a-var-rpk=1}); these results are in line with reality.%
\begin{figure} \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Figures/testing/H1b_cte_gp=-1.eps} \caption{Initial conditions: Maximum intensity for initial goal ($A_2=1$ for $k=0$), maximum rationally perceived knowledge ($A_7=1$), maximum general world knowledge ($A_8=1$), and null general preference ($A_9=0$).} \label{fig:h1b_cte_gp=0} \end{subfigure} \hspace{0.15cm} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Figures/testing/H1b_cte_gp=1.eps} \caption{Initial conditions: Maximum intensity for initial goal ($A_2=1$ for $k=0$), maximum rationally perceived knowledge ($A_7=1$), maximum general world knowledge ($A_8=1$), and maximum general preference ($A_9=1$).} \label{fig:h1b_cte_gp=1} \end{subfigure} \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Figures/testing/H1b_var_gp=-1.eps} \caption{Initial conditions: Maximum intensity for initial goal ($A_2=1$ for $k=0$), maximum rationally perceived knowledge ($A_7=1$), maximum general world knowledge ($A_8=1$), and null general preference ($A_9=0$).} \label{fig:h1b_var_gp=0} \end{subfigure} \hspace{0.15cm} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Figures/testing/H1b_var_gp=1.eps} \caption{Initial conditions: Maximum intensity for initial goal ($A_2=1$ for $k=0$), maximum rationally perceived knowledge ($A_7=1$), maximum general world knowledge ($A_8=1$), and maximum general preference ($A_9=1$).} \label{fig:h1b_var_gp=1} \end{subfigure} \caption{Evolution of the model variables over $30$ simulation steps for real-life scenario 2: In the first two cases only simple linkages (with constant weights) are present, while in the last two cases complex linkages exist.} \label{fig:h1b-var} \end{figure}
\begin{figure} \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Figures/testing/H2_GWK=-1_GP=1_scn11.eps} \caption{Initial conditions: Maximum intensity for initial goal ($A_2=1$ for $k=0$), minimum rationally perceived knowledge ($A_7= -1$), minimum general world knowledge ($A_8=-1$), and maximum general preference ($A_9=1$).} \label{fig:h2_gwk-1_gp1} \end{subfigure} \hspace{0.15cm} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Figures/testing/H2_GWK=1_GP=1_scn9.eps} \caption{Initial conditions: Maximum intensity for initial goal ($A_2=1$ for $k=0$), minimum rationally perceived knowledge ($A_7= -1$), maximum general world knowledge ($A_8=1$), and maximum general preference ($A_9=1$).} \label{fig:h2_gwk1_gp1} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Figures/testing/H2_GWK=1_GP=-1_scn10.eps} \caption{Initial conditions: Maximum intensity for initial goal ($A_2=1$ for $k=0$), minimum rationally perceived knowledge ($A_7= -1$), maximum general world knowledge ($A_8=1$), and minimum general preference ($A_9=-1$).} \label{fig:h2_gwk1_gp-1}
\end{subfigure} \caption{Evolution of the model variables over $30$ simulation steps for real-life scenario 2.} \label{fig:h2-var} \end{figure}
The next case that was simulated was based on the following observation: when humans strongly wanted to do an outdoor activity and developed the belief that the weather conditions were going to be very good, emotion trigger 3 (see Figures~\ref{fig:situation1} and \ref{fig:situation2}) had a higher intensity than when the weather conditions were the same but the goal of doing an outdoor activity was not as strong (also see \cite{Harris1989,Bradmetz2004,Reisenzein2009}). In other words, while beliefs, either alone or together with goals, may trigger emotions, their influence over emotions increases (decreases) as the intensity of the goal increases (decreases). This is represented via a complex linkage in the proposed network representation (see \autoref{FCM:real-life-situations}), i.e., the goal is an intermediate concept that influences the weight corresponding to the direct linkage between the belief and emotion trigger 3. In other words, this linkage should mathematically be represented as a function of (at least) the intermediate concept, the goal. When general world knowledge and particularly general preferences were excluded (see \autoref{fig:situation1}), the variables of the cognitive model converged rapidly to similar values in both cases of the given example. Therefore, the network representation shown in \autoref{fig:situation2} was used to simulate the discussed example, where the weather conditions (i.e., rationally perceived knowledge) were going to be very good (i.e., $A_7=1$), according to the general world knowledge the accuracy of the weather forecast was very high (i.e., $A_8=1$), and the rational agent initially held the highest intensity for doing an outdoor activity (i.e., $A_2=1$ for $k=0$). The general preference was set once to $A_9=0$ and once to $A_9=1$. The simulation results for constant and varying weights for the linkages are shown in \autoref{fig:h1b-var}: in Figures~\ref{fig:h1b_cte_gp=0} and \ref{fig:h1b_cte_gp=1} (constant weights), there is no significant difference in the estimated emotion of the rational agent for the two different cases. With varying weights, however (see Figures~\ref{fig:h1b_var_gp=0} and \ref{fig:h1b_var_gp=1}), the converged value of emotion trigger 3, and thus of the emotion, is lower in the first case than in the second, providing a more realistic simulation. These findings support both Hypothesis~\ref{hyp:first} and Hypothesis~\ref{hyp:second}.%
Next we considered two cases where, despite similar rationally perceived knowledge and general preferences, the general world knowledge differed. We supposed very bad weather conditions (i.e., $A_7=-1$), with the rational agent initially holding the goal of doing an outdoor activity with the maximum intensity (i.e., $A_2=1$ for $k=0$) and a strong general preference regarding outdoor activities (i.e., $A_9=1$). The agent considered the weather forecast once to be very accurate (i.e., $A_8=1$) and once to be very inaccurate (i.e., $A_8= -1$).
The simulation results using the network representation shown in \autoref{fig:situation2} are presented in \autoref{fig:h2-var}: the cognitive model captured the difference in the evolution of the belief of the rational agent, i.e., whenever the rational agent considered the source of the weather forecast to be unreliable, the belief converged to a value of smaller magnitude than the rationally perceived knowledge, while for a very reliable weather forecast the belief converged to the rationally perceived knowledge (see \autoref{fig:h2_gwk1_gp1}). This difference cannot be captured via the model shown in \autoref{fig:situation1}. In addition to the belief, emotions are often influenced by the general preference and general world knowledge: \autoref{fig:h2_gwk1_gp-1} illustrates that a rational agent who initially held the goal of doing an outdoor activity with maximum intensity but with a minimum general preference, and who realised via a highly reliable weather forecast source that the weather conditions were going to be very bad, ended up, according to the network representation in \autoref{fig:situation2}, with a small positive value for the emotion (e.g., slightly happy). Moreover, the intensity of the goal converges to approximately $-0.5$, implying that the rational agent develops only a medium intention against going outside despite the very bad weather conditions. These realistic results, which cannot be obtained via the network representation in \autoref{fig:situation1}, further support the validity of Hypothesis~\ref{hyp:second}. In order to evaluate the validity of Hypothesis~\ref{hyp:third}, detailed data from $15$ individual participants were used to personalise the cognitive models. The details are given next.%
\paragraph{Quantitative assessment based on human participants:} The validity of Hypotheses~\ref{hyp:first}-\ref{hyp:third} was further investigated via the results of an online survey that compared the beliefs, goals, and emotions of human participants in various real-life scenarios with those computed by the proposed cognitive models. Overall, $15$ participants took part in the survey, which considered $26$ scenarios. The general preferences, rationally perceived knowledge, and general world knowledge were initialised as inputs of the cognitive models and the survey. \autoref{tbl:scenario-inputs} shows the linguistic terms (provided by participants) for the inputs in the online survey and their corresponding values used by the cognitive models. The following four cognitive models were considered: \begin{compactitem} \item \textbf{Model 1}, which included the model core shown in \autoref{fig:situation2}, was personalised per participant, and the linkages were represented by functions. \item \textbf{Model 2}, which was based on \autoref{fig:situation1} (excluding general preferences and general world knowledge), was personalised per participant, and the linkages were represented by functions. \item \textbf{Model 3}, which included the model core shown in \autoref{fig:situation2}, was personalised per participant, and the linkages were represented by constant weights. \item \textbf{Model 4}, which included the model core shown in \autoref{fig:situation2}, was \emph{not} personalised for various participants, and the linkages were represented by functions.
\end{compactitem}
The $26$ scenarios were divided into $6$ sets, where in each set only $1$ out of the $3$ inputs varied (i.e., the rationally perceived knowledge for sets $1$ and $2$, the general world knowledge for sets $3$ and $4$, and the general preferences for sets $5$ and $6$). Sets $1$-$4$ each included $3$ scenarios, while sets $5$ and $6$ each included $7$ scenarios. The format of the questions asked in the online survey is briefly explained in \autoref{app:survey}. Around $70\%$ of the data collected via the online survey was used for training and the remaining $30\%$ for validation. Since the amount of data was limited, each model was trained and validated using three different batches of training and validation scenarios. First, the cognitive models were trained via sets $3$-$6$ and were validated via sets $1$ and $2$. Second, sets $1$, $2$, $5$, and $6$ were used for training and sets $3$ and $4$ for validation. Lastly, sets $1$-$5$ were used for training and set $6$ for validation. For model $m$ with $m=1,2,3$ the weights/functions of those linkages that are regulated by personality traits (i.e., the linkages influencing the emotion triggers, and the bias and the goal via the belief) were identified per participant via a grid search that determined the vector $\bm{w}$ of constant weights or function parameters that for participant $p$ minimised the following loss function:
\begin{equation} \label{eq:loss-function} J_{m,p}(\bm{w}) = \sum_{s\in\mathbb{S}_{\textrm{t}}} \bigg( \sum_{j= 1,2,3} \Big( A_{m,j}(\bm{w},s) - A_{p,j}(s) \Big) ^2 \bigg) \end{equation}
with $\mathbb{S}_{\textrm{t}}$ the set of scenarios used for training, $A_{m,j}(\bm{w},s)$ the converged value of the realisation of concept $C_j$ estimated via model $m$ using weight vector $\bm{w}$ for scenario $s$, and $A_{p,j}(s)$ the quantified value within $[-1,1]$ corresponding to the linguistic response of participant $p$ for concept $C_j$ in scenario $s$.%
The converged values of the state variables (belief, goal, emotion) estimated by model $m$ for scenario $s$ were compared with the corresponding values from the online survey. The mean squared error for realisation $A_j$ of concept $C_j$ for scenario $s$ for model $m$ and for $j=1,2,3$ is:
\begin{align} \label{eq:evalution-metrics-scenario} \text{MSE}(m,j,s) = \dfrac{1}{|\mathbb{P}|} \sum_{p\in\mathbb{P}} \bigg(A_{m,j}(\bm{w}^*_{m,p}, s)-A_{p,j}(s)\bigg)^2 \end{align}
with $\mathbb{P}$ the set of all participants, $\bm{w}^*_{m,p}$ the vector of personalised weights for model $m$ obtained via \eqref{eq:loss-function} for participant $p$, and $|\cdot|$ the set cardinality.
The mean squared error of model $m$ for concept $C_j$ and validation set $\mathbb{V}$ is:
\begin{align} \label{eq:evalution-metrics-overall} \text{MSE}\left(m,j,\mathbb{V}\right) = \dfrac{1}{|\mathbb{V}|} \sum_{s\in\mathbb{V}} \text{MSE}(m,j,s) \end{align}
The same procedure was used to identify model $4$, but the loss function in \eqref{eq:loss-function} was considered for the entire population of participants at once.%
\begin{table} \centering \begin{tabular}{|l|c|c|} \hline \textbf{Concept} & \textbf{Linguistic term} & \textbf{Numerical realisation} \\ \hline \multirow{7}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}General preferences\end{tabular}}} & Dislike a great deal & -1 \\ \cline{2-3} & Dislike a moderate amount & -0.66\\ \cline{2-3} & Dislike a little & -0.33\\ \cline{2-3} & No preference & 0 \\ \cline{2-3} & Like a little & 0.33\\ \cline{2-3} & Like a moderate amount & 0.66 \\ \cline{2-3} & Like a great deal & 1 \\ \hline \multirow{5}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Rationally perceived knowledge\end{tabular}}} & Heavy rain & -1\\ \cline{2-3} & Light rain & -0.5\\ \cline{2-3} & Unknown & 0\\ \cline{2-3} & Cloudy & 0.5\\ \cline{2-3} & Sunny & 1\\ \hline \multirow{3}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}General world knowledge\end{tabular}}} & Inaccurate & -0.4\\ \cline{2-3} & Accurate & 0.2\\ \cline{2-3} & Very accurate & 0.8\\ \hline \end{tabular} \caption{Linguistic terms and their values considered as the inputs (general preferences, rationally perceived knowledge, and general world knowledge) in the online survey.} \label{tbl:scenario-inputs} \end{table}
\begin{table} \centering \begin{tabular}{|c|c|c|c|c|} \hline \textbf{Belief} & \textbf{First run} & \textbf{Second run} & \textbf{Third run} & \textbf{All the validation data} \\ \hline \textbf{Model 1} & 0.473 & 0.148 & 0.057 & 0.218 \\ \hline \textbf{Model 2} & 0.541 & 0.234 & 0.059 & 0.264 \\ \hline \textbf{Model 3} & 0.493 & 0.393 & 0.085 & 0.314 \\ \hline \textbf{Model 4} & 0.549 & 0.156 & 0.105 & 0.261 \\ \hline \hline \textbf{Goal} & \textbf{First run} & \textbf{Second run} & \textbf{Third run} & \textbf{All the validation data} \\ \hline \textbf{Model 1} & 0.386 & 0.400 & 0.115 & 0.314 \\ \hline \textbf{Model 2} & 0.426 & 0.793 & 0.133 & 0.461 \\ \hline \textbf{Model 3} & 0.548 & 0.744 & 0.418 & 0.593 \\ \hline \textbf{Model 4} & 0.373 & 0.406 & 0.219 & 0.353 \\ \hline \hline \textbf{Emotion} & \textbf{First run} & \textbf{Second run} & \textbf{Third run} & \textbf{All the validation data} \\ \hline \textbf{Model 1} & 0.185 & 0.198 & 0.120 & 0.162 \\ \hline \textbf{Model 2} & 0.255 & 0.268 & 0.183 & 0.234 \\ \hline \textbf{Model 3} & 0.244 & 0.288 & 0.141 & 0.223 \\ \hline \textbf{Model 4} & 0.211 & 0.257 & 0.128 & 0.195 \\ \hline \end{tabular} \caption{Mean squared error in estimation of the belief, goal, and emotion via models 1-4 for three different validation runs. The last column shows the average error for all validation scenarios.} \label{tbl:results-emotions} \end{table}
Table~\ref{tbl:results-emotions} shows the mean squared error for the estimation of the beliefs, goals, and emotions via models $1$-$4$. Since all realisations were already normalised within $[-1,1]$, the error values were used without further scaling; the maximum possible mean squared error is $4$.
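For reference, the identification and evaluation pipeline just described can be summarised in a short sketch; the names are ours, and \texttt{simulate(w, s)} stands for a run of a cognitive model returning the converged realisations of the belief, goal, and emotion for scenario \texttt{s} under parameter vector \texttt{w}:
\begin{verbatim}
import itertools
import numpy as np

def fit_participant(simulate, answers, train_scenarios, grids):
    """Grid search minimising the training loss J_{m,p}(w) defined above.

    answers[s] -- (belief, goal, emotion) given by the participant for s
    grids      -- one list of candidate values per identified parameter
    """
    best_w, best_loss = None, np.inf
    for w in itertools.product(*grids):
        loss = sum(
            sum((simulate(w, s)[j] - answers[s][j]) ** 2 for j in range(3))
            for s in train_scenarios
        )
        if loss < best_loss:
            best_w, best_loss = w, loss
    return best_w

def mse(simulate, fitted, answers_by_p, validation, j):
    """MSE(m, j, V): averaged over scenarios in V and over participants."""
    per_scenario = [
        np.mean([(simulate(fitted[p], s)[j] - answers_by_p[p][s][j]) ** 2
                 for p in fitted])
        for s in validation
    ]
    return float(np.mean(per_scenario))
\end{verbatim}
For model 4 the same search is run once, with the loss summed over all participants.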
Compared to model $3$, model $1$ had significantly lower errors for all validation sets for the three state variables, which supports the validity of Hypothesis~\ref{hyp:first}, i.e., the importance of formulating the linkages of the model as functions. Compared to model $2$, model $1$ had lower errors for all validation sets and state variables, which supports the validity of Hypothesis~\ref{hyp:second}, i.e., the importance of including general preferences and general world knowledge in the cognitive models. Finally, compared to model $4$, model $1$ made more accurate estimates of beliefs and emotions for all validation sets. The error in the estimation of the goal was close for both models for the first two validation sets, and was lower for model $1$ for validation set $3$. These results support the validity of Hypothesis~\ref{hyp:third}, i.e., the importance of personalising the cognitive models. The satisfactory average performance of model $4$ (see the last columns of Table~\ref{tbl:results-emotions}) implies that such a universal cognitive model can reliably be used when personal information is not yet available.
\section{Conclusions and Topics for Future Research} \label{sec:conclusions}
We formalised various cognitive procedures of humans via network representations, proposed an extended version of fuzzy cognitive maps, and formulated the corresponding mathematical cognitive models for these network representations. While previous research focuses on the evolution of beliefs and goals only, we also considered emotions and personality traits, which allow for the incorporation of biases in perception and beliefs. We performed several analyses based on realistic experiments that included cognitive procedures of humans in order to personalise and evaluate the accuracy and validity of the proposed cognitive models. Beliefs, goals, and emotions were considered as the model's state variables that evolve over short time scales (e.g., seconds or minutes). General world knowledge, general preferences, and personality traits, which evolve over much longer time scales (e.g., months or years), were included as constant parameters within the short time scales of the modelled dynamics. While previous research (see \cite{Kwon2008, Saxe2017,Tapus2008,Zaki2013, Lee2019,Bono2007PersonalitySelf-monitoring,Leite2013}) shows the importance of emotions and personality traits in ToM, this is the first time these elements are systematically included in mathematical cognitive models for humans. The resulting cognitive models were identified and validated based on computer-based simulations and real-life experiments with human participants. The results of these experiments showed that the proposed cognitive models successfully represent personal differences of participants and precisely estimate and predict their current and future state-of-mind and behaviours. Moreover, the results showed that including the emotions and personality traits, and thus incorporating the personalisation and biases that exist in real-life cognitive procedures of humans, as well as including the general world knowledge and general preferences, is essential for realistic, precise, and personalised estimation of the unobservable state-of-mind of humans.
In the future, the proposed cognitive models will be used as prediction models of humans for control systems that steer the behaviour of autonomous machines that interact with humans.
Additionally, such models, when personalised to represent the cognitive procedures of an expert (e.g., a therapist), may be used to exhibit expert-like interactive behaviours via autonomous machines.
\section*{Compliance with Ethical Standards}
\smallskip \noindent \textbf{Competing Interests:} The authors declare no competing interests.
\bigskip \noindent \textbf{Research involving Human Participants and/or Animals:} The research involved volunteer human participants. The research has been approved by the Human Research Ethics Committee of TU Delft.
\bigskip \noindent \textbf{Informed Consent:} The participants were fully informed about the data collected via the surveys, agreed to its collection, and were assured that the data will remain anonymous.
\bigskip \noindent \textbf{Author Contributions:} Author M.\ Mor\~{a}o Patr\'{i}cio contributed to designing and implementing the computer-based and real-life experiments. Authors M.\ Mor\~{a}o Patr\'{i}cio and A.\ Jamshidnejad contributed to the analysis and interpretation of the results, the development of the mathematical cognitive models of humans, and the composition of the manuscript. Author A.\ Jamshidnejad supervised the study design and edited the manuscript. Both authors have critically reviewed and approved the final version of the manuscript.
\bibliographystyle{spbasic_custom}
\section{Introduction} The Generalized Riemann Hypothesis asserts that the nontrivial zeros of an $L$-function lie on the critical line $\Re(s)=\frac12$. Thus, the zeros can be listed as $\frac12+i \gamma_n$ where $\cdots\le \gamma_{-2}\le \gamma_{-1}<0\le \gamma_1\le \gamma_2\le \cdots$, with zeros repeated according to their multiplicity. We refer to $\frac12+i\gamma_1 $, or to $\gamma_1$ when no confusion will result, as the ``first'' zero of the $L$-function. The first zero of the Riemann zeta function is approximately $\gamma_1\approx 14.13$. Stephen D. Miller\cite{M} proved that a large class of $L$-functions have a smaller first zero, so among that class the zeta function has the highest lowest zero. Miller was motivated by a question of Odlyzko\cite{O}, who showed that the Dedekind zeta function of any number field has a zero whose imaginary part is less than~14. In this note we quote results from \cite{FKL} and \cite{B}, respectively, which establish the following: \begin{itemize} \item{} There exists an $L$-function whose lowest zero is higher than the lowest zero of the Riemann zeta function. \item{} Assuming certain generally believed hypotheses, there is a universal upper bound on the gap between consecutive critical zeros of any $L$-function. \end{itemize} In the next section we briefly introduce the $L$-functions we consider in this paper and describe a general result bounding the gaps between consecutive zeros. Then in Section~\ref{sec:example} we give a numerical example from \cite{FKL} of an $L$-function with $\gamma_1\approx 14.496$ and we explain why it is not actually that surprising that there are $L$-functions whose first zero is higher than that of the Riemann zeta function. Then in Section~\ref{sec:explicit} we use the explicit formula to give upper bounds for $\gamma_1$ for the $L$-functions we consider. In Section~\ref{sec:millermethod} we show that, while it is not surprising that there are $L$-functions which have large gaps between their zeros, the existence of the example in Section~\ref{sec:example} is surprising. \section{$L$-functions}\label{sec:Lfunctions} By an \emph{$L$-function} we mean the $L$-function attached to an irreducible unitary cuspidal automorphic representation of $GL_n$ over $\mathbb Q$, and furthermore we assume the Ramanujan-Petersson conjecture and the Generalized Riemann Hypothesis. This means that we can write the $L$-function as a Dirichlet series \begin{equation}\label{eqn:ds} L(s) = \sum_{n=1}^\infty \frac{a_n}{n^s} \end{equation} where $a_n\ll n^\delta$ for any $\delta>0$, which has an Euler product \begin{equation}\label{eqn:ep} L(s)=\prod_p L_p(p^{-s})^{-1} \end{equation} and satisfies a functional equation of the form \begin{equation}\label{eqn:fe} \Lambda(s) = Q^s \prod_{j=1}^d \Gamma_\mathbb R\left(s + \mu_j\right) L(s) = \varepsilon \overline{\Lambda(1 - \bar{s})}. \end{equation} Here $|\varepsilon|=1$ and we assume that $\Re(\mu_j)\ge 0$ and~$Q\ge 1$. The normalized $\Gamma$-function is defined as \begin{equation} \Gamma_\mathbb R(s) = \pi^{-s/2}\Gamma(s/2), \end{equation} where $\Gamma(s)$ is the usual Euler Gamma function. The number $d$ is called the \emph{degree} of the $L$-function, which for all but finitely many $p$ is also the degree of the polynomial~$L_p$. We use Weil's explicit formula, given in Lemma~\ref{lem:weil}, to prove the following theorem. 
\begin{theorem}\label{thm:selberg-class-bound} If $L(s)$ is entire and satisfies the Generalized Riemann Hypothesis, then $L(1/2 + it)$ has a zero in every interval of the form $t \in [t_0, t_0 + 45.3236]$. \end{theorem}
In the case that all $\mu_j$ are real, Miller~\cite{M} proved the above theorem with ``45.3236'' replaced by ``28''. In Section~\ref{sec:example} we give a numerical example to illustrate why things behave differently when the $\mu_j$ are complex. That example has $\gamma_1-\gamma_{-1}\approx 28.992$. A slightly improved version of Theorem~\ref{thm:selberg-class-bound} is given by Bober~\cite{B}, and he also gives the optimal result for the cases $d=3$ and~$4$.
The term ``lowest zero'' of an $L$-function is ill-defined, because one must first choose a normalization of the $L$-function. The normalization is clear in the case of Miller~\cite{M} because the parameters in the $\Gamma$-factors can be chosen to be real. But if $L(s)$ is an $L$-function then so is $L(s+i y)$ for any real $y$. A reasonable normalization is to require $\sum \Im(\mu_j) = 0$, but other normalizations are possible. Thus, it is natural to consider the maximum possible gap between zeros instead, which is how we phrased our result above. This discussion suggests two questions:
\begin{question}\label{q:1} Does there exist an $L$-function with a larger gap between its zeros than any other $L$-function? \end{question}
Theorem~\ref{thm:selberg-class-bound} shows that there is a least upper bound, $\Upsilon$, on the gap between consecutive zeros; the question is whether that bound is attained. We do not have a conjecture for $\Upsilon$, but Bober~\cite{B} suggests that $\Upsilon < 36$ (i.e., $\gamma_1 < 18$).
\begin{question}\label{q:2} If $0<u<\Upsilon$, does there exist an $L$-function whose largest zero gap is arbitrarily close to~$u$? \end{question}
Considerations of the function field analogue and a conjecture of Yoshida~\cite{Y} suggest that the answers to Questions~\ref{q:1} and~\ref{q:2} may be ``no'' and ``yes'', respectively.
Most of the work on this paper was completed during the workshop \emph{Higher rank $L$-functions: theory and computation}, held at the Centro de Ciencias de Benasque Pedro Pascual in July 2009. The motivation was a suggestion by David Farmer that one could make an ordered list of all $L$-functions according to their lowest critical zero. It was disappointing to find that such an ordering does not place the Riemann zeta function first, and in fact the $L$-function of the Ramanujan $\tau$-function would come before all the Dirichlet $L$-functions. Depending on the answers to Questions~\ref{q:1} and~\ref{q:2}, it is possible that this ``list'' would not have a first element, and any two $L$-functions could actually have infinitely many other $L$-functions between them.
\section{A certain degree-4 $L$-function}\label{sec:example}
In~\cite{FKL} the authors perform computational experiments to discover $L$-functions with functional equation~\eqref{eqn:fe} with $d=3$ or~$4$ and the $\mu_j$ purely imaginary. The results are approximate values for the $\mu_j$ and the coefficients $a_n$, which are claimed to be accurate to several decimal places. While it is not currently possible to prove that those numerical examples are indeed approximations to actual $L$-functions, the functions pass several tests which lend credence to their claim. One example which is relevant to the present paper has $d=4$ with $\mu_1=-\mu_2=4.7209 i$ and $\mu_3=-\mu_4=12.4687 i$.
Appropriately interpreted, this is the ``first'' $L$-function with $d=4$ and the $\mu_j$ purely imaginary. A plot of the $Z$-function along the critical line is given in Figure~\ref{fig:degree4}. Note that on the critical line the $Z$-function has the same absolute value as the $L$-function; in particular, it has the same critical zeros. Figure~\ref{fig:degree4} shows that this $L$-function has its first zero at $\gamma_1=14.496$. The $L$-function is self-dual, so $Z(t)$ is an even function of $t$ and $\gamma_{-1}=-14.496$, giving a gap between zeros of 28.992.
\begin{figure}[htp] \scalebox{1.0}[1.0]{\includegraphics{sp4plot.eps}} \caption{\sf The $Z$-function of an $L$-function satisfying functional equation~\eqref{eqn:fe} with $d=4$ and $\mu_1=-\mu_2=4.7209 i$ and $\mu_3=-\mu_4=12.4687 i$. The first zeros are at $\pm\gamma_1=\pm 14.496$. Data taken from~\cite{FKL}. } \label{fig:degree4} \end{figure}
The plot in Figure~\ref{fig:degree4} shows local minima near 4.7 and 12.5. Those are due to the trivial zeros, which have imaginary parts $- \Im(\mu_j)$. Those trivial zeros suppress the appearance of nearby zeros on the critical line, a phenomenon first observed by Strombergsson~\cite{St}. Thus, such $L$-functions can have a surprisingly large gap between their critical zeros. For the reader who may wish to check our calculations, such as with the explicit formula, we provide the spectral parameters and initial Dirichlet coefficients to higher precision:
\begin{align} \mu_1=\mathstrut &4.72095103638565339773\,i\cr \mu_3=\mathstrut &12.4687522615131728082\,i\cr &\mathstrut \cr a_2=\mathstrut& \phantom{\mathstrut -\mathstrut }1.34260324197021624329 \cr a_3=\mathstrut& -0.18745190876087089719 \cr a_4=\mathstrut& \phantom{\mathstrut-\mathstrut}0.4644565335271682550 \cr a_5=\mathstrut& -0.001627934631772515 \cr a_7=\mathstrut& \phantom{\mathstrut-\mathstrut}0.22822958260580737 \cr a_9=\mathstrut& -0.4634288260750947 \cr a_{11}=\mathstrut& \phantom{\mathstrut-\mathstrut}0.695834471444353 \cr a_{13}=\mathstrut& -0.8824356594477 \end{align}
Note also that the zeros with imaginary part $0<\gamma<30$ are at the heights $\{$14.4960615091, 17.1144514545, 19.4393573576, 21.193378013, 22.396088469, 23.108950059, 24.34252975, 25.59506020, 27.12281351, 28.2791393, 29.5857431$\}$.
\section{An upper bound on gaps between zeros}\label{sec:explicit}
\subsection{The explicit formula}
We use Weil's explicit formula with a particular test function to establish Theorem~\ref{thm:selberg-class-bound}. The form of the explicit formula that we will use is the following.
\begin{lemma}\label{lem:weil} Suppose that $L(s)$ has a Dirichlet series expansion \eqref{eqn:ds} which continues to an entire function such that \begin{equation} \Lambda(s) = Q^s \prod_{j=1}^d \Gamma_\mathbb R\left(s + \mu_j\right) L(s) = \varepsilon \overline{\Lambda(1-\overline{s})} \end{equation} is entire and satisfies the mild growth condition $L(\sigma + it) \ll |t|^A$, uniformly in $t$ for bounded~$\sigma$. Let $f(s)$ be holomorphic in a horizontal strip $-(1/2 + \delta) < \Im(s) < 1/2 + \delta$ with $f(s) \ll \min(1, |s|^{-(1+\epsilon)})$ in this region, and suppose that $f(x)$ is real valued for real $x$.
Suppose also that the Fourier transform of $f$ defined by \[ \hat f(x) = \int_{-\infty}^\infty f(u)e^{-2\pi i u x}\, du \] is such that \[ \sum_{n=1}^{\infty} \frac{c(n)}{n^{1/2}} \hat{f} \left( \frac{\log{n}}{2 \pi} \right) + \frac{\overline{c(n)}}{n^{1/2}} \hat f\left( -\frac{\log n}{2 \pi}\right) \] converges absolutely, where $c(n)$ is defined by \begin{equation} \frac{L'}{L}(s) = \sum_{n=1}^{\infty} \frac{c(n)}{n^s} . \end{equation} Then \begin{align} \label{weil} \sum_{\gamma} f(\gamma) =\mathstrut & \frac{\widehat{f}(0)}{\pi} \log{Q} + \frac{1}{2 \pi} \sum_{j=1}^d \ell(\mu_j, f)\cr &+ \frac{1}{2 \pi} \sum_{n=1}^{\infty} \frac{c(n)}{n^{1/2}} \hat{f} \left( \frac{\log{n}}{2 \pi} \right) + \frac{\overline{c(n)}}{n^{1/2}} \hat f\left( -\frac{\log n}{2 \pi}\right) \end{align} where \begin{equation} \ell(\mu, f) = \Re\left\{\int_\mathbb R \frac{\Gamma'}{\Gamma} \left( \frac{1}{2} \left( \frac{1}{2} + i t \right) + \mu \right) f(t) dt\right\} - \hat f(0)\log \pi \end{equation} and the sum $\sum_\gamma$ runs over all non-trivial zeros of $L(s)$. \end{lemma}
\begin{proof} This can be found in Iwaniec and Kowalski \cite[Page 109]{IK}, but note that they use a different normalization for the Fourier transform. \end{proof}
Note that if we assume the Ramanujan-Petersson conjecture then $c(n) \ll n^\epsilon$, but any mild growth estimate on the $c(n)$ is sufficient for our purposes. The general strategy we will use is as follows: to show that $L(1/2 + it)$ has a zero with $\alpha \le t \le \beta$, we want to take $f$ to be a good approximation of $\chi_{(\alpha, \beta)}$, the step function with value $1$ on $(\alpha, \beta)$ and $0$ elsewhere, and such that the support of $\hat{f}$ is contained in the interval $\left(- \frac{\log{2}}{2 \pi}, \frac{\log{2}}{2 \pi} \right)$. Then, the last sum on the RHS of the explicit formula disappears, and for the $L$-functions that we are considering, (\ref{weil}) should look like \begin{equation}\label{eq:misc1} \sum_{\alpha < \gamma < \beta} f(\gamma) \approx \frac{\log Q}{\pi} \hat{f}(0) + \frac{1}{2 \pi} \sum_{j=1}^d \ell(\mu_j, f). \end{equation} Since $f$ approximates $\chi_{(\alpha, \beta)}$, we expect that \[ \ell(\mu_j, f) \approx \Re \left\{\int_{\alpha}^{\beta} \left(\frac{\Gamma'}{\Gamma} \left( \frac{1}{4} + \frac{it}{2} + \mu_j \right) \right) dt\right\} - (\beta - \alpha)\log \pi. \] If $\beta-\alpha$ is large enough then this will be positive for any $\mu_j$. We will then find that the right side of \eqref{eq:misc1} is positive, which shows the existence of the zero that we are looking for. While we cannot actually use the characteristic function of the interval $(\alpha,\beta)$ in the explicit formula, we do not quite need to. As long as $f(x)$ is positive for $\alpha < x < \beta$ and negative elsewhere, the same argument will work. The function which we use here is the Selberg minorant $S_-(z)$ for the interval $(\alpha,\beta)$ and with Fourier transform supported in $( -(\log 2)/2\pi, (\log 2)/2\pi )$. We describe this function below. Note that with this approach there are fundamental limits to how small we can make $\beta - \alpha$. According to the uncertainty principle, we would need to make the support of $\hat f$ large if we want to get a good function with $\beta - \alpha$ small.
\subsection{Selberg's amazing functions}
As we have already described, we would like to use in the explicit formula a function $f(x)$ which is positive only in a prescribed interval and which has a compactly supported Fourier transform. Additionally, we have some reason to believe that a good candidate for our purposes should be close to $1$ inside this interval and close to $0$ outside of it. Selberg \cite[pages 213--225]{Se} gives a construction of such functions which are suitable for our purposes. These functions are easiest to describe by first defining the Beurling function \[ B(z) = 1 + 2\left(\frac{\sin \pi z}{\pi}\right)^2\left(\frac{1}{z} - \sum_{n=1}^\infty \frac{1}{(n + z)^2}\right). \] This function is a good approximation for the function \[ \mathrm{sgn}(x) = \left\{ \begin{array}{cl} 1 & \textrm{if } x > 0 \\ 0 & \textrm{if } x = 0 \\ -1 & \textrm{if } x < 0 \end{array}\right. \] and it is a majorant for $\mathrm{sgn}(x)$; that is, $\mathrm{sgn}(x) \le B(x)$ for all real $x$. Beurling (unpublished) showed that this is the best possible such approximation in the sense that if $F(z)$ is any entire function satisfying $\mathrm{sgn}(x) \le F(x)$ for all real $x$ and $F(z) \ll_\epsilon \exp( (2\pi + \epsilon) |z|)$, then \[ \int_{-\infty}^\infty \left(F(x) - \mathrm{sgn}(x)\right) \ dx \ge 1, \] with equality achieved if and only if $F(x) = B(x)$. (A proof can be found in \cite{V}.) To approximate the characteristic function of an interval, we can use a simple linear combination of Beurling functions.
\begin{definition} The Selberg minorant $S_-(z)$ for the interval $[\alpha, \beta]$ and parameter $\delta > 0$ is defined by \[\label{eqn:selminus} S_-(z) = -\frac{1}{2} \Big(B(\delta(\alpha - z)) + B(\delta(z - \beta))\Big). \] \end{definition}
Selberg \cite[pages 213--225]{Se} proved that whenever $\delta(\beta - \alpha)$ is an integer, $S_-(z)$ is a best possible minorant for the characteristic function of the interval $[\alpha,\beta]$, in the same sense that $B(z)$ is the best possible majorant for the $\mathrm{sgn}$ function, although $S_-(z)$ is not the unique best possible minorant. We do not make use of this extremal property anywhere, but it is the motivation behind our choice of using $S_-(z)$ in the explicit formula, and it may give hope that our results are not too far from optimal. We summarize some important properties of $S_-(z)$ that we do need in the following lemma.
\begin{lemma}\label{selberg-minorant-lemma} Let $S_-(z)$ be the Selberg minorant for the interval $[\alpha, \beta]$ with parameter $\delta$. Then the following hold. \begin{enumerate} \item $S_-(x) \le \chi_{(\alpha, \beta)}(x)$ for all real $x$. \item $\int_{-\infty}^\infty S_-(x) dx = \beta - \alpha - \frac{1}{\delta}$. \item $\hat S_-(x) = 0$ for $x > \delta$ or $x < -\delta$. \item For any $\epsilon > 0$, $S_-(z) \ll_{\delta, \alpha, \beta, \epsilon} \min\left(1, \frac{1}{|z|^2}\right)$ for $|\Im(z)| \le \epsilon$. \end{enumerate} \end{lemma}
\begin{proof} All of these facts can be found in Selberg's work \cite[pages 213--225]{Se}. \end{proof}
\begin{remark} For the function $f$ that we choose in the explicit formula, we will also need $f(x) > 0$ in a prescribed range. If $\delta(\beta - \alpha)$ is too small, then this might not be the case for the function $S_-(x)$. With the specific parameters we choose, this will hold for our application, however. \end{remark}
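As a sanity check on this construction (not needed for the proof), the following Python sketch evaluates $B$ by truncating the series, with an Euler--Maclaurin tail estimate and a finite integration window as our own approximations, and verifies the first two properties of Lemma~\ref{selberg-minorant-lemma} numerically:
\begin{verbatim}
import numpy as np

def beurling_B(x, N=5000):
    """Beurling's majorant of sgn(x), via a truncated series plus a tail
    estimate; B itself is entire, so only exact evaluation at the points
    x = 0, -1, -2, ... must be avoided."""
    n = np.arange(1, N + 1)
    series = np.sum(1.0 / (n + x) ** 2) + 1.0 / (N + x + 0.5)
    return 1.0 + 2.0 * (np.sin(np.pi * x) / np.pi) ** 2 * (1.0 / x - series)

def selberg_minorant(x, alpha, beta, delta):
    """S_-(x) for [alpha, beta]; Fourier transform vanishes outside
    [-delta, delta]."""
    return -0.5 * (beurling_B(delta * (alpha - x))
                   + beurling_B(delta * (x - beta)))

if __name__ == "__main__":
    delta = np.log(2) / (2 * np.pi)
    alpha, beta = -2.5 / delta, 2.5 / delta       # the interval used below
    xs = np.linspace(-150.0, 150.0, 4001) + 1e-4  # offset dodges the poles
    vals = np.array([selberg_minorant(x, alpha, beta, delta) for x in xs])
    chi = ((xs > alpha) & (xs < beta)).astype(float)
    assert np.all(vals <= chi + 1e-6)             # property (1): minorant
    print(np.trapz(vals, xs), beta - alpha - 1.0 / delta)  # property (2)
\end{verbatim}
The printed integral agrees with $\beta - \alpha - 1/\delta$ up to the truncation of the window.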
\end{remark}
\subsection{Proof of Theorem \ref{thm:selberg-class-bound}}
\begin{proof}[Sketch of proof of Theorem \ref{thm:selberg-class-bound}] Lemma \ref{selberg-minorant-lemma} tells us that in the explicit formula we may choose $f(s) = S_-(s)$. We do so, with $\alpha = -2.5/\delta$ and $\beta = 2.5/\delta$, where $\delta = \frac{\log 2}{2\pi}$. The explicit formula then reads \begin{equation}\label{eq-selberg-function-explicit-formula} \sum_\gamma S_-(\gamma) = \frac{\log Q}{\pi} \hat S_-(0) + \frac{1}{2\pi} \sum_{j=1}^d \Re\left\{ \int_{-\infty}^\infty \frac{\Gamma'}{\Gamma}\left(\frac{1}{4} + \frac{it}{2} + \mu_j\right)S_-(t) dt\right\} - \frac{d}{2\pi}\hat S_-(0)\log \pi \end{equation} Since $\hat S_-(0) = 4/\delta$ is positive and $Q\ge 1$, we may ignore the first term of this sum in establishing a lower bound. We then can check that \[ \Re\left\{\int_{-\infty}^\infty \frac{\Gamma'}{\Gamma}\left(\frac{1}{4} + \frac{it}{2} + \mu\right)S_-(t) dt\right\} > \hat S_-(0) \log \pi \] for all choices of $\mu$. The right hand side of \eqref{eq-selberg-function-explicit-formula} is thus positive. As $S_-(\gamma)$ is only positive when $\alpha < \gamma < \beta$, we conclude that $L(1/2 + i\gamma) = 0$ for some $\gamma$ in this range. More details of this computation will appear in \cite{B}. \end{proof}
\begin{remark} Note that $\beta - \alpha \approx 45.3236$. It should be possible to use the Selberg functions to make this difference a little smaller without changing the proof, but not by much. We have chosen $\alpha$ and $\beta$ as above because the Selberg function has a much nicer representation when $(\beta - \alpha)\delta$ is an integer, which simplifies computation. \end{remark}
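The inequality in the proof can be probed numerically. The rough check below truncates the $t$-integral to a finite window and uses the digamma function from \texttt{mpmath} (with the minorant code from the previous sketch repeated for self-containment); it prints the margins for a few purely imaginary $\mu = iy$, which are expected to be positive. This is a sanity check only, not a substitute for the rigorous verification in \cite{B}:
\begin{verbatim}
import numpy as np
from mpmath import digamma

def beurling_B(x, N=5000):
    n = np.arange(1, N + 1)
    s = np.sum(1.0 / (n + x) ** 2) + 1.0 / (N + x + 0.5)
    return 1.0 + 2.0 * (np.sin(np.pi * x) / np.pi) ** 2 * (1.0 / x - s)

def selberg_minorant(x, a, b, d):
    return -0.5 * (beurling_B(d * (a - x)) + beurling_B(d * (x - b)))

delta = np.log(2) / (2 * np.pi)
alpha, beta = -2.5 / delta, 2.5 / delta
S_hat_0 = beta - alpha - 1.0 / delta            # \hat S_-(0) = 4 / delta

ts = np.linspace(-400.0, 400.0, 8001) + 1e-4    # truncated window
S = np.array([selberg_minorant(t, alpha, beta, delta) for t in ts])
for y in (0.0, 4.72, 12.47, 30.0):
    psi = np.array([float(digamma(0.25 + 0.5j * t + 1j * y).real)
                    for t in ts])
    margin = np.trapz(psi * S, ts) - S_hat_0 * np.log(np.pi)
    print(y, margin)                            # expected to be positive
\end{verbatim}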
Specifically, that $L$-function was found from a general search for degree-4 $L$-functions, not merely from an attempt to find examples of $L$-functions with a high lowest zero. Perhaps that makes it even more surprising that such an example exists. \begin{figure}[htp] \scalebox{0.7}[0.7]{\includegraphics{eigregion1.eps}} \hskip 0.1in \scalebox{0.7}[0.7]{\includegraphics{eigregion2.eps}} \caption{\sf The region outside the solid curve describes pairs $(\nu_1,\nu_2)$ for which it is possible that an $L$-function with functional equation \eqref{eqn:fe} exists, where $(\mu_1,\mu_2,\mu_3,\mu_4)=(\nu_1,-\nu_1,\nu_2,-\nu_2)$. The region outside the dotted curve describes pairs $(\nu_1,\nu_2)$ for which such an $L$-function, if it exists, must have a zero lower than the first zero of the Riemann zeta function. The black dot corresponds to the $L$-function shown in Figure~\ref{fig:degree4}. } \label{fig:excluded} \end{figure}
\section{Introduction} The literature on tracking of isolated or multiple objects in uncluttered and cluttered environments is broad and varied, with methodological approaches ranging from variational techniques to an extensive variety of Kalman filters, particle filters, and other sequential Bayesian filters~\cite{Anderson-Moore:optimal-filtering, BarSalom-etal:tracking, Jazwinski:filtering, Mahler:information-fusion}. Optimal tracking (filtering or state estimation) requires identification of the appropriate length and time scales of an object's motion, and the magnitudes of the uncertainty (observational and dynamical noise) and the nonlinearity at these scales. There are three broad scenarios: (1)~when the update cycle is fast, such as in guidance control, Kalman filters are appropriate, because system behaviours are almost linear; (2)~when nonlinearity is significant, and the noise largely dynamical, as opposed to observational, then particle filters are appropriate; and (3)~when nonlinearity is significant, but the noise largely observational, then variational methods are most appropriate. Mathematically, situations (1) and~(2) are best treated by assuming the underlying process is stochastic, whereas in situation~(3) it is best to assume a deterministic dynamical system: the success and universal acceptance of stochastic methods, over the past decades, has led to them being applied in situation~(3), where they are not the most appropriate choice~\cite{Judd-Stemler:deadparrot2}. Here we present an approach to tracking in situation~(3) based on \emph{shadowing filters}, which derive from the modern concept of shadowing in dynamical systems theory; literally meaning \emph{to find a trajectory that \textbf{shadows} the observations}~\cite{Gilmour:PhD}. The methodology has its roots in the work of Laplace and Gauss fitting celestial orbits as curves~\cite{Davis:Gauss}, and subsequent least squares approaches for ballistic trajectories. Shadowing filters lie within the domain of variational methods, as in optimal control, but are subtly different~\cite{Judd:dcip}. Shadowing filters are not equivalent to 4D-variational assimilation often employed in meteorological and oceanographic modelling and forecasting. Nor are they equivalent to dynamic programming approaches that find a Viterbi path~\cite{Forney:Viterbi-algorithm}. To develop our methodology, a one-dimensional, or scalar, case is considered first, which is then extended to the multi-dimensional vector case, where the observations are of the components of the Cartesian position vector. From this basis other relevant observation situations that are often encountered are considered, for example, using range or bearing observations from one or more sensors. A significant problem that arises in these types of observation networks is the way the covariance of observational errors varies with position, in particular, the singularities that occur when the target and sensors are collinear or co-planar. Possibly the most surprising outcome of the shadowing filter approach is that singular covariances of observations are not a significant problem, and targets can be tracked through missing data and singularities relatively easily; indeed, sometimes covariances can be ignored entirely, obtaining very efficient tracking filters. To keep the exposition of the algorithm and its benefits clear, we restrict attention to tracking an isolated vehicle in an uncluttered environment. 
It should be clear, once the methodology is understood, that since the shadowing filter assumes a tracked object maintains a contiguous trajectory, it will perform well with multiple targets in cluttered environments. \section{Formulation and implementation} \subsection{Scalar case} Our initial interest is tracking the position of a point object in one dimension, given a sequence of noisy observations. Let $\mathcal{P}_i\in\mathbb{R}$ be the observed position at time~$t_i$ for $i=0,\dots,n$, and $\sigma_i^2$ be the variance of the observational error. The object's dynamics are modelled by its position $p_i\in\mathbb{R}$ and velocity $v_i\in\mathbb{R}$ at $t_i$, and constant acceleration $a_i\in\mathbb{R}$ for $t_i\leq{}t<t_{i+1}$. For notational convenience, define $\tau_i=t_{i+1}-t_i$. Our goal is to have $p_i$ close to $\mathcal{P}_i$ subject to the accelerations not being excessively large or changeable. We might therefore choose to minimise the total square error $\sum_{i=0}^n\sigma_i^{-2}(\mathcal{P}_i-p_i)^2$ subject to accelerations over the interval being bounded, $a_i^2\leq\xi^2$ for $i=0,\dots,n-1$. Bounded acceleration is an appropriate constraint, but it introduces technical difficulties. Although these difficulties can be overcome~\cite{Judd:dcip}, it is more convenient and efficient to instead constrain the root mean squared acceleration over the entire trajectory, $\sum_{i=0}^{n-1}\tau_ia_i^2\leq{}(t_n-t_0)\xi^2$. Assuming Newton's laws and Galilean transforms apply to the point object's motion, the stated optimisation problem can be posed using a Lagrangian: \begin{align} \label{eq:L} L &= \frac{1}{2}\sum_{i=0}^n\sigma_i^{-2}(\mathcal{P}_i-p_i)^2 \\ & + \sum_{i=0}^{n-1} \lambda_{i}(p_{i+1}-p_{i}-v_{i}\tau_{i}-\frac12 a_{i}\tau_{i}^2) \\ & + \sum_{i=0}^{n-1} \mu_{i}(v_{i+1}-v_{i}-a_{i}\tau_{i}) \\ & + \eta \left(\sum_{i=0}^{n-1}\tau_ia_i^2-(t_n-t_0)\xi^2\right), \label{eq:L4} \end{align} where $\lambda_i\in\mathbb{R}$, $\mu_i\in\mathbb{R}$ and $\eta\geq0$ are dual variables. Solving the optimisation, and defining \begin{equation} \label{eq:spline} p(t) = p_i+v_i(t-t_i)+\frac12a_i(t-t_i)^2 \quad\text{for}\quad t_i\leq t\leq t_{i+1}, \end{equation} provides an optimal quadratic spline estimate of a particle's path assuming piecewise constant accelerations. The optimal solution occurs where all the partial derivatives of $L$ are zero: \newcommand{\pd}[2]{\frac{\partial#1}{\partial#2}} \begin{align} \label{eq:Lx} \pd{L}{p_i} &= \left.\begin{cases} -\sigma_0^{-2}(\mathcal{P}_0-p_0) - \lambda_0, & i=0,\\ -\sigma_i^{-2}(\mathcal{P}_i-p_i) + \lambda_{i-1} - \lambda_{i}, & 0<i<n,\\ -\sigma_n^{-2}(\mathcal{P}_n-p_n) + \lambda_{n-1}, & i=n,\\ \end{cases} \right\}=0 \\ \label{eq:Lu} \pd{L}{v_i} &= \left.\begin{cases} -\lambda_{0}\tau_{0}-\mu_{0}, & i=0,\\ -\lambda_{i}\tau_{i}+\mu_{i-1}-\mu_{i}, & 0<i<n, \end{cases}\right\}=0 \\ \label{eq:La} \pd{L}{a_i} &= -\frac12\lambda_{i}\tau_{i}^2 - \mu_{i}\tau_{i} + 2\eta{}\tau_ia_{i}=0,\\ \label{eq:Ll} \pd{L}{\lambda_i} &= p_{i+1}-p_{i}-v_{i}\tau_{i}-\frac12 a_{i}\tau_{i}^2=0,\\ \label{eq:Lm} \pd{L}{\mu_i} &= v_{i+1}-v_{i}-a_{i}\tau_{i}=0,\\ \label{eq:Le} \pd{L}{\eta} &= \sum_{i=0}^{n-1}\tau_ia_{i}^2-(t_n-t_0)\xi^2=0. \end{align} Equation~(\ref{eq:Lx}) is defined for $0\leq{}i\leq{}n$ while (\ref{eq:Lu}--\ref{eq:Lm}) are defined for $0\leq{}i<n$. With the exception of~(\ref{eq:Le}), the remaining five equations are linear in the unknowns. 
However, $\xi$ and $\eta$ are related through term~(\ref{eq:L4}) of the Lagrangian, which is the only place they appear. Hence, one can solve the linear equations~(\ref{eq:Lx}--\ref{eq:Lm}) for a fixed $\eta$, then compute the corresponding value of $\xi$ from eq.~(\ref{eq:Le}). The optimal solution for any $\xi$ can be approximated arbitrarily closely by an efficient one-dimensional search, such as Brent's method~\cite{Press-etal:numerical-recipes}. In practice it is unlikely that $\xi$ needs to be specified precisely; after all, it is only a bound on the root mean squared acceleration. Consequently, it is usually sufficient to work only with $\eta$, treating it as a smoothing, or regularisation, parameter. Combining (\ref{eq:Ll}) and (\ref{eq:Lm}) to eliminate the $v_i$ gives\footnote{To do this multiply (\ref{eq:Ll}) by $\tau_{i-1}$, then take another copy of (\ref{eq:Ll}) with $i$ replaced with $i-1$, multiply by $\tau_i$, and subtract this from the former. Then use (\ref{eq:Lm}) to eliminate $v_i-v_{i-1}$.} \begin{equation} \label{eq:xa} p_{i+1}\tau_{i-1}-p_i(\tau_i+\tau_{i-1})+p_{i-1}\tau_i = \frac12(a_i\tau_i+a_{i-1}\tau_{i-1})\tau_{i-1}\tau_i, \end{equation} for $0<i<n$. Combining (\ref{eq:Lx}), (\ref{eq:Lu}) and (\ref{eq:La}) and eliminating the dual variables $\lambda_i$ and $\mu_i$ (as explained in the following) obtains another set of expressions relating the $p_i$ and $a_i$, which, when combined with~(\ref{eq:xa}), enable a near optimal solution to be computed very efficiently. There is a certain amount of redundancy in the equations just stated, but to assist formulation of a solution using matrix notation it is advantageous to retain the redundancy. Define column vectors $\mathcal{P}=(\mathcal{P}_0,\dots,\mathcal{P}_n)^T\in\mathbb{R}^{n+1}$, $p=(p_0,\dots,p_n)^T\in\mathbb{R}^{n+1}$, $\lambda=(\lambda_0,\dots,\lambda_{n-1})^T\in\mathbb{R}^n$, $\mu=(\mu_0,\dots,\mu_{n-1})^T\in\mathbb{R}^n$, and finally $a=(a_0,\dots,a_{n-1})^T\in\mathbb{R}^{n}$. Define $\tau$ to be the $n\times{}n$ matrix of zeros with main diagonal $(\tau_0,\dots,\tau_{n-1})$, and $\mathcal{I}$ to be the $(n+1)\times(n+1)$ matrix of zeros with main diagonal $(\sigma_0^{-2},\dots,\sigma_{n}^{-2})$. Define an $n\times{}n$ matrix $D$ to have all entries zero except $-1$ on the main diagonal and $1$ on the first lower diagonal, and similarly define an $(n+1)\times{}n$ matrix $E$: specifically \begin{equation}\label{eq:DE} D_{ij} = E_{ij} = \begin{cases} -1, & i=j,\\ \phantom{-}1, & i=j+1,\\ \phantom{-}0, & \text{otherwise,} \end{cases} \end{equation} when the entry is defined. It follows that the linear equations (\ref{eq:Lx}), (\ref{eq:Lu}) and~(\ref{eq:La}) can be succinctly expressed as \begin{equation} \label{eq:succinct} E\lambda = \mathcal{I}(\mathcal{P}-p), \qquad D\mu=\tau\lambda, \qquad 2\eta\tau{}a=\frac12\tau^2\lambda+\tau\mu. \end{equation} In the last of the three equations of~(\ref{eq:succinct}), $\tau$ is invertible, so that a $\tau$ factor can be canceled on the left of each term. Define an $n\times{}n$ matrix $L$, and an $n\times{}(n+1)$ matrix $M$, to have a lower triangular form: \begin{equation}\label{eq:ML} L_{ij} = M_{ij} = \begin{cases} -1, & i\geq{}j,\\ \phantom{-}0, & \text{otherwise,} \end{cases} \end{equation} when the entry is defined. 
It can be easily verified that $DL=I$ and $EM=J$ where $I$ is the $n\times{}n$ identity matrix, and $J$ is an $(n+1)\times(n+1)$ matrix with \begin{equation}\label{eq:J} J_{ij} = \begin{cases} \phantom{-}1, & i=j \text{ and } i\neq{}n+1,\\ -1, & i=n+1 \text{ and } j\neq{}n+1,\\ \phantom{-}0, & \text{otherwise.} \end{cases} \end{equation} The identities $DL=I$ and $D\mu=\tau\lambda$ imply\footnote{If $DL=I$, then $DL\tau\lambda=\tau\lambda$, but $D\mu=\tau\lambda$, implying $\mu=L\tau\lambda$.} $\mu=L\tau\lambda$. The identities $EM=J$ and $E\lambda=\mathcal{I}(\mathcal{P}-p)$ imply\footnote{If $EM=J$, then $EM\mathcal{I}(\mathcal{P}-p)=J\mathcal{I}(\mathcal{P}-p)$. If $E\lambda=\mathcal{I}(\mathcal{P}-p)$ also, then, since the first $n$ rows of $J$ are the $n\times{}n$ identity, $\lambda=M\mathcal{I}(\mathcal{P}-p)$. Substituting this $\lambda$ back into $E\lambda=\mathcal{I}(\mathcal{P}-p)$ gives $J\mathcal{I}(\mathcal{P}-p)=\mathcal{I}(\mathcal{P}-p)$; the first $n$ rows are a tautology, but the last implies $\sum_{i=0}^{n-1}\sigma_i^{-2}(\mathcal{P}_i-p_i)=0$.} that $\lambda=M\mathcal{I}(\mathcal{P}-p)$ and $\sum_{i=0}^{n-1}\sigma_i^{-2}(\mathcal{P}_i-p_i)=0$. It follows, by substitution into the last equation of~(\ref{eq:succinct}), that \begin{equation} \label{eq:a2} 2\eta{}a = \left(\frac12\tau{}M+L\tau{}M\right)\mathcal{I}(\mathcal{P}-p), \end{equation} which can be combined with (\ref{eq:xa}) as follows. Define $(n-1)\times{}(n+1)$ matrices $A$ and $B$, and $(n-1)\times{}n$ matrix $G$: \begin{equation}\label{eq:G} G_{ij} = \begin{cases} \tau_{i}^2\tau_{i+1}, & i=j,\\ \tau_{i}\tau_{i+1}^2, & i+1=j,\\ \phantom{}0, & \text{otherwise,} \end{cases} \end{equation} \begin{equation}\label{eq:B} B_{ij} = \begin{cases} \tau_{i+1}, & i=j,\\ -(\tau_i+\tau_{i+1}), & i+1=j,\\ \tau_{i}, & i+2=j,\\ 0, & \text{otherwise,} \end{cases} \end{equation} \begin{equation}\label{eq:A} A = \frac14 G\left(\frac12\tau{}M+L\tau{}M\right). \end{equation} Then (\ref{eq:xa}) can be written $Bp=\frac12Ga$, which when combined with (\ref{eq:a2}) obtains the equation \begin{equation} \label{eq:semi-master} (A\mathcal{I}+\eta{}B)p = A\mathcal{I}\mathcal{P}, \end{equation} and the additional constraint $\sum_{i=0}^{n-1}\sigma_i^{-2}(\mathcal{P}_i-p_i)=0$. For stability reasons discussed in section~\ref{sec:presentation}, this constraint will be extended to $\sum_{i=0}^{n}\sigma_i^{-2}(\mathcal{P}_i-p_i)=0$. Defining an $n\times{}(n+1)$ matrix $\mybar{B}$ to be $B$ augmented with a final row of zeros, and an $n\times{}(n+1)$ matrix $\mybar{A}$ to be $A$ augmented with a final row of ones, then \begin{equation} \label{eq:master} \left(\mybar{A}\mathcal{I}+\eta\mybar{B}\right) p = \mybar{A}\mathcal{I}\mathcal{P}, \end{equation} encodes both (\ref{eq:semi-master}) and the extended constraint. Solving~(\ref{eq:master}) by singular value decomposition obtains a least squares approximate solution for the position~$p$ for a given smoothing parameter~$\eta$. See section~\ref{sec:presentation} on the nature of this approximation and the optimal presentation of the data in~$\mathcal{P}$. \emph{It is \textbf{important} to read section~\ref{sec:presentation} before implementing~(\ref{eq:master}).} \subsection{Vector case} Consider now the situation where a point vehicle is positioned in a $d$-dimensional Euclidean space with Cartesian coordinates. 
Suppose that the coordinate positions are observed as a sequence $\mathcal{P}_i\in\mathbb{R}^d$ in such a way that the observational errors have a $d\times{}d$ covariance matrix $\mathcal{C}_i$, and corresponding information matrix $\mathcal{I}_i=\mathcal{C}_i^{-1}$. The quantities to be determined are $p_i\in\mathbb{R}^d$, $v_i\in\mathbb{R}^d$, and $a_i\in\mathbb{R}^d$, which are all now $d$-dimensional column vectors. If the aim is to track the trajectory under the assumption of bounded RMS magnitude of the acceleration, the Lagrangian~(\ref{eq:L}) now becomes vectorised as \begin{eqnarray} \label{eq:Ld} L &=& \frac{1}{2}\sum_{i=0}^n(\mathcal{P}_i-p_i)^T\mathcal{I}_i(\mathcal{P}_i-p_i) \\ && + \sum_{i=0}^{n-1} \lambda_{i}^T(p_{i+1}-p_{i}-v_{i}\tau_{i}-\frac12 a_{i}\tau_{i}^2) \\ && + \sum_{i=0}^{n-1} \mu_{i}^T(v_{i+1}-v_{i}-a_{i}\tau_{i}) \\ && + \eta \left(\sum_{i=0}^{n-1}\tau_ia_i^Ta_i-(t_n-t_0)\xi^2\right), \label{eq:Ld4} \end{eqnarray} where $\lambda_i\in\mathbb{R}^d$, $\mu_i\in\mathbb{R}^d$ are now $d$-dimensional column vectors, but $\eta\in\mathbb{R}$. The superscript $T$ indicates the transpose. The solution of the vectorised optimisation problem proceeds identically to the scalar case, using the same linear algebra methods. Let $\mathcal{P}\in\mathbb{R}^{d(n+1)}$ denote a column vector being the time-series of $n+1$~observations~$\mathcal{P}_i\in\mathbb{R}^d$ stacked in $d$-dimensional blocks, and similarly for position variables~$p\in\mathbb{R}^{d(n+1)}$. Let $\mathcal{I}$ denote the $(n+1)d\times(n+1)d$ block diagonal matrix with the $d\times{}d$ information matrices $\mathcal{I}_i$ along the diagonal. Finally, let $I_d$ denote the $d\times{}d$ identity matrix and let $\myhat{M}=M\otimes{}I_d$ denote the \emph{Kronecker product} of an arbitrary matrix~$M$ with~$I_d$, that is, $\myhat{M}$ has a block structure where each scalar entry~$M_{ij}$ of~$M$ becomes a $d\times{}d$-block $M_{ij}I_d$ of $\myhat{M}$. Then the vectorised solution is \begin{equation} \label{eq:masterd} \left(\mybar{\myhat{A}}\mathcal{I}+\eta\mybar{\myhat{B}}\right) p = \mybar{\myhat{A}}\mathcal{I}\mathcal{P}. \end{equation} \subsection{Implementation and interpretation of the filter}\label{sec:presentation} Solving (\ref{eq:master}) or~(\ref{eq:masterd}) obtains an \emph{approximate} solution to the optimal shadowing trajectory for a given smoothness~$\eta$. To see this, note that the system of equations~(\ref{eq:master}) is under-determined: there are $n$~linear equations in $n+1$ unknowns. This occurs because in deriving~(\ref{eq:xa}) the velocity variables were eliminated, but to completely define a trajectory the velocity needs to be known at some time; usually the initial or final velocity is specified or solved for. To solve for the velocity requires introducing another $n$~variables and $n$~equations to solve for all the velocities, which is significant additional computation for very little benefit. The approximation~(\ref{eq:master}) relies on the fact that if the time window of the trajectory is sufficiently long, then accurate specification of the initial velocity is not required. It just means the initial part of the trajectory may not accurately fit the observations. However, leaving the initial velocity unspecified can lead to instability for short time windows. Imposing the extended constraint $\sum_{i=0}^{n}\sigma_i^{-2}(\mathcal{P}_i-p_i)=0$ overcomes possible instability by implicitly defining an initial velocity. 
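To make the preceding construction concrete, the following minimal Python sketch assembles the matrices of eqs.~(\ref{eq:DE}), (\ref{eq:ML}), (\ref{eq:G}), (\ref{eq:B}) and~(\ref{eq:A}), and solves the augmented system~(\ref{eq:master}) with an SVD-based least squares solve. It is an illustrative sketch under our own naming conventions, not production code: the time-reversal bookkeeping discussed next is omitted, and no attempt is made to exploit sparsity.

\begin{verbatim}
import numpy as np

def scalar_shadowing_filter(P, tau, sigma2, eta):
    # P: (n+1,) observations; tau: (n,) time gaps tau_i = t_{i+1} - t_i;
    # sigma2: (n+1,) observational variances; eta: smoothing parameter.
    P, tau, sigma2 = (np.asarray(v, dtype=float) for v in (P, tau, sigma2))
    n = P.size - 1
    Imat = np.diag(1.0 / sigma2)                  # information matrix
    T = np.diag(tau)
    L = -np.tril(np.ones((n, n)))                 # L_ij = -1 for i >= j
    M = -np.tril(np.ones((n, n + 1)))             # M_ij = -1 for i >= j
    B = np.zeros((n - 1, n + 1))                  # eq. (B)
    G = np.zeros((n - 1, n))                      # eq. (G)
    for i in range(n - 1):
        B[i, i:i + 3] = tau[i + 1], -(tau[i] + tau[i + 1]), tau[i]
        G[i, i:i + 2] = tau[i] ** 2 * tau[i + 1], tau[i] * tau[i + 1] ** 2
    A = 0.25 * G @ (0.5 * T @ M + L @ T @ M)      # eq. (A)
    Abar = np.vstack([A, np.ones(n + 1)])         # final row of ones
    Bbar = np.vstack([B, np.zeros(n + 1)])        # final row of zeros
    # Least squares (SVD) solution of (Abar I + eta Bbar) p = Abar I P.
    p, *_ = np.linalg.lstsq(Abar @ Imat + eta * Bbar, Abar @ Imat @ P,
                            rcond=None)
    return p
\end{verbatim}

For the vector case one simply replaces each scalar entry by the corresponding $d\times{}d$ block, as in~(\ref{eq:masterd}).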
In formulating the Lagrangian, forward differences were used to express position in terms of velocity and acceleration. Unfortunately, this results in $A$ and~$B$ having a lower triangular form. Consequently, the approximation errors are largest for the $p_i$ with largest $i$, which is not what is wanted for state estimation and forecasting; it is preferable that the smallest errors are at the most recent times. Reformulating the Lagrangian with backward differences solves this problem; however, there is a much simpler solution: initially reversing the time-series data sequence $(\tau_i,\mathcal{P}_i,\mathcal{I}_i)$, applying the filter (\ref{eq:master}) or (\ref{eq:masterd}), then reversing the shadowing trajectory time-series $p_i$ to obtain the desired result. Even this trick is unnecessary. Let $R$ denote the matrix that reverses a vector; then the time-series reversal trick is equivalent to changing (\ref{eq:semi-master}) to \begin{equation} \left(A\mathcal{I}+\eta{}B\right)Rp = AR\mathcal{I}\mathcal{P}, \end{equation} but since $R^{-1}=R$, multiplying on the left by~$R$ obtains \begin{equation} \left(RAR\mathcal{I}+\eta{}RBR\right)p = RAR\mathcal{I}\mathcal{P}, \end{equation} where the matrices $RAR$ and~$RBR$ are just $A$ and $B$ with their rows and columns reversed. Hence, the time-reversal trick amounts to some simple bookkeeping when constructing $A$ and~$B$. \section{Illustrative examples} This section provides demonstrations of the use of the proposed methods. The scalar filter is considered first, both as an off-line smoothing filter and as a sequential state-estimator. The vector filter is considered for observations in Cartesian coordinates, with and without correlation, which is a preliminary to section~\ref{sec:noncartesian} where non-Cartesian observations are considered. \subsection{Scalar filter for smoothing and sequential tracking} Here tracking of one observed variable is examined for increasing values of the smoothing parameter~$\eta$. The filter is employed as a smoothing filter over the entire observation window, and as a sequential state-estimator. In both cases the time-reversal trick discussed in section~\ref{sec:presentation} is employed. This example employs a large red noise component~$\chi_t$ to mimic a vehicle maneuvering in an unpredictable way. \begin{figure} \centering \includegraphics[width=\linewidth]{fig1a} \caption{Position tracking of $25+10\sin(t/15)+\chi_t+3\epsilon_t$, $0\leq{}t\leq{}100$, where $\epsilon_t$ is a white noise process $N(0,1)$, and $\chi_t$ an independent cumulative of a white noise process $N(0,1)$. Shadowing filter results for smoothing parameters as stated.} \label{fig:1a} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth,trim=0 120 0 120]{fig1b} \caption{Computed accelerations for tracking shown in fig.~\ref{fig:1a} for selected~$\eta$. Accelerations are scaled by $\sqrt{\eta}$ to allow easier comparison.} \label{fig:1b} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth,trim=30 20 220 165]{fig1c} \caption{Sequential tracking of the same data as shown in fig.~\ref{fig:1a} for selected~$\eta$. State estimates use only observations up to that time, and the final filtered state at that time is plotted.} \label{fig:1c} \end{figure} Figure~\ref{fig:1a} reveals how the implied approximation in the solution~(\ref{eq:master}) results in the position tracking deviating from ideal at the beginning ($t=0$) of the time series. 
Without the time-reversal trick, this deviation would have occurred at the end ($t=n$); although the deviation is small, it is significant, but if the time-series window is large enough, then there is no significant effect for $t>10$. Optimal smoothing appears to occur in the range $10<\eta<100$. Figure~\ref{fig:1b} shows how smaller~$\eta$ result in large and rapidly switching accelerations, while larger~$\eta$ result in much smaller accelerations applied over longer periods. Figure~\ref{fig:1c} demonstrates using the same filter as in figure~\ref{fig:1a} as a sequential state-estimator; the filter is applied only to the observations up to that time. In this tracking mode the smoothing parameter~$\eta$ is seen to act like inertial damping. For the larger $\eta=10000$ the tracking lags the true trajectory. For smaller $\eta$ values the tracking is better, but note how a sequence of observations with repeated negative bias for $60<t<75$ results in the tracking overshooting the turn near $t=70$. When the repeated bias ends around $t=75$ the near optimal $\eta=100$ track jumps back to good estimates, while the $\eta=1000$ track turns back smoothly toward the true trajectory. \subsection{Vector filter with uncorrelated observations} \label{sec:indept} If the observations of each component of the $d$-dimensional position are uncorrelated, then filtering can be accomplished very efficiently using the scalar filter, which will come in useful later when non-Cartesian observations are considered. In this case the covariance matrices $C_i$ are all diagonal, and it is unnecessary to use the vectorised filter~(\ref{eq:masterd}), which has been expanded by a Kronecker product with~$I_d$; instead it is sufficient to solve~(\ref{eq:master}) separately for each component. If all the components have proportionally the same variance at each time, then the singular value decomposition only needs to be computed once for all components, that is, one information matrix~$\mathcal{I}$ is needed, whose elements are inversely proportional to the variances, and $\mathcal{P}$ becomes an $(n+1)\times{}d$ matrix of observations, so that eq.~(\ref{eq:master}) then solves for all components of~$p$ simultaneously. \begin{figure} \centering \begin{tabular}{ll} (a) & (b) \\ \includegraphics[trim={130 0 110 0},width=0.4\linewidth]{fig2a} & \includegraphics[trim={130 0 110 0},width=0.4\linewidth]{fig2b} \end{tabular} \caption{Position tracking for $0\leq{}t\leq{}150$ of the path $(x,y)=10(t-10)/150+(1/3)(1-t) (\sin(t/15),2-t/15).$ Observational errors are independent in each component, with standard deviation~$5$. (a)~Final tracking curves for various $\eta$. (b)~Sequential position estimates for $\eta=1000$, that is, the last state of a shadowing trajectory obtained using all observations up to a given time.} \label{fig:p2d} \end{figure} Figure~\ref{fig:p2d} shows tracking in two dimensions in this situation. Observe how a very large $\eta=10000$ results in poor tracking as the tracking curve is pulled toward the mean of the observations. The poor tracking at the beginning can also be observed for larger~$\eta$. Figure~\ref{fig:p2d}(b) shows the important case of sequential estimates, that is, sequential tracking of the object using all the observations obtained up to a given time. For efficiency reasons, one would in practice only use a finite window of past observations. 
Details of how to determine the optimal window for a given system and purpose are beyond the scope of this paper and are discussed in the general context of shadowing filters elsewhere~\cite{Stemler-Judd:guide1}. \section{Using non-Cartesian observations}\label{sec:noncartesian} A frequently encountered tracking problem involves using bearing and range observations, or combinations of multiple bearing or range observations. Several important situations are worth considering. Active radar location uses range and bearing information. Satellite interferometry uses range and bearing information, but the range is much more accurately measured than the bearing. Global positioning by satellite uses only range information, but from multiple reference satellites. Tracking wireless devices can use range information inferred from signal power at multiple transponders. Passive sonar location provides bearing information, but poor range; often bearings from multiple sensor points are used. All of the applications mentioned can be dealt with using the vector filter (\ref{eq:masterd}), by transforming the observations into \emph{raw Cartesian position estimates} and computing the appropriate information matrix. The transformations are simple geometry, but computing the information matrices requires some approximation or restrictions. Let $p\in\mathbb{R}^d$ be the position in Cartesian coordinates, and let $q\in\mathbb{R}^d$ be a vector of $d$ noise-free observations of the position in some other coordinates. Suppose there is an invertible function~$f$, on some domain, such that $p=f(q)$. Given a noisy observation $\mathcal{Q}=q+\Delta q$, the transform~$f$ provides a raw position estimate~$\mathcal{P}=f(\mathcal{Q})=p+\Delta p$. Given the covariance matrix~$C_q$ of the observations, the covariance $C_p$ of the estimate is required. The column vector $\Delta q$ is the error in the observation, and to a first approximation, the error in the estimate is $\Delta p\approx{}J\Delta q$, where $J=\partial_qf(q)$ is the Jacobian matrix of~$f$ at~$q$. Since the covariance $C_p$ is the expected value of the outer product $\Delta p\,\Delta p^T$, it follows that, to a first approximation, $C_p=JC_qJ^T$. It also follows that the corresponding information matrices are related by $\mathcal{I}_p=K^T\mathcal{I}_qK$, where $K=J^{-1}=\partial_p(f^{-1})$. Note that since only the information matrix is needed in the shadowing filter under discussion, it is sometimes easier to compute $K$ directly using $f^{-1}$ than it is to compute $J$ and invert it. This is the case in some of the following examples. An important problem, which will be returned to in each of the following sections, is that although the covariance matrix~$C_q$ may be well known, the transformation matrices $J$ and~$K$ depend on the target's location, which is unknown. The raw position estimate $\mathcal{P}=f(\mathcal{Q})$ could be used, but this introduces errors in the supposed covariance. When the transformed coordinates are highly correlated and the transform very non-linear, then a small error in the raw position estimate can give rise to a very wrong estimate of the correlation. This problem plagues Kalman filters, and other filters that need a covariance estimate to reliably estimate the state. A significant advantage of a shadowing filter is the robustness gained from finding a shadowing trajectory, rather than just a current position estimate. 
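As an illustration of this first-order propagation, the sketch below forms $J=\partial_qf(q)$ numerically by central differences at the observed~$\mathcal{Q}$ (precisely the approximation cautioned about above, since the true $q$ is unknown), and returns the raw position estimate together with $\mathcal{I}_p=K^T\mathcal{I}_qK$. The function names and the finite-difference step are our own illustrative choices.

\begin{verbatim}
import numpy as np

def raw_estimate_and_information(f, Q, Iq, h=1e-6):
    # Raw Cartesian estimate P = f(Q) and first-order information
    # matrix I_p = K^T I_q K, with K = J^{-1} and J = df/dq at Q.
    Q = np.asarray(Q, dtype=float)
    P = f(Q)
    d = Q.size
    J = np.empty((d, d))
    for j in range(d):                  # central-difference Jacobian
        e = np.zeros(d)
        e[j] = h
        J[:, j] = (f(Q + e) - f(Q - e)) / (2.0 * h)
    K = np.linalg.inv(J)                # fails when the geometry degenerates
    return P, K.T @ Iq @ K

# Example (range and bearing from a site (a, b), as in the next subsection):
# f = lambda q: np.array([a + q[0] * np.cos(q[1]), b + q[0] * np.sin(q[1])])
\end{verbatim}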
This robustness means that the correlations in the raw position estimates are of little importance, that is, ignoring the correlation has little effect on the quality of the tracking. \subsection{Range and bearing observations} Consider a target tracked in the plane, position $p=(x,y)$, using observations $q=(r,\theta)$, where $r$ is the range from a reference point $(a,b)$ and $\theta$ the bearing in radians measured in the anti-clockwise direction from the $x$-axis. The transformation $p=f(q)$ is given by \begin{eqnarray} x &=& a + r\cos\theta,\\ y &=& b + r\sin\theta. \end{eqnarray} Under the assumption that $r$ is not close to zero and the variances of $r$ and $\theta$ are small, the covariance and information matrices of $x$ and~$y$ are approximated as previously described using \begin{equation} J = \begin{pmatrix} \cos\theta & -r\sin\theta\\ \sin\theta & r\cos\theta \end{pmatrix} \end{equation} or \begin{equation} K = \begin{pmatrix} \cos\theta & \sin\theta\\ -(1/r)\sin\theta & (1/r)\cos\theta \end{pmatrix}. \end{equation} Figure~\ref{fig:br} shows the tracking of a target using range and bearing information of different accuracy. In panels (a) and (b) the correlation of the raw position estimates is ignored, and the results are good, that is, the shadowing filter provides significant improvement over the raw position estimates. Panel~(c) uses the same data as panel~(b), but tries to take into account the correlation of the raw position estimates using the correlation computed at the raw position estimate; the result is worse than assuming no correlation. If the same is attempted for the data of panel~(a), the result is a worse failure: the radius is poorly estimated, and since it appears as a reciprocal in the transformation, small errors in the radius lead to very poorly estimated correlations, so much so that the quality of the filtering is much worse. This example provides an excellent illustration of how the robustness of a shadowing filter has significant gains over other filters. Not only does ignoring the correlation result in better tracking, it is also more efficient, because rather than using the vectorised filter~(\ref{eq:masterd}), the simpler, more compact, scalar filter~(\ref{eq:master}) can be used. \begin{figure} \centering \begin{tabular}{lll} (a) & (b) & (c) \\ \includegraphics[trim={220 30 200 30},width=0.3\linewidth]{fig3a} & \includegraphics[trim={220 30 200 30},width=0.3\linewidth]{fig3b} & \includegraphics[trim={220 30 200 30},width=0.3\linewidth]{fig3c} \end{tabular} \caption{Position tracking of the same path as fig.~\ref{fig:p2d}. Circle with cross-hair is the observation site. Sequential position estimates and final trajectory estimate for $\eta=1000$. (a)~Bearing measurement ten times more accurate than range. (b)~Range measurement ten times more accurate than bearing. In both (a) and (b) the estimation ignores correlations and treats each component as being independent. (c)~As case (b) but using correlation as computed from the raw position estimate; this does worse than~(b), because the correlation is calculated relative to the raw position estimates, which are very misleading.} \label{fig:br} \end{figure} \subsection{Multiple bearing observations} Consider a target tracked in the plane, position $p=(x,y)$, using bearing observations $q=(\theta,\theta')$ from two distinct reference points $(a,b)$ and $(a',b')$. 
Under most circumstances there exist unique $s,s'\in\mathbb{R}$ such that \begin{eqnarray} \label{eq:br} (x,y) &=& (a,b) + s(\cos\theta,\sin\theta)\\ &=& (a',b') + s'(\cos\theta',\sin\theta'). \end{eqnarray} Hence, a raw position estimate can be found by solving the linear equations \begin{equation}\label{eq:lin} \begin{pmatrix} \cos\theta & -\cos\theta'\\ \sin\theta & -\sin\theta' \end{pmatrix} \begin{pmatrix} s \\ s' \end{pmatrix} = \begin{pmatrix} a'-a \\ b'-b \end{pmatrix}, \end{equation} provided the equations are consistent and non-singular. Since the angles are observed angles, inconsistency can occur when $\theta'\approx\pm\theta$. For some sequential filters such nearly-singular situations can be devastating, but, since the shadowing filter is estimating a trajectory from a sequence of observations, there is generally no harm in simply dropping observations corrupted by near-singularities, or in replacing them with forecasted or crudely interpolated positions, at least for short periods. Dropping observations requires using a larger $\tau$ time-gap between observations. If the observations were otherwise equally spaced in time, this leads to a lot of special computation; in this case it is generally easier to insert a forecasted position with a suitably scaled-down information matrix to account for the errors in the forecast, as detailed later. In this multiple-bearing situation it is difficult to compute~$J$ directly, but $K$ is easily computed. Note that \begin{equation} \theta = \arctan\frac{y-b}{x-a} \qquad\text{and}\qquad \theta' = \arctan\frac{y-b'}{x-a'}. \end{equation} It follows that \begin{equation} K = \begin{pmatrix} -(y-b)/r^2 & (x-a)/r^2\\ -(y-b')/r'^2 & (x-a')/r'^2 \end{pmatrix}, \end{equation} where $r^2=(x-a)^2+(y-b)^2$ and ${r'}^2=(x-a')^2+(y-b')^2$, which requires only trivial computation once an estimate of $(x,y)$ is obtained. As mentioned, a significant problem arises when the sensors and target are collinear, $\theta\approx\pm\theta'$, because the linear equations~(\ref{eq:lin}) are singular or badly conditioned. This can result in raw position estimates far from their true position. Figure~\ref{fig:sonar} shows an example of tracking a mobile target using two mobile bearing sensors. In this example the target moves on a circular path clockwise from the 12 o'clock position. When the target is between the 4 and 5 o'clock position it is directly between the sensors, and at the end of its path, around the 7 o'clock position, the target is almost directly behind both sensors. Both of these situations lead to a poorly conditioned matrix in eq.~(\ref{eq:lin}) and hence poor raw position estimates. For the estimates shown in fig.~\ref{fig:sonar} the component correlations of the raw position estimate are ignored, as in fig.~\ref{fig:br}(a) and~(b). The ill-conditioning is dealt with by using the 1-norm estimate of the reciprocal condition number as returned by LAPACK~\cite{LAPACK}. This number varies between zero and one, with small values indicating bad conditioning. It is a natural weight for scaling the information matrices, so that raw position estimates from poorly conditioned situations receive little weight. \begin{figure} \centering \includegraphics[trim={90 30 90 30},width=\linewidth]{fig4} \caption{Position tracking using bearings from two moving sensors. 
For $0\leq{}t\leq100$ one sensor moves between $(-3,3)$ and $(3,1)$, and the other between $(-3,-2)$ and $(3,-1)$, both at constant speed, while the target moves on the circular path $(\sin(t/25),\cos(t/25))$. The centre of the circles is the position of the sequential state estimates; the diameter of the circle indicates the condition-number weight applied to that position's raw position estimate.} \label{fig:sonar} \end{figure} From fig.~\ref{fig:sonar} it can be seen that when the raw position estimates are obtained under good conditioning, the sequential estimates (large circles) are good. Under poor conditioning (small circles) the sequential estimates are mainly forecasts from the preceding trajectory positions, and the wild raw position estimates are ignored. When the target passes beyond the 5 o'clock position and conditioning improves, the sequential estimates return to good position estimates. Note also that the final trajectory almost exactly matches the true trajectory through the 4 to 5 o'clock position. \subsection{Multiple range observations} Consider a target tracked in the plane, position $p=(x,y)$, using range observations $q=(r,r')$ from two distinct reference points $(a,b)$ and $(a',b')$. Under most circumstances the location can be obtained from the solutions of $r^2=(x-a)^2+(y-b)^2$ and ${r'}^2=(x-a')^2+(y-b')^2$, assuming that the non-uniqueness can be resolved. By taking partial derivatives of these two equations with respect to $x$ and~$y$, it follows implicitly that \begin{equation} K = \begin{pmatrix} (x-a)/r & (y-b)/r\\ (x-a')/r' & (y-b')/r' \end{pmatrix}. \end{equation} \section{Partial observations} Our stated formulation of the tracking problem allows for non-uniformly spaced observations, but all examples thus far have used only uniformly spaced observations. We provide a simple demonstration using non-uniformly spaced observations by considering a situation where observations are missing. Figure~\ref{fig:pa2} demonstrates tracking using similar data to figure~\ref{fig:1a} where 75\% of the observations are missing, comparing this to the tracking calculations when all observations are available. Two values of the smoothing parameter~$\eta$ are used. These results demonstrate that the tracking algorithm is very robust. The position tracking is very similar when there are missing observations. The acceleration estimates are also very similar. Overall the tracking is slightly smoother, and accelerations less variable, when there is missing data, but this is something of an artifact, because the effective amount of smoothing for a fixed $\eta$ depends on the number of observations available; less data results in more smoothing. \begin{figure} \centering \includegraphics[width=\linewidth,trim=0 0 180 180]{fig5} \caption{Position tracking and accelerations using similar data to figures \ref{fig:1a} and~\ref{fig:1b}, but with partial observations for two $\eta$ smoothing values. The first two curves show calculations when 75\% of observations are missing, with crosses marking the missing observations, and the second two curves using all observations.} \label{fig:pa2} \end{figure} \section{Conclusion}\label{sec:notworthit} Under the assumption of piecewise constant accelerations a shadowing filter algorithm has been derived and implemented efficiently for scalar time-series of observations. Vector time-series of observations can be dealt with efficiently using the same algorithm if each component is observed with uncorrelated errors. 
The scalar algorithm can be easily extended to deal with tracking in $d$~dimensions where the position vector components are not observed directly and a transformation of the observations can be used to obtain an initial \emph{raw position estimate}. The question then arises of how to deal with the correlation of the errors this introduces. Remarkably, experiments reveal, as illustrated in figs \ref{fig:p2d}, \ref{fig:br} and \ref{fig:sonar}, that ignoring this correlation has no significant effect. Simply using the raw position estimates to estimate the correlations produced worse position estimates, because errors in the raw position estimates give misleading indications about the correlation. This problem is unavoidable for Kalman filters. It may be that some more complex algorithm could be devised to better estimate the correlations, but this will increase the amount of computation, for possibly no significant gain. Just taking correlations into account in the stated algorithm increases the size of the matrix requiring singular value decomposition from $n(n+1)$ to $d^2n(n+1)$ entries, so the computational cost is significant. There are a number of other implementation issues that have not been discussed, the most important of which is the optimal \emph{window} size~$n$ for obtaining a shadowing trajectory and position estimates. The window size is problem dependent, but a method for determining an appropriate window size is discussed at length elsewhere~\cite{Stemler-Judd:guide1}. This cited work also discusses issues of how best to implement sequential filtering. \section*{Acknowledgements} Supported by Australian Research Council Discovery Project DP0984659. \bibliographystyle{plain}
\section{Introduction} Classical Cepheids (Cepheids hereafter) are an incredibly useful class of pulsating yellow supergiants. Since the discovery of the Cepheid Period-Luminosity law (the \textit{Leavitt Law} -- \citealt{lea08}) over a century ago, they have become a cornerstone of the Cosmic Distance Scale and a powerful tool for determining the Hubble constant ($H_0$) to a precision of $\sim$1\% \citep{rie16}. They also, however, offer valuable insights into stellar astrophysics and the influence that radial pulsations can have on stellar interiors, outer atmospheres, and circumstellar environments. In recent years, multi-wavelength studies of Cepheids have begun detailing new and surprising behaviors. Interferometric studies in the infrared and optical have revealed circumstellar structures around every Cepheid observed to date \citep[see][]{nar16}, in addition to radio observations that have determined mass loss rates of select Cepheids \citep{mat16}. Ultraviolet and far ultraviolet spectra have shown that the outer atmospheres of Cepheids undergo pulsation-phased variations in both emission levels and plasma density \citep{sp82,sp84a,sp84b,boh94,eng09,eng14,eng15,nei16a}. X-ray observations have been used to find stellar companions to Cepheids \citep{eva10,eva16} and to show that the prototype of Classical Cepheids, $\delta$ Cep, is also an X-ray variable \citep{eng17}. This is all in addition to continuing optical photometry and radial velocity studies attempting to detect the full range of Cepheid variations. Recent efforts have even made use of continuous space-based photometry from satellites such as \textit{CoRoT} \citep{por15}, \textit{BRITE} \citep{smo16}, \textit{MOST} \citep{eva15a} and \textit{Kepler} \citep{der17}. Though Cepheids were once prized for the stability of their pulsations, it has been known for almost a century now that their pulsation periods can change over time \citep{edd19}. \citet{nei16} give an excellent overview of period variations in not only Cepheids, but other pulsating variable stars as well. Several studies provide a more complete understanding of the types of period variability that Cepheids can display. \citet{sza77,sza80,sza81} are the most comprehensive O-C studies of galactic Cepheids available. Updated O-C data sets for select Cepheids, some of which display potential companion-induced period variations, can also be found in \citet{sza89,sza91}. The important role that amateur observers can play in Cepheid studies is highlighted by \citet{ber03}, who analyzed the AAVSO database and derived numerous times of maximum light for a number of bright Cepheids. And finally, very high cadence and precision radial velocity studies are now being used to also search for period variations and potential companions to bright Cepheids \citep{and16a,and16b}. Monitoring the pulsations of Cepheids has become a powerful tool for studying stellar evolution on human timescales. Whether a Cepheid's period is increasing or decreasing, and the rate at which it does so, reveals evolutionary changes in the mean density of the star. As a Cepheid evolves towards the cool edge of the instability strip, its overall size grows and thus the density decreases. As the pulsation period of a star is inversely related to its mean density (the Period-Density relationship), the pulsation period increases. Conversely, when a Cepheid evolves toward the hotter edge of the instability strip, its overall size shrinks, density increases and thus the period decreases. 
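To make the connection quantitative, recall that the period--density relation is usually written $P\sqrt{\bar{\rho}/\bar{\rho}_{\odot}} = Q$, where $Q$ is the pulsation constant. Treating $Q$ as approximately constant along the evolutionary track (an idealization we adopt here only for illustration) and differentiating with respect to time gives \begin{equation} \frac{\dot{P}}{P} \approx -\frac{1}{2}\,\frac{\dot{\bar{\rho}}}{\bar{\rho}}, \end{equation} so a secular decrease in mean density maps directly onto a secular increase in period, and vice versa.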
The rate of period change can also theoretically be used to determine specifically where a Cepheid is within the instability strip \citep{tur06}. Normally, monitoring the evolution of a Cepheid's pulsation period requires yearly observing campaigns designed to develop full phase coverage in each year. This can involve a significant investment of telescope time. However, the advent of wide field photometric surveys such as the All-Sky Automated Survey \citep[ASAS --][]{Pojmanski:1997} and the Kilodegree Extremely Little Telescope \citep[KELT --][]{Pepper:2007,Pepper:2012} now provides excellent datasets for carrying out period change studies of a large number of Cepheids spread out across the sky. Here we report on a pilot period study of the Cepheid VZ Cyg using a combination of both survey data and targeted photometric observations. \section{VZ Cyg} VZ Cyg (BD+42 4233; $V\approx$ 8.62--9.29; F5--G0 II; $\alpha$ = 21:51:41.44, $\delta$ = +43:08:02.5) was first discovered to be a variable star by \citet{cer04}, using photographic plates obtained by Bla\v{z}ko. Insufficient data were taken at that time to accurately determine the period, though it was noted to be shorter than 5 days. \citet{bla06} later used additional data to populate a better light curve, but one that was plotted with two maxima and two minima. The period was estimated to be 9.727 days. It wasn't until \citet{sea07}, using a much larger dataset of 256 photometric observations, that the earlier reported periods were recognized to be double the true value, which Seares determined to be 4.864 days. VZ Cyg is also, as with numerous other Cepheids \citep[see][and references therein]{eva15}, a known spectroscopic binary with a 2183 day orbit ($\sim$5.98 years -- \citealt{gro13}). \section{KELT Observations} The Kilodegree Extremely Little Telescope is a photometric survey using two small-aperture (42 mm) wide-field robotic telescopes, KELT-North at Winer Observatory in Arizona in the United States \citep{Pepper:2007}, and KELT-South at the South African Astronomical Observatory (SAAO) near Sutherland, South Africa \citep{Pepper:2012}. The KELT survey covers over 70$\%$ of the sky and is designed to detect transiting exoplanets around stars in the magnitude range $8 < V < 11$, but can derive photometry for stars in the range $7 < V < 14$. It is designed for a high photometric precision of RMS $<1\%$ for bright, non-saturated stars. VZ Cyg is located in KELT-North field 12, which is centered at J2000 $\alpha =$ 21.4$^{h}$, $\delta =$ +31.7$\degr$. Field 12 was monitored for seven seasons from UT 2007 June 08 to UT 2013 June 14, acquiring a total of 5159 images after post-processing and removal of bad images (see Table \ref{tab:KELT} for KELT photometry of VZ Cyg). Because the KELT-North telescope is located in the American southwest, the monsoon weather prevents observations in the middle of summer (roughly early July to the beginning of September). Because that is when the visibility of VZ Cyg peaks, our data contain gaps at those times. The dates and number of images for each observing season are shown in Table \ref{tab:seasons}, with each observing season separated into (a) and (b) segments due to the monsoon gap. Because KELT uses a German Equatorial Mount, the telescope performs a flip when crossing the meridian, so data acquired in the eastern orientation must be reduced separately from data acquired in the western orientation. In this analysis, we have combined the east and west KELT light curves for VZ Cyg into a single data set. 
\begin{table} \centering \caption{KELT Observations of VZ Cyg} \label{tab:seasons} \begin{tabular}{ |c|c|c|c| } \hline Season & Start Date & End Date & Number of Images \\ \hline 1a & 2007 Jun 08 & 2007 Jun 27 & 435 \\ 1b & 2007 Sep 19 & 2007 Oct 13 & 116 \\ 2a & 2008 Apr 24 & 2008 May 21 & 187 \\ 2b & 2008 Sep 18 & 2009 Jan 08 & 813 \\ 3a & 2009 Mar 26 & 2009 Jun 23 & 408 \\ 3b & 2009 Sep 22 & 2009 Dec 20 & 673 \\ 4a & 2010 Apr 26 & 2010 Jun 28 & 375 \\ 4b & 2010 Sep 26 & 2010 Dec 18 & 641 \\ 5a & 2011 Apr 30 & 2011 Jun 17 & 212 \\ 5b & 2011 Sep 21 & 2011 Dec 17 & 628 \\ 6a & 2012 Apr 22 & 2012 Jun 22 & 97 \\ 6b & 2012 Sep 17 & 2012 Dec 13 & 464 \\ 7a & 2013 May 07 & 2013 Jun 14 & 110 \\ \hline \end{tabular} \end{table} \section{RCT Photometry} Recent CCD $BV$ photometry of VZ Cyg (see Table \ref{tab:rct} and Figure \ref{fig:rctphot}) was also carried out for this program with the 1.3m \textit{Robotically Controlled Telescope} (RCT -- \citealt{str14}) at Kitt Peak National Observatory (KPNO). Two seasons were obtained -- Season 1 from 2015 Aug 15 to 2015 Dec 14 and Season 2 from 2016 Aug 20 to 2016 Nov 19. Observed amplitudes of $A_V$ = 0.680 mag and $A_B$ = 1.004 mag are found, making VZ Cyg a moderate amplitude Cepheid for its pulsation period \citep{kla09}. \begin{figure}[hbtp!] \centering \includegraphics[width=0.45\textwidth]{vzcygrctphot.eps} \caption{\textit{BV} photometry of VZ Cyg obtained with the RCT, phased using a new ephemeris, determined by this study, of: $HJD_{\rm max} = 2457665.6738 + 4.864207~{\rm days} \times E$. Bright green and dark green circles represent the 2015 and 2016 \textit{V}-band data, respectively, and blue and cyan circles represent the 2015 and 2016 \textit{B}-band data, respectively.} \label{fig:rctphot} \end{figure} \section{Analysis} \subsection{Outlier Rejection} To remove spurious data points, we phase the KELT light curve of VZ Cyg based on the reported period from \citet{sza91}, 4.86445 days. We then divide the phased light curve into 25 bins, and for each bin, we compute the median absolute deviation (MAD) of the magnitude and reject any data points that are more than 4 MAD from the median magnitude. We do that once more and reject a total of 76 data points. \subsection{Periodicity and Blending} After performing outlier rejection, we run a period search algorithm, analysis of variance \citep[AoV --][]{S-C:1989}, implemented in the VARTOOLS light curve analysis program \citep{Vart:2016}, to determine the Cepheid period from the KELT data. AoV searches for a period by using phase binning. It is sensitive to detecting high-amplitude non-sinusoidal signals. We search a period range of 2 to 20 days with 20 bins, and find a peak at 4.864295 days. The phased KELT light curve of VZ Cyg is shown in the top panel of Figure \ref{fig:ModelFitting}, but it appears to show some structure on top of the Cepheid signal. In order to identify that behavior we model and subtract the Cepheid pulsations and examine the residuals. We employ a median smoothing fit, using a smoothing length of 1/20th of the pulsational period. The fit from that smoothing is shown in the top panel of Figure \ref{fig:ModelFitting}, and the residuals between that fit and the data are shown in the bottom panel. Figure \ref{fig:TimeSeries} shows the full unphased KELT light curve, along with the residuals after subtracting the Cepheid pulsations. 
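The outlier rejection just described is straightforward to reproduce. The following Python sketch is our own illustrative implementation; the bin count, clipping threshold, and the two passes follow the procedure described above.

\begin{verbatim}
import numpy as np

def mad_phase_clip(t, mag, period, nbins=25, nmad=4.0, npasses=2):
    # Returns a boolean mask of points kept after rejecting, in each
    # phase bin, points more than `nmad` median absolute deviations
    # from that bin's median magnitude; repeated `npasses` times.
    t, mag = np.asarray(t, float), np.asarray(mag, float)
    phase = np.mod(t, period) / period
    bins = np.minimum((phase * nbins).astype(int), nbins - 1)
    keep = np.ones(t.size, dtype=bool)
    for _ in range(npasses):
        for b in range(nbins):
            sel = keep & (bins == b)
            if not sel.any():
                continue
            med = np.median(mag[sel])
            mad = np.median(np.abs(mag[sel] - med))
            keep[sel] = np.abs(mag[sel] - med) <= nmad * mad
    return keep
\end{verbatim}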
\begin{figure} \centering \includegraphics[width=0.45\textwidth]{CepheidModelFitting_v4.eps} \caption{{\it Top panel}: KELT light curve of the Cepheid VZ Cyg, phased to the initially-determined Cepheid period of 4.864295 days. Overlaid on top is the median smoothing fit in red. {\it Bottom panel}: Residuals after subtracting off the model fit.} \label{fig:ModelFitting} \end{figure} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{TimeSeries_v3.eps} \caption{{\it Top panel}: Unphased KELT photometry of VZ Cyg. {\it Bottom panel}: Residuals after subtraction of the Cepheid pulsations.} \label{fig:TimeSeries} \end{figure} We then search for periodic signals in the residuals after subtracting the Cepheid pulsations. This time we use the generalized Lomb-Scargle \citep[L-S --][]{Press:1992,Zechmeister:2009}, also implemented in VARTOOLS. Generalized L-S searches for a period by fitting sinusoids, and we have found it more sensitive to low-amplitude signals than other algorithms. We search across a period range of 10 to 500 days. L-S finds a peak at 322.62 days. The phased plot of the residuals to that period is shown in the top panel of Figure \ref{fig:ResidualsPhaseDiagram}. After re-examining the original KELT images, we have traced the long-term modulation to blending between VZ Cyg and a nearby star, V673 Cyg, located at $\alpha$ = 21:51:37.75, $\delta$ = +43:09:58.7, which is 2.14 arcmin from VZ Cyg, or 5.6 KELT pixels. That star is a known Mira variable, and is somewhat blended into the effective aperture for VZ Cyg in KELT. We use the median smoothing method, applied to the phased residuals from the Cepheid pulsations shown in the top panel of Figure \ref{fig:ResidualsPhaseDiagram}, to represent the variability of that star, and subtract that signal from the light curve of VZ Cyg. We then recompute the period of the Cepheid variability, this time using multiharmonic AoV \citep{S-C:1996} with 4 harmonics, finding a period of 4.864226 days, and we display the resulting light curve in the bottom panel of Figure \ref{fig:ResidualsPhaseDiagram}. We use that final light curve for the analysis described below. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{PhaseDiagram_v4.eps} \caption{{\it Top panel}: Residuals after subtraction of the Cepheid pulsations, phased to 322.62 days, due to blended background Mira variable. Overlaid on top is the median smoothing fit in red. {\it Bottom panel}: Light curve of VZ Cyg after subtraction of the residuals, phased to a new Cepheid period of 4.864226 days.} \label{fig:ResidualsPhaseDiagram} \end{figure} \subsection{Stellar Parameters} Intensity-weighted mean magnitudes of $\langle B \rangle$ = 9.799 $\pm$ 0.005 and $\langle V \rangle$ = 9.005 $\pm$ 0.005 were derived for VZ Cyg from the RCT photometry. This gives an observed $\langle B \rangle$ -- $\langle V \rangle$ = 0.794, but VZ Cyg has a spectroscopically determined color excess of $E_{B-V}$ = 0.266 \citep{gro13}, resulting in a dereddened color index ($\langle B \rangle$ -- $\langle V \rangle$)$_0$ = 0.528. VZ Cyg was also included in the first \textit{Gaia} data release \citep[\textit{Gaia} DR1 --][]{lin16}, which determined a parallax of $\pi = 0.545 \pm 0.228$ mas. \citet{ast16} showed that applying a Milky Way stellar observability prior resulted in improved DR1 distances for targets nearer than 2 kpc, while an exponentially decreasing stellar density prior worked better for distances larger than 2 kpc. Previous distance estimates for VZ Cyg, e.g. 
$1849 \pm 139.9$ pc \citep{gro13}, are close enough to the 2 kpc demarcation value that an average of the Milky Way and exponential prior-based distances was deemed to be the best representation of the \textit{Gaia}-determined distance until future data releases become available. Averaging the modal Milky Way and exponential distance values (1692 and 2029 pc, respectively) from \citet{ast16}, and combining their standard errors, gives a distance of $1861 \pm 879$ pc. This value is in good agreement with previous distance estimates, such as that of \citet{gro13}. Using the RCT photometry, the reddening value from the literature, and the \textit{Gaia}-derived distance, we calculate an absolute magnitude of $M_V = -3.17^{+1.39}_{-0.84}$ for VZ Cyg. The errors are large, but this value of $M_V$ is calculated using an early \textit{Gaia} parallax with an appreciable error that will vastly improve over the mission lifetime. However, the absolute magnitude still agrees well with the spectroscopically determined (via Fe \textsc{ii} / Fe \textsc{i} line ratios) value of $M_V = -3.11\pm0.18$ from \citet{kov10}. We note that, at the time of this writing, the {\it Gaia\/} $\pi$ values potentially have systematic uncertainties that are not yet fully characterized but that could reach $\sim$300~$\mu$as\footnote{See \url{http://www.cosmos.esa.int/web/gaia/dr1}.}. Preliminary assessments suggest a global offset of $-0.25$~mas (where the negative sign indicates that the {\it Gaia\/} parallaxes are underestimated) for $\pi \gtrsim 1$~mas \citep{Stassun:2016b}, corroborating the {\it Gaia\/} claim, based on comparison to directly-measured distances to well-studied eclipsing binaries by \citet{Stassun:2016a}. \citet{Gould:2016} similarly claim a systematic uncertainty of 0.12~mas. \citet{Casertano:2017} used a large sample of Cepheids to show that there is likely little to no systematic error in the {\it Gaia\/} parallaxes for $\pi \lesssim 1$~mas, but find evidence for an offset at larger $\pi$ consistent with \citet{Stassun:2016b}. Thus the available evidence suggests that any systematic error in the {\it Gaia\/} parallaxes is likely to be small, and probably negligible for $\pi < 1$~mas. For the purposes of this work, we use and propagate the reported {\it random} uncertainties on $\pi$ only, emphasizing that additional (or different) choices of $\pi$ uncertainties may be applied in the future. \section{O-C Analysis} All available times of maximum light for VZ Cyg are compiled from the literature, and combined with the recent maxima from KELT survey photometry and pointed CCD photometry gathered by us with the RCT (see Table \ref{tab:oc}). A Fourier series fit is applied to the combined RCT data \citep[see][]{eng14,eng15}, and the fit results serve as a template light curve for determining times of maximum light and errors for the individual KELT and RCT seasons. However, KELT photometry is not taken through a standard photometric filter, but rather a Kodak Wratten No. 8 red-pass filter \citep{Pepper:2007}. The resulting KELT system response function peaks just short of the standard $R$ bandpass. As a result, adjustments have to be made to the KELT timings, as per the $BVRI$ amplitudes and phase shifts of \citet{free88}. For VZ Cyg, the timing shifts between the $V$ and $R$ photometry of \citet{ber08} were used to calculate the appropriate KELT timing shift of $-0.025 \pm 0.005$ days. 
In cases where photometric data sets are made available but no times of maximum are published, these times are determined by fitting our template curve to the data. This is known as the Hertzsprung method \citep{her19}. Previously published times of maximum were assigned weights, but no errors. Therefore, when fitting the O-C data, the weights were taken into account so that all data could be handled equally. We compute O-C data for VZ Cyg using the ephemeris of \citet{sza91}: \begin{center} $HJD_{\rm max} = 2441705.702 + 4.86445~{\rm days} \times E$ \end{center} where \textit{C} is the computed time of maximum light, and \textit{E} is the epoch of the observation. The O-C data are presented in Table \ref{tab:oc} and Figure \ref{fig:vzcygoc}. As the figure shows, the pulsation period of VZ Cyg is continually decreasing over time, as also reported by \citet{tur98}. However, the situation becomes more complex depending on which time span of data is analyzed. Analyzing the complete O-C data set returns a rate of period change dP/dt = $-0.0642 \pm 0.0018$ sec yr$^{-1}$, which is considerably slower than the rate of $-0.2032$ sec yr$^{-1}$ reported by Turner. Such a difference can be understood, though, given the dynamic nature of Cepheid evolution and the fact that the current study benefits greatly from the data published in the nearly two decades that have passed since Turner's analysis. This rate of period decrease places VZ Cyg in the second crossing of the instability strip \citep{tur06}. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{vzcygoc.eps} \caption{The full O-C diagram for VZ Cyg, including all literature data along with those from KELT and the RCT. The three colored vertical lines at the top of the plot demarcate the three time spans of O-C data separately analyzed. At the bottom of the plot, the three rates of period change found from quadratic fits to the data sets are given. From top to bottom, they represent rates for the full data set, the O-C data since HJD = 2440000, and the O-C data since HJD = 2454000, respectively.} \label{fig:vzcygoc} \end{figure} The addition of new data to an O-C diagram provides benefits beyond the extended time span. In particular, for known variable stars like VZ Cyg, earlier data are often visual and usually less precise than more modern photometry, especially from photoelectric or CCD instruments. This means that the scatter of an O-C diagram tends to significantly decrease over time. Therefore, more recent epochs of the O-C data set were analyzed separately to see if any further information about the period variability of VZ Cyg could be gleaned. As shown in Figure \ref{fig:recent}, when a quadratic fit is applied to only the data after HJD = 2440000, the fit residuals show evidence of a potential cyclic period change. A combination quadratic+sinusoidal fit yields a $\chi^2$ value $\sim2.5\times$ smaller than the quadratic fit, returning a period change rate of dP/dt = $-0.0762 \pm 0.0024$ sec yr$^{-1}$, slightly faster than the period change rate found from the full data set, but still well shy of the value reported by \citet{tur98}. Finally, if only the most recent O-C determinations (those of KELT and the RCT -- Figure \ref{fig:keltrct}) are analyzed, the rate of period change is found to be dP/dt = $-0.0923 \pm 0.0110$ sec yr$^{-1}$. This is the fastest rate of period decrease determined from the O-C data set and subsets, yet still places VZ Cyg in the second crossing of the instability strip.
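As a note on the conversions above, a quadratic O-C trend, O-C $\approx c_2E^2 + c_1E + c_0$ (in days, versus epoch $E$), implies $dP/dE = 2c_2$ days per cycle and hence the dimensionless rate $dP/dt = 2c_2/P$. The following minimal Python sketch carries out this conversion to sec yr$^{-1}$; the arrays are placeholders standing in for the tabulated O-C data, not our actual analysis code:

\begin{verbatim}
import numpy as np

SEC_PER_YEAR = 365.25 * 86400.0

def period_change_rate(epoch, oc_days, weights, period_days):
    """Weighted quadratic fit to O-C data (days versus epoch E).
    With O-C ~ c2*E**2 + c1*E + c0, the period changes by
    dP/dE = 2*c2 days per cycle, so dP/dt = 2*c2/P (dimensionless),
    converted here to seconds per year."""
    c2, c1, c0 = np.polyfit(epoch, oc_days, 2, w=weights)
    return 2.0 * c2 / period_days * SEC_PER_YEAR

# Placeholder arrays standing in for the tabulated O-C data:
E = np.array([-3000.0, -1500.0, 0.0, 1500.0, 3000.0])
oc = np.array([-0.10, -0.02, 0.00, -0.03, -0.12])
w = np.ones_like(E)
print(period_change_rate(E, oc, w, period_days=4.86445))
\end{verbatim}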
What, then, should we make of the different rates of period decrease? The notion of an accelerating period decrease was considered, but fitting a cubic function to the O-C data set results in a negligible improvement over the quadratic curve, so an accelerating period decrease does not appear to be the case. However, an important issue to account for when dealing with shortened time spans of O-C data is the possibility of additional short-term variations superimposed on any long-term trends. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{vzcygocrecent_sidebyside.eps} \caption{The O-C data for VZ Cyg since HJD = 2440000 are plotted. In the left-hand plot, a simple quadratic fit is applied to the data. However, likely due to the increased precision of the photometry and subsequent timings in this time span of data, a potential cyclic O-C variation appears in the residuals of the quadratic fit (bottom left). To improve the fit, a combination quadratic + sinusoidal equation is fitted to the data in the right-hand figure. This fit results in a $\chi^2$ value $\sim2.5\times$ smaller than that of the quadratic fit, and returns a slightly faster rate of period change ($-0.0762 \pm 0.0024$ s yr$^{-1}$) when compared to either the full O-C data set or the simple quadratic fit, and a cycle length of $26.5 \pm 2.7$ years.} \label{fig:recent} \end{figure*} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{vzcygockeltrct.eps} \caption{The most recent O-C data for VZ Cyg from KELT and the RCT are plotted, along with the quadratic fit, which returns a rate of period change of $-0.0923$ s yr$^{-1}$. Residuals to the fit are given in the bottom plot, and the scale of the y-axis is the same as in Figure \ref{fig:recent}. No additional, coherent periodic variations are seen.} \label{fig:keltrct} \end{figure} Such variations can be seen in the O-C data after HJD = 2440000 (Figure \ref{fig:recent}). The data display an apparent cyclic variability with a period of $26.5 \pm 2.7$ years, determined by a simultaneous quadratic + sinusoidal fit. This additional potential variability further testifies to how complex Cepheid period variations can be. One very exciting possible explanation for the cycle is the presence of an as-yet-unknown companion star whose orbit is responsible for the 26.5 year period variations by way of the light travel-time effect (LTTE), sometimes simply called the light time effect (LiTE). Residuals from the most recent O-C data since HJD = 2454000 were also analyzed (Figure \ref{fig:keltrct}), but no firm evidence of coherent cyclic variations is seen. If, however, the data since HJD = 2454000 cover just a small portion of the 26.5 year cyclic period variation, then this would help explain the faster rate of period decrease attributed to these recent timings when compared to either of the larger O-C time spans. It is situations like this, when unexpected additional period variations are observed, that make the vast potential of automated yearly photometric monitoring of Cepheids clear. As photometry continues to be gathered and analyzed, the full extent of the variations in not only VZ Cyg but also numerous other variables will become much more apparent. \section{Evolutionary Results} With all determined rates of period change placing VZ Cyg in its second crossing of the instability strip, we are given a valuable evolutionary constraint. This allows us to more accurately fit evolutionary tracks and place estimates on certain properties of the Cepheid.
Cepheids and evolutionary tracks have rarely played well together. Cepheid masses determined via evolutionary models were consistently and systematically overestimated when compared to masses from pulsation models or to masses measured for Cepheids discovered as members of binary star systems. This long-standing problem is referred to as the \textit{Cepheid mass discrepancy}. Amongst the prominent mechanisms put forth to resolve this discrepancy, including convective core overshoot \citep{pra12} and pulsation-enhanced mass-loss \citep{nei08}, is the proper treatment of rotation. \citet{and14,and17} studied the effects that rotation can have on intermediate mass stellar evolutionary tracks, finding that tracks with rotation effects included can account for the mass discrepancy without needing increased values of core overshoot or mass loss. In short, through several factors such as rotational mixing bringing additional hydrogen into the core, which extends the main sequence lifetime and produces a larger resultant helium core, rotation can increase the luminosity of instability strip crossings for a given mass. It also returns larger ages for Cepheids than those previously calculated. As discussed in \citet{and14}, the main sequence B-star progenitors of many Cepheids ($M \approx 5 - 9 M_\odot$) typically have rotation rates of $v/v_{crit} \approx 0.3-0.4$, where $v_{crit}$ is the critical rotation velocity. Figure \ref{fig:evol} plots Geneva tracks \citep{geo13} including rotation ($v/v_{crit} = 0.4$) in the region of the instability strip, whose boundaries are taken from \citet{tam03}. The location of VZ Cyg is also plotted, according to the values given in Section 4. As the \textit{Gaia}-derived distance still has large errors, the absolute magnitude used for the plot is the spectroscopically determined value from \citet{kov10}. As Figure \ref{fig:evol} shows, VZ Cyg is a good fit for a 4.7--5.0$M_\odot$ star in its second crossing of the instability strip. In looking at the plot, it would first appear that the 4.7$M_\odot$ blue loop does not extend to hot enough temperatures to account for VZ Cyg. However, the blue loop lengths are very sensitive to the rotation rates used, so we still consider it as a possible fit. Using the Geneva tracks plotted in Figure \ref{fig:evol}, we determine a mass of $M = 4.85\pm0.2M_\odot$, a radius of $R = 35\pm2R_\odot$, and an age of $\tau=130\pm6$ Myr for VZ Cyg. The evolutionary radius compares well with the value of $R = 40\pm19R_\odot$ calculated using the \textit{Gaia} distance and the limb-darkened disk diameter of 0.202 mas from \citet{bou17}. Again, as with the absolute magnitude calculated in Section 4, the radius errors are dominated by those of the parallax. The evolutionary age, as is a known consequence of rotation, is older than other determined values of 71 Myr (\citealt{ach12}; period-age relation of \citealt{bon05}) and 113 Myr (\citealt{mar13}; period-age relation of \citealt{efr03}), though we note that the period-age relation used for the latter age determination is based on LMC Cepheids. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{vzcygevolutionary.eps} \caption{Geneva stellar evolutionary tracks \citep{geo13} are plotted, including rotation effects ($v/v_{crit} = 0.4$, where $v_{crit}$ is the critical velocity), along with the location of VZ Cyg and the instability strip boundaries of \citet{tam03}.
The tracks are colored as follows: the pink track is $4.7M_\odot$, the purple track is $4.8M_\odot$, the green track is $4.9M_\odot$, and the dark green track is $5.0M_\odot$.} \label{fig:evol} \end{figure} \section{Conclusion} The Classical Cepheid VZ Cyg presents interesting and complex period variations. First classified as a variable star over a century ago \citep{cer04}, VZ Cyg has an excellent timeline of literature observations, which we have combined with data from the AAVSO archive \citep{aavso}, the KELT survey, and the RCT at Kitt Peak. An O-C analysis of the full data set returns a period change rate of dP/dt = $-0.0642\pm0.0018$ sec yr$^{-1}$. However, recent data indicate faster rates of period decrease. The data after HJD = 2440000 show a period decrease of dP/dt = $-0.0762 \pm 0.0024$ sec yr$^{-1}$, and if only the most recent data (after HJD = 2454000) are analyzed, the rate of period decrease is dP/dt = $-0.0923 \pm 0.0110$ sec yr$^{-1}$. In addition to the long-term period decrease, quadratic fit residuals from the O-C data since HJD = 2440000 show evidence of a cyclic period variation superimposed on the long-term decrease. This additional cyclic variability has a period of $\sim$26.5 years, but will require further monitoring to confirm. It is likely that the rapid period decrease determined by fitting the most recent O-C data is a result of the influence of the shorter-term period variations. All things considered, the rate of period decrease determined from the O-C data after HJD = 2440000 ($-0.0762$ s yr$^{-1}$) is likely the true rate for VZ Cyg, as these data benefit from the higher precision of more modern photometric methods and instruments, and also cover a long enough time span for the potential cyclic period variations to have been properly accounted for. This value places VZ Cyg in the second crossing of the instability strip. Knowing which instability strip crossing VZ Cyg is currently undergoing presents an excellent evolutionary constraint. VZ Cyg was compared to Geneva stellar evolutionary tracks including rotation effects ($v/v_{crit} = 0.4$), which \citet{and14,and17} have shown can give a more accurate representation of Cepheids, yielding a mass of $M = 4.85\pm0.2M_\odot$, a radius of $R = 35\pm2R_\odot$, and an age of $\tau=130\pm6$ Myr for the Cepheid. Combined with the proper evolutionary tracks, knowing which crossing of the instability strip a Cepheid is in allows for an accurate determination of stellar parameters and a valuable comparison against those derived via other non-evolutionary means. The KELT dataset has helped to offer new insights into the variability of VZ Cyg. KELT offers a rich photometric dataset: $\sim$6.5 years of KELT data are analyzed in this paper, with VZ Cyg being observed for $\sim$4--5 months each year, resulting in over 5000 data points. VZ Cyg is just the first target in our program. The final goal is to evaluate the period changes present in all Cepheids that fall within the KELT fields, and to carry out pointed follow-up photometry of many of these targets with the RCT. Although it is the first target of this study, VZ Cyg immediately shows the large potential of this (and any similar) program. Additional variations in the period of VZ Cyg are now being observed thanks in part to the KELT dataset.
Our hope is to significantly improve the understanding of Galactic Cepheid period variations on numerous timescales, making use of the yearly KELT observations, along with other available survey data such as the All-Sky Automated Survey (ASAS -- \citealt{Pojmanski:1997}). With the numerous all-sky (or most-of-the-sky) surveys that either have previously observed, are currently observing, or are planned to begin observing in the near future, it appears likely that our understanding of the behavior and complexity of Cepheids is about to grow considerably. \acknowledgments K.S. acknowledges support from a Royal Thai Government scholarship. K.G.S. acknowledges partial support from NSF PAARE grant AST-1358862. We acknowledge with thanks the variable star observations from the AAVSO International Database contributed by observers worldwide and used in this research. This research has made use of the SIMBAD database \citep{Wenger:2000} operated at CDS, Strasbourg, France, the VizieR catalogue access tool, CDS, Strasbourg, France \citep{Ochsenbein:2000}, and NASA's Astrophysics Data System Bibliographic Services. Work performed by J.E.R. was supported by the Harvard Future Faculty Leaders Postdoctoral fellowship. We thank the anonymous referee for several excellent comments and suggestions that improved the overall clarity and results of the paper. \facility{AAVSO}, \facility{ASAS}, \facility{HIPPARCOS}, \facility{KELT}, \facility{KPNO:RCT}, \facility{NSVS} \bibliographystyle{apj}
\section{Introduction} \label{sec:intro} Forecast verification has been recognized as one of the most important topics in terrestrial weather forecast research. In the field of space weather forecasting, however, forecast verification is still in the early stages. Recently, some Regional Warning Centers (RWCs), belonging to the International Space Environment Service (ISES), have started to verify their operational forecasts. As a result of this initiative, forecast verification has become recognized as one of the important research topics in the operational space weather forecasting community.\par RWC USA (Space Weather Prediction Center at the National Oceanic and Atmospheric Administration: SWPC/NOAA) verified their operational solar flare forecast by using forecast data accumulated over about 12.5 years (Crown 2012). RWC Belgium (Royal Observatory of Belgium: ROB) also verified their operational solar flare forecast and geomagnetic K-index forecast by using forecast data accumulated over about 8.5 years\footnote{They also verified their F10.7 forecast for data accumulated over 11 years.} (Devos et al. 2014). Other RWCs have also started verification studies of their operational space weather forecasts. \par Recently, the Community Coordinated Modeling Center has been planning the Flare Scoreboard, which is an online platform of real-time probabilistic solar flare forecast verification (http://ccmc.gsfc.nasa.gov/challenges/flare.php). RWC Japan (National Institute of Information and Communications Technology: NICT) also has an on-line platform of operational solar flare and geomagnetic K-index forecast verification (http://seg-web.nict.go.jp/cgi-bin/forecast/eng\_forecast\_score.cgi), which compares forecasts of some RWCs. However, as the forecast conditions are not the same and the verification methods are not yet sufficiently harmonized, the forecast performances cannot be compared directly among the RWCs. Because such a comparison would be very informative, efforts toward comparing the operational space weather forecasts must continue, and verifying one's own forecast performance is the first step in that direction. \par The origin of the study of forecast verification goes back to 1884, when Finley published some results of his tornado occurrence forecast (Finley 1884). Finley's results indicated that the accuracy of the tornado forecasts was extremely high, with the probability of correct forecasts exceeding 95\%. Shortly after the publication of Finley's paper, three papers appeared that pointed out the deficiency of Finley's verification method, and alternative verification measures were proposed (Murphy 1996). These events were the start of forecast verification studies. Forecast verification therefore has a long history in the terrestrial weather forecasting community, and the verification methods for terrestrial weather forecasts are more sophisticated than those for space weather forecasts. In this work, we perform a verification study of the operational solar flare forecast of RWC Japan (hereafter, the RWCJ forecast) while referring to the methods of verifying terrestrial weather forecasts. \par This article is organized as follows. In section \ref{sec:forecast}, we describe the RWCJ forecast and the solar flare observation data. In section \ref{sec:verification}, we describe the verification measures together with the method of estimating their confidence intervals.
The verification analyses of the RWCJ forecast are described in section \ref{sec:RWCJ}. A discussion and summary are given in sections \ref{sec:discussion} and \ref{sec:summary}, respectively. In the Appendix, brief descriptions of the definitions of the verification measures used in this study are given. \par \section{RWCJ forecast and flare observation data} \label{sec:forecast} The solar flare class is defined on the basis of the 1--8\AA\ X-ray flux observation by the Geostationary Operational Environmental Satellite (GOES). The RWCJ forecast gives the expected maximum solar flare class within 24 hours from the forecast issuing time. Table \ref{tbl:flare_class} shows the definition of the scale of the RWCJ forecast. We can easily recognize from the definition that the RWCJ forecast is a four-categorical deterministic forecast. Note that the flux level of a specific forecast has an upper bound; for example, when the forecast is ``Active'', the expected X-ray flux $F$ (W m$^{-2}$) is not $10^{-5}$ to infinity but $10^{-5}$ to $10^{-4}$. The RWCJ forecast is not an automatically determined forecast but one based on human judgment. Forecasters analyze many types of solar data, such as the current level and history of the solar X-ray flux, sunspot magnetic field configurations, and chromospheric brightenings in active regions. Finally, the forecasters decide deterministically which class of solar activity will most likely occur in the next 24 hours. The forecast is issued at 6:00UT every day, so the range of the RWCJ forecast is from 6:00UT to 6:00UT the next day. While the RWCJ forecast started in 1992 and has continued to the present day, we use the RWCJ forecast data accumulated over 16 years, from 2000 to 2015 (5844 days), in this verification study. We did not use the data from 1992 to 1999 because a complete set of the forecast data could not be collected due to some missing data, and because the forecast issue time was different until October 1994. \par For solar flare observation data, we use the solar activity event lists issued by SWPC/NOAA. The flare peak time and peak flux found in the event lists are used as the flare time and flare class, respectively. We define a day as from 6:00UT to 6:00UT the next day, which is the same as the RWCJ forecast range. The observed flare class of a day is defined as the maximum flare class of that day whose flare time is within the forecast range. Because the X-ray flares in the event lists are detected automatically, some small and/or gradual increases in X-ray flux are not detected and listed as flares even when they reach the C-class level. On 22 February 2001, for example, no flare was registered in the event lists although the flare activity was apparently C-class. Owing to such missed detections, there were six days whose flare activity was incorrectly classified as below C-class instead of C-class. We corrected these incorrectly determined flare classes. By compiling the forecast and observation data, we obtained 5844 forecast-observation pairs, which were analyzed to assess the RWCJ forecast performance. \par \section{Forecast verification methods} \label{sec:verification} Here we introduce the verification methods for the RWCJ forecast. Using the 5844 forecast-observation pairs, we constructed a contingency table for the RWCJ forecast. Table \ref{tbl:ct} depicts the four-categorical contingency table for the RWCJ forecast.
The numbers 0 to 3 on the forecast and observation axes stand for the flare class codes defined in Table \ref{tbl:flare_class}. \par \subsection{Verification measures for dichotomous forecasts} \label{sec:dichotomous_measure} There are many scalar measures that can be used to verify the performance of a dichotomous forecast. Although the RWCJ forecast is not a dichotomous forecast, we can apply these scalar measures to it by collapsing the four-categorical forecast to a dichotomous forecast with a certain threshold. In this study, we deal with two types of event threshold. In the M-threshold, M-class or larger flares are defined as events and below M-class flares are defined as no events. In the X-threshold, X-class flares are defined as events and below X-class flares are defined as no events. \par Many scalar verification measures have been proposed since the first paper on terrestrial weather forecast verification by Finley (1884). All scalar verification measures have some advantages and disadvantages, and it is not known which are the most suitable for operational solar flare forecast verification, although Bloomfield et al. (2012) recommended using the Peirce skill score for comparing the performance of solar flare forecasts. However, we may obtain some hints from terrestrial weather forecast verification strategies. The World Meteorological Organization (WMO) published a recommendation on verification methods for tropical cyclone forecasts, which describes the recommended scalar verification measures to assess tropical cyclone forecasts (WMO 2014). WMO (2009) published a recommendation for the verification and intercomparison of precipitation forecasts, and this reappeared in WMO (2014) together with the extremal dependence index (EDI), which has emerged as an important measure for rare event forecast verification since WMO (2009). Because tropical cyclones are relatively rare events, tropical cyclone forecasts resemble large class solar flare forecasts in terms of their rarity. Table \ref{tbl:WMO} shows the recommended verification measures for assessing rare events that appeared in WMO (2014), which divided the scalar verification measures into three categories: mandatory, highly recommended, and recommended measures. The mandatory category is composed of hits, misses, false alarms, and correct rejections, which are all elements of a dichotomous forecast contingency table. The highly recommended measures are the frequency bias (FB), proportion correct (PC), probability of detection (POD), false alarm ratio (FAR), and equitable threat score (ETS). The recommended measures are the probability of false detection (POFD), critical success index (CSI), Peirce skill score (PSS), Heidke skill score (HSS), odds ratio (OR), odds ratio skill score (ORSS), and extremal dependence index (EDI). In this study, we estimate these verification measures, except for the OR, to assess the RWCJ forecast. Because the ORSS is the OR transformed so that its range is from $-1$ to $+1$, the OR and ORSS have a similar meaning. While two types of EDI have been proposed, we use the symmetric version of the extremal dependence index (SEDI) because it has somewhat better properties than the non-symmetric version (Ferro \& Stephenson 2011). Brief definitions of these scalar verification measures are given in Appendix 1. \par We note, however, that it is not certain whether this recommendation is the best choice for operational solar flare forecasting.
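For concreteness, all of the dichotomous measures above follow directly from the four elements of a $2\times2$ contingency table ($a$ hits, $b$ false alarms, $c$ misses, $d$ correct rejections). The following Python sketch uses the standard definitions (compare Appendix 1); the example counts are placeholders, not the actual RWCJ numbers:

\begin{verbatim}
import numpy as np

def dichotomous_measures(a, b, c, d):
    """Verification measures from a 2 x 2 contingency table:
    a = hits, b = false alarms, c = misses, d = correct rejections.
    Standard definitions; assumes all four cells are non-zero."""
    n = a + b + c + d
    pod = a / (a + c)             # probability of detection (H)
    pofd = b / (b + d)            # probability of false detection (F)
    a_r = (a + b) * (a + c) / n   # hits expected from a random forecast
    lF, lH = np.log(pofd), np.log(pod)
    lFc, lHc = np.log(1 - pofd), np.log(1 - pod)
    return {
        "FB":   (a + b) / (a + c),
        "PC":   (a + d) / n,
        "POD":  pod,
        "FAR":  b / (a + b),
        "POFD": pofd,
        "CSI":  a / (a + b + c),
        "ETS":  (a - a_r) / (a + b + c - a_r),
        "PSS":  pod - pofd,
        "HSS":  2 * (a * d - b * c)
                / ((a + c) * (c + d) + (a + b) * (b + d)),
        # SEDI of Ferro & Stephenson (2011), with H = POD, F = POFD:
        "SEDI": (lF - lH - lFc + lHc) / (lF + lH + lFc + lHc),
    }

# Placeholder counts (not the actual RWCJ table):
print(dichotomous_measures(a=100.0, b=60.0, c=40.0, d=5644.0))
\end{verbatim}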
An important implication of the recommendation is that a single verification measure is not enough to correctly assess forecast performance (Murphy 1991); at least several attributes of a forecast system, such as bias, accuracy, discrimination, reliability, and skill, must be assessed by verification measures. \par \subsection{Verification measure for multi-categorical forecast} \label{sec:multi_measure} As already mentioned, most of the scalar verification measures are defined on the basis of the dichotomous contingency table, so the multi-categorical contingency table must be collapsed to a dichotomous one with a certain threshold before those measures can be applied to the RWCJ forecast. However, the collapsing process discards information contained in the multi-categorical contingency table. Therefore, it is better to estimate a scalar verification measure directly for the multi-categorical forecast when verifying the RWCJ forecast. \par Gandin \& Murphy (1992) proposed a scalar verification measure for multi-categorical forecasts. This measure is based on a scoring matrix, whose elements denote the scores assigned to all elements of a multi-categorical contingency table, so it does not require the collapse of the table. Moreover, the measure satisfies equitability, which is a highly desirable property for a scalar verification measure. An undesirable property of the measure is that it is not in closed form, meaning that some free parameters are required to estimate it. Gerrity (1992) derived a closed form of the Gandin \& Murphy-type verification measure, which does not require any free parameters. In this study, we use the scalar verification measure derived by Gerrity (1992), which we hereafter call the Gandin-Murphy-Gerrity score (GMGS). A brief definition of the GMGS is given in Appendix 2. \par A ranked probability score (RPS) is often used for the verification of a multi-categorical probabilistic forecast. If the forecast probability is set to one for the forecast category and zero for the others, the RPS could in principle also be used for the verification of a multi-categorical deterministic forecast. However, as the RPS applied to a deterministic forecast does not satisfy equitability (see Appendix 3), the GMGS is more suitable for the verification of a multi-categorical deterministic forecast. \par \subsection{Confidence interval for verification measures} \label{sec:ci} Many forecast verification studies do not take data sampling uncertainty into account. However, because all of the verification measures are calculated from a finite number of sampled data, it is necessary to estimate confidence intervals for the calculated verification measures, as some authors have pointed out (e.g., Stephenson 2000; Jolliffe \& Stephenson 2003; Wilks 2006; Jolliffe 2007). \par Because the elements of a dichotomous contingency table are regarded as binomial variables, the measures expressed as proportions, such as the PC, POD, FAR, POFD, and CSI, are sample estimates of the binomial probability $\hat p=x/n$. The simplest method of estimating a confidence interval of the binomial probability $\hat p$ is the so-called Wald confidence interval. The interval is calculated on the basis of a Gaussian approximation with mean $\hat p$ and variance $\sigma_p^2=\hat p(1-\hat p)/n$ instead of the binomial distribution.
The resulting $1-\alpha$ confidence interval is \begin{equation} p=\hat p \pm z_{(1-\alpha/2)}\sqrt{\frac{\hat p(1-\hat p)}{n}}, \end{equation} where $z_{(1-\alpha/2)}$ is the $1-\alpha/2$ quantile of the standard Gaussian distribution. The Wald confidence interval is simple, but can be rather inaccurate unless the number of samples $n$ is very large (Agresti \& Coull 1998). \par A more accurate method of estimating a confidence interval was proposed by Wilson (1927). This confidence interval is also based on a Gaussian approximation, but its mean and variance are not taken from the sample estimate $\hat p$ of the binomial probability; instead, the unfixed binomial probability itself is used, with mean $p$ and variance $\sigma_p^2=p(1-p)/n$. This interval is derived from the inequality $p-z_{(1-\alpha/2)}\sigma_p\leq \hat p \leq p+z_{(1-\alpha/2)}\sigma_p$. The resulting $1-\alpha$ confidence interval is \begin{equation} p=\frac{\hat p +\frac{z_{(1-\alpha/2)}^2}{2n}\pm z_{(1-\alpha/2)}\sqrt{\frac{\hat p(1-\hat p)}{n}+\frac{z_{(1-\alpha/2)}^2}{4n^2}}}{1+\frac{z_{(1-\alpha/2)}^2}{n}}. \label{eq:score_ci} \end{equation} This confidence interval is sufficiently accurate even when the number of samples $n$ is quite small. According to Agresti \& Coull (1998), a confidence interval derived from the latter formula with a sample size of $n=5$ is more accurate than the Wald confidence interval derived with a sample size of $n=100$. For the confidence interval of a sample proportion, this formula is therefore preferable to the Wald formula. \par There are some verification measures that cannot be written as sample proportions of the elements of the contingency table, such as the FB, ETS, HSS, PSS, ORSS, SEDI, and GMGS. For these verification measures, equation (\ref{eq:score_ci}) cannot be applied directly to estimate the confidence interval. In this case, an error propagation rule can be applied to the verification measures written as functions of the POD, POFD, and S (the base rate) when estimating the confidence intervals. However, the error propagation rule implicitly assumes that the confidence intervals of the POD, POFD, and S are sufficiently small with respect to their central values. In this verification study, although most of the verification measures have a sufficiently small confidence interval, some of the measures do not satisfy this assumption for a rare event forecast, in which case the error propagation rule cannot be applied to estimate the confidence interval. \par To overcome these problems, we use a bootstrap method to estimate confidence intervals for the scalar verification measures. The bootstrap method is becoming a popular means of constructing confidence intervals in statistical analyses and is based on a resampling procedure from an original data set. Many bootstrap replicates of the quantity of interest, such as a scalar verification measure, are calculated from the resampled data sets, and the distribution of the quantity is then estimated from these replicates. The confidence intervals are calculated by estimating the $\alpha/2$ and $1-\alpha/2$ quantiles of this distribution. This means that an assumption about the distribution is not required in the bootstrap methods, which is a major advantage over the other methods. Details of the method have appeared in textbooks (e.g., Efron \& Tibshirani 1993). While there are many types of bootstrap confidence intervals, we use a BCa confidence interval in this study (BCa stands for ``bias-corrected and accelerated").
Because the BCa confidence interval includes corrections for the bias and skew of the bootstrap distribution, it has second-order accuracy\footnote{Other bootstrap confidence intervals with second-order accuracy, such as the bootstrap-{\it t} confidence interval and the ABC confidence interval, have also been proposed (e.g., DiCiccio \& Efron 1996).} in the sample number $n$, whereas the simplest bootstrap confidence interval has an accuracy of only first order in $n$ (e.g., DiCiccio \& Efron 1996). Bootstrap samples, which are randomly drawn with replacement from the original data set, are produced by a Monte Carlo method. The size of each bootstrap sample is the same as that of the original data set (5844), and the number of bootstrap replicates is 10,000 in this study. In statistical analyses, 95\% confidence intervals ($\alpha=0.05$) are often used, so we also estimate 95\% confidence intervals in this study. \par \subsection{Distribution-oriented approach} \label{sec:distribution-oriented} In a distribution-oriented approach, a contingency table is regarded as a joint probability distribution for a pair of forecasts and observations, which is derived by dividing the elements of the contingency table by the total number of cases. The joint probability distribution is related to the attribute of association between forecasts and observations. \par A joint probability distribution can be factorized into a marginal distribution and a conditional probability distribution. There are two types of factorization in the forecast verification framework (Murphy \& Winkler 1987). One is the calibration-refinement factorization, and the other is the likelihood-base rate factorization. In the calibration-refinement factorization, the joint probability is factorized into the marginal probability of the forecasts and the conditional probability of the observations given the forecasts: $p(f,o)=p(o|f)\cdot p(f)$. In the likelihood-base rate factorization, the joint probability is factorized into the marginal probability of the observations and the conditional probability of the forecasts given the observations: $p(f,o)=p(f|o)\cdot p(o)$. Here $p(o|f)$ and $p(f|o)$ are called the calibration distribution and the likelihood distribution, respectively. The calibration distribution is related to the attributes of reliability and resolution, whereas the likelihood distribution is related to the attribute of discrimination. The marginal distributions are related to the attribute of bias. Details of the distribution-oriented approach can be found in Murphy \& Winkler (1987). \par \section{RWCJ forecast verification} \label{sec:RWCJ} \subsection{Verification of overall data} \label{sec:overall} In this section, we describe the results of the verification of the RWCJ forecast using the data accumulated over 16 years. As mentioned in section \ref{sec:dichotomous_measure}, a multi-categorical forecast can be collapsed to a dichotomous forecast by setting a specific threshold when the conventional scalar verification measures are calculated. In this study, we deal with the M- and X-thresholds defined in section \ref{sec:dichotomous_measure}. Table \ref{tbl:verification_measure} summarizes the estimated scalar verification measures with 95\% confidence intervals for the M- and X-thresholds. \par The FB scores show that the forecast is almost unbiased for the M-threshold, meaning that the number of forecasted events is almost the same as the number of observed events. On the other hand, events are underforecast for the X-threshold.
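As an illustration of how such scores and their bootstrap confidence intervals could be reproduced, the following Python sketch resamples the daily forecast-observation pairs and estimates a BCa interval for the PSS using \verb|scipy.stats.bootstrap| (available in recent SciPy versions). The 0/1 indicator arrays here are synthetic placeholders, not the actual RWCJ data:

\begin{verbatim}
import numpy as np
from scipy.stats import bootstrap

def pss(f, o):
    """Peirce skill score from paired 0/1 forecast and observation
    arrays (event = 1, no event = 0)."""
    a = np.sum((f == 1) & (o == 1))   # hits
    b = np.sum((f == 1) & (o == 0))   # false alarms
    c = np.sum((f == 0) & (o == 1))   # misses
    d = np.sum((f == 0) & (o == 0))   # correct rejections
    return a / (a + c) - b / (b + d)

# Synthetic 0/1 indicators standing in for the 5844 daily pairs:
rng = np.random.default_rng(0)
o = (rng.random(5844) < 0.05).astype(int)
f = np.where(rng.random(5844) < 0.7, o, 1 - o)

# Resample the pairs jointly (10,000 replicates, BCa method):
res = bootstrap((f, o), pss, paired=True, vectorized=False,
                n_resamples=10000, confidence_level=0.95,
                method='BCa', random_state=0)
print(res.confidence_interval)
\end{verbatim}

Resampling the forecast-observation pairs jointly preserves their correspondence, which is essential for all of the measures discussed here.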
\par The accuracy of the RWCJ forecast seems to be extremely high, and the X-threshold seems to be more accurate than the M-threshold according to the scores of PC. According to the numbers of hits and correct rejections in Table \ref{tbl:verification_measure}, most of the correct forecasts are forecasts of null events, and null events can be forecasted easily when the event is quite rare. This is why the scores of PC are extremely high, especially for the X-threshold. On the other hand, the X-threshold is less accurate than the M-threshold in terms of the scores of CSI. As described in Appendix 1, the correct rejections are not taken into consideration when calculating the CSI. This is why the scores of CSI show the opposite trend to those of PC. When correct forecasts of null events are not essential, the CSI becomes a good measure of accuracy, although the intuitive meaning of the measure is somewhat ambiguous. \par The POD and POFD are verification measures of discrimination, which is related to the likelihood distribution of the dichotomous contingency table. Discrimination means the ability of a forecast system to distinguish between situations in which an event occurs and those in which it does not. The scores of POD and POFD for the M- and X-thresholds are shown in Table \ref{tbl:verification_measure}. The POD for the M-threshold is reasonably good, meaning that the RWCJ forecast can discriminate between the occurrence of below M-class flares and M- or X-class flares. On the other hand, the POD for the X-threshold is small, meaning that the RWCJ forecast cannot effectively discriminate between the occurrence of below X-class and X-class flares. Because the POFD is a negative orientation measure, a small POFD means better performance. From the scores of POFD, it appears that the X-threshold has a better performance than the M-threshold. However, this is an invalid conclusion because the rarer the event, the larger the number of null events and the smaller the POFD. This means that the POFD approaches zero regardless of the discrimination performance when the event frequency approaches zero. \par The reliability of a forecast can be expressed by the FAR. The FAR pertains to the relationship between a forecast and the average observation given that specific forecast, i.e., to the calibration distribution of the dichotomous contingency table. The FAR is a negative orientation measure. For the RWCJ forecast, the X-threshold shows poor reliability, as seen in Table \ref{tbl:verification_measure}: 55--75\% of the alarms issued for the occurrence of X-class flares were false alarms. This rate may be too high when the forecast is used for decision making, for example, to activate countermeasures. \par The ETS, HSS, and PSS are all verification measures of skill. All of them have significantly positive values, and all have larger scores for the M-threshold than for the X-threshold. It therefore seems that the forecast skill for the M-threshold is better than that for the X-threshold. However, according to Stephenson et al. (2008), the scores of ETS, HSS, and PSS tend to degenerate to zero irrespective of the actual skill and behave as the trivial non-informative limit for vanishingly rare events. Therefore, we have to pay attention to the interpretation of the scores of the skill measures when comparing scores for different event frequencies, such as the M- and X-thresholds.
\par The association describes the overall strength of the relationship between the individual pairs of forecasts and observations. According to the ORSS score, the association between the forecasts and the observations is reasonably strong for both the M- and X-thresholds. The ORSS for the X-threshold is higher than that for the M-threshold, which is easily accounted for by the definition of the ORSS (see Appendix 1). When the POFD is much smaller than the POD, as for the X-threshold, the ORSS approaches one. However, when the POFD is of the same order of magnitude as the POD, the ORSS approaches zero even when the POFD itself is small (see Figure \ref{fig:3modelX}). We note that association and accuracy may seem to be the same attribute; however, they are different. The accuracy describes the correspondence between the individual pairs of forecasts and observations, while the association describes the overall strength of the relationship between them. \par The SEDI was proposed by Ferro \& Stephenson (2011) and was designed to verify the performance of extremely rare event forecasts. The advantage of this measure is that the score for vanishingly rare events does not converge to zero but to a non-trivial meaningful value, whereas conventional measures such as the HSS, PSS, and ETS tend to zero in that limit. The scores of the SEDI in Table \ref{tbl:verification_measure} show that the RWCJ forecast for rare events is significantly better than random forecasting, for which the score of the SEDI is zero. \par In the rest of this subsection, a verification of the four-categorical forecast is given. Figure \ref{fig:ct_joint} depicts the joint probability distribution $p(f,o)$ for the RWCJ forecast calculated from the four-categorical contingency table in Table \ref{tbl:ct}. A good association between the RWCJ forecast and the observation can be seen in the figure. The correlation coefficient (CC) between them is estimated to be 0.717 with a 95\% confidence interval of [0.703, 0.730]. \par Figure \ref{fig:ct_marginal} shows the marginal distributions of the RWCJ forecast and observation. The figure shows that the RWCJ forecast is almost unbiased, although there is slight overforecasting (underforecasting) for the M-class (X-class) flare. Figures \ref{fig:ct_calibration} and \ref{fig:ct_likelihood} show the calibration distribution and the likelihood distribution, respectively. The black dots with numbers connected by the line in Figures \ref{fig:ct_calibration} and \ref{fig:ct_likelihood} are the conditional expectation values of the observation given the forecast and of the forecast given the observation, respectively. We can recognize from Figure \ref{fig:ct_calibration} that the flare class that most frequently occurred under the condition that a specific flare class had been forecasted was the same as the forecasted flare class, except when an X-class flare had been forecasted. When X-class flares had been forecasted, the most frequently occurring flares were M-class flares. This means that the reliability of the RWCJ forecast is good for below X-class flares but not good for X-class flares. As shown in Figure \ref{fig:ct_likelihood}, under the condition that a specific class flare occurred, the most frequently forecasted flare class was the same as the observed class, except when an X-class flare occurred. The most frequently forecasted flare class under the condition that an X-class flare occurred was an M-class flare.
This means that the RWCJ forecast cannot successfully discriminate between the occurrences of X-class and below X-class flares. \par The GMGS was calculated from the four-categorical contingency table in Table \ref{tbl:ct}. As already mentioned, because the GMGS satisfies equitability, the score for unskillful forecasts becomes zero. For the RWCJ forecast, as shown in Table \ref{tbl:verification_measure}, the score is significantly larger than zero, meaning that the RWCJ forecast has some forecast skill. \par \subsection{Comparison with persistence and recurrence method} \label{sec:comp} It was shown in section \ref{sec:overall} that the RWCJ forecast seems to have reasonable performance. In this subsection, we compare the performance of the RWCJ forecast with those of two other forecasting methods: a persistence method and a recurrence method. In the persistence method, today's forecast is the same as yesterday's observation result. In the recurrence method, today's forecast is the same as the observation result 27 days ago. As solar flare activity is not independent between consecutive days, the persistence method is expected to have some forecast performance. As the solar rotation period when viewed from Earth is almost 27 days, the recurrence method is also expected to have some forecast performance. Therefore, it is useful to compare the three forecasting methods to assess the RWCJ forecast performance. \par The top panel of Figure \ref{fig:3modelM} shows comparisons of the various verification measures among the three forecasting methods for the M-threshold. The magenta, cyan, and yellow bars show the resultant scores for the RWCJ forecast and the persistence and recurrence methods, respectively. The black intervals drawn on the colored bars stand for the 95\% confidence intervals of the scores. We can easily recognize that all three methods are skillful forecasts because all the scores of ETS, HSS, and PSS are positive. All the verification measures, except for the FB, for the recurrence method have significantly worse scores than those of the RWCJ forecast (the FAR and POFD are negative orientation measures), meaning that the performance of the RWCJ forecast is significantly better than that of the recurrence method. On the other hand, the differences in the scores between the RWCJ forecast and the persistence method are small for all the verification measures, and the 95\% confidence intervals overlap each other. To investigate the difference in scores between the RWCJ forecast and the persistence method more precisely, we show the differences in the various scores between the RWCJ forecast and the other two methods with 95\% confidence intervals in the bottom panel of Figure \ref{fig:3modelM}. The red bars stand for the differences in scores between the RWCJ forecast and the persistence method, while the blue bars stand for the differences between the RWCJ forecast and the recurrence method. We can recognize that all the verification measures have slightly better scores for the RWCJ forecast than for the persistence method. However, the 95\% confidence intervals for some measures, such as the PC, FAR, POFD, and ORSS, include the zero score. This means that we cannot definitely conclude that there are significant differences in the scores between the RWCJ forecast and the persistence method.
What we can conclude from the results for the performances of the RWCJ forecast and the persistence method is that (1) there is no significant difference in accuracy, but the accuracy of the RWCJ forecast is slightly better when correct forecasts of null events are not essential, (2) discrimination is slightly better for the RWCJ forecast, (3) there is no significant difference in reliability, (4) the RWCJ forecast has slightly better skill than the persistence method, (5) there is very slight or no significant difference in association, and (6) the performance of extreme event forecasts is slightly better for the RWCJ forecast. In summary, the RWCJ forecast for the M-threshold seems to have a slightly better performance than the persistence method. \par Figure \ref{fig:3modelX} is the same as Figure \ref{fig:3modelM} except that the event definition is the X-threshold. From the top panel of Figure \ref{fig:3modelX}, we can immediately recognize that the RWCJ forecast and the persistence method have some forecast performance, whereas the recurrence method has none. This may mean that the recurrence of an active region that produced an X-class flare during the last solar rotation period provides little information on further X-class flare productivity. The differences in scores between the RWCJ forecast and the persistence method are shown as red bars in the bottom panel of Figure \ref{fig:3modelX}. For almost all verification measures, the 95\% confidence intervals include the zero score, meaning that we cannot definitely conclude that there is a significant difference between the performances of the RWCJ forecast and the persistence method for the X-threshold. \par The scores of the GMGS for the three forecast methods are included in Figures \ref{fig:3modelM} and \ref{fig:3modelX}. The scores drawn in these two figures are exactly the same because the GMGS is calculated directly from the four-categorical contingency table, not from a collapsed dichotomous contingency table. Therefore, the measure expresses the performance of the four-categorical forecast system. The scores of the GMGS show that all three forecast methods have some skill as four-categorical forecast systems. The skill of the recurrence method is significantly worse than that of the other two methods. The difference between the RWCJ forecast and the persistence method may not be significant. \par \subsection{Verification of subset data} \label{sec:subset} Hamill \& Juras (2006) revealed that some conventional scalar verification measures, such as the ETS, are prone to give an unexpectedly increased score when the climatological frequency of event occurrence varies among pooled samples. In the case of solar flare forecasts, the climatological frequency of event occurrence appears to vary chronologically owing to changing solar activity. Therefore, we also apply the verification study to subset data, which are divided by solar activity level. Figure \ref{fig:event_freq} depicts the chronological history of the M-threshold event occurrence defined in section \ref{sec:overall}. The blue vertical lines show the maximum M-threshold solar flare events within 24 hours from the forecast issue time. The red dots show the cumulative number of maximum M-threshold events counted from 1 January 2000, plotted against the event occurrence date. The slope of the red dots can be regarded as the climatological frequency of event occurrence.
We can clearly recognize that there are four separate periods during which the event frequencies are almost constant, as shown by the dashed gray lines. Therefore, we divide the 16 years of data into four subsets: 2000-2002 (subset-1), 2003-2005 (subset-2), 2006-2010 (subset-3), and 2011-2015 (subset-4). \par Figures \ref{fig:subsetM} and \ref{fig:subsetX} show the scores of the various verification measures for the four subsets, with the whole dataset as a reference, for the M- and X-thresholds, respectively. The cyan, yellow, red, and blue bars are for subset-1 through subset-4, respectively, with magenta used for the whole dataset. For the M-threshold, the FB shows overforecasting for subset-1 and subset-2 and underforecasting for subset-3 and subset-4, although subset-3 has a very wide 95\% confidence interval. According to the PC, the accuracy seems to be the best for subset-3. However, the high score of subset-3 is ascribed to correct forecasts of null events, because the CSI of subset-3 is significantly lower than those of the other three subsets. Comparing subset-2 and subset-4, which have almost the same event frequency, the discrimination shown by the POD is better for subset-2 than for subset-4. From the POD and FAR, we can recognize that subset-3 has lower discrimination and reliability than the other three subsets. For the forecast skill, we can see from the ETS, HSS, and PSS that subset-2 seems to have the best forecast skill among the four subsets, although a slight overlap of the 95\% confidence intervals exists. The SEDI shows that the performance of the extreme event forecast ranks subset-2, subset-4, and subset-1 in descending order, with subset-3 excluded because of its large uncertainty. For the X-threshold, the FB shows large underforecasting for subset-1 and subset-4. The accuracy shown by the CSI ranks subset-2, subset-4, and subset-1 in descending order, although the 95\% confidence intervals overlap each other. The low PODs for subset-1 and subset-4 can be ascribed to the large underforecasting. There seems to be no significant difference in reliability among the subsets because all the 95\% confidence intervals of the FAR overlap each other. It seems that subset-2 has the best forecast skill among the subsets except for subset-3, on whose forecast skill we cannot comment because its 95\% confidence intervals are extremely wide. The SEDI has the same pattern as the POD because the POFDs for all subsets have almost the same values and the SEDI is calculated from only the POD and POFD. Regarding the skill of the four-categorical forecast, the GMGS shows that the best forecast skill seems to be for subset-2, although the upper limit of the relatively wide confidence interval of subset-3 is higher than that of subset-2. \par As already shown in section \ref{sec:comp}, the skill score of the RWCJ forecast is often similar to that of the persistence method. This may imply that the relatively high score of subset-2 is due to the persistence of event occurrence, because the solar activity during the period of subset-2 was reasonably high. To investigate this point, the differences in scores between the RWCJ forecast and the persistence method for each subset for the M-threshold are drawn in Figure \ref{fig:subset_diffM}. For most of the verification measures, a zero score is included in the 95\% confidence interval, meaning that we cannot conclude that there are significant differences between the RWCJ forecast and the persistence method for any of the subsets. The comparison of the subsets for a specific verification measure is also important.
For the verification measures of skill (ETS, HSS, and PSS), the largest differences in scores appear in subset-4. For subset-2, the difference in scores is positive but the smallest among all the subsets except for subset-3. Although we cannot give a definitive conclusion because the differences among the subsets are small and the confidence intervals overlap each other, the relatively high score of subset-2 is probably due to the persistence of event occurrence. Moreover, the best {\it judgment} skill, which is defined in Section \ref{sec:discussion}, for the M-threshold may be in subset-4 because the differences in the scores of the ETS, HSS, and PSS between the RWCJ forecast and the persistence method for subset-4 are the largest among those for all subsets. For the X-threshold, we cannot comment on the significance of the score differences because the confidence intervals are extremely wide compared with the score differences (not shown). \par \section{Discussion} \label{sec:discussion} As already mentioned in section \ref{sec:comp}, the persistence method, as well as the RWCJ forecast, is a skillful forecast method. However, the persistence method is determined by only the observation result of the previous day; that is, the persistence method is a skillful forecast method {\it without} judgment. On the other hand, the RWCJ forecast (and other operational solar flare forecasts) includes a judgment process to determine the issued forecast. If the skill scores of an operational solar flare forecast are smaller than those of the persistence method, the contribution of the judgment of flare occurrence to the operational solar flare forecast is almost zero. Therefore, it would be better to assess the {\it judgment} skill of the operational solar flare forecast in addition to the forecast skills, which are assessed by the verification measures ETS, HSS, and PSS. In sections \ref{sec:comp} and \ref{sec:subset}, we estimated the differences in the scores between the RWCJ forecast and the persistence method as one assessment of the judgment skill. It is also useful to define a skill measure that takes the persistence method as its reference forecast (called a judgment skill measure henceforth). Similar to the HSS, the judgment skill measure (JS) based on the PC is defined as \begin{equation} JS=\frac{a-a_p+d-d_p}{a-a_p+b+c+d-d_p}=\frac{PC-PC_p}{1-PC_p}, \end{equation} where $a$, $b$, $c$, and $d$ are the elements of the dichotomous contingency table (see Table \ref{tbl:dichoto_table}). The elements with subscript ``$p$'' are those for the persistence method. The scores of the JS are 1 and 0 for a perfect forecast and the persistence method, respectively. However, for a perfectly incorrect forecast, the score of the JS is not $-1$ but depends on the score of the persistence method. One of the most important characteristics that a verification measure of skill should have is equitability (Gandin \& Murphy 1992). Equitability means that unskillful forecasts, such as forecasting ``yes'' every time, forecasting ``no'' every time, or forecasting at random, must all receive the same score. The JS, however, does not satisfy equitability, which is not a good characteristic for a verification measure of skill; the JS should therefore be used with this limitation in mind. For the RWCJ forecast for the overall data, the JS is estimated to be 0.0639 [$-0.0305$, 0.145] for the M-threshold and 0.223 [0.0843, 0.332] for the X-threshold.
Since the 95\% confidence intervals of the JS scores for the M- and X-thresholds largely overlap, the difference in the JS scores is probably not significant. We have to pay attention to the interpretation of the JS for different event thresholds when PC$\sim$PC$_p$, because the denominator of the JS is then dominated by the numbers of false alarms ($b$) and misses ($c$), which can be small for rare event forecasts, so the JS can be large irrespective of the actual judgment skill. Similar to the ETS, a judgment skill measure based on the CSI can also be considered, whose definition is the same as that of the ETS except with $a_p$ replacing $a_r$. However, this formulation has a critical defect. When the judgment skill is very poor, it may be possible that $a_p$ is larger than $a+b+c$. In this case, both the numerator and the denominator are negative, so the formula is positive despite the very poor judgment skill. Therefore, the formulation based on the CSI cannot be used to assess the judgment skill. \par As the RWCJ forecast is a four-categorical forecast, a performance assessment of only the collapsed dichotomous forecast is insufficient, and a direct assessment of the four-categorical contingency table is required in addition to the verification as a dichotomous forecast with some thresholds. However, only a small number of scalar verification measures for a multi-categorical contingency table have been proposed, for example, the GMGS as a measure of skill. For accuracy, we can use the proportion correct extended to a multi-categorical contingency table (PC$_{\mathrm m}$), which is defined as PC$_{\mathrm m}$=$\sum_i p_{ii}$, where $p_{ii}$ are the diagonal elements of the joint probability distribution for the multi-categorical contingency table (e.g., Jolliffe \& Stephenson 2003). No scalar verification measures for other attributes such as reliability, resolution, and discrimination have been proposed yet. Therefore, a distribution-oriented approach such as the discussion of the joint probability distribution in section \ref{sec:overall} is also required. We propose a verification strategy for a multi-categorical deterministic operational solar flare forecast as follows: marginal distributions for bias, PC$_{\mathrm m}$ for accuracy, the CC and the joint probability distribution for association, the likelihood distribution for discrimination, the calibration distribution for reliability and resolution, the GMGS for forecast skill, and JS$_{\mathrm m}$ for judgment skill (whose definition is the same as that of the JS except with PC replaced by PC$_{\mathrm m}$ and PC$_{\mathrm p}$ replaced by PC$_{\mathrm{mp}}$), in addition to the verification as dichotomous forecasts with M- and X-thresholds. The scores of the proposed scalar verification measures for the overall data of the multi-categorical RWCJ forecast are summarized in Table \ref{tbl:multi_score}. The accuracy of the four-categorical forecast is reasonably good because PC$_{\mathrm m}$ is 68\% to 71\%. According to the CC and the joint probability distribution, the association of the four-categorical forecast is also reasonably good. However, the forecast skill is not so high, although it may be acceptable depending on user needs. As the GMGS imposes a larger penalty on multi-category errors than on one-category errors, reducing multi-category errors will lead to a higher score. Although the confidence interval of the JS$_{\mathrm m}$ does not include zero, the judgment skill of the four-categorical forecast is small.
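As an illustration, the following is a minimal Python sketch of the JS computation; the contingency-table counts for the operational forecast and the persistence method are invented for illustration only.
\begin{verbatim}
def proportion_correct(a, b, c, d):
    # a: hits, b: false alarms, c: misses, d: correct rejections
    return (a + d) / float(a + b + c + d)

def judgment_skill(table_fc, table_p):
    # JS = (PC - PC_p) / (1 - PC_p): 1 for a perfect forecast,
    # 0 for the persistence method used as the reference
    pc = proportion_correct(*table_fc)
    pc_p = proportion_correct(*table_p)
    return (pc - pc_p) / (1.0 - pc_p)

# invented counts (a, b, c, d) for the forecast and for persistence
print(judgment_skill((55, 40, 35, 870), (50, 45, 40, 865)))
\end{verbatim}
The same function gives JS$_{\mathrm m}$ when the proportion correct is replaced by PC$_{\mathrm m}$ computed from the four-categorical table.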
\par On the other hand, the dichotomous forecast verification strategy for a deterministic operational solar flare forecast is still under discussion. As already mentioned, many verification measures for a dichotomous deterministic forecast have been proposed since the paper by Finley (1884). Because all verification measures have both advantages and disadvantages, it is difficult to determine which verification measure is best for operational solar flare forecasting. As we already mentioned, the CSI is more suitable than the PC for rare event forecasting. However, the intuitive meaning of the CSI is somewhat ambiguous, while the meaning of the PC is completely clear: it is the percentage of correct forecasts over all forecasts. For a verification measure of skill, the ETS is commonly used in the terrestrial weather forecasting community, while the PSS is recommended by Bloomfield et al. (2012) in the space weather forecasting community. The PSS is a base-rate-independent measure, which is a good characteristic for a verification measure of skill, while the ETS is a base-rate-dependent measure. On the other hand, while the PSS is designed on the basis of the PC, the ETS is defined on the basis of the CSI. It is therefore a difficult question which measure is more suitable for verifying rare event forecasts. These kinds of discussions appear frequently for other verification measures. With this situation in mind, we propose a verification strategy for a dichotomous forecast of rare events as follows: FB for bias, PC and CSI for accuracy, POD for discrimination, FAR for reliability, PSS for forecast skill, and SEDI for association. As the SEDI is formally a non-linear transformation of the OR (Ferro \& Stephenson 2011), although its derivation is completely different, the SEDI can be regarded as a verification measure of association for rare event forecasts. For the verification measure of skill, although there is much discussion of their usability, the PSS is selected, following Bloomfield et al. (2012). We have to note that this is just one suggestion. We think that further research on the verification measures themselves is needed to determine the best verification strategy for a dichotomous deterministic operational solar flare forecast. As already mentioned in Section \ref{sec:intro}, comparing forecast performances among RWCs is informative. We briefly discuss a comparison between the RWCJ forecast and the RWC Belgium (ROB) solar flare forecast (Devos et al., 2014). As the verification study of RWC USA (Crown 2012) is for a probabilistic solar flare forecast, it cannot be compared directly with the RWCJ forecast. Scores of some verification measures for the M-threshold for ROB are shown in Table 3 of Devos et al. (2014).
A comparison between the scores for the ROB and RWCJ forecasts (Table \ref{tbl:verification_measure} in this article) shows the following results: (1) RWCJ is almost unbiased while ROB is obviously underforecasting; (2) the accuracy of RWCJ is a little worse than that of ROB according to the PC, but excluding trivial correct forecasts (correct rejections) leads to the reverse result (the CSI of ROB is estimated as 0.311); (3) the discrimination of RWCJ is obviously better than that of ROB according to the POD; (4) the reliability of RWCJ is somewhat worse than that of ROB because the FAR of RWCJ is larger than that of ROB; (5) the forecast skill of RWCJ is somewhat better than that of ROB; (6) the performance of the extreme event forecast of RWCJ is slightly better than that of ROB because the SEDI of ROB is estimated as 0.594 from their Table 3. The obvious underforecasting of ROB may lead to its small POD and FAR. Because a large overforecasting (underforecasting) sometimes leads to a large POD and FAR (a small POD and FAR), an unbiased forecast is required for a good forecasting system. We should stress that the period of verified data used in Devos et al. (2014) is different from that of this study. \par The RWCJ forecast is a four-categorical deterministic forecast, as defined by ISES. However, some readers may think that only the M- and X-threshold forecasts are needed because B- or C-class flares will not affect social infrastructures. This claim may be true. On the other hand, the M- and X-threshold forecasts can easily be made by setting thresholds in the four-categorical forecast. A no (or almost no) flare forecast, such as a forecast of at most a B-class flare, can likewise be made from the four-categorical forecast by setting the threshold between the categories of B- and C-class flares. Therefore, the various threshold forecasts that forecast users require can be made from the four-categorical forecast. As many RWCs belonging to ISES have issued four-categorical forecasts, issuing the four-categorical forecast is preferable for forecast verification research among RWCs. Another preferable forecast is a probabilistic forecast. Because a deterministic solar flare forecast based on the underlying physics is unlikely to be realized, a probabilistic forecast is essential (Kubo 2008). We think that a future direction of our operational solar flare forecast will be a probabilistic forecast. \par A forecast performance comparison with the persistence and recurrence methods was performed in this study. This is a common approach in verification studies of space weather as well as terrestrial weather forecasting, for example, Devos et al. (2014). This approach is also justified by scientific research on solar flare occurrence. McCloskey et al. (2016) showed that the time evolution of flare occurrence is an important factor for flare forecasting. This fact is probably related to the skillfulness of the persistence method. D\'emoulin et al. (2002) and Green et al. (2002) showed, by analyzing long-lifetime active regions, that the magnetic flux and helicity stored in an active region, which are expelled by solar flares accompanied by coronal mass ejections, are often reduced during successive solar rotations. This fact may be related to the recurrence method being an almost unskillful forecast, as shown by the X-threshold verification of the recurrence method in this study.
\par \section{Summary} \label{sec:summary} A verification study of the operational solar flare forecast in the Regional Warning Center Japan (the RWCJ forecast) was performed for the first time. Forecast and observation pair data accumulated over the 16 years from 2000 to 2015 were used in the study. We estimated various types of scalar verification measures, with 95\% confidence intervals, for the overall data of the RWCJ forecast, and they were compared with those of the persistence and recurrence methods. The performance of the recurrence method is significantly worse than that of the RWCJ forecast. However, the score difference in various verification measures between the RWCJ forecast and the persistence method is small, and we could not conclude definitely that there were significant performance differences between these two forecast methods, although a marginally significant difference was found for the M-threshold events. We also compared various types of scalar verification measures among four subsets of the data, within each of which the long-term event frequency was almost constant. The forecast skill for 2003-2005 seemed to be the best among the four subsets; however, the better forecast skill for 2003-2005 seemed to be due to the persistence of solar activity. The judgment skill seemed to be the best during 2011-2015. Finally, we proposed the use of the judgment skill measure to assess the judgment skill of an operational solar flare forecast, and verification strategies for dichotomous and multi-categorical operational solar flare forecasts. \par \section*{Appendix 1} We briefly introduce the definitions of the scalar verification measures for the dichotomous forecast used in this study. Table \ref{tbl:dichoto_table} is a contingency table for the dichotomous forecast. \begin{description} \item[\parbox{6in}{Base rate (S).}] $$S=\frac{a+c}{a+b+c+d}=p(o)$$ \item[\parbox{6in}{Probability of detection (POD). Measure of discrimination.}] $$POD=\frac{a}{a+c}=p(f\ |\ o)$$ \item[\parbox{6in}{Probability of false detection (POFD). Measure of discrimination.}] $$POFD=\frac{b}{b+d}=p(f\ |\ \overline{o})$$ \item[\parbox{6in}{False alarm ratio (FAR). Measure of reliability.}] $$FAR=\frac{b}{a+b}=\frac{(1-S)POFD}{S\cdot POD+(1-S)POFD}=p(\overline{o}\ |\ f)$$ \item[\parbox{6in}{Proportion correct (PC). Measure of accuracy.}] $$PC=\frac{a+d}{a+b+c+d}=S\cdot POD+(1-S)(1-POFD)=p(f,o)+p(\overline{f},\overline{o})$$ \item[\parbox{6in}{Critical success index (CSI), also known as the threat score. Measure of accuracy.}] $$CSI=\frac{a}{a+b+c}=\frac{S\cdot POD}{S+(1-S)POFD}=p\left(f,o\ \Big|\ \overline{\overline{f},\overline{o}}\right)$$ \item[\parbox{6in}{Frequency bias (FB). Measure of bias.}] $$FB=\frac{a+b}{a+c}=POD+\frac{1-S}{S}POFD=\frac{p(f)}{p(o)}$$ \item[\parbox{6in}{Equitable threat score (ETS), also known as the Gilbert skill score. Measure of skill.}] $$ETS=\frac{a-a_r}{a-a_r+b+c}=\frac{S(1-S)(POD-POFD)}{S(1-S\cdot POD)+(1-S)^2POFD}$$ \par $$a_r=\frac{(a+b)(a+c)}{a+b+c+d}$$ \item[\parbox{6in}{Heidke skill score (HSS). Measure of skill.}] $$HSS=\frac{PC-PC_r}{1-PC_r}=\frac{2S(1-S)(POD-POFD)}{S+S(1-2S)POD+(1-S)(1-2S)POFD}$$\par $$PC_r=\frac{(a+c)(a+b)+(b+d)(c+d)}{(a+b+c+d)^2}$$ \item[\parbox{6in}{Peirce skill score (PSS), also known as the true skill statistic. Measure of skill.}] $$PSS=\frac{PC-PC_r}{1-PC_c}=POD-POFD$$ \par $$PC_r=\frac{(a+c)(a+b)+(b+d)(c+d)}{(a+b+c+d)^2}\quad PC_c=\frac{(a+c)^2+(b+d)^2}{(a+b+c+d)^2}$$ \item[\parbox{6in}{Odds ratio skill score (ORSS). Measure of association.}] $$ORSS=\frac{ad-bc}{ad+bc}=\frac{POD-POFD}{POD(1-POFD)+POFD(1-POD)}$$ \item[\parbox{6in}{Symmetric extremal dependence index (SEDI). Measure of the performance of extreme event forecasts. This measure is undefined when any element in the contingency table is zero.}] $$SEDI=\frac{\log [POFD(1-POD)]-\log [POD(1-POFD)]}{\log [POFD(1-POD)]+\log [POD(1-POFD)]}$$ \end{description}
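As a practical illustration, the following minimal Python sketch shows how the scalar measures above follow from the four elements $a$, $b$, $c$, and $d$ of the contingency table; the counts are invented for illustration only.
\begin{verbatim}
def dichotomous_measures(a, b, c, d):
    # a: hits, b: false alarms, c: misses, d: correct rejections
    n = float(a + b + c + d)
    s = (a + c) / n                      # base rate
    pod = a / float(a + c)               # probability of detection
    pofd = b / float(b + d)              # probability of false detection
    far = b / float(a + b)               # false alarm ratio
    pc = (a + d) / n                     # proportion correct
    csi = a / float(a + b + c)           # critical success index
    fb = (a + b) / float(a + c)          # frequency bias
    a_r = (a + b) * (a + c) / n          # random hits
    ets = (a - a_r) / (a - a_r + b + c)  # equitable threat score
    pc_r = ((a + c) * (a + b) + (b + d) * (c + d)) / n**2
    hss = (pc - pc_r) / (1.0 - pc_r)     # Heidke skill score
    pss = pod - pofd                     # Peirce skill score
    return dict(S=s, POD=pod, POFD=pofd, FAR=far, PC=pc, CSI=csi,
                FB=fb, ETS=ets, HSS=hss, PSS=pss)

print(dichotomous_measures(55, 40, 35, 870))  # hypothetical counts
\end{verbatim}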
\section*{Appendix 2} We briefly introduce the definition of the Gandin-Murphy-Gerrity score (GMGS) used in this study. The content of this Appendix follows Gandin \& Murphy (1992) and Gerrity (1992). \par Define an $N$-categorical contingency table ${\bf P}$ with elements $p_{ij}$, where $p_{ij}$ is the relative frequency of an observation falling in category $i$ and a forecast falling in category $j$. The GMGS is calculated as \begin{equation} GMGS={\rm Tr}\ ({\bf S^{T}}\cdot {\bf P}), \label{eq:gmgs} \end{equation} where ${\bf S}$ is an $N$-rank scoring matrix with elements $s_{ij}$. The $s_{ij}$ are determined so that the GMGS satisfies the equitability conditions, which are written as follows: \begin{equation} \sum_{i=1}^{N} s_{ji} p_i=0; \ j=1,\cdots,N, \label{eq:gmgs_noskill} \end{equation} \begin{equation} \sum_{i=1}^{N} s_{ii} p_i=1, \label{eq:gmgs_perfect} \end{equation} where $p_i$ is the relative observation frequency of category $i$, defined as \begin{equation} p_{i}=\sum_{j=1}^{N} p_{ij}. \label{eq:gmgs_defpi} \end{equation} Equation (\ref{eq:gmgs_noskill}) means that the score of the GMGS vanishes for a forecast that always issues category $j$. Equation (\ref{eq:gmgs_perfect}) means that the score of the GMGS for a perfect forecast is one. Symmetry of the scoring matrix ($s_{ij}=s_{ji}$) is also imposed. \par Gerrity (1992) found a closed-form scoring matrix satisfying all the conditions described above. The scoring matrix is defined as \begin{equation} a_i=\frac{1-\sum_{k=1}^{i}p_k}{\sum_{k=1}^{i}p_k}; \ i=1,\cdots, N, \end{equation} \begin{equation} s_{ii}=\frac{1}{N-1}\left[\sum_{k=1}^{i-1}a_k^{-1}+\sum_{k=i}^{N-1}a_k\right]; \ i=1,\cdots, N, \end{equation} \begin{equation} s_{ij}=\frac{1}{N-1}\left[\sum_{k=1}^{i-1}a_k^{-1}+\sum_{k=i}^{j-1}(-1)+\sum_{k=j}^{N-1}a_k\right]; \ 1\le i<j\le N, \end{equation} \begin{equation} s_{ji}=s_{ij}. \end{equation} As we can recognize from the definition, the elements of the scoring matrix are determined by only the observation frequencies, so they do not depend on the forecast frequencies. \par The GMGS has a score of zero for an unskillful forecast and one for a perfect forecast. The GMGS for an $N$-categorical contingency table is mathematically equal to the arithmetic mean of the $N-1$ PSSs calculated from the dichotomous contingency tables collapsed with threshold $k$ ($k=1,\cdots,N-1$). Therefore, the GMGS reduces to the PSS for a dichotomous forecast (i.e. $N=2$). \par
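As an illustration, the following is a minimal Python sketch of the Gerrity scoring matrix and the resulting GMGS; the four-categorical joint distribution is invented for illustration only.
\begin{verbatim}
import numpy as np

def gmgs(P):
    # P[i, j]: relative frequency of observation category i and
    # forecast category j (0-based indices; P sums to one)
    N = P.shape[0]
    p = P.sum(axis=1)                    # observed category frequencies
    cum = np.cumsum(p)[:-1]
    a = (1.0 - cum) / cum                # Gerrity a_k for k = 1..N-1
    S = np.zeros((N, N))
    for i in range(N):
        for j in range(i, N):
            S[i, j] = (np.sum(1.0 / a[:i]) - (j - i)
                       + np.sum(a[j:])) / (N - 1)
            S[j, i] = S[i, j]            # symmetric scoring matrix
    return np.trace(S.T @ P)

# hypothetical 4-category joint distribution (rows: observed)
P = np.array([[0.50, 0.05, 0.01, 0.00],
              [0.06, 0.15, 0.04, 0.01],
              [0.01, 0.04, 0.08, 0.02],
              [0.00, 0.01, 0.01, 0.01]])
print(gmgs(P / P.sum()))
\end{verbatim}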
\section*{Appendix 3} We briefly show that a ranked probability score (RPS) applied to a multi-categorical deterministic forecast does not satisfy equitability. \par An RPS for an $N$-categorical probabilistic forecast is defined as \begin{equation} RPS=E\left(\frac{1}{N-1}\sum_{n=1}^{N-1}\left(F_n-O_n\right)^2\right), \label{eq:RPSdef} \end{equation} where $F_n$ and $O_n$ are the cumulative probabilities at category $n$ of the forecast and observation, respectively (e.g., Jolliffe \& Stephenson 2003). The expectation value $E(\cdots)$ is calculated over all forecast-observation pairs. When an observation falls in category $i$, $O_n$ is zero for $n < i$ and one for $n \ge i$. If the forecast probabilities are set to one for category $j$ and zero for the others (i.e., a deterministic forecast of category $j$), $F_n$ is zero for $n < j$ and one for $n \ge j$. Therefore, the RPS applied to deterministic forecasts can be written as \begin{equation} RPS=E\left(\frac{|j-i|}{N-1}\right). \label{eq:RPSdeterministic} \end{equation} When an $N$-categorical contingency table ${\bf P}$ with elements $p_{ij}$, the relative frequency of an observation falling in category $i$ and a forecast falling in category $j$, is constructed from all forecast-observation pairs, equation (\ref{eq:RPSdeterministic}) can be rewritten as \begin{equation} RPS=\sum_{j=1}^{N} \sum_{i=1}^{N} s_{ji}p_{ij} = {\rm Tr}\ ({\bf S^{T}}\cdot {\bf P}), \label{eq:RPStable} \end{equation} where $s_{ij}=|j-i|/(N-1)$. Therefore, the RPS applied to a deterministic forecast can mathematically be expressed in a form similar to that of the GMGS. \par Equitability requires the condition of equation (\ref{eq:gmgs_noskill}) to be satisfied (more precisely, the condition does not require the right-hand side of equation (\ref{eq:gmgs_noskill}) to be zero; a certain constant $c$ is enough). The condition can be written using the $s_{ij}$ determined for the RPS as \begin{equation} \frac{1}{N-1}\sum_{i=1}^{N} |j-i|p_{i}=c; \ j=1,\cdots,N, \label{eq:RPScondition} \end{equation} where $p_i$ is the relative observation frequency of category $i$. It is obviously impossible to satisfy equation (\ref{eq:RPScondition}) for arbitrary $p_i$. Therefore, the RPS applied to a deterministic forecast cannot satisfy the condition of equation (\ref{eq:gmgs_noskill}), and does not satisfy equitability. \par \begin{acknowledgements} We would like to thank SWPC/NOAA for compiling the GOES X-ray flare events list. We also would like to thank the anonymous referees for useful comments that improved the manuscript. The editor thanks David Jackson and an anonymous referee for their assistance in evaluating this paper. \end{acknowledgements}
\section{Analysis} \label{SECII}\label{sec:analysis} \subsection{Single-sample observation} \label{sec:singleSample} We begin by investigating perhaps the simplest Bayesian coherent data analysis: detecting a signal from a known sky position in a single strain sample from each of $N$ gravitational wave observatories. This example will show many of the basic features of the Bayesian analysis, and highlight some of the differences between the Bayesian approach and previous statistics. In the following section we will generalize to a multi-sample search for a signal arriving at an unknown time from an unknown sky position. Consider a single strain sample from each of $N$ detectors, each measurement taken at the moment corresponding to the passage of a postulated plane gravitational wave from some known location on the sky, $(\theta, \phi)$. The measurements are then equal to \cite{GuTi:89} \begin{equation} \mathbf{x}=\mathbf{F}\,\mathbf{h}+\mathbf{e} \, , \label{eqn:ssmodel} \end{equation} where $\mathbf{x}$ is the vector of measurements $[x_1,\ldots,x_N]^T$, the matrix $\mathbf{F}=[[F_1^+,F_1^\times],\ldots,[F_N^+,F_N^\times]]$ contains the antenna responses of the observatories to the postulated gravitational wave strain vector $\mathbf{h}=[h_+,h_\times]^T$, and $\mathbf{e}$ is the noise in each sample. $\mathbf{F}$ is a known function of the source sky direction $(\theta,\phi)$, and the decomposition into $+$ and $\times$ polarizations requires us to choose an arbitrary polarization basis angle $\psi$ for each source sky direction. We wish to distinguish between two hypotheses: $H_0$, that the data contains only noise, and $H_1$, that the data contains a gravitational wave signal. The Bayesian odds ratio \cite{jaynes, gregory} allows us to compare the plausibility of the hypotheses: \begin{equation} \frac{p(H_1|\mathbf{x},I)} {p(H_0|\mathbf{x},I)}= \frac{p(H_1|I)} {p(H_0|I)} \frac{p(\mathbf{x}|H_1,I)} {p(\mathbf{x}|H_0,I)} \label{Bayes_Ratio} \, , \end{equation} where $I$ is a set of unstated but shared assumptions (such as the detector locations, orientations and noise power spectra). If the posterior plausibility ratio is greater than one, $H_1$ is more plausible than $H_0$ and we classify the observation as a detection. If the posterior plausibility ratio is less than one, $H_1$ is less plausible than $H_0$ and we classify the observation as a non-detection. The $p(H|I)$ terms (``plausibility of $H$ assuming $I$'') are the \emph{prior} plausibilities we assign to each hypothesis $H$ on the basis of our knowledge $I$ prior to considering the measurement; for example, our expectation that detectable gravitational waves are rare requires that $p(H_1|I)\ll p(H_0|I)$. The $p(\mathbf{x}|H,I)$ terms (``plausibility of $\mathbf{x}$ assuming $H$ and $I$'') are the probabilities assigned by a hypothesis to the occurrence of a particular observation $\mathbf{x}$. These are sometimes called likelihood functions; they represent the likelihood of a certain measurement being made. The $p(H|\mathbf{x},I)$ terms are the \emph{posterior} plausibilities we assign to the hypotheses in light of the observation. The hypothesis that assigned more probability to the observation becomes more plausible. For notational simplicity we will drop the $I$ in our formulae; the unstated assumptions are implicit.
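As a concrete numerical illustration of the measurement model (\ref{eqn:ssmodel}), consider the following minimal Python sketch; the antenna-response values and strain below are invented for illustration and do not correspond to any real network.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical antenna responses [F+, Fx] of three detectors,
# evaluated at one assumed sky position (theta, phi).
F = np.array([[ 0.52, -0.31],
              [-0.40,  0.65],
              [ 0.17,  0.48]])

h = np.array([2.0, -1.0])      # postulated strain [h+, hx]
e = rng.standard_normal(3)     # unit-variance whitened noise
x = F @ h + e                  # one strain sample per detector
print(x)
\end{verbatim}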
If we make the idealized assumption that the noise in each detector is independent and normally distributed \cite{jaynes, gregory} with zero mean and unit standard deviation, we can then write the following expression for the likelihood $p(\mathbf{x}|H_0)$ \begin{eqnarray} p(\mathbf{x}|H_0)&=&\prod_{i=1}^N p(x_i|H_0)\nonumber\\ &=&\prod_{i=1}^N\frac{1}{\sqrt{2\pi}}\exp(-\frac{1}{2}x_i^2)\nonumber\\ &=&(2\pi)^{-\frac{N}{2}}\exp(-\frac{1}{2}\mathbf{x}^T\mathbf{x})\label{singleNoise} \, , \label{noise_only} \end{eqnarray} where $^T$ denotes matrix transposition. For real detectors, the measurements can be \emph{whitened}, which modifies the effective beam pattern functions $\mathbf{F}$. If we assume that there is a gravitational wave $\mathbf{h}$ present, then after subtracting away the response $\mathbf{F}\,\mathbf{h}$ the data will be distributed as noise, and the likelihood $p(\mathbf{x}|\mathbf{h},H_1)$ becomes \begin{eqnarray} p(\mathbf{x}|\mathbf{h},H_1) &=&(2\pi)^{-\frac{N}{2}}\exp(-\frac{1}{2}(\mathbf{x}-\mathbf{F}\,\mathbf{h})^T (\mathbf{x}-\mathbf{F}\,\mathbf{h})) \label{noiseSignal} \, . \label{noise_signal} \end{eqnarray} Unfortunately, we do not know the signal strain vector $\mathbf{h}$ {\em a priori}. To compute the likelihood of the more general hypothesis, $p(\mathbf{x}|H_1)$, we need to marginalize away these {\it nuisance parameters} \begin{eqnarray} p(\mathbf{x}|H_1) &=&\int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} p(\mathbf{h}|H_1) p(\mathbf{x}|\mathbf{h},H_1) \, \mathrm{d}{h_+} \, \mathrm{d}{h_\times} \, . \label{marginal} \end{eqnarray} The hypothesis resulting from the marginalization integral is an average of the hypotheses for particular signals $\mathbf{h}$, weighted by the prior probability $p(\mathbf{h}|H_1)$ we assign to those signals occurring. A convenient choice of prior is to use a normal distribution for each polarization, with a standard deviation $\sigma$ indicative of the amplitude scale of gravitational waves we hope to detect. Under these assumptions the prior is \begin{eqnarray}\label{wave_distribution} p(\mathbf{h}|H_1) & = & \frac{1}{2\pi\sigma^2}\exp(-\frac{1}{2\sigma^2}\mathbf{h}^T\mathbf{h}) \, . \end{eqnarray} This allows us to perform the marginalization integral analytically \begin{eqnarray} p(\mathbf{x}|H_1) & = & (2\pi)^{-\frac{N}{2}-1}\sigma^{-2} \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} \exp(-\frac{1}{2}((\mathbf{x}-\mathbf{F}\,\mathbf{h})^T (\mathbf{x}-\mathbf{F}\,\mathbf{h}) \nonumber \\ & & \mbox{} +\sigma^{-2}\mathbf{h}^T\mathbf{h})) \, \mathrm{d}{h_+} \, \mathrm{d}{h_\times} \nonumber \\ & = & (2\pi)^{-\frac{N}{2}} |\mathbf{I-K_\mathrm{ss}}|^{\frac{1}{2}} \exp(-\frac{1}{2}\,\mathbf{x}^T (\mathbf{I-K_\mathrm{ss}})\mathbf{x}) \, , \label{eq:simpleP} \end{eqnarray} where \begin{eqnarray} \mathbf{K_\mathrm{ss}} &\equiv& \mathbf{F} (\mathbf{F}^T\mathbf{F}+\sigma^{-2}\mathbf{I})^{-1} \mathbf{F}^T\label{eq:simpleC}. \end{eqnarray} The result is a multivariate normal distribution with covariance matrix $(\mathbf{I-K_\mathrm{ss}})^{-1}$, which quantifies the correlations among the detectors due to the presence of a gravitational wave signal. With both hypotheses defined, we can form the \emph{likelihood ratio} \begin{eqnarray} \Lambda & = & \frac{p(\mathbf{x}|H_1)} {p(\mathbf{x}|H_0)} \nonumber\\ & = & |\mathbf{I-K_\mathrm{ss}}|^\frac12 \exp ( \frac{1}{2}\,\mathbf{x}^T \mathbf{F}(\mathbf{F}^T\mathbf{F}+\sigma^{-2}\mathbf{I})^{-1} \mathbf{F}^T\mathbf{x}) \, .
\, \label{eqn:ssLambda} \label{likelihood_final} \end{eqnarray} Multiplying the likelihood ratio by the prior plausibility ratio $p(H_1)/p(H_0)$ completes the calculation of the Bayesian odds ratio (\ref{Bayes_Ratio}). In the limit $\sigma\rightarrow\infty$ we find that the odds ratio contains the least-squares estimate of the strain \begin{eqnarray} \mathbf{\hat{h}}&=&(\mathbf{F}^T\mathbf{F})^{-1}\mathbf{F}^T\mathbf{x} \, . \end{eqnarray} The odds ratio may then be rewritten in terms of a matched filter for the response to the estimated strain, $\mathbf{x}^T\mathbf{F}\,\mathbf{\hat{h}}$. For finite values of $\sigma$, the odds ratio contains the \emph{Tikhonov regularized} estimate of the strain \cite{Ra:06} \begin{eqnarray} \mathbf{\hat{h}} = (\mathbf{F}^T\mathbf{F}+\sigma^{-2}\mathbf{I})^{-1}\mathbf{F}^T\mathbf{x} \, , \end{eqnarray} and can still be rewritten as a matched filter for this estimate. It is also worth noting the presence in (\ref{eqn:ssLambda}) of the determinant factor $|\mathbf{I-K_\mathrm{ss}}|$. It is independent of the data and depends only on the antenna pattern and the signal model. In particular, it tells us how strongly to weight likelihoods computed for different possible sky positions of the signal. This {\em Occam factor} penalizes sky positions of high sensitivity relative to sky positions of lower sensitivity that give a similar exponential part of the likelihood. The effect is typically small compared to the exponential if the data contain good evidence for a signal, but it can be important for weak signals and for parameter estimation.
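Continuing the numerical sketch above, the likelihood ratio (\ref{eqn:ssLambda}) can be evaluated directly; $\sigma$ is chosen arbitrarily here and all detector values remain hypothetical.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
F = np.array([[ 0.52, -0.31],          # hypothetical responses,
              [-0.40,  0.65],          # as in the earlier sketch
              [ 0.17,  0.48]])
x = F @ np.array([2.0, -1.0]) + rng.standard_normal(3)

sigma = 2.0                            # prior amplitude scale
M = F.T @ F + np.eye(2) / sigma**2
K = F @ np.linalg.solve(M, F.T)        # K_ss of the text
sign, logdet = np.linalg.slogdet(np.eye(3) - K)
log_Lambda = 0.5 * logdet + 0.5 * x @ K @ x
# exp(log_Lambda) times the prior odds p(H1)/p(H0) gives the
# posterior odds ratio; a value above one counts as a detection
\end{verbatim}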
\subsection{General Bayesian model} We now generalize the analysis of the previous section to the case of burst signals of extended duration and unknown source sky direction $(\theta, \phi)$ and arrival time $\tau$ with respect to the centre of the Earth. A global network of $N$ gravitational wave detectors each produce a time-series of $M$ observations with sampling frequency $f_\textrm{s}$, which we pack into a single vector \begin{equation} \fl \mathbf{x}=[x_{1,1},x_{1,2},\ldots,x_{1,M},x_{2,1},x_{2,2},\ldots,x_{2,M},\ldots,x_{N,1},x_{N,2},\ldots,x_{N,M}]^T \ . \end{equation} Our signal model is a generalization of (\ref{eqn:ssmodel}), \begin{eqnarray} \mathbf{x}&=&\mathbf{F}(\tau,\theta,\phi)\cdot\mathbf{h}+\mathbf{e} \, ,\label{eq:linearmodel} \end{eqnarray} where \begin{eqnarray} \mathbf{h}&=&[h_{+,1},h_{+,2},\ldots,h_{+,L},h_{\times,1},\ldots,h_{\times,L}]^T \end{eqnarray} is a time-series of $2 L$ samples describing the band-limited strain waveform (with the two polarizations packed into a single vector), $\mathbf{e}$ is a random variable representing the instrumental noise, and $\mathbf{F}(\tau,\theta,\phi)$ is an $NM\times 2L$ response matrix describing the response of each observatory to an incoming gravitational wave, \begin{eqnarray} \fl \mathbf{F}(\tau,\theta,\phi)&=& \left[ \begin{array}{cc} F^+_1(\theta,\phi)\mathbf{T}(\tau+\Delta\tau_1(\theta,\phi)) & F^\times_1(\theta,\phi)\mathbf{T}(\tau+\Delta\tau_1(\theta,\phi)) \\ F^+_2(\theta,\phi)\mathbf{T}(\tau+\Delta\tau_2(\theta,\phi)) & F^\times_2(\theta,\phi)\mathbf{T}(\tau+\Delta\tau_2(\theta,\phi)) \\ \vdots & \vdots \\ F^+_N(\theta,\phi)\mathbf{T}(\tau+\Delta\tau_N(\theta,\phi)) & F^\times_N(\theta,\phi)\mathbf{T}(\tau+\Delta\tau_N(\theta,\phi)) \end{array} \right] \, . \end{eqnarray} Each $M\times L$ block of the response matrix is responsible for scaling and time shifting one of the waveform polarizations for one detector, so each block is the product of the directional sensitivity of each detector to each polarization, $F^+_i(\theta,\phi)$ or $F^\times_i(\theta,\phi)$, and a time delay matrix $T_{j,k}(t)$ \footnote{ From the assumption that the signal is band-limited, it follows that the time delay matrix may be written as $T_{j,k}(t)=\textrm{sinc}(\pi(j-k-f_\textrm{s}t))$; for $L = M$ and zero time delays, it is equal to the identity matrix; for $L = M$ and time delays corresponding to integer numbers of time samples, it is a \emph{shift matrix}. }, for the source sky direction dependent arrival times $\tau+\Delta\tau_i(\theta,\phi)$ at each detector. \subsection{Noise model} The noise that affects gravitational wave detectors is typically modeled as stationary, colored Gaussian noise that is independent of the signal parameters. This can be represented with a \emph{multivariate normal distribution}, which can be compactly written as \begin{eqnarray} \mathcal{N}(\mathbf{\mu},\mathbf{\Sigma},\mathbf{x})&=&\frac{1}{(2\pi)^{N/2}\sqrt{|\mathbf{\Sigma}|}}\exp(-\frac{1}{2}(\mathbf{x}-\mathbf{\mu})^T\mathbf{\Sigma}^{-1}(\mathbf{x}-\mathbf{\mu})) \, . \end{eqnarray} The vector $\mathbf{\mu}$ is the mean of the distribution, and the positive-definite \emph{covariance matrix} $\mathbf{\Sigma}$ describes the ellipsoidal shape of the constant-density contours of the distribution in terms of the pairwise covariances of the samples, \begin{eqnarray} \mathbf{\Sigma}_{i,j} = \langle(e_i-\mu_i)(e_j-\mu_j)\rangle \, . \end{eqnarray} Using this notation, the noise likelihood is \begin{eqnarray} p(\mathbf{x}|H_0) &=& \mathcal{N}(\mathbf{0},\mathbf{\Sigma},\mathbf{x}) \end{eqnarray} for some $MN\times MN$ positive definite matrix $\mathbf{\Sigma}$. Under the additional assumption of stationarity over some timescale, these covariances can be estimated from previous observations. In the case of Gaussian stationary colored noise, each detector is individually represented by a Toeplitz covariance matrix $\mathbf{\Sigma}^{(i)}$. For uncorrelated noise, the covariance matrix for the whole network is $\mathbf{\Sigma} = \textrm{diag}(\mathbf{\Sigma}^{(1)}, \mathbf{\Sigma}^{(2)},\ldots,\mathbf{\Sigma}^{(N)})$. In the simple case in which all the noises are white, have equal standard deviation and are uncorrelated, we have $\mathbf{\Sigma} = \textrm{diag}(\mathbf{I}, \mathbf{I},\ldots,\mathbf{I})=\mathbf{I}$. The generalization of (\ref{noise_signal}) and (\ref{marginal}) for the signal likelihood is \begin{eqnarray} p(\mathbf{x}|H_1) &=& \int_{V_{\mathbf{h},\tau,\theta,\phi}} \!\!\!\!\!\!\!\!\! \mathcal{N}(\mathbf{F}(\tau,\theta,\phi)\cdot\mathbf{h},\mathbf{\Sigma},\mathbf{x}) \, p(\mathbf{h},\tau,\theta,\phi|H_1) \, \mathrm{d}\mathbf{h}\ldots\mathrm{d}\phi \ ,\label{eq:partialmarginalization} \end{eqnarray} where ${V_{\mathbf{h},\tau,\theta,\phi}}$ is the space of all signal parameters and $p(\mathbf{h},\tau,\theta,\phi|H_1)$ is the prior for these parameters. Without loss of generality we may separate this signal prior into a prior on source sky direction and arrival time, and a prior on the waveform \emph{conditional on} the source sky direction and the arrival time, i.e.
\begin{eqnarray} p(\mathbf{h},\tau,\theta,\phi|H_1) = p(\tau,\theta,\phi|H_1) \, p(\mathbf{h}|\tau,\theta,\phi,H_1) \, , \end{eqnarray} giving \begin{eqnarray} \fl p(\mathbf{x}|H_1) &=& \int_{V_{\mathbf{h},\tau,\theta,\phi}} \!\!\!\!\!\!\!\!\! \mathcal{N}(\mathbf{F}(\tau,\theta,\phi)\cdot\mathbf{h},\mathbf{\Sigma},\mathbf{x}) \, p(\tau,\theta,\phi|H_1) \, p(\mathbf{h}|\tau,\theta,\phi,H_1) \, \mathrm{d}\mathbf{h}\ldots\mathrm{d}\phi \ .\label{eq:partialmarginalization2} \end{eqnarray} \subsection{Wideband signal model} \label{sec:wideband} In analogy with the single-sample case, we can choose a multivariate normal distribution prior for the waveform amplitudes and render the integral soluble in closed form. The marginalization integral over $\mathbf{h}$ in (\ref{eq:partialmarginalization2}) can then be performed analytically, giving \begin{eqnarray} \frac{p(\mathbf{x}|\tau,\theta,\phi,H_1)} {p(\mathbf{x}|H_0)} &=& \frac{ \int_{\mathbb{R}^{2L}} \mathcal{N}(\mathbf{F}(\tau,\theta,\phi)\cdot\mathbf{h},\mathbf{\Sigma},\mathbf{x}) \, p(\mathbf{h}|\tau,\theta,\phi,H_1) \, \mathrm{d}\mathbf{h}} { \mathcal{N}(\mathbf{0},\mathbf{\Sigma},\mathbf{x}) } \label{eq:quick} \end{eqnarray} (see (\ref{eqn:explicit}) below). Numerical integration over a more manageable three dimensions is then sufficient to compute the Bayes factor, \begin{eqnarray} \frac{ p(\mathbf{x}|H_1) }{ p(\mathbf{x}|H_0) } &=& \int\int\int p(\tau,\theta,\phi|H_1) \, \frac{p(\mathbf{x}|\tau,\theta,\phi,H_1)} {p(\mathbf{x}|H_0)} \, \mathrm{d}\tau \, \mathrm{d}\theta \, \mathrm{d}\phi \, . \end{eqnarray} This signal model is computationally tractable. It represents signals that can be described by an invertible $2 L\times 2 L$ correlation matrix, including the important `least informative' case of independent, normally distributed samples of $\mathbf{h}$. \subsection{Informative signal models} The wideband signal model excludes some important cases, such as when we have a known waveform, an almost-known waveform (such as from a family of numerical simulations), or even just a signal restricted to some frequency band. These signals are superpositions of a (relatively) small number $G < 2 L$ of basis waveforms, which may themselves be characterized by a finite number of parameters, which we denote $\rho$. These parameters must be numerically integrated, like $\tau$, $\theta$, and $\phi$, which may be time-consuming. Their prior distribution will be denoted by $p(\mathbf{\rho}|\tau,\theta,\phi,H_1)$. To describe the signal as a superposition of basis waveforms \cite{Heng:09}, define a set of amplitude parameters $\mathbf{a}$ mapped into strain $\mathbf{h}$ via a $2L\times G$ matrix $\mathbf{W}(\rho,\tau,\theta,\phi)$ whose columns $\mathbf{w}_i(\rho,\tau,\theta,\phi)$ are the basis waveforms, so that \begin{eqnarray} \mathbf{h}&=&\mathbf{W}(\mathbf{\rho},\tau,\theta,\phi)\cdot\mathbf{a} \ . \end{eqnarray} We assume that the amplitude parameters $\mathbf{a}$ are multivariate normal distributed with a covariance matrix $\mathbf{A}(\mathbf{\rho},\tau,\theta,\phi)$, so that \begin{eqnarray} p(\mathbf{a}|\mathbf{\rho},\tau,\theta,\phi,H_1)&=&\mathcal{N}(\mathbf{0},\mathbf{A}(\mathbf{\rho},\tau,\theta,\phi),\mathbf{a}) \, .
\end{eqnarray} The resulting distribution for the waveform strain is \begin{eqnarray} \fl p(\mathbf{h}|\tau,\theta,\phi,H_1) &=& \int_{V_{\rho}} \int_{\mathbb{R}^{G}} p(\mathbf{h}|\mathbf{a},\mathbf{\rho},\tau,\theta,\phi,H_1) \, p(\mathbf{a},\mathbf{\rho}|\tau,\theta,\phi,H_1) \, \mathrm{d} \mathbf{a} \, \mathrm{d} \mathbf{\rho} \, \nonumber \\ &=& \int_{V_{\rho}} \int_{\mathbb{R}^{G}} \delta(\mathbf{h}-\mathbf{W}\cdot\mathbf{a}) \, \mathcal{N}(\mathbf{0},\mathbf{A},\mathbf{a}) \, p(\mathbf{\rho}|\tau,\theta,\phi,H_1) \, \mathrm{d}\mathbf{a} \, \mathrm{d}\mathbf{\rho} \, , \end{eqnarray} where for clarity we have begun to omit the dependence of matrices on their parameters. As $G < 2L$ ({\em i.e.}, we have fewer basis waveforms than samples in the signal time-series) the integral over $\mathbf{a}$ cannot be directly represented as a multivariate normal distribution. This signal model proposes that gravitational wave signals have waveforms that are the sum of $G$ basis waveforms with amplitudes that are normally distributed (and potentially correlated). The basis waveforms and their amplitude distributions may vary with source sky direction, arrival time, and any other parameters we care to include in $\mathbf{\rho}$. The model is capable of representing a variety of sources, including the important special cases of known `template' waveforms and band-limited bursts. We will consider some concrete examples in \S\ref{sec:signalexamples}; perhaps the most important is a scale parameter $\sigma$, which permits us to look for signals of different total energies. We can substitute the expression back into part of (\ref{eq:quick}) to form a multivariate normal distribution partial integral whose solution is given in \cite{jaynes}: \begin{eqnarray} \fl p(\mathbf{x}|\tau,\theta,\phi,H_1) &=& \int_{V_{\rho}}\int_{\mathbb{R}^{G+2L}} \!\!\!\! \mathcal{N}(\mathbf{F}\cdot\mathbf{h},\mathbf{\Sigma},\mathbf{x}) \, \delta(\mathbf{h} -\mathbf{W}\cdot\mathbf{a}) \, \mathcal{N}(\mathbf{0},\mathbf{A},\mathbf{a}) \, \nonumber \\ & & \mbox{} \times p(\mathbf{\rho}|\tau,\theta,\phi,H_1) \, \mathrm{d}\mathbf{a} \, \mathrm{d}\mathbf{\rho} \, \mathrm{d}\mathbf{h} \nonumber \\ &=& \int_{V_{\rho}} \mathcal{N}(\mathbf{0},(\mathbf{\Sigma}^{-1}-\mathbf{K})^{-1},\mathbf{x}) \, p(\mathbf{\rho}|\tau,\theta,\phi,H_1) \, \mathrm{d}\mathbf{\rho} \, , \end{eqnarray} where the matrix \begin{eqnarray} \fl \mathbf{K}(\mathbf{\rho},\tau,\theta,\phi)&=& (\mathbf{\Sigma}^{-1}\mathbf{F}\mathbf{W}) ( (\mathbf{F}\mathbf{W})^T \mathbf{\Sigma}^{-1} \mathbf{F}\mathbf{W} + \mathbf{A}^{-1} )^{-1} (\mathbf{\Sigma}^{-1}\mathbf{F}\mathbf{W})^T \end{eqnarray} will be the kernel of our numerical implementation. Note that this is a generalization of equation (\ref{eq:simpleC}) obtained in the single-sample case. Since \begin{eqnarray}\label{eqn:note} \fl \frac{ p(\mathbf{x}|\rho,\tau,\theta,\phi,H_1) }{ p(\mathbf{x}|H_0) } & = & \frac{ \mathcal{N}(\mathbf{0},(\mathbf{\Sigma}^{-1}-\mathbf{K})^{-1},\mathbf{x}) }{ \mathcal{N}(\mathbf{0},\mathbf{\Sigma},\mathbf{x}) } = \sqrt{|\mathbf{I}-\mathbf{\Sigma}\mathbf{K}|} \exp(\frac{1}{2}\mathbf{x}^T\mathbf{K}\mathbf{x}) \, , \end{eqnarray} we have \begin{eqnarray} \fl \frac{p(\mathbf{x}|\tau,\theta,\phi,H_1)}{ p(\mathbf{x}|H_0)} & = & \int_{V_{\rho}} p(\mathbf{\rho}|\tau,\theta,\phi,H_1) \sqrt{|\mathbf{I}-\mathbf{\Sigma}\mathbf{K}|} \exp(\frac{1}{2}\mathbf{x}^T\mathbf{K}\mathbf{x}) \, \mathrm{d}\mathbf{\rho} \, .
\label{eqn:explicit} \end{eqnarray} and the Bayes factor becomes \begin{eqnarray} \fl \frac{p(\mathbf{x}|H_1)}{p(\mathbf{x}|H_0)} & = & \int_{V_{\rho, \tau, \theta, \phi}} \!\!\!\!\!\! p(\mathbf{\rho},\tau,\theta,\phi|H_1) \sqrt{|\mathbf{I}-\mathbf{\Sigma}\mathbf{K}|} \exp(\frac{1}{2}\mathbf{x}^T\mathbf{K}\mathbf{x}) \, \mathrm{d}\mathbf{\rho} \, \mathrm{d}\tau \, \mathrm{d}\theta \, \mathrm{d}\phi \, .\label{eq:fastbayesfactor} \end{eqnarray} In other words, we have reduced the task of computing the Bayes factor to an integral over arrival time, source sky direction, and any additional signal model parameters $\mathbf{\rho}$. \subsection{Example signal models\label{sec:signalexamples}} A simple signal model is the wideband signal model discussed briefly in Section~\ref{sec:wideband}. This is a burst whose spectrum is white, with characteristic strain amplitude $\sigma$ (at the Earth) and duration $f_\textrm{s}^{-1}L$ seconds \begin{eqnarray} G&=&2L \label{wnb1}\\ \mathbf{A}&=&\sigma^2\mathbf{I}\label{eq:sigma} \label{wnb2} \\ \mathbf{W}&=&\mathbf{I} \, . \label{wnb3} \end{eqnarray} If we assert that such bursts are equally likely to come from any source sky direction and to arrive at any time in the observation window of $f_\textrm{s}^{-1}M$ seconds, then the priors are \begin{eqnarray} p(\theta|H_1)&=&\frac{1}{2}\sin(\theta)\\ p(\phi|H_1)&=&(2\pi)^{-1}\\ p(\tau|H_1)&=&f_\textrm{s}M^{-1} \, . \end{eqnarray} If we assert that the source population is distributed uniformly in flat space up to some horizon $r_\mathrm{max}$, we have a prior on the distance $r$ to the source $p(r|H_1)\propto r^2$. We want to turn this into a prior on the characteristic amplitude $\sigma$, an example of a signal model parameter we must numerically marginalize over ($\mathbf{\rho}=[\sigma]$). Since the gravitational wave energy decays with the square of the distance to the source, $\sigma^2\propto r^{-2}$, we then deduce that \begin{eqnarray} p(\sigma|H_1)&=&p(r|H_1)\left|\frac{\mathrm{d}r}{\mathrm{d}\sigma}\right|\\ &=&\frac{3\sigma_\mathrm{min}^3}{\sigma^4} \, ,\label{eq:sigma4prior} \end{eqnarray} where $\sigma_\mathrm{min} \propto r_\mathrm{max}^{-1}$ is a lower bound on the amplitude of (or upper bound on the distance of) the gravitational wave. This bound is obviously somewhat arbitrary, but it is a consequence of the way we distinguish between detection and non-detection. For a uniformly spatially distributed population of bursts there are of course many weak signals within the data, and the noise hypothesis is ``never'' true. In reality we are interested only in gravitational waves of at least a certain size. If $\sigma_\mathrm{min}$ is much smaller than the noise floor in all detectors, the noise hypothesis is an excellent approximation to the likelihood for the undetectably weak signals excluded by this bound, and the classification of observations is insensitive to the particular choice of $\sigma_\mathrm{min}$ below the noise floor. This distribution of $\sigma$ is preserved if we consider a source population with a distribution of different intrinsic luminosities, so long as they are uniformly distributed in space out to their respective $r_\mathrm{max}$ determined by the choice of $\sigma_\mathrm{min}$. This is an example of a relatively \emph{uninformative} signal model. It is capable of detecting signals of any waveform (of appropriate duration). However, it incurs a large {\it Occam penalty} for its generality, and cannot be as sensitive as a more \emph{informed} search.
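To make the numerical task concrete, the following Python sketch evaluates the logarithm of the integrand $\sqrt{|\mathbf{I}-\mathbf{\Sigma}\mathbf{K}|}\exp(\frac{1}{2}\mathbf{x}^T\mathbf{K}\mathbf{x})$ of (\ref{eq:fastbayesfactor}) at a fixed $(\mathbf{\rho},\tau,\theta,\phi)$ for the wideband model; the response matrix and data here are random stand-ins, not a real detector network.
\begin{verbatim}
import numpy as np

def log_integrand(x, F, Sigma, W, A):
    # log of sqrt(|I - Sigma K|) exp(x^T K x / 2), with kernel
    # K = (S^-1 F W)((F W)^T S^-1 F W + A^-1)^-1 (S^-1 F W)^T
    SiFW = np.linalg.solve(Sigma, F @ W)
    M = (F @ W).T @ SiFW + np.linalg.inv(A)
    K = SiFW @ np.linalg.solve(M, SiFW.T)
    sign, logdet = np.linalg.slogdet(np.eye(len(x)) - Sigma @ K)
    return 0.5 * logdet + 0.5 * x @ K @ x

# stand-in wideband model: N*M = 12 data samples, 2L = 4 strain samples
rng = np.random.default_rng(1)
F = rng.standard_normal((12, 4))       # hypothetical response matrix
x = rng.standard_normal(12)
sigma = 1.5                            # characteristic amplitude
print(log_integrand(x, F, np.eye(12), np.eye(4), sigma**2 * np.eye(4)))
\end{verbatim}
Multiplying the exponentiated integrand by the priors and numerically integrating over $\tau$, $\theta$, $\phi$, and $\sigma$ then gives the Bayes factor.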
The other extreme situation is where a source's waveform is completely known, but its other parameters (amplitude, source sky position, polarization angle) are not. Consider a source that produces a linearly polarized strain $\mathbf{w}$. If the source's orientation, inclination and amplitude are unknown, we can parameterize the system with two amplitudes $\mathbf{a}$ mapping the strain into the observatory network's polarization basis \begin{eqnarray} \mathbf{W}&=&\left[ \begin{array}{cc} \mathbf{w} & \mathbf{0}\\ \mathbf{0} & \mathbf{w} \end{array}\right]. \end{eqnarray} This is the Bayesian equivalent of the matched filter. The template $\mathbf{w}$ appears twice because any specific signal typically will not be aligned with the polarization basis used to describe $h_+$ and $h_\times$ in the detectors, but rather will be rotated by some {\em polarization angle} $\psi$ with respect to that basis. More generally, any signal model that is independent of the observatory network's polarization basis must have $\mathbf{A}$ and $\mathbf{W}$ composed of two identical sub-matrices on the diagonal like this, so that $\mathbf{h}_+$ and $\mathbf{h}_\times$ have the same statistical distribution. For example, if the source is not linearly polarized, but has strain described by $\mathbf{w}_+$ and $\mathbf{w}_\times$, then \begin{eqnarray} \mathbf{W}&=&\left[ \begin{array}{cccc} \mathbf{w}_+ & \mathbf{w}_\times & \mathbf{0} & \mathbf{0}\\ \mathbf{0} & \mathbf{0} & \mathbf{w}_+ & \mathbf{w}_\times \end{array}\right]. \end{eqnarray} A more general case might be where we have a number of different, numerically derived predictions for a waveform, $\mathbf{w}_i$. The resulting search looks for a linear combination of these different waveforms, \begin{eqnarray} \mathbf{W}&=&\left[ \begin{array}{cccccc} \mathbf{w}_1 & \mathbf{w}_2 & \cdots & \mathbf{0} & \mathbf{0} & \cdots \\ \mathbf{0} & \mathbf{0} & \cdots & \mathbf{w}_1 & \mathbf{w}_2 & \cdots \end{array} \right] \, . \end{eqnarray} \subsection{Comparison with previously proposed methods} \label{sec:comparison} In this section we will expand on the arguments sketched in a previous paper \cite{SeSuTiWo:08}. Several previously proposed hypothesis tests, such as the G\"{u}rsel-Tinto (i.e. standard likelihood), the constraint likelihoods, and the Tikhonov-regularized likelihood, can be written in the form \begin{eqnarray}\label{eq:prev} \max_{\rho,\tau,\theta,\phi}\mathbf{x}^T\mathbf{J}(\rho,\tau,\theta,\phi)\mathbf{x}&>&\lambda \, ,\label{eq:fht} \end{eqnarray} where $\mathbf{J}$ is an $MN\times MN$ matrix and $\lambda$ is a \emph{threshold}. These tests proceed in two steps. First, the parameters are \emph{estimated} by maximizing the likelihood function with respect to the parameters. Second, the value of the likelihood function at its maximum is compared to a threshold $\lambda$, which is chosen to ensure that noise alone exceeds it only at some acceptable \emph{false alarm rate}. The corresponding Bayesian expression, from (\ref{eq:fastbayesfactor}), integrates over source sky direction, arrival time and any other parameters and determines whether the Bayes factor is large enough to overcome the prior plausibility ratio \begin{eqnarray} \fl \int_{V_{\rho, \tau, \theta, \phi}} \!\!\!\!\!\! p(\mathbf{\rho},\tau,\theta,\phi|H_1) \sqrt{|\mathbf{I}-\mathbf{\Sigma}\mathbf{K}|} \exp(\frac{1}{2}\mathbf{x}^T\mathbf{K}\mathbf{x}) \, \mathrm{d}\mathbf{\rho} \, \mathrm{d}\tau \, \mathrm{d}\theta \, \mathrm{d}\phi &>& \frac{ p(H_0) }{ p(H_1) }\,.
\label{eq:bht} \end{eqnarray} There are some obvious similarities between (\ref{eq:fht}) and (\ref{eq:bht}), in particular the quadratic forms central to each. However, direct mathematical equivalence cannot be established in general because of the difference between maximization and marginalization. We can establish equivalence for the related problem of parameter estimation, where we have the maximum-likelihood parameter estimate \begin{eqnarray} \{\rho,\tau,\theta,\phi\}&=&\arg\max(\mathbf{x}^T\mathbf{J}\mathbf{x}) \end{eqnarray} and the Bayesian most plausible parameters (one of several ways the posterior plausibility distribution for the parameters can be turned into a point estimate) \begin{eqnarray} \fl \{\rho,\tau,\theta,\phi\}&=&\arg\max ( p(\mathbf{\rho},\tau,\theta,\phi|H_1) \sqrt{|\mathbf{I}-\mathbf{\Sigma}\mathbf{K}|} \exp(\frac{1}{2}\mathbf{x}^T\mathbf{K}\mathbf{x}) )\\ &=&\arg\max(\mathbf{x}^T\mathbf{K}\mathbf{x} + 2\ln p(\mathbf{\rho},\tau,\theta,\phi|H_1) + \ln |\mathbf{I}-\mathbf{\Sigma}\mathbf{K}|) \, . \end{eqnarray} In the cases where we can find a Bayesian signal model that produces $\mathbf{K}=\mathbf{J}$, we must also use a prior \begin{eqnarray} p(\mathbf{\rho},\tau,\theta,\phi|H_1)&\propto&|\mathbf{I}-\mathbf{\Sigma}\mathbf{K}|^{-\frac{1}{2}}. \end{eqnarray} This prior states that gravitational wave bursts are \emph{intrinsically} more likely to occur at the sky positions to which the network is more sensitive. We interpret this as an implicit bias present in any statistic of the form of (\ref{eq:fht})\footnote{It is important to note that this particular objection applies only to all-sky searches; it is a consequence of the maximization over $(\theta,\phi)$. These statistics are also used in directed searches (for example, in the direction of a gamma-ray burst) where $(\theta, \phi)$ is known and fixed, and the problem does not arise (the missing normalization term is one of several absorbed by tuning the threshold).}. In order to compare previously proposed statistics to the Bayesian method, we place some restrictions on the configurations considered. We will assume co-located (but differently oriented) detectors to eliminate the need to time-shift data, and we will use stationary signals and observation times that coincide with the time the signal is present. These restrictions eliminate the differences in the way previously proposed statistics and the Bayesian method handle arrival time and signal duration. For simplicity, we will further assume that the detectors are affected by white Gaussian noise. The conclusions drawn will apply equally to the versions of these statistics for colored noise or for bases other than the time domain (such as the frequency or wavelet domains). \subsubsection{Tikhonov regularized statistic} The Tikhonov regularized statistic proposed in \cite{Ra:06} for white noise interferometers is \begin{eqnarray} \mathbf{x}^T\mathbf{F}(\mathbf{F}^T\mathbf{F} +\alpha^2\mathbf{I})^{-1}\mathbf{F}^T\mathbf{x}\, . \end{eqnarray} The Bayesian kernel $\mathbf{K}$ reduces to this for \begin{eqnarray} \mathbf{\Sigma}&=&\mathbf{I}\\ \mathbf{W}&=&\mathbf{I}\\ \mathbf{A}&=&\alpha^{-2}\mathbf{I} \, . \end{eqnarray} This is a signal of characteristic amplitude $\sigma = \alpha^{-1}$. The Tikhonov regularizer $\alpha$ therefore places a delta function prior on the characteristic amplitude of the signal, $p(\sigma|H_1)=\delta(\sigma-\alpha^{-1})$.
The Tikhonov statistic behaves like a Bayesian statistic that postulates that all bursts have energies in a narrow range. \subsubsection{G\"{u}rsel-Tinto statistic} The G\"{u}rsel-Tinto or standard likelihood statistic \cite{GuTi:89,FlHu:98b,AnBrCrFl:01} is \begin{eqnarray} \mathbf{x}^T\mathbf{F}(\mathbf{F}^T\mathbf{F})^{-1}\mathbf{F}^T\mathbf{x}\,. \end{eqnarray} For large $\sigma$, the kernel of the Tikhonov statistic goes to \begin{eqnarray} \mathbf{K} &\approx& \mathbf{F}(\mathbf{F}^T\mathbf{F})^{-1}\mathbf{F}^T \, . \end{eqnarray} This implies that the G\"{u}rsel-Tinto statistic is the limit of a series of Bayesian statistics for increasing signal amplitudes. \subsubsection{Soft constraint likelihood} The soft constraint statistic \cite{KlMoRaMi:05,KlMoRaMi:06} for white noise interferometers is \begin{eqnarray}\label{eqn:SC} k^2(\theta,\phi)\,\mathbf{x}^T\mathbf{FF}^T\mathbf{x} \, , \end{eqnarray} for some function $k(\theta,\phi)$. Specifically, (\ref{eqn:SC}) gives the soft constraint likelihood for the choice $k^2=(\mathbf{F}^{+T}\mathbf{F}^+)^{-1}$, where the antenna response is computed in the dominant polarization frame \cite{KlMoRaMi:05}. Consider the signal model defined by \begin{eqnarray} \mathbf{\Sigma}&=&\mathbf{I}\\ \mathbf{W}&=&\mathbf{I}\\ \mathbf{A}&=&\sigma^2k^2(\theta,\phi)\mathbf{I} \, . \end{eqnarray} This is a population of signals whose characteristic amplitude $\sigma k(\theta,\phi)$ varies as some known function of source sky direction, slightly generalizing the situation of the Tikhonov statistic. For small $\sigma$, \begin{eqnarray} \mathbf{K}&\approx&\sigma^2k^2(\theta,\phi)\mathbf{F}\mathbf{F}^T \, , \end{eqnarray} so we can see that the soft constraint is the limit of a series of Bayesian statistics for decreasing signal amplitudes. \subsubsection{Hard constraint likelihood} Let us restrict the soft-constraint signal model to a population of \emph{linearly polarized} signals with a known polarization angle $\psi(\theta,\phi)$ for each source sky direction \begin{eqnarray} \mathbf{\Sigma}&=&\mathbf{I}\\ \mathbf{W}&=&\left[ \begin{array}{c} \cos 2\psi(\theta,\phi)\mathbf{I}\\ \sin 2\psi(\theta,\phi)\mathbf{I} \end{array} \right]\\ \mathbf{A}&=&\sigma^2 k^2(\theta,\phi)\mathbf{I} \, . \end{eqnarray} Then for $\sigma\rightarrow 0$ the Bayesian statistic limits to \begin{eqnarray} k^2(\theta,\phi) \, \mathbf{x}^T\mathbf{FW}(\mathbf{FW})^T\mathbf{x} \, . \end{eqnarray} For the particular choice of $\psi(\theta,\phi)$ being the rotation angle between the detector polarization basis and the dominant polarization frame, and $k^2=((\mathbf{FW})^T\mathbf{FW})^{-1}$ (which is equal to $(\mathbf{F}^{+T}\mathbf{F}^+)^{-1}$ in the dominant polarization frame \cite{KlMoRaMi:05}), this yields the hard constraint statistic of \cite{KlMoRaMi:05}. In addition to the explicit assumptions that all signals are linearly polarized with known polarization angle, the hard constraint has the same properties as the soft constraint. \subsection{Interpretation} We have shown that several previously proposed statistics are special cases or limiting cases of Bayesian statistics for particular choices of prior. The `priors' implicit in these non-Bayesian methods are not representative of our expectations about the source population, so we can reasonably expect improved performance from a detection statistic with priors better reflecting our state of knowledge.
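These limiting relationships are easy to verify numerically; the following sketch (with a random stand-in response matrix, white noise $\mathbf{\Sigma}=\mathbf{I}$, and $\mathbf{W}=\mathbf{I}$) checks the large-$\sigma$ and small-$\sigma$ limits of the Bayesian kernel discussed above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
F = rng.standard_normal((12, 4))     # stand-in response matrix

def kernel(F, sigma):
    # Bayesian kernel K for Sigma = I, W = I, A = sigma^2 I
    M = F.T @ F + np.eye(F.shape[1]) / sigma**2
    return F @ np.linalg.solve(M, F.T)

# large sigma: K -> Gursel-Tinto projection F (F^T F)^-1 F^T
gt = F @ np.linalg.solve(F.T @ F, F.T)
print(np.allclose(kernel(F, 1e6), gt, atol=1e-6))

# small sigma: K -> sigma^2 F F^T (soft constraint with k = 1)
print(np.allclose(kernel(F, 1e-6), 1e-12 * F @ F.T, atol=1e-18))
\end{verbatim}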
The Bayesian analysis allows us to begin with our physical understanding of the problem, described in terms of prior expectations about the gravitational wave signal population, and derive the detection statistic for these conditions. The effects of priors are lessened when there is a strong gravitational wave signal present; all these statistics, Bayesian and non-Bayesian, are effective at detecting stronger gravitational waves, and significant differences occur only for marginal signals. In the next section, we will quantitatively compare the relative performance of the methods mentioned above and the Bayesian statistic we propose. \section{Better noise model} The assumption that the interferometer noise is well-modeled by a multivariate normal distribution is convenient, but false. The presence of `glitches' in the interferometer, where the noise statistics change dramatically, is well documented. Current methods, including ours in the form proposed in this paper, are easily fooled by these bursts of excess power, simply because the analyses assume that the only way that extra power can be introduced to the interferometer is by a gravitational wave. The gravitational wave hypothesis $H_\mathrm{signal}$ will do a very poor job of explaining temporally coincident incoherent bursts of noise power in the interferometers, but the noise hypothesis $H_\mathrm{noise}$ in its simple form does even worse; the gravitational wave explanation is thus preferred. We can generalize the noise hypothesis to cope with glitches by creating a model for glitches and adding that hypothesis to the set under consideration. Like gravitational waves, glitches are infrequent, have poorly known waveforms, and have poorly known power. Unlike gravitational waves, they will not be correlated between instruments. A first attempt at such a hypothesis is to propose that an interferometer is either `quiet' with some probability $p(H_\mathrm{quiet}|H_\mathrm{noise})$ and has a unit normal noise distribution, or is `glitching' with probability $p(H_\mathrm{glitch}|H_\mathrm{noise})$ and has an increased standard deviation $\sigma_g$, \begin{equation} \fl p(\mathbf{x}|H_\mathrm{noise}) = \prod_{i=1}^{N}\left[p(H_\mathrm{quiet}|H_\mathrm{noise}) (2\pi)^{-n/2} \exp(-\frac{1}{2}\sum_{j=1}^n x_{ij}^2) +p(H_\mathrm{glitch}|H_\mathrm{noise}) (2\pi)^{-n/2}\sigma_g^{-n} \exp(-\frac{1}{2\sigma_g^2}\sum_{j=1}^n x_{ij}^2) \right],\label{eq:glitchy} \end{equation} where $x_{ij}$ is sample $j$ from detector $i$ and $n$ is the number of samples per detector. If there is excess energy in only one detector, the new noise hypothesis will readily explain it. If there is excess energy in three detectors, the noise hypothesis must invoke three coincident glitches and is penalized by $p(H_\mathrm{glitch}|H_\mathrm{noise})^3$, reflecting our belief that triple-coincidence glitches are rare; moreover, the prediction that the glitches are incoherent thinly spreads the hypothesis over a higher-dimensional space than that of the signal hypothesis, which is concentrated around $\mathrm{span}\,\mathbf{F}$. These factors make it possible for the gravitational wave hypothesis to be preferred for some data.
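A minimal Python sketch of this mixture likelihood (\ref{eq:glitchy}), assuming $p(H_\mathrm{quiet}|H_\mathrm{noise})=1-p(H_\mathrm{glitch}|H_\mathrm{noise})$ and using an invented glitch probability and amplitude, is:
\begin{verbatim}
import numpy as np

def log_noise_likelihood(x, p_glitch, sigma_g):
    # Mixture noise model: each detector is either quiet (unit
    # normal) or glitching (standard deviation sigma_g);
    # x has shape (N detectors, n samples per detector).
    n = x.shape[1]
    ss = np.sum(x**2, axis=1)          # per-detector sum of x_ij^2
    log_quiet = (np.log(1.0 - p_glitch)
                 - 0.5 * n * np.log(2.0 * np.pi) - 0.5 * ss)
    log_glitch = (np.log(p_glitch)
                  - 0.5 * n * np.log(2.0 * np.pi)
                  - n * np.log(sigma_g) - 0.5 * ss / sigma_g**2)
    # log of the product over detectors of (quiet + glitch) terms
    return np.sum(np.logaddexp(log_quiet, log_glitch))

rng = np.random.default_rng(3)
x = rng.standard_normal((3, 64))
x[1] *= 10.0                           # one incoherently glitching row
print(log_noise_likelihood(x, p_glitch=0.01, sigma_g=10.0))
\end{verbatim}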
\section{Acknowledgments} We would like to thank Shourov Chatterji, Albert Lazzarini, Soumya Mohanty, Andrew Moylan, Malik Rakhmanov, and Graham Woan for useful discussions and valuable comments on the manuscript. This work was performed under partial funding from the following NSF Grants: PHY-0107417, 0140369, 0239735, 0244902, 0300609, and INT-0138459. A.~Searle was supported by the Australian Research Council and the LIGO Visitors Program. For M.~Tinto, the research was also performed at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. M.~Tinto was supported under research task 05-BEFS05-0014. P.~Sutton was supported in part by STFC grant PP/F001096/1. LIGO was constructed by the California Institute of Technology and Massachusetts Institute of Technology with funding from the National Science Foundation and operates under cooperative agreement PHY-0107417. This document has been assigned LIGO Laboratory document number LIGO-P070114-00-Z. \section{References} \section{Conclusions and Future directions} \label{SECIV} We have presented a comprehensive Bayesian formulation of the problem of detecting gravitational wave bursts with a network of ground-based interferometers. We demonstrated how to systematically incorporate prior information into the analysis, such as time-frequency or polarization content, source distributions, and signal strengths. We have also seen that this Bayesian formulation contains several previously proposed detection statistics as special cases. The Bayesian methodology we have derived to address the problem of detecting poorly-understood gravitational wave bursts yields a novel statistic. On theoretical grounds we expect this statistic to outperform previously proposed statistics. A Monte-Carlo analysis confirms this expectation: over a range of false alarm rates, the Bayesian statistic can detect sources at 15\% greater distances and therefore observe 50\% more events. The Bayesian search requires explicitly adopting a model for the poorly understood signal. This is not a shortcoming. The model may be agnostic with respect to many features of the waveform. As we have demonstrated, existing methods are not free from their own signal models, but implicitly assume priors on the energies and spatial distribution of sources. Coherent analyses of any kind are relatively costly, and efficient implementations must be sought. By specifying the problem as an integral, the Bayesian approach lets us leverage the extensive literature on numerical integration for techniques to accelerate the computation; one promising contender is \emph{importance sampling}. A second practical issue that must be dealt with is that the background noise of real gravitational wave detectors contains transient non-Gaussian features (``glitches''). As in the case of detection statistics, several {\em ad hoc} non-Bayesian statistics have been proposed to distinguish glitches from gravitational wave signals (see for example \cite{Ch_etal:06,WeSc:05}). Again, Bayesian methodology provides us with a direction in which to proceed: augment the noise model to better reflect ``glitchy'' reality, and to the extent we are successful, robustness will automatically follow. \section{Introduction} \label{sec:introduction} Large-scale, broad-band interferometric gravitational-wave observatories \cite{GEO,LIGO,TAMA,VIRGO} are operating at their anticipated sensitivities, and scientists around the globe have begun to analyze the data generated by these instruments in search of gravitational wave signals \cite{Ab_etal:07rh}. Gravitational wave bursts (GWBs) are among the most exciting signals scientists expect to observe, as our present knowledge and modeling ability of GWB-emitting systems is rather limited. These signals often depend on complicated (and interesting) physics, such as dynamical gravity and the equation of state of matter at nuclear densities.
While this makes GWBs an especially attractive target to study, our lack of knowledge also limits the sensitivity of searches for GWBs. Potential sources of GWBs include merging compact objects \cite{FlHu:98a,FlHu:98b,Pr:05,Ba_etal:06, Ca_etal:06,Di_etal:06, HeShLa:06,LoReAn:06,ShTa:06}, core-collapse supernovae \cite{ZwMu:97,DiFoMu:02b,OtBuLiWa:04,ShSe:04,OtBuDeLi:06}, and gamma-ray burst engines \cite{Me:02}; see \cite{CuTh:02} for an overview. Although a gravitational wave signal is characterized by two independent polarizations, an interferometric gravitational wave detector is sensitive to a single linear combination of them. Simultaneous observations of a gravitational wave burst by three or more observatories over-determine the waveform, permitting the source position on the sky to be determined and a least-squares estimate of the two polarizations to be formed. This solution of the {\em inverse problem} for gravitational waves was first derived by G\"{u}rsel and Tinto \cite{GuTi:89} for three interferometers. Subsequent work generalized it \cite{Tinto96}, and it was formalized as an example of a coherent maximum likelihood statistic by Flanagan and Hughes \cite{FlHu:98b} and later by Anderson, Brady, Creighton, and Flanagan \cite{AnBrCrFl:01}. Various modifications have been proposed, such as Rakhmanov's Tikhonov \cite{Ra:06} and Summerscales' maximum-entropy \cite{Summerscales:2007xq} regularization techniques, Klimenko \emph{et al.}'s constraint likelihood method \cite{KlMoRaMi:05,MoRaKlMi:06}, and the SNR variability approach of Mohanty {\em et al.} \cite{MoRaKlMi:06}. The potential of such coherent methods as a consistency test was noted by Wen and Schutz \cite{WeSc:05} and demonstrated by Chatterji \emph{et al.} \cite{Ch_etal:06}. Other coherent detection algorithms have been proposed by Sylvestre \cite{Sylvestre:03} and by Arnaud {\em et al.} \cite{Arnaud:03}. These approaches to signal detection have generally been derived by following either \emph{ad hoc} reasoning or a maximum-likelihood criterion (notable exceptions, foreshadowing our Bayesian approach, are Finn \cite{finn:97}, Anderson \emph{et al.} \cite{AnBrCrFl:01} and Allen \emph{et al.} \cite{Al_etal:03}). In this paper we present a systematic and comprehensive Bayesian formulation \cite{jaynes, gregory} of the problem of coherent detection of gravitational wave bursts. We demonstrate how to incorporate partial or incomplete knowledge of the signal in the analysis, thereby improving the probability of detecting these weak signals. This information may include time-frequency properties of the signal, polarization content, model waveform families or templates, as well as information on the distribution of the source through spacetime. We also explicitly identify the prior assumptions that must be made about the signal for a Bayesian analysis to behave like several of the previously proposed detection statistics. Real interferometers also experience instrumental artifacts that can masquerade as signals. These are typically dealt with in post-processing rather than by the detection statistic itself, but some proposals have been made to include a model for these ``glitches'' in the detection statistic \cite{PrPi:08,SeSuTiWo:08}. An advantage of the Bayesian framework is that the standard choice between signal and stationary noise could be extended to include a third option: randomly occurring noise ``glitches''. This formulation is a promising direction for future progress, but we do not discuss it further in this paper. The paper is organized as follows.
In \S\ref{sec:analysis} we derive the Bayesian posterior detection probability of an idealized delta-like burst signal by a toy-model network of observatories. Greatly expanding on \cite{SeSuTiWo:08}, we then generalize the derivation of the Bayesian odds ratio into a usable statistic for an arbitrary number of interferometers with differently colored and (potentially) correlated noises. We also consider a wide range of signal models corresponding to different states of knowledge about the burst, from total ignorance to complete {\em a priori} knowledge of the waveforms. In \S\ref{sec:simulations} we characterize the relative performance of previously proposed statistics (the G\"ursel-Tinto/standard and constraint likelihoods) and the Bayesian statistic by performing a Monte-Carlo simulation in which we add a simple binary black-hole merger waveform to simulated detector data and construct Receiver-Operating Characteristic (ROC) curves. We find that the Bayesian method increases the probability of detection for a given false alarm rate by approximately 50\% relative to previously proposed statistics. \section{Simulations \label{sec:simulations}} \label{SECIII} To characterize the relative performance of the G\"{u}rsel-Tinto (i.e., standard likelihood), soft constraint, hard constraint, and Bayesian methods, we used the \textsc{X-Pipeline} software package \cite{Xpipeline}. This package reads in gravitational wave data, estimates the power spectrum and whitens the data, and transforms it into a time-frequency basis of successive short Fourier transforms. Each statistic is then applied to the transformed data, and the results saved to file. This ensures that the observed differences are due to the statistics themselves, and not to different whitening or other conventions. Our tests used a set of 4 identical detectors at the positions and orientations of the LIGO-Hanford, LIGO-Livingston, GEO 600, and Virgo detectors. The data was simulated as Gaussian noise with a spectrum following the design sensitivity curve of the 4-km LIGO detectors; it was taken from a standard archive of simulated data \cite{Be_etal:05} used for testing detection algorithms. Approximately 12 hours of data in total was analyzed for these tests. For the population of gravitational-wave signals to be detected we chose, somewhat arbitrarily, the ``Lazarus'' waveforms of Baker {\em et al.\/} \cite{Ba_etal:02}. These are fairly simple waveforms generated from numerical simulations of the merger and ringdown of a binary black-hole system. We chose to simulate a pair of 20 solar-mass black holes, which puts the peak of signal power near the frequencies of best sensitivity for LIGO. The time-series waveforms are shown in Fig.~\ref{fig:lazarus-timeseries}, while the spectra and detector noise curve are given in Fig.~\ref{fig:lazarus-freqseries}. The sources were placed at the discrete distances $240/S$ Mpc \footnote{The fiducial distance $240$ Mpc is chosen for numerical convenience; it is the distance at which the sum-squared matched-filter SNR for each polarization is $1/2$, assuming optimal antenna response ($F^+,F^\times=1$).}, where $S = 1, 2, 2.5, 3, 10$, and with randomly chosen sky position and orientation. Approximately 5000 injections were performed for each distance. \begin{figure} \includegraphics[width=\textwidth]{lazarus_timeseries} \caption{\label{fig:lazarus-timeseries} Time-series Lazarus waveforms \cite{Ba_etal:02} used for our simulations, from a nominal distance of 240 Mpc.
} \end{figure} \begin{figure} \includegraphics[width=\textwidth]{lazarus_freqseries} \caption{\label{fig:lazarus-freqseries} Strain-equivalent noise amplitude spectral density of the simulated data used in our tests (black) with spectra of the Lazarus waveforms \cite{Ba_etal:02} at 240 Mpc. The Lazarus spectra have been rescaled by $T^{-1/2}=(128/\mathrm{sec})^{1/2}$ to render them into the same units as the noise spectrum.} \end{figure} For the Bayesian statistic we adapt the broad-band signal prior (\ref{wnb1})--(\ref{wnb3}) for each polarization. Specifically, we assume the simple model of a burst whose spectrum is white, with characteristic strain amplitude $\sigma$ at the Earth and duration equal to our chosen FFT length: \begin{eqnarray} G &=& 2M\\ \mathbf{A} &=& \sigma^2\mathbf{I}\label{eq:sigma}\\ \mathbf{W} &=& \mathbf{I} \, . \end{eqnarray} We use a uniform prior on the signal arrival time $\tau$, and an isotropic prior on the sky position $(\theta,\phi)$. The Bayesian statistic was computed for approximately logarithmically spaced discrete values of characteristic strain $\sigma = 10^{-23}, 3\times10^{-23}, 10^{-22}, 3\times10^{-22}, 10^{-21}$, and averaged together in post-processing. This averaging approximates a single Bayesian statistic with a Jeffreys (scale invariant) prior $p(\sigma)\propto1/\sigma$ between $10^{-23}$ and $10^{-21}$. (Performing the combination in post-processing allowed us to maintain compatibility with the existing architecture of \textsc{X-Pipeline}; we do not use the $\sigma^{-4}$ prior from (\ref{eq:sigma4prior}) because we are injecting from a fixed distance, not a spatially uniform population.) Each likelihood statistic was computed over a fixed frequency band of [64,1088] Hz, with an FFT length of 1/128 sec. We analyzed blocks of data, overlapping by 75\% of their duration. The detection probability as a function of false alarm probability is shown in Fig.~\ref{fig:ROC}. The distance used for the Lazarus simulations for this figure was $240/2.5=96$~Mpc; injections at other distances yielded similar results. At this distance, the total SNR deposited in the network $\sqrt{\sum_\alpha \mathrm{SNR}^2_\alpha}$ was in the range $\sim$1--8 with a mean value of 5, where \begin{equation} \sum_\alpha \mathrm{SNR}^2_\alpha = \sum_\alpha 4 \int_0^\infty \!\! df \, \frac{\left| F_\alpha^+ \tilde{h}_+(f) + F_\alpha^\times \tilde{h}_\times(f) \right|^2}{S(f)} \end{equation} and $S(f)$ is the one-sided noise power spectral density of each interferometer. Fig.~\ref{fig:ROC} is the receiver-operating characteristic plot for each of the statistics considered. The vertical axis represents the fraction of a population of signals whose detection statistics exceed the threshold that would only be crossed by background noise at the rate given by the false alarm probability on the horizontal axis. For example, we can read off the figure that if we can afford a false alarm probability of $10^{-2}$, the various detection statistics are able to detect between $0.4$ and $0.6$ of the injected signals. We see that the best performance is achieved by the Bayesian method with the $\sigma$ value most closely matching the injected signals, with the marginalized curve performing almost identically. The detection probability of the marginalized Bayesian method is significantly better than that of any of the non-Bayesian methods (standard likelihood, soft constraint, and hard constraint likelihoods) over the full range of false-alarm probabilities tested.
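As a concrete illustration of the $\sigma$ marginalization described above, the following sketch (ours, not part of \textsc{X-Pipeline}; names and values are illustrative) averages log odds computed on a logarithmically spaced $\sigma$ grid, which for such a grid is equivalent to marginalizing over $\sigma$ under the Jeffreys prior:
\begin{verbatim}
import numpy as np

def marginalize_over_sigma(log_odds):
    """Combine log odds ratios computed at discrete, logarithmically
    spaced sigma values into one marginalized statistic.

    On a log-spaced grid the Jeffreys weight (1/sigma) times the grid
    spacing (proportional to sigma) is constant, so marginalization
    reduces to a plain average of the odds ratios; the average is done
    in log space with the log-sum-exp trick for numerical stability."""
    log_odds = np.asarray(log_odds, dtype=float)
    m = log_odds.max()
    # log( (1/K) * sum_k exp(log_odds_k) )
    return m + np.log(np.mean(np.exp(log_odds - m)))

# Example: log odds for sigma = 1e-23, 3e-23, 1e-22, 3e-22, 1e-21
print(marginalize_over_sigma([-0.8, 1.3, 4.2, 2.0, -3.5]))
\end{verbatim}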
For a given false-alarm probability, we may compute the distance at which each likelihood achieves 50\% efficiency by fitting a sigmoid curve to the simulations. The observed volume, and therefore the expected rate of detections for a uniformly distributed source population, scales as the cube of the distance. We computed the distance and volume for each statistic for two false alarm probabilities, $10^{-5}$ in Table~\ref{table:1e-5} and $1/256\approx 3.9\times 10^{-3}$ in Table~\ref{table:1-256}. As we compute 512 statistics per second, these correspond to false alarm rates of 1/200 Hz and 2 Hz respectively (as the statistics are computed on 75\% overlapped data, these estimates are conservative). These rates are practical for event generation at the first stage of an untriggered (all-sky, all-time) burst search \cite{Ab_etal:05c,Ab_etal:08,Ab_etal:07rh,Ab_etal:07}. At both of these false alarm probabilities, the Bayesian method can detect sources approximately 15\% more distant, and consequently has an observed volume and expected detection rate approximately 50\% greater, than the non-Bayesian statistics. It is important to note that we do {\em not} use detailed knowledge of the signal waveform for the Bayesian analysis. Our prior is that the signal spectrum is flat over the analysis band ([64,1088] Hz), and by imposing no phase structure or sample-to-sample correlations we are assuming that over the integration time (1/128 sec) the time samples of strain are independently and identically distributed. Considering Figures~\ref{fig:lazarus-timeseries} and~\ref{fig:lazarus-freqseries}, it is clear that these priors are not particularly accurate models for the actual gravitational-wave signal. Nevertheless, our Monte Carlo results demonstrate that even this incomplete prior information can improve the sensitivity of the search. \begin{figure \includegraphics[width=\textwidth]{roc_injS2d5} \caption{Receiver-operating characteristic (ROC) curves for the Bayesian, standard (G\"ursel-Tinto), soft constraint, and hard constraint likelihoods for sources at 96 Mpc. The curve for a $\sigma^{-1}$ prior is obtained by marginalizing over probabilities associated with the discrete $\sigma$ values tested. The best performance is achieved by the $\sigma$ value most closely matching the amplitude of the injected signals, with the marginalized curve performing almost identically. The detection probability of the marginalized Bayesian method is significantly greater than that of any of the non-Bayesian methods (standard/G\"ursel-Tinto, soft constraint, and hard constraint likelihoods) over the full range of false-alarm probabilities tested.} \label{fig:ROC} \end{figure} \begin{table} \caption{Distances for false-alarm probability $10^{-5}$} \begin{tabular}{cccc} \hline\hline Statistic & Distance (Mpc) & Distance (rel.) & Volume (rel.) \\ \hline Standard & 72.1 & 1.01 & 1.02 \\ Soft & 71.6 & 1.00 & 1.00 \\ Hard & 72.5 & 1.01 & 1.04 \\ Bayesian & 82.0 & 1.15 & 1.50 \\ \hline \end{tabular} \label{table:1e-5} \end{table} \begin{table} \caption{Distances for false-alarm probability $1/256$} \begin{tabular}{cccc} \hline\hline Statistic & Distance (Mpc) & Distance (rel.) & Volume (rel.) \\ \hline Standard & 87.1 & 1.03 & 1.08 \\ Soft & 84.9 & 1.00 & 1.00 \\ Hard & 86.9 & 1.02 & 1.07 \\ Bayesian & 97.9 & 1.15 & 1.53 \\ \hline \end{tabular} \label{table:1-256} \end{table}
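The distance and volume estimates in Tables~\ref{table:1e-5} and~\ref{table:1-256} can be sketched as follows (an illustration only, with made-up efficiency values rather than our actual simulation results; the assumed sigmoid form and the reference distance, taken from the soft-constraint entry of Table~\ref{table:1e-5}, are for demonstration):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(d, d50, w):
    # Detection efficiency versus distance; equals 1/2 at d = d50.
    return 1.0 / (1.0 + np.exp((d - d50) / w))

# Illustrative efficiencies at the injection distances (not real data).
distances = np.array([24.0, 80.0, 96.0, 120.0, 240.0])   # Mpc
efficiency = np.array([0.99, 0.75, 0.55, 0.35, 0.02])

(d50, w), _ = curve_fit(sigmoid, distances, efficiency, p0=[100.0, 20.0])

# The observed volume, and hence the expected detection rate for a
# uniformly distributed population, scales as the cube of the distance.
d50_ref = 71.6  # soft-constraint distance at this false-alarm probability
print("50%% efficiency distance: %.1f Mpc" % d50)
print("relative distance: %.2f" % (d50 / d50_ref))
print("relative volume:   %.2f" % (d50 / d50_ref) ** 3)
\end{verbatim}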
\section{Introduction} Due to their fascinating properties such as aging, memory effects and ergodicity-breaking transitions, as well as industrial applications, structural glasses, supercooled liquids and polymers have received considerable attention recently. In particular, when the temperature is decreased, they undergo a dynamic transition \cite{goetze:92,angell:95,binder:02} below which the particle-density correlation function does not decay to zero in the long-time limit and the evolution becomes nonergodic. However, this transition is not associated with any thermodynamic singularity. Hence the system ``freezes'' in a portion of phase space. There is a second transition at a lower temperature \cite{kauzmann:48,gibbs:57} which can be associated with a thermodynamic singularity and which can be related to a possible ideal glass transition. Despite ongoing efforts, the structural glass transition remains to be fully understood. The $p$-state Potts glass \cite{elderfield:83,gross:85,carmesin:88,scheucher:90,schreider:95,dillmann:98} is one of the most versatile models in statistical physics: For $p = 2$ states it reduces to the well-known Edwards-Anderson Ising spin glass \cite{edwards:75}, a workhorse in the study of disordered magnetic systems. For $p = 3$ it can be used to model orientational glasses \cite{binder:92}, while for $p = 4$ the Potts glass can be used to model quadrupolar glasses. For large $p > 4$ and no disorder the model shows a first-order transition. In particular, infinite-range Potts glasses with $p>4$ exhibit a transition from ergodic to nonergodic behavior \cite{elderfield:83,gross:85,carmesin:88,scheucher:90,schreider:95,dillmann:98}, as well as an additional static transition at a lower temperature. In fact, the equations describing the system's dynamics near the transition are mathematically related \cite{kirkpatrick:87b,kirkpatrick:88,kirkpatrick:89} to the equations of mode-coupling theory, which describe the behavior found in structural glasses and supercooled liquids. Therefore, studying the Potts glass with large $p$ could provide, in principle, some insights into the mechanisms governing the structural glass {\em transition}. However, this beneficial relationship seems to only work when the model is infinite ranged \cite{kob:00}. The existence of a transition in finite-dimensional systems remains to be proven \cite{brangian:03,lee:06}. Not only are hypercubic lattices with large space dimension hard to study numerically, but recent work \cite{cruz:09-ea,alvarez:10-ea} also suggests that if there is a transition for large $p$ it would occur at very low temperatures. In this work we simulate the 10-state Potts glass on a one-dimensional ring topology with power-law interactions. This allows us to effectively tune the range of the interactions and therefore the (effective) space dimension for large linear system sizes. Our results suggest that 10-state Potts glasses should have a very low finite-temperature transition for finite space dimensions. The paper is structured as follows. In Sec.~\ref{sec:model} we introduce the model and observables. Furthermore, we outline the details of the numerical simulations. Section \ref{sec:results} summarizes our findings, followed by concluding remarks.
\section{Model and Observables} \label{sec:model} We study a one-dimensional Potts glass with long-range power-law interactions \cite{kotliar:83,katzgraber:03} and Hamiltonian ${\mathcal H} = -\sum_{i,j} J_{ij} \delta_{q_i,q_j}$, where $q_i \in \{1,\ldots,10\}$ are $10$-state Potts spins on a ring of length $L$ to enforce periodic boundary conditions and $\delta_{x,y} = 1$ if $x=y$ and zero otherwise. The sum is over all spins and the interactions $J_{ij}$ are given by $J_{ij} = \varepsilon_{ij}/r_{ij}^\sigma$, where $\varepsilon_{ij}$ are normally distributed with mean $J_0$ and standard deviation unity. $r_{ij} = (L/\pi)\sin[(\pi |i - j|)/L]$ represents the geometric distance between the spins on the ring. For the simulations we express the Potts glass Hamiltonian using the simplex representation where the $10$ states of the Potts spins are mapped to the corners of a hypertetrahedron in nine space dimensions. The state of each spin is therefore represented by a nine-dimensional unit vector $\vec{S}_i$ taking one of the $10$ possible values satisfying the condition $\vec{S}^\mu \cdot \vec{S}^{\nu} = (p\,\delta_{\mu,\nu} - 1)/(p-1)$ with $\{\mu,\nu\} \in \{1,2, \ldots, 10\}.$ In this representation the Potts glass Hamiltonian is given by ${\mathcal H} = -\sum_{i, j} \tilde{J}_{ij} \vec{S}_i \cdot \vec{S}_j$ with $\tilde{J}_{ij} = J_{ij}(p-1)/p$. In the limit $\sigma \to 0$, where the system is infinite ranged (Sherrington-Kirkpatrick limit), we obtain $T_{c}(\sigma = 0)=1/(p-1)$. The merit of the long-range one-dimensional model lies in emulating a short-range topology of varying dimensionality, depending on the power-law exponent: For $\sigma\le2/3$ the model is in the mean-field long-range 10-state Potts universality class and, in particular for $\sigma \le 1/2$, in the infinite-range universality class. However, for $2/3 < \sigma < 1$ the model is in a nonmean-field universality class with a finite transition temperature $T_c$. It can be shown \cite{kotliar:83} that $\sigma = 2/3$ corresponds exactly to six space dimensions for a hypercubic lattice. Therefore, $\sigma$ values between $1/2$ and $2/3$ allow us to effectively study \cite{kotliar:83,katzgraber:03} a short-range hypercubic Potts glass {\em above} the upper critical dimension $d_{\rm u} = 6$, whereas when $\sigma > 2/3$ we effectively study a model with a space dimension {\em below} six dimensions. Thus, by studying the one-dimensional model we can infer whether a transition should be present for the corresponding short-range hypercubic Potts glass. The presence of a transition is probed by studying the two-point finite-size correlation length \cite{palassini:99b}. We measure the wave-vector-dependent spin-glass susceptibility \cite{katzgraber:09b} \begin{equation} \chi_{\rm SG}({\bf k}) = N \sum_{\mu,\nu} [\langle \left|q^{\mu\nu}({\bf k})\right|^2 \rangle ]_{\rm av}\,, \label{eq:chi} \end{equation} where $\langle \cdots \rangle$ denotes a thermal average, $[\cdots]_{\rm av}$ an average over the disorder and \begin{equation} q^{\mu\nu}({\bf k}) = \frac{1}{N} \sum_i S_i^{\mu(\alpha)} S_i^{\nu(\beta)} e^{i {\bf k} \cdot {\bf R}_i}\,, \end{equation} is the spin-glass order parameter computed over two replicas $(\alpha)$ and $(\beta)$ with the same disorder.
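For concreteness, a minimal sketch (ours; names are illustrative, and thermal as well as disorder averages are assumed to be accumulated outside) of the single-measurement contribution to Eq.~(\ref{eq:chi}) in the simplex representation is:
\begin{verbatim}
import numpy as np

def simplex_vectors(p):
    """p unit vectors with overlaps (p*delta_{mu,nu} - 1)/(p - 1).
    Embedded in R^p for simplicity; only dot products matter below."""
    v = np.eye(p) - 1.0 / p
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def chi_sg_single(k, spins_a, spins_b, positions):
    """One-measurement contribution to chi_SG(k) from two replica
    configurations spins_a, spins_b (arrays of simplex spin vectors)."""
    N = spins_a.shape[0]
    phase = np.exp(1j * k * positions)  # e^{i k R_i} on the ring
    # q^{mu nu}(k) = (1/N) sum_i S_i^{mu(a)} S_i^{nu(b)} e^{i k R_i}
    q = np.einsum("im,in,i->mn", spins_a, spins_b, phase) / N
    return N * np.sum(np.abs(q) ** 2)

# Example: N = 8 sites, p = 10 states, smallest wave vector k = 2*pi/N
N, p = 8, 10
S = simplex_vectors(p)
rng = np.random.default_rng(0)
sa, sb = S[rng.integers(p, size=N)], S[rng.integers(p, size=N)]
print(chi_sg_single(2.0 * np.pi / N, sa, sb, np.arange(N)))
\end{verbatim}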
The two-point finite-size correlation length is then given by \begin{equation} \xi_L = \frac{1}{2 \sin (k_\mathrm{min}/2)} \left[\frac{\chi_{\rm SG}({\bf 0})}{\chi_{\rm SG}({\bf k}_\mathrm{min})} - 1\right]^{1/(2\sigma -1)} \, , \end{equation} where ${\bf k}_\mathrm{min} = 2\pi/L$ is the smallest nonzero wave vector. According to finite-size scaling \cite{katzgraber:09b} \begin{subequations} \label{eq:xiscale} \begin{align} {\xi_L/L^{\nu/3}} &= {\mathcal X} [ L^{1/3} (T - T_c) ] \;\;\; (1/2 <\sigma \le 2/3)\, , \label{eq:xiscaleMF} \\ {\xi_L/L} &= {\mathcal X} [ L^{1/\nu} (T - T_c) ] \;\;\; (2/3 < \sigma)\, , \label{eq:xiscaleNMF} \end{align} \end{subequations} where $\nu$ is the critical exponent for the correlation length and $T_c$ the critical temperature. For $\sigma < 2/3$, $\nu = 1/(2\sigma -1)$. In practice, there are corrections to scaling in Eqs.~(\ref{eq:xiscale}), and so data for different system sizes do not cross exactly at one point as implied by the finite-size scaling expressions. The crossing temperatures $T_c^*(L,2L)$ for pairs of system sizes $L$ and $2L$ shift with increasing system size and tend to a constant for $L \to\infty$. In general, $T_c^* = T_c^\infty + b/L^{\theta}$ with $\theta = 1/\nu + \omega$. Here we find empirically that $1/\nu + \omega \approx 1$. We therefore fit $T_c^*(L,2L)$ to a linear function in $1/L$; the fits have a high quality-of-fit probability. The intercept of the fit with the vertical axis determines a lower bound for the transition temperature. Error bars are determined via a bootstrap analysis. To obtain a better understanding of the corrections to scaling we also measure the spin-glass susceptibility [Eq.~(\ref{eq:chi}) with ${\bf k} = 0$]. The finite-size scaling of the spin-glass susceptibility $\chi_{\rm SG}$ is given by \begin{subequations} \label{eq:chiscale} \begin{align} {\chi_{\rm SG}/L^{1/3}} &= {\mathcal C} [ L^{1/3} (T - T_c) ] \;\;\; (1/2 <\sigma \le 2/3)\, , \label{eq:chiscaleMF} \\ {\chi_{\rm SG}/L^{2 - \eta}} &= {\mathcal C} [ L^{1/\nu} (T - T_c) ] \;\;\; (2/3 < \sigma)\, . \label{eq:chiscaleNMF} \end{align} \end{subequations} In general, the exponent $\eta$ has to be known {\em a priori} to precisely determine the location of $T_c$. However, for the one-dimensional model $2 - \eta = 2\sigma - 1$ holds exactly for $\sigma > 2/3$, and so ${\chi_{\rm SG}/L^{2 - \eta}}$ can be treated as a dimensionless quantity similar to the two-point correlation length. To prevent ferromagnetic order \cite{gross:85,elderfield:83} we set the mean of the random interactions to $J_0 = -1$ \cite{brangian:03} in our simulations. This suppresses the ferromagnetic susceptibility $\chi_{m} = N \sum_{\mu} [\langle \left|m^{\mu}\right|^2 \rangle]_{\rm av}$ [$m^{\mu} = (1/N) \sum_i S_i^{\mu}$]. We discuss the case where $J_0 = -1$ in more detail below. The simulations are done using the parallel tempering Monte Carlo technique \cite{hukushima:96}; simulation parameters are shown in Table~\ref{tab:simparams}. Equilibration is tested by using an exact relationship between the energy and four-spin correlators (link overlap) \cite{katzgraber:01} when the bond disorder is Gaussian, suitably generalized to Potts spins \cite{lee:06} on a one-dimensional topology \cite{katzgraber:05c}. \begin{table}[!tb] \vspace*{-.21cm} \caption{ Parameters of the simulations for different exponents $\sigma$.
$N_{\rm sa}$ is the number of samples, $N_{\rm sw}$ is the total number of Monte Carlo sweeps, $T_{\rm min}$ is the lowest temperature simulated, and $N_T$ is the number of temperatures used in the parallel tempering method for each system size $L$. \vspace*{.1cm} \label{tab:simparams}} {\footnotesize \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}} c r r r r r} \hline \hline $\sigma$ & $L$ & $N_{\rm sa}$ & $N_{\rm sw}$ & $T_{\rm min}$ & $N_{T}$ \\ \hline $0.60$ & $ 32,48,64,96 $ & $4000$ & $2^{20}$ & $0.054$ & $41$ \\ $0.60$ & $ 128,192 $ & $2400$ & $2^{21}$ & $0.054$ & $41$ \\ $0.60$ & $ 256 $ & $500$ & $2^{22}$ & $0.054$ & $41$ \\ $0.60$ & $ 512 $ & $200$ & $2^{22}$ & $0.054$ & $41$ \\[1mm] $0.75$ & $ 32,48,64,96 $ & $4000$ & $2^{20}$ & $0.030$ & $41$ \\ $0.75$ & $ 128 $ & $1600$ & $2^{22}$ & $0.030$ & $41$ \\ $0.75$ & $ 192 $ & $1600$ & $2^{24}$ & $0.030$ & $41$ \\ $0.75$ & $ 256 $ & $500$ & $2^{26}$ & $0.030$ & $41$ \\[1mm] $0.85$ & $ 32,48,64,96 $ & $4000$ & $2^{20}$ & $0.018$ & $61$ \\ $0.85$ & $ 128 $ & $1600$ & $2^{22}$ & $0.018$ & $61$ \\ $0.85$ & $ 192 $ & $1600$ & $2^{24}$ & $0.025$ & $41$ \\ $0.85$ & $ 256 $ & $500$ & $2^{26}$ & $0.025$ & $41$ \\ \hline \hline \end{tabular*} \vspace{-.2cm} } \end{table} \begin{figure*} \centering \includegraphics[width=0.95\columnwidth]{crossings60.eps} \includegraphics[width=0.95\columnwidth]{extrapolation60.eps} \includegraphics[width=0.95\columnwidth]{crossings75.eps} \includegraphics[width=0.95\columnwidth]{extrapolation75.eps} \includegraphics[width=0.95\columnwidth]{crossings85.eps} \includegraphics[width=0.95\columnwidth]{extrapolation85.eps} \vspace*{-0.3cm} \caption{(Color online) Panels (a), (c) and (e) show the correlation length $\xi_L/L$ (inset: susceptibility $\chi_{\rm SG}/L^{2-\eta}$) as a function of temperature $T$ for different system sizes $L$. Panel (a) shows data for $\sigma = 0.60$ (mean-field regime) where a transition is expected \cite{brangian:02c} [note that here $\nu = 1/(2\sigma - 1)$]. Panels (c) and (e) show data for $\sigma = 0.75$ and $\sigma = 0.85$, respectively, which correspond to a space dimension below the upper critical dimension. A transition at a low yet finite temperature is clearly visible. Panels (b), (d) and (f) show the crossing temperatures $T_c^*(L,2L)$ of successive pairs of system sizes for different exponents $\sigma$ [(b) $0.60$; (d) $0.75$; (f) $0.85$]. The crossings for both $\xi_L/L$ and $\chi_{\rm SG}/L^{2 - \eta}$ are well approximated by a linear behavior in $1/L$. Despite small deviations between the estimates for both quantities, for all $\sigma$ values studied $T_c(\sigma) > 0$. In particular, we estimate $T_{c}(0.60) = 0.060(4)$, $T_{c}(0.75) = 0.040(3)$ and $T_{c}(0.85) = 0.025(3)$. Note that the data for $\sigma = 0.60$ show a deviation from the linear behavior for the largest system sizes studied. However, both data sets agree and therefore suggest that the thermodynamic limit might have been reached. } \vspace{-.1cm} \label{fig:crossings} \end{figure*} \section{Results} \label{sec:results} Our results are summarized in Fig.~\ref{fig:crossings}. The main panels in the left column show data for the finite-size correlation length as a function of temperature for (a) $\sigma = 0.60$, (c) $0.75$, and (e) $0.85$. The insets show the corresponding data for the scaled dimensionless susceptibility. In all cases data for different system sizes cross, indicating the presence of a transition.
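The crossing and extrapolation procedure used in the following analysis can be sketched as follows (an illustration, not our production analysis code; the function names and the Gaussian noise model used for the bootstrap are ours):
\begin{verbatim}
import numpy as np

def crossing_temperature(T, y_L, y_2L):
    """Crossing point T*(L,2L) of two dimensionless curves, e.g.
    xi_L/L or chi_SG/L^(2-eta), sampled on a common temperature grid;
    assumes the curves cross once. Linear interpolation of the
    difference locates the sign change."""
    d = y_L - y_2L
    i = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0][0]
    return T[i] - d[i] * (T[i + 1] - T[i]) / (d[i + 1] - d[i])

def extrapolate_tc(L_values, T_star, T_err, n_boot=1000, seed=0):
    """Fit T*(L,2L) = Tc + b/L; bootstrap the intercept error by
    resampling the crossing temperatures within their error bars."""
    x = 1.0 / np.asarray(L_values, dtype=float)
    tc = np.polyfit(x, T_star, 1)[1]      # intercept = Tc estimate
    rng = np.random.default_rng(seed)
    boots = [np.polyfit(x, rng.normal(T_star, T_err), 1)[1]
             for _ in range(n_boot)]
    return tc, np.std(boots)
\end{verbatim}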
To better quantify the thermodynamic behavior, we show in the right column the scaling of the crossings between successive system size pairs $T^*(L,2L)$ as a function of $1/L$. The data can be fit well by a linear function, with the intercept at the vertical axis corresponding to the thermodynamic limit. For all $\sigma$ studied we find finite values for the thermodynamic glass transition. These findings for the long-range model with power-law interactions imply that the 10-state mean-field Potts glass, for $d_{\rm u} < d < \infty$ space dimensions, has a stable glass phase at finite temperatures. In addition, our data for $\sigma > 2/3$ indicate that short-range Potts glasses with a space dimension below the upper critical dimension should also have a finite transition temperature, albeit at very low~$T$ \cite{comment:rppg}. Recently, Alvarez Ba\~nos {\em et al.}~\cite{alvarez:10-ea} performed a thorough study of a three-dimensional Potts glass with $p \le 6$, bimodal disorder and $J_0 = 0$. Their main result is that $T_c$ decreases with an increasing number of states $p$, suggesting that for $10$ states $T_c$ should be strongly suppressed, in agreement with our results. In addition, Alvarez Ba\~nos {\em et al.}~\cite{alvarez:10-ea} claim that (1) only weak ferromagnetic order is visible when $J_0 = 0$, (2) the complexity of the simulations is much higher when $J_0 = 0$, (3) setting $J_0 = -1$ could impact the presence of the glass transition, and (4) the transition could be first order. \begin{figure}[h] \vspace*{-0.2cm} \centering \includegraphics[width=1\columnwidth]{fm_susceptibility.eps} \vspace*{-0.5cm} \caption{(Color online) Ferromagnetic susceptibility $\chi_m$ as a function of temperature $T$ for different system sizes. Ferromagnetic order is strongly suppressed for $J_0 = -1$ in comparison to the $J_0 = 0$ case. } \vspace*{-0.1cm} \label{fig:chim} \end{figure} We have examined these claims using the one-dimensional model with Gaussian disorder and find that (1) ferromagnetic order grows considerably when $J_0 = 0$ at low enough temperatures (see Fig.~\ref{fig:chim}) and (2) the complexity of the simulations is not affected by shifting the mean of the interactions. With respect to point (3), we do find, however, that the transition temperatures are reduced by approximately a factor of $2$ -- $3$ when $J_0 = -1$ in comparison to the simulations where $J_0 = 0$. Shifting the mean of the interactions therefore only quantitatively impacts the transition temperature. Finally, regarding point (4), for the system sizes studied, the distribution functions of the energy show no double-peak structure that would be indicative of a first-order transition. \section{Conclusions} \label{sec:conlcusions} Using a one-dimensional 10-state Potts glass with power-law interactions, we present evidence suggesting that short-range finite-dimensional 10-state Potts glasses should exhibit a finite-temperature transition for low enough temperatures and large enough system sizes. Although corrections to scaling are large, we estimate that for all $\sigma$ values studied $T_c(\sigma) > 0$. In particular, we conservatively estimate $T_{c}(0.60) = 0.060(4)$, $T_{c}(0.75) = 0.040(3)$, and $T_{c}(0.85) = 0.025(3)$. Larger system sizes might show a different behavior; however, the presented state-of-the-art simulations show strong evidence that short-range 10-state Potts glasses in high enough space dimensions should order. \begin{acknowledgments} We thank A.~P.~Young for numerous discussions.
H.G.K.~acknowledges support from the SNF (Grant No.~PP002-114713). The authors acknowledge ETH Zurich for CPU time on the Brutus cluster. \end{acknowledgments} \vspace*{-0.5cm}
\section{Introduction} \label{sec:intro} Most existing dialogue systems are sequence-to-sequence (seq2seq) models~\cite{Luan2016LSTMBC,serban2016building}. Since a dialogue generally lasts for several turns, a dialogue session with multiple utterances can often be modeled as a sequence of ``sequences'' (i.e., utterances). A representative framework is the hierarchical recurrent encoder-decoder framework HRED~\cite{serban2016building,Serban2017AHL}. In HRED, a recurrent neural network (RNN) encoder encodes the tokens in each utterance, and a context RNN encodes the temporal structure of the utterances. The entire dialogue session is then organized as a sequence. Although HRED is effective in modeling sequential dialogue sessions, it falls short for dialogues involving more than two interlocutors. Table~\ref{tab:intro} shows a real conversation of 3 people ($p_i$) in the Ubuntu forum. Utterances 3 and 4 both respond to utterance 2, represented as a graph in Figure~\ref{fig:intro_observation}. We see that utterances can occur in parallel with each other. This is beyond the expressive power of sequence models. This paper generalizes sequence-based representation of two-party dialogues to a graph-based representation of multi-party dialogues. Two-party sequence representation is a special case. The proposed model, called GSN (\textit{graph-structured network}), models the information flow in a graph-structured dialogue. It is a general model and works well for both graph-structured (multi-party) and sequential (two-party) dialogues. \begin{table}[ht] \renewcommand{\arraystretch}{1.3} \scriptsize \begin{center} \begin{tabular}{l} \hline utterance 1 ($p_1$): When the screen goes blank and won't display any login page. \\ utterance 2 ($p_2$): I don't know if its a hardware problem or an os.\\ utterance 3 ($p_1$): Did you do any upgrade recently?\\ utterance 4 ($p_3$): If it works for one user it's probably not a hardware issue.\\ \hline \end{tabular} \end{center} \caption{A real conversation in the Ubuntu forum.} \label{tab:intro} \end{table} \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{intro_observation_new.png} \caption{Sequence and graph structures.} \label{fig:intro_observation} \end{figure} The core of GSN is an \textit{utterance-level graph-structured encoder} (UG-E), which encodes utterances based on the graph topology rather than the sequence of their appearances. Encoding in UG-E is an iterative process. In each iteration, each utterance (a node in the graph) $i$ accepts information from all its preceding utterances (nodes) $j$. UG-E is thus a generalization of existing sequential encoders, and can handle both sequential and graph-based dialogues. GSN also models the speaker information as the utterances from the same speaker often have certain relationships. Sequence-based methods in~\cite{Li2016APN,Zhang2017NeuralPR} also learn a user embedding and concatenate it to the utterances. However, GSN builds implicit connections between utterances from the same interlocutor to model the dynamic information flow among his/her utterances with no explicit user representation, which results in performance gains. In summary, this paper makes the following contributions. (1) It proposes a novel graph-structured network (GSN) to model graph-structured dialogues. Sequence models are a special case. The core of GSN is an utterance-level graph-structured encoder (UG-E). To our knowledge, no work on graph-based representation learning has been done for dialogues. 
(2) It formulates the linkage within the graph to model users across dialogue sessions. Experiments show that GSN can reach up to 13.85 BLEU points and improve the state-of-the-art baselines by 2.27 (over 16\%) BLEU points. \section{Problem Formulation} \label{sec:ProblemFormulation} Utterances in a structured dialogue session can be formulated as a directed graph $\mathbf{G}(V, E)$, where $V$ is a set of $m$ vertices $\{1, ..., m\}$ and $E = \{e_{i,j}\}^m_{i,j=1}$ is a set of directed edges. Each vertex $i$ is an utterance represented as a vector $\mathbf{s}_i$ learned by an RNN. If utterance $j$ is a response to utterance $i$, then there is an edge from $i$ to $j$ with $e_{i,j} = 1$; otherwise $e_{i,j} = 0$. Our goal is to generate the (best) response $\mathbf{\bar{r}}$ that maximizes the conditional likelihood given the graph $\mathbf{G}$: \begin{small} \begin{equation}\label{eq:ns1} \mathbf{\bar{r}} = \mathop{\arg \max}_\mathbf{r} \log P(\mathbf{r|G}) = \mathop{\arg \max}_\mathbf{r} \sum_{i=1}^{|\mathbf{r}|} \log P(r_i | \mathbf{G}, \mathbf{r}_{<i}) \end{equation} \end{small} where $P(\mathbf{r}|\mathbf{G})$ is modeled with the proposed GSN. This model can be further enhanced by considering the speaker information, which introduces an adjacency matrix $U = \{u_{i,j}\}^m_{i,j=1}$, with $u_{i,j} = 1$ if utterances $i$ and $j$ are from the same speaker and $j$ is after $i$; $u_{i,j} = 0$ otherwise. \section{Graph-Structured Neural Network (GSN)}\label{sec:model} \begin{figure*}[ht] \centering \includegraphics[width=0.75\textwidth]{ALL_new.png} \caption{Architecture of GSN. } \label{fig:NGM_all_frame} \end{figure*} Figure \ref{fig:NGM_all_frame} gives the overall framework of GSN, which has three main components: a \textit{word-level encoder} (W-E), an \textit{utterance-level graph-structured encoder} (UG-E), and a decoder. UG-E is the core of GSN. To make Figure \ref{fig:NGM_all_frame} concise, we omit some connecting lines and attention links. `$\otimes$' is a special multiplication operator, called the \textit{update operator} (see below). `$\cdot$' denotes standard matrix multiplication. \subsection{Word-level Encoder (W-E)} Given an utterance $i = (w_{i,1}, w_{i,2}, ..., w_{i,n})$, W-E encodes it into an internal vector representation. We use a bidirectional recurrent neural network (RNN) with LSTM units to encode each word $w_{i,t}, t \in \{1, ..., n \}$ as a hidden vector $ \mathbf{s}_{i,t}$: \begin{small} \begin{equation} \label{eq:ns2} \begin{aligned} \overrightarrow{\mathbf{s}_{i,t}} =& \overrightarrow{LSTM}(\mathbf{e\_w}_{i,t}, \overrightarrow{\mathbf{s}_{i,t-1}}) \\ \overleftarrow{\mathbf{s}_{i,t}} =& \overleftarrow{LSTM}(\mathbf{e\_w}_{i,t}, \overleftarrow{\mathbf{s}_{i,t-1}}) \end{aligned} \end{equation} \end{small} where $\mathbf{e\_w}_{i, t}$ is the embedding of word $w_{i,t}$ at time step $t$, $\overrightarrow{\mathbf{s}_{i,t}}$ is the hidden state for the forward pass LSTM and $\overleftarrow{\mathbf{s}_{i,t}}$ for the backward pass. We use their concatenation, i.e., $[\overrightarrow{\mathbf{s}_{i,t}}; \overleftarrow{\mathbf{s}_{i,t}}]$, as the hidden state $\mathbf{s}_{i,t}$ at time $t$. Note that each word in the utterance corresponds to a state and a time step. After encoding by W-E, a session with utterances $\{1, ..., m\}$ is represented with $\mathbf{S} = \{\mathbf{s}_i, i \in \{1, ..., m\}\}$, where $\mathbf{s}_i = \mathbf{s}_{i,n}$ is the last hidden state of W-E.
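A minimal sketch of W-E (ours, with illustrative names; it follows the LSTM formulation of Eq.~(\ref{eq:ns2}), while our experiments below actually use GRU units, and the 300-dimension/2-layer settings match those given in the experimental setup):
\begin{verbatim}
import torch
import torch.nn as nn

class WordLevelEncoder(nn.Module):
    # Bidirectional LSTM word-level encoder (W-E).
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=300):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, num_layers=2,
                            batch_first=True, bidirectional=True)

    def forward(self, utterances):
        # utterances: (m, n) token ids for m utterances of n words.
        # Returns per-word states (m, n, 2*hidden_dim) and the
        # per-utterance vectors s_i = s_{i,n} (m, 2*hidden_dim),
        # i.e. the concatenated forward/backward states at time n.
        emb = self.embedding(utterances)
        states, _ = self.lstm(emb)   # [forward; backward] per word
        return states, states[:, -1, :]
\end{verbatim}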
\subsection{Utterance-level Graph-Structured Encoder (UG-E)} \label{ssec:IFU} The HRED model is a hierarchical sequence-based word- and utterance-level RNN. It predicts the hidden state of each utterance at time step $t$ by encoding the sequence of all utterances that have appeared so far. Due to graph structures in real dialogues, an RNN is no longer suitable for modeling the information flow of utterances. For instance, in Figure \ref{fig:intro_observation}, HRED cannot handle utterances 3 and 4 properly because they are not logically sequential, but are ``in parallel.'' The UG-E addresses this limitation. \subsubsection{UG-E \& Information Flow Over Graph} \label{ssec:IFOG} To model a graph structure and its information flow, we propose a new RNN with dynamic iterations. Given a session $\mathbf{S}$, only the information in the preceding nodes/vertices $i'$ of each node $i$ will flow to $i$ in an iteration (i.e., there is a directed edge from each $i'$ to $i$). Then the state of node (utterance) $i$ is updated and the updated state is used in the next iteration. In each iteration, all updates in a session are done in parallel. In this way, the encoding information and gradients can flow fully over the graph after some iterations. For instance, in the session in Figure \ref{fig:intro_observation} (or \ref{fig:user_info_flow}), although the information flow of one iteration is from a node's preceding nodes to the node itself, the information in 1 can flow to 3 after two iterations. \begin{figure}[h] \centering \includegraphics[width=0.4\textwidth]{InformationFlowstep.png} \caption{Information flow over the graph.} \label{fig:info_flow_step} \end{figure} UG-E's basic operation is illustrated in Figure \ref{fig:info_flow_step}. For example, given a session $\mathbf{S} = (\mathbf{s}_{1}, \mathbf{s}_{2}, \mathbf{s}_{3}, \mathbf{s}_{4})$, in the $l$-th iteration, the state of the $i$-th utterance can be calculated by: \begin{small} \begin{equation} \label{eq:ns3} \begin{aligned} & \mathbf{s}^l_i = \mathbf{s}^{l - 1}_i + \eta \cdot \Delta \mathbf{s}^{l - 1}_{I|i} \\ & \Delta \mathbf{s}^{l - 1}_{I|i} = \sum_{i' \in \varphi} \Delta \mathbf{s}^{l - 1}_{i'|i} \end{aligned} \end{equation} \end{small} where $\varphi$ is the collection of preceding nodes of the current node $i$ in the direction of the information flow; $\Delta \mathbf{s}^{l - 1}_{I|i}$ is the updating information, which is calculated by Eq. \ref{eq:ns5} below; $\eta$ is the updating coefficient indicating how much of the new information (from the preceding nodes) should be added to the current state of the $i$-th utterance (node). Inspired by~\cite{sabour2017dynamic}, we design an alpha-weight as the updating coefficient.
We use a non-linear ``squashing'' function (i.e., $\text{SQH}(\cdot)$) to give vectors with a small norm a weight close to $\alpha$, but vectors with a large norm a weight close to 1: \begin{small} \begin{equation} \label{eq:ns4} \begin{aligned} \eta = \text{SQH}(\Delta \mathbf{s}^{l - 1}_{I|i}) = \frac{\alpha + ||\Delta \mathbf{s}^{l - 1}_{I|i}||}{1 + ||\Delta \mathbf{s}^{l - 1}_{I|i}||} \end{aligned} \end{equation} \end{small} where $\alpha > 0$ is a hyperparameter (it must be greater than 0 to provide a sufficient updating rate from the very beginning); $\Delta \mathbf{s}^{l - 1}_{I|i}$ is the updating information and is produced based on the state of the current utterance $\mathbf{s}^{l - 1}_i$ and the state of the preceding utterance $\mathbf{s}^{l - 1}_{i'}$: \begin{small} \begin{equation} \label{eq:ns5} \begin{aligned} \Delta \mathbf{s}^{l - 1}_{i'|i} = \mathbf{s}^{l - 1}_{i'} \otimes \mathbf{s}^{l - 1}_{i} \end{aligned} \end{equation} \end{small} where `$\otimes$', the \textit{update operator}, computes the updating information. Inspired by the updating operation hidden in Gated Recurrent Units (GRU)~\cite{cholearning}, $\otimes$ is defined as: \begin{small} \begin{equation} \label{eq:ns6} \begin{aligned} \Delta \mathbf{s}^{l - 1}_{i'|i} =&\ (1 - \mathbf{x}_i) * \mathbf{s}^{l - 1}_{i'} + \mathbf{x}_i * \tilde{\mathbf{h}}_i \\\tilde{\mathbf{h}}_i =&\ \tanh(\mathbf{W} \cdot [\mathbf{r}_i * \mathbf{s}^{l - 1}_{i'}, \mathbf{s}^{l - 1}_{i}]) \\ \mathbf{x}_i =&\ \sigma(\mathbf{W}_x \cdot [\mathbf{s}^{l - 1}_{i'}, \mathbf{s}^{l - 1}_{i}]) \\ \mathbf{r}_i =&\ \sigma(\mathbf{W}_r \cdot [\mathbf{s}^{l - 1}_{i'}, \mathbf{s}^{l - 1}_{i}]) \end{aligned} \end{equation} \end{small} where $\mathbf{W}$, $\mathbf{W}_x$ and $\mathbf{W}_r$ are parameters to be learned. $\sigma$ is the sigmoid function. \subsubsection{Bi-directional Information Flow}\label{ssec:BIF} \begin{figure}[t] \centering \subfigure[\scriptsize Bi-directional information flow.] { \label{fig:info_flow_unit} \includegraphics[width=0.2\textwidth]{bidirectionIF.png} } \subfigure[\scriptsize Speaker information modeling.] { \label{fig:user_info_flow} \includegraphics[width=0.2\textwidth]{user_info_flow.png} } \caption{Information flow.} \label{fig:attn_hm} \end{figure} In Figure \ref{fig:info_flow_unit}, utterances 3 and 4 are two responses to utterance 2. It is obvious that utterance 3 can help generate a better state for utterance 4 and vice versa. However, the algorithm introduced above only allows the information and gradients to flow along the forward direction of the graph (as shown by the purple arrows in Figure \ref{fig:info_flow_unit}). Hence the information in utterance 3 cannot flow to utterance 4. To tackle this problem, we propose a Bi-directional Information Flow (BIF) algorithm, which also uses backward information flow (as shown by the orange arrows in Figure \ref{fig:info_flow_unit}). In order to allow information to flow thoroughly, we push the information backward first and then forward, ensuring that information can flow from one node to its sibling nodes, i.e., backward to the parent and forward to the siblings. In our example above, the information of utterance 3 can flow to utterance 4 through utterance 2 after one backward flow and one forward flow, as illustrated in Figure~\ref{fig:info_flow_unit}. \subsubsection{Speaker Information Flow} \label{ssec:userinfoflow} Representing speaker information in the latent embedding space is a popular method to enhance dialogue generation.
However, this method cannot adequately model the speaker and the dynamic changes of the speaker's ideas within a given session, especially when the speaker only speaks a few times: embedding-based methods usually require large amounts of data to train, so there may not be enough data to learn embeddings that represent the speaker and these changes~\cite{Li2016APN,Qian2017AssigningPT,Zhang2017NeuralPR}. Since the changes in a speaker's utterances reflect the changes in his/her mind, we propose to create an edge for every pair of utterances from the same speaker, following the chronological order of the utterances. Thus there should be hidden edges among all utterances of the same user (e.g., the edge from utterance 1 to utterance 3 in Figure \ref{fig:user_info_flow}). We employ the same form of operation as $\otimes$ to process the hidden edges, but since it has its own parameters, we denote it by $\circledast$: \begin{small} \begin{equation} \label{eq:ns7} \begin{aligned} \Delta \mathbf{s'}^{l - 1}_{i'|i} = \mathbf{s}^{l - 1}_{i'} \circledast \mathbf{s}^{l - 1}_{i} \end{aligned} \end{equation} \end{small} We add the speaker information to Eq. \ref{eq:ns3}: \begin{small} \begin{equation} \label{eq:ns8} \begin{aligned} \mathbf{s}^l_i = &\mathbf{s}^{l - 1}_i + \eta \cdot \Delta \mathbf{s}^{l - 1}_{I|i} + \lambda \cdot \Delta \mathbf{s'}^{l - 1}_{I|i} \\ & \Delta \mathbf{s'}^{l - 1}_{I|i} = \sum_{i' \in \varphi} \Delta \mathbf{s'}^{l - 1}_{i'|i} \end{aligned} \end{equation} \end{small} where $\eta$ and $\Delta \mathbf{s}^{l - 1}_{I|i}$ are the same as those in Eq. \ref{eq:ns3}; $\lambda$ is also calculated with Eq. \ref{eq:ns4}, with input $\Delta \mathbf{s'}^{l - 1}_{I|i}$ instead of $\Delta \mathbf{s}^{l - 1}_{I|i}$. \subsection{Reformulation as Matrix Operations} \label{ssec:CIFU} So far, we have presented the proposed model. For computation, we reformulate it as matrix operations (also see UG-E in Figure~\ref{fig:NGM_all_frame}) and give the pseudo-code in Algorithm 1. Recall the session $\mathbf{S} = (\mathbf{s}_{1}, \mathbf{s}_{2}, \mathbf{s}_{3}, \mathbf{s}_{4})$, which is used to build the graph $\mathbf{G}(V,E)$ in Figure \ref{fig:info_flow_unit}. We build a state matrix $\mathbb{S}$ with the vertices of the graph $\mathbf{G}$ (also the session $\mathbf{S}$) as the diagonal elements, with all other elements set to 0 (we name this process the \emph{Building State Matrix function}, denoted by $BSM(\mathbf{S})$). We then use the ``@'' relation (a speaker responding to another speaker) as the connection between two vertices to build the edge matrix $\mathbf{E}$ (shown in Figure \ref{fig:NGM_all_frame}). Recall the speaker information modeling in Section \ref{ssec:userinfoflow} and the utterance speaker adjacency matrix $U$. The main operation of Eq.
\ref{eq:ns8} can be formalized by: \begin{small} \begin{equation} \label{eq:ns9} \begin{aligned} \Delta \mathbf{E} = \mathbb{S}^{l - 1} \cdot \mathbf{E} \otimes \mathbb{S}^{l - 1}; \Delta \mathbf{U} = \mathbb{S}^{l - 1} \cdot \mathbf{U} \circledast \mathbb{S}^{l - 1} \end{aligned} \end{equation} \begin{equation} \label{eq:ns10} \begin{aligned} \mathbb{S}^l = \mathbb{S}^{l - 1} + BSM(\boldsymbol{\eta} \odot \Delta \mathbb{E} + \boldsymbol{\lambda} \odot \Delta \mathbb{U}) \end{aligned} \end{equation} \end{small} where $\Delta \mathbb{E} = \{\sum^m_{j=1} \Delta \mathbf{E}_{i,j}\}^m_{i=1}$ and $\Delta \mathbb{U} = \{\sum^m_{j=1} \Delta \mathbf{U}_{i,j}\}^m_{i=1}$ are two vectors, $m$ is the length of the given session; $\odot$ denotes the Hadamard product; $\boldsymbol{\eta}$ and $\boldsymbol{\lambda}$ can be calculated by: \begin{small} \begin{equation} \label{eq:ns11} \begin{aligned} \boldsymbol{\eta} = \{\text{SQH}(\Delta \mathbb{E}_{i})\}^m_{i=1}; \boldsymbol{\lambda} = \{\text{SQH}(\Delta \mathbb{U}_{i})\}^m_{i=1} \end{aligned} \end{equation} \end{small} This is just the forward information flow. We can obtain the backward information flow operation by changing Eq. \ref{eq:ns9} to: \begin{small} \begin{equation} \label{eq:ns12} \begin{aligned} \Delta \mathbf{E} = \mathbb{S}^{l - 1} \cdot \mathbf{E}^T \otimes \mathbb{S}^{l - 1}; \Delta \mathbf{U} = \mathbb{S}^{l - 1} \cdot \mathbf{U}^T \circledast \mathbb{S}^{l - 1} \end{aligned} \end{equation} \end{small} To obtain $\Delta \mathbb{E}$ and $\Delta \mathbb{U}$ in Eq.~\ref{eq:ns10}, we then need to change the direction of the sum, i.e., $\Delta \mathbb{E} = \{\sum^m_{j=1} \Delta \mathbf{E}_{j,i}\}^m_{i=1}$ and $\Delta \mathbb{U} = \{\sum^m_{j=1} \Delta \mathbf{U}_{j,i}\}^m_{i=1}$. $\mathbb{S}$, $\mathbf{E}$, and $\mathbf{U}$ can be very sparse, but the computation can be organized so that the sparse matrices are handled by sparse matrix operations. The pseudo-code is given in Algorithm 1 in the \textit{Appendix}~\footnote{\href{https://morning-dews.github.io/Appendix/IJCAI2019_GSN.pdf}{https://morning-dews.github.io/Appendix/IJCAI2019\_GSN.pdf}}. \subsection{Decoder}\label{ssec:decoder} As shown in Figure \ref{fig:NGM_all_frame}, we illustrate a session $\{i\}^m_{i=1}$ with the corresponding encoding state denoted by $\mathbf{S}$. To generate a response to an utterance $i$, the decoder calculates a distribution over the vocabulary and sequentially predicts each word $r_k$ using a softmax function: \begin{scriptsize} \begin{equation} \label{eq:ns13} p(\mathbf{r}|\mathbf{S}; \theta) = \prod^{|\mathbf{r}|}_{k=1} P(r_k|\mathbb{S}_{i,i}, \mathbf{r}_{<k}; \theta) = \prod^{|\mathbf{r}|}_{k=1} \text{softmax}(f(\mathbf{h}_k, \mathbf{c}_k, r_{k-1})) \end{equation} \end{scriptsize} where $f(\cdot)$ is the tanh function and $r_{k-1}$ is the word generated at the $(k{-}1)$-th time step, obtained from a word look-up table. $\mathbf{h}_k = \text{GRU}(\mathbf{h}_{k-1}, r_{k-1})$ is the hidden state variable of a GRU at time step $k$. $\mathbf{h}_0 = \mathbb{S}_{i,i}$, and $\mathbf{c}_k$ is the attention-based encoding of utterance $i$ at decoding time step $k$; it is calculated by $\mathbf{c}_k = \sum_{j=1}^n \frac{\exp(e_{j,k})\, \mathbf{s}_{i,j}}{\sum_{j' = 1}^{n}\exp(e_{j',k})}$, where $\mathbf{s}_{i,j}$ is the encoder hidden state at time step $j$ for utterance $i$, and $e_{j,k} = \mathbf{h}_k \mathbf{W}_a \mathbf{s}_{i,j}$ scores the match degree of $\mathbf{h}_k$ and $\mathbf{s}_{i,j}$. \section{Experiments} \begin{table*}[t!]
\renewcommand{\arraystretch}{1.2} \scriptsize \begin{center} \begin{tabular}{|l|cccc|c|c|} \hline Model & BLEU 1 & BLEU 2 & BLEU 3 & BLEU 4 & METEOR & $\text{ROUGE}_\text{L}$ \\ \hline Seq2seq & 10.45 & 4.13 & 2.08 & 1.02 & 3.43 & 9.67 \\ Seq2seq W-speaker & 10.70 & 4.98 & 2.20 & 1.55 & 3.92 & 9.42 \\ Seq2seq (last utte) & 9.85 & 3.04 & 1.38 & 0.67 & 3.98 & 8.34 \\ HRED~\cite{serban2016building} & 10.80 & 4.60 & 2.54 & 1.42 & 4.38 & 10.23 \\ \rowcolor{lightgray} HRED W-speaker & 11.23 & 4.82 & 3.06 & 1.64 & 4.36 & 10.98 \\ \hline GSN No-speaker (1-iter) & 9.42 & 3.05 & 1.61 & 0.95 & 3.74 & 7.63 \\ GSN No-speaker (2-iter) & 12.06 & 4.87 & 2.80 & 1.70 & 4.32 & 10.09 \\ GSN No-speaker (3-iter) & 12.77$^{\blacktriangle}$ & 5.37$^{\blacktriangle}$ & 3.17 & 1.99$^{\blacktriangle}$ & 4.53 & 10.80 \\ \hline GSN W-speaker (1-iter) & 10.31 & 4.06 & 2.34 & 1.45 & 3.88 & 9.96 \\ GSN W-speaker (2-iter) & 12.77 & 4.93 & 2.61 & 1.46 & 4.79 & 11.34 \\ GSN W-speaker (3-iter) & \textbf{\underline{13.50}}$^{\blacktriangle}$ & \textbf{\underline{5.63}}$^{\blacktriangle}$ & \textbf{\underline{3.24}}$^{\blacktriangle}$ & \textbf{\underline{1.99}}$^{\blacktriangle}$ & \textbf{\underline{4.85}}$^{\blacktriangle}$ & \textbf{\underline{11.36}}$^{\blacktriangle}$ \\ \hline \end{tabular} \end{center} \caption{Experimental results of the different models based on automated evaluation. `Seq2seq (last utte)' is trained using only the last utterance before the final response of the session as the input (all earlier utterances are ignored). `$n$-iter' means that the results are obtained after $n$ iterations. `No-speaker' is our proposed GSN model without the speaker information flow, while `W-speaker' includes it. $^{\blacktriangle}$ denotes a $p$-value $< 0.01$ in a paired $t$-test against the best baseline (shaded row). } \label{tab:auto_eval} \end{table*} \subsection{Experimental Setups} \label{ssec:dataprepara} \textbf{Data Preparation}: Our experiment uses the Ubuntu Dialogue Corpus\footnote{http://dataset.cs.mcgill.ca/ubuntu-corpus-1.0/}~\cite{Lowe2015TheUD} as it is the only benchmark corpus with annotations of multiple interlocutors. It is also large, with almost one million multi-turn dialogues, over seven million utterances, and 100 million words. Each record contains a response utterance with its speaker ID and posting time. To build the training and testing datasets, we extract all utterances with response relations indicated by the ``@'' symbol in the corpus. For example, ``A @ B'' means that the utterance is addressed to Speaker B by Speaker A. Utterances from Speaker A and Speaker B are encoded into vector representations and used as vertices to construct the state matrix introduced in Section \ref{ssec:CIFU}. A directed edge is added from A to B and used to build the edge matrix described in Sec. \ref{ssec:CIFU}; an example of this construction is sketched below. Following the baselines, we take the last utterance in each given session as the utterance to be generated\footnote{GSN can generate responses all the way through a dialogue, as GSN can gather the dialogue information at any graph node with the help of the dynamic iteration and information flow mechanisms.} (i.e., the output target) and the other utterances in the session as the input. Finally, we extracted 380k sessions (about 1.75M utterances) as the experiment corpus, and each session has 3 to 10 utterances and 2 to 7 interlocutors.
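As an illustration of this construction (a sketch of ours with hypothetical helper names, not code from our released implementation), the edge matrix $\mathbf{E}$ and the speaker matrix $U$ for the session of Table~\ref{tab:intro} can be built as follows:
\begin{verbatim}
import numpy as np

def build_graph_matrices(reply_to, speakers):
    # reply_to[j] is the index i of the utterance that utterance j
    # answers (via the '@' relation), or None for the root utterance;
    # speakers[i] is the speaker id of utterance i.
    m = len(speakers)
    E = np.zeros((m, m), dtype=int)
    U = np.zeros((m, m), dtype=int)
    for j, i in enumerate(reply_to):
        if i is not None:
            E[i, j] = 1          # e_{i,j} = 1: j responds to i
    for i in range(m):
        for j in range(i + 1, m):
            if speakers[i] == speakers[j]:
                U[i, j] = 1      # u_{i,j} = 1: same speaker, j after i
    return E, U

# Session of Table 1 (0-based): utterance 2 answers 1, and
# utterances 3 and 4 both answer 2.
E, U = build_graph_matrices([None, 0, 1, 1], ["p1", "p2", "p1", "p3"])
\end{verbatim}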
We randomly divide the corpus into the training, development (with 5k q/a pairs), and test (with 5k q/a pairs) sets. We report the results on the test set. In testing, following the graph structure, the system knows which utterances to respond to. This is reasonable as this is also the case in a human dialogue, i.e., before responding, we know which preceding utterances to reply to. It is important to note here that GSN is a general model that works well for both graph-structured (multi-party) and sequential (two-party) dialogues, as we will see shortly. \noindent \textbf{Baselines}: We use a seq2seq model and a hierarchical (HRED) model as the baselines. HRED has been shown superior to other state-of-the-art models~\cite{serban2016building,Sutskever2014SequenceTS}. \textbf{HRED model:} We use the latest HRED model~\cite{serban2016building}. Since HRED cannot deal with structured sessions, we convert each graph-structured dialogue session to a set of dialogue sequences by extracting every possible forward path from the first utterance to the last utterance of the session by following the ``@'' relationship and regarding each path (a sequence) as a dialogue sequence/session. \textbf{Seq2seq model:} We use the same sequential sessions extracted for HRED. Since the last utterance in each new session is the target utterance to be generated and the others are the input utterances, all the input utterances are concatenated into a \textit{long} utterance as the input. Seq2seq modeling with attention is performed as in~\cite{Bahdanau2014NeuralMT,Sutskever2014SequenceTS} on the concatenated utterance. \textbf{Baselines with speaker information:} We tried 3 methods of adding speaker information to the baselines, using a trainable speaker embedding (SE)~\cite{Li2016APN,Qian2017AssigningPT} as the speaker information: a) concatenating the SE with each word in the utterance; b) using the SE as the prefix of the utterance; c) using the SE as the prefix of the utterance and the addressee's SE as the suffix. For seq2seq (respectively, HRED), method c) (respectively, b)) performs best; we report the corresponding results in Table \ref{tab:auto_eval}. \textbf{Our GSN Model}: There are multiple ways to implement the proposed GSN model. Since the objective of this paper is not to explore all possible ways, we use GRU units for all recurrent neural networks. For fairness, we adopt this setting for all baselines as well. We set $\alpha$ in Eq. \ref{eq:ns4} to 0.25. Benefiting from dynamic iterations, GSN is very flexible in generating responses for any utterance in a given session. However, to be consistent with the baselines, only the last utterance in each session is used as the target in training and testing. The code of our model can be found online\footnote{\href{https://github.com/morning-dews/GSN-Dialogues}{https://github.com/morning-dews/GSN-Dialogues}}. \noindent \textbf{Training Details of Our GSN Model}: We share the word embedding between the word-level encoder and the decoder and limit the shared vocabulary to 30k. The number of hidden units is set to 300 and the word embedding dimension is set to 300. We use 2 layers for both the word-level encoder and the decoder. The network parameters are updated using the Adam algorithm~\cite{Kingma2014AdamAM} with a learning rate of 0.0001. All utterances are clipped to 30 words. We run all experiments on a single GTX Titan X GPU, and training takes 25 epochs. Each experiment takes about 48 hours.
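For concreteness, before turning to the results, we give a minimal NumPy sketch of one forward-flow UG-E iteration, Eqs.~(\ref{eq:ns3})--(\ref{eq:ns6}), with $\alpha = 0.25$ as above (the weight matrices would be learned in practice, and all names are ours rather than from our released code):
\begin{verbatim}
import numpy as np

ALPHA = 0.25                      # hyperparameter of Eq. (4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def squash(delta):
    # Eq. (4): weight near ALPHA for small updates, near 1 for large.
    n = np.linalg.norm(delta)
    return (ALPHA + n) / (1.0 + n)

def update_operator(s_prev, s_cur, W, Wx, Wr):
    # Eq. (6): GRU-style update information Delta s_{i'|i}.
    cat = np.concatenate([s_prev, s_cur])
    x = sigmoid(Wx @ cat)
    r = sigmoid(Wr @ cat)
    h = np.tanh(W @ np.concatenate([r * s_prev, s_cur]))
    return (1 - x) * s_prev + x * h

def uge_iteration(S, E, params):
    # One forward-flow iteration (Eq. 3): every node i absorbs
    # squash-weighted update information from its predecessors i'.
    S_new = S.copy()
    for i in range(S.shape[0]):
        preds = np.nonzero(E[:, i])[0]
        if len(preds) == 0:
            continue
        delta = sum(update_operator(S[j], S[i], *params) for j in preds)
        S_new[i] = S[i] + squash(delta) * delta
    return S_new

# Toy usage on the graph of Figure 2: edges 1->2, 2->3, 2->4 (0-based).
d, rng = 6, np.random.default_rng(0)
S = rng.normal(size=(4, d))
E = np.zeros((4, 4)); E[0, 1] = E[1, 2] = E[1, 3] = 1
params = [rng.normal(scale=0.1, size=(d, 2 * d)) for _ in range(3)]
S = uge_iteration(S, E, params)
\end{verbatim}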
\subsection{Results and Analysis} \label{ssec:resultanalysis} \textbf{Automated Evaluation}: We use two kinds of metrics in automated evaluation: 1) Following \cite{Fu2017AligningWT,Havrylov2017EmergenceOL}, we use the evaluation package of~\cite{Chen2015MicrosoftCC}, which includes BLEU 1 to 4, METEOR and $\text{ROUGE}_\text{L}$. 2) We also use embedding-based metrics~\cite{forgues2014bootstrapping}, which compensate for the weaknesses of the BLEU metrics. Table \ref{tab:auto_eval} shows the evaluation results. The first five rows are for the baselines. The three rows in the middle are for our GSN model using only the information flow over the graph structure, and the last three rows are also for our GSN model but with the addition of the speaker information flow. From Table~\ref{tab:auto_eval}, we can make the following observations: (1).~GSN (row 11, with the speaker information flow after 3 iterations) markedly outperforms the baselines (rows 2 and 5) by up to 2.27 BLEU points (BLEU 1). (2).~With-speaker (W-speaker) versions of GSN also clearly outperform the no-speaker versions, indicating the importance of the speaker information flow. To further verify whether the improvement is due to adding more connections or to adding the speaker edges, we conducted experiments that add random edges at different percentages, up to full connection among the nodes (using the No-speaker setting with 3 iterations). The results showed a clear drop as the proportion of randomly added edges increased, with very poor results for the fully connected case, which further shows the usefulness of the proposed speaker information modeling method. As the results are very poor, they are not shown here. {(3).~We also use the existing embedding-based persona method to equip the baselines with speaker information (W-speaker) for a fair comparison with the GSN W-speaker version. We can see from Table \ref{tab:auto_eval} that the gain is limited, and our method still outperforms the baselines.} (4).~The results of GSN improve as the number of iterations increases, which indicates the importance of the dynamic iterations. With more iterations, more utterances will be modeled by GSN. After only two iterations, our model with the speaker information flow (row 10) already outperforms both baselines in 4 out of 6 evaluation metrics. Even the no-speaker version (row 7) beats the best baseline in 4 out of 6 evaluation metrics. {The BLEU scores had a tiny increase in the 4th iteration (around 0.1\% for GSN W-speaker, and 0.3\% for GSN No-speaker). For other metrics, e.g., METEOR and $\text{ROUGE}_\text{L}$, there is little change. The 5th iteration is similar, but the scores decrease from the 6th iteration. We thus choose 3 iterations.} \begin{table*}[t!]
\renewcommand{\arraystretch}{1.2} \scriptsize \begin{center} \begin{tabular}{|l|cccc|c|c|} \hline Model & BLEU 1 & BLEU 2 & BLEU 3 & BLEU 4 & METEOR & $\text{ROUGE}_\text{L}$ \\ \hline HRED~\cite{serban2016building} (sequential) & 9.61 & 3.48 & 1.86 & 1.01 & 4.08 & 8.22 \\ GSN No-speaker (2-iter sequential) & 11.39 & 4.55 & 2.68 & 1.71 & 4.40 & 9.74 \\ GSN W-speaker (1-iter sequential) & 8.69 & 3.10 & 1.78 & 1.19 & 3.67 & 9.19 \\ GSN W-speaker (2-iter sequential) & \underline{12.72} & 4.84 & 2.59 & 1.59 & \underline{4.70} & \underline{11.41} \\ GSN W-speaker (3-iter sequential) & 12.03 & \underline{4.92} & \underline{2.94} & \underline{1.97} & 4.31 & 10.10 \\ \hline HRED~\cite{serban2016building} (graph) & 12.16 & 4.90 & 2.68 & 1.49 & 4.42 & 10.90 \\ GSN No-speaker (2-iter graph) & 12.35 & 5.17 & 3.08 & 1.81 & 4.43 & 10.42 \\ GSN W-speaker (1-iter graph) & 10.66 & 4.36 & 2.52 & 1.50 & 3.97 & 10.10 \\ GSN W-speaker (2-iter graph) & 12.76 & 5.23 & 2.94 & 1.75 & 4.80 & 11.33 \\ GSN W-speaker (3-iter graph) & \textbf{\underline{13.85}} & \textbf{\underline{5.83}} & \textbf{\underline{3.33}} & \textbf{\underline{1.98}} & \textbf{\underline{5.10}} & \textbf{\underline{11.66}} \\ \hline \end{tabular} \end{center} \caption{Experimental results {using \emph{sequential data} only (sessions with 2 interlocutors; the first five rows) or \emph{graph data} only (sessions with more than 2 interlocutors; the last five rows). The labels `$n$-iter', `No-speaker' and `W-speaker' have the same meanings as in Table \ref{tab:auto_eval}. The result of GSN No-speaker (3-iter) is not given as it performs worse.}} \label{tab:mul-seq} \end{table*} \noindent \textbf{Embedding-based metrics}. Our model also outperforms the baselines on the embedding-based metrics: it obtains 0.770 / 1.040 / 0.651 for the three metrics (Embedding Average Score / Embedding Greedy Score / Embedding Extrema Score), which are all better than the scores of the best baseline model (HRED), 0.515 / 0.905 / 0.325. {See the details in the \textit{Appendix}$^1$, which also includes a \textit{case study} with examples}. \noindent \textbf{Sequential data and graph data}. To verify the generic nature of the GSN model, we conduct an ablation experiment with only sequential data (sessions with only two interlocutors) or only graph data (the remaining sessions, with more than two interlocutors). The results are shown in Table \ref{tab:mul-seq}. We see that GSN significantly outperforms the strong HRED baseline in both the sequential and the graph settings. {From both Tables \ref{tab:mul-seq} and \ref{tab:auto_eval}, we can see that GSN improves in the sequential case mainly because of the additional encoding iterations.} {Tables \ref{tab:mul-seq} and \ref{tab:auto_eval} also show that the proposed iterative graph-structured encoder UG-E and the graph-based speaker information flow are effective.
GSN is thus a good generalization of the sequence-based models, and a desirable system for both graph-structured (multi-party) and sequential (two-party) dialogue response generation.} \begin{table}[h] \renewcommand{\arraystretch}{1.2} \footnotesize \begin{center} \begin{tabular}{|c|c|cc|cc|} \hline \multirow{2}{*}{\bf Human } & \multirow{2}{*}{{\bf HRED} } & \multicolumn{2}{c|}{\bf No-speaker } & \multicolumn{2}{c|}{\bf W-speaker } \\ \cline{3-6} & & 1-iter & 3-iter & 1-iter & 3-iter \\ \hline 3.01 & 1.91 & 1.89 & 1.98 & 2.23$^{\blacktriangle}$ & \textbf{\underline{2.37}}$^{\blacktriangle}$\\ \hline \end{tabular} \end{center} \vspace{-2mm} \caption{Human evaluation results. $^{\blacktriangle}$ denotes a $p$-value $< 0.01$ in a paired $t$-test against HRED. The perfect score is 4.} \label{tab:human_eval} \end{table} \noindent \textbf{Human Evaluation}: We also conducted a human evaluation to measure the quality of the responses generated by all methods. We evaluate based on ``\textit{naturalness}'', which includes 1) grammaticality, 2) fluency and 3) rationality. We randomly sampled 100 utterance-response pairs, shuffled the order of the systems, and asked three Ph.D. students to rate the pairs in terms of model quality on a 0 to 4 scale (4 being the best); we report the average scores. More details can be found in the \textit{Appendix}$^1$. Table \ref{tab:human_eval} shows that GSN (with the speaker information flow after only 1 iteration, or the no-speaker version after 3 iterations) outperforms the best baseline (HRED), indicating that GSN generates more natural responses. The reason that our model with the speaker information flow and just one iteration (column 5 in Table~\ref{tab:human_eval}) outperforms three iterations of its no-speaker version (column 4 in Table~\ref{tab:human_eval}) is that the speaker information flow allows the model to generate a response that is more consistent with the speaker; such utterances are preferred by the human judges. \section{Related Work} Existing dialogue models follow a sequential information flow~\cite{Shang2015NeuralRM,Wen2017ANE,tao2019multi}. Recent progress in seq2seq models~\cite{Sutskever2014SequenceTS,Luong2015EffectiveAT} has inspired several efforts~\cite{Li2019Insufficient,Young2017AugmentingED} to build dialogue systems. Although seq2seq models have achieved good results for dialogue generation, they regard all input utterances as one long sequence, which greatly increases the complexity of the model in passing information and computing gradients. As an improved solution, the HRED models~\cite{serban2016building} tackle this problem by constructing the sequential flow at the utterance level. However, this setting is insufficient for modeling dialogues that have more than 2 interlocutors, which require a graph-based model. For multi-party dialogues, prior work has employed retrieval-based approaches \cite{Zhang2017AddresseeAR,Meng2017TowardsNS}. No graph modeling method has been proposed, although graph-based methods have been used for other NLP tasks, e.g., Graph Convolutional Networks for classification \cite{kipf2016semi} and semantic role labeling \cite{marcheggiani2017encoding}, and Gated Graph Neural Networks for generation from AMR graphs and for syntax-based neural machine translation \cite{beck2018graph}. Different from these works, we propose a generation model by formulating the complex dialogue problem using a graph-based solution.
Also importantly, compared to seq2seq and HRED, GSN can encode not only graph-structured information flows but also sequential ones. \section{Conclusion} {In this paper, we proposed a \textit{general graph-structured neural network} GSN to model both graph-structured (multi-party) and sequential (two-party) dialogues. The core of the model is an utterance-level graph-based encoder (UG-E), which is a generalization of the conventional sequence-based encoder. For response generation in multi-party conversations, the speaker information is also modeled in the graph. As our results showed, GSN is general and is suitable for both multi-party and two-party dialogues. The current GSN relies on clear addressee information. Our future work will try to automatically identify the conversation structure and decide whom to respond to. Dynamic routing and attention can be leveraged to achieve this goal.} \section{Acknowledgements} This work was partially supported by the National Key Research and Development Program of China (No.~2017YFC0804001), the National Science Foundation of China (No.~U1604153, No.~61876196, No.~61672058), and the Alibaba Innovative Research Fund. Rui Yan was supported by the CCF-Tencent Open Research Fund and the Microsoft Research Asia Collaborative Research Program. \bibliographystyle{named}
1,108,101,564,093
arxiv
\section{Introduction} Since the discovery of quasicrystals in nature \cite{Shech}, aperiodic structures have been extensively studied in condensed matter physics (see, e.g., \cite{Vek}--\cite{Macia3}) and also in mathematics (see \cite{Quasicrys, DirMath}). One of the main questions in the theory of one-dimensional aperiodic structures is to find the relationship between their atomic topological order and the physical properties \cite{Macia2}, \cite{Alb}. The 1D aperiodic sequences are characterized by the nature of their Fourier spectrum. We define a 1D quasicrystal as a 1D aperiodic sequence with pure point Fourier spectrum. In this article we concentrate on the study of the Silver mean sequence, which is a common 1D quasicrystal. A convenient way to study the electronic spectrum of a 1D quasicrystal is the trace-map technique, introduced by Kohmoto, Kadanoff and Tang \cite{KKT}. Using this method the relations for transfer-matrices and their traces for the Fibonacci sequence were obtained \cite{KKT, Koh}. For generalizations of the trace-map to other one-dimensional aperiodic sequences see \cite{Cheng}--\cite{GumAli2} and references in \cite{Macia2}. 1D quasicrystals can be described by the 1D discrete Schr\"{o}dinger equation with quasiperiodic potential \begin{equation} \label{Sch} \psi_{n+1}+\psi_{n-1}+\epsilon_n\psi_n=E\psi_n, \end{equation} where $\psi_n$ is the wave function on the $n$th site and $\epsilon_n$ takes one of the two values $\epsilon_A$ or $\epsilon_B$. Equation (\ref{Sch}) has been studied in many articles (see, for example, references in \cite{Macia2, Alb}) and it is an adequate model for a qualitative study of the effects of aperiodicity on the electronic structure. One can rewrite the Schr\"{o}dinger equation (\ref{Sch}) in the matrix form \begin{equation} \left( \begin{gathered} \psi _{n + 1} \hfill \\ \psi _n \hfill \\ \end{gathered} \right) = M(n) \left( \begin{gathered} \psi _n \hfill \\ \psi _{n - 1} \hfill \\ \end{gathered} \right), \end{equation} where \begin{equation} M(n) = \left( {\begin{array}{*{20}c} {E - \epsilon_n} & { - 1} \\ 1 & 0 \\ \end{array} } \right) \end{equation} is the so-called transfer-matrix at the $n$th site. The general transfer-matrix $M_n$, connecting the wave functions $(\psi_{n+1}, \psi_{n})$ and $(\psi_1,\psi_0)$, is defined by \begin{equation} \left( \begin{gathered} \psi _{n + 1} \hfill \\ \psi _n \hfill \\ \end{gathered} \right) = M_n \left( \begin{gathered} \psi _1 \hfill \\ \psi _0 \hfill \\ \end{gathered} \right), \end{equation} with \begin{equation} M_n=M(n)\cdot M(n-1)\cdots M(1). \end{equation} Solving the Schr\"{o}dinger equation (\ref{Sch}) is thus equivalent to calculating products of transfer-matrices. The allowed regions of the energy spectrum are determined by the condition \begin{equation} |\Tr{M_n}|\leq 2. \end{equation} From a mathematical point of view this condition means that if $M_n$ is the matrix of an area-preserving linear transformation of a plane into itself, then the mapping is stable if $|\Tr{M_n}|< 2$, and unstable if $|\Tr{M_n}|> 2$ (for more details see \cite{Arn}). \section{Silver mean sequence} To construct the Silver mean sequence we use the two-letter substitution rules: \begin{equation} A \rightarrow B, ~~~~ B \rightarrow BBA. \end{equation} One can furthermore construct the Silver mean sequence in analogy with the Pell numbers \cite{Pell}, via the concatenation rule \begin{equation} P_n=2P_{n-1}+P_{n-2} \end{equation} with $P_1=A$ and $P_2=B$.
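As a simple numerical illustration of the above (a standalone sketch written for this paper; the values of $\epsilon_A$, $\epsilon_B$ and of the probe energies $E$ are arbitrary), one can generate the Silver mean sequence from the substitution rules, multiply the transfer-matrices and test the condition $|\Tr{M_n}|\leq 2$:
\begin{verbatim}
# Sketch: Silver mean chain via the substitution A -> B, B -> BBA,
# and the trace condition |Tr M_n| <= 2. The on-site energies and the
# probe energies E are arbitrary illustrative values.
import numpy as np

def silver_mean(generations):
    s = "A"                                  # P_1
    for _ in range(generations):             # k steps give P_{k+1}
        s = "".join("B" if c == "A" else "BBA" for c in s)
    return s

def total_transfer_matrix(seq, E, eps_A=0.0, eps_B=0.5):
    M = np.eye(2)
    for c in seq:                            # M_n = M(n)...M(1)
        eps = eps_A if c == "A" else eps_B
        M = np.array([[E - eps, -1.0], [1.0, 0.0]]) @ M
    return M

seq = silver_mean(6)                         # P_7, a chain of 99 sites
for E in np.linspace(-2.5, 2.5, 11):
    t = np.trace(total_transfer_matrix(seq, E))
    print(f"E = {E:+.2f}  |Tr M_n| = {abs(t):12.4e}  "
          + ("allowed" if abs(t) <= 2 else "forbidden"))
\end{verbatim}
For an energy inside an allowed region the trace stays bounded, while for a forbidden energy it grows rapidly with the chain length.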
Some authors call this sequence the ``intergrowth sequence'' \cite{Huang1}--\cite{Huang3} or the ``octonacci sequence'' \cite{Mos}--\cite{Grimm}. The corresponding general transfer-matrix $M_n$ for the Silver mean sequence can be written as follows: \begin{equation} \label{matrec} M_{n}=M_{n-1}^2 M_{n-2}. \end{equation} This expression can be proven by induction \cite{Bom}. Using the Cayley-Hamilton theorem for $2\times 2$ matrices $M_n$ with $\det M_n=1$, i.e. \begin{equation} M_n^{-1}+M_n=\Tr{M_n}, \end{equation} one can easily obtain the following expression: \begin{eqnarray} \nonumber M_{n+1} &=& M_{n-1}(\Tr{M_n})^2-\Tr{M_n}\,M_{n-1}M_n^{-1}-M_{n-1}\\ \nonumber &=& M_{n-1}(\Tr{M_n})^2-\Tr{M_n}\,M_{n-1}M_n^{-1}\frac {\Tr{M_{n-1}}} {\Tr{M_{n-1}}}-M_{n-1}\\ \label{0} &=& M_{n-1}(\Tr{M_n})^2-\Tr{M_n}\,\frac{M_{n-1}^2M_n^{-1}+M_n^{-1}}{\Tr{M_{n-1}}}-M_{n-1}. \end{eqnarray} By taking the trace of (\ref{0}) and using the expressions $\Tr{M_{n-2}}=\Tr{M_{n-1}^2M_n^{-1}}$ and $\Tr{M}=\Tr{M^{-1}}$ we find the recurrence relation for the traces of the general transfer-matrices \begin{equation} \label{main} \boxed{\Tr{M_{n+1}}=\Tr{M_{n-1}}(\Tr{M_n})^2-\frac {(\Tr{M_n})^2+\Tr{M_{n-2}}\Tr{M_n}} {\Tr{M_{n-1}}}-\Tr{{M_{n-1}}}.} \end{equation} In contrast to \cite{GumAli, GumAli2} we show that the recurrence relation (\ref{main}) can be expressed in terms of the $\Tr{M_i}$'s only. This recurrence relation is useful for analytical calculations, as well as for numerical computations. From the physical point of view these traces are important because they determine the structure of the energy spectrum of the quasiperiodic sequence. For instance, using (\ref{main}) one can calculate forbidden and allowed regions in the energy spectrum \cite{GumAli}, the Lyapunov exponent \cite{Luck}, etc. For other physical applications see \cite{Macia, Macia2, Alb, Luck} and also \cite{Lu, Roche}. Following \cite{Koh} one can define a three-dimensional vector $r=(x,y,z)$, where $x=\frac{1}{2}\Tr{M_{n+1}}$, $y=\frac{1}{2}\Tr{M_n}$, $z=\frac{1}{2}\Tr{M_{n-1}}$, and then alternatively express (\ref{main}) as \begin{equation} r_{n+1}=f(r_{n}). \end{equation} This nonlinear map has an invariant, i.e.\ a quantity that is the same for every generation $n$: \begin{equation} \label{inv} \boxed{I=-xz+\left(\frac{x+z} {2y}\right)^2+y^2-1.} \end{equation} If we now redefine $z$ as \begin{equation} z\rightarrow 4y^2x-x-4yz, \end{equation} expression (\ref{inv}) becomes \begin{equation} \label{inv2} I=x^2+y^2+4z^2-4xyz-1. \end{equation} This is the same result which was found in \cite{GumAli, GumAli2}. \section{Discussion} In this paper we considered the trace-map associated with the Silver mean sequence. We found the recurrence relation for the trace of the general transfer-matrix of the Silver mean sequence and showed that it can be expressed in terms of the $\Tr{M_i}$'s only. Finally we found an invariant of the trace-map. The recurrence relation and the invariant of the trace-map are closely related to the spectral properties of the sequence in question\footnote{For the physical meaning of the invariant see \cite{KKT}, \cite{Koh} and also the recent preprint \cite{Chak}.}. This invariant plays an important role in understanding quasiperiodic sequences \cite{Iguchi}. \vspace{0.3cm} {\bf Acknowledgments.} I.G. would like to especially thank Edvard Musaev, Boris Kheyfets and David Klein for interesting discussions and for their help in the preparation of the manuscript. The authors are grateful to Gulmammad Mammadov for useful comments.
1,108,101,564,094
arxiv
\section*{Introduction} Let $S=K[x_1,\ldots,x_n]$ be a polynomial ring over a field $K$. Any squarefree monomial ideal $I$ in $S$ can be considered both as the Stanley-Reisner ideal of the simplicial complex $\Delta_I=\{\{x_{i_1},\ldots,x_{i_r}\}:\ x_{i_1}\cdots x_{i_r}\notin I\}$ and as the facet ideal of the simplicial complex $\Delta'=\langle F:\ x^F \textrm{ is a minimal generator of } I\rangle$. Each of these considerations gives a natural one-to-one correspondence between the class of squarefree monomial ideals in $S$ and the class of simplicial complexes on $\{x_1,\ldots,x_n\}$. Thus simplicial complexes play an important role in the study of monomial ideals. In this regard, classifying simplicial complexes with a desired property, or modifying a structure like a graph or a simplicial complex so that it satisfies a special property, has been considered in many research papers; see for example \cite{BaHe,ER,F,FH,FV,HMV,KhMo,Wo}. The notion of expansion of a simplicial complex was defined in \cite{KhMo} as a natural generalization of the concept of expansion in graph theory, and some properties of a simplicial complex and its expansions were related to each other. Our goal in this paper is to investigate more relations between algebraic properties of the Stanley-Reisner ideal of a simplicial complex and those of its expansions, which generalize the results proved in \cite{KhMo}. It turns out that many algebraic and combinatorial properties of a simplicial complex and its expansions are equivalent, and so this construction is a very good tool for making new simplicial complexes with a desired property. The paper is organized as follows. In the first section, we review some preliminaries which are needed in the sequel. In Section 2, we study the Stanley-Reisner ideal of an expanded complex. One of the main results is the following theorem. \begin{Mainthm}(Theorem \ref{CM}) Let $\Del$ be a simplicial complex and $\alpha\in\NN^n$. Then $\Delta$ is Cohen-Macaulay if and only if $\Delta^{\alpha}$ is Cohen-Macaulay. \end{Mainthm} As a corollary, it is shown that the sequentially Cohen-Macaulay property of a simplicial complex and of its expansion are also equivalent. Moreover, using an epimorphism which relates the reduced simplicial homology groups, we show that an expansion of $\Delta$ is Buchsbaum if and only if $\Delta$ has the same property (see Theorem \ref{Buchs}). Theorem \ref{decom} (resp. Corollary \ref{shell}) shows that $\Delta$ is $k$-decomposable (resp. shellable) if and only if an arbitrary expansion of $\Delta$ is $k$-decomposable (resp. shellable). Section 3 is devoted to studying some homological invariants of the Stanley-Reisner ideals of $\Delta$ and $\Del^\alpha$. In fact, we give inequalities which relate the regularity, the projective dimension and the depth of a simplicial complex to those of its expansion. \section{Preliminaries} Throughout this paper, we assume that $\Delta$ is a simplicial complex on the vertex set $X=\{x_1, \dots, x_n\}$, $K$ is a field and $S=K[X]$ is a polynomial ring. The set of facets (maximal faces) of $\Delta$ is denoted by $\mathcal{F}(\Delta)$ and if $\mathcal{F}(\Delta)=\{F_1,\ldots,F_r\}$, we write $\Delta=\langle F_1,\ldots,F_r\rangle$. For a monomial ideal $I$ of $S$, the set of minimal generators of $I$ is denoted by $\mathcal{G}(I)$.
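To illustrate the first correspondence computationally (a small sketch of our own, suitable for tiny examples only), note that the minimal generators in $\mathcal{G}(I_\Delta)$ are exactly the minimal non-faces of $\Delta$:
\begin{verbatim}
# Sketch: minimal generators of the Stanley-Reisner ideal I_Delta,
# i.e. the minimal non-faces of Delta, computed from the facet list.
from itertools import combinations

def stanley_reisner_gens(vertices, facets):
    facets = [frozenset(F) for F in facets]
    is_face = lambda G: any(G <= F for F in facets)
    gens = []
    for r in range(1, len(vertices) + 1):
        for G in map(frozenset, combinations(vertices, r)):
            # G is a minimal non-face iff G is not a face but
            # dropping any single vertex of G yields a face.
            if not is_face(G) and all(is_face(G - {v}) for v in G):
                gens.append(sorted(G))
    return gens

# Delta = <{x1,x2},{x2,x3}>: the only minimal non-face is {x1,x3},
# so I_Delta = (x1*x3).
print(stanley_reisner_gens(["x1", "x2", "x3"],
                           [{"x1", "x2"}, {"x2", "x3"}]))
\end{verbatim}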
For $\alpha=(s_1,\ldots,s_n)\in\NN^n$, we set $X^{\alpha}=\{x_{11},\ldots,x_{1s_1},\ldots,x_{n1},\ldots,x_{ns_n}\}$ and $S^\alpha=K[X^{\alpha}].$ The concept of expansion of a simplicial complex was defined in \cite{KhMo} as follows. \begin{defn} Let $\Del$ be a simplicial complex on $X$, $\alpha=(s_1,\ldots,s_n)\in\NN^n$ and $F=\{x_{i_1},\ldots ,x_{i_r}\}$ be a facet of $\Del$. The \textbf{expansion} of the simplex $\langle F\rangle$ with respect to $\alpha$ is denoted by $\langle F\rangle^\alpha$ and is defined as a simplicial complex on the vertex set $\{x_{i_lt_l}:1\leq l\leq r,\ 1\leq t_l\leq s_{i_l}\}$ with facets $$\{\{x_{i_1j_1},\ldots, x_{i_rj_r}\}: 1\leq j_m\leq s_{i_m}\}.$$ The expansion of $\Del$ with respect to $\alpha$ is defined as $$\Del^\alpha=\bigcup_{F\in\Del}\langle F\rangle^\alpha.$$ A simplicial complex obtained by an expansion is called an expanded complex. \end{defn} \begin{defn} {\rm A simplicial complex $\Delta$ is called \textbf{shellable} if there exists an ordering $F_1<\cdots<F_m$ on the facets of $\Delta$ such that for any $i<j$, there exists a vertex $v\in F_j\setminus F_i$ and $\ell<j$ with $F_j\setminus F_\ell=\{v\}$. We call $F_1,\ldots,F_m$ a \textbf{shelling} for $\Delta$.} \end{defn} For a simplicial complex $\Delta$ and $F\in \Delta$, the link of $F$ in $\Delta$ is defined as $$\lk_{\Delta}(F)=\{G\in \Delta: G\cap F=\emptyset, G\cup F\in \Delta\},$$ and the deletion of $F$ is the simplicial complex $$\Delta \setminus F=\{G\in \Delta: F \nsubseteq G\}.$$ Woodroofe in \cite{Wo} extended the definition of $k$-decomposability to non-pure complexes as follows. Let $\Delta$ be a simplicial complex on the vertex set $X$. Then a face $\sigma$ is called a \textbf{shedding face} if every face $\tau$ containing $\sigma$ satisfies the following exchange property: for every $v \in \sigma$ there is $w\in X \setminus \tau$ such that $(\tau \cup \{w\})\setminus \{v\}$ is a face of $\Delta$. \begin{defn}\cite[Definition 3.5]{Wo} {\rm A simplicial complex $\Delta$ is recursively defined to be \textbf{$k$-decomposable} if either $\Delta$ is a simplex or else it has a shedding face $\sigma$ with $\dim(\sigma)\leq k$ such that both $\Delta \setminus \sigma$ and $\lk_{\Delta}(\sigma)$ are $k$-decomposable. The complexes $\{\}$ and $\{\emptyset\}$ are considered to be $k$-decomposable for all $k \geq -1$.} \end{defn} Note that $0$-decomposable simplicial complexes are precisely the vertex decomposable simplicial complexes. Also the notion of a decomposable monomial ideal was introduced in \cite{RaYa} as follows. For the monomial $u=x_1^{a_1}\cdots x_n^{a_n}$ in $S$, the support of $u$, denoted by $\supp(u)$, is the set $\{x_i:\ a_i\neq 0\}$. For a monomial $M$ in $S$, set $[u,M] = 1$ if for all $x_i\in\supp(u)$, $x_i^{a_i}\nmid M$. Otherwise set $[u,M]\neq 1$. For the monomial $u$ and the monomial ideal $I$, set $$I^u = (M\in \mathcal{G}(I) :\ [u,M]\neq 1)$$ and $$I_u = (M\in \mathcal{G}(I) :\ [u,M]=1).$$ For a monomial ideal $I$ with $\mathcal{G}(I)=\{M_1,\ldots,M_r\}$, the monomial $u=x_1^{a_1} \cdots x_n^{a_n}$ is called a \textbf{shedding monomial} for $I$ if $I_u\neq 0$ and for each $M_i\in \mathcal{G}(I_u)$ and each $x_l\in \supp(u)$ there exists $M_j\in \mathcal{G}(I^u)$ such that $M_j:M_i=x_l$.
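Before stating the next definition, we illustrate the expansion operation computationally (again a sketch of our own, not used in any proof): the facets of $\Delta^{\alpha}$ arise from the facets of $\Delta$ by replacing every vertex $x_i$, in all possible ways, by one of its copies $x_{i1},\ldots,x_{is_i}$.
\begin{verbatim}
# Sketch: facets of the expansion Delta^alpha. Vertices are 1-based
# indices i; alpha[i] = s_i is the number of copies of x_i. Labels
# like "x12" (copy x_{1,2}) are for readability on small examples.
from itertools import product

def expansion_facets(facets, alpha):
    expanded = set()
    for F in facets:
        copies = [[f"x{i}{j}" for j in range(1, alpha[i] + 1)]
                  for i in F]
        expanded.update(frozenset(G) for G in product(*copies))
    return sorted(sorted(G) for G in expanded)

# Delta = <{x1,x2},{x2,x3}> and alpha = (2,1,2):
for G in expansion_facets([(1, 2), (2, 3)], {1: 2, 2: 1, 3: 2}):
    print(G)  # ['x11','x21'], ['x12','x21'], ['x21','x31'], ['x21','x32']
\end{verbatim}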
\begin{defn}\cite[Definition 2.3]{RaYa} {\rm A monomial ideal $I$ with $\mathcal{G}(I)=\{M_1,\ldots,M_r\}$ is called \textbf{$k$-decomposable} if $r=1$ or else it has a shedding monomial $u$ with $|\supp(u)|\leq k+1$ such that the ideals $I_u$ and $I^u$ are $k$-decomposable.} \end{defn} \begin{defn}\label{1.2} {\rm A monomial ideal $I$ in the ring $S$ has \textbf{linear quotients} if there exists an ordering $f_1, \dots, f_m$ on the minimal generators of $I$ such that the colon ideal $(f_1,\ldots,f_{i-1}):(f_i)$ is generated by a subset of $\{x_1,\ldots,x_n\}$ for all $2\leq i\leq m$. We denote this ordering by $f_1<\dots <f_m$ and we call it an order of linear quotients on $\mathcal{G}(I)$. Let $I$ be a monomial ideal which has linear quotients and let $f_1<\dots <f_m$ be an order of linear quotients on the minimal generators of $I$. For any $1\leq i\leq m$, $\set_I(f_i)$ is defined as $$\set_I(f_i)=\{x_k:\ x_k\in (f_1,\ldots, f_{i-1}) : (f_i)\}.$$ } \end{defn} \begin{defn} {\rm A graded $S$-module $M$ is called \textbf{sequentially Cohen--Macaulay} (over a field $K$) if there exists a finite filtration of graded $S$-modules $$0=M_0\subset M_1\subset \cdots \subset M_r=M$$ such that each $M_i/M_{i-1}$ is Cohen--Macaulay and $$\dim(M_1/M_0)<\dim(M_2/M_1)<\cdots<\dim(M_r/M_{r-1}).$$} \end{defn} For a $\mathbb{Z}$-graded $S$-module $M$, the \textbf{Castelnuovo-Mumford regularity} (or briefly regularity) of $M$ is defined as $$\reg(M) = \max\{j-i: \ \beta_{i,j}(M)\neq 0\},$$ and the \textbf{projective dimension} of $M$ is defined as $$\pd(M) = \max\{i:\ \beta_{i,j}(M)\neq 0 \ \text{for some}\ j\},$$ where $\beta_{i,j}(M)$ is the $(i,j)$th graded Betti number of $M$. For a simplicial complex $\Delta$ with the vertex set $X$, the \textbf{Alexander dual simplicial complex} associated to $\Delta$ is defined as $$\Delta^{\vee}:=\{X\setminus F:\ F\notin \Delta\}.$$ For a squarefree monomial ideal $I=( x_{11}\cdots x_{1n_1},\ldots,x_{t1}\cdots x_{tn_t})$, the \textbf{Alexander dual ideal} of $I$, denoted by $I^{\vee}$, is defined as $$I^{\vee}:=(x_{11},\ldots, x_{1n_1})\cap \cdots \cap (x_{t1},\ldots, x_{tn_t}).$$ For a subset $C\subseteq X$, by $x^C$ we mean the monomial $\prod_{x\in C} x$. One can see that $$(I_{\Delta})^{\vee}=(x^{F^c} \ : \ F\in \mathcal{F}(\Delta)), $$ where $I_{\Delta}$ is the Stanley-Reisner ideal associated to $\Delta$ and $F^c=X\setminus F$. Moreover, $(I_{\Delta})^{\vee}=I_{\Delta^{\vee}}$. A simplicial complex $\Delta$ is called Cohen-Macaulay (resp. sequentially Cohen-Macaulay, Buchsbaum and Gorenstein) if its Stanley-Reisner ring $K[\Delta]=S/I_{\Delta}$ is Cohen-Macaulay (resp. sequentially Cohen-Macaulay, Buchsbaum and Gorenstein). For a simplicial complex $\Delta$, the facet ideal of $\Delta$ is defined as $I(\Delta)=(x^F:\ F\in \mathcal{F}(\Delta))$. Also the complement of $\Delta$ is the simplicial complex $\Delta^c=\langle F^c:\ F\in \mathcal{F}(\Delta)\rangle$. In fact $I(\Delta^c)=I_{\Delta^{\vee}}$. \section{Algebraic properties of an expanded complex} In this section, for a simplicial complex $\Delta$, we study the Stanley-Reisner ideal $I_{\Delta^{\alpha}}$ to see how its algebraic properties are related to those of $I_{\Delta}$. \begin{prop}\label{expansion} Let $\Del$ be a simplicial complex on $X$ and let $\alpha\in\NN^n$.
\begin{enumerate}[\upshape (i)] \item $\dim(\Del^\alpha)=\dim(\Del)$ and $\Del$ is pure if and only if $\Del^\alpha$ is pure; \item For $F\in\Del$ and for every facet $G\in\langle F\rangle^\alpha$, we have $$\lk_{\Del^\alpha}(G)=(\lk_\Del (F))^\alpha;$$ \item For all $i\leq\dim(\Del)$, there exists an epimorphism $\theta:\tilde{H}_{i}(\Del^\alpha;K)\rightarrow\tilde{H}_{i}(\Del;K)$ and so $$\frac{\tilde{H}_{i}(\Del^\alpha;K)}{\ker(\theta)}\cong\tilde{H}_{i}(\Del;K).$$ \end{enumerate} \end{prop} \begin{proof} (i) and (ii) are easily verified. (iii) Let the map $\varphi:\Del^\alpha\rightarrow\Del$ be defined by $\varphi(\{x_{i_1j_1},\ldots,x_{i_qj_q}\})=\{x_{i_1},\ldots,x_{i_q}\}$. For each $q$, let $\varphi_\#:\tilde{\C}_q(\Del^\alpha;K)\rightarrow\tilde{\C}_q(\Del;K)$ be the homomorphism defined on the basis elements as follows: $$\varphi_\#([x_{i_0j_0},\ldots,x_{i_qj_q}])=\left[\varphi(\{x_{i_0j_0}\}),\ldots,\varphi(\{x_{i_qj_q}\})\right].$$ It is clear from the definitions of $\tilde{\C}_q(\Del^\alpha;K)$ and $\tilde{\C}_q(\Del;K)$ that $\varphi_\#$ is well-defined. Also, define $\varphi^\alpha_q:\tilde{H}_{q}(\Del^\alpha;K)\rightarrow\tilde{H}_{q}(\Del;K)$ by $$\varphi^{\alpha}_q(z+B_q(\Del^\alpha))=\varphi_\#(z)+B_q(\Del).$$ Consider the diagram \begin{tabbing} \hskip 50mm$\tilde{\C}_{q+1}(\Del^\alpha;K)$ $\stackrel{\partial^{\alpha}_{q+1}}\longrightarrow$ $\tilde{\C}_q(\Del^\alpha;K)$\\ \hskip 50mm\ \ \ \ \ \ \ \ $\downarrow$ \hskip 21mm $\downarrow$\\ \hskip 50mm$\tilde{\C}_{q+1}(\Del;K)$\ \ $\stackrel{\partial_{q+1}}\longrightarrow$\ \ $\tilde{\C}_q(\Del;K)$ \end{tabbing} where $\partial^{\alpha}_{q+1}$ and $\partial_{q+1}$ are the boundary homomorphisms of the chain complexes of $\Delta^{\alpha}$ and $\Delta$, respectively, and the vertical homomorphisms are $\varphi^{\alpha}_{q+1}$ and $\varphi^{\alpha}_q$. One can see that for any basis element $\sigma=[x_{i_0j_0},\ldots,x_{i_{q+1}j_{q+1}}]\in \tilde{\C}_{q+1}(\Del^\alpha;K)$, $\varphi^{\alpha}_q \partial^{\alpha}_{q+1}(\sigma)=\partial_{q+1}\varphi^{\alpha}_{q+1}(\sigma)$. So the diagram commutes and then $$\varphi^{\alpha}_q(B_q(\Del^\alpha))\subseteq B_q(\Del).$$ Therefore $\varphi^{\alpha}_q$ is a well-defined homomorphism. It is easy to see that $\varphi^{\alpha}_q$ is surjective, because for $[x_{i_0},\ldots,x_{i_q}]+B_q(\Del)\in \tilde{H}_q(\Del;K)$ we have $$\varphi^{\alpha}_q([x_{i_01},\ldots,x_{i_q1}]+B_q(\Del^\alpha))=[x_{i_0},\ldots,x_{i_q}]+B_q(\Del).$$ \end{proof} Reisner gave a criterion for the Cohen-Macaulayness of a simplicial complex as follows. \begin{thm}\cite[Theorem 5.3.5]{villarreal}\label{Reis} Let $\Delta$ be a simplicial complex. If $K$ is a field, then the following conditions are equivalent: \begin{enumerate}[\upshape (i)] \item $\Delta$ is Cohen-Macaulay over $K$; \item $\tilde{H}_{i}(\lk_{\Delta}(F);K)=0$ for $F\in \Delta$ and $i<\dim(\lk_{\Delta}(F))$. \end{enumerate} \end{thm} For a simplicial complex $\Delta$ with the vertex set $X$ and $x_i\in X$, an expansion of $\Delta$ obtained by duplicating $x_i$ is the simplicial complex with the vertex set $X'=X\cup\{x'_i\}$, where $x'_i$ is a new vertex, defined as follows: $$\Delta'=\Delta\cup\langle (F\setminus\{x_i\})\cup\{x'_i\}:\ F\in \mathcal{F}(\Delta), x_i\in F\rangle.$$ In fact $\Delta'=\Delta^{(s_1,\ldots,s_n)}$, where $s_j=\left\{ \begin{array}{ll} 1 & \hbox{if}\ j\neq i \\ 2 & \hbox{if}\ j=i.
\end{array} \right.$ \begin{rem}\label{rem1} Let $\mathcal{L}$ be a property such that for any simplicial complex $\Delta$ with the property $\mathcal{L}$, any expansion of $\Delta$ obtained by duplicating a vertex of $\Delta$ has the property $\mathcal{L}$. Then, by induction, for a simplicial complex $\Delta$ with the property $\mathcal{L}$ any expansion of $\Delta$ has the property $\mathcal{L}$, since for any $(s_1,\ldots,s_n)\in \NN^n$ with $s_i>1$, $\Delta^{(s_1,\ldots,s_n)}$ is the expansion of $\Delta^{(s_1,\ldots,s_{i-1},s_i-1,s_{i+1},\ldots,s_n)}$ by duplicating $x_{i1}$. \end{rem} Now, we come to one of the main results of this paper. \begin{thm}\label{CM} Let $\Del$ be a simplicial complex and $\alpha\in\NN^n$. Then $\Delta$ is Cohen-Macaulay if and only if $\Delta^{\alpha}$ is Cohen-Macaulay. \end{thm} \begin{proof} The ``if'' part follows from Proposition \ref{expansion} and Theorem \ref{Reis}. To prove the converse, let $\Delta$ be Cohen-Macaulay. In the light of Remark \ref{rem1}, it is enough to show that any expansion of $\Delta$ obtained by duplicating an arbitrary vertex is Cohen-Macaulay. Let $X$ be the vertex set of $\Delta$, $x_i\in X$ and let $\Delta'$ be an expansion of $\Delta$ by duplicating $x_i$. Then $\Delta'=\Delta \cup \langle (F\setminus\{x_i\})\cup\{x'_i\}:\ F\in \mathcal{F}(\Delta), x_i\in F\rangle$ and \begin{multline} I_{\Delta'^{\vee}}=(x^{(X\cup\{x'_i\})\setminus F}:\ F\in \mathcal{F}(\Delta'))=(x'_ix^{X\setminus F}:\ F\in \mathcal{F}(\Delta))+(x^{(X\cup\{x'_i\})\setminus (F\cup\{x'_i\}\setminus \{x_i\})}: \\ \ F\in \mathcal{F}(\Delta), x_i\in F)= (x'_ix^{X\setminus F}:\ F\in \mathcal{F}(\Delta))+(x_ix^{X\setminus F}:\ F\in \mathcal{F}(\Delta), x_i\in F). \end{multline} Thus $$I_{\Delta'^{\vee}}=x'_iI_{\Delta^{\vee}}+x_iI_{(\lk_{\Delta}(x_i))^{\vee}}.$$ Since $\Delta$ is Cohen-Macaulay, by \cite[Proposition 5.3.8]{villarreal}, $\lk_{\Delta}(x_i)$ is also Cohen-Macaulay. So \cite[Theorem 3]{ER} implies that $I_{\Delta^{\vee}}$ and $I_{(\lk_{\Delta}(x_i))^{\vee}}$ have $(n-d-1)$-linear resolutions, where $d=\dim(\Delta)$. Therefore $x'_iI_{\Delta^{\vee}}$ and $x_iI_{(\lk_{\Delta}(x_i))^{\vee}}$ have $(n-d)$-linear resolutions. Note that $\mathcal{G}(I_{\Delta'^{\vee}})$ is the disjoint union of $\mathcal{G}(x'_iI_{\Delta^{\vee}})$ and $\mathcal{G}(x_iI_{(\lk_{\Delta}(x_i))^{\vee}})$. Now by \cite[Corollary 2.4]{splitting}, $I_{\Delta'^{\vee}}=x'_iI_{\Delta^{\vee}}+x_iI_{(\lk_{\Delta}(x_i))^{\vee}}$ is a Betti splitting and hence by \cite[Corollary 2.2]{splitting}, $$\reg(I_{\Delta'^{\vee}})=\max\{\reg(x'_iI_{\Delta^{\vee}}),\reg(x_iI_{(\lk_{\Delta}(x_i))^{\vee}}),\reg(x'_iI_{\Delta^{\vee}}\cap x_iI_{(\lk_{\Delta}(x_i))^{\vee}})-1\}.$$ As discussed above, $\reg(x'_iI_{\Delta^{\vee}})=\reg(x_iI_{(\lk_{\Delta}(x_i))^{\vee}})=n-d$. Note that $x'_iI_{\Delta^{\vee}}\cap x_iI_{(\lk_{\Delta}(x_i))^{\vee}}=(x'_ix^{X\setminus F}:\ F\in \mathcal{F}(\Delta))\cap(x_ix^{X\setminus F}:\ F\in \mathcal{F}(\Delta), x_i\in F)=x'_ix_iI_{(\lk_{\Delta}(x_i))^{\vee}}$. So $\reg(x'_iI_{\Delta^{\vee}}\cap x_iI_{(\lk_{\Delta}(x_i))^{\vee}})=\reg(I_{(\lk_{\Delta}(x_i))^{\vee}})+2=n-d-1+2=n-d+1$. Thus $\reg(I_{\Delta'^{\vee}})=n-d$. Since $I_{\Delta'^{\vee}}$ is homogeneous of degree $n-d$, this implies that $I_{\Delta'^{\vee}}$ has an $(n-d)$-linear resolution (see for example \cite[Lemma 5.55]{ME}). Thus using \cite[Theorem 3]{ER} again implies that $\Delta'$ is Cohen-Macaulay. \end{proof} The following theorem compares Buchsbaumness in a simplicial complex and its expansion.
\begin{thm}\label{Buchs} Let $\Del$ be a simplicial complex on $X$ and let $\alpha\in\NN^n$. Then $\Del$ is Buchsbaum if and only if $\Del^\alpha$ is. \end{thm} \begin{proof} The ``if'' part follows from Proposition \ref{expansion} and \cite[Theorem 8.1]{St}. Let $\Del$ be Buchsbaum. In the light of Remark \ref{rem1}, it is enough to show that any expansion of $\Delta$ obtained by duplicating an arbitrary vertex is Buchsbaum. Let $\Del'$ be the expansion of $\Del$ obtained by duplicating the vertex $x_i$. By \cite[Theorem 4.5]{BiRo}, a simplicial complex $\Del$ is Buchsbaum if and only if $\Del$ is pure and for any vertex $x\in X$, $\lk_\Del(x)$ is Cohen-Macaulay. We use this fact to prove the assertion. Since $\lk_{\Del'}(x_i)=\lk_{\Del'}(x'_i)=\lk_\Del(x_i)$, it follows from the Buchsbaumness of $\Del$ that $\lk_{\Del'}(x_i)$ and $\lk_{\Del'}(x'_i)$ are Cohen-Macaulay. Now suppose that $x_j\in X'=X\cup \{x'_i\}$ with $x_j\neq x_i$ and $x_j\neq x'_i$. Then $\lk_{\Del'}(x_j)$ is the expansion of $\lk_\Del(x_j)$ obtained by duplicating $x_i$. Since $\lk_\Del(x_j)$ is Cohen-Macaulay, it follows from Theorem \ref{CM} that $\lk_{\Del'}(x_j)$ is Cohen-Macaulay. Therefore the assertion holds. \end{proof} Now, we study the sequentially Cohen-Macaulay property in an expanded complex. For a simplicial complex $\Delta$ and a subcomplex $\Gamma$ of $\Delta$, $\Delta/\Gamma=\{F\in \Delta:\ F\notin \Gamma\}$ is called a \textbf{relative simplicial complex}. \begin{lem}\label{rel} Let $\Del$ be a $(d-1)$-dimensional simplicial complex, $\alpha\in\NN^n$ and let $\Del_i$ be the subcomplex of $\Delta$ generated by the $i$-dimensional facets of $\Del$. Set $$\Omega_i=\Del_i/(\Del_i\cap(\cup_{j>i}\Del_j)).$$ Then for all $0\leq i\leq d-1$, $\Omega^\alpha_i=\Del^\alpha_i/ (\Del^\alpha_i\cap(\cup_{j>i}\Del^\alpha_j)).$ \end{lem} \begin{proof} The assertion follows from the fact that if $\Gamma_1$ and $\Gamma_2$ are two simplicial complexes on $X$ and $\alpha\in\NN^n$, then $$(\Gamma_1\cup\Gamma_2)^\alpha=\Gamma^\alpha_1\cup\Gamma^\alpha_2,\qquad (\Gamma_1\cap\Gamma_2)^\alpha=\Gamma^\alpha_1\cap\Gamma^\alpha_2.$$ \end{proof} \begin{thm}\label{Stanley} (\cite[Proposition 2.10]{St}) Let $\Del$ be a $(d-1)$-dimensional simplicial complex on $X$. Then $\Del$ is sequentially Cohen-Macaulay if and only if the relative simplicial complexes $\Omega_i$ (defined in Lemma \ref{rel}) are Cohen-Macaulay for $0\leq i\leq d-1$. \end{thm} \begin{cor}\label{SCM} Let $\Del$ be a simplicial complex and let $\alpha\in\NN^n$. Then $\Delta$ is sequentially Cohen-Macaulay if and only if $\Del^\alpha$ is sequentially Cohen-Macaulay. \end{cor} \begin{proof} Combining Theorem \ref{Stanley} and Lemma \ref{rel} with Theorem \ref{CM}, we obtain the assertion. \end{proof} For $F\in\Del$, set $\star(F)=\{G\in\Del:F\cup G\in\Del\}$. Let $\Gamma_\Del$ be the induced subcomplex of $\Delta$ on the set $\c(X)$, where $\c(X)=\{x_i\in X:\star(\{x_i\})\neq\Del\}$. A combinatorial description of Gorenstein simplicial complexes was given in \cite{St}: the simplicial complex $\Del$ is Gorenstein over a field $K$ if and only if for all $F\in\Gamma_\Del$, $\tilde{H}_i(\lk_{\Gamma_\Del}(F);K)\cong K$ if $i=\dim(\lk_{\Gamma_\Del}(F))$ and $\tilde{H}_i(\lk_{\Gamma_\Del}(F);K)$ vanishes if $i<\dim(\lk_{\Gamma_\Del}(F))$ (see Theorem 5.1 of \cite{St}). For a face $F\in\Del^\alpha$, we set $\bar{F}=\{x_i: x_{ij}\in F\ \textrm{for some}\ j\}$. \begin{lem}\label{core} Let $\Del$ be a simplicial complex with $\Del=\Gamma_\Del$ and let $\alpha\in\NN^n$.
Then $\Gamma_{\Del^\alpha}=(\Gamma_\Del)^\alpha$. \end{lem} \begin{proof} Let $F$ be a facet of $\Gamma_{\Del^\alpha}$. Then $F\in\Del^\alpha$ and $F\subset\c(X^\alpha)$. It is clear that $\bar{F}\in\Del= \Gamma_\Del$. So $F\in(\Gamma_\Del)^\alpha$. Conversely, suppose that $F\in(\Gamma_\Del)^\alpha$. Then $F\in G^\alpha$ for some $G\in \Gamma_\Del$. Hence $G\in\Del$ and $G\subset\c(X)$. This implies that for every $x_i\in G$, $\star(\{x_i\})\neq\Del$. Thus for every $x_i\in G$ and all $1\leq j\leq k_i$, $\star(\{x_{ij}\})\neq\Del^\alpha$. Therefore $F\subset\c(X^\alpha)$. Finally, it follows from $F\in\Del^\alpha$ that $F\in\Gamma_{\Del^\alpha}$. \end{proof} \begin{thm}\label{Gor} Let $\Del$ be a simplicial complex with $\Del=\Gamma_\Del$ and let $\alpha\in\NN^n$. If $\Del^\alpha$ is Gorenstein then $\Del$ is Gorenstein, too. \end{thm} \begin{proof} The assertion follows from Lemma \ref{core}, Proposition \ref{expansion} and \cite[Theorem 5.1]{St}. \end{proof} \begin{rem} The converse of Theorem \ref{Gor} does not hold in general. To see this fact we first recall a result from \cite{BrHe} which gives necessary and sufficient conditions for a simplicial complex to be Gorenstein. By \cite[Theorem 5.6.2]{BrHe}, for a simplicial complex $\Del$ with $\Del=\Gamma_\Del$, $\Del$ is Gorenstein over a field $K$ if and only if $\Del$ is an Euler complex which is Cohen-Macaulay over $K$. Recall that a $(d-1)$-dimensional simplicial complex $\Del$ is an \textbf{Euler complex} if $\Del$ is pure and $\widetilde{\chi}(\lk_\Del(F))=(-1)^{\dim \lk_\Del(F)}$ for all $F\in\Del$, where $\widetilde{\chi}(\Del)=\sum^{d-1}_{i=0}(-1)^if_i(\Del)-1$. Now, consider the simplicial complex $\Del=\langle x_1x_2,x_1x_3,x_2x_3\rangle$ on $\{x_1,x_2,x_3\}$ and let $\alpha=(2,1,1)$. Then $\Del=\Gamma_\Del$ and $\Del^\alpha=\Gamma_{\Del^\alpha}$. It is easy to check that $\Del$ is Euler. Also, $\Del$ is Cohen-Macaulay and so it is Gorenstein. On the other hand, $\widetilde{\chi}(\lk_{\Del^\alpha}(\{x_2\}))=2\neq 1=(-1)^{\dim \lk_{\Del^\alpha}(\{x_2\})}$. It follows that $\Del^\alpha$ is not Euler and so it is not Gorenstein. \end{rem} The following theorem was proved in \cite{RaYa}. \begin{thm}\cite[Theorem 2.10]{RaYa}\label{kdual} A simplicial complex $\Delta$ is $k$-decomposable if and only if $I_{\Delta^{\vee}}$ is a $k$-decomposable ideal, where $k\leq\dim(\Delta)$. \end{thm} We use the above theorem to prove the following result, which generalizes \cite[Theorem 2.7]{KhMo}. \begin{thm}\label{decom} Let $\Delta$ be a simplicial complex and $\alpha\in\NN^n$. Then $\Delta$ is $k$-decomposable if and only if $\Delta^{\alpha}$ is $k$-decomposable. \end{thm} \begin{proof} ``Only if'' part: Considering Remark \ref{rem1}, it is enough to show that an expansion of $\Delta$ obtained by duplicating one vertex is $k$-decomposable. Let $X$ be the vertex set of $\Delta$, $x_i\in X$ and let $\Delta'$ be an expansion of $\Delta$ by duplicating $x_i$. Then $\Delta'=\Delta \cup \langle (F\setminus\{x_i\})\cup\{x'_i\}:\ F\in \mathcal{F}(\Delta), x_i\in F\rangle$. As was shown in the proof of Theorem \ref{CM}, $$I_{\Delta'^{\vee}}=x'_iI_{\Delta^{\vee}}+x_iI_{(\lk_{\Delta}(x_i))^{\vee}}.$$ It is easy to see that $(I_{\Delta'^{\vee}})_{x'_i}=x_iI_{(\lk_{\Delta}(x_i))^{\vee}}$ and $(I_{\Delta'^{\vee}})^{x'_i}=x'_iI_{\Delta^{\vee}}$. Now, let $\Delta$ be $k$-decomposable. Then by \cite[Proposition 3.7]{Wo}, $\lk_{\Delta}(x_i)$ is $k$-decomposable.
Thus \cite[Theorem 2.10, Lemma 2.6]{RaYa} imply that $x'_iI_{\Delta^{\vee}}$ and $x_iI_{(\lk_{\Delta}(x_i))^{\vee}}$ are $k$-decomposable ideals. Also for any minimal generator $x_ix^{X\setminus F}\in (I_{\Delta'^{\vee}})_{x'_i}$, we have $(x'_ix^{X\setminus F}:x_ix^{X\setminus F})=(x'_i)$. Thus $x'_i$ is a shedding monomial for $I_{\Delta'^{\vee}}$, and clearly $|\supp(x'_i)|\leq k+1$. Thus $I_{\Delta'^{\vee}}$ is a $k$-decomposable ideal. So using \cite[Theorem 2.10]{RaYa} again yields that $\Delta'$ is $k$-decomposable. ``If'' part: Let $\Del'$ be an expansion of $\Del$ obtained by duplicating one vertex $x_i$. We first show that if $\Del'$ is $k$-decomposable then $\Del$ is $k$-decomposable, too. If $\F(\Del)=\{F\}$ then the expansion of $\Del$ obtained by duplicating one vertex $x_i$ is $\Del'=\langle x_i,x'_i\rangle\ast\langle F\backslash x_i\rangle$ and so it is $k$-decomposable, by Proposition 3.8 of \cite{Wo}. Hence suppose that $\Del$ has more than one facet. Let $\sigma$ be a shedding face of $\Del'$ and suppose that $\lk_{\Del'}\sigma$ and $\Del'\backslash\sigma$ are $k$-decomposable. We have two cases: Case 1. Let $x'_i\in \sigma$. Then $\Del=\Del'\backslash\sigma$ and so $\Del$ is $k$-decomposable. Similarly, if $x_i\in\sigma$ then $\Del'\backslash\sigma$ and $\Del$ are isomorphic and we are done. Case 2. Let $x_i\not\in\sigma$ and $x'_i\not\in\sigma$. Then $\lk_{\Del'}\sigma$ and $\Del'\backslash\sigma$ are, respectively, the expansions of $\lk_{\Del}\sigma$ and $\Del\backslash\sigma$ obtained by duplicating $x_i$. Therefore it follows by induction that $\lk_{\Del}\sigma$ and $\Del\backslash\sigma$ are $k$-decomposable. Also, it is trivial that $\sigma$ is a shedding face of $\Del$. Now suppose that $\alpha=(k_1,\ldots,k_n)\in\NN^n$ and $\Del^\alpha$ is $k$-decomposable. Let $k_i>1$ and set $\beta=(k_1,\ldots,k_{i-1},k_i-1,k_{i+1},\ldots,k_n)$. Then $\Del^\alpha$ is the expansion of $\Del^\beta$ and, by the above assertion, if $\Del^\alpha$ is $k$-decomposable then $\Del^\beta$ is $k$-decomposable, too. In particular, it follows by induction that $\Del$ is $k$-decomposable, as desired. \end{proof} The following theorem, which was proved in \cite{Wo}, relates the shellability of a simplicial complex to its $d$-decomposability. \begin{thm}\cite[Theorem 3.6]{Wo}\label{decomshell} A $d$-dimensional simplicial complex $\Delta$ is shellable if and only if it is $d$-decomposable. \end{thm} Using Theorems \ref{decom} and \ref{decomshell}, we get the following corollaries. \begin{cor}\label{shell} Let $\Del$ be a simplicial complex and $\alpha\in\NN^n$. Then $\Del$ is shellable if and only if $\Del^\alpha$ is shellable. \end{cor} \begin{cor} Let $\Del$ be a simplicial complex and $\alpha\in\NN^n$. Then $I_\Del$ is clean if and only if $I_{\Del^\alpha}$ is clean. \end{cor} \section{Homological invariants of the Stanley-Reisner ideal of an expanded complex} In this section, we compare the regularity, the projective dimension and the depth of the Stanley-Reisner rings of a simplicial complex and its expansions. The following theorem gives an upper bound for the regularity of the Stanley-Reisner ideal of an expanded complex in terms of $\reg(I_{\Delta})$. \begin{thm}\label{regs} Let $\Delta$ be a simplicial complex and $\alpha=(s_1,\ldots,s_n)\in\NN^n$. Then $\reg(I_{\Delta^{\alpha}})\leq \reg(I_{\Delta})+r$, where $r=|\{i:s_i>1\}|$. \end{thm} \begin{proof} It is enough to show that for any $1\leq i\leq n$, \begin{equation}\label{rs} \reg(I_{\Delta^{(1,\ldots,1,s_i,1,\ldots,1)}})\leq\reg(I_{\Delta})+1.
\end{equation} Then from the equality $\Delta^{(s_1,\ldots,s_n)}=(\Delta^{(s_1,\ldots,s_{n-1},1)})^{(1,\ldots,1,s_n)}$, we have $$\reg(I_{\Delta^{(s_1,\ldots,s_n)}})\leq\reg(I_{\Delta^{(s_1,\ldots,s_{n-1},1)}})+1,$$ and one can get the result by induction on $n$. To prove (\ref{rs}), we proceed by induction on $s_i$. First we show that the inequality $\reg(I_{\Delta'})\leq\reg(I_{\Delta})+1$ holds for any expansion of $\Delta$ obtained by duplicating a vertex. Let $X$ be the vertex set of $\Delta$, $x_i\in X$ and let $\Delta'$ be an expansion of $\Delta$ by duplicating $x_i$ on the vertex set $X'=X\cup\{x'_i\}$. Then $\Delta'=\Delta \cup \langle (F\setminus\{x_i\})\cup\{x'_i\}:\ F\in \mathcal{F}(\Delta), x_i\in F\rangle$. For a subset $Y\subseteq X$, let $P_Y=(x_i:\ x_i\in Y)$. By \cite[Proposition 5.3.10]{villarreal}, $$I_{\Delta'}=\bigcap_{F\in \mathcal{F}(\Delta')}P_{X'\setminus F}=\bigcap_{F\in \mathcal{F}(\Delta)}(P_{X\setminus F}+(x'_i))\bigcap (\bigcap_{F\in \mathcal{F}(\Delta),x_i\in F}P_{X'\setminus ((F\setminus\{x_i\})\cup\{x'_i\})})=$$ $$((x'_i)+\bigcap_{F\in \mathcal{F}(\Delta)}P_{X\setminus F})\bigcap ((x_i)+\bigcap_{F\in \mathcal{F}(\Delta),x_i\in F}P_{X\setminus F}).$$ So \begin{equation}\label{stanexpan} I_{\Delta'}=(x_ix'_i)+I_{\Delta}+x'_iI_{\lk_{\Delta}(x_i)}. \end{equation} Let $S'=S[x'_i]$ and consider the following short exact sequence of graded modules $$0\longrightarrow S'/(I_{\Delta'}:x'_i)(-1) \longrightarrow S'/I_{\Delta'}\longrightarrow S'/(I_{\Delta'},x'_i) \longrightarrow 0.$$ By equation (\ref{stanexpan}), one can see that $(I_{\Delta'},x'_i)=(I_{\Delta},x'_i)$ and $(I_{\Delta'}:x'_i)=(I_{\lk_{\Delta}(x_i)},x_i)$. So we have the exact sequence $$0\longrightarrow (S'/(x_i,I_{\lk_{\Delta}(x_i)}))(-1) \longrightarrow S'/I_{\Delta'} \longrightarrow S'/(x'_i,I_{\Delta}) \longrightarrow 0.$$ Thus by \cite[Lemma 3.12, Lemma 3.5]{MoVi}, \begin{equation}\label{r0} \reg(S'/I_{\Delta'})\leq\max\{\reg(S/I_{\lk_{\Delta}(x_i)})+1,\reg (S/I_{\Delta})\}. \end{equation} By \cite[Lemma 2.5]{HW}, $\reg(S/I_{\lk_{\Delta}(x_i)})\leq\reg (S/I_{\Delta})$. So $\reg(S'/I_{\Delta'})\leq\reg (S/I_{\Delta})+1$ or, equivalently, $\reg(I_{\Delta'})\leq\reg (I_{\Delta})+1$. Note that $\Delta^{(1,\ldots,1,s_i,1,\ldots,1)}$ is obtained from $\Delta^{(1,\ldots,1,s_i-1,1,\ldots,1)}$ by duplicating $x_i$. Thus by (\ref{r0}), $$\reg(I_{\Delta^{(1,\ldots,1,s_i,1,\ldots,1)}})\leq\max\{\reg(I_{\lk_{\Delta^{(1,\ldots,1,s_i-1,1,\ldots,1)}}(x_i)})+1,\reg (I_{\Delta^{(1,\ldots,1,s_i-1,1,\ldots,1)}})\}.$$ But $\lk_{\Delta^{(1,\ldots,1,s_i-1,1,\ldots,1)}}(x_i)=\lk_{\Delta}(x_i)$. Also, by the induction hypothesis, $\reg(I_{\Delta^{(1,\ldots,1,s_i-1,1,\ldots,1)}})\leq\reg (I_{\Delta})+1$. Again using \cite[Lemma 2.5]{HW}, we get the result. \end{proof} The next result, which generalizes \cite[Theorem 3.1]{KhMo}, explains $\pd(S^{\alpha}/I_{\Del^{\alpha}})$ in terms of $\pd(S/I_{\Del})$ for a sequentially Cohen-Macaulay simplicial complex. \begin{thm}\label{pd} Let $\Delta$ be a sequentially Cohen-Macaulay simplicial complex on $X$ and $\alpha=(s_1,\ldots,s_n)$. Then $$\pd(S^{\alpha}/I_{\Del^{\alpha}})=\pd(S/I_{\Del})+s_1+\cdots+s_n-n$$ and $$\depth(S^{\alpha}/I_{\Del^{\alpha}})=\depth(S/I_{\Del}).$$ \end{thm} \begin{proof} By Corollary \ref{SCM}, $\Delta^{\alpha}$ is sequentially Cohen-Macaulay too. Thus by \cite[Corollary 3.33]{MoVi}, $$\pd(S^{\alpha}/I_{\Del^\alpha})=\bight(I_{\Del^\alpha})$$ and $$\pd(S/I_\Del)=\bight(I_\Del).$$ Let $t=\min\{|F|:\ F\in \F(\Del)\}$.
Then $\min\{|F|:\ F\in \F(\Del^{\alpha})\}=t$, $\bight(I_{\Del})=n-t$ and $$\bight(I_{\Del^{\alpha}})=|X^{\alpha}|-t=s_1+\cdots+s_n-t=s_1+\cdots+s_n+\pd(S/I_\Del)-n.$$ The second equality holds by the Auslander-Buchsbaum formula. Note that $\depth(S^\alpha)=s_1+\cdots+s_n$. \end{proof} Let $\fm$ and $\fn$ be, respectively, the maximal ideals of $S$ and $S^\alpha$. \begin{thm}\label{localco} (Hochster \cite{HeHi}) Let $\ZZ^n_{-}=\{\a\in\ZZ^n:a_i\leq 0\ \mbox{for}\ i=1,\ldots,n\}$. Then $$\dim H^i_\fm(K[\Del])_\a=\left\{ \begin{array}{ll} \dim\tilde{H}_{i-|F|-1}(\lk_\Del (F);K) , & \hbox{if}\ \a\in\ZZ^n_{-},\ \hbox{where}\ F=\supp(\a) \\ 0, & \hbox{if}\ \a\not\in\ZZ^n_{-}. \end{array} \right. $$ \end{thm} \begin{thm} Let $\Del$ be a simplicial complex on $X$ and $\alpha=(s_1,\ldots,s_n)$. Then $\depth(K[\Del^\alpha])\leq\depth(K[\Del])$. It follows that $\pd(S^{\alpha}/I_{\Del^{\alpha}})\geq\pd(S/I_{\Del})+s_1+\cdots+s_n-n$. \end{thm} \begin{proof} Set $s=\sum_is_i$. For $\a\in\ZZ^s$, set $s_0=0$ and for $1\leq i\leq n$ set $\bar{\a}(i)=\sum^{s_i}_{j=s_{i-1}+1}\a(j)$. Let $\a\in\ZZ^s_-$ and $F=\supp(\a)$. It follows from Theorem \ref{localco} that $\dim H^i_\fn(K[\Del^\alpha])_\a=\dim\tilde{H}_{i-|F|-1}(\lk_{\Del^\alpha}(F);K)$. On the other hand, by \cite[Theorem 6.2.7]{BrSh}, $\depth(K[\Del])$ is the least integer $i$ such that $H^i_\fm(K[\Del])\neq 0$. Now, if $\depth(K[\Del^\alpha])=d$, then $\tilde{H}_{d-|F|-1}(\lk_{\Del^\alpha}(F);K)\neq 0$ for some $F$, and $\tilde{H}_{i-|F|-1}(\lk_{\Del^\alpha}(F);K)= 0$ for any $i<d$. By Proposition \ref{expansion}, one can see that $\dim\tilde{H}_{i-|\bar{F}|-1}(\lk_\Del(\bar{F});K)=\dim H^i_\fm(K[\Del])_{\bar{\a}}=0$ for any $i<d$. This proves the assertion. The second inequality holds by the Auslander-Buchsbaum formula. \end{proof} \textbf{Acknowledgments:} The authors would like to thank the referee for the careful reading of the paper. The research of Rahim Rahmati-Asghar and Somayeh Moradi was in part supported by a grant from IPM (No. 94130029) and (No. 94130021).
1,108,101,564,095
arxiv
\section{Introduction} The intracluster medium (ICM) is a mixture of thermal and non-thermal components, and a precise physical description of the ICM also requires adequate knowledge of the role of the non-thermal components. The most detailed evidence for non-thermal phenomena comes from radio observations. A number of clusters of galaxies are known to contain wide diffuse synchrotron sources (radio halos and relics) which have no obvious connection with the individual cluster galaxies, but are rather associated to the ICM (e.g., Giovannini \& Feretti 2000; Kempner \& Sarazin 2001; see Giovannini \& Feretti 2002 for a review). The synchrotron emission of such sources requires a population of GeV relativistic electrons (and/or positrons) and cluster magnetic fields on $\mu$G levels. Evidence for relativistic electrons (and positrons) in the ICM may also come from the detection of hard X-ray (HXR) excess emission in the case of a few galaxy clusters (e.g., Rephaeli \& Gruber 2003, Fusco--Femiano et al. 2004), and possibly from extreme ultra-violet (EUV) excess emission (e.g., Kaastra et al. 2003; Bowyer et al. 2004). It is also believed that the contribution of high energy protons to the energy budget of the ICM might be significant, due to the confinement of cosmic rays over cosmological time scales (V\"{o}lk et al. 1996; Berezinsky, Blasi \& Ptuskin 1997; En\ss lin et al. 1997). Nevertheless, the gamma radiation that would allow one to infer the fraction of relativistic hadrons in clusters has not been detected as yet (Reimer et al.~2003; see Pfrommer \& En\ss lin 2004 for an upper limit on this fraction). Shock waves are unavoidably formed during merger events; they may efficiently accelerate relativistic particles, contributing to the injection of relativistic hadrons and of relativistic emitting electrons in the ICM (e.g., Ryu et al.~2003, Gabici \& Blasi 2003). However, the accelerated electrons have a short pathlength due to inverse Compton (IC) losses and thus they can travel only a short distance away from the shock front, emitting synchrotron radiation concentrated around the shock rim (e.g., Miniati et al. 2001). Radio relics, which are polarized and elongated radio sources located in the cluster peripheral regions, may indeed be associated to these shock waves, as a result of Fermi-I diffusive shock acceleration of ICM electrons (En\ss lin et al. 1998; Roettiger et al.~1999), or of adiabatic energization of relativistic electrons confined in fossil radio plasma, released in the past by active radio galaxies (En\ss lin \& Gopal-Krishna 2001; Hoeft et al. 2004). The most spectacular evidence of diffuse synchrotron emission in galaxy clusters is that associated to giant radio halos, Mpc-size radio sources which permeate the cluster volume similarly to the X--ray emitting gas.
In this respect, two main possibilities have been investigated in some detail to explain how GeV electrons (and/or positrons) can be present and able to radiate on distance scales larger than their typical loss lengths: {\it i)} the so-called {\it reacceleration} models, whereby relativistic electrons (and positrons) injected in the ICM by a variety of processes active during the life of galaxy clusters are continuously re-energized {\it in situ} during the lifetime of the observed radio halos (which is estimated to be $\sim 1$ Gyr, Kuo et al.~2004), and {\it ii)} the {\it secondary electron} models, whereby electrons are secondary products of the hadronic interactions of cosmic rays with the intracluster medium, as first proposed by Dennison (1980). Although the origin of the emitting particles in radio halos is still a matter of debate (e.g., En\ss lin 2004), the above two models for the production of the radiating electrons (and positrons) have substantial predictive power, which can be used to discriminate among such models by comparing their predictions with observations. Although future observations remain crucial to achieve a firm conclusion, at least as far as the few well studied clusters and the analysis of statistical samples are concerned, present data seem to suggest the presence of {\it in situ} particle--reacceleration mechanisms active in the ICM (e.g., Brunetti 2003,04; Blasi 2004; Feretti et al. 2004; Hwang 2004; Reimer et al. 2004). Radio observations of galaxy clusters indicate that the detection rate of radio halos shows an abrupt increase with increasing X-ray luminosity of the host clusters. In particular, about 30-35\% of the galaxy clusters with X-ray luminosity larger than $10^{45}$ erg/s show diffuse non-thermal radio emission (Giovannini \& Feretti 2002); these clusters also have high temperatures (kT $>$ 7 keV) and large masses ($\raise 2pt \hbox {$>$} \kern-0.8em \lower 4pt \hbox {$\sim$}$ 2$\times$ $10^{15} M_{\odot}$). Furthermore, giant radio halos are always found in merging clusters (e.g., Buote 2001; Schuecker et al 2001). Although the physics of particle acceleration due to turbulence generated in merging clusters has been investigated in some detail (e.g., Schlickeiser et al.~1987; Petrosian 2001; Fujita et al 2003; Brunetti et al.~2004; Brunetti \& Blasi 2005) and the model expectations seem to reproduce the observed radio features and possibly also the hard X--rays (e.g., Brunetti et al.~2001; Kuo et al.~2003; Brunetti 2004; Hwang 2004), a theoretical investigation of the statistical properties of the Mpc diffuse emission in galaxy clusters in the framework of these models has not been carried out extensively as yet. In particular, the fact that giant radio halos are always associated to massive galaxy clusters and the presence of a trend between their radio power and the mass (temperature, X-ray luminosity) of the parent clusters may be powerful tools to test and constrain present models. In a recent paper Cassano \& Brunetti (2005; hereafter CB05) modelled the statistical properties of giant radio halos in the framework of the merger--induced {\it in situ} particle acceleration scenario.
By adopting the semi--analytic Press \& Schechter (1974; PS74) theory to follow the cosmic evolution and formation of a large synthetic population of galaxy clusters, it was assumed that the energy injected in the form of magnetosonic waves during merging events in clusters is a fraction, $\eta_t$, of the $PdV$ work done by the infalling subclusters in passing through the most massive one. Then the processes of stochastic acceleration of the relativistic electrons by these waves, and the ensuing synchrotron emission properties, were worked out under the assumption that the magnetic field intensities, ICM temperatures and particle densities (both thermal and non-thermal) have constant volume averaged values (within 1 Mpc$^3$). The main finding of these calculations is that giant radio halos are {\it naturally} expected only in the more massive clusters, and the expected fraction of clusters with radio halos (at redshifts $z\raise 2pt \hbox {$<$} \kern-0.8em \lower 4pt \hbox {$\sim$}\,0.2$) can be reconciled with the observed one under viable assumptions ($\eta_t\,\simeq\,0.24-0.34\,$). Specifically, the probability to form giant radio halos in the synthetic cluster population was found to be of order 20-30 \% in the more massive galaxy clusters ($M > 2\times10^{15}\,M_{\odot}$), 2-5 \% in $M \sim 10^{15}\,M_{\odot}$ clusters, and negligible in less massive systems. Such an increase of the probability with the cluster mass is essentially due to the increase of both the energy density of turbulence and of the turbulence injection volume with cluster mass (see CB05). The present paper is a natural extension of the CB05 work, the most important difference being that here we adopt a scaling law between the rms magnetic field strength (averaged in the synchrotron emitting volume) and the virial mass of the parent clusters, $B \propto M^b$. We carry out a detailed comparison between the statistical data of giant radio halos currently available and the model expectations as derived by adopting the CB05 procedures. In Sec.~2 we collect radio and X-ray data for well known giant radio halos from the literature and derive radio--X-ray correlations. In Sec.~3 we investigate the possibility to match the observed radio--X-ray correlations for giant radio halos with electron acceleration models. This comparison provides stringent constraints on the physical parameters in the ICM, in particular on the magnetic field in galaxy clusters. In Sec.~4 we derive the expected probability to form giant radio halos as a function of $M_v$ and $z$. This is done by adopting the same values of the physical parameters which allow us to account for the observed radio--X-ray correlations. In Secs.~5--6 we finally calculate the expected luminosity functions and number counts of giant radio halos. As in CB05, we focus our attention on giant radio halos only (linear size $\sim$1 $h_{50}^{-1}$ Mpc; hereafter GRHs). The adopted cosmology is a $\Lambda$CDM model ($H_{o}=70$ Km $s^{-1}$ $Mpc^{-1}$, $\Omega_{o,m}=0.3$, $\Omega_{\Lambda}=0.7$, $\sigma_8=0.9$). \begin{figure} \resizebox{\hsize}{!}{\includegraphics{fit_LxLr_LCDM_500_n1_paper.ps}} \caption[]{Correlation between the radio power at 1.4 GHz and the X-ray luminosity in the [0.1-2.4] keV band for the GRHs.} \label{LxLrfig} \end{figure} \section{Observed Correlations} In this section we discuss the observed correlations between the X-ray and the radio properties of clusters hosting GRHs.
\noindent We collect galaxy clusters with known GRHs from the literature, obtaining a total sample of 17 clusters. In Tab.~\ref{RH} we report the radio and X-ray properties of this sample in a $\Lambda$CDM cosmology. In order to have the best estimate of the X-ray temperatures we select results from XMM-Newton observations when available; otherwise we use ASCA results or combine ASCA and Chandra information. We investigate the correlations between the X-ray and the radio properties of the selected clusters by making use of a linear regression fit in log-log space, following the procedures given in Akritas \& Bershady (1996). This method allows for intrinsic scatter and errors in both variables. \subsection{Radio Power--X-ray luminosity correlation} The presence of a correlation between the radio powers and the X-ray luminosities is well known (Liang et al. 2000; Feretti 2000, 2003; En\ss lin and R\"ottgering 2002). In Fig.\ref{LxLrfig} we report the correlation between the X-ray luminosity (in the 0.1-2.4 keV energy band) and the radio power at 1.4 GHz ($P_{1.4}$) for our sample of GRHs. The fit has been performed by using the form: \begin{equation} \log\Big(\frac{P_{1.4\,GHz}}{3.16\cdot10^{24}\,h_{70}^{-1}\,\frac{Watt}{Hz}}\Big)=A_f+b_f\, \log\bigg[\frac{L_X}{10^{45}\,h_{70}^{-1}\,\frac{ergs}{s}}\bigg] \label{LxLreq} \end{equation} \noindent where the best fit parameters are: $A_f=0.159\pm 0.060$ and $b_f=1.97\pm 0.25$. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{brillanza_BA3562_2.ps}} \caption[]{Halo radio brightness at 1.4 GHz, normalized to the radio brightness of A3562 (which is just visible in the NVSS), versus X-ray luminosity between [0.1-2.4] keV. Different symbols indicate GRHs (filled circles) and smaller radio halos (open circles) visible in the NVSS. Asterisks mark A2256, which falls below the NVSS surface brightness limit, and 1E50657-558, which falls outside the declination range of the NVSS. Large circles mark objects visible in the NVSS and in the redshift range $z\sim 0.15-0.3$.} \label{Br} \end{figure} \begin{table*} \caption{Radio and X-ray properties of clusters with GRHs (linear size $\sim\, 1\, h_{50}^{-1}$ Mpc) in a $\Lambda$CDM cosmology. Col.(1): Cluster name. Col.(2): Cluster redshift. Col.(3): Cluster temperature given in keV. Col.(4): X-ray luminosity in the energy range $[0.1-2.4]$ keV in units of $h_{70}^{-2}\,10^{44}$ erg/s. Col.(5): Bolometric X-ray luminosity in the energy range $[0.01-40]$ keV in units of $h_{70}^{-2}\,10^{44}$ erg/s. Col.(6): Radio power at $1.4$ GHz in units of $h_{70}^{-2}\, 10^{24}$ Watt/Hz. Col.(7): Large Linear Size (LLS) of the radio halo in $h_{70}^{-1}$ Mpc. Ref. for the temperature data in brackets: (Z04) Zhang et al. 2004 (XMM); (W00) White et al. 2000 (ASCA); (M96) Markevitch 1996 (ASCA); (m) mean value between Mushotzky \& Scharf 1997 (ASCA) and Govoni et al. 2004 (Chandra); (e) Ebeling et al. 1996 (from $L_x$--T relation); (D93) David et al. 1993 (Einstein MPC + Exosat + Ginga); (M98) Markevitch et al. 1998 (ASCA); (m1) mean value between Z04 and Pierre et al. 1999 (ASCA data); (H93) Hughes et al. 1993 (GINGA). Ref. for the X-ray luminosities in brackets: (B04) B\"ohringer et al. 2004, (E98) Ebeling et al. 1998, (E96) Ebeling et al. 1996, (T96) Tsuru et al. 1996. Ref. for the radio data in brackets: (L00) Liang et al. 2000 (ATCA), (F00) Feretti 2000, (B03) Bacchi et al. 2003, (GF00) Giovannini \& Feretti 2000, (V03) Venturi et al. 2003, (GFG01) Govoni et al. 2001, (G05) Govoni et al. 2005, (FF01) Feretti et al.
2001, (m2) mean value between Kim et al. 1990 and Deiss et al. 1997.}
\begin{tabular}{lllllll} \hline \hline
cluster & z & T & $L_X$ &$L_{bol}$ & $P_{1.4}$ & LLS\\
name & & [keV]& [$10^{44}$ erg/s ]&[$10^{44}$ erg/s ]& [$10^{24}$ Watt/Hz] & [Mpc $h_{70}^{-1}$]\\ \hline \hline
1E50657-558 & 0.2994 & $13.59^{+0.71}_{-0.58}$(Z04) & $23.322\pm 1.84$(B04)& $88.619\pm 7.00$ & $28.21\pm 1.97$(L00) & 1.43\\
A2163 & 0.2030 & $13.29^{+0.64}_{-0.64}$(W00) & $23.435\pm 1.50$(B04)& $82.021\pm 5.24$ & $18.44\pm0.24$(FF01) & 1.86\\
A2744 & 0.3080 & ~$\:8.65^{+0.43}_{-0.29}$(Z04) & $13.061\pm 2.44$(B04)& $37.315\pm 6.97$ & $17.16\pm 1.71$(GFG01)& 1.64\\
A2219 & 0.2280 & ~$\:9.52^{+0.55}_{-0.40}$(W00) & $12.732\pm 0.98$(E98)& $40.293\pm 4.34$ & $12.23\pm 0.59$(B03) & 1.56\\
CL0016+16 & 0.5545 & ~$\:9.13^{+0.24}_{-0.22}$(W00) & $18.829\pm 1.88$(T96)& $51.626\pm 5.16$ & ~$\:6.74\pm 0.67$(GF00) & 0.79\\
A1914 & 0.1712 & $10.53^{+0.51}_{-0.50}$(W00) & $10.710\pm 1.02$(E96)& $33.738\pm 3.21$ & ~$\:5.21\pm 0.24$(B03) & 1.18\\
A665 & 0.1816 & ~$\:8.40^{+1.0}_{-1.0}$(M96) & ~$\:9.836\pm 0.98$(E98) & $25.130\pm 3.92$ & ~$\:3.98\pm 0.39$(GF00) & 1.69\\
A520 & 0.2010 & ~$\:7.84^{+0.52}_{-0.52}$(m) & ~$\:8.830\pm 0.79$(E98) & $22.841\pm 5.14$ & ~$\:3.91\pm 0.39$(GFG01)& 1.00\\
A2254 & 0.1780 & ~$\:7.50^{+0.0}_{-0.0}$(e) & ~$\:4.319\pm 0.26$(E96) & $11.076\pm 0.66$ & ~$\:2.94\pm 0.29$(GFG01)& 0.86\\
A2256 & 0.0581 & ~$\:6.90^{+0.11}_{-0.11}$(W00) & ~$\:3.814\pm 0.16$(E96) & ~$\:9.535\pm 0.42$ & ~$\:0.24\pm 0.02$(F00) & 0.85\\
A773 & 0.2170 & ~$\:8.39^{+0.42}_{-0.42}$(m) & ~$\:8.097\pm 0.65$(E98) & $21.728\pm 3.62$ & ~$\:1.73\pm 0.17$(GFG01)& 1.14\\
A545 & 0.1530 & ~$\:5.50^{+6.20}_{-1.10}$(D93) & ~$\:5.732\pm 0.50$(B04) & $12.608\pm 1.10$ & ~$\:1.48\pm 0.06$(B03) & 0.82 \\
A2319 & 0.0559 & ~$\:8.84^{+0.29}_{-0.24}$(M98) & ~$\:7.403\pm 0.41$(E96) & $20.730\pm 1.14$ & ~$\:1.12\pm 0.11$(F00) & 1.01\\
A1300 & 0.3071 & ~$\:9.42^{+0.26}_{-0.25}$(m1) & $14.114\pm 2.08$(B04)& $33.870\pm 4.98$ & ~$\:6.09\pm 0.61$(F00) & 0.86\\
A1656 & 0.0231 & ~$\:8.21^{+0.16}_{-0.16}$(H93) & ~$\:3.772\pm 0.10$(E96) & $10.182\pm 0.26$ & ~$\:0.72^{+0.07}_{-0.04}$ (m2) & 0.78\\
A2255 & 0.0808 & ~$\:6.87^{+0.20}_{-0.20}$(W00) & ~$\:2.646\pm 0.12$(E96) & ~$\:6.611\pm 0.30$ & ~$\:0.89\pm 0.05$(G05) & 0.88\\
A754 & 0.0535 & ~$\:9.38^{+0.27}_{-0.27}$(W00) & ~$\:4.314\pm 0.33$(E96) & $12.946\pm 0.98$ & ~$\:1.08\pm 0.06$(B03) & 0.96\\
\hline \hline \label{RH} \end{tabular} \end{table*}
Our findings are consistent with those of En\ss lin and R\"ottgering (2002), who used 14 clusters with radio halos and found a correlation of the form $P_{1.4\,GHz}\propto L_{X}^{1.94}$. Using 16 clusters with GRHs, Feretti (2003) found a correlation between the X-ray bolometric luminosity and the radio power at 1.4 GHz of the form $P_{1.4\,GHz}\propto (L_{X}^{bol})^{1.8\pm0.2}$. A consistent result is obtained with the data in Tab.~1 ($P_{1.4\,GHz} \propto (L_{X}^{bol})^{1.74\pm 0.21}$). \begin{table*} \caption{Parameters of the $\beta$-fit and cluster masses estimated for the 16 galaxy clusters with GRHs for which $\beta$-fits are available. Col.(1): Cluster name. Col.(2): $\beta$-parameter value with 1$\sigma$ error. Col.(3): Core radius in units of $h_{70}^{-1}$ kpc and corresponding uncertainty. Col.(4): Virial mass and its uncertainty in units of $h_{70}^{-1}\,10^{15}\,M_{\odot}$. Col.(5): Virial radius in units of $h_{70}^{-1}$ kpc. Col.(6): Mass estimated inside the core radius in units of $h_{70}^{-1}\,10^{13}\,M_{\odot}$. Ref.
for the data sources in brackets: (a) Markevitch et al. 2002 (Chandra); (b) RB02 (ROSAT for $\beta$-fit and T as in Table 1); (c) Govoni et al. 2001 (ROSAT); (d) Ettori \& Fabian 1999 (ROSAT); (e) Ettori et al. 2004 (Chandra); (f) Feretti 2004 (Einstein); (g) Lemonon et al. 1997 (ROSAT).}
\begin{tabular}{llllll} \hline \hline
cluster & $\beta$& $r_c$ & $M_v$ & $R_v$ & $M_c$ \\
name & & [kpc $h_{70}^{-1}$] & [$10^{15}$ $M_{\odot} $] & [kpc $h_{70}^{-1}$] & [$10^{13}$ $M_{\odot} $] \\ \hline \hline
1E50657-558(a)& $0.70\pm 0.07$ & $179\pm 18$ & $3.43\pm 0.38$ & 3301 & ~$\:9.50\pm 1.40 $ \\
A2163 (b)& $0.80\pm 0.03$ & $371\pm 21$ & $4.32\pm 0.26$ & 3766 & $ 22.00\pm 1.84$ \\
A2744 (c)& $1.00\pm 0.08$ & $458\pm 46$ & $2.87\pm 0.26$ & 3096 & $ 22.10\pm 2.96$ \\
A2219 (d)& $0.79\pm 0.08$ & $343\pm 34$ & $2.52\pm 0.28$ & 3104 & $ 14.40\pm 2.16$ \\
CL0016+16 (e)& $0.68\pm 0.01$ & $237\pm 80$ & $1.47\pm 0.05$ & 2166 & ~$\:8.27\pm 0.38 $\\
A1914 (b)& $0.75\pm 0.02$ & $165\pm 80$ & $2.90 \pm 0.15$ & 3356 & ~$\:7.28\pm 0.51$ \\
A665 (f)& $0.74\pm 0.07$ & $350\pm 35$ & $1.97\pm 0.30$ & 2933 & $ 12.10\pm 2.20$ \\
A520 (c)& $0.87\pm 0.08$ & $382\pm 50$ & $2.22\pm 0.25$ & 3018 & $ 14.50\pm 2.51$ \\
A2256 (b)& $0.91\pm 0.05$ & $419\pm 28$ & $2.23\pm 0.13$ & 3281 & $ 14.70\pm 1.28$ \\
A773 (c)& $0.63\pm 0.07$ & $160\pm 27$ & $1.52\pm 0.19$ & 2636 & ~$\:4.72\pm 0.98$ \\
A545 (d)& $0.82\pm 0.08$ & $286\pm 29$ & $1.25\pm 0.84$ & 2562 & ~$\:7.20\pm 4.89$ \\
A2319 (b)& $0.59\pm 0.01$ & $204\pm 10$ & $1.71\pm 0.07$ & 3009 & ~$\:5.95\pm 0.38$ \\
A1300 (g)& $0.64\pm 0.01$ & $171\pm 80$ & $1.71\pm 0.06$ & 2609 & ~$\:5.76\pm 0.33$ \\
A1656 (b)& $0.65\pm 0.02$ & $246\pm 15$ & $1.83\pm 0.07$ & 3136 & ~$\:7.38\pm 0.53$ \\
A2255 (b)& $0.80\pm 0.05$ & $419\pm 28$ & $1.76\pm 0.12$ & 2996 & $ 12.80\pm 1.22$ \\
A754 (b)& $0.70\pm 0.03$ & $171\pm 12$ & $2.42\pm 0.11$ & 3379 & ~$\:6.25\pm 0.52$ \\
\hline \hline \label{beta_fit} \end{tabular} \end{table*}
\begin{figure*} \resizebox{\hsize}{!}{\includegraphics{TLr_relations_halo.ps}} \caption[]{Panel a): correlation between the radio power at 1.4 GHz and the X-ray temperature for the GRHs; Panel b): correlation between the radio power at 1.4 GHz and the X-ray temperature for a total sample of 24 clusters with a GRH or with a radio halo of smaller size ($\sim 200-700$ kpc $h_{50}^{-1}$).} \label{LrTfig} \end{figure*} Although the trend in Fig.\ref{LxLrfig} appears quite tight, one may wonder whether it could be affected by observational biases. It should be pointed out that most GRHs have been discovered by follow-ups of radio halo candidates mostly identified from the NRAO VLA Sky Survey (NVSS, Condon et al. 1998). \noindent In Fig.\ref{Br} we plot the average radio surface brightness of GRHs at 1.4 GHz, normalized to the radio brightness of A3562 (the open circle with the smallest X-ray luminosity), which is just visible in the NVSS, versus the X-ray luminosity of the hosting clusters. We note that all GRHs have a radio surface brightness which is well above that of A3562 (with the exception of A2256, which is not visible in the NVSS). The fact that clusters in the redshift range $z\sim 0.15-0.3$ have similar radio brightnesses indicates that the correlation in Fig.\ref{LxLrfig} is not driven by the NVSS surface brightness limit. Indeed, these clusters have $L_X\sim 3\cdot 10^{44}- 3\cdot 10^{45}$ erg/s and range over more than one order of magnitude in radio power, whereas the effect of brightness dimming, due to the small z-range, is limited to within a factor $\sim 1.6$.
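As a rough check of this number, at a fixed observing frequency the surface brightness of a source of given linear size and radio power dims approximately as $(1+z)^{-(3+a)}$; for a spectral index $a\simeq 1$ this gives, across the quoted redshift range,
\begin{equation}
\Big(\frac{1+0.30}{1+0.15}\Big)^{4}\simeq 1.6\;.
\end{equation}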
We also note the presence of a trend, with the average radio brightness increasing with X-ray luminosity (see also Feretti 2004), which further supports the notion that the correlation in Fig.\ref{LxLrfig} at high luminosities ($L_X\,\raise 2pt \hbox {$>$} \kern-0.8em \lower 4pt \hbox {$\sim$}\; 10^{45}$ erg/s) is not driven by selection effects. Furthermore, relatively deep upper limits for non-radio-halo clusters are now available and in some cases lie well below (a factor of $\raise 2pt \hbox {$>$} \kern-0.8em \lower 4pt \hbox {$\sim$}\,10$) the trend in Fig.\ref{LxLrfig} at X-ray luminosities $\raise 2pt \hbox {$>$} \kern-0.8em \lower 4pt \hbox {$\sim$}\,5\cdot10^{44}$ erg/s (Dolag 2006). On the other hand, one may argue that the NVSS tends to select only the most powerful GRHs associated with $L_X\, \raise 2pt \hbox {$<$} \kern-0.8em \lower 4pt \hbox {$\sim$}\; 5\cdot10^{44}$ erg/s clusters. To evaluate the effect of a possible bias at these luminosities, we perform a fit by considering only GRH clusters with $L_X\, \raise 2pt \hbox {$>$} \kern-0.8em \lower 4pt \hbox {$\sim$}\, 5\cdot10^{44}$ erg/s and find a slope $2.22\pm 0.36$, which is consistent within 1$\sigma$ with Eq.~\ref{LxLreq}. Thus we conclude that, despite the poor statistics, the derived correlation (Eq.~\ref{LxLreq}) stands on a sound observational basis. \subsection{Radio Power--ICM temperature correlation} We also investigate the correlation between the radio power at 1.4 GHz and the X-ray ICM temperature. A $P_{1.4}-T$ correlation was first noted by Liang et al. (1999) and Colafrancesco (1999); with a sample of only 8 radio halos the latter author obtained a steep trend of the form $P_{1.4}\propto T^{6.25^{+6.25}_{-2.08}}$. In Fig.~\ref{LrTfig}a we report the best fit for our sample. The fit has been performed using the form: \begin{equation} \log\bigg[\frac{P_{1.4\,GHz}}{3.16\cdot10^{24}\,h_{70}^{-1}\,\frac{Watt}{Hz}}\bigg]=A_f+b_f\,\log\Big(\frac{T}{8 \,keV}\Big) \label{TLreq} \end{equation} \noindent and the best fit parameters are: $A_f=-0.390\pm 0.139$ and $b_f=9.83\pm 4.92$. We note that the observed $P_{1.4}-T$ correlation is very steep; it resembles a ``wall'' more than a correlation, and it is dominated by the large errors of the cluster temperatures available to date. In order to test the strength of this correlation we try to increase the sample by also including 7 additional clusters with smaller (size $\sim 200-700$ kpc $h_{50}^{-1}$) radio halos. In Fig.~\ref{LrTfig}b we report the best-fit $P_{1.4}-T$ obtained for the extended sample, which has a slope $b_f=6.40\pm 1.64$. Given the large uncertainties, we note that the two correlations are consistent at the 1$\sigma$ level and that, in addition, the value of the lower allowed bound of the two slopes is almost the same. \subsection{Radio Power -- virial mass correlation} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{fit_reflexhalo_LxM_paper.ps}} \caption[]{Correlation between the X-ray luminosity [0.1-2.4] keV and the virial cluster mass: for the HIFLUGCS sample (black points) plus the 16 clusters with GRHs (red points, excluding A2254, for which no information on the $\beta$-model is available) (solid line), and for the HIFLUGCS sample alone (dashed line).} \label{LMfig} \end{figure} The most important correlation for our study is that between the virial mass ($M_v$) of a cluster and the radio power at 1.4 GHz.
This correlation is indeed extensively used in the calculations of the RHLFs and number counts (Secs.~5--6) and in constraining the values of the magnetic field in galaxy clusters to be used in our calculations (Sec.~3). On the other hand, this is also the most difficult correlation to derive, since cluster masses are hard to measure. Govoni et al. (2001) first obtained a correlation between the radio power and the cluster gravitational mass (within a 3 $h_{50}^{-1}$ Mpc radius) estimated from the surface brightness profile of the X-ray image, using 6 radio halo clusters. This correlation was confirmed by Feretti (2003), who extended the sample to 10 cluster radio halos and obtained a best fit of the form $P_{1.4}\propto M^{2.3}$, where $M$ is, again, the gravitational mass computed within 3 $h_{50}^{-1}$ Mpc from the cluster center. However, it should be pointed out that while the X-ray mass determination method gives good results in relaxed clusters, it may fail in the case of merging clusters (e.g., Evrard et al. 1996, Roettiger et al. 1996; Schindler 1996). This is because the merger may cause substantial deviations from hydrostatic equilibrium and spherical symmetry. As a result, the masses of merging clusters can be either overestimated (by up to twice the true mass in the presence of shocks) or underestimated (since substructures tend to flatten the average density profile, giving an underestimate of the order of 50 \% of the true mass; see Schindler 2002). In addition, if the temperature systematically decreases with increasing radius, then the isothermal assumption leads to an overestimate of the cluster mass of about 30\% at about six core radii (Markevitch et al. 1998). The scatter produced by all these uncertainties can hopefully be reduced by making use of large cluster samples. Thus, we choose to obtain the $P_{1.4\,GHz}-M_v$ correlation by combining the $L_x-M_v$ correlation, obtained for a large statistical sample of galaxy clusters, with the $P_{1.4}-L_x$ correlation previously derived (Eq.\ref{LxLreq}, Fig.~\ref{LxLrfig}). We use a complete sample of the X-ray--brightest clusters (HIFLUGCS, the Highest X-ray FLUx Galaxy Cluster Sample) compiled by Reiprich \& B\"ohringer (2002) (hereafter RB02). We use this sample of luminous clusters ($L_x \sim 10^{44}-10^{45}$ erg s$^{-1}$) since it is large and homogeneously studied. It consists of 63 bright clusters with galactic latitude $|b_{II}|>20^{\circ}$ and flux $f_X(0.1-2.4\, keV)\ge 2\times 10^{-11}$ erg s$^{-1}$ cm$^{-2}$, and it covers about 2/3 of the whole sky. \noindent The clusters have been reanalyzed in detail by RB02 using mainly ROSAT PSPC pointed observations. RB02 report the values of $\beta$ and of the core radius, $r_c$, for all the 63 clusters, obtained by fitting the surface brightness profile of the X-ray image with a standard $\beta$-model. Then, under the assumption that the intracluster gas is isothermal and in hydrostatic equilibrium (using the ideal gas equation of state), the gravitational cluster mass within a radius r is given by (e.g., Sarazin 1986): \begin{equation} M_{tot}(<r)=\frac{3k_{B}T\,\beta}{\mu m_p G}\,\frac{r^{3}}{r^2+r_c^2}. \label{mteq} \end{equation} \noindent Eq.\ref{mteq} gives the total (dark matter plus gas) mass of the cluster as a function of radius; one must then define a physically meaningful radius to compare the masses of different clusters.
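As an illustrative application of Eq.~\ref{mteq} (our own estimate, adopting a mean molecular weight $\mu\simeq 0.6$), the parameters of A1656 (Coma) in Tab.~\ref{beta_fit} ($\beta=0.65$, $r_c=246\,h_{70}^{-1}$ kpc, $T=8.21$ keV) give, at the virial radius $R_v=3136\,h_{70}^{-1}$ kpc listed in the same table (anticipating the definition given below),
\begin{equation}
M_{tot}(<R_v)=\frac{3k_{B}T\,\beta}{\mu m_p G}\,\frac{R_v^3}{R_v^2+r_c^2}\simeq 1.8\times 10^{15}\,M_{\odot},
\end{equation}
in agreement with the virial mass reported in Tab.~\ref{beta_fit}.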
It is frequently convenient to use $r_{200}$, i.e. the radius within which the mean total mass density is 200 times the critical density of the universe at the cluster redshift. This is because $M_{200}$, the mass contained within $r_{200}$, is usually taken as a good approximation of the virial mass, since in the spherical collapse model the ratio between the average density within the virial radius and the mean cosmic density at redshift z is $\Delta_{c}=18\pi^2\simeq 178$, independent of redshift, for $\Omega_m=1$ (e.g., Lacey \& Cole 1993). In general, the value of $\Delta_{c}$ depends on the adopted cosmology. In the $\Lambda$CDM cosmology $\Delta_{c}$ is given by (Kitayama \& Suto 1996): \begin{equation} \Delta_{c}(z)= 18\pi^2(1+0.4093\omega(z)^{0.9052}), \label{Dc} \end{equation} \noindent where $\omega(z)\equiv \Omega_{f}(z)^{-1}-1$ with: \begin{equation} \Omega_{f}(z) =\frac{\Omega_{m,0}(1+z)^3}{\Omega_{m,0}(1+z)^3+\Omega_{\Lambda}}. \label{Omz} \end{equation} \noindent Thus, by using Eq.\ref{mteq} we calculate the virial radius, $R_v$, as the radius at which the ratio between the average density in the cluster and the mean cosmic density at the redshift of the cluster is given by $\Delta_c(z)$ (Eq.\ref{Dc}). The virial mass, $M_v$, and the virial radius are thus related by: \begin{equation} R_{v}=\Bigg[\frac{3M_{v}}{4\pi\Delta_{c}(z)\rho_{m}(z)} \Bigg]^{1/3} \label{Rv} \end{equation} \noindent where $\rho_{m}(z)=2.78\times10^{11}\,\Omega_{m,o}\,(1+z)^3\,h^2\,M_{\odot}\, Mpc^{-3}$ is the mean mass density of the universe at redshift z. We estimate the virial mass in the $\Lambda$CDM cosmology for the 63 clusters of the HIFLUGCS sample using Eq.~\ref{mteq}; the fit parameters ($\beta$ and $r_c$, corrected for the $\Lambda$CDM cosmology) and the temperature $T$ are given in RB02. We have searched the literature for $\beta$-fit parameters and $T$ of the clusters with GRHs (refs. in Tab.~\ref{beta_fit}) in order to estimate $M_v$ also for these clusters. Since some clusters of the HIFLUGCS sample are also in our sample, we note that in the majority of these cases the fits to the mass profile (and $T$) given in RB02 lead to a virial mass which is consistent at the 1$\sigma$ level with the mass derived by making use of the parameters obtained from more recent observations in the literature (given in Tabs.~1, 2). The $L_x-M_v$ distribution of the combined sample is reported in Fig.~\ref{LMfig}. The presence of a relatively large dispersion indicates the difficulty in estimating the virial masses of single objects and confirms the need for large samples in these studies. We note that the statistical distribution of clusters with GRHs is not different from that of the HIFLUGCS sample. We also note that clusters with known GRHs span a range in mass comparable to the mass dispersion in the HIFLUGCS sample, which is due to the different dynamical status of the clusters in the sample and to the uncertainties in the measurements. This further strengthens the need for the approach followed in this Section, since an $L_x$ (or $P_{1.4}$)--$M_v$ fit based on GRHs alone would be affected by large uncertainties. In order to better sample the region of higher X-ray luminosities and masses (typical of clusters with GRHs), we compute the $L_x$--$M_v$ fit by combining the HIFLUGCS with the radio--halo sample.
The fit has been performed using the form: \begin{equation} \log\bigg[\frac{L_X}{10^{44}\,h_{70}^{-1}\,\frac{ergs}{s}}\bigg]=A_f+b_f\, \log\Big(\frac{M_v}{3.16 \times 10^{14}\,h_{70}^{-1}\,M_{\odot}}\Big) \label{LMeq} \end{equation} \noindent The best fit values of the parameters are: $A_f=-0.229\pm 0.051$ and $b_f=1.47\pm0.08$ ($b_f=1.41\pm0.10$ is obtained with the HIFLUGCS sample alone). In order to derive the $P_{1.4\,GHz}-M_v$ correlation for GRHs, we combine Eqs.~\ref{LMeq} and \ref{LxLreq} and find: \begin{eqnarray} \lefteqn{\log\Big[\frac{P_{1.4}}{3.16\cdot10^{24}\,h_{70}^{-1}\,\frac{Watt}{Hz}}\Big]=(2.9\pm 0.4) \log\Big[\frac{M_v}{10^{15}\,h_{70}^{-1}\,M_{\odot}}\Big] {} } \nonumber\\ & & {} \;\;-(0.814\pm 0.147) \label{LrMeq} \end{eqnarray} The slope follows from combining the two fitted slopes ($1.97\times 1.47\simeq 2.9$), and its uncertainty from the standard propagation of their errors. Our $P_{1.4\,GHz}-M_v$ correlation is slightly steeper than that obtained with 10 clusters by Feretti (2003) ($P_{1.4\,GHz}\propto M^{2.3}$), which, however, was derived in an EdS cosmology by considering the mass within 3 $h_{50}^{-1}$ Mpc from the cluster centers, and not the virial mass. \section{Expected correlations and magnetic field constraints} The main goal of this Section is to extract the values of the physical parameters to be used in the model calculations of Secs.~4-6. The region of the physical parameters (in particular of B) is constrained by comparing the expected and observed trends of the synchrotron power of GRHs with the mass (and temperature) of the parent clusters. As already discussed (Sec.~2), it is unlikely that the observed correlations are driven by selection effects; however, it cannot be excluded that the detailed shape and scatter of these correlations might somewhat change with improved statistics, especially at low X-ray luminosities. \subsection{Radio power--cluster mass correlation} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{slope_B_M_z019_finale.ps}} \caption[]{Expected slope of the $P_{1.4}-M_v$ correlation as a function of the magnetic field intensity in a cluster of mass $<M>=1.6\times\,10^{15}\,M_{\odot}$. The calculations are obtained for b=0.5,0.6,0.7,0.8,0.9,1,1.2,1.3,1.5 and 1.7 (from bottom to top); $M_1=1.1\times\,10^{15}\,M_{\odot}$ and $M_2=2.5\times\,10^{15}\,M_{\odot}$ are adopted. The continuous lines are for $\Gamma\simeq0.67$ and the dashed lines are for $\Gamma\simeq0.56$. The two horizontal lines mark the 1$\sigma$ range of the observed slope.} \label{slopeM} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{slope_B_T_z019_finale.ps}} \caption[]{Expected slope of the $P_{1.4}-T$ correlation as a function of the magnetic field intensity in a cluster with temperature $<T>=8$ keV. The calculations are obtained for b=0.5,0.6,0.7,0.8,0.9,1,1.2,1.3,1.5 and 1.7 (from bottom to top); $T_1=6$ keV and $T_2=10$ keV are adopted. The continuous lines are for $\Gamma\simeq0.67$ and the dashed lines are for $\Gamma\simeq0.56$. The two horizontal lines mark the 1$\sigma$ range of the observed slope.} \label{slopeT} \end{figure} Cassano \& Brunetti (2005) derived an expected trend between the bolometric radio power, $P_R$, and the virial cluster mass and/or temperature. In the case of the GRHs, the mergers which mainly contribute to the injection of turbulence in the ICM are those with $r_s\ge R_H$, $r_s$ being the stripping radius of the infalling sub--cluster (see Sec.~6 in CB05).
It can be shown that, as a first approximation, the expected scaling $P_{R}-M_v$ is given by: \begin{equation} P_{R}\propto \frac{M_v^{2-\Gamma}\,B^2\,n_e}{(B^2+B_{cmb}^2)^2} \label{PMpre} \end{equation} \noindent where $B$ is the rms magnetic field strength in the radio halo volume (particle pitch angle isotropization is assumed), $B_{cmb}=3.2 (1+z)^2 \mu G$ is the equivalent magnetic field strength of the CMB, and $n_e$ is the number density of the relativistic electrons in the volume of the GRH. The parameter $\Gamma$ is defined by $T\propto M^{\Gamma}$; we consider $\Gamma\simeq 2/3$ (virial scaling) and $\Gamma\simeq0.56$ (e.g., Nevalainen et al. 2000). \noindent In this paper we relax the assumption, adopted in CB05, of a magnetic field independent of the cluster mass, and assume that the rms field in the emitting volume scales as $B=B_{<M>}(M/<M>)^{b}$, with $b > 0$ and $B_{<M>}$ the value of the rms magnetic field associated with a cluster of mass equal to the mean mass $<M>$ of the cluster sample. A scaling of the magnetic field intensity with the cluster mass is indeed found in numerical cosmological MHD simulations (e.g. Dolag et al. 2002, 2004). Dolag et al. (2002) found a scaling $B\propto T^2$, which would mean $B\propto M^{1.33}$ assuming the virial scaling, or $B\propto M^{1.12}$ for $\Gamma\simeq0.56$. \noindent We assume that the number density of the relativistic electrons in galaxy clusters, $n_e$, does not depend on the cluster mass. This is because there is no straightforward physical reason to believe that this quantity should scale systematically with $M_v$, and because only a relatively fast scaling of $n_e$ with mass would significantly affect the radio power--mass trend (Eq.~\ref{PMpre}). It is indeed more likely that $n_e$ changes from cluster to cluster, the major effect being simply to introduce some scatter in the $P_R-M_v$ trend (Eq.~\ref{PMpre}). \noindent Given these assumptions Eq.~\ref{PMpre} becomes: \begin{equation} P_{R}\propto \frac{M_v^{2-\Gamma}\,B_{<M>}^2\cdot (M_v/<M>)^{2b}} {(B_{<M>}^2\cdot(M_v/<M>)^{2\,b}+B_{cmb}^2)^2} \label{PMpre2} \end{equation} \noindent which has two asymptotic behaviors: $P_R\propto M_v^{2-\Gamma+2b}$ for $B_{<M>}<< B_{cmb}$ and $P_R\propto M_v^{2-\Gamma-2b}$ for $B_{<M>}>>B_{cmb}$. \noindent The observed correlations derived in Sect.~2 involve the monochromatic radio power at 1.4 GHz. How this monochromatic radio power can be scaled to $P_R$ depends on the spectrum of radio halos. In the context of particle acceleration models (e.g., Brunetti et al. 2001, Ohno et al. 2002, Kuo et al. 2003) the spectrum of radio halos is given by the superposition of the spectra emitted from regions of the emitting volume with different magnetic field strengths. It is expected to reach a peak at $\nu_b$ and then to gradually drop as a power-law, which should further steepen at higher frequencies. The break frequency can be expressed as a function of the cluster mass and of the rms field B in the emitting volume (CB05): \begin{equation} \nu_b\propto M^{2-\Gamma}{{B\:\eta_t^2}\over{(B^2+B_{cmb}^{2})^{2}}} \label{nub} \end{equation} \noindent If we adopt a power-law spectrum extending from the frequency of the peak to a few GHz, $P(\nu)\propto\nu^{-a}$, then $P_R$ and the monochromatic radio power at a fixed frequency $\nu_o$ ($\nu_o\ge\nu_b$) scale as $P(\nu_o)/P_R\propto(\frac{\nu_b}{\nu_o})^{a-1}$.
This depends on the cluster mass (Eq.\ref{nub}): \begin{equation} \frac{P(\nu_o)}{P_R} \propto {{ {M_v}^{(a-1)(2 - \Gamma +b)} }\over{ \left(B_{<M>}^2(M_v/<M>)^{2b} +B_{cmb}^2\right)^{2(a-1)} }} \label{setti} \end{equation} \noindent thus in the case $B<<B_{cmb}$ one has $P(\nu_o)/P_R\propto(\frac{M}{<M>})^{(a-1)(2-\Gamma+b)}$, while in the case $B>>B_{cmb}$ one has $P(\nu_o)/P_R\propto(\frac{M}{<M>})^{(a-1)(2-\Gamma-3b)}$, which means that for $B<<B_{cmb}$ the $P(\nu_o)-M$ trend is steeper than the $P_R-M$ one, while the opposite happens in the case $B>>B_{cmb}$ (for continuity, the two scalings should coincide at $B\sim B_{cmb}$). On the other hand, the trend of $P(\nu_o)/P_R$ with the cluster mass in massive galaxy clusters is rather weak, because the observed radio spectral index between 327--1400 MHz is $a \sim 1.2$ (e.g., Feretti 2003) and because B in the most massive objects is probably close to $B_{cmb}$ (Sec.~3.3, Fig.\ref{regions}; Govoni \& Feretti 2004). Thus, in order to compare the model expectations with the observations, we will safely assume the same scaling for the monochromatic and the total radio power. In order to have a prompt comparison with observations we calculate the slope $\alpha_M$ of the $P_{1.4}-M$ correlation between two points as: \begin{equation} \alpha_M=\frac{\log(P_1/P_2)}{\log(M_1/M_2)} \label{alpha} \end{equation} \noindent Eq.\ref{alpha} can be compared with the observed slope to constrain the value of the magnetic field and of the slope, $b$, of the scaling between $B$ and the cluster mass. The $M_1$ and $M_2$ values give the representative mass range spanned by the bulk of clusters with GRHs, while $B_{cmb}$ should be calculated at the mean redshift of our sample ($<z>\simeq 0.19$). We point out that, given $B_{<M>}$ and $b$, the values of B are fixed for all the masses of the clusters in our sample. In Fig.\ref{slopeM} we report the expected slope $\alpha_M$ (Eq.~\ref{alpha}) as a function of $B_{<M>}$. The different curves are obtained for different scaling laws of the magnetic field with the cluster mass ($b=0.5$ to $1.7$, see caption). Dashed lines refer to $\Gamma\simeq 0.56$ and solid lines to the virial case. The two blue horizontal lines (Fig.\ref{slopeM}) indicate the range of the observed slope ($\alpha_M=2.9\pm0.4$). \noindent Fig.~\ref{slopeM} shows that there are values of $B_{<M>}$ and $b$ for which the expected slope is consistent with the observed one. As a first result we find that with increasing b the value of $B_{<M>}$ must increase in order to match the observations (for example, $b\sim 0.6$ requires $B_{<M>}\:\sim\:0.2\:-\:1.4\:\mu$G while $b\sim1.7$ requires $B_{<M>}\:\sim\:2\:-\:3\:\mu$G). Finally, the asymptotic behavior of Eq.\ref{PMpre2}, combined with the observed correlation (Eq.~\ref{LrMeq}), allows us to immediately constrain b: for $B_{<M>}\,<< B_{cmb}$, setting $2-\Gamma+2b=2.9\pm 0.4$ gives $0.58(0.53)<b<0.98(0.93)$ for the virial (non-virial) case, whereas in the case of $B_{<M>}\,>>B_{cmb}$ the model expectations cannot be reconciled with the observations. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{regione_z019_fin.ps}} \caption[]{The region in the plane ($B_{<M>}$,b) allowed by the observed $P_{1.4}-M_v$ and $P_{1.4}-T$ correlations is reported as a shadowed area; $<M>=1.6\times\,10^{15}\,M_{\odot}$. The dashed line indicates the upper bound of the allowed region obtained by considering only the $P_{1.4}-M_v$ correlation.
The coloured points indicate the relevant configurations of the parameters used in the statistical calculations of Secs.~4-6 (Tab.~\ref{choose_value}). The vertical arrows indicate the IC limits on $B$.} \label{regions} \end{figure} \begin{table} \begin{center} \caption{Values of $\alpha_M$ and $\eta_t$ derived for relevant sets of the (b, $B_{<M>}[\mu G]$) parameters.} \begin{tabular}{c|c|c|c|c} \hline \hline
b & $ B_{<M>}[\mu G]$ &$\alpha_M$ & $\eta_{t,min} $ & $\eta_{t,max}$ \\ \hline \hline
1.7 & 3.0 & 2.5 & 0.19 & 0.2 \\
1.7 & 2.2 & 3.22 & 0.17 & 0.2 \\
1.5 & 1.9 & 3.3 & 0.15 & 0.2 \\
1.3 & 2.25 & 2.84 & 0.15 & 0.2 \\
1.0 & 1.55 & 2.96 & 0.16 & 0.21 \\
1.0 & 0.45 & 3.3 & 0.29 & 0.33 \\
0.9 & 0.18 & 3.23 & 0.39 & 0.44 \\
0.6 & 0.2 & 2.63 & 0.38 & 0.44 \\
\hline \hline \label{choose_value} \end{tabular} \end{center} \end{table} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{P_eta_B_finale1.ps}} \caption[]{Probability to form GRHs at $ 0.05 \leq z \leq 0.15$ in the observed mass bin I: $0.95-1.9\times 10^{15} M_{\odot}$ and at $0.05 \leq z \leq 0.2$ in bin II: $1.9-3.8\times 10^{15} M_{\odot}$ as a function of $\eta_t$. The calculations are reported for the following representative cases: $b=1.7$, $B_{<M>}=3.0\mu$G (blue points); $b=1.0$, $B_{<M>}=1.55\mu$G (black points); $b=0.9$, $B_{<M>}=0.18\mu$G (cyan points) and $b=0.6$, $B_{<M>}=0.2\mu$G (green points). The bottom shadowed region marks the observed probability for GRHs in the mass bin I, while the top shadowed region marks that in the mass bin II. The values of the observed probabilities are obtained by combining the results from Giovannini et al. 1999, Giovannini \& Feretti 2000, and Feretti 2002. The observed probabilities for bin I are calculated up to $z \leq 0.15$ to minimize the effect of the incompleteness of the X--ray and radio catalogs used by these authors.} \label{exa} \end{figure} \subsection{Radio power--cluster temperature correlation} Since the temperature is related to the cluster mass, the radio power--mass correlation also implies a correlation between the synchrotron radio power and the cluster temperature. Thus, in order to maximize the observational constraints, an analysis similar to that of Sect.~3.1 can also be carried out for the radio power--temperature correlation ($P_{R}-T$). Combining Eq.~\ref{PMpre2} with the $M-T$ scaling law ($T\propto M^{2/3}$ for the virial case and $T\propto M^{0.56}$ otherwise) one has: \begin{equation} P_{R}\propto \frac{T^{\frac{2}{\Gamma}-1}\,B_{<M>}^2\,(T/<T>)^{2\,b_T}} {(B_{<M>}^2\cdot(T/<T>)^{2\,b_T}+B_{cmb}^2)^2} \label{PTpre} \end{equation} \noindent where $b_T=b/\Gamma$, with $\Gamma\simeq 2/3$ (virial case) or $\Gamma\simeq 0.56$ (non-virial case). The asymptotic behaviors of Eq.~\ref{PTpre} are given by $P_R\propto T^{2/\Gamma-1+2b_T}$ ($B_{<M>}\,<< B_{cmb}$) and $P_R\propto T^{2/\Gamma-1-2b_T}$ ($B_{<M>}\,>>B_{cmb}$). As in Sec.~3.1, we can adopt the same scaling with T for both $P_{R}$ and $P_{1.4}$ and compare the expected slope with the observed one. We calculate the slope $\alpha_T$ of the $P_{1.4}-T$ correlation between two points as: \begin{equation} \alpha_T=\frac{\log(P_1/P_2)}{\log(T_1/T_2)} \label{alphaT} \end{equation} \noindent where $T_1$ and $T_2$ define the temperature interval of our sample, $<T>=8$ keV is the mean temperature, and $B_{cmb}$ is evaluated at $<z>\simeq 0.19$. In Fig.~\ref{slopeT} we report the slope $\alpha_T$ of the $P_{1.4}-T$ correlation as a function of the magnetic field strength in an average cluster, $B_{<M>}$.
The different curves are obtained for different scaling laws of the cluster magnetic field with mass (i.e., temperature) ($b=0.5$ to $1.7$). Dashed lines are for $\Gamma\simeq 0.56$ and continuous lines for the virial case. The horizontal blue lines mark the lower limit $\alpha_T\simeq 4.76$ and the upper limit $\alpha_T\simeq 8.05$ of the observed correlation. Fig.~\ref{slopeT} shows that there is a range of values of the parameters ($B_{<M>},b$) for which the model is consistent with the observed slope. The relevant point is that, similarly to the case of the $P_{1.4}-M$ correlation, values of $B_{<M>}\,>>B_{cmb}$ cannot be reconciled with the observations: a clear upper boundary at $B_{<M>}<3\,\mu$G is obtained. \begin{figure*} \resizebox{\hsize}{!}{\includegraphics{P_M_z_024_b1p7.ps}} \caption[]{a) Occurrence of GRHs as a function of the cluster mass in three redshift bins: 0-0.1 (black line), 0.2-0.3 (blue line), 0.4-0.5 (green line). b) Occurrence of GRHs as a function of redshift in two mass bins: [1-2]$\times 10^{15} M_{\odot}$ (cyan line) and [2-4.5]$\times 10^{15} M_{\odot}$ (blue line). The calculations have been performed assuming: b=1.7, $B_{<M>}=3.0 \mu$G, $\eta_t=0.2$ in both panels.} \label{PMz_1p7} \end{figure*} \subsection{Constraining the magnetic field} We combine the results obtained from the observed correlations (both $P_{1.4}-M_v$ and $P_{1.4}-T$) and the model expected trends to select the allowed region of the ($B_{<M>}$, $b$) parameters. We consider the slope of the $P_{1.4}-T$ correlation $\alpha_T\simeq 6.4\pm 1.64$ as derived for the extended sample of 24 galaxy clusters with giant and small radio halos. This is because what matters here is the allowed lower bound on $\alpha_T$, which does not depend on the adopted sample (Sect.~2.2). \noindent In Fig.\ref{regions} we report the region of the plane ($B_{<M>},b$) allowed by the observed slopes at the 1$\sigma$ level. The lower bound of the ($B_{<M>}$,b) region is due to the $P_{1.4}-M_v$ correlation, while the upper bound is mostly due to the $P_{1.4}-T$ correlation, which is poorly constrained because of the very large statistical errors. This bound is however also limited by the $P_{1.4}-M_v$ correlation (Fig.\ref{regions}, dashed line). An additional limit on $B_{<M>}$, also reported in Fig.\ref{regions} (vertical arrows), can be obtained from inverse Compton (IC) arguments. Indeed, a lower bound to the magnetic field strength can be inferred by requiring that the IC scattering of the photons of the CMB radiation does not overproduce the hard X-ray excess fluxes observed so far in a few clusters (e.g., Rephaeli \& Gruber 2003, Fusco-Femiano et al. 2003). In this case the value of the mean magnetic field intensity in the cluster volume can be estimated from the ratio between the hard X-ray and the radio emission. The resulting value of the magnetic field should be considered as a lower limit, both because the IC emission may come from regions more external than those responsible for the synchrotron emission (e.g., Brunetti et al. 2001, Kuo et al. 2003, Colafrancesco et al. 2005b) and because, in principle, additional mechanisms may contribute to the hard X-ray fluxes (e.g., Fusco-Femiano et al. 2003). One of the best studied cases is that of the Coma cluster, for which an average magnetic field intensity of the order of $B_{IC}\simeq\,0.2\,\mu G$ was derived (Fusco-Femiano et al. 2004).
As a first approximation we can use this value to obtain the lower bound of B for each cluster mass from the scaling $B=B_{<M>}(M/<M>)^{b}$. The resulting ($B_{<M>}$,$b$) region spans a wide range of values of B and b. An inspection of Fig.\ref{regions} immediately identifies two allowed regimes: a super-linear scaling ($b>1$) with relatively high values of B, and a sub-linear scaling ($b<1$) with lower values of B. All the calculations reported in the following sections are carried out by assuming representative values of ($B_{<M>}$,$b$) inside the constrained region (Fig.~\ref{regions}, coloured filled dots, and Tab.\ref{choose_value}). \section{Probability to form giant radio halos} \subsection{Probability of radio halos and constraining $\eta_t$} In this Section we derive the probability to find GRHs as a function of cluster mass in the redshift range $z$=0--0.2. The byproduct of the Section is to calibrate the model by requiring that the expected fraction of clusters with GRHs is consistent with the observational constraints. This allows us to select a range of values of the parameter $\eta_t$, which is the ratio between the energy injected in the form of magnetosonic waves and the $PdV$ work done by the infalling subclusters in passing through the most massive one. $\eta_t$ is a free parameter in our calculations, since the fraction of the energy which goes into compressible modes is likely to depend on the details of the driving force. \begin{figure*} \resizebox{\hsize}{!}{\includegraphics{P_M_z_051_b0p9.ps}} \caption[]{a) Occurrence of GRHs as a function of the cluster mass in three redshift bins: 0-0.1 (black line), 0.2-0.3 (blue line), 0.4-0.5 (green line). b) Occurrence of GRHs as a function of redshift in two mass bins: [1-2]$\times 10^{15} M_{\odot}$ (cyan line) and [2-4.5]$\times 10^{15} M_{\odot}$ (blue line). The calculations have been performed assuming: b=0.9, $B_{<M>}=0.2 \mu$G, $\eta_t=0.42$ in both panels.} \label{PMz_0p9} \end{figure*} \noindent In the conservative case of solenoidal forcing (and plasma beta $>>$ 1) this fraction is expected to scale as $\mathcal{M}_s^2\,\mathcal{R}_e$ (with $\mathcal{M}_s<1$ the turbulent Mach number) for $\mathcal{M}_s^2\,\mathcal{R}_e\,<10$, and with a flatter slope for larger values (Bertoglio et al. 2001). Assuming a Reynolds number (at the injection scale, {\it i.e.,~} hundreds of kpc) in hot and magnetized galaxy clusters $\mathcal{R}_e\,\raise 2pt \hbox {$>$} \kern-0.8em \lower 4pt \hbox {$\sim$} 10^3$ (see discussion in Lazarian 2006; Brunetti 2006) and a turbulent energy of the order of $\sim$ 20\% of the thermal energy (CB05), from Fig.~8 in Bertoglio et al. (2001) one finds a reference value $\eta_t\sim\,0.1$, which may be even larger in the case of compressible driving. \noindent Radio halos are identified with those objects in a synthetic cluster population with a synchrotron break frequency (Eq.\ref{nub}) $\nu_b\:\raise 2pt \hbox {$>$} \kern-0.8em \lower 4pt \hbox {$\sim$}\: 200$ MHz in a region of 1 $Mpc$ $h_{50}^{-1}$ size. In CB05 it was assumed that the magnetic field in the radio halo volume is independent of the cluster mass, with $B\:\simeq\:0.5 \mu G$. Then $ \nu_b\: \propto \:M^{2-\Gamma}$ and consequently massive clusters are expected to be favoured in forming GRHs.
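For instance, with $\Gamma=2/3$ and a mass--independent field, Eq.~\ref{nub} implies
\begin{equation}
\nu_b\propto M^{4/3}\;\;\Longrightarrow\;\;\frac{\nu_b(2M)}{\nu_b(M)}=2^{4/3}\simeq 2.5\;,
\end{equation}
so that the $\nu_b\geq 200$ MHz condition is much more easily fulfilled in the most massive systems.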
CB05 indeed showed that the expected fraction of clusters with GRHs naturally shows an abrupt increase with cluster mass, and that the observed fractions (20-30 \% for $M > 2\times10^{15}\,M_{\odot}$ clusters, 2-5 \% for $M \sim 10^{15}\,M_{\odot}$ clusters, and negligible for less massive objects) can be well reconciled with the model expectations by assuming $\eta_t \sim 0.24-0.34$. In the present paper we assume that the rms magnetic field depends on the cluster mass, and this affects the synchrotron break frequency (Eq.~\ref{nub}) and the occurrence of GRHs with cluster mass. On the other hand, in Sect.~3.3 we have also shown that the comparison between the expected and observed trends of radio power with cluster mass (and temperature) helps in constraining the range of values which can be assigned to the magnetic field in clusters. \noindent Thus our calculations of the occurrence of GRHs ($z\le 0.2$) and the selection of the values of $\eta_t$ necessary to reproduce the observations should be performed within the allowed region in Fig.\ref{regions}. To calculate the expected probabilities to form radio halos we first run a large number, ${\cal{N}}$, of merger trees for different cluster masses at $z=0$, ranging from $\sim 5\times10^{14}M_{\odot}$ to $\sim 6\times10^{15} M_{\odot}$. Then we choose different mass bins $\Delta M$ and redshift bins $\Delta z$ in which to perform our calculations. Thus, for each mass $M$, we estimate the formation probability of GRHs in the mass bin $\Delta M$ and in the redshift bin $\Delta z$ as (CB05): \begin{equation} f_M^{\Delta M,\:\Delta z}=\frac{\sum_{j=1}^{{\cal{N}}}t_u^j}{\sum_{j=1}^{{{\cal{N}}}} (t_u^j+t_d^j)} \label{partialrate} \end{equation} \noindent where $t_u$ is the time in the redshift interval $\Delta z$ that the cluster spends in the mass bin $\Delta M$ with $\nu_b \geq\:200$ MHz and $t_d$ is the time that the same cluster spends in $\Delta M$ with $\nu_b<200$ MHz. The total probability of formation of GRHs in the mass bin $\Delta M$ and in the redshift bin $\Delta z$ is obtained by combining all the contributions (Eq.~\ref{partialrate}) weighted with the present-day Press \& Schechter mass function of clusters. To have a prompt comparison with present observational constraints, we calculate the probability to form GRHs at $z\,\raise 2pt \hbox {$<$} \kern-0.8em \lower 4pt \hbox {$\sim$} \,0.2$ in the two observed mass bins: bin I ($[0.95-1.9]\times 10^{15} M_{\odot}$) and bin II ($[1.9-3.8]\times 10^{15} M_{\odot}$). As an example, in Fig.~\ref{exa} we report these probabilities in both bin I and bin II as a function of $\eta_t$ for four representative cases which nicely sample the region in Fig.\ref{regions}: $b=1.7$, $B_{<M>}=3.0\mu$G (blue points); $b=1.0$, $B_{<M>}=1.55\mu$G (black points); $b=0.9$, $B_{<M>}=0.18\mu$G (cyan points); $b=0.6$, $B_{<M>}=0.2\mu$G (green points). The bottom shadowed region in Fig.~\ref{exa} marks the observed probability for GRHs in the mass bin I, while the top shadowed region marks that in the mass bin II. Fig.~\ref{exa} shows that it is possible to find a range of values of the parameter $\eta_t$ for which the theoretical expectations are consistent with the observed statistics in both mass bins. However, we note that the energy requirement of the MS modes increases with decreasing magnetic field: it goes from $\eta_t \sim 0.15-0.2$ for intermediate--large values of $B$ up to $\eta_t \sim 0.5$ at the lower bound of the allowed $B$ strengths.
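For illustration, the Monte Carlo estimator of Eq.~\ref{partialrate} can be sketched in a few lines of code; this is a minimal sketch, assuming merger trees sampled at a fixed time step, with a hypothetical helper \texttt{nu\_b\_of(M, z)} implementing Eq.~\ref{nub}:
\begin{verbatim}
def grh_fraction(trees, m_bin, z_bin, nu_b_of, nu_thr=200e6):
    # Eq. (partialrate): fraction of the time spent in the
    # (mass, redshift) bin with a break frequency above nu_thr.
    # Each tree is a sequence of (z, M) snapshots taken at a
    # fixed time step, so counting snapshots measures time.
    t_up = t_tot = 0
    for tree in trees:
        for z, M in tree:
            if m_bin[0] <= M < m_bin[1] and z_bin[0] <= z < z_bin[1]:
                t_tot += 1
                if nu_b_of(M, z) >= nu_thr:
                    t_up += 1
    return t_up / t_tot if t_tot else 0.0
\end{verbatim}
The total probability quoted in the text is then obtained by weighting these fractions with the present-day Press \& Schechter mass function, as described above.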
The fact that the magnetic field depends on the cluster mass is reflected in the different behavior that the various selected configurations of parameters may have in Fig.~\ref{exa} in the two mass bins: one configuration of parameters may be favoured with respect to another in one mass bin but disfavoured in the other. This is related to the transition from IC dominance ($B < B_{cmb}$) to synchrotron dominance ($B > B_{cmb}$) that occurs in going from bin I to the more massive clusters of bin II. In the case of IC dominance an increase of $B$ does not significantly affect the particle energy losses; it causes an increase of $\nu_b$ (Eq.\ref{nub}) and thus an increase of the probability to have GRHs. On the other hand, in the case of synchrotron dominance the particle energy losses increase and consequently $\nu_b$ decreases (Eq.\ref{nub}), as does the probability to form GRHs. For this reason, given $\eta_t$, the ratio between the probability to form GRHs in bin I and in bin II is expected to decrease with increasing b, as larger values of $b$ yield a more rapid increase of B with cluster mass (Fig.\ref{exa}). In Tab.\ref{choose_value} we report the maximum and the minimum values of $\eta_t$ ($\eta_{t,max}$ and $\eta_{t,min}$) for which the model reproduces the observed probabilities (1$\sigma$ limits) in both mass bins. The results are given for the relevant ($B_{<M>},b$) configurations reported in Fig.~\ref{regions}. In agreement with the above discussion, one may notice that in the case of IC dominance a larger magnetic field implies a smaller energetic request (smaller $\eta_{t,max}$). \subsection{Probability of radio halos with $M_v$ and evolution with $z$} In this Section we calculate the expected probability to form GRHs as a function of cluster mass, without restricting ourselves to the mass bins considered by present observations (bin I and bin II in Fig.~\ref{exa}), and calculate the evolution of this probability with redshift. In doing these calculations we use the values of $\eta_t$ as constrained in Tab.\ref{choose_value} within the region ($B_{<M>}$,b) of Fig.\ref{regions}, and make the viable (and necessary) assumption that the value of $\eta_t$ (i.e., the efficiency of turbulence in going into MS modes) is constant with redshift. A detailed calculation of the acceleration efficiency and of the probability to have GRHs requires detailed Monte Carlo calculations (see Sec.~6 of CB05), essentially because at each redshift the acceleration is driven by the MS modes injected in the ICM by the mergers that the cluster experienced in the last few Gyr at that redshift. However, to readily understand the model results reported in the following, we may use the simplified formula of Eq.~(\ref{nub}), which describes the approximate trend of the break frequency with cluster mass. The scaling $B\propto M^b$ adopted in this paper implies that the synchrotron losses overcome the IC losses first in the more massive objects. Clusters of smaller mass in our synthetic populations have $B<<B_{cmb}$, and this implies (Eq.\ref{nub}) $\nu_b\propto M^{2-\Gamma+b}\,(1+z)^{-8}$, so that the probability to form GRHs in these clusters increases with the cluster mass ($2-\Gamma+b>0$ always) and decreases with redshift. In the case of more massive clusters the situation may be more complicated. Indeed for these clusters there is a value of the mass, $M_{*}$, for which the cluster magnetic field becomes equal to $B_{cmb}$.
For $M\,>\,M_*(z)$ one has $\nu_b\propto M^{2-\Gamma-3b}$ (Eq.~\ref{nub}) and thus the probability to form GRHs would decrease as the mass becomes larger (given the lower bound on the slope b as constrained in Fig.~\ref{regions}, it is $2-\Gamma-3b<0$). In these cases, at variance with the smaller clusters, the occurrence of GRHs with z is only driven by the cosmological evolution of the cluster-merger history (which drives the injection of turbulence) rather than by the dependence of the IC losses on z (at least up to a redshift for which $B\sim B_{cmb}(z)$). As a consequence, the general picture is that, going from smaller to larger masses, the probability should reach a maximum value around $M_*$, for which $B\sim B_{cmb}(z)$, and then it should start to smoothly decrease. The value of this mass increases with z and depends on the scaling law of B with M. It is: \begin{equation} M_*(z) \simeq\,<M> \left( {{3.2\, (1+z)^2}\over{B_{<M>}(\mu G)}} \right)^{1/b} \label{m*} \end{equation} In order to show this complex behavior in some detail, in the following we analyze two relevant examples. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{RHLFs_Noi_Ensslin_z0_finale.ps}} \caption[]{Expected RHLFs at $z\simeq0.05$ (coloured lines with dots) obtained assuming: b=1.7, $B_{<M>}=3.0\mu$G (blue lines: $\eta_t=0.2$ (solid line) and $\eta_t=0.19$ (dashed line)); b=1.7, $B_{<M>}=2.2\mu$G and $\eta_t=0.2$ (magenta line); b=1.5, $B_{<M>}=1.9\mu$G and $\eta_t=0.2$ (red line); b=0.9, $B_{<M>}=0.18\mu$G and $\eta_t=0.39$ (cyan line); b=0.6, $B_{<M>}=0.2\mu$G and $\eta_t=0.38$ (green line); b=1.0, $B_{<M>}=0.45\mu$G and $\eta_t=0.33$ (black line). For comparison we report the range of the Local RHLF obtained by E\&R02 (black solid thick lines).} \label{RHLF_noi_ens} \end{figure} \subsubsection{An example with super-linear scaling: large B} \begin{figure*} \resizebox{\hsize}{!}{\includegraphics{2RHLF.ps}} \caption[]{Evolution of the RHLFs with redshift. The RHLFs are reported from redshift bins 0-0.1 to 0.5-0.6 (curves from top to bottom). Calculations are performed for: Panel a) b=1.7, $B_{<M>}=3.0\,\mu$G, $\eta_t=0.2$, $\alpha_M\simeq 2.5$ and Panel b) b=0.9, $B_{<M>}=0.18\,\mu$G, $\eta_t=0.39$, $\alpha_M\simeq 3.23$.} \label{RHLF_flat} \end{figure*} As a first example we focus on the case of a super-linear scaling. In Fig.~\ref{PMz_1p7} we report the occurrence of GRHs as a function of the cluster mass in three redshift bins (panel a)) and the occurrence of GRHs as a function of redshift in two mass bins (panel b)). These calculations have been performed using $b=1.7$ and $B_{<M>}=3 \mu$G, which are allowed by the observed correlations. We adopt $\eta_t=0.2$, which is within the corresponding range of values obtained in Sec.~4.1 (see Tab.~\ref{choose_value}) in order to reproduce the observed probability of GRHs at $z<0.2$. One finds that at lower redshifts ($z\,\raise 2pt \hbox {$<$} \kern-0.8em \lower 4pt \hbox {$\sim$}\,0.1$) the probability to form GRHs increases with the mass of the clusters up to $M_*\:\sim\:2\,\times\,10^{15}\:M_{\odot}$, while for $M\:\raise 2pt \hbox {$>$} \kern-0.8em \lower 4pt \hbox {$\sim$} M_*$ synchrotron losses become dominant and the probability decreases. The mass at which $B\sim B_{cmb}(z)$ increases as $(1+z)^{2/b}$, and this causes the shift with z of the cluster mass at which the maximum of the probability is reached.
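As a check of these numbers (our own estimate), Eq.~\ref{m*} with $b=1.7$, $B_{<M>}=3\,\mu$G and $<M>=1.6\times 10^{15}\,M_{\odot}$ gives
\begin{equation}
M_*(z=0.1)\simeq 1.6\times 10^{15}\,\Big(\frac{3.2\times 1.1^2}{3.0}\Big)^{1/1.7}\,M_{\odot}\simeq 1.9\times 10^{15}\,M_{\odot},
\end{equation}
consistent with the location of the maximum of the probability in Fig.~\ref{PMz_1p7}a.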
\noindent Fig.\ref{PMz_1p7}b shows the occurrence of GRHs with z. In the higher mass bin ($2\cdot 10^{15}\le M \le 4.5\cdot 10^{15}\,M_{\odot}$) the occurrence increases up to $z\sim 0.4$ and then starts to drop. In these very massive clusters the magnetic field is larger than $B_{cmb}(z)$ at any redshift, and thus the synchrotron losses are always the dominant loss term. The behavior of the probability with z in this case is essentially due to the fact that the bulk of the turbulence in these massive clusters is injected preferentially at $z\sim 0.2-0.5$. A different behavior is observed in the lower mass bin ($10^{15}\le M \le 2\cdot 10^{15}\,M_{\odot}$), where the occurrence of GRHs decreases with redshift. This is because clusters with these lower masses always have $B<B_{cmb}(z)$. \subsubsection{An example with sub-linear scaling: small B} As a second example we focus on a sub-linear scaling b. In Fig.~\ref{PMz_0p9} we report the occurrence of GRHs as a function of the cluster mass in three redshift bins (panel a)) and the occurrence of GRHs as a function of redshift in two mass bins (panel b)). The calculations have been performed using $b=0.9$ and $B_{<M>}=0.2 \mu$G, which are allowed by the correlations, and adopting a corresponding $\eta_t=0.42$, which is within the range of values obtained in Sec.~4.1 (see Tab.~\ref{choose_value}) in order to reproduce the observed probability of formation of GRHs at redshift $z<0.2$. In this case the probability to form GRHs increases with the mass of the clusters at any redshift. Indeed the magnetic field in these clusters is always $B<<B_{cmb}(z)$ (for all redshifts and masses) and the IC losses are always the dominant loss term. In addition, as expected, in both the considered mass bins the probability to form GRHs decreases as a function of redshift, due to the increase of the IC losses (Fig.~\ref{PMz_0p9}, panel b)). \section{Luminosity Functions of Giant Radio Halos} In this Section we derive the expected luminosity functions of giant radio halos (RHLFs). Calculations for the RHLFs are carried out within the ($B_{<M>}$,b) region of Fig.~\ref{regions} by adopting the corresponding values of $\eta_t$ which allow us to match the GRH occurrence at $z<0.2$. First we use the probability $P_{\Delta z}^{\Delta M}$ to form GRHs as a function of the cluster mass to estimate the mass functions of GRHs ($dN_{H}(z)/dM dV$): \begin{equation} {dN_{H}(z)\over{dM\,dV}}= {dN_{cl}(z)\over{dM\,dV}}\times P_{\Delta z}^{\Delta M}=n_{PS}\times P_{\Delta z}^{\Delta M}, \label{RHMF} \end{equation} \begin{figure*} \resizebox{\hsize}{!}{\includegraphics{RHLFs_z06_M15p25_f.ps}} \caption[]{Expected RHLFs in 6 redshift bins (as reported in the panels). Calculations are performed by using the following values of the parameters: b=1.7, $B_{<M>}=3.0\mu$G (blue lines: $\eta_t=0.2$ (solid lines) and $\eta_t=0.19$ (dashed lines)); b=1.7, $B_{<M>}=2.2\mu$G and $\eta_t=0.2$ (magenta lines); b=1.5, $B_{<M>}=1.9\mu$G and $\eta_t=0.2$ (red lines); b=0.9, $B_{<M>}=0.18\mu$G and $\eta_t=0.39$ (cyan lines); b=0.6, $B_{<M>}=0.2\mu$G and $\eta_t=0.38$ (yellow lines); b=1.0, $B_{<M>}=0.45\mu$G and $\eta_t=0.33$ (black lines).} \label{RHLF_flat_z} \end{figure*} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{conteggi_z02_M15p25_finali.ps}} \caption[]{Number of expected GRHs above a given radio flux at 1.4 GHz from a full sky coverage up to $z\le\,0.2$ (the colour code is that of Fig.\ref{RHLF_noi_ens}).
The black points are the data taken from Giovannini et al. (1999) and corrected for the incompleteness of their sky coverage ($\sim\,2\,\pi$ sr).} \label{conteggi_z02} \end{figure} \begin{figure} \resizebox{\hsize}{!}{\includegraphics{conteggi_z06_M15p25_finali.ps}} \caption[]{Number of expected GRHs from the whole universe above a given radio flux at 1.4 GHz. The colour code is the same as in Fig.\ref{RHLF_noi_ens}.} \label{conteggi_z06_f} \end{figure} \noindent where $n_{PS}=n_{PS}(M,z)$ is the Press \& Schechter (1974) mass function, whose normalization depends essentially on $\sigma_8$ (the present-day rms density fluctuation on a scale of $8 h^{-1}$ Mpc) and $\Omega_o$; we use $\sigma_8=0.9$ in an $\Omega_o=0.3$ universe. We stress that we use $n_{PS}$ since our model is based on the Press \& Schechter formalism. \noindent The RHLF is thus given by: \begin{equation} {dN_{H}(z)\over{dV\,dP_{1.4}}}= {dN_{H}(z)\over{dM\,dV}}\bigg/ {dP_{1.4}\over dM}. \label{RHLF} \end{equation} \noindent $dP_{1.4}/dM$ depends on the adopted ($B_{<M>}$, b), since each allowed configuration in Fig.~\ref{regions} selects a value of the slope of $P_{1.4}-M_v$ (e.g., Tab.~3) which is consistent (at 1$\sigma$) with the value of the observed slope obtained with present observations ($\alpha_M=2.9 \pm 0.4$; see Sec.~3). In particular, from Fig.~\ref{slopeM} one has that, for a given $b$, larger values of the magnetic field select smaller values of the slope of the $P_{1.4}-M_v$ correlation (and vice versa). In Fig.\ref{RHLF_noi_ens} we report the Local RHLFs (number of GRHs per comoving $Gpc^3$ as a function of the radio power) as expected from our calculations. The most interesting feature in the RHLFs is the presence of a cut-off/flattening at low radio powers. This flattening is a unique feature of particle acceleration models, since it marks the decrease of the efficiency of particle acceleration (in a 1 $h_{50}^{-1}$ Mpc cube) in the case of the less massive galaxy clusters. We stress that this result does not depend on the particular choice of the parameters. \noindent To highlight this result, in Fig.\ref{RHLF_noi_ens} we also compare our RHLFs with the range of Local $(RHLFs)_{E\&R}$ (black solid lines) reported by En\ss lin \& R\"ottgering (2002). These $(RHLFs)_{E\&R}$ are obtained by combining the X-ray luminosity function of clusters with the radio--X-ray correlation for GRHs, and by assuming that a constant fraction, $f_{rh}=1/3$, of galaxy clusters have GRHs independently of the cluster mass (see En\ss lin \& R\"ottgering 2002). \noindent The most important difference between the two expectations is indeed that a low-radio-power cut-off does not show up in the $(RHLFs)_{E\&R}$, in which the bulk of GRHs is expected at very low radio powers. The agreement between the two Local RHLFs at higher synchrotron powers arises essentially because the derived occurrence of GRHs in massive objects (Sect.~4.2) is in line with the fraction, $f_{rh}=1/3$, adopted by En\ss lin \& R\"ottgering (2002). In Fig.~\ref{RHLF_flat} we report the RHLFs expected by our calculations in different redshift bins. The calculations are performed by using two relevant sets of parameters (a super--linear and a sub--linear case, as given in the caption of Fig.~\ref{RHLF_flat}) allowed by the observed correlations. With increasing redshift the RHLFs decrease, due to the evolution of the cluster mass function with z and to the evolution of the probability to form GRHs with z.
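Schematically, Eqs.~\ref{RHMF} and \ref{RHLF} amount to a change of variable from mass to radio power; a minimal sketch (assuming a pure power-law $P_{1.4}-M_v$ relation with an indicative normalization, and with \texttt{n\_ps} and \texttt{p\_grh} as user-supplied functions) reads:
\begin{verbatim}
import numpy as np

def rhlf(masses, n_ps, p_grh, alpha_m, P0=4.9e23, M0=1e15):
    # P_1.4(M) as a pure power law with slope alpha_m (Eq. LrMeq);
    # the normalization P0 [Watt/Hz] at M0 [Msun] is indicative only.
    P14 = P0 * (masses / M0) ** alpha_m
    dP_dM = alpha_m * P14 / masses          # dP_1.4/dM for a power law
    dN_dMdV = n_ps(masses) * p_grh(masses)  # Eq. (RHMF)
    return P14, dN_dMdV / dP_dM             # Eq. (RHLF)
\end{verbatim}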
Fig.~\ref{RHLF_flat} allows one to readily appreciate the different behavior of the RHLFs in the case of a super-linear scaling of B with M, $b=1.7$ (Fig.~\ref{RHLF_flat}, Panel a)), and of a sub-linear scaling, $b=0.9$ (Fig.~\ref{RHLF_flat}, Panel b)): the evolution with redshift in Panel b) (sub--linear case) is faster than that in Panel a) (super--linear case). This difference is driven by the probability to form GRHs as a function of redshift in the two cases: in the super--linear case the probability to form GRHs does not decrease rapidly with $z$, while a rapid decrease of this probability is obtained in the sub--linear case (see also Figs.~\ref{PMz_1p7}, ~\ref{PMz_0p9}, Sect.~6). In Fig.~\ref{RHLF_flat_z} we report the RHLFs obtained from our calculations by adopting the selected set of configurations given in Tab.~\ref{choose_value} (the colour code is the same as in Fig.~\ref{regions}). The combination of these configurations defines a bundle of expected RHLFs which determines the range of the possible RHLFs. \begin{figure*} \resizebox{\hsize}{!}{\includegraphics{conteggi_binz_finali.ps}} \caption[]{Expected total number of GRHs above a given radio flux in different redshift bins: panel a) above 5 mJy; panel b) above 30 mJy. In both panels the colour code is the same as in Fig.\ref{RHLF_noi_ens}.} \label{conteggi_z06} \end{figure*} \noindent All the calculations are performed for the corresponding range of values of $\eta_t$ which are consistent with the observed probability to form radio halos at $z\lesssim 0.2$. One finds that with increasing redshift the bundle of the RHLFs broadens along the $n_{H}(P)\,\times\,P$ axis. This is again due to the different evolutions with $z$ of the probability to form GRHs in the super--linear and sub--linear cases. \section{Number counts of giant radio halos} In this Section we derive the expected number counts of giant radio halos (RHNCs). This will allow us to perform a first comparison between the model expectations and the counts of GRHs which can be derived from present observations, but also to derive expectations for future observations. As for the case of the RHLFs, in calculating the RHNCs we adopt the configurations of parameters which reproduce the observed probabilities of GRHs at $z<0.2$. However, we point out that the fact that our expectations are consistent with the observed probability to form GRHs at $z\lesssim 0.2$ does not imply that they should also be consistent with the observed flux distribution of GRHs in the same redshift interval. Given the RHLFs ($dN_H(z)/dP_{1.4}dV$), the number of GRHs with $f> f_{1.4}$ is given by: \begin{equation} N_{H}(>f_{1.4})=\int_{z=0}^{z}dz' ({{dV}\over{dz'}}) \int_{P_{1.4}(f_{1.4},z')}{{dN_H(P_{1.4},z')}\over{dP_{1.4}\,dV}}dP_{1.4} \label{RHNC} \end{equation} \noindent where $dV/dz$ is the comoving volume element in the $\Lambda$CDM cosmology (e.g., Carroll, Press and Turner 1992); the radio flux and the radio power are related by $P_{1.4}=4\pi\,d_L^2\,f_{1.4}$, with $d_L$ the luminosity distance (we neglect the K-correction since the slope of the spectrum of radio halos is close to unity).
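\noindent As an illustration of how Eq.~\ref{RHNC} is evaluated in practice, the short Python sketch below integrates a toy RHLF over redshift, computing $d_L(z)$ and $dV/dz$ numerically for a flat $\Lambda$CDM cosmology ($\Omega_o=0.3$, $h=0.7$). The RHLF adopted here is a schematic placeholder, not one of the model RHLFs of Fig.~\ref{RHLF_flat_z}.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

c, H0, Om, OL = 2.998e5, 70.0, 0.3, 0.7    # c [km/s], H0 [km/s/Mpc]
DH = c / H0                                 # Hubble distance [Mpc]
MPC = 3.086e22                              # metres per Mpc
E = lambda z: np.sqrt(Om * (1 + z)**3 + OL)

def d_C(z):                                 # comoving distance [Mpc]
    return DH * quad(lambda zz: 1.0 / E(zz), 0.0, z)[0]

def dV_dz(z):                               # full-sky comoving volume element [Mpc^3]
    return 4.0 * np.pi * DH * d_C(z)**2 / E(z)

def rhlf(P, z):                             # toy RHLF [Mpc^-3 (W/Hz)^-1]
    return 1.0e-7 * (P / 1e24)**(-2.5) * np.exp(-z / 0.3) / P

def N_above(f_mJy, zmax=0.7):               # Eq. (RHNC)
    f_SI = f_mJy * 1.0e-29                  # 1 mJy = 1e-29 W m^-2 Hz^-1
    def integrand(z):
        dL = (1 + z) * d_C(z) * MPC         # luminosity distance [m]
        Pmin = 4.0 * np.pi * dL**2 * f_SI   # P_1.4 = 4 pi d_L^2 f_1.4
        Pgrid = np.logspace(np.log10(Pmin), 26.5, 300)
        return dV_dz(z) * np.trapz(rhlf(Pgrid, z), Pgrid)
    return quad(integrand, 1.0e-3, zmax, limit=60)[0]

print(N_above(5.0))    # number of GRHs brighter than 5 mJy at 1.4 GHz
\end{verbatim}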
As a first step, we use Eq.~\ref{RHNC} to calculate the number of expected GRHs above a given radio flux at 1.4 GHz from a full sky coverage up to $z\lesssim 0.2$ and compare the results with the number counts derived by making use of present-day observations (Fig.~\ref{conteggi_z02}, the colour code is that of Fig.\ref{RHLF_noi_ens}). Calculations in Fig.~\ref{conteggi_z02} are obtained by using the full bundle of RHLFs obtained in the previous Section (Fig.~\ref{RHLF_flat_z}). The black points are obtained by making use of the radio data from the analysis of the radio survey NVSS by Giovannini et al. (1999); the normalization of the counts is scaled to correct for the incompleteness due to the sky-coverage in Giovannini et al. ($\sim\,2\,\pi$ sr). The NVSS has a 1$\sigma$ level at 1.4 GHz equal to 0.45 mJy/beam (beam=45$\times$45 arcsec, Condon et al. 1998). By adopting a typical size of GRH of the order of 1 Mpc, the surface brightness of the objects which populate the peak of the RHLFs ($\sim 10^{24}$ W/Hz) at z$\sim$0.15 is expected to fall below the 2$\sigma$ limit of the NVSS. These GRHs have a flux of about 20 mJy; thus below this flux the NVSS becomes poorly efficient in catching the bulk of GRHs in the redshift bin z=0--0.2 and a fair comparison with observations is not possible. For larger fluxes we find that the expected number counts are in excellent agreement with the counts obtained from the observations. We note that, assuming a superlinear scaling of $B$ with cluster mass, up to 30-40 GRHs at $z<0.2$ are expected to be discovered with future deeper radio surveys. On the other hand, the number of these GRHs in the case of a sublinear scaling should only be a factor of $\sim 2$ larger than that of presently known halos. As a second step, we calculate (Fig.\ref{conteggi_z06_f}) the whole sky number of GRHs expected up to $z=0.7$ (the probability to form GRHs at $z\,>\,0.7$ is negligible). We note that the number counts of GRHs increase down to a radio flux \begin{figure*} \resizebox{\hsize}{!}{\includegraphics{lofar.ps}} \caption[]{{\bf a)} The occurrence of GRHs as a function of the cluster mass in the redshift bins 0-0.1 (solid lines) and 0.4-0.5 (dashed lines) is reported for 150 MHz (thick lines) and for 1.4 GHz (thin lines). {\bf b)} Mass functions of GRHs in the redshift bins 0-0.1 (solid lines) and 0.4-0.5 (dashed lines) are reported for 150 MHz (thick lines) and for 1.4 GHz (thin lines). {\bf c)} Comparison between the expected RHNCs above a given radio flux at 1.4 GHz (thin lines) and at 150 MHz (thick lines) from a full sky coverage up to $z\le\,0.6$. \\ All the calculations have been performed assuming: b=1.5, $B_{<M>}=1.9 \mu$G and $\eta_t=0.2$. } \label{lofar} \end{figure*} \noindent of $f_{1.4}\sim\,2-3$ mJy and then flatten due to the strong (negative) evolution of the RHLFs (Fig.~\ref{RHLF_flat_z}). We note that the expected total number of GRHs above 1 mJy at 1.4 GHz is of the order of $\sim\,100$, depending on the scaling of the magnetic field with cluster mass. Finally we calculate the expected number counts of GRHs above a given radio flux in different redshift bins. This allows us to catch the redshift at which the bulk of GRHs is expected. In Fig.~\ref{conteggi_z06} we report the RHNCs integrated above 5 mJy (Panel a)) and above 30 mJy (Panel b)). We note that the bulk of GRHs is expected in the redshift interval $0.1-0.3$ and this does not strongly depend on the flux limit.
We note that the relatively high value of this redshift range is also due to the presence of the low radio power cut-off in the RHLFs, which suppresses the expected number of low power GRHs. On the other hand, at radio fluxes $>$ 30 mJy the contribution from higher redshift decreases, since the requested radio luminosities at these redshifts correspond to masses of the parent clusters which are above the high--mass cut-off of the cluster mass function. \section{Towards low radio frequencies: model expectations at 150 MHz} Due to their steep radio spectra, GRHs are ideal targets for upcoming low-frequency radio telescopes, such as LOFAR and LWA. In this section we present calculations of the statistics of GRHs at 150 MHz derived from the electron reacceleration model. For simplicity, we present these results only for one set of the parameters in the plane ($B_{<M>},b$) (Fig.\ref{regions}): a super-linear case (b=1.5, $B_{<M>}=1.9 \mu$G) (see Sec.3). First, we calculate the probability to have GRHs at $\sim 150$ MHz as a function of the cluster mass following the procedure outlined in Sec.4.1 and requiring a break frequency $\nu_b\gtrsim 20\,$ MHz to account for the observation frequency. In Fig.\ref{lofar}a we report the probability to have GRHs as a function of virial mass in two redshift bins at 1.4 GHz (thin lines) and at 150 MHz (thick lines). As expected, the probability at 150 MHz is substantially larger than that calculated at 1.4 GHz, particularly at higher redshifts and for less massive clusters. One of the main findings of our work is the presence of a cut-off in the RHLFs at low radio powers (see Sec.5), which reflects the drop of the probability to form GRHs as the cluster mass decreases. In Fig.\ref{lofar}b we plot the mass functions of radio halos (RHMFs) at 1.4 GHz and at 150 MHz in two redshift bins (see caption of Fig.\ref{lofar}). We note that the number density of GRHs is increased by only a factor $\sim 2$ for $M> 2\cdot 10^{15}\,M_{\odot}$, but by more than one order of magnitude for $M\le\,10^{15}\,M_{\odot}$. The most interesting feature is again the presence of a low mass cut-off in the RHMFs at 150 MHz, which however is shifted by a factor $\sim 2$ towards smaller masses with respect to the case at 1.4 GHz. This is related to the fact that a smaller energy density in the form of turbulence is sufficient to boost GRHs at lower frequencies, and this allows the formation of GRHs also in slightly smaller clusters, which indeed are expected to be less turbulent (CB05; see also Vazza et al. 2006). Finally, in order to obtain estimates for the RHLFs and RHNCs at 150 MHz, we tentatively assume the same $P_{R}-M$ scaling found at 1.4 GHz, scaled to 150 MHz with an average spectral index $\alpha_{\nu}\sim 1.2$, and follow the approach outlined in Secs.~5 and 6. In Fig.\ref{lofar}c we report the expected integral number counts of radio halos from a full sky coverage above a given radio flux at 1.4 GHz (thin lines) and at 150 MHz (thick lines) up to a redshift $z\sim\,0.6$. The expected number of GRHs at 150 MHz is a factor of $\sim 10$ larger than the number expected at 1.4 GHz, with the bulk of GRHs at fluxes $\ge$ a few mJy. In the near future LOFAR will be able to detect diffuse emission on the Mpc scale at 150 MHz down to these fluxes, and this would be sufficient to catch the bulk of these GRHs (a more detailed study will be presented in a forthcoming paper).
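\noindent For orientation, with the assumed average spectral index $\alpha_{\nu}\sim 1.2$ the tentative rescaling of the $P_{R}-M$ correlation from 1.4 GHz to 150 MHz amounts to a boost in monochromatic radio power of
\begin{equation*}
\frac{P_{150}}{P_{1400}}=\left(\frac{1400\,{\rm MHz}}{150\,{\rm MHz}}\right)^{\alpha_{\nu}}\simeq (9.3)^{1.2}\simeq 15,
\end{equation*}
so that, at a fixed distance, a given halo is expected to be more than an order of magnitude brighter at 150 MHz than at 1.4 GHz.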
\section{Summary and Discussion} The observed correlations between radio and X-ray properties of galaxy clusters provide useful tools for constraining the physical parameters that are relevant to the reacceleration models for the onset of giant radio halos (GRHs). Our analysis is based on the calculations of Cassano \& Brunetti (2005; CB05), which assume that a seed population of relativistic electrons in the ICM is reaccelerated by magnetosonic (MS) waves injected by relatively recent merger events. To this end we have collected from the literature a sample of 17 GRH clusters, for all of which but one (A2254) homogeneous radio and X-ray data are available, as summarized in Tabs.~1 \& 2. Based on the relationships derived in the CB05 paper, we have been able to constrain the (likely) dependence of the average magnetic field intensity (B) on the cluster mass, under the assumption that B can be parameterized as $B =B_{<M>} (M/<M>)^b$ (with $B_{<M>}$ the average field intensity of a cluster of mean mass $<M>=1.6\times\,10^{15}\,M_{\odot}$ and b positive). This is an important achievement because both the emitted synchrotron spectrum and the losses depend critically on the field intensity. Following the CB05 approach, the merger events are obtained in the statistical scenario provided by the extended Press \& Schechter formalism that describes the hierarchical formation of galaxy clusters. The main results of our study can be summarized as follows: \begin{itemize} \item[$\bullet$] {\it Observed correlations} \\ In Sect.2 we derive the correlations between the radio power at 1.4 GHz ($P_{1.4}$) and the X-ray luminosity (0.1-2.4 keV), ICM temperature and cluster mass. Most important for the purpose of the present investigation is the $P_{1.4}-M_v$ correlation, which has been derived by combining the $L_X-M_v$ correlation obtained for a large statistical sample of galaxy clusters (the HIFLUGCS sample plus our sample) with the $P_{1.4}-L_X$ correlation derived for our sample of GRHs. This procedure allows us to avoid the well known uncertainties and limits which are introduced in measuring the masses of small samples of galaxy clusters, especially in the case of merging systems. We find a value of the slope $\alpha_M=2.9 \pm 0.4$ ($P_{1.4} \propto M_v^{\alpha_M}$). A steep correlation of the synchrotron luminosity with the ICM temperature is also found, although with a large statistical error in the determination of the slope: $\alpha_T=6.4 \pm 1.6$ ($P_{1.4} \propto T^{\alpha_T}$). In Sec.2 we have also shown that, at least in the case of high X-ray luminosity clusters ($L_X\gtrsim\,5\cdot 10^{44}$ erg/s), the above trends are unlikely to be driven by selection effects in the present observations.\\ \item[$\bullet$] {\it Constraining the magnetic field dependence on the mass} \\ A correlation between the radio power and the cluster virial mass is naturally expected in the framework of electron acceleration models. This relationship, discussed in Sec. 3.1 (Eq.\ref{setti}), can reproduce the observed correlation for viable values of the physical parameters. For instance, in the case $B\ll B_{cmb}$, one has $P(\nu_o) \propto {M_v}^{a(2 - \Gamma +b)+b}$ and the exponent agrees with the observed one ($\alpha_M\sim 3$) by adopting a typical slope of the radio spectrum $a=1-1.2$ and a sub--linear scaling $b\sim 0.6-0.8$.
A systematic comparison of the expected correlations between the radio power and the cluster mass with the observed one (Sect.3.1 \& 2) allows the definition of a permitted region of the ($B_{<M>}$,b) parameter space, where a lower bound $B_{<M>}=0.2\, \mu$G is obtained in order not to overproduce, via IC scattering of the CMB photons, the hard X-ray fluxes observed in the direction of a few GRHs (Sect. 3.3 and Fig.6). A lower bound at $b\sim 0.5-0.6$ is found, and a relatively narrow range of $B_{<M>}$ values is allowed for a fixed b. The boundaries of the allowed region, aside from the lower bound of $B_{<M>}$, are essentially sensitive to the limits from the $P_{1.4}-M_v$ correlation. \noindent A super--linear scaling of $B$ with mass, as expected from MHD simulations (Dolag et al.~2004), falls within the allowed region. \noindent The values of the average magnetic field intensity in the superlinear case are close to (slightly smaller than) those obtained from the Faraday rotation measurements (e.g., Govoni \& Feretti 2004), which, however, generally sample regions more central than those spanned by GRHs. Future observations will allow us to better constrain the radio--X-ray correlations and thus to better define the region of the model parameters.\\ \item[$\bullet$] {\it Probability to form GRHs} \\ In Sect.4 we report on extensive calculations aimed at constraining $\eta_t$, the fraction of the available energy in MS waves, which is required to match the observed occurrence of GRHs at redshifts $z\le 0.2$ (Fig.7). By adopting a representative sampling of the allowed ($B_{<M>}$,$b$) parameter space (Fig.6) we find $0.15 \le \eta_t\le 0.44$: the larger values are obtained for $B_{<M>}$ approaching the lower bound of the allowed region, because of the larger acceleration efficiency necessary to boost electrons to higher energies in order to obtain a fixed fraction of clusters with GRHs. With an appropriate $\eta_t$ value for each set of ($B_{<M>}$,b) parameters we can calculate the probability of occurrence of GRHs at larger redshifts for which observational data are not available. This probability depends on the merging history of clusters and on the relative importance of the synchrotron and IC losses, and shows a somewhat complicated behavior with cluster mass and redshift. The maximum value of this probability at a given redshift is found for a cluster mass $M_*$ (Eq.\ref{m*}) which marks the transition between the Compton and the synchrotron dominated phases. \noindent In the case of sublinear scaling of the magnetic field with cluster mass (b$\sim$0.6--0.9) the allowed values of the strength of the magnetic field are relatively small (Fig.~\ref{regions}), the value of $M_*$ is large and the IC losses are always dominant for the mass range of clusters with known GRHs. As a consequence the probability to have GRHs increases with cluster mass and decreases with redshift (Fig.~\ref{PMz_0p9}). On the other hand, superlinear scalings (b$\sim$1.2--1.7) imply relatively large allowed values of $B_{<M>}$ (Fig.~\ref{regions}), and even larger values of the magnetic field for the most massive objects. In this case the value $M_*$ falls within the range of masses spanned by GRH clusters: the predicted fraction of clusters with GRHs increases with mass, then reaches a maximum value at about $M_v \sim M_*$, and finally falls off for larger masses (Fig.~\ref{PMz_1p7}).
In contrast with the case of sublinear scaling, in this case the fraction of the most massive objects with GRHs is expected to slightly increase with redshift, at least up to z=0.2--0.4 (Fig.~\ref{PMz_1p7}), where the bulk of turbulence is injected in a $\Lambda$CDM model (CB05).\\ \item[$\bullet$]{\it Luminosity functions (RHLFs)} \\ In Sect.5 we report the results of extensive calculations following a fair sampling of the ($B_{<M>}$,b) allowed region, as summarized in Tab. 3; this essentially allows a full coverage of all possible RHLFs given the present correlations at 1$\sigma$. We find that, despite the large uncertainties in the ($B_{<M>}$,b) region, the predicted local RHLFs are confined to a rather narrow bundle, the most characteristic common feature being the presence of a flattening/cut-off at radio powers below about $10^{24}$ W/Hz at 1.4 GHz (Fig.\ref{RHLF_noi_ens}). The fraction of GRHs with 1.4 GHz luminosity below $\sim 5 \times 10^{22}$W Hz$^{-1}$ h$_{70}^{-2}$, a factor of $\sim 5$ smaller than the luminosity of the least powerful GRH (A2256, z=0.0581) known so far, is negligible. This characteristic shape of the RHLFs, obtained in our paper for the first time, represents a unique prediction of particle acceleration models, and does not depend on the adopted physical details of the particle acceleration mechanism. It is due to the decrease of the efficiency of particle acceleration in less massive clusters, which is related to three major reasons (see CB05): \begin{itemize} \item[{\it i})] smaller clusters are less turbulent than larger ones since the turbulent energy is expected to scale with the thermal one (CB05; see also Vazza et al. 2006); \item[{\it ii})] turbulence is typically injected in large Mpc regions in more massive clusters and thus these are favoured for the formation of GRHs (CB05); \item[{\it iii})] since in the present paper we found $B\propto M^b$ with $b\gtrsim\,0.5$, higher energy electrons should be accelerated in smaller clusters to emit synchrotron radiation at a given frequency. \end{itemize} \noindent Deep radio surveys with future radio telescopes (LOFAR, LWA, SKA) are required to test the presence of this cut-off/flattening in the luminosity function of the GRHs. The predicted evolution of the RHLFs with redshift is illustrated in Fig.~\ref{RHLF_flat_z}: the comoving number density of GRHs decreases with redshift due to the evolutions of the cluster mass function and of the probability to form GRHs. The decrease with redshift of the RHLFs calculated by adopting a sublinear scaling of the magnetic field with cluster mass is faster than that obtained with a superlinear scaling, causing a spread in the RHLF bundle with z.\\ \item[$\bullet$]{\it Number counts (RHNCs)} \\ In Sec.6 we have derived the integral number counts of GRHs at 1.4 GHz. We find that the number counts predicted for the same set of RHLFs discussed in Sec.5 generally agree with those derived from the NVSS at the limit of this survey and within $z=0.2$ (Fig.\ref{conteggi_z02}). The flattening of the counts below $\sim 50-60$ mJy is due both to the combination of the low power cut-offs of the RHLFs with the redshift limit, and to the RHLF evolution with redshift. On the other hand, past calculations which assume a fixed fraction of GRHs with cluster mass predict an increasing number of sources at lower fluxes (e.g., En\ss lin \& R\"ottgering, 2002).
GRHs around the peak of our LFs ($P_{1.4 GHz} \sim 10^{24}$ W/Hz) at z$\sim$0.15 would have fluxes below about 20 mJy, which however is below the sensitivity limit of the NVSS for this type of object. We estimate that the number of GRHs below this flux could be up to 30-40 (whole sky, $z\le 0.2$) if superlinear scalings of B with cluster mass hold. The predicted number of GRHs (Fig.\ref{conteggi_z06_f}) (whole Universe) could be up to $\gtrsim 100$ if a superlinear scaling of B with cluster mass holds, while a sublinear scaling would give a number 2-3 times smaller. A substantial number of these objects would be found also down to a flux of a few mJy at 1.4 GHz in the case of a superlinear scaling, while in the case of sublinear scalings the number of GRHs below about 10 mJy would be negligible. We also find that the bulk of GRHs is expected at $z \sim$0.1--0.3 (Fig.\ref{conteggi_z06}). It should be mainly composed of those RHs populating the peak of the RHLFs, i.e. objects similar to (or slightly more powerful than) the GRH in the Coma cluster.\\ \item[$\bullet$]{\it Toward expectations at low radio frequencies} \\ In Sec.7 we have extended our estimates to the case of low frequency observations which will be made with upcoming instruments, such as LOFAR and LWA. Less energetic electrons emit at these frequencies and thus - in the framework of the particle re-acceleration scenario - the efficiency of producing GRHs in galaxy clusters is expected to be higher than that of GRHs emitting at 1.4 GHz. By presenting the analysis for a representative set of parameters, we have shown that the probability to have GRHs emitting at 150 MHz is significantly larger than that of those emitting at 1.4 GHz, particularly in the mass range $\sim\,5\cdot10^{14}-1.5\cdot10^{15}\,M_{\odot}$. Consequently, the low mass cut-off in the RHMFs is shifted down by a factor of $\sim$ 2. This is naturally expected and is due to the fact that slightly less turbulent systems are able to generate GRHs at lower frequencies. We have also estimated that GRHs at low frequencies might outnumber those at 1.4 GHz by at least one order of magnitude. We venture to predict that LOFAR is likely to discover $\gtrsim 10^3$ (all sky) GRHs down to a flux of a few mJy at 150 MHz. \end{itemize} \section*{Acknowledgments} RC warmly thanks S.Ettori and C.Lari for useful discussions on the statistical analysis. We acknowledge D.Dallacasa, K.Dolag and L.Feretti for comments on the manuscript, and T.Ensslin for useful discussions and for kindly providing the Local RHLFs in Fig.~\ref{RHLF_noi_ens}. The anonymous referee is acknowledged for useful comments. RC acknowledges the MPA in Garching for the hospitality during the preparation of this paper. This work is partially supported by MIUR under grant PRIN2004.
\section{Introduction} Exploration through the exact solution of models has a long-standing tradition in mathematical physics. Empirically, exact solvability is possible in the presence of symmetries, which come in various guises and which are described by a variety of mathematical structures. In many cases, exact solutions are expressed in terms of special functions, whose properties encode the symmetries of the systems in which they arise. This can be represented by the following virtuous circle: \begin{equation*} \xymatrix{ & \text{Exact solvability} & \\ \text{Symmetries} \ar@/^/@{<->}[ur]& & \text{Special functions}\ar@/_/@{<->}[ul] \\ & \text{Algebraic structures} \ar@/_/@{<->}[ur]\ar@/^/@{<->}[ul] & } \end{equation*} The classical path is the following: start with a model, find its symmetries, determine how these symmetries are mathematically described, work out the representations of that mathematical structure and obtain its relation to special functions to arrive at the solution of the model. However, one can profitably start from any node on this circle. For instance, one can identify and characterize new special functions, determine the algebraic structure they encode, look for models that have this structure as symmetry algebra and proceed to the solution. In this paper, the following path will be taken: \begin{equation*} \text{Algebra}\longrightarrow \text{Orthogonal polynomials}\longrightarrow \text{Symmetries}\longrightarrow \text{Exact solutions} \end{equation*} The outline of the paper is as follows. In section 2, the Bannai-Ito algebra is introduced and some of its special cases are presented. In section 3, a realization of the Bannai-Ito algebra in terms of discrete shift and reflection operators is exhibited. The Bannai-Ito polynomials and their properties are discussed in section 4. In section 5, the Bannai-Ito algebra is used to derive the recurrence relation satisfied by the Bannai-Ito polynomials. In section 6, the paraboson algebra and the $sl_{-1}(2)$ algebra are introduced. In section 7, the realization of $sl_{-1}(2)$ in terms of Dunkl operators is discussed. In section 8, the Racah problem for $sl_{-1}(2)$ and its relation with the Bannai-Ito algebra are examined. A superintegrable model on the 2-sphere with Bannai-Ito symmetry is studied in section 9. In section 10, a Dunkl-Dirac equation on the 2-sphere with Bannai-Ito symmetry is discussed. A list of open questions is provided in lieu of a conclusion. \section{The Bannai-Ito algebra} Throughout the paper, the notation $[A,B]=AB-BA$ and $\{A,B\}=AB+BA$ will be used. Let $\omega_1$, $\omega_2$ and $\omega_3$ be real parameters. The Bannai-Ito algebra is the associative algebra generated by $K_1$, $K_2$ and $K_3$ together with the three relations \begin{align} \label{BI-Algebra} \{K_1,K_2\}=K_3+\omega_3,\quad \{K_2,K_3\}=K_1+\omega_1,\quad \{K_3,K_1\}=K_2+\omega_2, \end{align} or $\{K_i,K_j\}=K_k+\omega_{k}$, with $(ijk)$ a cyclic permutation of $(1,2,3)$. The Casimir operator \begin{align*} Q=K_1^2+K_2^2+K_3^2, \end{align*} commutes with every generator; this property is easily verified with the commutator identity $[AB,C]=A\{B,C\}-\{A,C\}B$. Let us point out two special cases of \eqref{BI-Algebra} that have been considered previously in the literature.
\begin{enumerate} \item $\omega_1=\omega_2=\omega_3=0$ \end{enumerate} The special case with defining relations \begin{align*} \{K_1,K_2\}=K_3,\quad \{K_2,K_3\}=K_1,\quad \{K_3,K_1\}=K_2, \end{align*} is sometimes referred to as the \emph{anticommutator spin algebra} \cite{Arik-2003, Gorodnii-1984}; representations of this algebra were examined in \cite{Arik-2003, Gorodnii-1984, Brown-2013, Silvestrov-1992}. \begin{enumerate}\setcounter{enumi}{1} \item $\omega_1=\omega_2=0\neq \omega_3$ \end{enumerate} In recent work on the construction of novel finite oscillator models \cite{Jafarov-2011-05, VDJ-2011}, E. Jafarov, N. Stoilova and J. Van der Jeugt introduced the following extension of $\mathfrak{u}(2)$ by an involution $R$ ($R^2=1$): \begin{gather*} [I_3,R]=0,\quad \{I_1,R\}=0,\quad \{I_2,R\}=0, \\ [I_3,I_1]=i I_2,\quad [I_2,I_3]=i I_1,\quad [I_1,I_2]=i (I_3+\omega_3 R). \end{gather*} It is easy to check that with \begin{align*} K_1=i I_1 R,\quad K_2=I_2,\quad K_3=I_3 R, \end{align*} the above relations are converted into \begin{align*} \{K_1,K_3\}=K_2,\quad \{K_2,K_3\}= K_1,\quad \{K_1,K_2\}=K_3+\omega_3. \end{align*} \section{A realization of the Bannai-Ito algebra with shift and reflection operators} Let $T^{+}$ and $R$ be defined as follows: \begin{align*} T^{+}f(x)=f(x+1),\quad Rf(x)=f(-x). \end{align*} Consider the operator \begin{align} \label{K1-Hat} \widehat{K}_1=F(x) (1-R)+G(x)(T^{+}R-1)+h,\qquad h=\rho_1+\rho_2-r_1-r_2+1/2, \end{align} with $F(x)$ and $G(x)$ given by \begin{gather*} F(x)=\frac{(x-\rho_1)(x-\rho_2)}{x},\quad G(x)=\frac{(x-r_1+1/2)(x-r_2+1/2)}{x+1/2}, \end{gather*} where $\rho_1, \rho_2,r_1, r_2$ are four real parameters. It can be shown that $\widehat{K}_1$ is the most general operator of first order in $T^{+}$ and $R$ that stabilizes the space of polynomials of a given degree \cite{Tsujimoto-2012-03}. That is, for any polynomial $Q_{n}(x)$ of degree $n$, $[\widehat{K}_1 Q_{n}(x)]$ is also a polynomial of degree $n$. Introduce \begin{align} \label{K2-Hat} \widehat{K}_2=2x+1/2, \end{align} which is essentially the ``multiplication by $x$'' operator and \begin{align} \label{K3-Hat} \widehat{K}_3\equiv \{\widehat{K}_1,\widehat{K}_2\}-4(\rho_1\rho_2-r_1r_2). \end{align} It is directly verified that $\widehat{K}_1$, $\widehat{K}_2$ and $\widehat{K}_3$ satisfy the anticommutation relations \begin{align} \label{BI-Hat} \{\widehat{K}_1,\widehat{K}_2\}=\widehat{K}_3+\widehat{\omega}_3,\quad \{\widehat{K}_2,\widehat{K}_3\}=\widehat{K}_1+\widehat{\omega}_1,\quad \{\widehat{K}_3,\widehat{K}_1\}=\widehat{K}_2+\widehat{\omega}_2, \end{align} where the structure constants $\widehat{\omega}_1$, $\widehat{\omega}_2$ and $\widehat{\omega}_3$ read \begin{align} \label{Omega-Hat} \widehat{\omega}_1=4(\rho_1\rho_2+r_1r_2),\quad \widehat{\omega}_2=2(\rho_1^2+\rho_2^2-r_1^2-r_2^2),\quad \widehat{\omega}_3=4(\rho_1\rho_2-r_1r_2). \end{align} The operators $\widehat{K}_1$, $\widehat{K}_2$ and $\widehat{K}_3$ thus realize the Bannai-Ito algebra. In this realization, the Casimir operator acts as a multiple of the identity; one has indeed \begin{align*} \widehat{Q}=\widehat{K}_1^2+\widehat{K}_2^2+\widehat{K}_3^2=2(\rho_1^2+\rho_2^2+r_1^2+r_2^2)-1/4. \end{align*} \section{The Bannai-Ito polynomials} Since the operator \eqref{K1-Hat} preserves the space of polynomials of a given degree, it is natural to look for its eigenpolynomials, denoted by $B_{n}(x)$, and their corresponding eigenvalues $\lambda_{n}$.
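\noindent Before solving this eigenvalue problem, let us note that the relations \eqref{BI-Hat} and the value of the Casimir operator are easily confirmed symbolically. The following Python/\texttt{sympy} sketch (an illustration of ours, not part of the original derivation) applies the operators \eqref{K1-Hat}--\eqref{K3-Hat} to a test polynomial:
\begin{verbatim}
import sympy as sp

x, rho1, rho2, r1, r2 = sp.symbols('x rho1 rho2 r1 r2')
half = sp.Rational(1, 2)
F = (x - rho1)*(x - rho2)/x
G = (x - r1 + half)*(x - r2 + half)/(x + half)
h = rho1 + rho2 - r1 - r2 + half

def K1(f):  # F(x)(1-R) + G(x)(T+R - 1) + h, with Rf(x)=f(-x), T+f(x)=f(x+1)
    return sp.cancel(F*(f - f.subs(x, -x)) + G*(f.subs(x, -x - 1) - f) + h*f)

def K2(f):  # multiplication by 2x + 1/2
    return sp.expand((2*x + half)*f)

def K3(f):  # {K1, K2} - 4(rho1 rho2 - r1 r2)
    return sp.expand(K1(K2(f)) + K2(K1(f)) - 4*(rho1*rho2 - r1*r2)*f)

w1 = 4*(rho1*rho2 + r1*r2)
w2 = 2*(rho1**2 + rho2**2 - r1**2 - r2**2)
Q = 2*(rho1**2 + rho2**2 + r1**2 + r2**2) - sp.Rational(1, 4)

f = x**3 + 2*x - 1                      # an arbitrary test polynomial
assert sp.expand(K2(K3(f)) + K3(K2(f)) - K1(f) - w1*f) == 0
assert sp.expand(K3(K1(f)) + K1(K3(f)) - K2(f) - w2*f) == 0
assert sp.expand(K1(K1(f)) + K2(K2(f)) + K3(K3(f)) - Q*f) == 0
\end{verbatim}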
We use the following notation for the generalized hypergeometric series \cite{Andrews_Askey_Roy_1999} \begin{align*} {}_rF_{s}\left(\genfrac{}{}{0pt}{}{a_1,\ldots, a_{r}}{b_{1},\ldots,b_{s}}\,\Big \rvert \, z\right)=\sum_{k=0}^{\infty}\frac{(a_1)_{k}\cdots (a_{r})_{k}}{(b_1)_{k}\cdots (b_{s})_{k}}\,\frac{z^{k}}{k!}, \end{align*} where $(c)_{k}=c(c+1)\cdots (c+k-1)$, with $(c)_0\equiv 1$, stands for the Pochhammer symbol; note that the above series terminates if one of the $a_{i}$ is a negative integer. Solving the eigenvalue equation \begin{align} \label{BI-Eigen} \widehat{K}_1 B_{n}(x)=\lambda_{n}B_{n}(x),\qquad n=0,1,2,\ldots \end{align} it is found that the eigenvalues $\lambda_{n}$ are given by \cite{Tsujimoto-2012-03} \begin{align} \label{BI-Eigenvalues} \lambda_{n}=(-1)^{n}(n+h), \end{align} and that the polynomials have the expression \begin{align} \frac{B_{n}(x)}{c_{n}}= \begin{cases} {}_{4}F_{3}\left(\genfrac{}{}{0pt}{}{-\frac{n}{2},\,\frac{n+1}{2}+h,\, x-r_1+1/2,\,-x-r_1+1/2}{1-r_1-r_2,\, \rho_1-r_1+\frac{1}{2},\, \rho_2-r_1+\frac{1}{2}}\,\Big \rvert \, 1\right) & \\[.5cm] \quad +\frac{(\frac{n}{2})(x-r_1+\frac{1}{2})}{(\rho_1-r_1+\frac{1}{2})(\rho_2-r_1+\frac{1}{2})}\;{}_{4}F_{3}\left(\genfrac{}{}{0pt}{}{1-\frac{n}{2},\,\frac{n+1}{2}+h,\,x-r_1+3/2,\,-x-r_1+1/2}{1-r_1-r_2,\,\rho_1-r_1+\frac{3}{2},\, \rho_2-r_1+\frac{3}{2}}\,\Big\rvert \, 1\right) & \text{$n$ even}, \\[.7cm] {}_4F_{3}\left(\genfrac{}{}{0pt}{}{-\frac{n-1}{2},\, \frac{n}{2}+h,\, x-r_1+\frac{1}{2},\, -x-r_1+\frac{1}{2}}{1-r_1-r_2,\, \rho_1-r_1+\frac{1}{2},\, \rho_2-r_1+\frac{1}{2}}\,\Big\rvert\,1\right) & \\[.5cm] \quad - \frac{(\frac{n}{2}+h)(x-r_1+\frac{1}{2})}{(\rho_1-r_1+\frac{1}{2})(\rho_2-r_1+\frac{1}{2})} \; {}_4F_{3}\left(\genfrac{}{}{0pt}{}{-\frac{n-1}{2},\, \frac{n+2}{2}+h,\, x-r_1+\frac{3}{2},\, -x-r_1+\frac{1}{2}}{1-r_1-r_2,\, \rho_1-r_1+\frac{3}{2},\, \rho_2-r_1+\frac{3}{2}}\,\Big \rvert \, 1 \right) & \text{$n$ odd}, \end{cases} \label{BI-OPs} \end{align} where the coefficient \begin{align*} c_{2n+p}&=(-1)^{p}\frac{(1-r_1-r_2)_{n}(\rho_1-r_1+1/2,\rho_2-r_1+1/2)_{n+p}}{(n+h+1/2)_{n+p}},\qquad p\in \{0,1\}, \end{align*} with the notation $(a,b)_{n}\equiv (a)_{n}(b)_{n}$, ensures that the polynomials $B_{n}(x)$ are monic, i.e. $B_{n}(x)=x^{n}+\mathcal{O}(x^{n-1})$. The polynomials \eqref{BI-OPs} were first written down by Bannai and Ito in their classification of the orthogonal polynomials satisfying the \emph{Leonard duality} property \cite{Leonard-1982-07, Bannai-1984}, i.e. polynomials $p_{n}(x)$ satisfying both \begin{itemize} \item A 3-term recurrence relation with respect to the degree $n$, \item A 3-term difference equation with respect to a variable index $s$. \end{itemize} The identification of the defining eigenvalue equation \eqref{BI-Eigen} of the Bannai-Ito polynomials in \cite{Tsujimoto-2012-03} has made it possible to develop their theory. That they obey a three-term difference equation stems from the fact that there are grids such as \begin{align*} x_{s}=(-1)^{s}(s/2+a+1/4)-1/4, \end{align*} for which operators of the form \begin{align*} H=A(x) R+B(x) T^{+}R+C(x), \end{align*} are tridiagonal in the basis $f(x_s)$ \begin{align*} Hf(x_{s})= \begin{cases} B(x_{s}) f(x_{s+1})+ A(x_{s}) f(x_{s-1})+C(x_{s}) f(x_{s}) & \text{$s$ even}, \\ A(x_{s}) f(x_{s+1})+B(x_{s})f(x_{s-1})+C(x_{s})f(x_{s}) & \text{$s$ odd}. \end{cases} \end{align*} It was observed by Bannai and Ito that the polynomials \eqref{BI-OPs} correspond to a $q\rightarrow -1$ limit of the $q$-Racah polynomials (see \cite{Koekoek-2010} for the definition of $q$-Racah polynomials).
In this connection, it is worth mentioning that the Bannai-Ito algebra \eqref{BI-Hat} generated by the defining operator $\widehat{K}_1$ and the recurrence operator $\widehat{K}_2$ of the Bannai-Ito polynomials can be obtained as a $q\rightarrow -1$ limit of the Zhedanov algebra \cite{Zhedanov-1991-11}, which encodes the bispectral property of the $q$-Racah polynomials. The Bannai-Ito polynomials $B_{n}(x)$ have companions \begin{align*} I_{n}(x)=\frac{B_{n+1}(x)-\frac{B_{n+1}(\rho_1)}{B_{n}(\rho_1)}B_{n}(x)}{x-\rho_1}, \end{align*} called the \emph{complementary} Bannai-Ito polynomials \cite{Genest-2013-02-1}. It has now been understood that the polynomials $B_{n}(x)$ and $I_{n}(x)$ are the ancestors of a rich ensemble of polynomials referred to as ``$-1$ orthogonal polynomials'' \cite{Tsujimoto-2012-03,Genest-2013-02-1, Genest-2013-09-02, Vinet-2011-01,Vinet-2012-05, Tsujimoto-2013-03-01, Vinet-2011}. All polynomials of this scheme are eigenfunctions of first or second order operators of Dunkl type, i.e. operators which involve reflections. \section{The recurrence relation of the BI polynomials from the BI algebra} Let us now show how the Bannai-Ito algebra can be employed to derive the recurrence relation satisfied by the Bannai-Ito polynomials. In order to obtain this relation, one needs to find the action of the operator $\widehat{K}_2$ on the BI polynomials $B_{n}(x)$. Introduce the operators \begin{align} \label{KPM} \begin{aligned} \widehat{K}_{+}=(\widehat{K}_2+\widehat{K}_3)(\widehat{K}_1-1/2)-\frac{\widehat{\omega}_2+\widehat{\omega}_3}{2},\quad \widehat{K}_{-}=(\widehat{K}_2-\widehat{K}_3)(\widehat{K}_1+1/2)+\frac{\widehat{\omega}_2-\widehat{\omega}_3}{2}, \end{aligned} \end{align} where $\widehat{K}_i$ and $\widehat{\omega}_i$ are given by \eqref{K1-Hat}, \eqref{K2-Hat}, \eqref{K3-Hat} and \eqref{Omega-Hat}. It is readily checked using \eqref{BI-Hat} that \begin{align*} \{\widehat{K}_1,\widehat{K}_{\pm}\}=\pm \widehat{K}_{\pm}. \end{align*} One can directly verify that $\widehat{K}_{\pm}$ map polynomials to polynomials. In view of the above, one has \begin{align*} \widehat{K}_1 \widehat{K}_{+} B_{n}(x)=(-\widehat{K}_{+}\widehat{K}_1+\widehat{K}_{+})B_{n}(x)=(1-\lambda_{n})\widehat{K}_{+}B_{n}(x), \end{align*} where $\lambda_{n}$ is given by \eqref{BI-Eigenvalues}. It is also seen from \eqref{BI-Eigenvalues} that \begin{align*} 1-\lambda_{n}= \begin{cases} \lambda_{n-1} & \text{$n$ even}, \\ \lambda_{n+1} & \text{$n$ odd}. \end{cases} \end{align*} It follows that \begin{align*} \widehat{K}_{+}B_{n}(x)= \begin{cases} \alpha_{n}^{(0)}B_{n-1}(x) & \text{$n$ even}, \\ \alpha_{n}^{(1)} B_{n+1}(x) & \text{$n$ odd}. \end{cases} \end{align*} Similarly, one finds \begin{align*} \widehat{K}_{-}B_{n}(x)= \begin{cases} \beta_{n}^{(0)} B_{n+1}(x) & \text{$n$ even}, \\ \beta_{n}^{(1)} B_{n-1}(x) & \text{$n$ odd}. \end{cases} \end{align*} The coefficients \begin{gather*} \alpha_{n}^{(0)}=\frac{2n(\frac{n}{2}+\rho_1+\rho_2)(r_1+r_2-\frac{n}{2})(\frac{n-1}{2}+h)}{n+h-\frac{1}{2}},\quad \alpha_{n}^{(1)}=-4(n+h+1/2), \\ \beta_{n}^{(0)}=4(n+h+1/2),\quad \beta_{n}^{(1)}=\frac{4(\rho_1-r_1+\frac{n}{2})(\rho_2-r_1+\frac{n}{2})(\rho_1-r_2+\frac{n}{2})(\rho_2-r_2+\frac{n}{2})}{n+h-1/2}, \end{gather*} can be obtained from the comparison of the highest order term. Introduce the operator \begin{align} \label{V-1} V=\widehat{K}_{+}(\widehat{K}_1+1/2)+\widehat{K}_{-}(\widehat{K}_1-1/2).
\end{align} From the definition \eqref{KPM} of $\widehat{K}_{\pm}$, it follows that \begin{align} \label{V-2} V=2 \widehat{K}_2(\widehat{K}_1^2-1/4)-\widehat{\omega}_3 \widehat{K}_1-\widehat{\omega}_2/2. \end{align} From \eqref{BI-Eigen}, \eqref{V-1} and the actions of the operators $\widehat{K}_{\pm}$, we find that $V$ is two-diagonal \begin{align} \label{First} V B_{n}(x)= \begin{cases} (\lambda_n+1/2) \alpha_{n}^{(0)} B_{n-1}(x)+(\lambda_{n}-1/2)\beta_{n}^{(0)} B_{n+1}(x) & \text{$n$ even}, \\ (\lambda_n-1/2)\beta_{n}^{(1)} B_{n-1}(x)+(\lambda_{n}+1/2)\alpha_{n}^{(1)} B_{n+1}(x) & \text{$n$ odd}. \end{cases} \end{align} From \eqref{V-2} and recalling the definition \eqref{K2-Hat} of $\widehat{K}_2$, we have also \begin{align} \label{Second} V B_{n}(x)=\left[(\lambda_n^2-1/4)(4x+1)-\widehat{\omega}_3 \lambda_n-\widehat{\omega}_2/2\right]B_{n}(x). \end{align} Upon combining \eqref{First} and \eqref{Second}, one finds that the Bannai-Ito polynomials satisfy the three-term recurrence relation \begin{align*} x\,B_{n}(x)=B_{n+1}(x)+(\rho_1-A_{n}-C_{n}) B_{n}(x)+A_{n-1}C_{n} B_{n-1}(x), \end{align*} where \begin{align} \label{BI-RECU} \begin{aligned} A_{n}&= \begin{cases} \frac{(n+1+2\rho_1-2r_1)(n+1+2\rho_1-2r_2)}{4(n+\rho_1+\rho_2-r_1-r_2+1)} & \text{$n$ even}, \\ \frac{(n+1+2\rho_1+2\rho_2-2r_1-2r_2)(n+1+2\rho_1+2\rho_2)}{4(n+\rho_1+\rho_2-r_1-r_2+1)} & \text{$n$ odd}, \end{cases} \\ C_{n}&= \begin{cases} -\frac{n(n-2r_1-2r_2)}{4(n+\rho_1+\rho_2-r_1-r_2)} & \text{$n$ even}, \\ -\frac{(n+2\rho_2-2r_2)(n+2\rho_2-2r_1)}{4(n+\rho_1+\rho_2-r_1-r_2)} & \text{$n$ odd}. \end{cases} \end{aligned} \end{align} The positivity of the coefficient $A_{n-1}C_{n}$ restricts the polynomials $B_{n}(x)$ to being orthogonal on a finite set of points \cite{Chihara-2011}. \section{The paraboson algebra and $sl_{-1}(2)$} The next realization of the Bannai-Ito algebra will involve $sl_{-1}(2)$; this algebra, introduced in \cite{Tsujimoto-2011-10}, is closely related to the parabosonic oscillator. \subsection{The paraboson algebra} Let $a$ and $a^{\dagger}$ be the generators of the paraboson algebra. These generators satisfy \cite{Green-1953} \begin{align*} [\{a,a^{\dagger}\}, a]=-2a,\quad [\{a,a^{\dagger}\},a^{\dagger}]=2a^{\dagger}. \end{align*} Setting $H=\frac{1}{2}\{a,a^{\dagger}\}$, the above relations amount to \begin{align*} [H, a]=-a,\quad [H, a^{\dagger}]=a^{\dagger}, \end{align*} which correspond to the quantum mechanical equations of an oscillator. \subsection{Relation with $\mathfrak{osp}(1|2)$} The paraboson algebra is related to the Lie superalgebra $\mathfrak{osp}(1|2)$ \cite{Ganchev-1980}. Indeed, upon setting \begin{align*} F_{-}=a,\quad F_{+}=a^{\dagger},\quad E_0=H=\frac{1}{2}\{F_{+}, F_{-}\},\quad E_{+}=\frac{1}{2} F_{+}^2,\quad E_{-}=\frac{1}{2} F_{-}^2, \end{align*} and interpreting $F_{\pm}$ as odd generators, it is directly verified that the generators $F_{\pm}$, $E_{\pm}$ and $E_0$ satisfy the defining relations of $\mathfrak{osp}(1|2)$ \cite{Kac-1977}: \begin{gather*} [E_0, F_{\pm}]=\pm F_{\pm},\quad \{F_{+}, F_{-}\}=2 E_0,\quad [E_0, E_{\pm}]=\pm 2E_{\pm},\quad [E_{-}, E_{+}]=E_0, \\ [F_{\pm}, E_{\pm}]=0,\quad [F_{\pm}, E_{\mp}]=\mp F_{\mp}. \end{gather*} The $\mathfrak{osp}(1|2)$ Casimir operator reads \begin{align*} C_{\mathfrak{osp}(1|2)}=(E_0-1/2)^2-4E_{+}E_{-}-F_{+}F_{-}. \end{align*} \subsection{$sl_{q}(2)$} Consider now the quantum algebra $sl_{q}(2)$. 
It can be presented in terms of the generators $A_{0}$ and $A_{\pm}$ satisfying the commutation relations \cite{Vilenkin-1991} \begin{align*} [A_0,A_{\pm}]=\pm A_{\pm},\quad [A_{-}, A_{+}]=2 \frac{q^{A_0}-q^{-A_0}}{q-q^{-1}}. \end{align*} Upon setting \begin{align*} B_{+}=A_{+} q^{(A_0-1)/2},\quad B_{-}=q^{(A_0-1)/2}A_{-},\quad B_0=A_0, \end{align*} these relations become \begin{align*} [B_0, B_{\pm}]=\pm B_{\pm},\quad B_{-}B_{+}-q B_{+}B_{-}=2\frac{q^{2 B_0}-1}{q^2-1}. \end{align*} The $sl_{q}(2)$ Casimir operator is of the form \begin{align*} C_{sl_{q}(2)}=B_{+}B_{-}q^{-B_0}-\frac{2}{(q^2-1)(q-1)}(q^{B_0-1}+q^{-B_0}). \end{align*} Let $j$ be a non-negative integer. The algebra $sl_{q}(2)$ admits a discrete series representation on the basis $\ket{j,n}$ with the actions \begin{align*} q^{B_0}\ket{j,n}=q^{j+n}\ket{j,n},\qquad n=0,1,2,\ldots. \end{align*} The algebra has a non-trivial coproduct $\Delta: sl_{q}(2)\rightarrow sl_{q}(2)\otimes sl_{q}(2)$ which reads \begin{align*} \Delta(B_0)=B_0\otimes 1+1\otimes B_0,\quad \Delta(B_{\pm})=B_{\pm}\otimes q^{B_0}+1\otimes B_{\pm}. \end{align*} \subsection{The $sl_{-1}(2)$ algebra as a $q\rightarrow -1$ limit of $sl_{q}(2)$} The $sl_{-1}(2)$ algebra can be obtained as a $q\rightarrow -1$ limit of $sl_{q}(2)$. Let us first introduce the operator $R$ defined as \begin{align*} R=\lim_{q\rightarrow -1} q^{B_{0}}. \end{align*} It is easily seen that \begin{align*} R\ket{j,n}=(-1)^{j+n}\ket{j,n}=\epsilon (-1)^{n} \ket{j,n}, \end{align*} where $\epsilon=\pm 1$ depending on the parity of $j$, thus $R^2=1$. When $q\rightarrow -1$, one finds that \begin{gather*} \begin{matrix} q^{B_0}B_{+}=q B_{+}q^{B_0} \\ B_{-}q^{B_0}=q q^{B_0} B_{-} \end{matrix} \longrightarrow \{R, B_{\pm}\}=0, \\ B_{-}B_{+}-q B_{+}B_{-}=2\frac{q^{2B_0}-1}{q^2-1}\longrightarrow \{B_{+}, B_{-}\}=2B_{0}, \\ C_{sl_{q}(2)}\longrightarrow B_{+}B_{-}R-B_{0}R+R/2, \\ \Delta(B_{\pm})=B_{\pm}\otimes q^{B_0}+1\otimes B_{\pm}\longrightarrow \Delta(B_{\pm})=B_{\pm}\otimes R+1\otimes B_{\pm}. \end{gather*} In summary, $sl_{-1}(2)$ is the algebra generated by $J_{0}$, $J_{\pm}$ and $R$ with the relations \cite{Tsujimoto-2011-10} \begin{align} \label{SL} [J_0, J_{\pm}]=\pm J_{\pm},\quad [J_0, R]=0,\quad \{J_{\pm}, R\}=0,\quad \{J_{+}, J_{-}\}=2 J_0,\quad R^2=1. \end{align} The Casimir operator has the expression \begin{align} \label{SL-Casimir} Q=J_{+}J_{-}R-J_0 R+R/2, \end{align} and the coproduct is of the form \cite{Daska-2000-02} \begin{align} \label{SL-Coproduct} \Delta(J_0)=J_0\otimes 1+1\otimes J_0,\quad \Delta(J_{\pm})=J_{\pm}\otimes R+1\otimes J_{\pm},\quad \Delta(R)=R\otimes R. \end{align} The $sl_{-1}(2)$ algebra \eqref{SL} has irreducible and unitary discrete series representations with basis $\ket{\epsilon, \mu;n}$, where $n$ is a non-negative integer, $\epsilon=\pm 1$ and $\mu$ is a real number such that $\mu>-1/2$. These representations are defined by the following actions: \begin{gather*} J_{0}\ket{\epsilon,\mu;n}=(n+\mu+\frac{1}{2})\ket{\epsilon, \mu;n},\quad R\ket{\epsilon,\mu;n}=\epsilon (-1)^{n}\ket{\epsilon,\mu;n}, \\ J_{+}\ket{\epsilon,\mu;n}=\rho_{n+1}\ket{\epsilon,\mu;n+1},\quad J_{-}\ket{\epsilon,\mu;n}=\rho_{n}\ket{\epsilon,\mu;n-1}, \end{gather*} where $\rho_{n}=\sqrt{n+\mu (1-(-1)^{n})}$. In these representations, the Casimir operator takes the value \begin{align*} Q\ket{\epsilon,\mu;n}=-\epsilon \mu \ket{\epsilon,\mu;n}. \end{align*} These modules will be denoted by $V^{(\epsilon,\mu)}$. Let us offer the following remarks. 
\begin{itemize} \item The $sl_{-1}(2)$ algebra corresponds to the parabose algebra supplemented by $R$. \item The $sl_{-1}(2)$ algebra consists of the Cartan generator $J_0$ and the two odd elements of $\mathfrak{osp}(1|2)$ supplemented by the involution $R$. \item One has $C_{\mathfrak{osp}(1|2)}=Q^2$, where $Q$ is given by \eqref{SL-Casimir}. Thus the introduction of $R$ makes it possible to take the square root of $C_{\mathfrak{osp}(1|2)}$. \item In $sl_{-1}(2)$, one has $[J_{-}, J_{+}]=1-2 QR$. On the module $V^{(\epsilon,\mu)}$, this leads to \begin{align*} [J_{-}, J_{+}]=1+2\epsilon \mu R. \end{align*} \end{itemize} \section{Dunkl operators} The irreducible modules $V^{(\epsilon,\mu)}$ of $sl_{-1}(2)$ can be realized by Dunkl operators on the real line. Let $R_{x}$ be the reflection operator \begin{align*} R_{x} f(x)=f(-x). \end{align*} The $\mathbb{Z}_2$-Dunkl operator on $\mathbb{R}$ is defined by \cite{Dunkl-1989-01} \begin{align*} D_{x}=\frac{\partial}{\partial x}+\frac{\nu}{x}(1-R_{x}), \end{align*} where $\nu$ is a real number such that $\nu>-1/2$. Upon introducing the operators \begin{align*} \widehat{J}_{\pm}=\frac{1}{\sqrt{2}}(x\mp D_{x}), \end{align*} and defining $\widehat{J}_0=\frac{1}{2}\{\widehat{J}_{-}, \widehat{J}_{+}\}$, it is readily verified that a realization of the $sl_{-1}(2)$-module $V^{(\epsilon,\mu)}$ with $\epsilon=1$ and $\mu=\nu$ is obtained. In particular, one has \begin{align*} [\widehat{J}_{-}, \widehat{J}_{+}]=1+2\nu R_{x}. \end{align*} It can be seen that $\widehat{J}_{\pm}^{\dagger}=\widehat{J}_{\mp}$ with respect to the measure $|x|^{2\nu}\,\mathrm{d}x$ on the real line \cite{Genest-2013-04}. \section{The Racah problem for $sl_{-1}(2)$ and the Bannai-Ito algebra} The Racah problem for $sl_{-1}(2)$ presents itself when the direct product of three irreducible representations is examined. We consider the three-fold tensor product \begin{align*} V=V^{(\epsilon_1,\mu_1)}\otimes V^{(\epsilon_2,\mu_2)}\otimes V^{(\epsilon_3,\mu_3)}. \end{align*} It follows from the coproduct formula \eqref{SL-Coproduct} that the generators of $sl_{-1}(2)$ on $V$ are of the form \begin{gather*} J^{(4)}=J_0^{(1)}+J_0^{(2)}+J_0^{(3)},\quad J_{\pm}^{(4)}=J_{\pm}^{(1)}R^{(2)}R^{(3)}+J_{\pm}^{(2)}R^{(3)}+J_{\pm}^{(3)},\quad R^{(4)}=R^{(1)}R^{(2)}R^{(3)}, \end{gather*} where the superscripts indicate on which module the generators act. In forming the module $V$, two sequences are possible: one can first combine $(1)$ and $(2)$ and then adjoin $(3)$, or one can first combine $(2)$ and $(3)$ and then adjoin $(1)$. This is represented by \begin{align} \label{Coupling-Scheme} \left(V^{(\epsilon_1,\mu_1)}\otimes V^{(\epsilon_2,\mu_2)}\right)\otimes V^{(\epsilon_3,\mu_3)}\quad \text{or}\quad V^{(\epsilon_1,\mu_1)}\otimes \left(V^{(\epsilon_2,\mu_2)}\otimes V^{(\epsilon_3,\mu_3)}\right). \end{align} These two addition schemes are equivalent and the two corresponding bases are unitarily related. In the following, three types of Casimir operators will be distinguished. \begin{itemize} \item The initial Casimir operators \begin{align*} Q_{i}=J_{+}^{(i)}J_{-}^{(i)}R^{(i)}-(J_0^{(i)}-1/2)R^{(i)}=-\epsilon_i\mu_i,\qquad i=1,2,3. \end{align*} \item The intermediate Casimir operators \begin{align*} Q_{ij}&=(J_{+}^{(i)}R^{(j)}+J_{+}^{(j)})(J_{-}^{(i)}R^{(j)}+J_{-}^{(j)})R^{(i)}R^{(j)}-(J_{0}^{(i)}+J_0^{(j)}-1/2)R^{(i)}R^{(j)} \\ &=(J_{-}^{(i)}J_{+}^{(j)}-J_{+}^{(i)}J_{-}^{(j)})R^{(i)}-R^{(i)}R^{(j)}/2+Q_{i}R^{(j)}+Q_{j}R^{(i)}, \end{align*} where $(ij)=(12), (23)$.
\item The total Casimir operator \begin{align*} Q_{4}=[J_{+}^{(4)}J_{-}^{(4)}-(J_0^{(4)}-1/2)]R^{(4)}. \end{align*} \end{itemize} Let $\ket{q_{12}, q_{4}; m}$ and $\ket{q_{23}, q_{4};m}$ be the orthonormal bases associated to the two coupling schemes presented in \eqref{Coupling-Scheme}. These two bases are defined by the relations \begin{align*} Q_{12}\ket{q_{12}, q_{4};m}=q_{12}\ket{q_{12}, q_{4};m},\quad Q_{23}\ket{q_{23},q_{4};m}=q_{23}\ket{q_{23},q_{4};m}, \end{align*} and \begin{align*} Q_{4} \ket{-, q_{4};m}=q_{4} \ket{-, q_{4};m},\quad J_0^{(4)}\ket{-, q_{4};m}=(m+\mu_1+\mu_2+\mu_3+3/2)\ket{-,q_{4};m}. \end{align*} The Racah problem consists in finding the overlap coefficients \begin{align*} \braket{q_{23}, q_{4}}{q_{12},q_{4}}, \end{align*} between the eigenbases of $Q_{12}$ and $Q_{23}$ with a fixed value $q_{4}$ of the total Casimir operator $Q_{4}$; as these coefficients do not depend on $m$, we drop this label. For simplicity, let us now take \begin{align*} \epsilon_1=\epsilon_2=\epsilon_3=1. \end{align*} Upon defining \begin{align*} K_1=-Q_{23},\qquad K_3=-Q_{12}, \end{align*} one finds that the intermediate Casimir operators of $sl_{-1}(2)$ realize the Bannai-Ito algebra \cite{Genest-2012} \begin{align} \label{BI-Racah} \{K_1,K_3\}=K_2+\Omega_2,\quad \{K_1,K_2\}=K_3+\Omega_3,\quad \{K_2,K_3\}=K_1+\Omega_1, \end{align} with structure constants \begin{align} \label{Structure-Constants} \Omega_1=2(\mu_1\mu+\mu_2\mu_3),\quad \Omega_2=2(\mu_1\mu_3+\mu_2 \mu),\quad \Omega_3=2(\mu_1\mu_2+\mu_3\mu), \end{align} where $\mu=\epsilon_4 \mu_4=-q_{4}$. The first relation in \eqref{BI-Racah} can be taken to define $K_2$ which reads \begin{align*} K_2=(J_{+}^{(1)}J_{-}^{(3)}-J_{-}^{(1)}J_{+}^{(3)})R^{(1)}R^{(2)}+R^{(1)}R^{(3)}/2-Q_{1}R^{(3)}-Q_{3}R^{(1)}. \end{align*} In the present realization the Casimir operator of the Bannai-Ito algebra becomes \begin{align*} Q_{\text{BI}}=\mu_1^2+\mu_2^2+\mu_3^2+\mu_4^2-1/4. \end{align*} It has been shown in section 3 that the Bannai-Ito polynomials form a basis for a representation of the BI algebra. It is here relatively easy to construct the representation of the BI algebra on bases of the three-fold tensor product module $V$ with basis vectors defined as eigenvectors of $Q_{12}$ or of $Q_{23}$. The first step is to obtain the spectra of the intermediate Casimir operators. Simple considerations based on the nature of the $sl_{-1}(2)$ representation show that the eigenvalues $q_{12}$ and $q_{23}$ of $Q_{12}$ and $Q_{23}$ take the form \cite{Genest-2013-12, Genest-2013-02, Genest-2012, Tsujimoto-2011-10}: \begin{align*} q_{12}=(-1)^{s_{12}+1}(s_{12}+\mu_1+\mu_2+1/2),\quad q_{23}=(-1)^{s_{23}}(s_{23}+\mu_2+\mu_3+1/2), \end{align*} where $s_{12}, s_{23}=0,1,\ldots, N$. The non-negative integer $N$ is specified by \begin{align*} N+1=\mu_4-\mu_1-\mu_2-\mu_3. \end{align*} Denote the eigenstates of $K_3$ by $\ket{k}$ and those of $K_1$ by $\ket{s}$; one has \begin{align*} K_3 \ket{k}=(-1)^{k}(k+\mu_1+\mu_2+1/2)\ket{k},\quad K_1\ket{s}=(-1)^{s}(s+\mu_2+\mu_3+1/2)\ket{s}. \end{align*} Given the expressions \eqref{Structure-Constants} for the structure constants $\Omega_{k}$, one can proceed to determine the $(N+1)\times (N+1)$ matrices that verify the anticommutation relations \eqref{BI-Racah}. 
The action of $K_1$ on $\ket{k}$ is found to be \cite{Genest-2012}: \begin{align*} K_1\ket{k}=U_{k+1}\ket{k+1}+V_{k}\ket{k}+U_{k}\ket{k-1}, \end{align*} with $V_{k}=\mu_2+\mu_3+1/2-B_{k}-D_{k}$ and $U_{k}=\sqrt{B_{k-1}D_{k}}$ where \begin{align*} B_{k}&= \begin{cases} \frac{(k+2\mu_2+1)(k+\mu_1+\mu_2+\mu_3-\mu+1)}{2(k+\mu_1+\mu_2+1)} & \text{$k$ even,} \\ \frac{(k+2\mu_1+2\mu_2+1)(k+\mu_1+\mu_2+\mu_3+\mu+1)}{2(k+\mu_1+\mu_2+1)} & \text{$k$ odd,} \end{cases} \\ D_{k}&= \begin{cases} -\frac{k(k+\mu_1+\mu_2-\mu_3-\mu)}{2(k+\mu_1+\mu_2)} & \text{$k$ even}, \\ -\frac{(k+2\mu_1)(k+\mu_1+\mu_2-\mu_3+\mu)}{2(k+\mu_1+\mu_2)} & \text{$k$ odd}. \end{cases} \end{align*} Under the identifications \begin{align*} \rho_1=\frac{1}{2}(\mu_2+\mu_3),\quad \rho_2=\frac{1}{2}(\mu_1+\mu),\quad r_1=\frac{1}{2}(\mu_3-\mu_2),\quad r_2=\frac{1}{2}(\mu-\mu_1), \end{align*} one has $B_{k}=2A_{k}$, $D_{k}=2C_{k}$, where $A_{k}$ and $C_{k}$ are the recurrence coefficients \eqref{BI-RECU} of the Bannai-Ito polynomials. Upon setting \begin{align*} \braket{s}{k}=w(s) 2^{k} B_k(x_{s}),\qquad B_{0}(x_{s})\equiv 1, \end{align*} one has on the one hand \begin{align*} \Braket{s}{K_1}{k}=(-1)^{s}(s+2\rho_1+1/2) \,\braket{s}{k}, \end{align*} and on the other hand \begin{align*} \Braket{s}{K_1}{k}= U_{k+1} \braket{s}{k+1}+V_{k}\braket{s}{k}+U_{k}\braket{s}{k-1}. \end{align*} Comparing the two right-hand sides yields \begin{align*} x_{s} B_{k}(x_{s})=B_{k+1}(x_{s})+(\rho_1-A_{k}-C_{k})B_{k}(x_{s})+A_{k-1}C_{k} B_{k-1}(x_{s}), \end{align*} where $x_{s}$ are the points of the Bannai-Ito grid \begin{align*} x_{s}=(-1)^{s}\left(\frac{s}{2}+\rho_1+1/4\right)-1/4,\quad s=0,\ldots, N. \end{align*} Hence the Racah coefficients of $sl_{-1}(2)$ are proportional to the Bannai-Ito polynomials. The algebra \eqref{BI-Racah} with structure constants \eqref{Structure-Constants} is invariant under the cyclic permutations of the pairs $(K_i, \mu_i)$. As a result, the representations in the basis where $K_1$ is diagonal can be obtained directly. In this basis, the operator $K_3$ is seen to be tridiagonal, which proves again that the Bannai-Ito polynomials possess the Leonard duality property. \section{A superintegrable model on $S^2$ with Bannai-Ito symmetry} We shall now use the analysis of the Racah problem for $sl_{-1}(2)$ and its realization in terms of Dunkl operators to obtain a superintegrable model on the two-sphere. Recall that a quantum system in $n$ dimensions with Hamiltonian $H$ is maximally superintegrable if it possesses $2n-1$ algebraically independent constants of motion, where one of these constants is $H$ \cite{Miller-2013-10}. Let $(s_1,s_2,s_3)\in \mathbb{R}^3$ and take $s_1^2+s_2^2+s_3^2=1$. The standard angular momentum operators are \begin{align*} L_1=\frac{1}{i}\left(s_2 \frac{\partial}{\partial s_3}-s_3 \frac{\partial}{\partial s_2}\right),\quad L_2=\frac{1}{i}\left(s_3 \frac{\partial}{\partial s_1}-s_1 \frac{\partial}{\partial s_3}\right), \quad L_3=\frac{1}{i}\left(s_1 \frac{\partial}{\partial s_2}-s_2 \frac{\partial}{\partial s_1}\right). \end{align*} The system governed by the Hamiltonian \begin{align} \label{Hamiltonian} H=L_1^2+L_2^2+L_3^2+\frac{\mu_1}{s_1^2}(\mu_1-R_1)+\frac{\mu_2}{s_2^2}(\mu_2-R_{2})+\frac{\mu_3}{s_3^2}(\mu_3-R_3), \end{align} with $\mu_i$, $i=1,2,3$, real parameters such that $\mu_i>-1/2$ is superintegrable \cite{Genest-2014-1}. \begin{enumerate} \item The operators $R_i$ reflect the variable $s_i$: $R_i f(s_i)=f(-s_i)$. \item The operators $R_i$ commute with the Hamiltonian: $[H, R_i]=0$.
\item If one is concerned with the presence of reflection operators in a Hamiltonian, one may replace $R_i$ by $\kappa_i=\pm 1$. This then treats the 8 potential terms \begin{align*} \frac{\mu_1}{s_1^2}(\mu_1-\kappa_1)+\frac{\mu_2}{s_2^2}(\mu_2-\kappa_2)+\frac{\mu_3}{s_3^2}(\mu_3-\kappa_3), \end{align*} simultaneously, much like supersymmetric partners. \item Rescaling $s_i\rightarrow r s_i$ and taking the limit as $r\rightarrow \infty$ gives the Hamiltonian of the Dunkl oscillator \cite{Genest-2013-04, Genest-2013-09} \begin{align*} \widetilde{H}=-[D_{x_1}^2+D_{x_2}^2]+\widehat{\mu}_3^2(x_1^2+x_2^2), \end{align*} after appropriate renormalization; see also \cite{Genest-2013-10, Genest-2013-07, Genest-2013-12-1}. \end{enumerate} It can be checked that the following three quantities commute with the Hamiltonian \eqref{Hamiltonian} \cite{Genest-2013-12, Genest-2014-1}: \begin{align*} C_1&=\left(i L_1+\mu_2 \frac{s_3}{s_2}R_2-\mu_3 \frac{s_2}{s_3} R_{3}\right)R_2+\mu_2 R_3+\mu_3 R_2+R_2R_3/2, \\ C_2&=\left(-i L_2+\mu_1 \frac{s_3}{s_1} R_1-\mu_3 \frac{s_1}{s_3} R_{3}\right)R_1R_2+\mu_1 R_3+\mu_3 R_1+R_1R_3/2, \\ C_3&=\left(i L_3+\mu_1 \frac{s_2}{s_1}R_1-\mu_2 \frac{s_1}{s_2}R_2\right)R_1+\mu_1 R_2+\mu_2 R_1+R_1R_2/2, \end{align*} that is, $[H, C_i]=0$ for $i=1,2,3$. To determine the symmetry algebra generated by the above constants of motion, let us return to the Racah problem for $sl_{-1}(2)$. Consider the following (gauge transformed) parabosonic realization of $sl_{-1}(2)$ in the three variables $s_i$: \begin{align} \label{Para-Realization} J_{\pm}^{(i)}=\frac{1}{\sqrt{2}}\left[s_i \mp \partial_{s_i}\pm \frac{\mu_i}{s_i}R_i\right],\quad J_0^{(i)}=\frac{1}{2}\left[-\partial_{s_i}^2+s_i^2+\frac{\mu_i}{s_i^2}(\mu_i-R_i)\right],\quad R^{(i)}=R_{i}, \end{align} for $i=1,2,3$. Consider also the addition of these three realizations so that \begin{align} \label{Realization-2} \begin{aligned} J_0=J_0^{(1)}+J_0^{(2)}+J_0^{(3)}, \quad J_{\pm}=J_{\pm}^{(1)}R^{(2)}R^{(3)}+J_{\pm}^{(2)}R^{(3)}+J_{\pm}^{(3)},\quad R=R^{(1)}R^{(2)}R^{(3)}. \end{aligned} \end{align} It is observed that in the realization \eqref{Realization-2}, the total Casimir operator can be expressed in terms of the constants of motion as follows: \begin{align*} Q=-C_1 R^{(1)}-C_2 R^{(2)}-C_3 R^{(3)}+\mu_1 R^{(2)}R^{(3)}+\mu_2 R^{(1)}R^{(3)}+\mu_3 R^{(1)}R^{(2)}+R/2. \end{align*} Upon taking \begin{align*} \Omega=Q R, \end{align*} one finds \begin{align} \label{Expression} \Omega^2+\Omega=L_1^2+L_2^2+L_3^2+(s_1^2+s_2^2+s_3^2)\left(\frac{\mu_1}{s_1^2}(\mu_1-R_1)+\frac{\mu_2}{s_2^2}(\mu_2-R_{2})+\frac{\mu_3}{s_3^2}(\mu_3-R_3)\right), \end{align} so that $H=\Omega^2+\Omega$ if $s_1^2+s_2^2+s_3^2=1$. Assuming this constraint can be imposed, $H$ is a quadratic combination of $QR$. By construction, the intermediate Casimir operators $Q_{ij}$ commute with the total Casimir operator $Q$ and with $R$ and hence with $\Omega$; they thus commute with $H=\Omega^2+\Omega$ and are constants of motion. It is indeed found that \begin{align*} Q_{12}=-C_3,\qquad Q_{23}=-C_{1}, \end{align*} in the parabosonic realization \eqref{Para-Realization}. Let us return to the constraint $s_1^2+s_2^2+s_3^2=1$. Observe that \begin{align*} \frac{1}{2}(J_{+}+J_{-})^2=(s_1 R_2 R_3+s_2 R_3+s_3)^2=s_1^2+s_2^2+s_3^2. \end{align*} Because $(J_{+}+J_{-})^2$ commutes with $\Omega=QR$, $Q_{12}$ and $Q_{23}$, one can impose $s_1^2+s_2^2+s_3^2=1$.
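\noindent This operator identity can also be checked mechanically; a minimal Python/\texttt{sympy} verification (ours, applied to a generic function of $s_1,s_2,s_3$) reads:
\begin{verbatim}
import sympy as sp

s1, s2, s3 = sp.symbols('s1 s2 s3')
f = sp.Function('f')(s1, s2, s3)

def R(g, v):   # reflection operator in the variable v
    return g.subs(v, -v)

def A(g):      # (s1 R2 R3 + s2 R3 + s3) g
    return s1*R(R(g, s2), s3) + s2*R(g, s3) + s3*g

# (s1 R2 R3 + s2 R3 + s3)^2 = s1^2 + s2^2 + s3^2 on any function
assert sp.expand(A(A(f)) - (s1**2 + s2**2 + s3**2)*f) == 0
\end{verbatim}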
Since it is already known that the intermediate Casimir operators in the addition of three $sl_{-1}(2)$ representations satisfy the Bannai-Ito structure relations, the constants of motion verify \begin{align*} \{C_1,C_2\}&=C_3-2\mu_3 Q+2\mu_1 \mu_2, \\ \{C_2, C_3\}&= C_1-2\mu_1 Q+2\mu_2 \mu_3, \\ \{C_3, C_1\}&= C_2-2\mu_2 Q+2\mu_3 \mu_1, \end{align*} and thus the symmetry algebra of the superintegrable system with Hamiltonian \eqref{Hamiltonian} is a central extension (with $Q$ being the central operator) of the Bannai-Ito algebra. Let us note that the relation $H=\Omega^2+\Omega$ relates to chiral supersymmetry since with $S=\Omega+1/2$ one has \begin{align*} \frac{1}{2}\{S,S\}=H+1/4. \end{align*} \section{A Dunkl-Dirac equation on $S^2$} Consider the $\mathbb{Z}_2$-Dunkl operators \begin{align*} D_i=\frac{\partial}{\partial x_i}+\frac{\mu_i}{x_i}(1-R_i),\qquad i=1,2,\ldots,n, \end{align*} with $\mu_i>-1/2$. The $\mathbb{Z}_2^{n}$-Dunkl-Laplace operator is \begin{align*} \vec{D}^2=\sum_{i=1}^{n} D_i^2. \end{align*} With $\gamma_n$ the generators of the Euclidean Clifford algebra \begin{align*} \{\gamma_m,\gamma_{n}\}=2\delta_{nm}, \end{align*} the Dunkl-Dirac operator is \begin{align*} \slashed{D}=\sum_{i=1}^{n} \gamma_i D_{i}. \end{align*} Clearly, one has $\slashed{D}^2=\vec{D}^2$. Let us consider the three-dimensional case. Introduce the Dunkl ``angular momentum'' operators \begin{align*} J_1=\frac{1}{i}(x_2 D_3-x_3 D_2),\quad J_2=\frac{1}{i}(x_3 D_1-x_1 D_3),\quad J_3=\frac{1}{i}(x_1 D_2-x_2 D_1). \end{align*} Their commutation relations are found to be \begin{align} \label{Comm-1} [J_j, J_k]=i\epsilon_{jkl} J_{l}(1+2\mu_{l} R_{l}). \end{align} The Dunkl-Laplace equation separates in spherical coordinates; i.e. one can write \begin{align*} \vec{D}^2=D_1^2+D_2^2+D_3^2=\mathcal{M}_r+\frac{1}{r^2}\Delta_{S^2}, \end{align*} where $\Delta_{S^2}$ is the Dunkl-Laplacian on the 2-sphere. It can be verified that \cite{Genest-2013-12-1} \begin{align} \label{J-Square} \begin{aligned} \vec{J}^2&=J_1^2+J_2^2+J_3^2 \\ &=-\Delta_{S^2}+2\mu_1\mu_2(1-R_1R_2)+2\mu_2 \mu_3 (1-R_2R_3)+2\mu_1 \mu_3 (1-R_1R_3) \\ & \qquad \qquad \qquad -\mu_1 R_1-\mu_2 R_2-\mu_3 R_3+\mu_1+\mu_2+\mu_3. \end{aligned} \end{align} In three dimensions the Euclidean Clifford algebra is realized by the Pauli matrices \begin{align*} \sigma_1= \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_2= \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_3= \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \end{align*} which satisfy \begin{align*} \sigma_i\sigma_j=i \epsilon_{ijk}\sigma_k+\delta_{ij}. \end{align*} Consider the following operator: \begin{align*} \Gamma=(\vec{\sigma}\cdot \vec{J})+\vec{\mu}\cdot \vec{R}, \end{align*} with $\vec{\mu}\cdot \vec{R}=\mu_1 R_1+\mu_2 R_2+\mu_3 R_3$. Using the commutation relations \eqref{Comm-1} and the expression \eqref{J-Square} for $\vec{J}^2$, it follows that \begin{align*} \Gamma^2+\Gamma=-\Delta_{S^2}+(\mu_1+\mu_2+\mu_3)(\mu_1+\mu_2+\mu_3+1). \end{align*} This is reminiscent of the expression \eqref{Expression} for the superintegrable system with Hamiltonian \eqref{Hamiltonian} in terms of the $sl_{-1}(2)$ Casimir operator. This justifies calling $\Gamma$ a Dunkl-Dirac operator on $S^2$ since a quadratic expression in $\Gamma$ gives $\Delta_{S^2}$. The symmetries of $\Gamma$ can be constructed. They are found to have the expression \cite{DeBie-2014} \begin{align*} M_i=J_i+\sigma_i(\mu_j R_j+\mu_k R_k+1/2),\quad \text{$(ijk)$ cyclic}, \end{align*} and one has $[\Gamma, M_i]=0$.
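As a consistency check of the relation $\Gamma^2+\Gamma=-\Delta_{S^2}+(\mu_1+\mu_2+\mu_3)(\mu_1+\mu_2+\mu_3+1)$, note that when all $\mu_i=0$ the operators $J_i$ reduce to the ordinary angular momenta $L_i$, the expression \eqref{J-Square} gives $\vec{J}^2=-\Delta_{S^2}$, and $\Gamma=\vec{\sigma}\cdot\vec{L}$, so that the relation becomes the classical identity \begin{align*} (\vec{\sigma}\cdot\vec{L})^2+\vec{\sigma}\cdot\vec{L}=\vec{L}^2, \end{align*} which follows from $(\vec{\sigma}\cdot\vec{L})^2=\vec{L}^2+i\vec{\sigma}\cdot(\vec{L}\times \vec{L})$ and $\vec{L}\times\vec{L}=i\vec{L}$.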
It is seen that the operators \begin{align*} X_i=\sigma_i R_i,\qquad i=1,2,3, \end{align*} also commute with $\Gamma$. Furthermore, one has \begin{align*} [M_i, X_i]=0,\qquad \{M_i, X_j\}=\{M_i, X_k\}=0. \end{align*} Note that $Y=-i X_1X_2X_3=R_1R_2R_3$ is central (like $\Gamma$). The commutation relations satisfied by the operators $M_i$ are \begin{align*} [M_i, M_j]=i\epsilon_{ijk}\left(M_{k}+2\mu_k(\Gamma+1)X_{k}\right)+2\mu_i\mu_j[X_i, X_j]. \end{align*} This is again an extension of $\mathfrak{su}(2)$ with reflections and central elements. Let \begin{align*} K_i=M_i X_i Y=M_i \sigma_i R_{j}R_{k}. \end{align*} It is readily verified that the operators $K_i$ satisfy \begin{align*} \{K_1,K_2\}&=K_3+2\mu_3(\Gamma+1)Y+2\mu_1\mu_2, \\ \{K_2,K_3\}&=K_1+2\mu_1 (\Gamma+1)Y+2\mu_2\mu_3, \\ \{K_3,K_1\}&= K_2+2\mu_2(\Gamma+1)Y+2\mu_3\mu_1, \end{align*} showing that the Bannai-Ito algebra is a symmetry subalgebra of the Dunkl-Dirac equation on $S^2$. Therefore, the Bannai-Ito algebra is also a symmetry subalgebra of the Dunkl-Laplace equation. \section{Conclusion} In this paper, we have presented the Bannai-Ito algebra together with some of its applications. In concluding this overview, we identify some open questions. \begin{enumerate} \item Representation theory of the Bannai-Ito algebra\medskip Finite-dimensional representations of the Bannai-Ito algebra associated to certain models were presented. However, the complete characterization of all representations of the Bannai-Ito algebra is not known. \item Supersymmetry\medskip The parallel with supersymmetry has been underscored at various points. One may wonder if there is a deeper connection. \item Dimensional reduction \medskip It is well known that quantum superintegrable models can be obtained by dimensional reduction. It would be of interest to adapt this framework in the presence of reflection operators. Could the Bannai-Ito algebra be interpreted as a $W$-algebra?\medskip \item Higher ranks\medskip Of great interest is the extension of the Bannai-Ito algebra to higher ranks, in particular for many-body applications. In this connection, it can be expected that the symmetry analysis of higher dimensional superintegrable models or Dunkl-Dirac equations will be revealing. \end{enumerate} \section*{Acknowledgements} V.X.G. holds an Alexander-Graham-Bell fellowship from the Natural Sciences and Engineering Research Council of Canada (NSERC). The research of L.V. is supported in part by NSERC. H. DB. and A.Z. have benefited from the hospitality of the Centre de recherches math\'ematiques (CRM). \section*{References} \bibliographystyle{iopart-num} \providecommand{\newblock}{}
\section{Introduction} \IEEEPARstart{E}{lectricity} demand forecasting is a crucial task for grid operators. Indeed, the production must balance the consumption as storage capacities are still negligible compared to the load. Time series methods have been applied to address that problem, relying on calendar information and lags of the electricity consumption. Statistical and machine learning models have been designed to use exogenous information such as meteorological forecasts (the load usually depends on the temperature for instance, due to electric heating and cooling). The field has been thoroughly studied over the past decades and we will not propose here an exhaustive bibliographic study. Instead, we choose to focus on recent results in the different forecasting challenges related to this field. The Global Energy Forecasting Competitions (GEFCOM) (\cite{hong2014global}, \cite{HONG2016896} and \cite{hong2019global}) provide a large benchmark of popular and efficient load forecasting methods. Black box machine learning models such as gradient boosting machines \cite{lloyd2014gefcom2012} and neural networks \cite{ryu2017deep, dimoulkas2019neural} rank among the first, as well as statistical models like Generalized Additive Models (GAM) \cite{nedellec2014gefcom2012, dordonnat2016gefcom2014} or parametric regression models \cite{CHARLTON2014364, ZIEL20191400}. Ensemble methods or expert aggregation are also a common practice for competitors \cite{gaillard2016additive, smyl2019machine}. The behaviour of the consumption changed abruptly during the coronavirus crisis, especially during lockdowns imposed by many governments. These changes of consumption mode have been challenging for electricity grid operators as historical forecasting procedures performed poorly. Note that purely time series methods like autoregressive models did not drift, as they are adaptive in essence, but they fail to capture the dependence of the load on, for instance, meteorological variables. Therefore designing new forecasting strategies to take that evolution into account is important to reduce the cost of forecasting errors and to ensure the stability of the network in the future. We claim that state-space models offer the best of both worlds. First, machine learning models trained on historical data are used to design new feature representations. Second, a state-space representation yields a methodology to adapt these complex forecasting models. Our work extends a previous study on the French electricity load \cite{obst2021adaptive} where a state-space approach to adapt generalized additive models was presented. The novelty of this article lies both in the method and in the application. First, besides generalized additive models we apply our procedure to other widely used machine learning models including neural networks. Second, after applying a standard Kalman filter we apply VIKING (Variational Bayesian Variance Tracking), another state-space approach allowing one to estimate jointly the state and the variances \cite{tsw}. Third, our procedure resulted in the winning strategy in a competition on post-covid day-ahead electricity demand forecasting\footnote{\href{https://ieee-dataport.org/competitions/day-ahead-electricity-demand-forecasting-post-covid-paradigm}{https://ieee-dataport.org/competitions/day-ahead-electricity-demand-forecasting-post-covid-paradigm}}, demonstrating the efficiency of the proposed approach.
Section \ref{sec:data_pres} introduces the competition and details how we handled the data provided. In Section \ref{sec:timeinvariant} we discuss the meteorological variables and we present standard forecasting methods. The core of our strategy is Section \ref{sec:adaptation}, where we propose a generic state-space framework to adapt these methods. We discuss the numerical performances of the various models in Section \ref{sec:experiments} and we combine them through aggregation of experts to leverage each model's advantages. \section{Data Presentation and Pre-Processing}\label{sec:data_pres} The objective of the competition was to predict the electricity load of an undisclosed location of average consumption 1.1 GW, that is, of the order of one million people in western countries. The break in the electricity demand in March 2020 is clear in Figure \ref{fig:daily_profile}. The aim was thus to design new strategies for day-ahead forecasting in order to be robust to this unstable period. \begin{figure} \centering \includegraphics[width=8cm]{load.png} \includegraphics[width=8cm]{daily_profiles.pdf} \caption{On top: electricity load from March 18\textsuperscript{th} 2017 to February 16\textsuperscript{th} 2021. On the bottom: daily profiles of the electricity load in March-April 2019 compared to March-April 2020.} \label{fig:daily_profile} \end{figure} \subsection{Time segmentation} The competition's setting was to forecast the hourly load 16 to 40 hours ahead in an online manner. Precisely, we had to predict the consumption of each hour of day $d$ with data up to 8AM on day $d-1$. After our prediction was sent, a new batch of data up to 8AM on day $d$ was released and we had to predict day $d+1$, and so on. The evaluation was based on the Mean Average Error on the period ranging from January 18\textsuperscript{th} to February 16\textsuperscript{th} 2021. To build a forecasting model, the historical load starting from March 18\textsuperscript{th} 2017 was provided, as well as meteorological forecasts and realizations during the same period. \subsection{Meteorological Forecasts} Aside from calendar variables, it is usual that the most important exogenous factor explaining the electricity demand is meteorology. The dependence of the load on the temperature, for example, is due to electric heating and cooling. Moreover, the dependence of the electricity demand on meteorology is increased by the development of decentralized renewables. Indeed, small renewable production is often used by its owner, yielding a net consumption that highly depends on wind or solar radiation. Therefore the error of a forecasting model for the electricity demand crucially depends on the performance of the meteorological forecasts. The data of the competition include forecasts and realizations of the temperature, the cloud cover, the pressure, the wind direction and speed. These forecasts are assumed to be known 48 hours in advance and invariant afterwards; thus they can be used to forecast the load at the 16 to 40 hour horizon. However, from the statistical properties of the meteorological forecasting residuals (cf. Figure \ref{fig:meteoresiduals}), we conjecture that the forecasts come from physical models that need to be statistically corrected. Indeed, as the forecasts are available 48 hours in advance, if a statistical correction had been applied then the auto-correlations of the residuals beyond 48 hours would be negligible.
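A minimal sketch of this auto-correlation diagnostic, assuming the hourly realizations and forecasts are available as \texttt{numpy} arrays (the variable names are ours):
\begin{verbatim}
import numpy as np
from statsmodels.tsa.stattools import acf

# hourly residuals of the provided forecast
res = temp_realized - temp_forecast

# global auto-correlations, lags in hours
rho_hourly = acf(res, nlags=72, fft=True)

# auto-correlations at a fixed hour of the day
# (e.g. midnight), lags in days
rho_daily = acf(res[0::24], nlags=30, fft=True)
\end{verbatim}
Non-negligible correlations at daily lags support the corrections introduced below.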
\begin{figure} \centering \includegraphics[width=4.3cm]{tempres.pdf} \includegraphics[width=4.3cm]{cloudcoverres.pdf} \includegraphics[width=4.3cm]{tempres0.pdf} \includegraphics[width=4.3cm]{tempres3.pdf} \caption{On top: auto-correlation plots of the temperature (left) and cloud cover (right) forecasting residuals, with lags in hours. On the bottom: auto-correlation plots of the temperature residuals focused on a specific hour of the day (midnight on the left, 3AM on the right), with lags in days.} \label{fig:meteoresiduals} \end{figure} We thus use correction models close to autoregressives on the residuals. Formally, let $(z_t)$ be any of the meteorological variables and $(\hat z_t)$ the forecast given in the data set. Then we use the model \begin{align*} z_t = \alpha \hat z_t + \sum\limits_{l\in \mathcal{L}_{p,P,h(t)}} \beta_l (z_{t-l} - \hat z_{t-l}) + \gamma z_{t-l_0(t)} +\delta + \varepsilon_t \,, \end{align*} where $h(t)\in\{0,\hdots,23\}$ is the hour of the day of time $t$ and \begin{align*} & l_0(t) = \begin{cases} 24 & \text{if } h(t)\le 7\,, \\ 48 & \text{if } h(t)>7\,, \end{cases} \\ & \mathcal{L}_{p,P,h} = \begin{cases} \resizebox{0.52\hsize}{!}{$\{24,\hdots,24P,h+17,\hdots,h+16+p\}$} & \text{if } h\le 7\,, \\ \resizebox{0.6\hsize}{!}{$\{48,\hdots,24(P+1),h+17,\hdots,h+16+p\}$} & \text{if } h>7\,. \end{cases} \end{align*} In other words, we forecast the residual of the variable of interest with a linear model on \begin{itemize} \item the last $P$ available daily lags of the residual, \item the last $p$ available lags of the residual (up to 7AM of the previous day), \item the forecast, \item the last daily lag of the variable of interest. \end{itemize} We optimize the coefficients separately for each hour of the day for the temperature, whereas we use the same coefficients at each hour of the day for the cloud cover, pressure and wind speed (except the intercept term). We do not correct the wind direction. The parameters $p$ and $P$ are selected with the Bayesian information criterion (BIC). We display in Table \ref{tab:weather_correction} the error of the initial forecast, compared to simply using the last daily lag of the variable of interest, and our corrected forecast. \begin{table} \begin{center} \caption{Mean Average Error of different meteorological forecasts.} \label{tab:weather_correction} \begin{tabular}{|c c c c|} \hline & Initial & Last daily lag & Corrected \\ \hline Temperature ($^\circ$C) & 3.00 & 2.11 & 1.69 \\ \hline Cloud cover (\%) & 17.28 & 18.74 & 14.99 \\ \hline Pressure (kPa) & 0.506 & 1.30 & 0.423 \\ \hline Wind Speed (km/h) & 4.53 & 3.49 & 2.53 \\ \hline \end{tabular} \end{center} The first column is the forecast given in the data set. The second one consists in using the variable of interest with a 24 or 48-hour delay. The last one is our corrected forecast. We evaluate with the mean average error during 2020, while the corrections are trained on the data prior to 2020.
\end{table} \section{Time-invariant Experts}\label{sec:timeinvariant} We summarize the explanatory variables used in our forecasting models: \begin{itemize} \item calendar variables: the hour of the day, the day of the week, the time of year ($Toy$) growing linearly from 0 on January 1\textsuperscript{st} to 1 on December 31\textsuperscript{st}, and a variable growing linearly with time to account for a trend, \item meteorological forecasts after statistical correction: the temperature along with exponential smoothing variants of parameters 0.95 and 0.99 (respectively $Temps95$ and $Temps99$), the cloud cover, the pressure, the wind direction and speed, \item lags of the electricity load: the load a week ago $LoadW$ and the last load available $LoadD$ (a day ago for the forecast before 8AM and two days ago after 8AM, this constraint coming from the availability of the online data during the competition). \end{itemize} The dependence on the hour of the day and the day of the week is clearly visible in Figure \ref{fig:daily_profile}. We display in Figure \ref{fig:explore} the dependence of the load on a few of the aforementioned covariates. \begin{figure} \centering \includegraphics[width=4.3cm]{toy_load3.pdf} \includegraphics[width=4.3cm]{temp_load3.pdf} \includegraphics[width=4.3cm]{load2D_load3.pdf} \includegraphics[width=4.3cm]{loadw_load3.pdf} \caption{Dependence of the load at 3PM on different covariates on the data up to January 1\textsuperscript{st} 2020.} \label{fig:explore} \end{figure} \subsection{Statistical and Machine Learning Methods}\label{sec:statsml} Based on these covariates we experiment with a few classical predictive models. We define independent models for the different hours of the day, as is usual in electricity load forecasting. For each model we use the same structure for the different hours, but we learn the model parameters independently for each time of day, based on the training data of that particular time of day. In what follows we denote by $y_t$ the load at time $t$. \begin{itemize} \item {\bf Autoregressive}. We consider a seasonal autoregressive model based on the daily and weekly lags of the load: \begin{align} & \label{eq:autoregressive} y_t = \sum\limits_{l\in\mathcal{L}_{h(t)}} \alpha_{l} y_{t-l} + \sum\limits_{1\le l\le 6} \alpha_{7\times24l} y_{t-7\times24l} + \varepsilon_t\,,\\ \nonumber & \mathcal{L}_{h} = \begin{cases} \{24,48,72\}\qquad \text{if } h\le 7 \,,\\ \{48,72,96\}\qquad \text{if } h> 7\,. \end{cases} \end{align} \item {\bf Linear regression}. We use a linear model with the following variables: temperature, cloud cover, pressure, wind direction and speed, day type (7 booleans), time of year, linear trend variable, and the two lags $LoadW$ and $LoadD$. \item \textbf{Generalized additive model (GAM)}. We propose a Gaussian generalized additive model \cite{wood2017generalized}: \begin{align*} y_t & = \sum_{i=1}^6 \beta_i \mathds{1}_{DayType_t = i} + \gamma Temps95_t + f_1(Toy_t) \\ & \quad + f_2(LoadD_t) + f_3(LoadW_t) + \alpha t + \beta_0 + \varepsilon_t \,, \end{align*} where $f_1$ is obtained by penalized regression on cubic cyclic splines and $f_2,f_3$ on cubic regression splines. \item \textbf{Random Forest (RF)}. We build a random forest \cite{RF} with the following covariates: linear trend variable, time of year, day type, the two lags and the two exponential smoothing variables of the temperature. Quantile variants were also computed. \item \textbf{Random Forest (RF\_GAM)}.
We also correct the GAM using a random forest on the GAM residuals, with the same covariates as in {\bf RF} to which we add the GAM effects $f_1(Toy_t)$, $f_2(LoadD_t)$, $f_3(LoadW_t)$ as well as lags (one week, one or two days) of the GAM residuals. \item \textbf{Multi-Layer Perceptron (MLP)}. Finally we test a multi-layer perceptron with 2 hidden layers of 15 and 10 neurons using hyperbolic tangent activation. We take as input: linear trend variable, time of year, day type, the exponential smoothing variable $Temps95$ and the two lags. \end{itemize} \subsection{Intraday correction}\label{sec:intraday} Although using different models at the different hours of the day performs better, the correlation between different hours remains important. To capture intraday information we fit on the residuals of each model an autoregressive model incorporating lags of the 24 last available hours and optimized for each forecast horizon. This follows from the intuition that, to predict the load at 8AM, we can use a 25-hour delay instead of a 48-hour one as the last available data. We apply this correction to the models presented in Section \ref{sec:statsml} as well as to the ones resulting from the adaptation framework of Section \ref{sec:adaptation}, see the numerical improvements in Section \ref{sec:exp_indiv}. \section{Adaptation using State-Space Models}\label{sec:adaptation} Due to the lockdowns, consumers' behaviour changed abruptly, and therefore the models presented in Section \ref{sec:statsml} perform poorly during Spring 2020 and afterwards, see Figure \ref{fig:adaptation_motivation}. \begin{figure} \centering \includegraphics[width=8cm]{ma_error_offline.pdf} \caption{Evolution of the forecasting error for the different models introduced in Section \ref{sec:statsml} trained on the data up to January 1\textsuperscript{st} 2020 and with intraday correction (Section \ref{sec:intraday}).} \label{fig:adaptation_motivation} \end{figure} To adapt the models in time, we rely on linear Gaussian state-space models, summarized as \begin{align*} & \theta_t - \theta_{t-1} \sim \mathcal{N}(0,Q_t)\,, \\ & y_t - \theta_t^\top x_t \sim \mathcal{N}(0,\sigma_t^2) \,, \end{align*} where $\theta_t$ is the latent state, $Q_t$ the process noise covariance matrix and $\sigma_t^2$ the observation variance. \subsection{Definition of $x_t$} This state-space representation is natural for linear regression, for which $x_t$ is the vector containing the explanatory variables detailed in Section \ref{sec:statsml}. Autoregressive models also fit directly in that framework, as they are in fact linear models on lags of the load, see Equation \eqref{eq:autoregressive}. To adapt the GAM and the MLP we linearize the models, and $x_t$ is then another feature representation. We freeze the non-linear effects in the GAM as in \cite{obst2021adaptive}, and $x_t$ contains the different effects, linear and non-linear. We apply a similar approach for the MLP, for which we freeze the deepest layers and we learn the last one; that is, $x_t$ is the final hidden state, see Figure \ref{fig:mlp}. \begin{figure} \centering \includegraphics[width=6cm]{mlp_x.png} \caption{Diagram of the definition of the features to adapt the MLP. The network has two hidden layers of 15 and 10 neurons; we freeze all the weights except the last ones.} \label{fig:mlp} \end{figure} The state-space approach is not applied to the random forest.
For the latter we compare with incremental offline random forests, consisting in re-training the random forest each day with all the data available at the time. \subsection{Kalman Filter}\label{sec:kf} Bayesian estimation of the state $\theta_t$ in linear Gaussian state-space models is well understood under known variances $\sigma^2,Q$. The best estimator is obtained by the well-known Kalman filter \cite{kalman1961new}. It yields recursive exact estimation of the mean and covariance matrix of the state given the past observations, denoted respectively by $\hat\theta_t$ and $P_t$. However there is no consensus in the literature as to how to tune the hyper-parameters, see for instance \cite{brockwell1991time,durbin2012time,fahrmeir1992posterior}. The widely used Expectation-Maximization algorithm is an iterative algorithm that guarantees convergence to a local maximum of the likelihood. However, there is no global guarantee, and in our case it performs poorly. We propose instead the following settings, building on \cite{obst2021adaptive}: \begin{itemize} \item \textbf{Static}. We consider the degenerate setting where $Q_t=0$ and $\hat\theta_1=0,P_1=I,\sigma_t^2=1$. \item \textbf{Static break}. We consider a break at March 1\textsuperscript{st} 2020 by setting $\hat\theta_1=0,P_1=I,\sigma_t^2=1,Q_t=0$ except $Q_T=I$ where $T$ is March 1\textsuperscript{st} 2020. \item \textbf{Dynamic}. We approximate the maximum likelihood for constant $\sigma_t^2,Q_t$. We set $P_1=\sigma^2 I$ and we observe that for a given $Q/\sigma^2$ we have closed-form solutions for $\hat\theta_1, \sigma^2$. Then we restrict ourselves to diagonal matrices $Q/\sigma^2$ whose nonzero coefficients are in $\{2^j,-30\le j\le 0\}$ and we apply a greedy procedure: starting from $Q/\sigma^2=0$ we change at each step the coefficient that improves the likelihood the most. That procedure is designed to optimize $Q$ on the training data (up to January 1\textsuperscript{st} 2020). \item \textbf{Dynamic break}. We use similar $\hat\theta_1,P_1,\sigma_t^2=\sigma^2,Q_t=Q$ as in the dynamic setting except $Q_T=P_1=\sigma^2 I$ where $T$ is March 1\textsuperscript{st} 2020. \item \textbf{Dynamic big}. We simply use $\sigma^2=1$ and a matrix $Q$ proportional to $I$ defined based on the 2020 data. \end{itemize} Note that we estimate a Gaussian posterior distribution, and therefore we obtain a probabilistic forecast of the load. Precisely, our estimate is $\theta_t\sim\mathcal{N}(\hat\theta_t,P_t)$, thus we have $y_t\sim\mathcal{N}(\hat\theta_t^\top x_t, \sigma^2+x_t^\top P_t x_t)$. The likelihood that is optimized to obtain the dynamic setting is built on that probabilistic forecast of $y_t$ given the past observations. In the competition we added quantiles of these Gaussian distributions as forecasters in an expert aggregation. \subsection{Dynamical Variances} The idea behind the break settings introduced in the previous paragraph is that we would like the model to adapt faster during an evolving period such as a lockdown than before. However it consists in modelling a break in the data, a sudden change of state resulting from a noise of much bigger variance at a specific time specified {\it a priori}. A way to extend the approach would be to define a time-varying covariance matrix depending for instance on a lockdown stringency index such as defined by \cite{hale2021global}. However the competition policy forbade the use of external data and the location was undisclosed.
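For concreteness, here is a minimal sketch of the Kalman recursions with a time-dependent process noise covariance; the break settings above correspond to $Q_t=0$ except at the break date, where $Q_T=I$. The variable names and the data arrays are ours:
\begin{verbatim}
import numpy as np

def kalman_step(theta, P, x, y, sigma2, Q):
    # prediction: theta_t = theta_{t-1} + N(0, Q_t)
    P = P + Q
    # update with the observation y_t = theta_t' x_t + N(0, sigma2)
    err = y - theta @ x             # innovation
    S = sigma2 + x @ P @ x          # innovation variance
    K = P @ x / S                   # Kalman gain
    theta = theta + K * err         # posterior mean
    P = P - np.outer(K, x @ P)      # posterior covariance
    return theta, P

# static break setting: Q_t = 0 except at the break index t_break
d = features.shape[1]               # features: (T, d), loads: (T,)
theta, P = np.zeros(d), np.eye(d)
for t, (x, y) in enumerate(zip(features, loads)):
    Q = np.eye(d) if t == t_break else np.zeros((d, d))
    theta, P = kalman_step(theta, P, x, y, sigma2=1.0, Q=Q)
\end{verbatim}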
In a longer-term perspective, it is to be hoped that the evolution of the electricity load will not be driven by lockdowns. It is therefore more generic to learn the variances of the state-space model in an adaptive fashion. We thus apply a novel approach for time-series forecasting introduced in \cite{tsw}, and named Variational Bayesian Variance Tracking, alias VIKING. We briefly recall how the method works. This method was designed in parallel with the competition and was improved afterwards. We present only the latest version. We treat the variances as latent variables and we augment the state-space model: \begin{align*} & a_t - a_{t-1} \sim\mathcal{N}(0,\rho_a)\,, \quad b_t - b_{t-1} \sim\mathcal{N}(0,\rho_b)\,, \\ & \theta_t - \theta_{t-1} \sim\mathcal{N}(0, \exp(b_t) I) \,,\\ & y_t - \theta_t^\top x_t \sim\mathcal{N}(0, \exp(a_t)) \,. \end{align*} Instead of estimating the state $\theta_t$ with variances fixed {\it a priori}, we estimate both the state and the variances represented by $a_t,b_t$. Although we have removed $\sigma_t^2,Q_t$ as hyper-parameters, we now have to set priors on $a_0,b_0$ along with the parameters $\rho_a,\rho_b$ controlling the smoothness of the dynamics on the variances. We apply a Bayesian approach. At each step, we start from a prior $p(\theta_{t-1},a_{t-1},b_{t-1}\mid \mathcal{F}_{t-1})$ obtained at the last iteration, where we introduce the filtration of the past observations $\mathcal{F}_t=\sigma(x_1,y_1,\ldots,x_t,y_t)$. Then we obtain a prediction step thanks to the dynamical equations, yielding $p(\theta_t,a_t,b_t\mid \mathcal{F}_{t-1})$. Finally Bayes' rule yields the posterior distribution $p(\theta_t,a_t,b_t\mid \mathcal{F}_t)$. However, the posterior distribution is analytically intractable; the principle of VIKING is therefore to apply the classical variational Bayesian approach \cite{vsmidl2006variational}. The posterior distribution is recursively approximated with a factorized distribution. In our setting we look for the best product $\mathcal{N}(\hat\theta_{t\mid t},P_{t\mid t}) \mathcal{N}(\hat a_{t\mid t},s_{t\mid t})\mathcal{N}(\hat b_{t\mid t},\Sigma_{t\mid t})$ approximating $p(\theta_t,a_t,b_t\mid \mathcal{F}_t)$. The criterion minimized is the Kullback-Leibler (KL) divergence \begin{align*} KL(\mathcal{N}(\hat\theta_{t\mid t},P_{t\mid t}) \mathcal{N}(\hat a_{t\mid t},s_{t\mid t})\mathcal{N}(\hat b_{t\mid t},\Sigma_{t\mid t})\ ||\ p(\theta_t,a_t,b_t\mid \mathcal{F}_t)) \,, \end{align*} where $KL(p,q)=\int p\log(p/q)\,dx$. At each step it yields a coupled optimization problem in the three Gaussian distributions. The classical iterative method (see for instance \cite{tzikas2008variational}) consists in computing alternately $\exp(\mathbb{E}[\log p(\theta_t,a_t,b_t\mid \mathcal{F}_t)])$, where the expected value is taken with respect to two of the three latent variables, and identifying the desired first two moments with respect to the other latent variable. However, the expression $\exp(\mathbb{E}_{\theta_t,b_t}[\log p(\theta_t,a_t,b_t\mid \mathcal{F}_t)])$ does not match a Gaussian distribution in $a_t$, and similarly for $b_t$. We therefore use the first two moments of the Gaussian distribution to derive an upper bound of the KL divergence for which we have an analytical solution. We refer to \cite{tsw} for the detailed derivation of the algorithm. \section{Experiments}\label{sec:experiments} We display the performance of the introduced methods, which we call experts.
Then we use aggregation of experts to leverage the specificities of each forecaster. The end of the section is devoted to a discussion of our day-to-day strategy during the competition. Finally, we refer to the implementation for more details\footnote{\href{https://gitlab.com/JosephdeVilmarest/state-space-post-covid-forecasting}{https://gitlab.com/JosephdeVilmarest/state-space-post-covid-forecasting}}. \subsection{Individual Experts}\label{sec:exp_indiv} We first display in Table \ref{tab:indiv_offline} the results of the statistical and machine learning methods of Section \ref{sec:statsml}, with or without the intraday correction of Section \ref{sec:intraday}. To present the improvement brought by the intraday correction we give the performance during a stable period (after the training of the model, but before the covid crisis). We observe that the only model for which the intraday correction does not improve the performance (RF\_GAM) is the one already including a residual correction. The improvement during the evaluation period (2021) is much bigger (a 57\% decrease of the MAE for the MLP for instance), which is natural as the intraday correction is an autoregressive, that is, an adaptive model. \begin{table} \begin{center} \caption{Mean Average Error (in MW) of each method of Section \ref{sec:statsml} during a normal test period.} \label{tab:indiv_offline} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Adaptation & AR & Linear & GAM & RF & RF\_GAM & MLP \\ \hline Offline & 29.3 & 20.8 & 20.7 & 24.6 & 23.0 & 21.2 \\ Offline intraday & 27.0 & 19.9 & 19.3 & 24.4 & 23.7 & 20.6 \\ \hline \end{tabular} \end{center} Models are trained up to Jan. 1\textsuperscript{st} 2020 and tested during the next two months, before the break of March. \end{table} Then we focus on adaptive models to show the improvements due to each setting, see Table \ref{tab:individual}. \begin{table}[] \begin{center} \caption{Mean Average Error (in MW) of each method during the competition evaluation set (2021-01-18 to 2021-02-16).} \label{tab:individual} \begin{tabular}{|c|c|c|c|c|} \hline Adaptation & AR & Linear & GAM & MLP \\ \hline Offline & 14.6 & 22.8 & 22.7 & 16.7 \\ \hline Static & 20.5 & 15.7 & 17.0 & 22.9 \\ Static break & 27.9 & 14.4 & 28.4 & 35.4 \\ Dynamic & 14.4 & 14.9 & 15.3 & 13.0 \\ Dynamic break & 16.2 & 13.6 & 14.3 & 12.3 \\ Dynamic big & 14.3 & 11.2 & 12.4 & 12.4 \\ \hline VIKING & 14.4 & 11.5 & 12.7 & 12.5 \\ \hline \end{tabular} \end{center} The performances are displayed for each model after intraday correction. As a comparison, re-training the random forest every day yields an online RF of MAE 15.0 MW, and an online RF\_GAM of MAE 18.1 MW. \end{table} We have 4 different models (autoregressive, linear, GAM and MLP). For each one, we try the various adaptation settings (no adaptation, Kalman filters and VIKING). Kalman filters with constant covariance matrix proportional to the identity obtain the best results. That is not the case on the data prior to the competition, and it depends on the intrinsic evolution of the data. We illustrate the different settings in Figure \ref{fig:evol_gam} where we display the evolution of the state coefficients for the GAM adaptation strategies. \begin{figure*} \centering \includegraphics[width=5cm]{evol_static.pdf} \includegraphics[width=5cm]{evol_dynamic.pdf} \includegraphics[width=5cm]{evol_viking.pdf} \caption{Evolution of the state coefficients for various adaptations of the GAM, see Section \ref{sec:adaptation}.
On the left, we use the Kalman filter in the static setting (degenerate covariance matrix $Q_t=0$). In the middle, the dynamic setting where the variances are constant, and we provide the ratio $Q/\sigma^2=\mathrm{diag}(2^{-17},0,2^{-8},2^{-16},2^{-14},0,0)$: we observe that the coefficient corresponding to the biggest coefficient of $Q$ (the effect of $Temps95$) evolves much faster. On the right, the VIKING setting where we estimate the variances adaptively.} \label{fig:evol_gam} \end{figure*} \subsection{Aggregation} Online robust aggregation of experts \cite{Cesa-Bianchi:2006} is a powerful model-agnostic approach for time series forecasting, already applied to load forecasting during the lockdown in \cite{obst2021adaptive}. We use the ML-Poly algorithm proposed in \cite{gaillard2014second} and implemented in the \texttt{R} package \texttt{opera} \cite{gaillard2016opera} to compute these online weights. The aggregation weights are estimated independently for each hour of the day. We summarize different variants in Table \ref{tab:aggregation}. \begin{table}[] \centering \caption{Mean average error of aggregation strategies (in MW) during the competition evaluation set (2021-01-18 to 2021-02-16).} \label{tab:aggregation} \begin{tabular}{|c|c|c|c|c|c|} \hline Adaptation & AR & Linear & GAM & MLP & All \\ \hline Best expert & 14.3 & 11.2 & 12.7 & 12.3 & 11.2 \\ \hline Aggregation & 14.4 & 11.4 & 11.7 & 11.9 & {\bf 10.9} \\ \hline \end{tabular} \end{table} First, for each family of models we compute the aggregation of all the adaptation settings (7 for each). Then we aggregate all of them (28 models). An example of the weights obtained at 3PM is displayed in Figure \ref{fig:weights}. \begin{figure} \centering \includegraphics[height=6cm]{weights15.pdf} \caption{Evolution of the aggregation weights at 3PM from July 1\textsuperscript{st} 2020 to February 16\textsuperscript{th} 2021.} \label{fig:weights} \end{figure} The aggregation presented in this paper obtains a performance close to that of our winning strategy in the competition (a degradation of about 0.05 MW). \subsection{Day-to-day Forecasts}\label{sec:daybyday} During the competition our predictions were not exactly the ones of the aggregation method presented in the previous subsection. There are mainly two reasons for that. First, we considered a bigger set of forecasting methods (we had $72$ experts). It seemed reasonable to prune the strategy for the sake of clarity of the paper at the cost of a very small change of error, but it is also interesting to present the predictions used during the evaluation. We found a trade-off in the selection of experts. Indeed, too many experts in the aggregation yield poor performances. We applied a greedy procedure to select the experts we keep in the aggregation: we begin with an empty set, and at each step we add the expert improving the performance the most. That performance was evaluated with the MAE on the last month of the training data set. We provide in Figure \ref{fig:segmentation} a graphical representation of how we defined different time periods. We refer to Figure \ref{fig:greedy_selection} for the evolution of the validation MAE as the selection grows. We observe a sharp decrease of the error as experts are added with high diversity, and then a slow increase of the loss as the set of experts becomes too large. \begin{figure} \centering \includegraphics[width=8cm]{segmentation.png} \caption{Segmentation of the data set.
Meteorological corrections as well as time-invariant forecasts were trained on the train period (up to January 1\textsuperscript{st} 2020). Adaptive forecasting methods evolved over the whole period. Then the aggregation weights were trained by ML-Poly from July 1\textsuperscript{st} 2020, and the expert selection was determined with respect to the validation set.} \label{fig:segmentation} \end{figure} \begin{figure} \centering \includegraphics[height=6cm]{select_aggreg.pdf} \caption{Evolution of the validation MAE as the expert selection grows from 1 to 30 experts. The nomenclature is provided in the Appendix.} \label{fig:greedy_selection} \end{figure} Second, we were constantly experimenting with different strategies. We used a variational Bayesian method that was an earlier version of the one presented in this paper. We also changed the aggregation procedure a lot. We refer to the appendix for a detailed presentation of our daily strategy. Overall these day-to-day changes degraded the performance: if we had kept the first strategy with no change at all, our MAE would have been 10.51 MW instead of 10.84 MW. The critical issue in such unstable periods is to find the right validation period to select the prediction procedure. The month before the evaluation period seems {\it a posteriori} a good compromise. During the competition we made ``manual'' changes based on the performances over a shorter range, for instance considering an expert performing well over the last few weeks for a specific day type. We should have trusted the aggregation's robustness. \section{Conclusion} In this paper we presented our procedure to win a competition on electricity load forecasting during an unstable period. Our approach relies heavily on state-space models, and the competition was the first data set on which a recent approach to adapt the variances of a state-space model was applied. Some perspectives were raised during the competition, such as the interpretability of the global approach and a better understanding of the error propagation along the different adaptations (intraday correction, Kalman filtering, variance tracking and aggregation). Finally, similar state-space methods have been applied to obtain the first place in another competition in which the objective was to forecast the electricity consumption of a building\footnote{\href{http://www.gecad.isep.ipp.pt/smartgridcompetitions/}{http://www.gecad.isep.ipp.pt/smartgridcompetitions/}}. \section*{Appendix} \subsection{Nomenclature} The experts \texttt{AR}, \texttt{Lin}, \texttt{GAM}, \texttt{RF}, \texttt{RF\_GAM}, \texttt{MLP} are the ones presented in that same order in Section \ref{sec:statsml}. Names of the form \texttt{model\_setting} refer to the expert obtained by state-space adaptation of the model \texttt{model} with the setting \texttt{setting}. For instance, \texttt{Lin\_dynamic} refers to a linear model adapted with the Kalman filter in the dynamic setting, {\it cf.} Section \ref{sec:kf}. We consider quantile variants of \texttt{RF}, denoted by \texttt{RFq} where \texttt{q} is the quantile order in percent ({\it e.g.} \texttt{RF40} is the quantile random forest of quantile value 0.4). We also consider a quantile variant of the dynamic MLP denoted by similar names (\texttt{MLP\_dynamic60} is the quantile 0.6 of the MLP in the dynamic setting). Furthermore, we introduce an expert named \texttt{GAM\_SAT} forecasting each day with the GAM as if it were a Saturday, motivated by \cite{obst2021adaptive}.
Finally, each expert \texttt{x} yields another expert \texttt{x\_corr} after intraday correction. \subsection{Day-to-day Evolution of the Forecasting Strategy} As explained in Section \ref{sec:daybyday}, our strategy evolved in time and we recall here every change. \begin{itemize} \item \textbf{From January 18\textsuperscript{th} to January 24\textsuperscript{th}}: we used the following set of experts obtained by the greedy selection described in Section \ref{sec:daybyday}: \texttt{RF}, \texttt{RF\_corr}, \texttt{RF50\_corr}, \texttt{RF60\_corr}, \texttt{Lin\_dynamic\_corr}, \texttt{Lin\_viking\_corr}, \texttt{GAM}, \texttt{GAM\_corr}, \texttt{GAM\_staticbreak\_corr}, \texttt{GAM\_dynamic\_corr}, \texttt{GAM\_viking\_corr}, \texttt{GAM\_dynamicbig\_corr}, \texttt{RF\_GAM}, \texttt{RF\_GAM\_corr}, \texttt{GAM\_SAT\_corr}, \texttt{MLP\_dynamic60}, \texttt{MLP\_dynamic90}, \texttt{MLP\_dynamic99}. We aggregated with ML-Poly, the aggregation being estimated independently for each hour, with the absolute loss. We found afterwards a bug in \texttt{RF50\_corr, RF60\_corr}: the quantile RF were set to 0 on the test set, so that these two experts were simple intraday autoregressives trained in an unintended manner. \item \textbf{From January 25\textsuperscript{th} to January 31\textsuperscript{st}}: we removed the experts \texttt{RF50\_corr, RF60\_corr} and we replaced them with \texttt{AR\_corr}. \item \textbf{February 1\textsuperscript{st} and 2\textsuperscript{nd}}: we used the uniform average between three forecasts. First, the previous aggregation. Second, another aggregation procedure called RF-stacking, consisting in a quantile random forest minimizing the MAE and taking as input the 72 experts as well as the day type and hour of the day. Third, a benchmark close to the one given by the competition organizers: we predict each time with the last available load of the same hour and the same day group (week days, Saturdays and Sundays). \item \textbf{From February 3\textsuperscript{rd} to 7\textsuperscript{th}}: we removed the benchmark, which damaged the performances. For \textbf{Feb. 5\textsuperscript{th}} we corrected the ML-Poly prediction using a special day correction, once we observed that Feb. 5\textsuperscript{th} had shown a special behaviour over the last three years. Precisely, we observed that the relative error of the model was significantly negative over the last three years, a behaviour that may come from a bank holiday for instance. Therefore we fitted a smoothed function of the time of day on the relative error and applied it to our forecast. We truncated it so that there is no correction during the night. See the shape of the correction in Figure \ref{fig:specialday}. \begin{figure} \centering \includegraphics[width=7cm]{specialday.pdf} \caption{Special day correction applied on February 5\textsuperscript{th}. It is a multiplicative correction, {\it e.g.} at midday we reduce our forecast by about 3.8\%.} \label{fig:specialday} \end{figure} \item \textbf{February 8\textsuperscript{th}}: we used the single expert \texttt{Lin\_dynamicbig\_corr} as we observed that it was by far our best expert over the last week, and it seemed to perform especially well on Mondays.
\item \textbf{February 9\textsuperscript{th}}: we came back to the average between ML-Poly and the RF-stacking, but we added to the ML-Poly aggregation the expert \texttt{Lin\_dynamicbig\_corr}, and we replaced the expert \texttt{AR\_corr} with another expert \texttt{AR\_intra} incorporating directly the intraday correction in the autoregressive, instead of correcting an autoregressive based only on daily lags. \item \textbf{February 10\textsuperscript{th} and 11\textsuperscript{th}}: we removed the RF-stacking, which had degraded our performances since its introduction, and we kept only the ML-Poly aggregation. \item \textbf{February 12\textsuperscript{th} and 13\textsuperscript{th}}: we corrected {\it a posteriori} the electricity load for February 5\textsuperscript{th} with the special day correction. It was important to do it on that day as the weekly lag is important in the models. \item \textbf{February 14\textsuperscript{th}}: we used once again the average between the ML-Poly aggregation and the RF-stacking, as we observed that the RF-stacking is especially good on Sundays. \item \textbf{February 15\textsuperscript{th} and 16\textsuperscript{th}}: we used only the ML-Poly aggregation. \end{itemize} \bibliographystyle{IEEEtran}
\section{Introduction} Let $(Z_t)_{t\geq 0}$ be a one-dimensional square integrable L\'evy process. Then for some $a \in {\mathbb{R}}$, some $b \in {\mathbb{R}}_+$, and some measure $\nu$ on ${\mathbb{R}}_*$ satisfying $\int_{{\mathbb{R}}_*} z^2 \nu(dz) <\infty$, \begin{equation}\label{levy} Z_t = at+ b B_t + \displaystyle \int_0^t \displaystyle \int_{\rr_*} z {\tilde N}(ds,dz), \end{equation} where $(B_t)_{t\geq 0}$ is a standard Brownian motion, independent of a Poisson measure $N(ds,dz)$ on $[0,\infty)\times {\mathbb{R}}_*$ with intensity measure $ds\nu(dz)$, and where ${\tilde N}$ is its compensated Poisson measure, see Jacod-Shiryaev \cite{js}. \vskip.2cm We consider, for some $x\in {\mathbb{R}}$ and some function $\sigma:{\mathbb{R}}\mapsto {\mathbb{R}}$, the S.D.E. \begin{equation}\label{sde} X_t = x + \displaystyle \int_0^t \sigma(X_{s-})dZ_s. \end{equation} Using some classical results (see e.g. Ikeda-Watanabe \cite{iw}), there is strong existence and uniqueness for (\ref{sde}) as soon as $\sigma$ is Lipschitz continuous: for any given couple $(B,N)$, there exists a unique c\`adl\`ag adapted solution $(X_t)_{t\geq 0}$ to (\ref{sde}). By {\it adapted}, we mean adapted to the filtration $({\mathcal F}_t)_{t\geq 0}$ generated by $(B,N)$. \vskip.2cm We consider two related problems in this paper. The first one, exposed in the next section, deals with the numerical approximation of the solution $(X_t)_{t\geq 0}$. The second one concerns the approximation of $(X_t)_{t\geq 0}$ by the solution to a Brownian S.D.E., when $Z$ has only very small jumps, and is presented in Section \ref{sapp}. \vskip.2cm Our results are based on a recent work of Rio \cite{r} that concerns the rate of convergence in the central limit theorem, when using the quadratic Wasserstein distance. This result, and its application to L\'evy processes, is exposed in Section \ref{wass}. \section{Numerical simulation} The first goal of this paper is to study a numerical scheme to solve (\ref{sde}). The first idea is to perform an Euler scheme $(X^n_{i/n})_{i\geq 0}$ with time-step $1/n$, see Jacod \cite{j}, Jacod-Protter \cite{jp}, Protter-Talay \cite{pt} for rates of convergence. However, this is generally not a good scheme in practice, unless one knows how to simulate the increments of the underlying L\'evy process, which is the case e.g. when $Z$ is a stable process. \vskip.2cm We assume here that the L\'evy measure $\nu$ is known explicitly: one can thus simulate random variables with law $\nu(dz){{\bf 1}}_A(z)/\nu(A)$, for any $A$ such that $\nu(A)<\infty$. \vskip.2cm The simplest idea is then to approximate the increments of $Z$ by $\widehat \Delta_{i}^{n,{\epsilon}}= Z^{{\epsilon}}_{i/n}-Z^{{\epsilon}}_{(i-1)/n}$, where $Z^{{\epsilon}}_t$ is the same L\'evy process as $Z$ without its (compensated) jumps smaller than ${\epsilon}$. However, Asmussen-Rosinski \cite{ar} have shown that for a L\'evy process with many small jumps, it is more convenient to approximate small jumps by some Gaussian variables than to neglect them. We thus introduce $\Delta_{i}^{n,{\epsilon}}= \widehat\Delta^{n,{\epsilon}}_{i}+ U^{n,{\epsilon}}_{i}$, where $U^{n,{\epsilon}}_i$ is Gaussian with the same mean and variance as the neglected jumps. The arguments of \cite{ar} concern only L\'evy processes, and it does not seem so easy to apply such an idea to the simulation of SDEs. \vskip.2cm Let us write $(\widehat X^{n,{\epsilon}}_{[nt]/n})_{t\geq 0}$ (resp.
$(X^{n,{\epsilon}}_{[nt]/n})_{t\geq 0}$) for the Euler scheme using the approximate increments $(\widehat \Delta^{n,{\epsilon}}_i)_{i\geq 1}$ (resp. $(\Delta^{n,{\epsilon}}_i)_{i\geq 1}$). They of course have a similar computational cost. \vskip.2cm Jacod-Kurtz-M\'el\'eard-Protter \cite{jkmp} have systematically computed the {\it weak} error for the {\it approximate} Euler scheme. In particular, they prove some very fine estimates of $\mathbb{E}[g(X_{[nt]/n}^{n,{\epsilon}})] - \mathbb{E}[g(X_t)]$ for $g$ smooth enough. The obtained rate of convergence is very satisfying. \vskip.2cm Assume now that the goal is to approximate some functional of the path of the solution (e.g. $\sup_{[0,T]} |X_t|$). Then we have to estimate the error between the laws of the paths of the processes (not only between the laws of the time marginals). A common way to perform such an analysis is to introduce a suitable coupling between the numerical scheme $(X_{[nt]/n}^{n,{\epsilon}})_{t\geq 0}$ and the true solution $(X_t)_{t\geq 0}$, and to estimate the (discretized) {\it strong} error $\mathbb{E}[\sup_{[0,T]} |X^{n,{\epsilon}}_{[nt]/n}-X_{[nt]/n}|^2]$. We refer to Jacod-Jakubowski-M\'emin \cite{jjm} for the rate of convergence of the discretized process $(X_{[nt]/n})_{t\geq 0}$ to the whole process $(X_{t})_{t\geq 0}$. \vskip.2cm Rubenthaler \cite{ru} has studied the strong error when neglecting small jumps. He obtains roughly $\mathbb{E}[\sup_{[0,T]} |\widehat X^{n,{\epsilon}}_{[nt]/n}-X_{[nt]/n}|^2] \simeq C_T(n^{-1}+\int_{|z|\leq {\epsilon}} z^2 \nu(dz))$ (if $b\ne 0$). For $\nu$ very singular near $0$, the obtained precision is very low. \vskip.2cm Our aim here is to study the strong error when using $X^{n,{\epsilon}}_{[nt]/n}$. We will see that the precision is much higher (see Subsection \ref{op} below). \vskip.2cm The main difficulty is to find a suitable coupling between the true increments $(Z_{i/n}-Z_{(i-1)/n})_{i\geq 1}$ and the approximate increments $(\Delta^{n,{\epsilon}}_{i})_{i\geq 1}$: clearly, one considers $Z$, then one erases its jumps smaller than ${\epsilon}$, but how to build the additional Gaussian variable in such a way that it is a.s. close to the erased jumps? We will use a recent result of Rio \cite{r}, which gives a very precise rate of convergence for the standard central limit theorem in Wasserstein distance, in the spirit of Koml\'os-Major-Tusn\'ady \cite{kmt}. \subsection{Notation}\label{sn} We introduce, for ${\epsilon}\in (0,1)$, $k\in {\mathbb{N}}$, \begin{align}\label{mom} &F_{\epsilon}(\nu) = \int_{|z|> {\epsilon}} \nu(dz), \quad m_k(\nu)=\int_{{\mathbb{R}}_*} |z|^k \nu(dz), \\ \nonumber &m_{k,{\epsilon}}(\nu) =\int_{|z|\leq {\epsilon}} |z|^k\nu(dz), \quad \delta_{\epsilon}(\nu)= \frac{m_{4,{\epsilon}}(\nu)}{m_{2,{\epsilon}}(\nu)}. \end{align} Observe that we always have $\delta_{\epsilon}(\nu) \leq {\epsilon}^2$ and $F_{\epsilon}(\nu) \leq {\epsilon}^{-2} m_2(\nu)$. \vskip.2cm For $n\in{\mathbb{N}}$ and $t\geq 0$, we set $\rho_n(t)=[nt]/n$, where $[x]$ is the integer part of $x$. \subsection{Numerical scheme}\label{sscheme} Let $n \in {\mathbb{N}}$ and ${\epsilon}\in (0,1)$ be fixed. We introduce an i.i.d.
sequence $(\Delta^{n,{\epsilon}}_i)_{i\geq 1}$ of random variables, with \begin{equation}\label{deltaneps} \Delta^{n,{\epsilon}}_1= a_{n,{\epsilon}} + b_{n,{\epsilon}} G + \sum_{1}^{N_{n,{\epsilon}}} Y_i^{\epsilon}, \end{equation} where $a_{n,{\epsilon}}=(a - \int_{|z|>{\epsilon}} z \nu(dz))/n$, where $b^2_{n,{\epsilon}}=(b^2+ m_{2,{\epsilon}}(\nu))/n$, where $G$ is Gaussian with mean $0$ and variance $1$, where $N_{n,{\epsilon}}$ is Poisson distributed with mean $F_{\epsilon}(\nu)/n$, and where $Y_1^{\epsilon},Y_2^{\epsilon},...$ are i.i.d. with law $\nu(dz){{\bf 1}}_{|z|> {\epsilon}} / F_{\epsilon}(\nu)$. All these random variables are assumed to be independent. Then we introduce the scheme \begin{equation}\label{scheme} X^{n,{\epsilon}}_0=x, \quad X^{n,{\epsilon}}_{(i+1)/n}= X^{n,{\epsilon}}_{i/n}+ \sigma(X^{n,{\epsilon}}_{i/n} )\Delta^{n,{\epsilon}}_{i+1} \quad (i\geq 0). \end{equation} Observe that $\bullet$ the cost of simulation of $\Delta^{n,{\epsilon}}_1$ is of order $1+\mathbb{E}[N_{n,{\epsilon}}]=1+F_{\epsilon}(\nu)/n$, whence that of $(X^{n,{\epsilon}}_{\rho_n(t)})_{t\in [0,T]}$ is of order $Tn(1+F_{\epsilon}(\nu)/n) = T(n + F_{\epsilon}(\nu) )$, as in \cite{ru}; $\bullet$ $\Delta^{n,{\epsilon}}_{i+1}$ has the same law as $Z^{\epsilon}_{(i+1)/n}-Z^{\epsilon}_{i/n} + U_{n,{\epsilon}}$, where $U_{n,{\epsilon}}$ is Gaussian with the same mean and variance as $\int_{i/n}^{(i+1)/n} \int_{|z|\leq {\epsilon}} z{\tilde N}(ds,dz)$ and where $Z^{\epsilon}_t=at+bB_t+\int_0^t \int_{|z|>{\epsilon}}z{\tilde N}(ds,dz)$. \subsection{Main result} We may now state our main result. \begin{thm}\label{main} Assume that $\sigma:{\mathbb{R}}\mapsto {\mathbb{R}}$ is bounded and Lipschitz continuous. Let ${\epsilon}\in(0,1)$ and $n\in {\mathbb{N}}$. There is a coupling between a solution $(X_t)_{t\geq 0}$ to (\ref{sde}) and an approximated solution $(X^{n,{\epsilon}}_{\rho_n(t)})_{t\geq 0}$ as in Subsection \ref{sscheme} such that for all $T>0$, \begin{equation*} \mathbb{E}\left[\sup_{[0,T]} |X_{\rho_n(t)}- X^{n,{\epsilon}}_{\rho_n(t)}|^2 \right] \leq C_T \left( n^{-1} + n \delta_{\epsilon}(\nu) \right), \end{equation*} where the constant $C_T$ depends only on $T,\sigma,a,b,m_2(\nu)$. \end{thm} The first bound $n^{-1}$ is due to the time discretization (Euler scheme), and the second bound $n \delta_{\epsilon}(\nu)$ is due to the approximation of the increments of the L\'evy process. As noted by Jacod \cite{j}, the first bound may be improved if there is no Brownian motion ($b=0$), but we then have to work with some weaker norm. \vskip.2cm We assume here that $m_2(\nu)<\infty$ for simplicity: this allows us to work in $L^2$. However, we believe that Theorem \ref{main} allows one to show that in the general case where $\int_{{\mathbb{R}}_*} \min(z^2,1)\nu(dz)<\infty$, the family $(n^{-1}+n\delta_{\epsilon}(\nu))^{-1} \sup_{[0,T]} |X_{\rho_n(t)}-X^{n,{\epsilon}}_{\rho_n(t)} |^2$ is tight: decompose the L\'evy process $Z_t=Z^1_t+Z^2_t$, where $Z^1$ satisfies our assumptions, and $Z^2$ is a compound Poisson process. Apply Theorem \ref{main} between the jumps of $Z^2$, and paste the pieces together: this might be complicated to write, but the principle is very simple. \subsection{Optimisation}\label{op} Choose ${\epsilon}=1/n$. Then recalling that $\delta_{\epsilon}(\nu) \leq {\epsilon}^2$, we get $$ \mathbb{E}\left[\sup_{[0,T]} |X_{\rho_n(t)}- X^{n,1/n}_{\rho_n(t)}|^2 \right] \leq C_T /n, $$ for a mean cost to simulate $(X^{n,1/n}_{\rho_n(t)})_{t\in [0,T]}$ of order $T(n + F_{1/n}(\nu) )$.
$\bullet$ We always have $F_{\epsilon}(\nu)\leq C {\epsilon}^{-2}$, so that the cost is always smaller than $T n^2$. $\bullet$ If $\nu(dz) \stackrel {z\to 0} \simeq |z|^{-1-\alpha} dz$ for some $\alpha \in (0,2)$, $F_{\epsilon}(\nu) \simeq {\epsilon}^{-\alpha}$, so that the cost is of order $T n^{\max(1,\alpha)}$. \vskip.2cm Still assume that $\nu(dz) \stackrel {z\to 0} \simeq |z|^{-1-\alpha} dz$, for some $\alpha \in (0,2)$. When neglecting the small jumps, the mean cost to get a mean squared error of order $1/n$ is of order $T n^{\max(1,\alpha/(2-\alpha))}$ (see \cite{ru}), which is huge when $\alpha$ is close to $2$. We observe that the present method is more efficient as soon as $\alpha>1$. \subsection{Discussion} The computational cost to get a given precision does not explode when the L\'evy measure becomes very singular near $0$. The more singular $\nu$ is at $0$, the more jumps greater than ${\epsilon}$ there are, which costs many simulations. But the more singular it is, the better the jumps smaller than ${\epsilon}$ are approximated by Gaussian random variables. These two phenomena are in competition, and we prove that the second one compensates (partly) the first one. \vskip.2cm Our result involves a suitable coupling between the solution $(X_t)_{t\geq 0}$ and its approximation $(X^{n,{\epsilon}}_t)_{t\geq 0}$. Of course, this is not very interesting in practice, since by definition, $(X_t)_{t\geq 0}$ is completely unknown. This is just an artificial way to estimate the rate of convergence {\it in law}, using a Wasserstein type distance. \vskip.2cm The simulation algorithm can easily be adapted to the case of dimension $d\geq 2$. We believe that the result still holds. However, the result of Rio \cite{r} is not known in the multidimensional setting (although it is believed to hold). We could use instead the results of Einmahl \cite{e}. This would be much more technical, and would lead to a lower rate of convergence. \section{Brownian approximation}\label{sapp} Consider the L\'evy process introduced in (\ref{levy}), consider $x\in {\mathbb{R}}$, $\sigma:{\mathbb{R}}\mapsto {\mathbb{R}}$ Lipschitz continuous, and the unique solution $(X_t)_{t\geq 0}$ to (\ref{sde}). Recall (\ref{mom}), consider a Brownian motion $(W_t)_{t\geq 0}$, set \begin{equation}\label{levy2} {\tilde Z}_t = at + \sqrt{b^2+m_2(\nu)}W_t, \end{equation} which has the same mean and variance as $Z_t$. Let $({\tilde X}_t)_{t\geq 0}$ be the unique solution to \begin{equation}\label{sde2} {\tilde X}_t = x + \displaystyle \int_0^t \sigma({\tilde X}_{s-}) d{\tilde Z}_s. \end{equation} \begin{thm}\label{main2} Assume that $\sigma$ is Lipschitz continuous and bounded. Then it is possible to couple the solutions $(X_t)_{t\geq 0}$ to (\ref{sde}) and $({\tilde X}_t)_{t\geq 0}$ to (\ref{sde2}) in such a way that for all $p\geq 4$, all $T>0$, all $n \geq 1$, recall (\ref{mom}) \begin{equation*} \mathbb{E}\left[\sup_{[0,T]} |X_t-{\tilde X}_t|^2 \right] \leq C_{T,p} \left( n^{2/p-1} + m_p(\nu)^{2/p} + n m_4(\nu)\right), \end{equation*} where $C_{T,p}$ depends only on $p,T,\sigma,a,b,m_2(\nu)$. \end{thm} If we only know that $m_4(\nu)<\infty$, then we choose $n=[m_4(\nu)^{-2/3}]$, and we get, at least when $m_4(\nu)\leq 1$, $\mathbb{E}\left[\sup_{[0,T]} |X_t-{\tilde X}_t|^2 \right] \leq C_{T} m_4(\nu)^{1/3}$. \vskip.2cm Consider a sequence of L\'evy processes $(Z^{\epsilon}_t)_{t\geq 0}$ with drift $a$, diffusion coefficient $b$ and L\'evy measure $\nu_{\epsilon}$, such that $z^2\nu_{\epsilon}(dz)$ tends weakly to $\delta_0$.
Then $\lim_{{\epsilon} \to 0} m_2(\nu_{\epsilon}) = 1$, while in almost all cases, $\lim_{{\epsilon}\to 0} m_p(\nu_{\epsilon})=0$ for some (or all) $p>2$. Consider the solution to $X^{\epsilon}_t=x+\int_0^t \sigma(X^{\epsilon}_{s-}) dZ^{\epsilon}_s$. Then it is well-known and easy to show that $(X^{\epsilon}_t)_{t\geq 0}$ tends in law to the solution of a Brownian S.D.E. Theorem \ref{main2} allows one to obtain a rate of convergence (for some Wasserstein distance). For example, we will immediately deduce the following corollary. \begin{cor}\label{corcv} Assume that $\sigma$ is Lipschitz continuous and bounded. Assume that $\nu(\{|z|> {\epsilon}\})=0$ for some ${\epsilon}\in (0,1]$. Then it is possible to couple the solutions $(X_t)_{t\geq 0}$ to (\ref{sde}) and $({\tilde X}_t)_{t\geq 0}$ to (\ref{sde2}) in such a way that for all $\eta \in (0,1)$, all $T>0$, \begin{equation*} \mathbb{E}\left[\sup_{[0,T]} |X_t-{\tilde X}_t|^2 \right] \leq C_{T,\eta} {\epsilon}^{1-\eta}, \end{equation*} where $C_{T,\eta}$ depends only on $\eta,T,\sigma,a,b,m_2(\nu)$. \end{cor} The original motivation of this work was to estimate the error when approximating the Boltzmann equation by the Landau equation. The Boltzmann equation is a P.D.E. that can be related to a Poisson-driven S.D.E. (see Tanaka \cite{t}), while the Landau equation can be related to a Brownian S.D.E. (see Gu\'erin \cite{g}). In the grazing collision limit, the S.D.E. related to the Boltzmann equation has only very small jumps. However, many additional difficulties arise for those equations. Furthermore, we are able to prove our results only in dimension $1$, while the kinetic Boltzmann and Landau equations involve $3$-dimensional S.D.E.s. \section{Coupling results}\label{wass} Consider two laws $P,Q$ on ${\mathbb{R}}$ with finite variance. The Wasserstein distance ${\mathcal W}_2$ is defined by \begin{equation*} {\mathcal W}^2_2(P,Q)=\inf\left\{\mathbb{E}\left[|X-Y|^2\right], \; {\mathcal L}(X)=P,{\mathcal L}(Y)=Q \right\}. \end{equation*} With an abuse of notation, we also write ${\mathcal W}_2(X,Y)={\mathcal W}_2(X,Q)= {\mathcal W}_2(P,Q)$ if ${\mathcal L}(X)=P$ and ${\mathcal L}(Y)=Q$. We recall the following result of Rio \cite[Theorem 4.1]{r}. \begin{thm}\label{rio} There is a universal constant $C$ such that for any sequence of i.i.d. random variables $(Y_i)_{i\geq 1}$ with mean $0$ and variance $\theta^2$, for any $n\geq 1$, \begin{equation*} {\mathcal W}_2^2 \left( \frac{1}{\sqrt n}\sum_{1}^n Y_i, {\mathcal N}(0,\theta^2)\right) \leq C \frac{\mathbb{E}[Y_1^4]}{n \theta^2}. \end{equation*} \end{thm} Here ${\mathcal N}(0,\theta^2)$ is the Gaussian distribution with mean $0$ and variance $\theta^2$. Recall now (\ref{mom}). \begin{cor}\label{col} Consider a pure jump centered L\'evy process $(Y_t)_{t\geq 0}$ with L\'evy measure $\mu$. In other words, $Y_t=\int_0^t \int_{{\mathbb{R}}_*}z {\tilde M}(ds,dz)$, where ${\tilde M}$ is a compensated Poisson measure with intensity $ds \mu(dz)$. There is a universal constant $C$ such that \begin{equation*} \forall \; t\geq 0, \quad {\mathcal W}_2^2 \left( Y_t, {\mathcal N}(0,t m_2(\mu)) \right) \leq C \frac{m_4(\mu)}{m_2(\mu)}. \end{equation*} \end{cor} \begin{proof} Let $t>0$. For $n\geq 1$, $i\geq 1$, write $Y_i^n=n^{1/2}\int_{(i-1)t/n}^{it/n} \int_{{\mathbb{R}}_*} z {\tilde M}(ds,dz)$, whence $Y_t=n^{-1/2}\sum_1^n Y_i^n$.
The $Y_i^n$ are i.i.d., centered, $\mathbb{E}[(Y_1^n)^2]=t m_2(\mu)$, and, by the standard moment formula for compensated Poisson integrals (the fourth cumulant of $\int_0^{t/n}\int_{\rr_*} z\, {\tilde M}(ds,dz)$ is $(t/n)m_4(\mu)$ and its variance is $(t/n)m_2(\mu)$), \begin{align*} \mathbb{E}[(Y_1^n)^4]=& n^2 \mathbb{E}\left[\left( \int_0^{t/n} \displaystyle \int_{\rr_*} z\, {\tilde M}(ds,dz)\right)^4 \right] = n^2 \left[t m_4(\mu)/n + 3(t m_2(\mu)/n )^2 \right]\\ =&\, ntm_4(\mu) + 3t^2 m_2(\mu)^2. \end{align*} Using Theorem \ref{rio}, we get \begin{equation*} {\mathcal W}_2^2 \left( Y_t, {\mathcal N}(0,t m_2(\mu)) \right) \leq C \frac{n t m_4(\mu) + 3t^2 m_2(\mu)^2}{n t m_2(\mu)} \stackrel{n\to \infty}{\longrightarrow} C\frac{m_4(\mu)}{m_2(\mu)}, \end{equation*} which concludes the proof. \end{proof} This result is quite surprising at first glance: since the variances of the involved variables are $tm_2(\mu)$, it would be natural to get a bound that decreases to $0$ as $t$ decreases to $0$ (and that explodes for large $t$). Of course, we deduce the bound ${\mathcal W}^2_2(Y_t,{\mathcal N}(0,tm_2(\mu)))\leq C \min(m_4(\mu)/m_2(\mu),tm_2(\mu))$, and this bound is optimal, as shown in the following example. \vskip.2cm {\bf Example.} Consider, for ${\epsilon}>0$, $\mu_{\epsilon}=(2{\epsilon}^2)^{-1}(\delta_{\epsilon}+\delta_{-{\epsilon}})$, and the corresponding pure jump (centered) L\'evy process $(Y^{\epsilon}_t)_{t\geq 0}$. It takes its values in ${\epsilon}{\mathbb{Z}}$. Observe that $m_2(\mu_{\epsilon})=1$ and $m_4(\mu_{\epsilon})={\epsilon}^2$. There is $c>0$ such that for all $t\geq 0$, all ${\epsilon}>0$, ${\mathcal W}_2^2(Y^{\epsilon}_t, {\mathcal N}(0,t))\geq c \min(t, {\epsilon}^2)= c \min(m_4(\mu_{\epsilon})/m_2(\mu_{\epsilon}),tm_2(\mu_{\epsilon}) )$. Indeed, $\bullet$ if $t\leq {\epsilon}^2$, then $\mathbb{P}(Y^{\epsilon}_t=0)\geq e^{-t\mu_{\epsilon}({\mathbb{R}}_*)}= e^{-t/{\epsilon}^2}\geq 1/e$, from which the lower bound ${\mathcal W}_2^2(Y^{\epsilon}_t, {\mathcal N}(0,t))\geq c t=c\min(t,{\epsilon}^2)$ is easily deduced; $\bullet$ if $t\geq {\epsilon}^2$, use that ${\mathcal W}_2^2(Y^{\epsilon}_t,{\mathcal N}(0,t)) \geq \mathbb{E}[\min_{n\in {\mathbb{Z}}} |t^{1/2} G- n{\epsilon}|^2 ] = t \mathbb{E}[\min_{n\in {\mathbb{Z}}} |G- n {\epsilon} t^{-1/2}|^2 ]$, where $G$ is Gaussian with mean $0$ and variance $1$. Tedious computations show that there is $c>0$ such that for any $a \in (0,1]$, $\mathbb{E}[\min_{n\in {\mathbb{Z}}} |G- n a|^2 ] \geq (a/4)^2 \mathbb{P}(G \in \cup_{n\in {\mathbb{Z}}}[(n+1/4)a, (n+3/4)a]) \geq c a^2$. Hence ${\mathcal W}_2^2(Y^{\epsilon}_t,{\mathcal N}(0,t)) \geq c t ({\epsilon} t^{-1/2})^2=c {\epsilon}^2= c \min(t,{\epsilon}^2)$. \section{Proof of Theorem \ref{main}}\label{pr1} We recall elementary results about the Euler scheme for (\ref{sde}) in Subsection \ref{euler}. We introduce our coupling in Subsection \ref{coupling}, which allows us to compare our scheme with the Euler scheme in Subsection \ref{esti}. We conclude in Subsection \ref{conclu}. We assume in the whole section that $\sigma$ is bounded and Lipschitz continuous. \subsection{Euler scheme}\label{euler} We introduce the Euler scheme with step $1/n$ associated to (\ref{sde}). Let \begin{align} &\Delta_i^n=Z_{i/n}-Z_{(i-1)/n} \quad (i\geq 1), \label{deltan} \\ &X^n_0=x,\quad X^n_{(i+1)/n}=X^n_{i/n}+ \sigma\left(X^n_{i/n}\right) \Delta^n_{i+1} \quad (i\geq 0).\label{eqeul} \end{align} The following result is classical. \begin{prop}\label{conveul} Consider a L\'evy process $(Z_t)_{t\geq 0}$ as in (\ref{levy}).
For $(X_t)_{t\geq 0}$ the solution to (\ref{sde}) and for $(X^n_{i/n})_{i\geq 0}$ defined in (\ref{deltan})-(\ref{eqeul}), \begin{equation*} \mathbb{E}\left[ \sup_{[0,T]} |X_{\rho_n(t)} - X^n_{\rho_n(t)}|^2\right] \leq C_T /n, \end{equation*} where $C_T$ depends only on $T,a,b,m_2(\nu)$, and $\sigma$. \end{prop} We sketch a proof for the sake of completeness. \begin{proof} Using the Doob and Cauchy-Schwarz inequalities, we get, for $0\leq s \leq t \leq T$, \begin{align}\label{tec1} \mathbb{E}\left[\sup_{[s,t]}|X_u-X_s|^2\right]&\leq C \mathbb{E}\Big[ \left(a\int_s^t |\sigma(X_u)| du\right)^2+ \sup_{v\in[s,t]} \left(b\int_s^v \sigma(X_u)dB_u\right)^2 \\ \nonumber &\hskip2cm +\sup_{v\in[s,t]}\left(\int_s^v \displaystyle \int_{\rr_*}\sigma(X_{u-})z {\tilde N}(du,dz)\right)^2\Big] \\ \nonumber &\leq C_T \int_s^t (a^2+b^2+m_2(\nu) )||\sigma||_\infty^2 du \leq C_T (t-s). \end{align} Observe now that $X^n_{\rho_n(t)}=x+\int_0^{\rho_n(t)} \sigma(X^n_{\rho_n(s)-})dZ_s$. Setting $A^n_t=\sup_{s\in[0,t]} |X_{\rho_n(s)} - X^n_{\rho_n(s)} |^2$, we thus get $A^n_t=\sup_{s\in[0,t]} |\int_0^{\rho_n(s)} (\sigma(X_{u-})-\sigma(X^n_{\rho_n(u)-}))dZ_u |^2$. Using the same arguments as in (\ref{tec1}), then the Lipschitz property of $\sigma$ and (\ref{tec1}), we get \begin{align*} \mathbb{E}[A^n_t]&\leq C_T \int_0^{\rho_n(t)} (a^2+b^2+m_2(\nu) ) \mathbb{E}[(\sigma(X_s)- \sigma(X^n_{\rho_n(s)}))^2] ds \\ &\leq C_T \int_0^t \mathbb{E}[(X_s-X_{\rho_n(s)})^2+ (X_{\rho_n(s)}- X^n_{\rho_n(s)})^2] ds\\ &\leq C_T \int_0^t \left( |s-\rho_n(s)| + \mathbb{E}[A^n_s] \right) ds. \end{align*} We conclude using that $|s-\rho_n(s)|\leq 1/n$ and the Gronwall Lemma. \end{proof} \subsection{Coupling}\label{coupling} We now introduce a suitable coupling between the Euler scheme (see Subsection \ref{euler}) and our numerical scheme (see Subsection \ref{sscheme}). Recall (\ref{mom}). \begin{lem}\label{lcou} Let $n \in {\mathbb{N}}$ and ${\epsilon}>0$. It is possible to build two coupled families of i.i.d. random variables $(\Delta^n_i)_{i\geq 1}$ and $(\Delta^{n,{\epsilon}}_i)_{i\geq 1}$, distributed respectively as in (\ref{deltan}) and (\ref{deltaneps}), in such a way that for each $i\geq 1$, \begin{equation*} \mathbb{E}[(\Delta^n_i -\Delta^{n,{\epsilon}}_i)^2] \leq C \delta_{\epsilon}(\nu), \end{equation*} where $C$ is a universal constant. Furthermore, for all ${\epsilon}>0$, all $n\in {\mathbb{N}}$, all $i\geq 1$, \begin{equation*} \mathbb{E}[\Delta^n_i]=\mathbb{E}[\Delta^{n,{\epsilon}}_i]= \frac{a}{n},\quad {\mathbb{V} \rm ar} [\Delta^n_i]= {\mathbb{V} \rm ar} [\Delta^{n,{\epsilon}}_i]= \frac{b^2+m_2(\nu)}{n}. \end{equation*} \end{lem} \begin{proof} It of course suffices to build $(\Delta_1^n,\Delta_1^{n,{\epsilon}})$, and then to take independent copies. Consider a Poisson measure $N(ds,dz)$ with intensity measure $ds \nu(dz){{\bf 1}}_{\{|z|\leq {\epsilon}\}}$ on $[0,\infty) \times \{|z|\leq {\epsilon}\}$. Observe that $\int_0^t \int_{|z|\leq {\epsilon}} z {\tilde N}(ds,dz)$ is a centered pure jump L\'evy process with L\'evy measure $\nu_{\epsilon}(dz)= {{\bf 1}}_{|z|\leq {\epsilon}}\nu(dz)$. Then we use Corollary \ref{col} and enlarge the underlying probability space if necessary: there is a Gaussian random variable $G_{1}^{n,{\epsilon}}$ with mean $0$ and variance $m_2(\nu_{\epsilon})/n=m_{2,{\epsilon}}(\nu)/n$ such that $\mathbb{E}\left[|\int_0^{1/n} \int_{|z|\leq {\epsilon}} z {\tilde N}(ds,dz)- G_1^{n,{\epsilon}} |^2 \right] \leq C m_4(\nu_{\epsilon})/m_2(\nu_{\epsilon}) =C\delta_{\epsilon}(\nu)$.
We consider a Brownian motion $(B_t)_{t\geq 0}$ and a Poisson measure, again denoted by $N$, with intensity measure $ds \nu(dz){{\bf 1}}_{\{|z|> {\epsilon}\}}$ on $[0,\infty) \times \{|z|> {\epsilon}\}$, independent of the pair $(G_1^{n,{\epsilon}},\int_0^{1/n} \int_{|z|\leq {\epsilon}} z {\tilde N}(ds,dz))$, and we set $\bullet$ $\Delta^n_1:= a/n + b B_{1/n} + \int_0^{1/n}\int_{|z|\leq {\epsilon}} z {\tilde N}(ds,dz) +\int_0^{1/n}\int_{|z|>{\epsilon}} z{\tilde N}(ds,dz)$, $\bullet$ $\Delta^{n,{\epsilon}}_1:= a/n + b B_{1/n} + G^{n,{\epsilon}}_1 +\int_0^{1/n}\int_{|z| >{\epsilon}} z{\tilde N}(ds,dz)$. Then $\Delta^n_1$ obviously has the same law as $Z_{1/n}-Z_0$ (see (\ref{levy}) and (\ref{deltan})), while $\Delta^{n,{\epsilon}}_1$ also has the desired law (see (\ref{deltaneps})). Indeed, $b B_{1/n}+ G^{n,{\epsilon}}_1$ has a centered Gaussian law with variance $b^2/n+m_{2,{\epsilon}}(\nu)/n=b^2_{n,{\epsilon}}$, and $a/n+\int_0^{1/n}\int_{|z| >{\epsilon}} z{\tilde N}(ds,dz)=a_{n,{\epsilon}}+ \int_0^{1/n}\int_{|z|>{\epsilon}} z N(ds,dz)$. This last integral can be represented as in (\ref{deltaneps}). Finally $\mathbb{E}[(\Delta^{n}_1-\Delta^{n,{\epsilon}}_1)^2]\leq \mathbb{E}\left[|\int_0^{1/n} \int_{|z|\leq {\epsilon}} z {\tilde N}(ds,dz)- G_1^{n,{\epsilon}} |^2 \right] \leq C \delta_{\epsilon}(\nu)$, and the mean and variance estimates are obvious. \end{proof} \subsection{Estimates}\label{esti} We now compare our scheme with the Euler scheme. To this end, we introduce some notation. First, we consider the sequence $(\Delta^n_i,\Delta^{n,{\epsilon}}_i)_{i\geq 1}$ introduced in Lemma \ref{lcou}. Then we consider $(X^n_{i/n})_{i\geq 0}$ and $(X^{n,{\epsilon}}_{i/n})_{i\geq 0}$ defined in (\ref{eqeul}) and (\ref{scheme}). We introduce the filtration ${\mathcal F}^{n,{\epsilon}}_i=\sigma(\Delta^n_k,\Delta^{n,{\epsilon}}_k, k\leq i)$, and the processes (with $V^{n,{\epsilon}}_0=0$) \begin{align*} Y^{n,{\epsilon}}_i=X^{n}_{i/n }-X^{n,{\epsilon}}_{i/ n}, \quad V^{n,{\epsilon}}_i= \frac{a}{n}\sum_{k=0}^{i-1} [\sigma(X^{n}_{k/n})-\sigma(X^{n,{\epsilon}}_{k/n})], \quad M^{n,{\epsilon}}_i= Y^{n,{\epsilon}}_i - V^{n,{\epsilon}}_i. \end{align*} \begin{lem}\label{lemtec1} There is a constant C, depending only on $\sigma,a,b,m_2(\nu)$ such that for all $N\geq 1$, \begin{equation*} \mathbb{E} \left[\sup_{i=0,...,N}|Y^{n,{\epsilon}}_i|^2 \right] \leq C n \delta_{\epsilon}(\nu) (1+C/n)^N (1+N^2/n^2). \end{equation*} \end{lem} \begin{proof} We divide the proof into four steps. {\it Step 1.} We prove that for all $i \geq 0$, $\mathbb{E} \left[|Y^{n,{\epsilon}}_i|^2 \right] \leq C n \delta_{\epsilon}(\nu) (1+C/n)^i$. First, \begin{align*} \mathbb{E}[|Y^{n,{\epsilon}}_{i+1}|^2] &= \mathbb{E}[|Y^{n,{\epsilon}}_i|^2]+ \mathbb{E}[(\sigma(X^n_{i /n})\Delta^n_{i+1} -\sigma(X^{n,{\epsilon}}_{ i /n})\Delta^{n,{\epsilon}}_{i+1} )^2 ] \\ &+ 2 \mathbb{E}\left[ Y^{n,{\epsilon}}_i(\sigma(X^n_{i /n})\Delta^n_{i+1} -\sigma(X^{n,{\epsilon}}_{ i /n})\Delta^{n,{\epsilon}}_{i+1} ) \right] = \mathbb{E}[|Y^{n,{\epsilon}}_i|^2] + I^{n,{\epsilon}}_i+ J^{n,{\epsilon}}_i. \end{align*} Now, using Lemma \ref{lcou} and that $(\Delta^n_{i+1},\Delta^{n,{\epsilon}}_{i+1})$ is independent of ${\mathcal F}^{n,{\epsilon}}_i$, we deduce that \begin{equation*} J^{n,{\epsilon}}_i = \frac{2a}{n}\mathbb{E}\left[ Y^{n,{\epsilon}}_i(\sigma(X^n_{i / n}) -\sigma(X^{n,{\epsilon}}_{ i/ n}))\right] \leq \frac{C}{n} \mathbb{E}[|Y^{n,{\epsilon}}_{i}|^2], \end{equation*} since $\sigma$ is Lipschitz continuous.
Using now the Lipschitz continuity and the boundedness of $\sigma$, together with Lemma \ref{lcou} and the independence of $(\Delta^n_{i+1},\Delta^{n,{\epsilon}}_{i+1})$ with respect to ${\mathcal F}^{n,{\epsilon}}_i$, we get \begin{equation*} I^{n,{\epsilon}}_i \leq C \mathbb{E}[ |Y^{n,{\epsilon}}_i|^2 (\Delta^{n,{\epsilon}}_{i+1})^2] + C \mathbb{E}[ (\Delta^{n,{\epsilon}}_{i+1}-\Delta^n_{i+1})^2] \leq \frac{C}{n} \mathbb{E}[|Y^{n,{\epsilon}}_{i}|^2] + C \delta_{\epsilon}(\nu). \end{equation*} Finally, we get \begin{align*} \mathbb{E}[|Y^{n,{\epsilon}}_{i+1}|^2]\leq (1+C/n) \mathbb{E}[|Y^{n,{\epsilon}}_{i}|^2] + C \delta_{\epsilon}(\nu). \end{align*} Since $Y^{n,{\epsilon}}_0=0$, this entails that $\mathbb{E}[|Y^{n,{\epsilon}}_i|^2] \leq C \delta_{\epsilon}(\nu)[1+(1+C/n)+\cdots+(1+C/n)^{i-1}]\leq C n\delta_{\epsilon}(\nu) (1+C/n)^i$. \vskip.2cm {\it Step 2.} We check that for $N\geq 1$, $\mathbb{E}[\sup_{i=0,...,N} |V^{n,{\epsilon}}_i|^2] \leq C n \delta_{\epsilon}(\nu) (1+C/n)^N N^2/n^2$. It suffices to use the Lipschitz property of $\sigma$, the Cauchy-Schwarz inequality, and then Step 1: \begin{align*} \mathbb{E}\left[\sup_{i=0,...,N} |V^{n,{\epsilon}}_i|^2 \right] &\leq C \mathbb{E}\left[\left(\frac{1}{n}\sum_0^{N-1} |Y^{n,{\epsilon}}_i| \right)^2\right] \leq C \frac{N}{n^2}\sum_0^{N-1}\mathbb{E}[|Y^{n,{\epsilon}}_{i}|^2]\\ &\leq C \frac{N^2}{n^2} n \delta_{\epsilon}(\nu) (1+C/n)^N. \end{align*} \vskip.2cm {\it Step 3.} We now verify that $(M^{n,{\epsilon}}_i)_{i \geq 0}$ is a $({\mathcal F}^{n,{\epsilon}}_i)_{i\geq 0}$-martingale. We have $M^{n,{\epsilon}}_{i+1}-M^{n,{\epsilon}}_i= \sigma(X^n_{i/n})[\Delta^n_{i+1}-a/n] - \sigma(X^{n,{\epsilon}}_{i/n})[\Delta^{n,{\epsilon}}_{i+1}-a/n]$. The step is finished, since the variables $\Delta^n_{i+1}-a/n$ and $\Delta^{n,{\epsilon}}_{i+1}-a/n$ are centered and independent of ${\mathcal F}^{n,{\epsilon}}_i$. \vskip.2cm {\it Step 4.} Using the Doob inequality and then Steps 1 and 2, we get \begin{align*} \mathbb{E}\left[\sup_{i=0,...,N} |M^{n,{\epsilon}}_i|^2\right] &\leq C \sup_{i=0,...,N}\mathbb{E}\left[|M^{n,{\epsilon}}_i|^2\right] \\ &\leq C \sup_{i=0,...,N}\mathbb{E}\left[|Y^{n,{\epsilon}}_i|^2\right] +C \sup_{i=0,...,N}\mathbb{E}\left[|V^{n,{\epsilon}}_i|^2\right] \\ &\leq C n \delta_{\epsilon}(\nu)(1+C/n)^N(1+N^2/n^2). \end{align*} But now \begin{align*} \mathbb{E}\left[\sup_{i=0,...,N} |Y^{n,{\epsilon}}_i|^2\right] &\leq C \mathbb{E}\left[\sup_{i=0,...,N}|M^{n,{\epsilon}}_i|^2\right] + C \mathbb{E}\left[\sup_{i=0,...,N}|V^{n,{\epsilon}}_i|^2\right], \end{align*} which allows us to conclude. \end{proof} Let us rewrite these estimates in terms of $X^n$ and $X^{n,{\epsilon}}$. \begin{lem}\label{fifi} Consider the sequence $(\Delta^n_i,\Delta^{n,{\epsilon}}_i)_{i\geq 1}$ introduced in Lemma \ref{lcou}, and then $(X^n_{i/n})_{i\geq 0}$ and $(X^{n,{\epsilon}}_{i/n})_{i\geq 0}$ defined in (\ref{eqeul}) and (\ref{scheme}). For all $T\geq 0$, \begin{equation*} \mathbb{E}\left[\sup_{[0,T]}|X^n_{\rho_n(t)} - X^{n,{\epsilon}}_{\rho_n(t)}|^2 \right] \leq C_T n\delta_{\epsilon}(\nu), \end{equation*} where $C_T$ depends only on $T, a, b, m_2(\nu), \sigma$. \end{lem} \begin{proof} With the previous notation, $\sup_{[0,T]}|X^n_{\rho_n(t)} - X^{n,{\epsilon}}_{\rho_n(t)}| = \sup_{i=0,...,[nT]} |Y^{n,{\epsilon}}_i|$. Thus using Lemma \ref{lemtec1}, we get the bound $C n \delta_{\epsilon}(\nu) (1+C/n)^{[nT]}(1+[nT]^2/n^2) \leq C n \delta_{\epsilon}(\nu) e^{CT}(1+T^2)$, which ends the proof.
\end{proof} \subsection{Conclusion}\label{conclu} We finally give the \begin{preuve} {\it of Theorem \ref{main}.} Fix $n\in{\mathbb{N}}$ and ${\epsilon}>0$. Denote by $Q(du,dv)$ the joint law of $(\Delta_1^{n},\Delta_1^{n,{\epsilon}})$ built in Lemma \ref{lcou}, and write $Q(du,dv)=Q_1(du)R(u,dv)$, where $R(u,dv)$ is the law of $\Delta_1^{n,{\epsilon}}$ conditionally on $\Delta_1^n=u$. \vskip.2cm Consider a L\'evy process $(Z_t)_{t\geq 0}$ as in (\ref{levy}), and $(X_t)_{t\geq 0}$ the corresponding solution to (\ref{sde}). Set, for $i\geq 1$, $\Delta^n_i=Z_{i/n}-Z_{(i-1)/n}$, and consider the Euler scheme $(X^n_{i/n})_{i\geq 0}$ as in (\ref{eqeul}). For each $i\geq 1$, let $\Delta^{n,{\epsilon}}_i$ be distributed according to $R(\Delta^{n}_i,dv)$, in such a way that $(\Delta^{n,{\epsilon}}_i)_{i\geq 1}$ is an i.i.d. sequence. Finally, let $(X^{n,{\epsilon}}_{i/n})_{i\geq 0}$ be as in (\ref{scheme}). \vskip.2cm In this way, the processes $(X_t)_{t\geq 0}$, $(X^{n}_{i/n})_{i\geq 0}$ and $(X^{n,{\epsilon}}_{i/n})_{i\geq 0}$ are coupled in such a way that we may apply Proposition \ref{conveul} and Lemma \ref{fifi}. We get \begin{align*} \mathbb{E}\left[\sup_{[0,T]} |X_{\rho_n(t)}-X^{n,{\epsilon}}_{\rho_n(t)}|^2 \right] &\leq 2\mathbb{E}\left[\sup_{[0,T]} |X_{\rho_n(t)}-X^{n}_{\rho_n(t)}|^2 + \sup_{[0,T]} |X^n_{\rho_n(t)}-X^{n,{\epsilon}}_{\rho_n(t)}|^2 \right]\\ &\leq C_T [n^{-1} + n \delta_{\epsilon}(\nu)]. \end{align*} This concludes the proof. \end{preuve} \section{Proofs of Theorem \ref{main2} and Corollary \ref{corcv}} We assume in the whole section that $\sigma$ is bounded and Lipschitz continuous. We start with a technical lemma. \begin{lem}\label{lemtec3} Let $(X_t)_{t\geq 0}$ and $({\tilde X}_t)_{t\geq 0}$ be solutions to (\ref{sde}) and (\ref{sde2}). Then for $p\geq 2$, for all $t_0\geq 0$, all $h\in (0,1]$, $$ \mathbb{E}\left[\sup_{[t_0,t_0+h]} |X_t-X_{t_0}|^p\right] \leq C_p (h^{p/2} + h m_p(\nu)), \;\; \mathbb{E}\left[\sup_{[t_0,t_0+h]} |{\tilde X}_t-{\tilde X}_{t_0}|^p\right] \leq C_p h^{p/2} , $$ where $C_p$ depends only on $p,\sigma,a,b,m_2(\nu)$. \end{lem} \begin{proof} It clearly suffices to treat the case of $(X_t)_{t\geq 0}$. Let thus $p\geq 2$. Using the Burkholder-Davis-Gundy inequality and the boundedness of $\sigma$, we get \begin{align*} &\mathbb{E}\left[\sup_{[t_0,t_0+h]}|X_t-X_{t_0}|^p \right]\leq C_p \mathbb{E}\left[ \left(\int_{t_0}^{t_0+h} |a\sigma(X_s)| ds \right)^p \right]\\ &+C_p \mathbb{E}\left[ \left(\int_{t_0}^{t_0+h} b^2\sigma^2(X_s) ds\right)^{p/2} \right] + C_p \mathbb{E} \left[ \left(\int_{t_0}^{t_0+h}\displaystyle \int_{\rr_*} \sigma^2(X_s)z^2 N(ds,dz) \right)^{p/2} \right]\\ & \leq C_p h^p + C_p h^{p/2} + C_p \mathbb{E} \left[ \left(\int_{t_0}^{t_0+h} \displaystyle \int_{\rr_*} z^2 N(ds,dz) \right)^{p/2} \right] \leq C_p h^{p/2} + C_p\mathbb{E}[U_{h}^{p/2}], \end{align*} where $U_t=\int_0^t \int_{{\mathbb{R}}_*} z^2 N(ds,dz)$. It remains to check that for $t\geq 0$, $\mathbb{E}[U_{t}^{p/2}]\leq C_p(t^{p/2}+ tm_p(\nu))$. But, with $C_p$ depending on $m_2(\nu)$, \begin{align*} \mathbb{E}[U_t^{p/2}] &= \int_0^t ds \int_{{\mathbb{R}}_*}\nu(dz) \mathbb{E}[(U_s+z^2)^{p/2}-U_s^{p/2}] \\ &\leq C_p \int_0^t ds \int_{{\mathbb{R}}_*}\nu(dz) \mathbb{E}[z^2 U_s^{p/2-1} + |z|^p] \leq C_p \int_0^t \mathbb{E}[U_s^{p/2-1}] ds + C_p m_p(\nu)t \\ &\leq C_p \int_0^t \mathbb{E}[U_s^{p/2}]{\epsilon}^{-1} ds + C_p ({\epsilon}^{p/2-1} + m_p(\nu))t, \end{align*} for any ${\epsilon}>0$.
Hence $\mathbb{E}[U_t^{p/2}] \leq C_p({\epsilon}^{p/2-1}t+m_p(\nu)t) e^{C_p t /{\epsilon}}$ by the Gronwall Lemma. Choosing ${\epsilon}=t$, we conclude that $\mathbb{E}[U_t^{p/2}] \leq C_p(t^{p/2}+ m_p(\nu)t)$. \end{proof} \begin{preuve} {\it of Theorem \ref{main2}.} We fix $n\geq 1$, $T>0$, and $p\geq 4$. \vskip.2cm {\it Step 1.} Using Corollary \ref{col} (see also Lemma \ref{lcou}), we deduce that we may couple two i.i.d. families $(\Delta_i^n)_{i\geq 1}$ and $(\tilde \Delta_i^n)_{i\geq 1}$, in such a way that: $\bullet$ $(\Delta_i^n)_{i\geq 1}$ has the same law as the increments $(Z_{i/n}-Z_{(i-1)/n} )_{i\geq 1}$ of the L\'evy process (\ref{levy}); $\bullet$ $(\tilde \Delta_i^n)_{i\geq 1}$ has the same law as the increments $({\tilde Z}_{i/n}-{\tilde Z}_{(i-1)/n} )_{i\geq 1}$ of the L\'evy process (\ref{levy2}); $\bullet$ for all $i\geq 1$, $\mathbb{E}[(\Delta_i^n-\tilde \Delta_i^n)^2] \leq C m_4(\nu)$ (we allow constants to depend on $m_2(\nu)$). \vskip.2cm {\it Step 2.} We then set $X^n_0={\tilde X}^n_0=x$, and for $i\geq 1$, $X^n_{i/n}=X^n_{(i-1)/n} + \sigma(X^n_{(i-1)/n}) \Delta_i^n$ and ${\tilde X}^n_{i/n}={\tilde X}^n_{(i-1)/n} + \sigma({\tilde X}^n_{(i-1)/n}) \tilde\Delta_i^n$. Using exactly the same arguments as in Lemmas \ref{lemtec1} and \ref{fifi}, we deduce that $\mathbb{E}\left[\sup_{[0,T]} |X^n_{\rho_n(t)} - {\tilde X}^n_{\rho_n(t)}|^2 \right] \leq C_T n m_4(\nu)$, where $C_T$ depends only on $T,\sigma, a,b, m_2(\nu)$. \vskip.2cm {\it Step 3.} But $(X^n_{\rho_n(t)})_{t\geq 0}$ is the Euler discretization of (\ref{sde}), while $({\tilde X}^n_{\rho_n(t)})_{t\geq 0}$ is the Euler discretization of (\ref{sde2}). Hence using Step 2, Proposition \ref{conveul} and a suitable coupling as in the conclusion of the proof of Theorem \ref{main}, $\mathbb{E}\left[\sup_{[0,T]} |X_{\rho_n(t)} - {\tilde X}_{\rho_n(t)}|^2 \right] \leq C_T (1/n+n m_4(\nu))$. \vskip.2cm {\it Step 4.} We now prove that $\mathbb{E}\left[\sup_{[0,T]} |X_{t} - X_{\rho_n(t)}|^2 \right] \leq C_{T,p} (n^{2/p-1}+ m_p(\nu)^{2/p})$. We set $\Gamma_i= \sup_{[i/n,(i+1)/n]}|X_t-X_{\rho_n(t)}|= \sup_{[i/n,(i+1)/n]}|X_t-X_{i/n}|$. By Lemma \ref{lemtec3}, $\mathbb{E}[\Gamma_i ^p] \leq C_p[(1/n)^{p/2}+ m_p(\nu)/n ]$. Thus, since $p\geq 2$, \begin{align*} \mathbb{E}\left[\sup_{[0,T]} |X_{t} - X_{\rho_n(t)}|^2 \right] &\leq \mathbb{E}\left[\sup_{i=0,...,[nT]} \Gamma_i^2 \right] \leq \mathbb{E}\left[\sup_{i=0,...,[nT]} \Gamma_i^p\right]^{2/p} \leq \mathbb{E}\left[\sum_{i=0}^{[nT]} \Gamma_i^p\right]^{2/p} \\ &\leq C_{T,p} n^{2/p} \left[(1/n)^{p/2}+ m_p(\nu)/n \right]^{2/p}, \end{align*} which ends the step. \vskip.2cm {\it Step 5.} Exactly as in Step 4, we get $\mathbb{E}\left[\sup_{[0,T]} |{\tilde X}_{t} - {\tilde X}_{\rho_n(t)}|^2 \right] \leq C_{T,p} n^{2/p-1}$. \vskip.2cm {\it Step 6.} Using Steps 3, 4 and 5, we deduce that with a suitable coupling, we have $\mathbb{E}[\sup_{[0,T]}|X_t-{\tilde X}_t|^2] \leq C_{T,p} (n^{2/p-1}+ m_p(\nu)^{2/p}+ n^{-1} +nm_4(\nu))$. Since $p\geq 4$, we have $n^{-1}\leq n^{2/p-1}$, which yields the bound of the statement. \end{preuve} \vskip.2cm We conclude the paper with the \begin{preuve} {\it of Corollary \ref{corcv}.} Since $\nu(\{|z|>{\epsilon}\})=0$, we deduce that $m_p(\nu) \leq m_2(\nu){\epsilon}^{p-2}$ for any $p\geq 2$. Applying Theorem \ref{main2} and choosing $n=[{\epsilon}^{-p/(p-1)}]$, we get the bound $$ C_{T,p}\left( {\epsilon}^{(1-2/p)(p/(p-1))} + {\epsilon}^{(p-2)(2/p)}+{\epsilon}^{2-p/(p-1)} \right) \leq C_{T,p} ({\epsilon}^{1- 1/(p-1)} + {\epsilon}^{2-4/p}). $$ Hence for $\eta\in (0,1)$, it is possible to get the bound $C_{T,\eta} {\epsilon}^{1-\eta}$ by choosing $p$ large enough.
\end{preuve} \vskip.2cm {\bf Acknowledgement.} I wish to thank Jean Jacod for fruitful discussions.
\section{Introduction} Kinetically constrained models (KCMs) are interacting particle systems on $\mathds{Z}^d$, in which each element (or \emph{site}) of $\mathds{Z}^d$ can be in state 0 or 1. Each site tries to update its state to 0 at rate $q$ and to 1 at rate $1-q$, with $q \in [0,1]$ fixed, but an update is accepted if and only if a \emph{constraint} is satisfied. This constraint is defined via an \emph{update family} $\mathcal{U}=\{X_1,\dots,X_m\}$, where $m \in \mathds{N}^*$ and the $X_i$, called \emph{update rules}, are finite nonempty subsets of $\mathds{Z}^d \setminus \{0\}$: the constraint is satisfied at a site $x$ if and only if there exists $X \in \mathcal{U}$ such that all the sites in $x+X$ have state zero. Since the constraint at a site does not depend on the state of the site, it can be easily checked that the product $\mathrm{Bernoulli}(1-q)$ measure, $\nu_q$, satisfies detailed balance with respect to the dynamics, hence is reversible and invariant. $\nu_q$ is the \emph{equilibrium measure} of the dynamics. KCMs were introduced in the physics literature by Fredrickson and Andersen \cite{Fredrickson_et_al1984} to model the liquid-glass transition, an important open problem in condensed matter physics (see \cite{Ritort_et_al,Garrahan_et_al}). In addition to this physical interest, KCMs are also mathematically challenging, because the presence of the constraints makes them very different from classical Glauber dynamics and prevents the use of most of the usual tools. One of the most important features of KCMs is the existence of blocked configurations. These blocked configurations imply that the equilibrium measure $\nu_q$ is not the only invariant measure, which considerably complicates the study of the out-of-equilibrium behavior of KCMs; even the basic question of their convergence to $\nu_q$ remains open in most cases. Because of the blocked configurations, one cannot expect such a convergence to equilibrium for all initial laws. Initial measures particularly relevant for physicists are the $\nu_{q'}$ with $q' \neq q$ (see \cite{Leonard_et_al2007}). Indeed, $q$ is a measure of the temperature of the system: the closer $q$ is to 0, the lower the temperature is. Therefore, starting the dynamics with a configuration of law $\nu_{q'}$ means starting with a temperature different from the equilibrium temperature. In this case, KCMs are expected to converge to equilibrium with exponential speed as soon as no site is blocked for the dynamics in a configuration of law $\nu_{q}$ or $\nu_{q'}$. However, there have been few results in this direction so far (see \cite{Cancrini_et_al2010,Blondel_et_al2013,stretched_exponential_East-like,Mountford_FA1f,Mareche2019Est}), and they have been restricted to particular update families or initial laws. Furthermore, general update families have attracted a lot of attention in recent years. Indeed, there was recently a breakthrough in the study of a monotone deterministic counterpart of KCMs called bootstrap percolation. Bootstrap percolation is a discrete-time dynamics in which each site of $\mathds{Z}^d$ can be \emph{infected} or not; infected sites are the bootstrap percolation equivalent of sites at zero.
To define it, we fix an update family $\mathcal{U}$ and choose a set $A_0$ of initially infected sites; then for any $t \in \mathds{N}^*$, the set of sites that are infected at time $t$ is \[ A_t = A_{t-1} \cup \{x \in \mathds{Z}^d \,|\, \exists X \in \mathcal{U}, x+X \subset A_{t-1}\}, \] which means that the sites that were infected at time $t-1$ remain infected at time $t$ and a site $x$ that was not infected at time $t-1$ becomes infected at time $t$ if and only if there exists $X \in \mathcal{U}$ such that all sites of $x + X$ are infected at time $t-1$. Until recently, bootstrap percolation had only been considered with particular update families, but the study of general update families was opened by Bollobás, Smith and Uzzell in \cite{Bollobas_et_al2015}. Along with Balister, Bollobás, Przykucki and Smith \cite{Balister_et_al2016}, they proved that general update families satisfy the following universality result: in dimension 2, they can be sorted into three classes, \emph{supercritical}, \emph{critical} and \emph{subcritical} (see definition \ref{def_universality_classes}), which display different behaviors (their result for the critical class was later refined by Bollobás, Duminil-Copin, Morris and Smith in \cite{Bollobas_et_al2017}). These works opened the study of KCMs with general update families. In \cite{MMT,lbounds_infection_time,Hartarsky_et_al2019,Hartarsky_et_al2019bis}, Hartarsky, Martinelli, Morris, Toninelli and the author showed that the grouping of two-dimensional update families into supercritical, critical and subcritical is still relevant for KCMs, and established an even more precise classification. However, these results deal only with equilibrium dynamics. Until now, nothing had been shown on out-of-equilibrium KCMs with general update families, apart from a perturbative result in dimension 1 \cite{Cancrini_et_al2010}. In this article, we prove that for all supercritical update families, for any initial law $\nu_{q'}$, $q'\in]0,1]$, when $q$ is close enough to 1, the dynamics of the KCM converges to equilibrium with exponential speed. This result holds in dimension 2 and also in dimension 1 for a good definition of one-dimensional supercritical update families. It is the first non-perturbative result of convergence to equilibrium holding for a whole class of update families. This result may help to gain a better understanding of the out-of-equilibrium behavior of supercritical KCMs. In particular, such results of convergence to equilibrium were key in proving ``shape theorems'' for specific one-dimensional constraints in \cite{Blondel2013,Ganguly_et_al,Blondel_et_al2018}. \section{Notations and result} Let $d \in \mathds{N}^*$. We denote by $\|.\|_\infty$ the $\ell^\infty$-norm on $\mathds{Z}^d$. For any set $S$, $|S|$ will denote the cardinality of~$S$. For any configuration $\eta \in \{0,1\}^{\mathds{Z}^d}$, for any $x\in \mathds{Z}^d$, we denote $\eta(x)$ the value of $\eta$ at $x$. Moreover, for any $S \subset \mathds{Z}^d$, we denote $\eta_S$ the restriction of $\eta$ to $S$, and $0_S$ (or just 0 when $S$ is clear from the context) the configuration on $\{0,1\}^S$ that contains only zeroes. We fix an update family $\mathcal{U}=\{X_1,\dots,X_m\}$ with $m \in \mathds{N}^*$ and the $X_i$ finite nonempty subsets of $\mathds{Z}^d \setminus \{0\}$. To describe the classification of update families, we need the concept of \emph{stable directions}.
\begin{definition} For $u \in S^{d-1}$, we denote $\mathds{H}_u = \{x \in \mathds{R}^d \,|\, \langle x,u \rangle < 0 \}$ the half-space with boundary orthogonal to $u$. We say that $u$ is a \emph{stable direction} for the update family $\mathcal{U}$ if there does not exist $X \in \mathcal{U}$ such that $X \subset \mathds{H}_u$; otherwise $u$ is \emph{unstable}. We denote by $\mathcal{S}$ the set of stable directions. \end{definition} \cite{Bollobas_et_al2015} gave a classification of two-dimensional update families into supercritical, critical or subcritical depending on their stable directions. Here is the generalization proposed for $d$-dimensional update families by \cite{Bollobas_et_al2017} (definition 9.1 therein), where for any $\mathcal{E} \subset S^{d-1}$, $\mathrm{int}(\mathcal{E})$ is the interior of $\mathcal{E}$ in the usual topology on $S^{d-1}$. \begin{definition}\label{def_universality_classes} A $d$-dimensional update family $\mathcal{U}$ is \begin{itemize} \item supercritical if there exists an open hemisphere $C \subset S^{d-1}$ that contains no stable direction; \item critical if every open hemisphere $C \subset S^{d-1}$ contains a stable direction, but there exists a hemisphere $C \subset S^{d-1}$ such that $\mathrm{int}(C \cap \mathcal{S}) = \emptyset$; \item subcritical if $\mathrm{int}(C \cap \mathcal{S}) \neq \emptyset$ for every hemisphere $C \subset S^{d-1}$. \end{itemize} \end{definition} Our result will be valid for supercritical update families. The KCM process with update family $\mathcal{U}$ can be constructed as follows. We fix $q \in [0,1]$. Independently for all $x \in \mathds{Z}^d$, we define two independent Poisson point processes $\mathcal{P}^0_x$ and $\mathcal{P}^1_x$ on $[0,+\infty[$, with respective rates $q$ and $1-q$. We call the elements of $\mathcal{P}^0_x \cup \mathcal{P}^1_x$ \emph{clock rings} and denote them by $t_{1,x} < t_{2,x} < \cdots$. The elements of $\mathcal{P}^0_x$ will be \emph{0-clock rings} and the elements of $\mathcal{P}^1_x$ will be \emph{1-clock rings}. For any initial configuration $\eta \in \{0,1\}^{\mathds{Z}^d}$, we construct the KCM as the continuous-time process $(\eta_t)_{t \in [0,+\infty[}$ on $\{0,1\}^{\mathds{Z}^d}$ defined as follows: for any $x \in \mathds{Z}^d$, $\eta_t(x)=\eta_0(x)$ for $t \in [0,t_{1,x}[$, and for any $k \in \mathds{N}^*$, \begin{itemize} \item if there exists $X \in \mathcal{U}$ such that $(\eta_{t_{k,x}^-})_{x+X}=0_{x+X}$, then $\eta_t(x)=\varepsilon$ for $t \in [t_{k,x},t_{k+1,x}[$, where $t_{k,x}$ is an $\varepsilon$-clock ring, $\varepsilon \in \{0,1\}$; \item if such an $X$ does not exist, $\eta_t(x)=\eta_{t_{k,x}^-}(x)$ for $t \in [t_{k,x},t_{k+1,x}[$. \end{itemize} In other words, sites try to update themselves to 0 when there is a 0-clock ring, which happens at rate $q$, and to 1 when there is a 1-clock ring, which happens at rate $1-q$, but an update at $x$ is successful if and only if there exists an update rule $X$ such that all sites of $x+X$ are at zero. This construction is known as \emph{Harris graphical construction}. One can use the arguments in part 4.3 of \cite{Swart2017} to see that it is well-defined. We denote by $\mathds{P}_\nu$ the law of $(\eta_t)_{t \in [0,+\infty[}$ when the initial configuration has law $\nu$. For any $q' \in [0,1]$, we denote $\nu_{q'}$ the product $\mathrm{Bernoulli}(1-q')$ measure.
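To fix ideas, the Harris construction is straightforward to implement in finite volume. The following minimal Python sketch (our own illustration, not used in the proofs) simulates the KCM on the discrete torus $\mathds{Z}/L\mathds{Z}$ with periodic boundary conditions, a finite-volume simplification of the process above; the update family chosen here, $\mathcal{U}=\{\{-1\},\{1\}\}$ (the one-dimensional Fredrickson-Andersen one-spin facilitated constraint), and all numerical values are arbitrary. Since each site rings at total rate $q+(1-q)=1$, the clock rings of the whole system form a Poisson process of rate $L$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

L, q, qprime, T = 200, 0.8, 0.5, 10.0
U = [[-1], [1]]   # update rules as lists of offsets (FA-1f here)

# initial configuration of law nu_{q'}: each site is 1 with probability 1 - q'
eta = (rng.random(L) < 1.0 - qprime).astype(int)

t = 0.0
while t < T:
    t += rng.exponential(1.0 / L)         # next clock ring of the whole system
    x = int(rng.integers(L))              # it is attached to a uniform site
    new = 0 if rng.random() < q else 1    # 0-clock ring with probability q
    # the update is accepted iff some rule X has x+X entirely at zero
    if any(all(eta[(x + dx) % L] == 0 for dx in X) for X in U):
        eta[x] = new

print(eta.mean())   # empirical density of ones at time T
\end{verbatim}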
Since the constraint at a site does not depend on the state of the site, it can be easily checked that $\nu_q$ satisfies detailed balance with respect to the dynamics, hence is reversible and invariant. $\nu_q$ is called the equilibrium measure of the dynamics. We will say that a function $f : \{0,1\}^{\mathds{Z}^d} \mapsto \mathds{R}$ is \emph{local} if its output depends only on the states of a finite set of sites, and we then denote $\|f\|_\infty = \sup_{\eta \in \{0,1\}^{\mathds{Z}^d}}|f(\eta)|$ its norm. \begin{theorem}\label{thm_convergence} If $d=1$ or 2, for any supercritical update family $\mathcal{U}$, for any $q' \in ]0,1]$, there exists $q_0=q_0(\mathcal{U},q') \in [0,1[$ such that for any $q \in [q_0,1]$, for any local function $f: \{0,1\}^{\mathds{Z}^d} \mapsto \mathds{R}$, there exist two constants $c=c(\mathcal{U},q')>0$ and $C=C(\mathcal{U},q',f)>0$ such that for any $t \in [0,+\infty[$, \[ \left| \mathds{E}_{\nu_{q'}} (f(\eta_t))-\nu_q(f) \right| \leq C e^{-ct}. \] \end{theorem} \begin{remark} We expect theorem \ref{thm_convergence} to hold also for $d \geq 3$. However, our proof relies on proposition \ref{prop_Bollobas}, which is easy for $d=1$ and was proven in \cite{Bollobas_et_al2015} for $d=2$, but for which there is no equivalent for $d \geq 3$. Such an equivalent would extend our result to $d \geq 3$. \end{remark} The remainder of this article is devoted to the proof of theorem \ref{thm_convergence}. The argument is based on the proof given in \cite{Mountford_FA1f} for the particular case of the Fredrickson-Andersen one-spin facilitated model, but brings in novel ideas in order to accommodate the much greater complexity of general supercritical models. From now on, we fix $d=1$ or 2 and $\mathcal{U}$ a supercritical update family in dimension $d$. We begin in section \ref{sec_dual_paths} by using the notion of dual paths to reduce the proof of theorem \ref{thm_convergence} to the simpler proof of proposition \ref{prop_all_paths_activated}. Then in section \ref{sec_codings} we use the concept of codings to simplify it further, reducing it to the proof of proposition \ref{prop_bound_single_coding}. In section \ref{sec_aux_proc} we introduce an auxiliary oriented percolation process, that we use in section \ref{sec_preuve_codings} to prove proposition \ref{prop_bound_single_coding} hence theorem \ref{thm_convergence}. \section{Dual paths}\label{sec_dual_paths} In this section, we use the concept of \emph{dual paths} to reduce the proof of theorem \ref{thm_convergence} to the easier proof of proposition \ref{prop_all_paths_activated}. Let $q,q' \in [0,1]$. We notice that the Harris graphical construction allows us to couple a process $(\eta_t)_{t \in [0,+\infty[}$ with initial law $\nu_{q'}$ and a process $(\tilde{\eta}_t)_{t \in [0,+\infty[}$ with initial law $\nu_q$ by using the same clock rings but different initial configurations (independent of the clock rings and of each other). We denote the joint law by $\mathds{P}_{q',q}$. We notice that since $\nu_q$ is an invariant measure for the dynamics, $\tilde{\eta}_t$ has law $\nu_q$ for all $t \in [0,+\infty[$.
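In the finite-volume sketch given in the previous section, this coupling simply amounts to driving two configurations with the very same clock rings; the following minimal Python fragment (again our own illustration, on the torus $\mathds{Z}/L\mathds{Z}$, an assumption made only for the illustration) makes it explicit.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

L, q, qprime, T = 200, 0.9, 0.5, 20.0
U = [[-1], [1]]   # same illustrative update family as above

eta    = (rng.random(L) < 1.0 - qprime).astype(int)  # initial law nu_{q'}
etatil = (rng.random(L) < 1.0 - q).astype(int)       # initial law nu_q

t = 0.0
while t < T:
    t += rng.exponential(1.0 / L)
    x = int(rng.integers(L))
    new = 0 if rng.random() < q else 1   # the clock ring, shared by both
    for cfg in (eta, etatil):            # same rings, two configurations
        if any(all(cfg[(x + dx) % L] == 0 for dx in X) for X in U):
            cfg[x] = new

# empirical proxy for the discrepancy probability studied below
print((eta != etatil).mean())
\end{verbatim}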
To prove theorem \ref{thm_convergence}, it is actually enough to show \begin{proposition}\label{prop_conv_1site} For any $q' \in ]0,1]$, there exists $q_0=q_0(\mathcal{U},q') \in [0,1[$ such that for any $q \in [q_0,1]$, there exist two constants $c_1=c_1(\mathcal{U},q')>0$ and $C_1=C_1(\mathcal{U},q')>0$ such that for any $x \in \mathds{Z}^d$ and $t \in [0,+\infty[$, $\mathds{P}_{q',q}(\eta_t(x) \neq \tilde{\eta}_t(x)) \leq C_1 e^{-c_1t}$. \end{proposition} Indeed, if $f: \{0,1\}^{\mathds{Z}^d} \mapsto \mathds{R}$ is a local function depending on a finite set of sites $S$, \[ \left| \mathds{E}_{\nu_{q'}} (f(\eta_t))-\nu_q(f) \right| = \left| \mathds{E}_{q',q} (f(\eta_t))-\mathds{E}_{q',q}(f(\tilde{\eta}_t)) \right| \leq \mathds{E}_{q',q}(|f(\eta_t)-f(\tilde{\eta}_t)|) \] \[ \leq 2\|f\|_\infty \mathds{P}_{q',q}((\eta_t)_S \neq (\tilde{\eta}_t)_S) \leq 2\|f\|_\infty \sum_{x \in S} \mathds{P}_{q',q}(\eta_t(x) \neq \tilde{\eta}_t(x)). \] Therefore we will work on proving proposition \ref{prop_conv_1site}. In order to do that, we need to introduce dual paths. We define the \emph{range} $\rho$ of $\mathcal{U}$ by \[ \rho = \max \{\|x\|_\infty \,|\, x \in X, X \in \mathcal{U}\}. \] For any $x \in \mathds{Z}^d$, $t > 0$ and $0 \leq t' \leq t$, a dual path of length $t'$ starting at $(x,t)$ (see figure \ref{fig_dual_paths}) is a right-continuous path $(\Gamma(s))_{0 \leq s \leq t'}$ that starts at site $x$ at time $t$, goes backwards, is allowed to jump only when there is a clock ring, and only to a site within $\ell^\infty$-distance $\rho$. To write it rigorously, the path satisfies $\Gamma(0)=x$ and there exists a sequence of times $0=s_0 < s_1 < \cdots < s_{n}=t'$ satisfying the following properties: for all $0 \leq k \leq n-1$ and all $s \in [s_k,s_{k+1}[$, $\Gamma(s)=\Gamma(s_k)$, $\Gamma(s_n) = \Gamma(s_{n-1})$ and for all $0 \leq k < n-1$, $t-s_{k+1} \in \mathcal{P}^0_{\Gamma(s_k)} \cup \mathcal{P}^1_{\Gamma(s_k)}$ and $\|\Gamma(s_{k+1}) - \Gamma(s_k)\|_\infty \leq \rho$. \begin{figure} \begin{tikzpicture}[scale=0.9] \draw (0,0)--(9,0) ; \draw (0,1)--(9,1) ; \draw (0,2)--(9,2) ; \draw (0,3)--(9,3) ; \draw (0,4)--(9,4) ; \draw (0,0) node[left]{$x-3$} ; \draw (0,1) node[left]{$x-2$} ; \draw (0,2) node[left]{$x-1$} ; \draw (0,3) node[left]{$x$} ; \draw (0,4) node[left]{$x+1$} ; \draw[dashed] (0.5,4.5)--(0.5,-0.5) node[below] {$0$} ; \draw[dashed] (2.5,4.5)--(2.5,-0.5) node[below] {$t-t'$} ; \draw[dashed] (8.5,4.5)--(8.5,-0.5) node[below] {$t$} ; \draw[<->] (2.5,-0.3)--(8.5,-0.3) node[midway,below] {$t'$} ; \draw (1,2) node {$\times$} ; \draw (2,1) node {$\times$} ; \draw (3,4) node {$\times$} ; \draw (3.5,0) node {$\times$} ; \draw (4,1) node {$\times$} ; \draw (4.5,2) node {$\times$} ; \draw (5.5,3) node {$\times$} ; \draw (6,1) node {$\times$} ; \draw (6.5,3) node {$\times$} ; \draw (8,0) node {$\times$} ; \draw[very thick] (8.5,3)--(5.5,3)--(5.5,1)--(4,1)--(4,2)--(2.5,2) ; \draw (5.5,1.5) node{$\blacktriangledown$}; \draw (7.5,3) node{$\blacktriangleleft$}; \draw (7,3) node[above]{$\Gamma$}; \end{tikzpicture} \caption{Illustration of a dual path $\Gamma$ of length $t'$ starting at $(x,t)$ for $d=1$ and $\rho=2$. Each horizontal line represents the timeline of a site of $\mathds{Z}$, the $\times$ representing the clock rings. $\Gamma$ is the thick polygonal line; it starts at $t$ and ends at $t-t'$.
It can jump only when there is a clock ring, and never at a distance greater than $\rho=2$.} \label{fig_dual_paths} \end{figure} We denote $\mathcal{D}(x,t,t')$ the (random) set of all dual paths of length $t'$ starting from $(x,t)$. A dual path $\Gamma \in \mathcal{D}(x,t,t')$ is called an \emph{activated path} if it ``encounters a point at which both processes are at 0'', i.e. if there exists $s \in [0,t']$ such that $\eta_{t-s}(\Gamma(s))=\tilde{\eta}_{t-s}(\Gamma(s))=0$. The set of all activated paths in $\mathcal{D}(x,t,t')$ is called $\mathcal{A}(x,t,t')$. We have the \begin{lemma}\label{lemma_activated_paths} For any $x \in \mathds{Z}^d$ and $t > 0$, if $\eta_t(x) \neq \tilde{\eta}_t(x)$, then for all $0 \leq t' \leq t$, $\mathcal{A}(x,t,t')\neq\mathcal{D}(x,t,t')$. \end{lemma} \begin{proof}[Sketch of proof.] The proof is the same as for lemma 1 of \cite{Mountford_FA1f}, apart from the fact that if the path is at $y$, it does not necessarily jump to a neighbor of $y$, but to an element of $y+X$, $X \in \mathcal{U}$. The idea of the proof is to start a dual path at $(x,t)$, where the two processes disagree, and, staying at $x$, to go backwards in time until the processes agree at $x$. At this time, there was an update at $x$ in one process but not in the other, hence an update rule $x+X$ that was full of zeroes in one process but not in the other, thus a site at distance at most $\rho$ from $x$ at which the two processes disagree. We jump to this site and continue to go backwards. This construction yields a dual path along which the two processes disagree, hence they cannot both be at zero, so the path is not activated. \end{proof} Lemma \ref{lemma_activated_paths} implies that to prove proposition \ref{prop_conv_1site} hence theorem \ref{thm_convergence}, it is enough to prove \begin{proposition}\label{prop_all_paths_activated} For any $q' \in ]0,1]$, there exists $q_0=q_0(\mathcal{U},q') \in [0,1[$ such that for any $q \in [q_0,1]$, there exist two constants $c_2=c_2(\mathcal{U},q')>0$ and $C_2=C_2(\mathcal{U},q')>0$ such that for any $x \in \mathds{Z}^d$, $t \in [0,+\infty[$, there exists $0 \leq t' \leq t$ such that $\mathds{P}_{q',q}(\mathcal{A}(x,t,t') \neq\mathcal{D}(x,t,t')) \leq C_2 e^{-c_2t}$. \end{proposition} The remainder of the article will be devoted to the proof of proposition \ref{prop_all_paths_activated}. \section{Codings}\label{sec_codings} This section is devoted to the reduction of the proof of proposition \ref{prop_all_paths_activated} (hence of theorem \ref{thm_convergence}) to the simpler proof of proposition \ref{prop_bound_single_coding}, via the use of \emph{codings}. The idea is the following: in order to prove proposition \ref{prop_all_paths_activated}, it is enough to show that along each dual path, the two processes are at zero at one of the discrete times $0$, $K$, $2K$, etc.; hence we only need to consider the positions of the path at these times, which will make up the coding of the path. Let $K \geq 2$ and $t \geq K$. A coding is a sequence $(y_k)_{k \in \{0,\dots,\lfloor\frac{t}{K^2}\rfloor\}}$ of sites in $\mathds{Z}^d$. Moreover, for $x \in \mathds{Z}^d$ and $\Gamma \in \mathcal{D}(x,t,\frac{t}{K})$, the coding $\bar{\Gamma}$ of $\Gamma$ is the sequence $\{\Gamma(kK)\}_{k\in\{0,\dots,\lfloor\frac{t}{K^2}\rfloor\}}$.
If $\gamma = (y_k)_{k \in \{0,\dots,\lfloor\frac{t}{K^2}\rfloor\}}$ is a coding, we define the event $G(\gamma) = \left\{\exists k\in\left\{0,\dots,\left\lfloor\frac{t}{K^2}\right\rfloor\right\}, \eta_{t-kK}(y_k) = \tilde{\eta}_{t-kK}(y_k)=0\right\}$. If $G(\bar{\Gamma})$ is satisfied, $\Gamma$ is an activated path. Therefore, to prove proposition \ref{prop_all_paths_activated} hence theorem \ref{thm_convergence}, it is enough to prove \begin{proposition}\label{prop_paths_to_codings} For any $q' \in ]0,1]$, there exists $q_0=q_0(\mathcal{U},q') \in [0,1[$ such that for any $q \in [q_0,1]$, there exist two constants $c_3=c_3(\mathcal{U},q')>0$ and $C_3=C_3(\mathcal{U},q')>0$ and a constant $K = K(\mathcal{U},q') \geq 2$ such that for any $x \in \mathds{Z}^d$ and $t \geq 2K^2$, $\mathds{P}_{q',q}(\exists \Gamma \in \mathcal{D}(x,t,\frac{t}{K}),G(\bar{\Gamma})^c) \leq C_3 e^{-c_3t}$. \end{proposition} Proposition \ref{prop_paths_to_codings} holds only for $t$ greater than a constant, but this is enough, since we only have to enlarge $C_3$ to obtain a bound valid for all $t$. In order to prove proposition \ref{prop_paths_to_codings}, we will define a set $C_K^N(x,t)$ of ``reasonable codings'' and prove that the probability that there exists a dual path whose coding is not in $C_K^N(x,t)$ decays exponentially in $t$ (lemma \ref{lemma_long_dual_paths}). Then we will count the number of codings in $C_K^N(x,t)$ (lemma \ref{lemma_number_codings}). Therefore it will be enough to give a bound on $\mathds{P}_{q',q}(G(\gamma)^c)$ for any $\gamma \in C_K^N(x,t)$ to prove proposition \ref{prop_paths_to_codings}, hence theorem \ref{thm_convergence}. Such a bound is stated in proposition \ref{prop_bound_single_coding} and will be proven in section \ref{sec_preuve_codings}. For any constant $N > 0$, for any $K \geq 2$, $x \in \mathds{Z}^d$ and $t \geq K$, the set $C_K^N(x,t)$ of ``reasonable codings'' is defined as the set of $(y_{j_1+\cdots+j_k})_{k \in \{0,\dots,\lfloor\frac{t}{K^2}\rfloor\}}$ where $(y_{i})_{i \in \{0,\dots,I\}}$ is a sequence of sites satisfying $y_0=x$, $I \leq \frac{Nt}{K}$ and $\|y_{i+1}-y_i\|_\infty \leq \rho$ for all $i \in \{0,\dots,I-1\}$ and where $j_1,\dots,j_{\lfloor\frac{t}{K^2}\rfloor} \in \mathds{N}$ satisfy $j_1+\cdots+j_{\lfloor\frac{t}{K^2}\rfloor} \leq I$. We can now state lemmas \ref{lemma_long_dual_paths} and \ref{lemma_number_codings}, as well as proposition \ref{prop_bound_single_coding}. These statements together prove proposition \ref{prop_paths_to_codings}. \begin{lemma}\label{lemma_long_dual_paths} For any $q' \in [0,1]$, there exists $N=N(\mathcal{U}) > 0$ such that for any $K \geq 2$, $q \in [0,1]$, there exists a constant $\check{c}=\check{c}(\mathcal{U},K)>0$ such that for all $x \in \mathds{Z}^d$ and $t \geq K$, $\mathds{P}_{q',q}(\exists \Gamma \in \mathcal{D}(x,t,\frac{t}{K}), \bar{\Gamma} \not \in C_K^N(x,t)) \leq e^{-\check{c}t}$. \end{lemma} In the following, $N$ will always be the $N$ given by lemma \ref{lemma_long_dual_paths}. \begin{lemma}\label{lemma_number_codings} There exist constants $\lambda > 0$ and $\beta = \beta(\mathcal{U}) > 0$ such that for any $K \geq 2$, $x \in \mathds{Z}^d$ and $t \geq 2K^2$, $|C_K^N(x,t)| \leq \lambda (\beta K)^{(d+1)\frac{t}{K^2}}$.
\end{lemma} \begin{proposition}\label{prop_bound_single_coding} For any $q' \in [0,1]$, there exists a constant $K_0=K_0(\mathcal{U}) \geq 2$ such that for any $K \geq K_0$, there exists $q_K \in [0,1[$ such that for any $q \in [q_K,1]$, there exist two constants $c_4=c_4(\mathcal{U},q')>0$ and $C_4=C_4(\mathcal{U},K)>0$ such that for any $x \in \mathds{Z}^d$, $t \geq K$ and $\gamma \in C_K^N(x,t)$, $\mathds{P}_{q',q}(G(\gamma)^c) \leq C_4 e^{-c_4\frac{t}{K}}$. \end{proposition} We are now going to prove lemmas \ref{lemma_long_dual_paths} and \ref{lemma_number_codings}. After that, it will suffice to prove proposition \ref{prop_bound_single_coding} to prove theorem \ref{thm_convergence}. \begin{proof}[Sketch of proof of lemma \ref{lemma_long_dual_paths}.] This can be proven with the argument of lemma 5 of \cite{Mountford_FA1f}; the idea is that if there exists $\Gamma \in \mathcal{D}(x,t,\frac{t}{K})$ with $\bar{\Gamma} \not \in C_K^N(x,t)$, there are so many clock rings that the probability becomes very small. Indeed, let us say $\Gamma$ visits the sites $y_0=x,y_1,\dots,y_{j_1}$ in the time interval $[0,K]$, then the sites $y_{j_1},\dots,y_{j_1+j_2}$ in the time interval $[K,2K]$, etc. until the sites $y_{j_1+\cdots+j_{\lfloor\frac{t}{K^2}\rfloor}},\dots,y_{j_1+\cdots+j_{\lfloor\frac{t}{K^2}\rfloor+1}}$ in the time interval $[\lfloor\frac{t}{K^2}\rfloor K,(\lfloor\frac{t}{K^2}\rfloor+1)K]$. Then the coding of $\Gamma$ is $\bar{\Gamma} = (y_{j_1+\cdots+j_k})_{k \in \{0,\dots,\lfloor\frac{t}{K^2}\rfloor\}}$, hence $\bar{\Gamma} \not \in C_K^N(x,t)$ implies $j_1+\cdots+j_{\lfloor\frac{t}{K^2}\rfloor+1} > \frac{Nt}{K}$. This yields that $\Gamma$ visits more than $\frac{Nt}{K}$ sites in a time $\frac{t}{K}$, and there must be successive clock rings at these sites. The proof of lemma 5 of \cite{Mountford_FA1f} yields that we can choose $N$ large enough depending on $\rho$, hence on $\mathcal{U}$, such that the probability of this event is at most $e^{-\check{c}t}$ with $\check{c}=\check{c}(\mathcal{U},N,K) = \check{c}(\mathcal{U},K) > 0$. \end{proof} To prove lemma \ref{lemma_number_codings}, we need the following classical combinatorial result, which will also be used in the proof of lemma \ref{lemma_percolation_structure}. \begin{lemma}\label{lemma_binomial_coeffs} For any $I,J \in \mathds{N}$, $\binom{I}{I} + \binom{I+1}{I} + \cdots + \binom{I + J}{I} = \binom{I+J+1}{I+1}$. Moreover, for any $I,J \in \mathds{N}$, $|\{(j_1,\dots,j_I)\in\mathds{N}^I \,|\, j_1+\cdots+j_I=J\}| = \binom{I+J-1}{I-1}$. \end{lemma} The proof of the first part of lemma \ref{lemma_binomial_coeffs} can be found just before section 2 of \cite{Jones_1996} and the proof of the second part in section 1.2 of \cite{Stanley_enucomb} (weak compositions). \begin{proof}[Proof of lemma \ref{lemma_number_codings}.] Let $K \geq 2$, $x \in \mathds{Z}^d$ and $t \geq 2K^2$. By definition, elements of $C_K^N(x,t)$ have the form $(y_{j_1+\cdots+j_k})_{k \in \{0,\dots,\lfloor\frac{t}{K^2}\rfloor\}}$ with $(y_{i})_{i \in \{0,\dots,I\}}$ satisfying $y_0=x$, $I \leq \frac{Nt}{K}$ and $\|y_{i+1}-y_i\|_\infty \leq \rho$ for all $i \in \{0,\dots,I-1\}$, and with $j_1,\dots,j_{\lfloor\frac{t}{K^2}\rfloor} \in \mathds{N}$ satisfying $j_1+\cdots+j_{\lfloor\frac{t}{K^2}\rfloor} \leq I$.
Therefore, to count the number of elements of $C_K^N(x,t)$, it is enough to count the number of possible $(j_k)_{k \in \{1,\dots,\lfloor\frac{t}{K^2}\rfloor\}}$ and the number of possible $(y_{j_1+\cdots+j_k})_{k \in \{0,\dots,\lfloor\frac{t}{K^2}\rfloor\}}$ given $(j_k)_{k \in \{1,\dots,\lfloor\frac{t}{K^2}\rfloor\}}$. We begin by counting the number of possible $(j_k)_{k \in \{1,\dots,\lfloor\frac{t}{K^2}\rfloor\}}$. We have $j_1+\cdots+j_{\lfloor\frac{t}{K^2}\rfloor} \leq \frac{Nt}{K}$. Moreover, by the second part of lemma \ref{lemma_binomial_coeffs}, for any integer $0 \leq J \leq \frac{Nt}{K}$, the number of possible sequences of integers $(j_k)_{k \in \{1,\dots,\lfloor\frac{t}{K^2}\rfloor\}}$ such that $j_1+\cdots+j_{\lfloor\frac{t}{K^2}\rfloor} =J$ is at most $\binom{\lfloor\frac{t}{K^2}\rfloor+J-1} {\lfloor\frac{t}{K^2}\rfloor-1}$, hence the number of possible $(j_k)_{k \in \{1,\dots,\lfloor\frac{t}{K^2}\rfloor\}}$ is at most $\sum_{J=0}^{\lfloor\frac{Nt}{K}\rfloor}\binom{\lfloor\frac{t}{K^2}\rfloor+J-1} {\lfloor\frac{t}{K^2}\rfloor-1} = \binom{\lfloor\frac{t}{K^2}\rfloor+\lfloor\frac{Nt}{K}\rfloor} {\lfloor\frac{t}{K^2}\rfloor}$ by the first part of lemma \ref{lemma_binomial_coeffs}. Furthermore $\binom{\lfloor\frac{t}{K^2}\rfloor+\lfloor\frac{Nt}{K}\rfloor}{\lfloor\frac{t}{K^2}\rfloor} \leq \frac{(\lfloor\frac{t}{K^2}\rfloor+\lfloor\frac{Nt}{K}\rfloor)^{\lfloor\frac{t}{K^2}\rfloor}} {(\lfloor\frac{t}{K^2}\rfloor)!} \leq \lambda\left(\frac{e(\lfloor\frac{t}{K^2}\rfloor+\lfloor\frac{Nt}{K}\rfloor)} {\lfloor\frac{t}{K^2}\rfloor}\right)^{\lfloor\frac{t}{K^2}\rfloor} \leq \lambda\left(e+e\frac{\lfloor\frac{Nt}{K}\rfloor}{\lfloor\frac{t}{K^2}\rfloor}\right)^{\frac{t}{K^2}}$ by the Stirling formula, where $\lambda > 0$ is a constant. In addition, since $t \geq 2K^2$, $\lfloor\frac{t}{K^2}\rfloor\geq \frac{t}{2K^2}$, hence the number of possible $(j_k)_{k \in \{1,\dots,\lfloor\frac{t}{K^2}\rfloor\}}$ is at most $\lambda\left(e+e\frac{Nt}{K}\frac{2K^2}{t}\right)^{\frac{t}{K^2}} = \lambda\left(e+2eKN\right)^{\frac{t}{K^2}} \leq \lambda (3eKN)^{\frac{t}{K^2}}$ as $K \geq 2$ and $N$ is large. We now fix a sequence $(j_k)_{k \in \{1,\dots,\lfloor\frac{t}{K^2}\rfloor\}}$ and count the possible $(y_{j_1+\cdots+j_k})_{k \in \{0,\dots,\lfloor\frac{t}{K^2}\rfloor\}}$. We know that $y_0=x$. Moreover, for all $i \in \{0,\dots,j_1+\cdots+j_{\lfloor\frac{t}{K^2}\rfloor}-1\}$, $\|y_{i+1}-y_i\|_\infty \leq \rho$, hence for each $k \in \{0,\dots,\lfloor\frac{t}{K^2}\rfloor-1\}$, we have $\|y_{j_1+\cdots+j_{k+1}}-y_{j_1+\cdots+j_k}\|_\infty \leq \rho j_{k+1}$, so there are at most $(2\rho j_{k+1}+1)^d$ choices for $y_{j_1+\cdots+j_{k+1}}$ given $y_{j_1+\cdots+j_{k}}$. Therefore the number of choices for $(y_{j_1+\cdots+j_k})_{k \in \{0,\dots,\lfloor\frac{t}{K^2}\rfloor\}}$ is at most $\prod_{k=1}^{\lfloor\frac{t}{K^2}\rfloor}(2\rho j_{k}+1)^d$. Moreover, for any $n \in \mathds{N}^*$ and any positive $x_1,\dots,x_n$, we have $x_1\dots x_n \leq (\frac{x_1+\cdots+x_n}{n})^n$, therefore the number of choices is bounded by \[ \left(\frac{\sum_{k=1}^{\lfloor\frac{t}{K^2}\rfloor}(2\rho j_{k}+1)} {\lfloor\frac{t}{K^2}\rfloor}\right)^{d\lfloor\frac{t}{K^2}\rfloor} = \left(\frac{2\rho\sum_{k=1}^{\lfloor\frac{t}{K^2}\rfloor}j_{k}+\lfloor\frac{t}{K^2}\rfloor} {\lfloor\frac{t}{K^2}\rfloor}\right)^{d\lfloor\frac{t}{K^2}\rfloor} \leq \left(\frac{2\rho\frac{Nt}{K}+\lfloor\frac{t}{K^2}\rfloor} {\lfloor\frac{t}{K^2}\rfloor}\right)^{d\frac{t}{K^2}} \] since $\sum_{k=1}^{\lfloor\frac{t}{K^2}\rfloor}j_{k} \leq \frac{Nt}{K}$. 
As $t \geq 2K^2$, $\lfloor\frac{t}{K^2}\rfloor\geq \frac{t}{2K^2}$, thus the number of choices for $(y_{j_1+\cdots+j_k})_{k \in \{0,\dots,\lfloor\frac{t}{K^2}\rfloor\}}$ given $(j_k)_{k \in \{1,\dots,\lfloor\frac{t}{K^2}\rfloor\}}$ is bounded by $\left(2\rho\frac{Nt}{K}\frac{2K^2}{t}+1\right)^{d\frac{t}{K^2}} = (4\rho NK+1)^{d\frac{t}{K^2}} \leq (5\rho NK)^{d\frac{t}{K^2}}$. \end{proof} \section{An auxiliary process}\label{sec_aux_proc} In order to prove proposition \ref{prop_bound_single_coding}, we need to find a mechanism for the zeroes to spread in the KCM process; this mechanism uses novel ideas to deal with the complexity of general supercritical models. We begin in section \ref{subsec_bootstrap_result} by using the bootstrap percolation results of \cite{Bollobas_et_al2015} to find a mechanism allowing the zeroes to spread locally (proposition \ref{prop_Bollobas}). Then we use it in section \ref{subsec_def_aux_process} to define an auxiliary oriented percolation process which guarantees that if certain conditions are met, the KCM process is at zero at a given time (proposition \ref{prop_transfer_zeroes2}). Finally, in section \ref{subsec_prop_aux_process} we prove some properties of this auxiliary process that we will use in section \ref{sec_preuve_codings}. \subsection{Local spread of zeroes}\label{subsec_bootstrap_result} This is the place where we need the supercriticality of $\mathcal{U}$. Indeed, since $\mathcal{U}$ is supercritical, the results of \cite{Bollobas_et_al2015} yield the following proposition (see figure \ref{fig_prop_Bollobas}): \begin{proposition}[\cite{Bollobas_et_al2015}]\label{prop_Bollobas} For $d=1$ or $2$, there exists $u \in S^{d-1}$, a rectangle $R$ of the following form: \begin{itemize} \item if $d=1$, $R = [0,a_1u[ \cap \mathds{Z}$ with $a_1 u \in \mathds{Z}$; \item if $d=2$, $R = ([0,a_1[u+[0,a_2]u^\perp) \cap \mathds{Z}^2$ with $a_1 u \in \mathds{Z}^2$, where $u^\perp$ is a vector orthogonal to $u$, \end{itemize} and a sequence of sites $(x_i)_{1 \leq i \leq m}$ in $(a_1u+R) \cup (2a_1u+R)$ such that if the sites of $R$ are at zero and there are successive 0-clock rings at $x_1,x_2,\dots,x_m$ while there is no 1-clock ring in $R \cup \{x_1,\dots,x_m\}$, the sites of $a_1u+R$ are at zero afterwards. 
\end{proposition} \begin{figure} \parbox{0.45\textwidth}{ \begin{center} \begin{tikzpicture}[scale=0.44] \draw (-1,0)--(13,0); \draw [dashed] (-2,0)--(-1,0); \draw [dashed] (13,0)--(14,0); \draw (-2,0) node[left]{$\mathds{Z}$} ; \foreach \i in {-1,0,...,13} \draw (\i,0.2)--(\i,-0.2) ; \draw (0,0) node [below] {$0$} ; \draw [ultra thick] (0,0.2)--(0,-0.2) ; \draw (4,0) node [below] {$a_1 u$} ; \draw [ultra thick] (4,0.2)--(4,-0.2) ; \draw (8,0) node [below] {$2a_1 u$} ; \draw [ultra thick] (8,0.2)--(8,-0.2) ; \draw (12,0) node [below] {$3a_1 u$} ; \draw [ultra thick] (12,0.2)--(12,-0.2) ; \draw[decorate,decoration={brace}] (-0.5,0.2)--(3.5,0.2) node[midway,above]{$R$}; \draw[decorate,decoration={brace}] (3.5,0.2)--(7.5,0.2) node[midway,above]{$a_1u+R$}; \draw[decorate,decoration={brace}] (7.5,0.2)--(11.5,0.2) node[midway,above]{$2a_1u+R$}; \foreach \i in {4,5,...,9} \draw (\i,0) node{$\ast$}; \end{tikzpicture} $d=1$ \end{center} } \parbox{0.45\textwidth}{ \begin{center} \begin{tikzpicture}[scale=0.45] \draw [very thin, gray] (-3,-1) grid (10,12); \draw (0,0) node{$\times$} node[below,fill=white] {$0$}; \foreach \i in {-3,-2,...,10} \draw[very thin, gray, dashed] (\i,-1.5)--(\i,-1); \foreach \i in {-3,-2,...,10} \draw[very thin, gray, dashed] (\i,12)--(\i,12.5); \foreach \i in {-1,0,...,12} \draw[very thin, gray, dashed] (-3.5,\i)--(-3,\i); \foreach \i in {-1,0,...,12} \draw[very thin, gray, dashed] (10,\i)--(10.5,\i); \draw (-3.5,11) node [left] {$\mathds{Z}^2$} ; \draw (0,0)--(3,3)--(1,5)--(-2,2)--cycle ; \draw (3,3)--(1,5)--(4,8)--(6,6)--cycle ; \draw (6,6)--(4,8)--(7,11)--(9,9)--cycle ; \draw (-2,2)--(1,5) node [midway,above left,fill=white] {$R$}; \draw (1,5)--(4,8) node [midway,above left,fill=white] {$a_1u+R$}; \draw (4,8)--(7,11) node [midway,above left,fill=white] {$2a_1u+R$}; \draw [>=stealth,<->] (-0.2,-0.2)--(-2.2,1.8) node [midway,below left,fill=white] {$a_2$} ; \draw [>=stealth,<->] (0.2,-0.2)--(3.2,2.8) node [midway,below right,fill=white] {$a_1$} ; \draw [>=stealth,->] (7,0)--(7.7,0.7) node [midway,below right,fill=white] {$u$}; \draw (6,0) node [fill=white] {$u^\perp$}; \draw [>=stealth,->] (7,0)--(6.3,0.7); \draw (1,5) node{$\ast$} ; \draw (2,4) node{$\ast$} ; \draw (3,3) node{$\ast$} ; \draw (2,5) node{$\ast$} ; \draw (3,4) node{$\ast$} ; \draw (2,6) node{$\ast$} ; \draw (3,5) node{$\ast$} ; \draw (4,4) node{$\ast$} ; \draw (3,6) node{$\ast$} ; \draw (4,5) node{$\ast$} ; \draw (3,7) node{$\ast$} ; \draw (4,6) node{$\ast$} ; \draw (5,5) node{$\ast$} ; \draw (4,7) node{$\ast$} ; \draw (5,6) node{$\ast$} ; \draw (4,8) node{$\ast$} ; \draw (5,7) node{$\ast$} ; \draw (5,8) node{$\ast$} ; \draw (6,7) node{$\ast$} ; \draw (6,8) node{$\ast$} ; \end{tikzpicture} $d=2$ \end{center} } \caption{Illustration of proposition \ref{prop_Bollobas}. The $\ast$ represent the sites $x_1,\dots,x_m$. If we start with the sites of $R$ at zero and there are successive 0-clock rings at $x_1,\dots,x_m$ while there is no 1-clock ring in $R \cup \{x_1,\dots,x_m\}$, these clock rings will put $x_1,\dots,x_m$ at zero, hence the sites of $a_1u+R$ will be put at zero.} \label{fig_prop_Bollobas} \end{figure} \begin{remark} For $d \geq 3$, we expect a similar proposition to hold, maybe with $R = [0,a_1[u + \bar{R}$, $\bar{R}$ contained in the hyperplane orthogonal to $u$, but we can not prove it because an equivalent of the construction of \cite{Bollobas_et_al2015} is not available yet. Proving such a construction would be enough to extend our result to any dimension. 
\end{remark} \begin{proof}[Proof of proposition \ref{prop_Bollobas}.] We begin with the case $d=1$. Since $\mathcal{U}$ is supercritical there exists $u$ an unstable direction. Without loss of generality we can say that $u = 1$, therefore there exists an update rule $X$ contained in $-\mathds{N}^*$. This yields the mechanism illustrated by figure \ref{fig_preuve_prop_Bollobas}(a): if $R = \{0,\dots,\ell\}$ is sufficiently large and full of zeroes, $(\ell+1)+X$ is full of zeroes, hence if the site $\ell+1$ receives a 0-clock ring, this clock ring puts it at zero. Then $(\ell+2)+X$ is full of zeroes, thus if $\ell+2$ receives a 0-clock ring, this clock ring puts it at zero. In the same way, if the sites $\ell+3,\dots,2\ell+1$ receive successive 0-clock rings, these clock rings will put them successively at zero, therefore $\{\ell+1,\dots,2\ell+1\} = (\ell+1) +R$ will be at zero. This yields the result with $a_1 = \ell+1$ and $(x_i)_{1 \leq i \leq m} = \ell+1,\ell+2,\dots,2\ell+1$. We now consider the case $d = 2$. Since $\mathcal{U}$ is supercritical, there exists a semicircle in $S^{1}$ that contains no stable direction; we call $u$ its middle. The results of section 5 of \cite{Bollobas_et_al2015} (see in particular figure 5 and lemma 5.5 therein) prove that there exists a set of sites, called a \emph{droplet}, such that in the bootstrap percolation dynamics, if we start with all the sites of the droplet infected, other sites in the direction $u$ can be infected, creating a bigger infected droplet with the same shape (see figure \ref{fig_preuve_prop_Bollobas}(b)). We can enlarge this droplet into a rectangle $R = [0,a_1[u+[0,a_2]u^\perp$ as in figure \ref{fig_preuve_prop_Bollobas}(c); furthermore $u$ can be chosen rational\footnote{Indeed, theorem 1.10 of \cite{Bollobas_et_al2015} states that the set of stable directions is a finite union of closed intervals with rational endpoints, hence the semicircle containing no stable direction can be chosen with rational endpoints.}, hence we may enlarge $R$ enough so that $a_1 u \in \mathds{Z}^2$. Now, since $R$ contains the original droplet, if $R$ is infected the infection can grow from the droplet into a droplet big enough to contain $a_1 u + R$ while staying in $R \cup (a_1u+R) \cup (2a_1u+R)$ (see figure \ref{fig_preuve_prop_Bollobas}(c)). We call $x_1,\dots,x_m$ the sites that are successively infected during this growth (sites infected at the same time are ordered arbitrarily). Since $x_1$ is the first site infected by the bootstrap percolation dynamics starting with the sites of $R$ infected, there exists an update rule $X$ such that $x_1+X \subset R$, therefore if the KCM dynamics starts with all the sites of $R$ at zero and there is a 0-clock ring at $x_1$, this clock ring sets $x_1$ to zero. Then, if there is a 0-clock ring at $x_2$, it will set $x_2$ to zero for the same reason, and successive 0-clock rings at $x_3,\dots,x_m$ will set them successively to 0, which puts $a_1u+R$ at zero. 
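As a concrete illustration of the $d=1$ mechanism (an example of ours, not needed for the argument), consider the East model, whose update family consists of the single rule $X = \{-1\}$ (in the orientation where the constraint lies to the left). Here $\ell = 0$ already works: $R = \{0\}$, $a_1 = 1$, $m = 1$ and $x_1 = 1$, since $1+X = \{0\} = R$. If $0$ is at zero and there is a 0-clock ring at $1$ while there is no 1-clock ring in $\{0,1\}$, this clock ring puts $1$ at zero, so $a_1u+R = \{1\}$ is at zero afterwards, as proposition \ref{prop_Bollobas} requires.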
\begin{figure} \parbox{0.4\textwidth}{ \begin{center} \begin{tikzpicture}[scale=0.35] \foreach \i in {-2,-1,...,13} \draw (\i,0) node{$\circ$}; \foreach \i in {0,1,...,5} \draw (\i,0) node{$\bullet$}; \draw [dotted] (-3.5,0)--(-2.5,0); \draw [dotted] (13.5,0)--(14.5,0); \draw (0,0) node[below] {$0$}; \draw (5,0) node[below] {$\ell$}; \draw[decorate,decoration={brace}] (-0.5,1.8)--(5.5,1.8) node[midway,above]{$R$}; \draw[dotted] (-0.5,1.8)--(-0.5,-0.2); \draw[dotted] (5.5,1.8)--(5.5,-0.2); \draw (0.5,0.4)--(3.5,0.4)--(3.5,-0.4)--(0.5,-0.4)--cycle; \draw (2,0.2) node [above] {$(\ell+1)+X$}; \draw (6,0) circle (0.4) node[above right]{$\ell+1$}; \draw (5.5,-1.6) node{$\downarrow$} ; \foreach \i in {-2,-1,...,13} \draw (\i,-4) node{$\circ$}; \foreach \i in {0,1,...,6} \draw (\i,-4) node{$\bullet$}; \draw [dotted] (-3.5,-4)--(-2.5,-4); \draw [dotted] (13.5,-4)--(14.5,-4); \draw (0,-4) node[below] {$0$}; \draw (5,-4) node[below] {$\ell$}; \draw (1.5,-3.6)--(4.5,-3.6)--(4.5,-4.4)--(1.5,-4.4)--cycle; \draw (3,-3.8) node [above] {$(\ell+2)+X$}; \draw (7,-4) circle (0.4) node[above right]{$\ell+2$}; \draw (5.5,-6) node{$\downarrow$} ; \foreach \i in {-2,-1,...,13} \draw (\i,-7) node{$\circ$}; \foreach \i in {0,1,...,7} \draw (\i,-7) node{$\bullet$}; \draw [dotted] (-3.5,-7)--(-2.5,-7); \draw [dotted] (13.5,-7)--(14.5,-7); \draw (0,-7) node[below] {$0$}; \draw (5,-7) node[below] {$\ell$}; \draw (5.5,-9) node{$\downarrow$} ; \draw (5.5,-10) node{$\dots$} ; \draw (5.5,-11) node{$\downarrow$} ; \foreach \i in {-2,-1,...,13} \draw (\i,-13.5) node{$\circ$}; \foreach \i in {0,1,...,11} \draw (\i,-13.5) node{$\bullet$}; \draw [dotted] (-3.5,-13.5)--(-2.5,-13.5); \draw [dotted] (13.5,-13.5)--(14.5,-13.5); \draw (0,-13.5) node[below] {$0$}; \draw (5,-13.5) node[below] {$\ell$}; \draw (11,-13.5) node[below] {$2\ell+1$}; \draw[decorate,decoration={brace}] (-0.5,-13.1)--(5.5,-13.1) node[midway,above]{$R$}; \draw[decorate,decoration={brace}] (5.5,-13.1)--(11.5,-13.1) node[midway,above]{$(\ell+1)+R$}; \end{tikzpicture} (a) \end{center} } \parbox{0.23\textwidth}{ \begin{center} \begin{tikzpicture}[scale=0.3] \draw (0,0)--(-3,3)--(-2,4)--(0,5)--(3,4)--(2,2)--cycle; \draw[dashed] (0,0)--(-3,3)--(0,6)--(2,7)--(5,6)--(4,4)--cycle; \draw[>=stealth,->] (-1,2)--(0.5,3.5) node [midway, above left] {$u$}; \end{tikzpicture} (b) \end{center} } \parbox{0.25\textwidth}{ \begin{center} \begin{tikzpicture}[scale=0.25] \draw [black,fill=gray!40] (0,0)--(-3,3)--(5,11)--(7,12)--(10,11)--(9,9)--cycle ; \draw[dashed] (0,0)--(-3,3)--(-2,4)--(0,5)--(3,4)--(2,2)--cycle; \draw (0,0)--(-3,3)--(1,7)--(4,4)--cycle; \draw (-3,3)--(1,7) node[midway,above left] {$R$}; \draw (1,7)--(4,4)--(8,8)--(5,11)--cycle; \draw (1,7)--(5,11) node[midway,above left] {$a_1u+R$}; \draw (5,11)--(8,8)--(12,12)--(9,15)--cycle; \draw (5,11)--(9,15) node[midway,above left] {$2a_1u+R$}; \draw [>=stealth,->] (1,-1)--(3,1) node[midway,below right]{$u$}; \draw [>=stealth,->] (-1,-1)--(-3,1) node[midway,below left]{$u^\perp$}; \end{tikzpicture} (c) \end{center} } \caption{The proof of proposition \ref{prop_Bollobas}. (a) The mechanism for $d=1$; the $\bullet$ represent zeroes and the $\circ$ represent ones. (b) The shape delimited by the solid line is the droplet of \cite{Bollobas_et_al2015}; if it is infected in the bootstrap percolation dynamics, the infection can grow to the shape delimited by the dashed line. 
(c) $R$ contains the original droplet (dashed line), hence if $R$ is infected, the infection can propagate to a bigger droplet (in gray) that contains $a_1u+R$ and is contained in $R \cup (a_1u+R) \cup (2a_1u+R)$.} \label{fig_preuve_prop_Bollobas} \end{figure} \end{proof} \subsection{Definition of the auxiliary process}\label{subsec_def_aux_process} Let $K > 0$, $q \in [0,1]$ and $t \geq K$. For any $y \in \mathds{Z}^d$ and $k \in \{0,\dots,\lfloor \frac{t}{K} \rfloor\}$, we will define an oriented percolation process $\zeta^{y,k}$ on $\mathds{Z}$, from time zero to time $n^{y,k} = \lfloor \frac{t}{K} \rfloor-k$ (see \cite{Durrett_1984} for an introduction to oriented percolation). For $n \in \{1,\dots, n^{y,k}\}$ and $r \in \mathds{Z}$ with $r+n$ even, the \emph{bonds} $(r-1,n-1)\rightarrow (r,n)$ and $(r+1,n-1)\rightarrow (r,n)$ can be \emph{open} or \emph{closed}. We set $\zeta_0^{y,k}(r)=\mathds{1}_{\{r=0\}}$, and for any $n \in \{1,\dots, n^{y,k}\}$, $r \in \mathds{Z}$ with $r+n$ even, $\zeta_n^{y,k}(r)=1$ if and only if $\zeta_{n-1}^{y,k}(r-1)=1$ and the bond $(r-1,n-1)\rightarrow (r,n)$ is open or $\zeta_{n-1}^{y,k}(r+1)=1$ and the bond $(r+1,n-1)\rightarrow (r,n)$ is open. For any $n \in \{1,\dots, n^{y,k}\}$, $r \in \mathds{Z}$ with $r+n$ odd, we set $\zeta_n^{y,k}(r)=0$. The state of the bonds is defined as follows. For any $n \in \{1,\dots,n^{y,k}\}$, $r \in \mathds{Z}$ with $r+n$ even: \begin{itemize} \item $(r-1,n-1)\rightarrow (r,n)$ is open if and only if \[ \left\{\forall x \in y+\frac{r-n}{2} a_1u+R, ]t-(k+n)K,t-(k+n-1)K]\cap\mathcal{P}^1_x=\emptyset\right\}, \] i.e. there is no 1-clock ring in $y+\frac{r-n}{2} a_1u+R$ during the time interval $]t-(k+n)K,t-(k+n-1)K]$; \item $(r+1,n-1)\rightarrow (r,n)$ is open if and only if \begin{gather*} \left\{\exists t-(k+n)K < t_1 < \cdots < t_m \leq t-(k+n-1)K, \forall i \in \{1,\dots,m\}, t_i \in \mathcal{P}^0_{y+\frac{r-n}{2} a_1u+x_i}\right\} \\ \cap \left\{\forall x \in y+\frac{r-n}{2} a_1u+(R \cup \{x_1,\dots,x_m\}), ]t-(k+n)K,t-(k+n-1)K]\cap\mathcal{P}^1_x=\emptyset\right\}, \end{gather*} i.e. there are successive 0-clock rings in the equivalent of $x_1,\dots,x_m$ for $y+\frac{r-n}{2} a_1u+R$ during the time interval $]t-(k+n)K,t-(k+n-1)K]$, and no 1-clock ring at these sites or in $y+\frac{r-n}{2} a_1u+R$ in this time interval. \end{itemize} We notice that if all the sites of $y+\frac{r-n}{2} a_1u+R$ are at zero at time $t-(k+n)K$ and $(r-1,n-1)\rightarrow (r,n)$ is open, the sites of $y+\frac{r-n}{2} a_1u+R$ are still at zero at time $t-(k+n-1)K$. Moreover, by proposition \ref{prop_Bollobas}, if the sites of $y+\frac{r-n}{2} a_1u+R$ are at zero at time $t-(k+n)K$ and $(r+1,n-1)\rightarrow (r,n)$ is open, the sites of $a_1u+(y+\frac{r-n}{2} a_1u+R) = y+\frac{(r+1)-(n-1)}{2} a_1u+R$ are at zero at time $t-(k+n-1)K$. This allows us to deduce (see figure \ref{fig_transfer_zeroes} for an illustration of the mechanism): \begin{proposition}\label{prop_transfer_zeroes2} If there exists $r_0 \in \mathds{Z}$ such that $\zeta_{n^{y,k}}^{y,k}(r_0)=1$ and the sites of $y+\frac{r_0-n^{y,k}}{2}a_1u+R$ are at zero at time $t-\lfloor \frac{t}{K} \rfloor K$, then the sites of $y+R$ are at zero at time $t-kK$. 
\end{proposition} \begin{figure} \begin{tikzpicture} \draw (0,-3)--(0,3); \draw[dashed] (0,-3)--(0,-3.5); \draw[dashed] (0,3)--(0,3.5); \draw (0,-3) node{$-$} node[right] {-3}; \draw (0,-2) node[right] {-2}; \draw (0,-1) node{$-$} node[right] {-1}; \draw (0,0) node[right] {0}; \draw (0,1) node{$-$} node[right] {1}; \draw (0,2) node[right] {2}; \draw (0,3) node{$-$} node[right] {3}; \draw (0,4) node{$r$} ; \draw[->] (0,-3.8)--(-3,-3.8); \draw (0,-3.8) node{$\shortmid$} node[below] {0}; \draw (-1,-3.8) node{$\shortmid$} node[below] {1}; \draw (-2,-3.8) node{$\shortmid$} node[below] {2}; \draw (-3,-3.8) node[below] {3}; \draw (-1.5,-4.5) node{$n$}; \draw[gray] (-1,-3)--(0,-2); \draw[gray] (-3,-3)--(0,0); \draw[gray] (-3,-1)--(0,2); \draw[gray] (-3,1)--(-1,3); \draw[gray] (-1,-3)--(-3,-1); \draw[gray] (0,-2)--(-3,1); \draw[gray] (0,0)--(-3,3); \draw[gray] (0,2)--(-1,3); \draw[gray,dashed] (-1,-3)--(-0.5,-3.5); \draw[gray,dashed] (-1,-3)--(-1.5,-3.5); \draw[gray,dashed] (-3,-3)--(-2.5,-3.5); \draw[gray,dashed] (-1,3)--(-0.5,3.5); \draw[gray,dashed] (-1,3)--(-1.5,3.5); \draw[gray,dashed] (-3,3)--(-2.5,3.5); \draw [ultra thick] (-1,-3)--(-2,-2); \draw [ultra thick] (-3,-3)--(-2,-2); \draw [ultra thick] (-3,-1)--(-2,-2); \draw [ultra thick] (-1,-1)--(-2,0); \draw [ultra thick] (-2,0)--(-3,1); \draw [ultra thick] (-2,0)--(-1,1); \draw [ultra thick] (-1,1)--(0,0); \draw [ultra thick] (-2,2)--(-3,3); \draw [ultra thick] (-1,3)--(0,2); \draw[dashed, ultra thick] (-3,-3)--(-2.5,-3.5); \draw (-3,1) node [left] {$r_0$}; \draw (-2,0)--(-3,1) node[midway, below left]{$b_1$}; \draw (-1,1)--(-2,0) node[midway, below right]{$b_2$}; \draw (0,0)--(-1,1) node[midway, above right]{$b_3$}; \draw[>=stealth,->] (0,0.2)--(-0.8,1) ; \draw[>=stealth,->] (-1,0.8)--(-1.8,0) ; \draw[>=stealth,->] (-2.2,0)--(-3,0.8) ; \end{tikzpicture} \hspace{\fill} \begin{tikzpicture} \draw (0,-3) node[left]{$y-3a_1u+R$}; \draw (0,-2) node[left]{$y-2a_1u+R$}; \draw (0,-1) node[left]{$y-a_1u+R$}; \draw (0,0) node[left]{$y+R$}; \draw (0,1) node[left]{$y+a_1u+R$}; \draw (0,2) node[left]{$y+2a_1u+R$}; \draw (0,3) node[left]{$y+3a_1u+R$}; \draw (0,-3.5)--(0,3.5); \draw (1,-3.5)--(1,3.5); \draw[dashed] (0,-4)--(0,-3.5); \draw[dashed] (1,-4)--(1,-3.5); \draw[dashed] (0,4)--(0,3.5); \draw[dashed] (1,4)--(1,3.5); \draw (0,-3.5)--(1,-3.5); \draw (0,-2.5)--(1,-2.5); \draw (0,-1.5)--(1,-1.5); \draw (0,-0.5)--(1,-0.5); \draw (0,0.5)--(1,0.5); \draw (0,1.5)--(1,1.5); \draw (0,2.5)--(1,2.5); \draw (0,3.5)--(1,3.5); \draw (0.5,-4.5) node{$t-(k+3)K$}; \draw (2.5,-3.5)--(2.5,3.5); \draw (3.5,-3.5)--(3.5,3.5); \draw[dashed] (2.5,-4)--(2.5,-3.5); \draw[dashed] (3.5,-4)--(3.5,-3.5); \draw[dashed] (2.5,4)--(2.5,3.5); \draw[dashed] (3.5,4)--(3.5,3.5); \draw (2.5,-3.5)--(3.5,-3.5); \draw (2.5,-2.5)--(3.5,-2.5); \draw (2.5,-1.5)--(3.5,-1.5); \draw (2.5,-0.5)--(3.5,-0.5); \draw (2.5,0.5)--(3.5,0.5); \draw (2.5,1.5)--(3.5,1.5); \draw (2.5,2.5)--(3.5,2.5); \draw (2.5,3.5)--(3.5,3.5); \draw (3,-4.5) node{$t-(k+2)K$}; \draw (5,-3.5)--(5,3.5); \draw (6,-3.5)--(6,3.5); \draw[dashed] (5,-4)--(5,-3.5); \draw[dashed] (6,-4)--(6,-3.5); \draw[dashed] (5,4)--(5,3.5); \draw[dashed] (6,4)--(6,3.5); \draw (5,-3.5)--(6,-3.5); \draw (5,-2.5)--(6,-2.5); \draw (5,-1.5)--(6,-1.5); \draw (5,-0.5)--(6,-0.5); \draw (5,0.5)--(6,0.5); \draw (5,1.5)--(6,1.5); \draw (5,2.5)--(6,2.5); \draw (5,3.5)--(6,3.5); \draw (5.5,-4.5) node{$t-(k+1)K$}; \draw (7.5,-3.5)--(7.5,3.5); \draw (8.5,-3.5)--(8.5,3.5); \draw[dashed] (7.5,-4)--(7.5,-3.5); \draw[dashed] (8.5,-4)--(8.5,-3.5); \draw[dashed] 
(7.5,4)--(7.5,3.5); \draw[dashed] (8.5,4)--(8.5,3.5); \draw (7.5,-3.5)--(8.5,-3.5); \draw (7.5,-2.5)--(8.5,-2.5); \draw (7.5,-1.5)--(8.5,-1.5); \draw (7.5,-0.5)--(8.5,-0.5); \draw (7.5,0.5)--(8.5,0.5); \draw (7.5,1.5)--(8.5,1.5); \draw (7.5,2.5)--(8.5,2.5); \draw (7.5,3.5)--(8.5,3.5); \draw (8,-4.5) node{$t-kK$}; \draw [black,fill=gray] (0,-0.5)--(1,-0.5)--(1,-1.5)--(0,-1.5)--cycle; \draw[>=stealth,->] (1.2,-1)--(2.3,-1); \draw [black,fill=gray] (2.5,-0.5)--(3.5,-0.5)--(3.5,-1.5)--(2.5,-1.5)--cycle; \draw[>=stealth,->] (3.7,-1)--(4.8,0); \draw [black,fill=gray] (5,0.5)--(6,0.5)--(6,-0.5)--(5,-0.5)--cycle; \draw[>=stealth,->] (6.2,0)--(7.3,0); \draw [black,fill=gray] (7.5,0.5)--(8.5,0.5)--(8.5,-0.5)--(7.5,-0.5)--cycle; \end{tikzpicture} \caption{An illustration of proposition \ref{prop_transfer_zeroes2} with $n^{y,k} = 3$ and $r_0=1$. The figure at the left represents the bonds of the oriented percolation process $\zeta^{y,k}$; the open bonds are the thick ones, and the path of open bonds allowing $\zeta_{n^{y,k}}^{y,k}(r_0)=1$ is outlined by arrows. The figure at the right represents the consequences on the KCM process; each vertical strip represents the state of $\bigcup_{i \in \mathds{Z}}(y+ia_1u+R)$ at a certain time. If at time $t-(k+3)K$ the rectangle $y+\frac{1-n^{y,k}}{2}a_1u+R = y-a_1u+R$ is at zero (in gray), since the bond $(0,2) \rightarrow (1,3)$ (bond $b_1$) is open, $y-a_1u+R$ is still at zero at time $t-(k+2)K$. Moreover, since $(1,1) \rightarrow (0,2)$ (bond $b_2$) is open and $y-a_1u+R$ is at zero at time $t-(k+2)K$, $a_1u+(y-a_1u+R) = y+R$ is at zero at time $t-(k+1)K$. Finally, since $(0,0) \rightarrow (1,1)$ (bond $b_3$) is open and $y+R$ is at zero at time $t-(k+1)K$, $y+R$ is still at zero at time $t-kK$.} \label{fig_transfer_zeroes} \end{figure} \subsection{Properties of the auxiliary process}\label{subsec_prop_aux_process} In this subsection we state the two oriented percolation properties of $\zeta^{y,k}$, propositions \ref{prop_extinction_time} and \ref{prop_large_deviations}, that we will use in section \ref{sec_preuve_codings} to prove proposition \ref{prop_bound_single_coding}. In order to do that, we need a bound on the probability that a bond is closed; this will be lemma \ref{lemma_prob_bonds}. It is there that we need $q$ bigger than a $q_0 > 0$; this is necessary so that the probability that there is no 1-clock ring at the sites we consider is large. For any $K > 0$, we set $q_K = 1+\frac{1}{3K|R|}\ln(1-e^{-K})$. We can then state \begin{lemma}\label{lemma_prob_bonds} There exists a constant $K_p=K_p(\mathcal{U}) > 0$ such that for $K \geq K_p$, $q \in [q_K,1]$, $t \geq K$, $y \in \mathds{Z}^d$ and $k \in \{0,\dots,\lfloor \frac{t}{K} \rfloor\}$, the probability that any given bond is closed for the process $\zeta^{y,k}$ is smaller than $e^{-\frac{K}{4}}$. \end{lemma} \begin{proof} Let $K > 0$, $q \in [q_K,1]$, $t \geq K$, $y \in \mathds{Z}^d$ and $k \in \{0,\dots,\lfloor \frac{t}{K} \rfloor\}$. Let $n \in \{1,\dots,n^{y,k}\}$, $r \in \mathds{Z}$ with $r+n$ even. We notice that if the bond $(r-1,n-1)\rightarrow (r,n)$ is closed, the bond $(r+1,n-1)\rightarrow (r,n)$ is also closed, hence it is enough to bound the probability that $(r+1,n-1)\rightarrow (r,n)$ is closed. 
Denoting $E_1 = \{\forall x \in y+\frac{r-n}{2} a_1u+(R \cup \{x_1,\dots,x_m\}), ]t-(k+n)K,t-(k+n-1)K]\cap\mathcal{P}^1_x=\emptyset\}$ and $E_2 = \{\exists t-(k+n)K < t_1 < \cdots < t_m \leq t-(k+n-1)K, \forall i \in \{1,\dots,m\}, t_i \in \mathcal{P}^0_{y+\frac{r-n}{2} a_1u+x_i}\}$, we need to bound the probabilities of $E_1^c$ and $E_2^c$. We begin with $E_1^c$. The events $]t-(k+n)K,t-(k+n-1)K]\cap\mathcal{P}^1_x=\emptyset$ are independent and have probability $e^{-(1-q)K}$ each; moreover, $x_1,\dots,x_m$ belong to $(a_1u+R) \cup (2a_1u+R)$, so $\left|R\cup\{x_1,\dots,x_m\}\right| \leq 3|R|$; we deduce that the probability of $E_1$ is \begin{gather*} e^{-\left|R\cup\{x_1,\dots,x_m\}\right|(1-q)K} \geq e^{-3|R|(1-q)K} \geq e^{-3|R|(1-q_K)K} \\ \geq e^{-3|R|\left(1-\left(1+\frac{1}{3K|R|}\ln(1-e^{-K})\right)\right)K} = e^{\ln(1-e^{-K})} = 1-e^{-K}, \end{gather*} thus the probability of $E_1^c$ is at most $e^{-K}$. Moreover, the probability of $E_2^c$ is the probability that a Poisson point process of parameter $q$ has strictly less than $m$ elements in an interval of length $K$, hence it is $\sum_{i=0}^{m-1} e^{-qK}\frac{(qK)^i}{i!}$. When $K$ is large enough, $q \in [1/2,1]$, hence this probability is smaller than $e^{-\frac{1}{2}K} \sum_{i=0}^{m-1} \frac{K^i}{i!}$, which is smaller than $e^{-\frac{K}{3}}$ when $K$ is large enough depending on $m$, hence on $\mathcal{U}$. Consequently, when $K$ is large enough depending on $\mathcal{U}$, the probability that $(r+1,n-1)\rightarrow (r,n)$ is closed is smaller than $e^{-K} + e^{-\frac{K}{3}}$, which is smaller than $e^{-\frac{K}{4}}$. \end{proof} Thanks to lemma \ref{lemma_prob_bonds}, it is possible to prove two oriented percolation properties of $\zeta^{y,k}$. Firstly, for any $K > 0$, $q \in [q_K,1]$, $t \geq K$, $y \in \mathds{Z}^d$ and $k \in \{0,\dots,\lfloor \frac{t}{K} \rfloor\}$, we define $\tau^{y,k}=\inf\{n \in \{0,\dots,n^{y,k}\} \,|\, \forall r \in \mathds{Z}, \zeta^{y,k}_n(r)=0\}$, the time of death of the process $\zeta^{y,k}$ (if the set is empty, $\tau^{y,k}$ is infinite). Since $\zeta_0^{y,k}(r)=\mathds{1}_{\{r=0\}}$, which is not identically zero, $\tau^{y,k} \geq 1$. Then we have \begin{proposition}\label{prop_extinction_time} For any $q' \in [0,1]$, there exists a constant $K_c=K_c(\mathcal{U})>0$ such that for any $K \geq K_c$, $q \in [q_K,1]$, $t \geq K$, $y \in \mathds{Z}^d$, $k \in \{0,\dots,\lfloor \frac{t}{K} \rfloor\}$, $n \in \{0,\dots,n^{y,k}\}$, $\mathds{P}_{q',q}(n \leq \tau^{y,k} < + \infty) \leq 2 \cdot 3^{2n} e^{-\frac{Kn}{24}}$. \end{proposition} \begin{proof}[Sketch of proof.] The proposition can be proven by a classical contour method like the one presented in section 10 of \cite{Durrett_1984}. The idea is that if $n \leq \tau^{y,k} < + \infty$ we can draw a ``contour of closed bonds'' around the connected component of ones in $\zeta^{y,k}$, and this contour will have length $\Omega(n)$. Furthermore, it can be seen that bonds separated by at least 5 bonds from each other are independent, because they depend on clock rings in disjoint space-time intervals. Therefore if we keep one bond out of 6, we extract $\Omega(n)$ independent closed bonds from the contour, each of them being closed with probability at most $e^{-\frac{K}{4}}$ by lemma \ref{lemma_prob_bonds} when $K \geq K_p$, hence the bound. \end{proof} $\zeta^{y,k}$ also satisfies a second property.
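Before stating it, we record a purely illustrative numerical toy (ours; it is not used anywhere in the proofs, and it simplifies the situation by treating all bonds as independent, each closed with probability exactly $e^{-\frac{K}{4}}$, whereas lemma \ref{lemma_prob_bonds} only provides this value as an upper bound and the true bonds are only independent when sufficiently separated):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def simulate_zeta(n_steps, p_closed):
    """Toy version of the process zeta: zeta_0(r) = 1 iff r = 0, and
    zeta_n(r) = 1 iff an open bond comes from a 1 at r - 1 or r + 1."""
    ones = {0}
    for n in range(1, n_steps + 1):
        new_ones = set()
        for r in range(-n, n + 1):
            if (r + n) % 2 == 1:      # zeta_n(r) = 0 when r + n is odd
                continue
            # each incoming bond is open with probability 1 - p_closed
            if r - 1 in ones and rng.random() > p_closed:
                new_ones.add(r)
            elif r + 1 in ones and rng.random() > p_closed:
                new_ones.add(r)
        ones = new_ones
        if not ones:
            return n, ones            # extinction time tau = n
    return None, ones                 # survived the whole window

K = 8.0
tau, final_ones = simulate_zeta(200, np.exp(-K / 4))
print("tau =", tau, "| ones at the final level:", len(final_ones))
\end{verbatim}
For $K = 8$ each bond is open with probability $1-e^{-2} \approx 0.86$, well above the critical parameter of oriented percolation, and typical runs exhibit the dichotomy quantified by propositions \ref{prop_extinction_time} and \ref{prop_large_deviations}: either the process dies out within a few steps, or it survives with a set of ones growing linearly in $n$.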
For any $K > 0$, $q \in [q_K,1]$, $t \geq K$, $y \in \mathds{Z}^d$ and $k \in \{0,\dots,\lfloor \frac{t}{K} \rfloor\}$, we denote $\mathcal{X}^{y,k}=\{r \in \{-\lfloor \frac{n^{y,k}}{2} \rfloor,\dots, \lfloor \frac{n^{y,k}}{2} \rfloor\} \,|\, \zeta^{y,k}_{n^{y,k}}(r) = 1\}$. Then we have \begin{proposition}\label{prop_large_deviations} For any $q' \in [0,1]$, $\alpha \in ]0,1[$, there exists a constant $K_g(\alpha) = K_g(\mathcal{U},\alpha) > 0$ such that for any $K \geq K_g(\alpha)$, there exist constants $c_g > 0$ and $C_g = C_g(\mathcal{U},K,\alpha) > 0$ such that for any $q \in [q_K,1]$, $t \geq K$, $y \in \mathds{Z}^d$ and $k \in \{0,\dots,\lfloor \frac{t}{K} \rfloor\}$, $\mathds{P}_{q',q}\left(\tau^{y,k}=+\infty, |\mathcal{X}^{y,k}| \leq \frac{\alpha}{2} n^{y,k}\right) \leq C_ge^{-c_g n^{y,k}}$. \end{proposition} \begin{proof}[Sketch of proof.] This proposition comes from classical results in oriented percolation. Firstly, if the process survives until time $n^{y,k}$, it has a big ``range'': if we define $r^{y,k}=\sup\{r \in \mathds{Z} \,|\, \zeta^{y,k}_{n^{y,k}}(r)=1\}$ and $\ell^{y,k}=\inf\{r \in \mathds{Z} \,|\, \zeta^{y,k}_{n^{y,k}}(r)=1\}$, then $r^{y,k}$ and $|\ell^{y,k}|$ are so large that $\{-\lfloor \frac{n^{y,k}}{2} \rfloor,\dots, \lfloor \frac{n^{y,k}}{2} \rfloor\} \subset \{\ell^{y,k},\dots,r^{y,k}\}$; this can be proven with the contour argument in section 11 of \cite{Durrett_1984}. Moreover, the argument that proves (1) in \cite{Durrett_1984} also proves that in $\{\ell^{y,k},\dots,r^{y,k}\}$, $\zeta^{y,k}_{n^{y,k}}$ coincides with the oriented percolation process that has the same bonds, but which starts with all sites at 1 instead of just the origin. Finally, the end of section 5 of \cite{Durrett_et_al1988} contains a contour argument for the latter process which allows us to prove that it has a lot of ones; we can use this argument with the same adaptations we used for the contours of proposition~\ref{prop_extinction_time}. \end{proof} \section{Proof of proposition \ref{prop_bound_single_coding}}\label{sec_preuve_codings} In this section we use the auxiliary process defined in section \ref{sec_aux_proc} to give a proof of proposition \ref{prop_bound_single_coding}. In order to do that, we need some definitions. For any $q' \in ]0,1]$, $K \geq 2$, $q \in [q_K,1]$, $x \in \mathds{Z}^d$, $t \geq K$ and $\gamma = (y_k)_{k \in \{0,\dots,\lfloor\frac{t}{K^2}\rfloor\}} \in C_K^N(x,t)$, we define $k(\gamma)=\inf\{k \in \{0,\dots,\lfloor\frac{t}{K^2}\rfloor\} \,|\, \tau^{y_k,k}=+\infty\}$ if such a $k$ exists; in this case we also denote $y(\gamma)=y_{k(\gamma)}$ (in the following, when we write $k(\gamma)$ or $y(\gamma)$ without further precision, we always assume that they exist). For any $r \in \mathcal{X}^{y(\gamma),k(\gamma)}$ we define the events \[ W^{\gamma,\eta}(r) = \left\{(\eta_{t-\lfloor\frac{t}{K}\rfloor K})_{y(\gamma)+\frac{r-n^{y(\gamma),k(\gamma)}}{2}a_1u+R} = 0\right\}, W^{\gamma,\tilde{\eta}}(r) = \left\{(\tilde{\eta}_{t-\lfloor\frac{t}{K}\rfloor K})_{y(\gamma)+\frac{r-n^{y(\gamma),k(\gamma)}}{2}a_1u+R} = 0\right\}.
\] By proposition \ref{prop_transfer_zeroes2}, if the event $\{\exists r \in \mathcal{X}^{y(\gamma),k(\gamma)}, W^{\gamma,\eta}(r)\} \cap \{\exists r \in \mathcal{X}^{y(\gamma),k(\gamma)},W^{\gamma,\tilde{\eta}}(r)\}$ occurs, then the sites of $y(\gamma) + R$ are at zero at time $t - k(\gamma) K$ in both processes $(\eta_t)_{t \in [0,+\infty[}$ and $(\tilde{\eta}_t)_{t \in [0,+\infty[}$, in particular $y(\gamma)$ is at zero at time $t - k(\gamma) K$ in both processes, therefore $G(\gamma)$ is satisfied. Consequently, \begin{align*} \mathds{P}_{q',q}(G(\gamma)^c) \leq & \mathds{P}_{q',q}\left(k(\gamma)\text{ does not exist}\right) +\mathds{P}_{q',q}\left(k(\gamma)\text{ exists}, |\mathcal{X}^{y(\gamma),k(\gamma)}| \leq \frac{t}{6K}\right)\\ &+\mathds{P}_{q',q}\left(\left\{|\mathcal{X}^{y(\gamma),k(\gamma)}| > \frac{t}{6K}\right\} \cap \left\{\forall r \in \mathcal{X}^{y(\gamma),k(\gamma)}, W^{\gamma,\eta}(r)^c\right\} \right) \\ &+\mathds{P}_{q',q}\left(\left\{|\mathcal{X}^{y(\gamma),k(\gamma)}| > \frac{t}{6K}\right\} \cap \left\{\forall r \in \mathcal{X}^{y(\gamma),k(\gamma)}, W^{\gamma,\tilde{\eta}}(r)^c\right\} \right). \end{align*} Therefore it only remains to prove the following lemmas \ref{lemma_percolation_structure}, \ref{lemma_percolation_use} and \ref{lemma_wonderful_rectangles} in order to establish proposition \ref{prop_bound_single_coding}, thus ending the proof of theorem \ref{thm_convergence}: \begin{lemma}\label{lemma_percolation_structure} For any $q' \in ]0,1]$, there exists a constant $K_1=K_1(\mathcal{U}) \geq 2$ such that for any $K \geq K_1$, $q \in [q_K,1]$, there exist constants $\breve{c}_1 > 0$ and $\breve{C}_1=\breve{C}_1(K) > 0$ such that for any $x \in \mathds{Z}^d$, $t \geq K$, $\gamma \in C_K^N(x,t)$, we have $\mathds{P}_{q',q}(k(\gamma)$ does not exist$)\leq \breve{C}_1e^{-\breve{c}_1\frac{t}{K}}$. \end{lemma} \begin{lemma}\label{lemma_percolation_use} For any $q' \in ]0,1]$, there exists a constant $K_2=K_2(\mathcal{U}) \geq 2$ such that for any $K \geq K_2$, $q \in [q_K,1]$, there exist constants $\breve{c}_2 > 0$ and $\breve{C}_2=\breve{C}_2(\mathcal{U},K) > 0$ such that for any $x \in \mathds{Z}^d$, $t \geq K$, $\gamma \in C_K^N(x,t)$, $\mathds{P}_{q',q}(k(\gamma)$ exists, $|\mathcal{X}^{y(\gamma),k(\gamma)}| \leq \frac{t}{6K})\leq \breve{C}_2e^{-\breve{c}_2\frac{t}{K}}$. \end{lemma} \begin{lemma}\label{lemma_wonderful_rectangles} For any $q' \in ]0,1]$, $K \geq 2$, $q \in [q_K,1]$, there exists a constant $\breve{c}_3=\breve{c}_3(\mathcal{U},q') > 0$ such that for any $x \in \mathds{Z}^d$, $t \geq K$, $\gamma \in C_K^N(x,t)$, we get $\mathds{P}_{q',q}(\{|\mathcal{X}^{y(\gamma),k(\gamma)}| > \frac{t}{6K}\} \cap \{\forall r \in \mathcal{X}^{y(\gamma),k(\gamma)}, W^{\gamma,\eta}(r)^c\})\leq e^{-\breve{c}_3\frac{t}{K}}$ and $\mathds{P}_{q',q}(\{|\mathcal{X}^{y(\gamma),k(\gamma)}| > \frac{t}{6K}\} \cap \{\forall r \in \mathcal{X}^{y(\gamma),k(\gamma)}, W^{\gamma,\tilde{\eta}}(r)^c\})\leq e^{-\breve{c}_3\frac{t}{K}}$. \end{lemma} \begin{proof}[Proof of lemma \ref{lemma_percolation_structure}.] We set $K_1 = \max(K_c,48(\ln36+1))$, which depends only on $\mathcal{U}$. Let $q' \in ]0,1]$, $K \geq K_1$, $q \in [q_K,1]$, $x \in \mathds{Z}^d$, $t \geq K$ and $\gamma=(y_k)_{k \in \{0,\dots,\lfloor\frac{t}{K^2}\rfloor\}} \in C_K^N(x,t)$.
If $k(\gamma)$ does not exist, $\tau^{y_k,k}$ is finite for every $k \in \{0,\dots,\lfloor\frac{t}{K^2}\rfloor\}$, therefore if we call $k_1=0$ and $k_i=\sum_{j=1}^{i-1}\tau^{y_{k_j},k_j}$ for $i \geq 2$, $\tau^{y_{k_i},k_i}$ is finite as long as $k_i \leq \lfloor\frac{t}{K^2}\rfloor$. We will use proposition \ref{prop_extinction_time} to bound the probability that this happens. We call $L = \max\{i \geq 1 \,|\,k_i \leq \lfloor\frac{t}{K^2}\rfloor\}$; we then have $\sum_{i=1}^{L}\tau^{y_{k_i},k_i} > \lfloor\frac{t}{K^2}\rfloor$, hence if we set $n_L = \lfloor\frac{t}{K^2}\rfloor - \sum_{i=1}^{L-1}\tau^{y_{k_i},k_i}$, we have $n_L \leq \tau^{y_{k_L},k_L} < +\infty$. Furthermore, if we set $n_i = \tau^{y_{k_i},k_i}$ for $i \in \{1,\dots,L-1\}$, we get $n_1+\cdots+n_L=\lfloor\frac{t}{K^2}\rfloor$ and $k_i = \sum_{j=1}^{i-1}n_{j}$ for all $i \in \{1,\dots,L\}$ (we denote $N_i=\sum_{j=1}^{i-1}n_{j}$). In addition, since $\tau^{y_{k},k} \geq 1$ for any $k \in \{0,\dots,\lfloor\frac{t}{K^2}\rfloor\}$, $L \leq \lfloor\frac{t}{K^2}\rfloor+1$. We deduce \[ \mathds{P}_{q',q}(k(\gamma)\text{ does not exist}) \] \[ \leq \sum_{M \leq \left\lfloor\frac{t}{K^2}\right\rfloor+1,n_1+\cdots+n_M=\left\lfloor\frac{t}{K^2}\right\rfloor} \mathds{P}_{q',q}(L=M,\forall 1 \leq i \leq M-1,\tau^{y_{N_i},N_i} =n_i, n_M \leq \tau^{y_{N_M},N_M} <+\infty). \] Moreover, the events $\{\tau^{y_{N_i},N_i}=n_i\}$, $i \in \{1,\dots,M-1\}$ and $\{n_M \leq \tau^{y_{N_M},N_M} <+\infty\}$ depend only on clock rings in the time intervals $]t-(N_i+n_i)K,t-N_iK] = ]t-N_{i+1}K,t-N_iK]$, $i \in \{1,\dots,M-1\}$ and $]t-(N_M+n_M)K,t-N_MK]$, which are disjoint, thus the events are independent, hence \[ \mathds{P}_{q',q}(L=M,\forall 1 \leq i \leq M-1,\tau^{y_{N_i},N_i} =n_i, n_M \leq \tau^{y_{N_M},N_M} <+\infty) \] \[ \leq \left(\prod_{i=1}^{M-1}\mathds{P}_{q',q}\left(\tau^{y_{N_i},N_i}=n_i\right)\right) \mathds{P}_{q',q}\left(n_M \leq \tau^{y_{N_M},N_M} <+\infty\right) \leq \prod_{i=1}^{M}\mathds{P}_{q',q}\left(n_i \leq \tau^{y_{N_i},N_i} <+\infty\right) \] \[ \leq \prod_{i=1}^{M} 2 \cdot 3^{2n_i}e^{-\frac{Kn_i}{24}} = 2^M 3^{2 \sum_{i=1}^M n_i}e^{-\frac{K}{24}\sum_{i=1}^M n_i} = 2^M 3^{2 \left\lfloor\frac{t}{K^2}\right\rfloor}e^{-\frac{K}{24}\left\lfloor\frac{t}{K^2}\right\rfloor} \] by proposition \ref{prop_extinction_time} and since $n_1+\cdots+n_M=\left\lfloor\frac{t}{K^2}\right\rfloor$. Consequently, \[ \mathds{P}_{q',q}(k(\gamma)\text{ does not exist}) \leq \sum_{M \leq \left\lfloor\frac{t}{K^2}\right\rfloor+1,n_1+\cdots+n_M=\left\lfloor\frac{t}{K^2}\right\rfloor} 2^M 3^{2 \left\lfloor\frac{t}{K^2}\right\rfloor}e^{-\frac{K}{24}\left\lfloor\frac{t}{K^2}\right\rfloor}.
\] In addition, lemma \ref{lemma_binomial_coeffs} yields that for any $M \in \{1,\dots,\lfloor\frac{t}{K^2}\rfloor+1\}$, we have $|\{(n_1,\dots,n_M)\in\mathds{N}^M \,|\, n_1+\cdots+n_M=\lfloor\frac{t}{K^2}\rfloor\}| = \binom{M+\lfloor\frac{t}{K^2}\rfloor-1}{M-1} = \binom{M+\lfloor\frac{t}{K^2}\rfloor-1}{\lfloor\frac{t}{K^2}\rfloor}$, and by the Stirling formula there exists a constant $\lambda > 0$ such that \[ \binom{M+\left\lfloor\frac{t}{K^2}\right\rfloor-1}{\left\lfloor\frac{t}{K^2}\right\rfloor} \leq \frac{\left(M+\left\lfloor\frac{t}{K^2}\right\rfloor-1\right)^{\!\left\lfloor\frac{t}{K^2}\right\rfloor}} {\left\lfloor\frac{t}{K^2}\right\rfloor!} \leq \lambda\!\left(\!\frac{e\left(M+\left\lfloor\frac{t}{K^2}\right\rfloor-1\right)} {\left\lfloor\frac{t}{K^2}\right\rfloor}\!\right)^{\!\left\lfloor\frac{t}{K^2}\right\rfloor} \leq \lambda\!\left(\!\frac{e\left(\lfloor\frac{t}{K^2}\rfloor+\left\lfloor\frac{t}{K^2}\right\rfloor\right)} {\left\lfloor\frac{t}{K^2}\right\rfloor}\!\right)^{\!\left\lfloor\frac{t}{K^2}\right\rfloor} \] since $M \leq \lfloor\frac{t}{K^2}\rfloor+1$. We deduce $|\{(n_1,\dots,n_M)\in\mathds{N}^M \,|\, n_1+\cdots+n_M=\lfloor\frac{t}{K^2}\rfloor\}| \leq \lambda(2e)^{\lfloor\frac{t}{K^2}\rfloor}$. Therefore \[ \mathds{P}_{q',q}(k(\gamma)\text{ does not exist}) \leq \sum_{M=1}^{\left\lfloor\frac{t}{K^2}\right\rfloor+1} \lambda(2e)^{\left\lfloor\frac{t}{K^2}\right\rfloor} 2^M 3^{2 \left\lfloor\frac{t}{K^2}\right\rfloor}e^{-\frac{K}{24}\left\lfloor\frac{t}{K^2}\right\rfloor} \] \[ \leq \lambda(2e)^{\left\lfloor\frac{t}{K^2}\right\rfloor} 2^{\left\lfloor\frac{t}{K^2}\right\rfloor+2} 3^{2 \left\lfloor\frac{t}{K^2}\right\rfloor} e^{-\frac{K}{24}\left\lfloor\frac{t}{K^2}\right\rfloor} = 4\lambda\left(36ee^{-\frac{K}{24}}\right)^{\left\lfloor\frac{t}{K^2}\right\rfloor}. \] In addition, since $K \geq 48(\ln36+1)$, $36ee^{-\frac{K}{48}} \leq 36ee^{-\ln36-1} = 1$, so $36ee^{-\frac{K}{24}} \leq e^{-\frac{K}{48}}$, hence \[ \mathds{P}_{q',q}(k(\gamma)\text{ does not exist}) \leq 4\lambda e^{-\frac{K}{48}\left\lfloor\frac{t}{K^2}\right\rfloor} \leq 4\lambda e^{-\frac{K}{48}\left(\frac{t}{K^2}-1\right)} = 4\lambda e^{\frac{K}{48}}e^{-\frac{t}{48 K}}, \] which is the lemma. \end{proof} \begin{proof}[Proof of lemma \ref{lemma_percolation_use}.] This proof is an application of proposition \ref{prop_large_deviations}. We set $K_2 = \max (4,K_g(1/2))$, which depends only on $\mathcal{U}$. Let $q' \in ]0,1]$, $K \geq K_2$, $q \in [q_K,1]$ and $x \in \mathds{Z}^d$. It is enough to prove the lemma for $t \geq \max(K,\frac{3K^2}{K-3})$; indeed, if the lemma holds for $t \geq \max(K,\frac{3K^2}{K-3})$, one has only to enlarge $\breve{C}_2$ to prove it for $t \geq K$. Therefore we set $t \geq \max(K,\frac{3K^2}{K-3})$ and $\gamma = (y_k)_{k \in \{0,\dots,\lfloor\frac{t}{K^2}\rfloor\}} \in C_K^N(x,t)$. If $k(\gamma)$ exists but $|\mathcal{X}^{y(\gamma),k(\gamma)}| \leq \frac{t}{6K}$, we have $\tau^{y(\gamma),k(\gamma)} = +\infty$ and $|\mathcal{X}^{y(\gamma),k(\gamma)}| \leq \frac{t}{6K}$, hence \[ \mathds{P}_{q',q}\left(k(\gamma)\text{ exists}, |\mathcal{X}^{y(\gamma),k(\gamma)}| \leq \frac{t}{6K}\right) \leq \sum_{k=0}^{\left\lfloor\frac{t}{K^2}\right\rfloor} \mathds{P}_{q',q}\left(\tau^{y_k,k} = +\infty, |\mathcal{X}^{y_k,k}| \leq \frac{t}{6K}\right). \] We are going to bound the term on the right. 
For any $k \in \{0,\dots,\lfloor\frac{t}{K^2}\rfloor\}$, we have $n^{y_k,k} = \lfloor\frac{t}{K}\rfloor-k \geq \lfloor\frac{t}{K}\rfloor-\lfloor\frac{t}{K^2}\rfloor \geq \frac{t}{K}-1-\frac{t}{K^2}$, and since $t \geq \frac{3K^2}{K-3}$, $(K-3)t \geq 3K^2$ thus $\frac{1}{3}\frac{t}{K}-\frac{t}{K^2} \geq 1$, so $n^{y_k,k} \geq \frac{2}{3}\frac{t}{K}$, hence if we choose $\alpha=\frac{1}{2}$ we have $\frac{\alpha}{2}n^{y_k,k} \geq \frac{t}{6K}$. Therefore by proposition \ref{prop_large_deviations}, \[ \mathds{P}_{q',q}\left(\tau^{y_k,k} = +\infty, |\mathcal{X}^{y_k,k}| \leq \frac{t}{6K}\right) \leq C_g e^{-c_g n^{y_k,k}} \leq C_g e^{-c_g \frac{2}{3}\frac{t}{K}} \] since $n^{y_k,k} \geq \frac{2}{3}\frac{t}{K}$. Consequently \[ \mathds{P}_{q',q}\left(k(\gamma)\text{ exists}, |\mathcal{X}^{y(\gamma),k(\gamma)}| \leq \frac{t}{6K}\right) \leq \left(\left\lfloor\frac{t}{K^2}\right\rfloor+1\right)C_g e^{-\frac{2c_g}{3}\frac{t}{K}} \leq \left(\frac{t}{K}+1\right)C_g e^{-\frac{2c_g}{3}\frac{t}{K}}, \] which yields lemma \ref{lemma_percolation_use}. \end{proof} \begin{proof}[Proof of lemma \ref{lemma_wonderful_rectangles}.] Let $q' \in ]0,1]$, $K \geq 2$, $q \in [q_K,1]$, $x \in \mathds{Z}^d$, $t \geq K$ and $\gamma \in C_K^N(x,t)$. The argument is elementary: we notice that there is a positive probability that a rectangle is full of zeroes in the initial configurations of the two processes since they have laws $\nu_{q'}$ and $\nu_q$, as well as a positive probability that there is no 1-clock ring in the rectangle in the time interval $[0,t-K\lfloor\frac{t}{K}\rfloor]$. Therefore there is a positive probability that a rectangle is full of zeroes in both processes at time $t-K\lfloor\frac{t}{K}\rfloor$, so if there are $\frac{t}{6K}$ elements in $\mathcal{X}^{y(\gamma),k(\gamma)}$, the probability that none of the corresponding rectangles is full of zeroes in both processes at time $t-K\lfloor\frac{t}{K}\rfloor$ is of order $e^{-\breve{c}_3\frac{t}{K}}$. We notice that $\mathcal{X}^{y(\gamma),k(\gamma)}$ depends only on clock rings in the time interval $]t-K\lfloor\frac{t}{K}\rfloor,t]$, hence if $\mathcal{F}$ is the $\sigma$-algebra generated by the clock rings in $]t-K\lfloor\frac{t}{K}\rfloor,t]$, for $\hat{\eta}=\eta$ or $\tilde{\eta}$, we have \begin{equation}\label{eq_wonderful_rectangles} \begin{split} \mathds{P}_{q',q}\left(\left\{|\mathcal{X}^{y(\gamma),k(\gamma)}| > \frac{t}{6K}\right\} \cap \{\forall r \in \mathcal{X}^{y(\gamma),k(\gamma)}, W^{\gamma,\hat{\eta}}(r)^c\}\right) \\ = \mathds{E}_{q',q}\left(\mathds{1}_{\{|\mathcal{X}^{y(\gamma),k(\gamma)}| > \frac{t}{6K}\}} \mathds{P}_{q',q}(\forall r \in \mathcal{X}^{y(\gamma),k(\gamma)}, W^{\gamma,\hat{\eta}}(r)^c | \mathcal{F})\right). \end{split} \end{equation} Moreover, \[ \mathds{P}_{q',q}(\forall r \in \mathcal{X}^{y(\gamma),k(\gamma)}, W^{\gamma,\hat{\eta}}(r)^c | \mathcal{F}) \] \[ = \mathds{P}_{q',q} \left(\left.\forall r \in \mathcal{X}^{y(\gamma),k(\gamma)}, \exists x' \in y(\gamma)+\frac{r-n^{y(\gamma),k(\gamma)}}{2}a_1u+R, \hat{\eta}_{t-\lfloor\frac{t}{K}\rfloor K}(x') \neq 0 \right| \mathcal{F}\right) \leq \] \[ \mathds{P}_{q',q}\left(\!\forall r \in \mathcal{X}^{y(\gamma),k(\gamma)}, \exists x'\in y(\gamma)+\frac{r-n^{y(\gamma),k(\gamma)}}{2}a_1u+R, \left. 
\hat{\eta}_0(x') \neq 0 \text{ or } \mathcal{P}_{x'}^1 \cap \left[0,t-\left\lfloor\frac{t}{K}\right\rfloor K\right] \neq \emptyset \right| \mathcal{F}\!\right) \] \[ = \prod_{r \in \mathcal{X}^{y(\gamma),k(\gamma)}} \mathds{P}_{q',q} \left( \exists x' \in y(\gamma)+\frac{r-n^{y(\gamma),k(\gamma)}}{2}a_1u+R, \hat{\eta}_0(x') \neq 0 \text{ or } \mathcal{P}_{x'}^1 \cap \left[0,t-\left\lfloor\frac{t}{K}\right\rfloor K\right] \neq \emptyset \right) \] since the events $\{\exists x' \in y(\gamma)+\frac{r-n^{y(\gamma),k(\gamma)}}{2}a_1u+R,\hat{\eta}_0(x') \neq 0$ or $\mathcal{P}_{x'}^1 \cap [0,t-\lfloor\frac{t}{K}\rfloor K] \neq \emptyset \}$ depend only on the state of $\hat{\eta}_0$ and on the clock rings of the time interval $[0,t-K\lfloor\frac{t}{K}\rfloor]$ at the sites of $y(\gamma)+\frac{r-n^{y(\gamma),k(\gamma)}}{2}a_1u+R$, and are therefore mutually independent and independent of $\mathcal{F}$. Therefore translation invariance yields \[ \mathds{P}_{q',q}(\forall r \in \mathcal{X}^{y(\gamma),k(\gamma)}, W^{\gamma,\hat{\eta}}(r)^c | \mathcal{F}) \leq \mathds{P}_{q',q}\left(\exists x' \in R,\hat{\eta}_0(x') \neq 0\text{ or } \mathcal{P}_{x'}^1 \cap \left[0,t-\!\left\lfloor\frac{t}{K}\right\rfloor\! K\right] \neq \emptyset\right)^{|\mathcal{X}^{y(\gamma),k(\gamma)}|} \] \[ = \left(1-\mathds{P}_{q',q}\left(\forall x' \in R,\hat{\eta}_0(x') = 0, \mathcal{P}_{x'}^1 \cap \left[0,t-\left\lfloor\frac{t}{K}\right\rfloor K\right] = \emptyset\right)\right)^{|\mathcal{X}^{y(\gamma),k(\gamma)}|} \] \[ = \left(1-\left(\mathds{P}_{q',q}\left(\hat{\eta}_0(0) = 0\right)\mathds{P}_{q',q}\left( \mathcal{P}_{0}^1 \cap \left[0,t-\left\lfloor\frac{t}{K}\right\rfloor K\right] = \emptyset\right)\right)^{|R|}\right)^{|\mathcal{X}^{y(\gamma),k(\gamma)}|}. \] Furthermore, since $t-\left\lfloor\frac{t}{K}\right\rfloor K \leq K$ and $q \geq q_K = 1+\frac{1}{3K|R|}\ln(1-e^{-K})$, \[ \mathds{P}_{q',q}\left( \mathcal{P}_{0}^1 \cap \left[0,t-\left\lfloor\frac{t}{K}\right\rfloor K\right] = \emptyset\right) = e^{-(1-q)\left(t-\left\lfloor\frac{t}{K}\right\rfloor K\right)} \geq e^{\frac{1}{3K|R|}\ln(1-e^{-K})K} = (1-e^{-K})^{\frac{1}{3|R|}} \geq \left(\frac{1}{2}\right)^{\frac{1}{3|R|}} \] since $K \geq 2$. This implies \[ \mathds{P}_{q',q}(\forall r \in \mathcal{X}^{y(\gamma),k(\gamma)}, W^{\gamma,\hat{\eta}}(r)^c | \mathcal{F}) \leq \left(1-\mathds{P}_{q',q}\left(\hat{\eta}_0(0) = 0\right)^{|R|} \left(\frac{1}{2}\right)^{\frac{1}{3}}\right)^{|\mathcal{X}^{y(\gamma),k(\gamma)}|}. \] In addition, if $\hat{\eta}=\eta$, $\mathds{P}_{q',q}(\hat{\eta}_0(0) = 0) = q'$, so $1-\mathds{P}_{q',q}(\eta_0(0) = 0)^{|R|} (\frac{1}{2})^{\frac{1}{3}} = 1-(q')^{|R|}2^{-\frac{1}{3}}$, and if $\hat{\eta}=\tilde{\eta}$, $1-\mathds{P}_{q',q}(\hat{\eta}_0(0) = 0)^{|R|} (\frac{1}{2})^{\frac{1}{3}} = 1-q^{|R|}2^{-\frac{1}{3}}$. Moreover, since $K \geq 2$, $q \geq q_K = 1+\frac{1}{3K|R|}\ln(1-e^{-K}) \geq 1+ \frac{1}{6|R|}\ln(1-e^{-2}) \geq \frac{1}{2}$, hence $1-\mathds{P}_{q',q}(\tilde{\eta}_0(0) = 0)^{|R|} (\frac{1}{2})^{\frac{1}{3}} \leq 1-2^{-|R|-\frac{1}{3}}$. This implies that if $\breve{c}_3'$ is the minimum of $-\ln(1-(q')^{|R|}2^{-\frac{1}{3}})$ and $-\ln(1-2^{-|R|-\frac{1}{3}})$ (which depends only on $\mathcal{U}$ and $q'$), for $\hat{\eta}=\eta$ or $\tilde{\eta}$ we have $\mathds{P}_{q',q}(\forall r \in \mathcal{X}^{y(\gamma),k(\gamma)}, W^{\gamma,\hat{\eta}}(r)^c | \mathcal{F}) \leq e^{-\breve{c}_3'|\mathcal{X}^{y(\gamma),k(\gamma)}|}$.
Consequently, (\ref{eq_wonderful_rectangles}) yields \[ \mathds{P}_{q',q}\left(\left\{|\mathcal{X}^{y(\gamma),k(\gamma)}| > \frac{t}{6K}\right\} \cap \{\forall r \in \mathcal{X}^{y(\gamma),k(\gamma)}, W^{\gamma,\hat{\eta}}(r)^c\}\right) \] \[ \leq \mathds{E}_{q',q}\left(\mathds{1}_{\{|\mathcal{X}^{y(\gamma),k(\gamma)}| > \frac{t}{6K}\}} e^{-\breve{c}_3'|\mathcal{X}^{y(\gamma),k(\gamma)}|}\right) \leq e^{-\breve{c}_3'\frac{t}{6K}}, \] which is the lemma. \end{proof} \section*{Acknowledgements} I would like to thank my PhD advisor Cristina Toninelli; I would also like to thank Ivailo Hartarsky for his careful reading of this paper and for his suggestions, as well as for pointing me to some references.
\section{Fisher matrix} \begin{equation} F_{ij} = \frac{d P}{d \Delta_{p,i}} C^{-1}\frac{d P}{d \Delta_{p,j}} \end{equation} \begin{equation} \frac{dP}{d \Delta_p} = \sum_i \frac{\partial P}{\partial \theta_i} \frac{\partial \theta_i}{\partial \Delta_p} \end{equation} \begin{eqnarray} \frac{\Delta P}{P_{\rm DMO}} &=& -2 \frac{f_c (\Omega_b/\Omega_m) M - cY/T -M_0}{M} \\ &=& -2\frac{f_c\Omega_b}{\Omega_m} \left(1 - \frac{cY \Omega_m}{f_c TM \Omega_b} - \frac{M_0 \Omega_m}{f_c M\Omega_b} \right) . \end{eqnarray} When $Y$ is very small, we should therefore have $\Delta P/ P_{\rm DMO} \sim -2 f_c \Omega_b/\Omega_m + 2M_0/M$. We should also have $Y/Y^{\rm SS} = f_c - M_0\Omega_m/(M \Omega_b)$ when $\Delta P/P_{\rm DMO} = 0$. In this simple model, the connection between these points will be a linear function of $Y/Y^{\rm SS}$. When the halo mass is very large, we expect $f_c = 1$ and $M_0/M \to 0$, so the model will linearly interpolate between $(\Delta P/P_{\rm DMO},Y/Y^{\rm SS}) = (-2\Omega_b/\Omega_m,0)$ and $(\Delta P/P_{\rm DMO},Y/Y^{\rm SS}) = (0,1)$. This linear relation is shown with the dashed line in the $Y$ columns of Figs.~\ref{fig:Y_fb_DeltaP} and \ref{fig:scatter_plot_all_ks}. At smaller halo mass, $f_c < 1$ and $M_0/M$ will become significant.
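To make the two endpoints explicit (a short algebraic check of ours, writing $Y^{\rm SS} \equiv TM\Omega_b/(c\,\Omega_m)$ for the self-similar normalization implicit above), the second line of the equation above can be rewritten as
\begin{equation}
\frac{\Delta P}{P_{\rm DMO}} = -2\frac{\Omega_b}{\Omega_m}\left(f_c - \frac{Y}{Y^{\rm SS}} - \frac{M_0 \Omega_m}{M\Omega_b}\right),
\end{equation}
so $\Delta P/P_{\rm DMO} = 0$ indeed requires $Y/Y^{\rm SS} = f_c - M_0\Omega_m/(M\Omega_b)$, while $Y \to 0$ with $f_c = 1$ and $M_0/M \to 0$ gives $\Delta P/P_{\rm DMO} = -2\Omega_b/\Omega_m$, reproducing the two endpoints quoted above.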
We can also write \begin{eqnarray} M_{\rm ej} &=& f_c(\Omega_b/\Omega_m)M - f_{b} M - M_0, \end{eqnarray} (where $f_b$ includes stellar mass following van Daalen) which leads to \begin{eqnarray}\label{eq:DelP_P_fb} \frac{\Delta P}{P_{\rm DMO}} = -2 \frac{f_c\Omega_b}{\Omega_m} \left( 1 -\frac{f_{b}\Omega_m}{f_c \Omega_b} - \frac{\Omega_m M_0}{f_c \Omega_b M} \right). \end{eqnarray} In other words, we find a linear relation between $\Delta P/P_{\rm DMO}$ and $f_b \Omega_m/\Omega_b$ with endpoints at $(\Delta P/P_{\rm DMO},f_b \Omega_m/\Omega_b) = (-2\Omega_b/\Omega_m,0)$ and $(\Delta P/P_{\rm DMO},f_b \Omega_m/\Omega_b) = (0,1)$ for high-mass halos. This relation is shown as the dashed line in the $f_b$ columns of Fig.~\ref{fig:Y_fb_DeltaP}. \section{Introduction}\label{sec:intro} The statistics of the matter distribution on scales $k \gtrsim 0.1\,h{\rm Mpc}^{-1}$ are tightly constrained by current weak lensing surveys \citep[e.g.][]{Asgari:2021,DESY3cosmo}. However, modeling the matter distribution on small scales $k \gtrsim 1\,h{\rm Mpc}^{-1}$ to extract cosmological information is complicated by the effects of baryonic feedback \citep{Rudd:2008}. Energetic output from active galactic nuclei (AGN) and stellar processes (e.g. winds and supernovae) directly impacts the distribution of gas on small scales, thereby changing the total matter distribution \citep[e.g.][]{Chisari:2019}.\footnote{Changes to the gas distribution can also gravitationally influence the dark matter distribution, further modifying the total matter distribution.} The coupling between these processes and the large-scale gas distribution is challenging to model theoretically and in simulations because of the large dynamic range involved, from the scales of individual stars to the scales of galaxy clusters. While it is generally agreed that feedback leads to a suppression of the matter power spectrum on scales $0.1\,h{\rm Mpc}^{-1} \lesssim k \lesssim 20\,h{\rm Mpc}^{-1}$, the amplitude of this suppression remains uncertain by tens of percent \citep{vanDaalen:2020, Villaescusa-Navarro:2021:ApJ:} (see also Fig.~\ref{fig:Pk_Bk_CV}). This systematic uncertainty limits constraints on cosmological parameters from current weak lensing surveys \citep[e.g.][]{DESY3cosmo,Asgari:2021}. For future surveys, such as the Vera Rubin Observatory LSST \citep{TheLSSTDarkEnergyScienceCollaboration:2018:arXiv:} and \textit{Euclid} \citep{EuclidCollaboration:2020:A&A:}, the problem will become even more severe given expected increases in statistical precision. In order to reduce the uncertainties associated with feedback, we would like to identify observable quantities that carry information about the impact of feedback on the matter power spectrum, and develop approaches to extract this information \citep[e.g.][]{Nicola:2022:JCAP:}. Recently, \citet{vanDaalen:2020} showed that the halo baryon fraction, $f_b$, in halos with $M \sim 10^{14}\,M_{\odot}$ carries significant information about suppression of the matter power spectrum caused by baryonic feedback. Notably, they found that the relation between $f_b$ and matter power suppression was robust to changing feedback prescription. Note that $f_b$ as defined by \citet{vanDaalen:2020} counts baryons in both the intracluster medium as well as those in stars.
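For concreteness, the baryon fraction discussed here is
\begin{equation}
f_b \equiv \frac{M_{\rm gas} + M_\star}{M_{\rm halo}},
\end{equation}
i.e. gas plus stellar mass divided by the total halo mass (the precise aperture, e.g. $M_{500c}$, follows whichever halo definition is in use); it is naturally compared with the cosmic baryon fraction $\Omega_b/\Omega_m$, as in equation~(\ref{eq:DelP_P_fb}).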
The connection between $f_b$ and feedback is expected, since one of the main drivers of feedback's impact on the matter distribution is the ejection of gas from halos by AGN. Therefore, when feedback is strong, halos will be depleted of baryons and $f_b$ will be lower. The conversion of baryons into stars --- which will not significantly impact the matter power spectrum on large scales --- does not impact $f_b$, since $f_b$ includes baryons in stars as well as the ICM. \citet{vanDaalen:2020} specifically consider the measurement of $f_b$ in halos with $6\times 10^{13} M_{\odot} \lesssim M_{500c} \lesssim 10^{14}\,M_{\odot}$. In much more massive halos, the energy output of AGN is small compared to the binding energy of the halo, preventing gas from being expelled. In smaller halos, \citet{vanDaalen:2020} find that the correlation between power spectrum suppression and $f_b$ is less clear. Although $f_b$ carries information about feedback, it is somewhat unclear how one would measure $f_b$ in practice. Observables such as the kinematic Sunyaev Zel'dovich (kSZ) effect can be used to constrain the gas density; combined with some estimate of stellar mass, $f_b$ could then be inferred. However, measuring the kSZ is challenging, and current measurements have low signal-to-noise \citep{Hand:2012,Hill:2016,Soergel:2016}. Moreover, \citet{vanDaalen:2020} consider a relatively limited range of feedback prescriptions. It is unclear whether a broader range of feedback models could lead to a greater spread in the relationship between $f_b$ and baryonic effects on the power spectrum. In any case, it is worthwhile to consider other potential observational probes of feedback. Another potentially powerful probe of baryonic feedback is the thermal SZ (tSZ) effect. The tSZ effect is caused by inverse Compton scattering of CMB photons with a population of electrons at high temperature. This scattering process leads to a spectral distortion in the CMB that can be reconstructed from multi-frequency CMB observations. The amplitude of this distortion is sensitive to the line-of-sight integral of the electron pressure. Since feedback changes the distribution and thermodynamics of the gas, it stands to reason that it could impact the tSZ signal. Indeed, several works using both data \citep[e.g.][]{Pandey:2019,Pandey:2022,Gatti:2022} and simulations \citep[e.g.][]{Scannapieco:2008,Bhattacharya:2008,Wadekar:2022} have shown that the tSZ signal from low-mass (group scale) halos is sensitive to feedback. Excitingly, the sensitivity of tSZ measurements is expected to increase dramatically in the near future due to high-sensitivity CMB measurements from e.g. SPT-3G \citep{Benson:2014:SPIE:}, Advanced ACTPol \citep{Henderson:2016:JLTP:}, Simons Observatory \citep{Ade:2019:JCAP:}, and CMB Stage 4 \citep{CMBS4}. The goal of this work is to investigate what information the tSZ signals from low-mass halos contain about the impact of feedback on the small-scale matter distribution. The tSZ signal, which we denote with the Compton $y$ parameter, carries different information from $f_b$. For one, $y$ is sensitive only to the gas and not to stellar mass. Moreover, $y$ carries sensitivity to both the gas density and pressure, unlike $f_b$ which depends only on the gas density.\footnote{Of course, sensitivity to gas temperature does not necessarily mean that the tSZ is a more useful probe of feedback.} The $y$ signal is also easier to measure than $f_b$, since it can be estimated simply by cross-correlating halos with a tSZ map.
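For reference, the Compton parameter along a given line of sight takes the standard form
\begin{equation}
y = \frac{\sigma_T}{m_e c^2} \int P_e \, dl,
\end{equation}
where $\sigma_T$ is the Thomson cross-section, $m_e$ the electron mass and $P_e$ the electron pressure; since $P_e = n_e k_B T_e$, this makes explicit that $y$ responds to both the gas density and its temperature.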
The signal-to-noise of such cross-correlation measurements is already high with current data, on the order of 10s of $\sigma$ \citep{Vikram:2017,Pandey:2019,Pandey:2022,Sanchez:2022}. In this paper, we investigate the information content of the tSZ signal using the CAMELS simulations. As we describe in more detail in \S\ref{sec:camels}, CAMELS is a suite of many hydrodynamical simulations run across a range of different feedback prescriptions and different cosmological parameters. The relatively small volume of the CAMELS simulations ($(25/h)^3\,{\rm Mpc^3}$) means that we are somewhat limited in the halo masses and scales that we can probe. We therefore view our analysis as an exploratory work that investigates the information content of low-mass halos for constraining feedback and how to extract this information; more accurate results over a wider range of halo mass and $k$ may be obtained in the future using the same methods applied to larger volume simulations. By training statistical models on the CAMELS simulations, we explore what information about feedback exists in tSZ observables, and how robust this information is to changes in subgrid prescriptions. We consider three very different prescriptions for feedback based on the SIMBA \citep{Dave:2019:MNRAS:}, Illustris-TNG \citep{Pillepich:2018:MNRAS:} and Astrid \citep{Bird:2022:MNRAS:, Ni:2022:MNRAS:} models across a wide range of possible parameter values (including variations in cosmology). The flexibility of the statistical models we employ means that it is possible to uncover more complex relationships between e.g. $f_b$, $y$, and the baryonic suppression of the power spectrum than considered in \citet{vanDaalen:2020}. Finally, we apply our trained statistical models to recently published measurements of the $y$ signal in low-mass halos. In particular, we consider the inferred values of $Y$ from the lensing-tSZ correlation analysis of Atacama Cosmology Telescope (ACT) and Dark Energy Survey (DES) \citep{Madhavacheril:2020:PhRvD:, Amon:2022:PhRvD:, Secco:2022:PhRvD:b} data presented in \citet{Gatti:2022} and \citet{Pandey:2022}. In addition to providing interesting constraints on the impact of feedback, these results highlight the potential of future similar analyses with e.g. the Dark Energy Spectroscopic Instrument (DESI; \citealt{DESI}) and CMB Stage 4 \citep{CMBS4}. Two recent works --- \citet{Moser:2021} and \citet{Wadekar:2022} --- have used the CAMELS simulations to explore the information content of the tSZ signal for constraining feedback. These works focus on the ability of tSZ observations to constrain the parameters of subgrid feedback models in hydrodynamical simulations. Here, in contrast, we attempt to connect the observable quantities directly to the impact of feedback on the matter power spectrum. Additionally, unlike some of the results presented in \citet{Moser:2021} and \citet{Wadekar:2022}, we consider the full parameter space explored by the CAMELS simulations rather than the small variations around a fiducial point that are relevant to the calculation of the Fisher matrix. Finally, we focus only on the intra-halo gas profile of the halos in the mass range captured by the CAMELS simulations (cf. \citealt{Moser:2021}). We do not expect the inter-halo gas pressure to be captured by the small boxes used here as it may be sensitive to higher halo masses \citep{Pandey:2020}.
Nonlinear evolution of the matter distribution induces non-Gaussianity, and hence there is additional information to be recovered beyond the power spectrum. Recent measurements detect higher-order matter correlations at cosmological scales at $O(10\sigma)$ \citep{Secco:2022:PhRvD:b, Gatti:2022:PhRvD:}, and the significance of these measurements is expected to rapidly increase with upcoming surveys \citep{Pyne:2021:MNRAS:}. Jointly analyzing two-point and three-point correlations of the matter field can help with self-calibration of systematic parameters and improve cosmological constraints. As described in \citet{Foreman:2020:MNRAS:}, the matter bispectrum is expected to be impacted by baryonic physics at $O(10\%)$ over the scales of interest. With these considerations in mind, we also investigate whether the SZ observations carry information about the impact of baryonic feedback on the matter bispectrum.

The plan of the paper is as follows. In \S\ref{sec:camels} we discuss the CAMELS simulation and the data products that we use in this work. In \S\ref{sec:results_sims}, we present the results of our explorations with the CAMELS simulations, focusing on the information content of the tSZ signal for inferring the amount of matter power spectrum suppression. In \S\ref{sec:results_data}, we apply our analysis to the DES and ACT measurements. We summarize our results and conclude in \S\ref{sec:conclusion}.

\section{CAMELS simulations and observables}
\label{sec:camels}

\subsection{Overview of CAMELS simulations}

\begin{table*} \begin{tabular}{@{}|c|c|l|@{}} \toprule Simulation & Type/Code & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Astrophysical parameters varied\\ \& their meanings\end{tabular}} \\ \midrule IllustrisTNG & \begin{tabular}[c]{@{}c@{}}Magneto-hydrodynamic/\\ AREPO\end{tabular} & \begin{tabular}[c]{@{}l@{}}$A_{\rm SN1}$: (Energy of galactic winds)/SFR \\ $A_{\rm SN2}$: Speed of galactic winds\\ $A_{\rm AGN1}$: Energy/(BH accretion rate)\\ $A_{\rm AGN2}$: Jet ejection speed or burstiness\end{tabular} \\ \midrule SIMBA & Hydrodynamic/GIZMO & \begin{tabular}[c]{@{}l@{}}$A_{\rm SN1}$ : Mass loading of galactic winds\\ $A_{\rm SN2}$ : Speed of galactic winds\\ $A_{\rm AGN1}$ : Momentum flux in QSO-mode of feedback\\ $A_{\rm AGN2}$ : Jet speed in kinetic mode of feedback\end{tabular} \\ \midrule Astrid & Hydrodynamic/pSPH & \begin{tabular}[c]{@{}l@{}}$A_{\rm SN1}$: (Energy of galactic winds)/SFR \\ $A_{\rm SN2}$: Speed of galactic winds\\$A_{\rm AGN1}$: Energy/(BH accretion rate)\\ $A_{\rm AGN2}$: Thermal feedback efficiency\end{tabular} \\ \bottomrule \end{tabular} \caption{Summary of the three varieties of simulations used in this analysis. In addition to the four astrophysical parameters listed, all simulations vary two cosmological parameters, $\Omega_{\rm m}$ and $\sigma_8$. \label{tab:feedback}} \end{table*}

We investigate the use of SZ signals for constraining the impact of feedback on the matter distribution using approximately 3000 cosmological simulations run by the CAMELS collaboration \citep{Villaescusa-Navarro:2021:ApJ:}. One half of these are gravity-only N-body simulations and the other half are hydrodynamical simulations with matching initial conditions. The simulations are run using three different hydrodynamical sub-grid codes: Illustris-TNG \citep{Pillepich:2018:MNRAS:}, SIMBA \citep{Dave:2019:MNRAS:} and Astrid \citep{Bird:2022:MNRAS:, Ni:2022:MNRAS:}.
As detailed in \citet{Villaescusa-Navarro:2021:ApJ:}, for each sub-grid implementation six parameters are varied: two cosmological parameters ($\Omega_m$ and $\sigma_8$) and four dealing with baryonic astrophysics. Of these, two deal with supernova feedback ($A_{\rm SN1}$ and $A_{\rm SN2}$) and two deal with AGN feedback ($A_{\rm AGN1}$ and $A_{\rm AGN2}$). The meanings of these parameters for each subgrid model are summarized in Table~\ref{tab:feedback}. Note that the astrophysical parameters might have somewhat different physical meanings for different subgrid prescriptions, and there is usually a complex interplay between them regarding their impact on the properties of galaxies and gas. For example, the parameter $A_{\rm SN1}$ approximately corresponds to the prefactor for the overall energy output in galactic wind feedback per unit star formation in both the TNG and Astrid simulations, whereas in the SIMBA simulations it corresponds to the wind mass outflow rate per unit star formation. Similarly, the $A_{\rm AGN2}$ parameter controls the burstiness and the temperature of the heated gas during AGN bursts in the TNG simulations, the speed of the continuously-driven AGN jets in the SIMBA suite, and the thermal feedback efficiency in the Astrid suite. As we describe in \S~\ref{sec:fbY}, this can result in a counter-intuitive impact on the matter power spectrum in the Astrid simulation relative to TNG and SIMBA.

For each of the sub-grid physics prescriptions, three varieties of simulations are provided. These include 27 sims for which the parameters are fixed and the initial conditions are varied (cosmic variance, or CV, set), 66 sims varying only one parameter at a time (1P set) and 1000 sims varying parameters in a six-dimensional Latin hypercube (LH set). We use the CV sims to estimate the variance expected in the matter power suppression due to stochasticity (see Fig.~\ref{fig:Pk_Bk_CV}). We use the 1P sims to understand how the matter suppression responds to variation in each parameter individually. Finally, we use the full LH set to effectively marginalize over the full parameter space, varying all six parameters. We use the publicly available power spectrum and bispectrum measurements for these simulation boxes.\footnote{\url{https://www.camel-simulations.org/data}} Where unavailable, we calculate the power spectrum and bispectrum using the publicly available code \texttt{Pylians}.\footnote{\url{https://github.com/franciscovillaescusa/Pylians3}}

\subsection{Baryonic effects on the power spectrum and bispectrum}

\begin{figure*} \includegraphics[width=\textwidth]{figs/figs_new/Pk_Bk_CV.pdf} \caption[]{Far left: Baryonic suppression of the matter power spectrum, $\Delta P$, in the CAMELS simulations. The dark-blue, red and orange shaded regions correspond to the $1\sigma$ error estimated with the CV suite of TNG, SIMBA and Astrid respectively.
The light-blue region corresponds to the $1\sigma$ error estimated with the LH suite of TNG, showing a significantly larger spread. Middle and right panels: the impact of baryonic physics on the matter bispectrum suppression for the same set of simulations, for equilateral and squeezed triangle configurations respectively. } \label{fig:Pk_Bk_CV} \end{figure*}

The left panel of Fig.~\ref{fig:Pk_Bk_CV} shows the measurement of the power spectrum suppression caused by baryonic effects in the Illustris, SIMBA, and Astrid simulations for their fiducial feedback settings. To compute the suppression, we use the matter power spectra and bispectra of the hydrodynamical (hydro) simulations and the dark-matter-only (DMO) simulations generated at varying initial conditions (ICs). The power spectrum and bispectrum for all the simulations are provided by the CAMELS collaboration and are publicly available. For each of the 27 unique IC runs, we calculate the ratios $\Delta P/P_{\rm DMO} = (P_{\rm hydro} - P_{\rm DMO})/P_{\rm DMO}$ and $\Delta B/B_{\rm DMO} = (B_{\rm hydro} - B_{\rm DMO})/B_{\rm DMO}$. As the hydrodynamical and the N-body simulations are run with the same initial conditions, the ratios $\Delta P/P_{\rm DMO}$ and $\Delta B/B_{\rm DMO}$ are independent of sample variance.

It is clear that the amplitude of suppression of the small-scale matter power spectrum can be significant: suppression on the order of tens of percent is reached for all three simulations. It is also clear that the impact is significantly different between the three simulations. Even for the simulations in closest agreement (Illustris-TNG and Astrid), the measurements of $\Delta P/P_{\rm DMO}$ disagree by a factor of five at $k = 5\,h/{\rm Mpc}$. The width of the curves in Fig.~\ref{fig:Pk_Bk_CV} represents the standard deviation measured across the cosmic variance simulations, which all have the same parameter values but different initial conditions. For the bispectrum, we show both the equilateral and squeezed triangle configurations, with the cosine of the angle between the two long sides fixed to $\mu = 0.9$. Interestingly, the spread in $\Delta P/P_{\rm DMO}$ and $\Delta B/B_{\rm DMO}$ increases with increasing $k$ over the range $0.1 \,h/{\rm Mpc} \lesssim k \lesssim 10\,h/{\rm Mpc}$. This increase is driven by stochasticity arising from baryonic feedback. The middle and right panels show the impact of feedback on the bispectrum for the equilateral and squeezed triangle configurations, respectively. Throughout this work, we will focus on the regime $0.3\,h/{\rm Mpc}< k < 10\,h/{\rm Mpc}$. Larger-scale modes are not present in the $(25\,{\rm Mpc}/h)^3$ CAMELS simulations, and in any case, the impact of feedback on large scales is typically small. Much smaller scales, on the other hand, are difficult to model even in the absence of baryonic feedback \citep{Schneider:2016:JCAP:}.
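As an illustration of how these sample-variance-free ratios are formed, the following minimal Python sketch computes the suppression and the width of the CV band from matched hydro/DMO measurements; the function name and input arrays are our own illustrative assumptions, not CAMELS code:
\begin{verbatim}
import numpy as np

def suppression_stats(P_hydro, P_dmo):
    """Sample-variance-free suppression from matched hydro/DMO pairs.

    P_hydro, P_dmo : (n_ic, n_k) power spectra from runs sharing
    initial conditions (the same construction applies to B(k)).
    Returns the mean Delta P / P_DMO and the CV band width per k bin.
    """
    ratio = (P_hydro - P_dmo) / P_dmo   # Delta P / P_DMO, per IC run
    return ratio.mean(axis=0), ratio.std(axis=0)
\end{verbatim}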
In Appendix~\ref{app:volume_res_comp} we show how the matter power suppression changes when changing the resolution or the volume of the simulation boxes. When comparing with the original IllustrisTNG boxes, we find that while the box sizes do not change the measured power suppression significantly, the resolution of the boxes has a non-negligible impact. This is expected, since the physical effect of the feedback mechanisms depends on the resolution of the simulations. Note that the error bars presented in Fig.~\ref{fig:Pk_Bk_CV} will also depend on the resolution and size of the simulation box, as well as on the feedback parameter values assumed. We defer a detailed study of the covariance dependence on the simulation properties to a future study.

\subsection{Measuring gas profiles around halos}

We use 3D grids of various fields (e.g. gas density and pressure) made available by the CAMELS team to extract the profiles of these fields around dark matter halos. The grids are generated with a resolution of 0.05 Mpc/$h$. Following \citet{vanDaalen:2020}, we define $f_b$ as $(M_{\rm gas} + M_{\rm stars})/M_{\rm total}$, where $M_{\rm gas}$, $M_{\rm stars}$ and $M_{\rm total}$ are the mass in gas, stars and all components, respectively. The gas mass is computed by integrating the gas number density profile around each halo. We typically measure $f_b$ within the spherical overdensity radius $r_{\rm 500c}$.\footnote{We define the spherical overdensity radius ($r_{\Delta c}$, where $\Delta = 200, 500$) and overdensity mass ($M_{\Delta c}$) such that the mean density within $r_{\Delta}$ is $\Delta$ times the critical density $\rho_{\rm crit}$: $M_{\Delta} = \Delta \frac{4}{3} \pi r^3_{\Delta} \rho_{\rm crit}$.}

The SZ effect is sensitive to the electron pressure. We compute the electron pressure profiles, $P_e$, using $P_e = 2(X_{\rm H} + 1)/(5X_{\rm H} + 3)\,P_{\rm th}$, where $P_{\rm th}$ is the total thermal pressure and $X_{\rm H}= 0.76$ is the primordial hydrogen fraction. Given the electron pressure profile, we measure the integrated SZ signal within $r_{\rm 500c}$ as:
\begin{equation}\label{eq:Y500_from_Pe} Y_{\rm 500c} = \frac{\sigma_{\rm T}}{m_e c^2}\int_0^{r_{\rm 500c}} 4\pi r^2 \, P_e(r) \, dr, \end{equation}
where $\sigma_{\rm T}$ is the Thomson scattering cross-section, $m_{e}$ is the electron mass and $c$ is the speed of light. We normalize the SZ observables by the self-similar expectation \citep{Battaglia:2012:ApJ:a},\footnote{Note that we use the spherical overdensity mass corresponding to $\Delta = 500$ and hence adjust the coefficients accordingly, while keeping the other approximations used in their derivation the same.}
\begin{equation} \label{eq:y_ss} Y^{\rm SS} = 131.7 h^{-1}_{70} \,\bigg( \frac{M_{500c}}{10^{15} h^{-1}_{70} M_{\odot}} \bigg)^{5/3} \frac{\Omega_b}{0.043} \frac{0.25}{\Omega_m} \, {\rm kpc^2}, \end{equation}
where $M_{500c}$ is the mass inside $r_{500c}$ and $h_{70} = h/0.7$. This expression, which scales as $M^{5/3}$, assumes hydrostatic equilibrium and that the baryon fraction is equal to the cosmic baryon fraction. Hence deviations from this self-similar scaling provide a probe of the effects of baryonic feedback. Our final SZ observable is defined as $Y_{500c}/Y^{\rm SS}$.
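A minimal numerical sketch of these definitions is given below (in Python; the function names, CGS unit choices and tabulated-profile inputs are our own illustrative assumptions):
\begin{verbatim}
import numpy as np

SIGMA_T = 6.652e-25   # Thomson cross-section [cm^2]
ME_C2 = 8.187e-7      # electron rest energy m_e c^2 [erg]
MPC_CM, KPC_CM = 3.086e24, 3.086e21

def electron_pressure(P_th, X_H=0.76):
    """P_e = 2 (X_H + 1) / (5 X_H + 3) * P_th."""
    return 2.0 * (X_H + 1.0) / (5.0 * X_H + 3.0) * P_th

def y500(r_mpc, P_e_cgs, r500_mpc):
    """Spherical integral of P_e out to r_500c; returns Y_500c in kpc^2."""
    m = r_mpc <= r500_mpc
    r_cm = r_mpc[m] * MPC_CM
    y_cm2 = SIGMA_T / ME_C2 * np.trapz(4 * np.pi * r_cm**2 * P_e_cgs[m], r_cm)
    return y_cm2 / KPC_CM**2

def y_self_similar(M500c_msun, omega_b, omega_m, h=0.7):
    """Self-similar expectation (Battaglia et al. 2012, Delta=500), in kpc^2."""
    h70 = h / 0.7
    return (131.7 / h70 * (M500c_msun * h70 / 1e15) ** (5.0 / 3.0)
            * (omega_b / 0.043) * (0.25 / omega_m))
\end{verbatim}
The amplitude of the pressure profile, on the other hand, approximately scales as $M^{2/3}$.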
Therefore, when considering the pressure profile as the observable, we factor out an $M^{2/3}$ scaling.

\section{Results I: Simulations}
\label{sec:results_sims}

\subsection{Inferring feedback parameters from $f_b$ and $y$}
\label{sec:fisher}

We first consider how the halo $Y$ signal can be used to constrain the parameters describing the subgrid physics models. This question has been previously investigated using the CAMELS simulations by \citet{Moser:2021} and \citet{Wadekar:2022}. The rest of our analysis will focus on constraining changes to the power spectrum and bispectrum, and our intention here is mainly to provide a basis of comparison for those results. Similar to \citet{Wadekar:2022}, we treat the mean $\log Y$ value in two mass bins ($10^{12} < M\,(M_{\odot}/h) < 5\times 10^{12}$ and $5 \times 10^{12} < M\,(M_{\odot}/h) < 10^{14}$) as our observable, which we refer to as $\vec{d}$. Here and throughout our investigations with CAMELS we ignore the contributions of measurement uncertainty, since our intention is mainly to assess the information content of the SZ signals. We therefore use the CV simulations to determine the covariance, $\mathbf{C}$, of $\vec{d}$. Note that the level of cosmic variance will depend on the volume probed, and can be quite large for the CAMELS simulations. The Fisher matrix, $F_{ij}$, is then given by
\begin{equation} F_{ij} = \frac{\partial \vec{d}^T}{\partial \theta_i} \mathbf{C}^{-1} \frac{\partial \vec{d}}{\partial \theta_j}, \end{equation}
where $\theta_i$ refers to the $i$th parameter value. Calculation of the derivatives $\partial \vec{d}/\partial \theta_i$ is complicated by the large amount of stochasticity between the CAMELS simulations. To perform the derivative calculation, we use a radial basis function interpolation method based on \citet{Moser:2022,Cromer:2022}. We show an example of the derivative calculation in Appendix~\ref{app:emulation}. We additionally assume priors of $\sigma(\ln p) = 1$ on each feedback parameter $p$ and $\sigma(p) = 1$ on each cosmological parameter.

The parameter constraints corresponding to our calculated Fisher matrix are shown in Fig.~\ref{fig:fisher}. We show results only for $\Omega_m$, $A_{\rm SN1}$ and $A_{\rm AGN2}$, but additionally marginalize over $\sigma_8$, $A_{\rm SN2}$ and $A_{\rm AGN1}$. The degeneracy directions seen in our results are consistent with those in \citet{Wadekar:2022}. We find a weaker constraint on $A_{\rm AGN2}$, likely owing to the large sample variance contribution to our calculation. It is clear from Fig.~\ref{fig:fisher} that the marginalized constraints on the feedback parameters are weak. If information about $\Omega_m$ is not used, we effectively have no information about the feedback parameters. Even when $\Omega_m$ is fixed, the constraints on the feedback parameters are not very precise. This finding is consistent with \citet{Wadekar:2022}, for which measurement uncertainty was the main source of variance rather than sample variance. Part of the reason for the poor constraints is the degeneracy between the AGN and SN parameters.
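As a rough sketch of this calculation (our own illustrative implementation; the emulator-based derivatives are assumed to be precomputed):
\begin{verbatim}
import numpy as np

def fisher_matrix(dd_dtheta, cov, prior_sigma=None):
    """F_ij = (d d-vec/d theta_i)^T C^{-1} (d d-vec/d theta_j).

    dd_dtheta  : (n_params, n_data) derivatives of the mean log Y
                 in the two mass bins (from the RBF emulator)
    cov        : (n_data, n_data) covariance from the CV simulations
    prior_sigma: optional per-parameter Gaussian prior widths
    """
    cinv = np.linalg.inv(cov)
    F = dd_dtheta @ cinv @ dd_dtheta.T
    if prior_sigma is not None:
        F += np.diag(1.0 / np.asarray(prior_sigma) ** 2)
    return F

# Marginalized 1-sigma errors follow from the inverse Fisher matrix:
# sigma_marg = np.sqrt(np.diag(np.linalg.inv(F)))
\end{verbatim}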
Degeneracies between the impacts of feedback parameters and cosmology on $Y$, as well as the potentially complex relation between the feedback parameters and the changing matter distribution, motivate us to consider instead direct inference of changes to the statistics of the matter distribution from the $Y$ observables. However, note that the conclusions here will depend upon the simulation volume, which would change the covariance and capture effects like super-sample covariance.

\begin{figure} \centering \includegraphics[scale=0.6]{figs/Y500_log_dec7.pdf} \caption{Forecast constraints on the feedback parameters when $\log Y$ in two halo mass bins is treated as the observable. We assume that the only contribution to the variance of this observable is sample variance coming from the finite volume of the CAMELS simulations. } \label{fig:fisher} \end{figure}

\subsection{$f_b$ and $y$ as probes of baryonic effects on the matter power spectrum}
\label{sec:fbY}

As discussed above, \citet{vanDaalen:2020} observed a tight correlation between suppression of the matter power spectrum and the baryon fraction, $f_b$, in halos with $6\times 10^{13} M_{\odot} \lesssim M_{500c} \lesssim 10^{14}\,M_{\odot}$. That relation was found to hold regardless of the details of the feedback implementation, suggesting that by measuring $f_b$ in high-mass halos, one could robustly infer the impact of baryonic feedback on the power spectrum. We begin by investigating the connection between matter power spectrum suppression and $f_b$ in low-mass, $M \sim 10^{13}\,M_{\odot}$, halos. We also consider a wider range of feedback models than \citet{vanDaalen:2020}, including the SIMBA and Astrid models.

\begin{figure} \includegraphics[width=0.95\columnwidth]{figs/figs_new/vanDaleen+19_with_camels_SIMBA_all_params_Y500c.pdf} \caption[]{The relation between the matter power suppression at $k=2\,h/{\rm Mpc}$ and the baryon fraction of halos in the mass range $10^{13} < M\,(M_{\odot}/h) < 10^{14}$ in the SIMBA simulation suite. In each of the six panels, the points are colored according to the parameter value given in the associated colorbar. } \label{fig:Pk_SIMBA_allparams} \end{figure}

Fig.~\ref{fig:Pk_SIMBA_allparams} shows the impact of cosmological and feedback parameters on the relationship between the power spectrum suppression ($\Delta P/P_{\rm DMO}$) and the ratio $Y_{\rm 500c}/Y^{\rm SS}$ for the SIMBA simulations. Each point corresponds to a single simulation, taking the average over all halos with $10^{13} < M (M_{\odot}/h) < 10^{14}$ when computing $Y_{\rm 500c}/Y^{\rm SS}$. We observe that the largest $\Delta P/P_{\rm DMO}$ occurs when $A_{\rm AGN2}$ is large. This is caused by powerful AGN feedback ejecting gas from halos, leading to a significant reduction in the matter power spectrum, as described by e.g. \citet{vanDaalen:2020}. For SIMBA, the parameter $A_{\rm AGN2}$ controls the velocity of the ejected gas, with higher velocities (i.e. higher $A_{\rm AGN2}$) leading to more ejected gas. On the other hand, when $A_{\rm SN2}$ is large, $\Delta P/P_{\rm DMO}$ is small. This is because efficient supernova feedback prevents the formation of the massive galaxies which host AGN, and hence reduces the strength of the AGN feedback.
The parameter $A_{\rm AGN1}$, on the other hand, controls the radiative quasar mode of feedback, which has slower gas outflows and thus a smaller impact on the matter distribution. It is also clear from Fig.~\ref{fig:Pk_SIMBA_allparams} that increasing $\Omega_{\rm m}$ reduces $|\Delta P/P_{\rm DMO}|$, relatively independently of the other parameters. By increasing $\Omega_m$, the ratio $\Omega_b/\Omega_m$ decreases, meaning that halos of a given mass have fewer baryons, and the impact of feedback is therefore reduced. We propose a very simple toy model for this effect in \S\ref{sec:simple_model}. The impact of $\sigma_8$ in Fig.~\ref{fig:Pk_SIMBA_allparams} is less clear. For halos in the mass range shown, we find that increasing $\sigma_8$ leads to a roughly monotonic decrease in $f_b$, presumably because higher $\sigma_8$ means that there are more halos amongst which the same amount of baryons must be distributed. This effect would not occur for cluster-scale halos, because these are rare and large enough to gravitationally dominate their local environments, giving them $f_b \sim \Omega_b/\Omega_m$ regardless of $\sigma_8$. In any case, no clear trend with $\sigma_8$ is seen in Fig.~\ref{fig:Pk_SIMBA_allparams} because $\sigma_8$ does not correlate strongly with $\Delta P/P_{\rm DMO}$.

Fig.~\ref{fig:Y_fb_DeltaP} shows the relationship between $\Delta P/P_{\rm DMO}$ and $f_b$ or $Y_{500}$ in different halo mass bins and for different feedback models, colored by the value of $A_{\rm AGN2}$. As in Fig.~\ref{fig:Pk_SIMBA_allparams}, each point represents an average over all halos in the indicated mass range for a particular CAMELS simulation (i.e. at fixed values of cosmological and feedback parameters). Note that the meaning of $A_{\rm AGN2}$ is not exactly the same across the different feedback models, as noted in \S\ref{sec:camels}. For TNG and SIMBA, we expect increasing $A_{\rm AGN2}$ to lead to stronger AGN feedback driving more gas out of the halos, leading to more power suppression. For Astrid, however, increasing the $A_{\rm AGN2}$ parameter more strongly regulates and suppresses black hole growth in the box. This drastically reduces the number of high-mass black holes, effectively reducing the amount of jet feedback that can push gas out of the halos and leading to less matter power suppression. Therefore, in Fig.~\ref{fig:Y_fb_DeltaP}, we redefine the $A_{\rm AGN2}$ parameter for Astrid to be $1/A_{\rm AGN2}$ when plotting. For the highest mass bin ($M > 5 \times 10^{13}\,M_{\odot}/h$, rightmost column), our results are in agreement with \citet{vanDaalen:2020}: we find that there is a fairly tight relation between $f_b/(\Omega_b/\Omega_m)$ and the matter power suppression. This relation is roughly consistent across different feedback subgrid models, although the different models appear to populate different parts of this relation. Moreover, varying $A_{\rm AGN2}$ appears to move points along this relation, rather than broadening the relation. This is in contrast to $\Omega_m$, which, as shown in Fig.~\ref{fig:Pk_SIMBA_allparams}, tends to move simulations in the direction orthogonal to the narrow $f_b$-$\Delta P/P_{\rm DMO}$ locus. For this reason, and given current constraints on $\Omega_m$, we restrict this plot to simulations with $0.2 < \Omega_m < 0.4$. The dashed curves shown in Fig.~\ref{fig:Y_fb_DeltaP} correspond to the toy model discussed in \S\ref{sec:simple_model}.
At low halo mass, the relation between $f_b/(\Omega_b/\Omega_m)$ and $\Delta P/P_{\rm DMO}$ appears similar to the high mass bin, although it is somewhat flatter at high $f_b$ and somewhat steeper at low $f_b$. Again, the results are fairly consistent across the different feedback prescriptions, although points with high $f_b/(\Omega_b/\Omega_m)$ are largely absent for SIMBA. This is largely because the feedback mechanisms are highly efficient in SIMBA, driving gas out of its parent halos. The relationships between $Y$ and $\Delta P/P$ appear quite similar to those between $\Delta P/P$ and $f_b/(\Omega_b/\Omega_m)$. This is not too surprising, because $Y$ is sensitive to the gas density, which dominates $f_b/(\Omega_b/\Omega_m)$. However, $Y$ is also sensitive to the gas temperature. Our results suggest that variations in the gas temperature are not significantly impacting the $Y$-$\Delta P/P$ relation. These results suggest the possibility of using the tSZ signal rather than $f_b/(\Omega_b/\Omega_m)$ to infer the impact of feedback on the matter distribution. This will be the focus of the remainder of the paper.

\begin{figure*} \includegraphics[width=0.95\textwidth]{figs/figs_new/vanDaleen+19_with_camels_A_AGN2.pdf} \caption[]{Impact of baryonic physics on the matter power spectrum at $k=2\,h/{\rm Mpc}$ for the Illustris, SIMBA and Astrid simulations (top, middle, and bottom rows). Each point corresponds to an average across halos in the indicated mass ranges in a different CAMELS simulation. We restrict the figure to simulations that have $0.2 < \Omega_{\rm m} < 0.4$. The dashed curves illustrate the behavior of the model described in \S\ref{sec:simple_model} in the regime where the radius to which gas is ejected by AGN is larger than the halo radius and larger than $2\pi/k$. Note that for the Astrid simulations (marked by an asterisk), we take the inverse of the $A_{\rm AGN2}$ parameter when plotting, as described in the main text. } \label{fig:Y_fb_DeltaP} \end{figure*}

Fig.~\ref{fig:scatter_plot_all_ks} shows the same quantities as Fig.~\ref{fig:Y_fb_DeltaP}, but now for a fixed halo mass range ($10^{13} < M/(M_{\odot}/h) < 10^{14}$), a fixed subgrid prescription (Astrid), and varying values of $k$. We find roughly similar results when using the other subgrid physics prescriptions. At low $k$, we find that there is a regime at high $f_b/(\Omega_b/\Omega_m)$ for which $\Delta P /P_{\rm DMO}$ changes negligibly. It is only when $f_b/(\Omega_b/\Omega_m)$ becomes very low that $\Delta P/P_{\rm DMO}$ begins to change. On the other hand, at high $k$, there is a near-linear relation between $f_b/(\Omega_b/\Omega_m)$ and $\Delta P/P$.

\begin{figure*} \includegraphics[width=0.95\textwidth]{figs/figs_new/vanDaleen+19_with_camels_Astrid_all_ks.pdf} \caption[]{Similar to Fig.~\ref{fig:Y_fb_DeltaP}, but for different values of $k$. For simplicity, we show only the Astrid simulations for halos in the mass range $10^{13} < M (M_{\odot}/h) < 10^{14}$.
The dashed curves illustrate the behavior of the model described in \S\ref{sec:simple_model} in the regime where the radius to which gas is ejected by AGN is larger than the halo radius and larger than $2\pi/k$. As expected, this model performs best in the limit of high $k$ and large halo mass. } \label{fig:scatter_plot_all_ks} \end{figure*}

\subsection{A toy model for power suppression}
\label{sec:simple_model}

We now describe a simple model for the effects of feedback on the relation between $f_b$ or $Y$ and $\Delta P/P$ that explains some of the features seen in Figs.~\ref{fig:Pk_SIMBA_allparams}, \ref{fig:Y_fb_DeltaP} and \ref{fig:scatter_plot_all_ks}. Following expectations from the literature, we assume in this model that it is the ejection of gas from halos by AGN feedback that is responsible for changes to the matter power spectrum. SN feedback, on the other hand, prevents gas from accreting onto the SMBH, and therefore reduces the impact of AGN feedback. This scenario is consistent with the fact that at high SN feedback, we see that $\Delta P/P_{\rm DMO}$ goes to zero (second panel from the bottom in Fig.~\ref{fig:Pk_SIMBA_allparams}).

We identify three relevant scales: (1) the halo radius, $R_h$; (2) the distance to which gas is ejected by the AGN, $R_{\rm ej}$; and (3) the scale at which the power spectrum is measured, $2\pi/k$. If $R_{\rm ej} < 2\pi/k$, then there will be no impact on $\Delta P$ at $k$: this corresponds to a rearrangement of the matter distribution on scales below where we measure the power spectrum. If, on the other hand, $R_{\rm ej} < R_h$, then there will be no impact on $f_b$ or $Y$, since the gas is not ejected out of the halo. Thus, we can consider four different regimes:
\begin{itemize}
\item Regime 1: $R_{\rm ej} < R_h$ and $R_{\rm ej} < 2\pi /k$. In this regime, changes to the feedback parameters have no impact on $f_b$ or $\Delta P$.
\item Regime 2: $R_{\rm ej} > R_h$ and $R_{\rm ej} < 2\pi/k$. In this regime, changes to the feedback parameters result in movement along the $f_b$ or $Y$ axis without changing $\Delta P$. Gas is being removed from the halo, but the resultant changes to the matter distribution are below the scale at which we measure the power spectrum. Note that Regime 2 cannot occur when $R_h > 2\pi/k$ (i.e. high-mass halos at large $k$).
\item Regime 3: $R_{\rm ej} > R_h$ and $R_{\rm ej} > 2\pi/k$. In this regime, changing the feedback amplitude directly changes the amount of gas ejected from halos as well as $\Delta P/P_{\rm DMO}$.
\item Regime 4: $R_{\rm ej} < R_h$ and $R_{\rm ej} > 2 \pi/k$. In this regime, gas is not ejected out of the halo, so $f_b$ and $Y$ should not change. In principle, the redistribution of gas within the halo could lead to changes in $\Delta P/P_{\rm DMO}$. However, as we discuss below, this does not seem to happen in practice.
\end{itemize}
Let us now consider the behavior of $\Delta P/P_{\rm DMO}$ and $f_b$ or $Y$ as the feedback parameters are varied in Regime 3. A halo of mass $M$ is associated with an overdensity $\delta_m$ in the absence of feedback, which is changed to $\delta'_m$ due to the ejection of baryons as a result of feedback. In Regime 3, some amount of gas, $M_{\rm ej}$, is completely removed from the halo. This changes the size of the overdensity associated with the halo to
\begin{eqnarray} \frac{\delta_m'}{\delta_m} &=& 1 - \frac{M_{\rm ej}}{M}.
\end{eqnarray}
The change to the power spectrum is then
\begin{eqnarray} \label{eq:deltap_over_p} \frac{\Delta P}{P_{\rm DMO}} &\sim& \left( \frac{\delta_m'}{\delta_m} \right)^2 -1 \approx -2\frac{M_{\rm ej}}{M}, \end{eqnarray}
where we have assumed that $M_{\rm ej}$ is small compared to $M$. We have ignored the $k$ dependence here, but in Regime 3, the ejection radius is larger than the scale of interest, so the calculated $\Delta P/P_{\rm DMO}$ should apply across a range of $k$ in this regime. The ejected gas mass can be related to the gas mass in the absence of feedback. We write the gas mass in the absence of feedback as $f_c (\Omega_b/\Omega_m) M$, where $f_c$ encapsulates non-feedback processes that result in the halo having less than the cosmic baryon fraction. We then have
\begin{eqnarray} M_{\rm ej} &=& f_c(\Omega_b/\Omega_m)M - f_{b} M - M_0, \end{eqnarray}
where $M_0$ is the mass that has been removed from the gaseous halo but that does not change the power spectrum, e.g. via the conversion of gas to stars. Substituting into Eq.~\ref{eq:deltap_over_p}, we have
\begin{eqnarray}\label{eq:DelP_P_fb} \frac{\Delta P}{P_{\rm DMO}} = -2 \frac{f_c\Omega_b}{\Omega_m} \left( 1 -\frac{f_{b}\Omega_m}{f_c \Omega_b} - \frac{\Omega_m M_0}{f_c \Omega_b M} \right). \end{eqnarray}
In other words, for Regime 3, we find a linear relation between $\Delta P/P_{\rm DMO}$ and $f_b \Omega_m/\Omega_b$. For high-mass halos, we should have $f_c \approx 1$ and $M_0/M \approx 0$. In this limit, the relationship between $f_b$ and $\Delta P/P$ becomes
\begin{eqnarray} \frac{\Delta P}{P_{\rm DMO}} = -2 \frac{\Omega_b}{\Omega_m} \left( 1 -\frac{f_{b}\Omega_m}{\Omega_b} \right), \end{eqnarray}
which is linear between $(\Delta P/P_{\rm DMO},f_b \Omega_m/\Omega_b) = (-2\Omega_b/\Omega_m,0)$ and $(\Delta P/P_{\rm DMO},f_b \Omega_m/\Omega_b) = (0,1)$. We show this relation as the dashed line in the $f_b$ columns of Fig.~\ref{fig:Y_fb_DeltaP}.

We can repeat the above argument for $Y$. Unlike the case with $f_b$, processes other than the removal of gas may reduce $Y$; these include, e.g., changes to the gas temperature in the absence of AGN feedback, or nonthermal pressure support. We account for these with a term $Y_0$, defined such that when $M_{\rm ej} = M_0 = 0$, we have $Y + Y_0 = f_c (\Omega_b/\Omega_m) MT /\alpha$, where we have assumed constant gas temperature, $T$, and $\alpha$ is a dimensionful constant of proportionality. We then have
\begin{eqnarray} \frac{\alpha(Y+Y_0)}{T} = f_c (\Omega_b / \Omega_m)M - M_{\rm ej} - M_0. \end{eqnarray}
Substituting the above equation into Eq.~\ref{eq:deltap_over_p}, we have
\begin{eqnarray} \frac{\Delta P}{P_{\rm DMO}} &=& -2\frac{f_c\Omega_b}{\Omega_m} \left(1 - \frac{\alpha (Y+Y_0) \Omega_m}{f_c TM \Omega_b} - \frac{\Omega_m M_0}{f_c \Omega_b M} \right) . \end{eqnarray}
Following Eq.~\ref{eq:y_ss}, we define the self-similar value of $Y$, $Y^{\rm SS}$, via
\begin{eqnarray} \alpha Y^{\rm SS}/T = (\Omega_b/\Omega_m)M, \end{eqnarray}
leading to
\begin{eqnarray} \frac{\Delta P}{P_{\rm DMO}} &=& -2\frac{f_c\Omega_b}{\Omega_m} \left(1 - \frac{(Y+Y_0)}{f_c Y^{\rm SS}} - \frac{\Omega_m M_0}{f_c \Omega_b M}\right). \end{eqnarray}
Again taking the limit that $f_c \approx 1$ and $M_0/M \approx 0$, we have
\begin{eqnarray} \frac{\Delta P}{P_{\rm DMO}} &=& -2\frac{\Omega_b}{\Omega_m} \left(1 - \frac{(Y+Y_0)}{ Y^{\rm SS}} \right). \end{eqnarray}
Thus, we see that in Regime 3, the relation between $Y/Y^{SS}$ and $\Delta P/P_{\rm DMO}$ is linear.
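As a quick numerical check of the Regime-3 scaling, the following sketch evaluates Eq.~\ref{eq:DelP_P_fb} (the function name and default arguments are our own illustrative choices):
\begin{verbatim}
def delta_p_over_p(f_b, omega_b, omega_m, f_c=1.0, m0_over_m=0.0):
    """Regime-3 toy model: Delta P / P_DMO, linear in f_b*Omega_m/Omega_b."""
    prefac = -2.0 * f_c * omega_b / omega_m
    return prefac * (1.0 - f_b * omega_m / (f_c * omega_b)
                     - m0_over_m * omega_m / (f_c * omega_b))

# Example: with omega_b = 0.049, omega_m = 0.3, and a halo retaining half
# of its cosmic share of baryons (f_b * omega_m / omega_b = 0.5), this
# gives Delta P / P_DMO = -2 * (0.049/0.3) * 0.5 ~= -0.16.
\end{verbatim}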
The $Y/Y^{SS}$ columns of Fig.~\ref{fig:Y_fb_DeltaP} show this relationship, assuming $Y_0 = 0$. In summary, we interpret the results of Figs.~\ref{fig:Y_fb_DeltaP} and \ref{fig:scatter_plot_all_ks} in the following way. Starting at low feedback amplitude, we are initially in Regime 1. In this regime, the simulations cluster around $f_b \Omega_m/(f_c \Omega_b) \approx 1$ (or $Y \approx Y_0$) and $\Delta P/P \approx 0$, since changing the feedback parameters in this regime does not impact $f_b$ or $\Delta P/P$. For high-mass halos, we have $f_c \approx 1$ and $Y_0 \approx 0$ (although SIMBA appears to have $Y_0 >0$, even at high mass); for low-mass halos, $f_c < 1$ and $Y_0 >0$. As we increase the AGN feedback amplitude, the behavior is different depending on halo mass and $k$:
\begin{itemize}
\item For low halo masses or low $k$, increasing the AGN feedback amplitude leads the simulations into Regime 2. Increasing the feedback amplitude in this regime moves points to lower $Y/Y^{\rm SS}$ (or $f_b \Omega_m/\Omega_b$) without significantly impacting $\Delta P/P_{\rm DMO}$. Eventually, when the feedback amplitude is sufficiently strong, these halos enter Regime 3, and we see a roughly linear decline in $\Delta P/P_{\rm DMO}$ with decreasing $Y/Y^{\rm SS}$ (or $f_b\Omega_m/\Omega_b$), as discussed above.
\item For high-mass halos and high $k$, we never enter Regime 2, since it is not possible to have $R_{\rm ej} > R_h$ and $R_{\rm ej} < 2\pi/k$ when $R_h$ is very large. In this case, we eventually enter Regime 3, leading to a linear trend of decreasing $\Delta P/P_{\rm DMO}$ with decreasing $Y/Y^{\rm SS}$ or $f_b \Omega_m/\Omega_b$, as predicted by the above discussion. This behavior is especially clear in Fig.~\ref{fig:scatter_plot_all_ks}: at high $k$, the trend closely follows the predicted linear relation. At low $k$, on the other hand, we see a more prominent Regime 2 region. The transition between these two regimes is expected to occur when $k \sim 2\pi/R_h$, which is roughly $5\,h\,{\rm Mpc}^{-1}$ for the halo mass regime shown in the figure. This expectation is roughly confirmed in the figure.
\end{itemize}
Interestingly, we never see Regime 4 behavior: when the halo mass is large and $k$ is large, we do not see rapid changes in $\Delta P/P$ with little change to $f_b$ and $Y$. This could be because this regime corresponds to movement of the gas entirely within the halo. If the gas has time to re-equilibrate, it makes sense that we would see little change to $\Delta P/P$.

\subsection{Predicting the power spectrum suppression from the halo observables}

While the toy model described above roughly captures the trends between $Y$ (or $f_b$) and $\Delta P/P_{\rm DMO}$, it of course does not capture all of the physics associated with feedback. It is also clear that there is significant scatter in the relationships between the observable quantities and $\Delta P$. It is possible that this scatter is reduced in some higher-dimensional space that includes more observables. To address both of these issues, we now train statistical models to learn the relationships between observable quantities and $\Delta P/P_{\rm DMO}$. We will focus on results obtained with random forest regression \citep{Breiman2001}. We have also tried using neural networks to infer these relationships, but have not found any significant improvement with respect to the random forest results, presumably because the space is low-dimensional (i.e. we consider at most about five observable quantities at a time).
We leave a detailed comparison with other decision-tree-based approaches, such as gradient boosted trees \citep{Friedman_boosted_tree:01}, to a future study.

\begin{figure*} \includegraphics[width=0.95\textwidth]{figs/figs_new/train_test_variantions_updated.pdf} \caption[]{We show the results of the random forest (RF) regressor at predicting $\Delta P/P$ for variations of the test and train samples using the different subgrid physics models (TNG, SIMBA, Astrid) on the LH suite of simulations. The RF model is trained on $f_b$ for halos with $5\times10^{12} < M (M_{\odot}/h) < 10^{14}$ as well as $\Omega_m$. The error bars correspond to the 16th and 84th percentiles of the recovered probability density function (PDF) of the power suppression on the test set of LH simulations, whereas the marker corresponds to the peak of the PDF. The gray band corresponds to the expected $1\sigma$ error on the power suppression from the CV simulations. We find that we can predict the power suppression robustly when the training and test simulation codes are the same. At high $k$, training on two simulations and predicting on a third works well. At low $k$, however, the RF model becomes biased when testing on a third simulation. For the rest of the paper, we present results trained on all three simulations. } \label{fig:Pk_Y_CV} \end{figure*}

We train a random forest model to go from observable quantities (e.g. $f_b/(\Omega_b/\Omega_m)$ and $Y_{500}/Y^{SS}$) to a prediction for $\Delta P/P_{\rm DMO}$ at multiple $k$ values. The random forest model uses 100 trees with a maximum depth of 10.\footnote{We use a publicly available code: \url{https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html}. We also verified that our conclusions are robust to changing the settings of the random forest.} In this section we analyze the halos in the mass bin $5\times 10^{12} < M_{\rm halo}\,(M_{\odot}/h) < 10^{14}$, but we also show the results for halos with lower masses in Appendix~\ref{app:low_mass}. We also consider supplying the value of $\Omega_{\rm m}$ as input to the random forest, since it can be constrained precisely through other observations (e.g. CMB observations), and as we showed in \S\ref{sec:fbY}, the cosmological parameters can impact the observables.\footnote{One might worry that using cosmological information to constrain $\Delta P/P_{\rm DMO}$ defeats the whole purpose of constraining $\Delta P/P_{\rm DMO}$ in order to improve cosmological constraints. However, observations, such as those of CMB primary anisotropies, already provide precise constraints on the matter density without using information in the small-scale matter distribution.}

Ultimately, we are interested in making predictions for $\Delta P/P_{\rm DMO}$ using observable quantities. However, the sample variance in the CAMELS simulations limits the precision with which we can measure $\Delta P/P_{\rm DMO}$. It is not possible to predict $\Delta P/P_{\rm DMO}$ to better than this precision. We will therefore normalize the uncertainties in the RF predictions by the cosmic variance error. In order to obtain the uncertainty in the prediction, we randomly split the data into a 70\% training and 30\% test set.
After training the RF regressor using the training set and a given observable, we compute the 16th and 84th percentiles of the distribution of prediction errors evaluated on the test set. This constitutes our assessment of the prediction uncertainty.

\begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figs/figs_new/plot1_y500_fb_comp_FINAL_v2.pdf} \caption{ Similar to Fig.~\ref{fig:Pk_Y_CV}, but when training the RF model on different observables from all three simulations (TNG, SIMBA and Astrid) to predict $\Delta P/P$ for a random subset of the three simulations not used in training. We find that jointly training on the deviation of the integrated SZ profile from the self-similar expectation, $Y_{500c}/Y^{\rm SS}$, and $\Omega_m$ results in inference of the power suppression that is comparable to cosmic variance errors, with small improvements when additionally adding the baryon fraction ($f_b$) of halos in the above mass range. } \label{fig:predict_y500_fb} \end{figure*}

\begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figs/figs_new/plot2_yprof_comp_FINAL_v2.pdf} \caption{ Same as Fig.~\ref{fig:predict_y500_fb}, but when using the full pressure profile and electron number density profiles instead of the integrated quantities. We again find that with pressure profile and $\Omega_m$ information we can recover robust and precise constraints on the matter power suppression. } \label{fig:predict_profiles} \end{figure*}

Fig.~\ref{fig:Pk_Y_CV} shows the accuracy of the RF predictions for $\Delta P/P$ when trained on $f_b$ and $\Omega_m$, normalized to the sample variance error in $\Delta P/P$. As we will show later in this section, this combination of inputs results in precise constraints on the matter power suppression. We show the results of training and testing on a single simulation suite, and also the results of training/testing across different simulation suites. It is clear that when training and testing on the same simulation suite, the RF learns a model that comes close to the best possible uncertainty on $\Delta P/P_{\rm DMO}$ (i.e. cosmic variance). When training on one or two simulation suites and testing on another, however, the predictions show bias at low $k$. This suggests that the model learned from one simulation does not generalize very well to another in this regime. This result is somewhat different from the findings of \citet{vanDaalen:2020}, where it was found that the relationship between $f_b$ and $\Delta P/P_{\rm DMO}$ \textit{did} generalize to different simulations. This difference may result from the fact that we are considering a wider range of feedback prescriptions than \citet{vanDaalen:2020}, as well as considering significant variations in cosmological parameters.

Fig.~\ref{fig:Pk_Y_CV} also shows the results of testing and training on all three simulations (black points with error bars). Encouragingly, we find that in this case, the predictions are of comparable accuracy to those obtained from training and predicting on the same simulation suite. This suggests that there is a general relationship across all feedback models that can be learned to go from $\Omega_m$ and $f_b$ to $\Delta P/P_{\rm DMO}$. Henceforth, we will show results trained and tested on all three simulation suites. Of course, this result does not imply that our results will generalize to some completely different feedback prescription.
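A minimal sketch of this training and evaluation loop is given below (using scikit-learn; the array names are our own placeholders for the LH-suite measurements):
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def train_and_score(X, Y):
    """X: (n_sims, n_features) observables,
          e.g. columns [f_b/(Omega_b/Omega_m), Omega_m];
       Y: (n_sims, n_k) values of Delta P / P_DMO (multi-output target)."""
    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3,
                                              random_state=0)
    rf = RandomForestRegressor(n_estimators=100, max_depth=10,
                               random_state=0)
    rf.fit(X_tr, Y_tr)
    # 16th/84th percentiles of held-out errors; these are later
    # normalized by the cosmic-variance error from the CV suite.
    err = rf.predict(X_te) - Y_te
    return rf, np.percentile(err, [16, 84], axis=0)
\end{verbatim}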
In Fig.~\ref{fig:predict_y500_fb} we show the results of training the random forest on different combinations of $f_b$, $Y_{500}$ and $\Omega_m$. Consistent with the findings of \citet{vanDaalen:2020}, we find that $f_b/(\Omega_b/\Omega_m)$ results in robust constraints on the matter power suppression (blue points with errors). These constraints come close to the cosmic variance limit across a wide range of $k$. We additionally find that providing $f_b$ and $\Omega_m$ as separate inputs to the RF improves on the combination $f_b/(\Omega_b/\Omega_m)$, yielding smaller variance in the predicted $\Delta P/P_{\rm DMO}$, with the largest improvement at small scales. This is not surprising given the predictions of our simple model, for which it is clear that $\Delta P/P$ can be impacted by both $\Omega_m$ and $f_b/(\Omega_b/\Omega_m)$ independently. Similarly, it is clear from Fig.~\ref{fig:Pk_SIMBA_allparams} that changing $\Omega_m$ changes the relationship between $\Delta P/P$ and the halo gas-derived quantities (like $Y$ and $f_b$).

We next consider a model trained on $Y_{500c}/Y^{SS}$ (orange points in Fig.~\ref{fig:predict_y500_fb}). This model yields reasonable predictions for $\Delta P/P$, although not quite as good as the model trained on $f_b/(\Omega_b/\Omega_m)$. The $Y/Y^{SS}$ model yields somewhat larger error bars, and the distribution of $\Delta P/P$ predictions is highly asymmetric. When we train the RF model jointly on $Y_{500c}/Y^{SS}$ and $\Omega_m$ (green points), we find that the predictions improve considerably, particularly at high $k$. In this case, the predictions are typically symmetric around the true $\Delta P/P$, have smaller uncertainty compared to the model trained on $f_b/(\Omega_b/\Omega_m)$, and comparable uncertainty to the model trained on $\{ f_b/(\Omega_b/\Omega_m)$, $\Omega_m \}$. We thus conclude that when combined with matter density information, $Y/Y^{\rm SS}$ provides a powerful probe of baryonic effects on the matter power spectrum.

Above we have considered the integrated tSZ signal from halos, $Y_{500c}$. Measurements in data, however, can potentially probe the tSZ profiles rather than only the integrated tSZ signal (although the instrumental resolution may limit the extent to which this is possible). In Fig.~\ref{fig:predict_profiles} we consider RF models trained on the full profiles instead of just the integrated quantities. The electron pressure and number density profiles are measured in eight logarithmically spaced bins between $0.1 < r/r_{200c} < 1$. We find that while the ratio $P_e(r)/P^{\rm SS}$ results in robust constraints, jointly providing the information on $\Omega_{m}$ makes them more precise. Similar to the integrated-quantity case, we find that additionally providing the electron density profile information only marginally improves the constraints. We also show the results when jointly using the measured pressure profiles of both the low- and high-mass halos to infer the matter power suppression. We find that this leads to only a marginal improvement in the constraints. This suggests that the deviation of the thermal pressure from the expected self-similar relation already captures the strength of feedback adequately, with minimal improvement from adding information from lower-mass halos. Note that we have input the 3D pressure and electron density profiles in this case.
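For concreteness, a short sketch of how such profile-based inputs might be assembled into an RF feature matrix (the binning follows the text; the function and array names are our own illustrative assumptions):
\begin{verbatim}
import numpy as np

def profile_features(pe_prof, ne_prof, omega_m):
    """Assemble RF inputs from binned 3D profiles.

    pe_prof, ne_prof : (n_sims, 8) pressure / electron density profiles
    in eight log-spaced bins over 0.1 < r/r200c < 1 (bin edges e.g.
    np.geomspace(0.1, 1.0, 9)), normalized by the self-similar
    expectation; omega_m : (n_sims,) matter density per simulation.
    """
    return np.column_stack([pe_prof, ne_prof, omega_m])
\end{verbatim}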
Even though observed SZ maps are projected quantities, we can infer the 3D pressure profiles from the model used to analyze the projected correlations.

\begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figs/figs_new/plot4_Bkeq_comp_all.pdf} \caption{Same as Fig.~\ref{fig:predict_y500_fb}, but for the bispectrum suppression for equilateral triangle configurations at different scales. We find that having pressure profile information results in unbiased constraints here as well. } \label{fig:predict_Bk_eq} \end{figure*}

\subsection{Predicting the bispectrum suppression with $f_b$ and electron pressure}

In Fig.~\ref{fig:predict_Bk_eq}, we test our methodology on the bispectrum suppression, $\Delta B(k)/B(k)$. Similar to the matter power spectrum, we train and test our model on a combination of the three simulations. We train and test on equilateral bispectrum configurations at different scales $k$. We again see that information about the electron pressure and $\Omega_m$ results in precise and unbiased constraints on the bispectrum suppression. The constraints improve as we go to smaller scales. In Appendix~\ref{app:Bk_sq} we apply a similar methodology to squeezed bispectrum configurations. However, several caveats are important to consider here. The bispectrum is more sensitive to higher-mass ($M > 5\times 10^{13} M_{\odot}/h$) halos, which are missing from the CAMELS simulations. This can bias the inferred sensitivity of the bispectrum suppression in these small boxes. We also note that there can additionally be important dependencies on the simulation resolution, which are beyond the scope of this study. However, we expect the qualitative methodology applied here to remain valid for a set of larger simulations. Finally, there would be some degeneracy between the power spectrum suppression and the bispectrum suppression, as they both stem from the same underlying physics; we defer a joint study to future work.

\section{Results II: ACTxDES measurements and forecast}
\label{sec:results_data}

\begin{figure*} \includegraphics[scale = 0.45]{figs/figs_new/power_supp_data_forecast_v2.pdf} \caption[]{Constraints on the matter power suppression obtained from the inferred $Y_{\rm 500c}/Y^{\rm SS}$ (fixing $\Omega_{\rm m} = 0.3$) from the shear-$y$ correlations obtained from the DES$\times$ACT analysis \citep{Pandey:2022}. We also show the expected improvements from future halo-$y$ correlations from DESI$\times$SO using the constraints in \citet{Pandey:2020}. We compare these to the inferred constraints obtained using cosmic shear \citep{Chen:2022:MNRAS:} and additionally including X-ray and kSZ data \citep{Schneider:2022:MNRAS:}. We also compare with the results from larger simulations: OWLS \citep{Schaye:2010:MNRAS:}, BAHAMAS \citep{McCarthy:2017:MNRAS:} and TNG-300 \citep{Springel:2018:MNRAS:}. } \label{fig:Pk_data_forecast} \end{figure*}

Our analysis above has resulted in a statistical model (random forest) that predicts $\Delta P/P_{\rm DMO}$ (and the corresponding uncertainties) given values of $Y_{500c}$ for low-mass halos and $\Omega_m$. This model is robust to significant variations in the feedback prescription, at least across the SIMBA, Illustris and Astrid models.
We now apply this model to constraints on $Y_{500c}$ coming from the cross-correlation of galaxy lensing shear with tSZ maps. \citet{Gatti:2022} and \citet{Pandey:2022} measured cross-correlations of DES galaxy lensing with Compton $y$ maps from a combination of Advanced-ACT \citep{Madhavacheril:2020:PhRvD:} and {\it Planck} data \citep{PlanckCollaboration:2016:A&A:} over an area of 400 sq. deg. They analyze these cross-correlations using a halo model framework, where the pressure profile in halos is parameterized using a generalized Navarro-Frenk-White profile \citep{Navarro:1996:ApJ:, Battaglia:2012:ApJ:b}. This pressure profile is described using four free parameters, allowing for scaling with mass, redshift and distance from the halo center. A tomographic analysis of this shear-$y$ correlation constrains these parameters and hence the pressure profiles of halos over a wide range of masses. The constraints on these parameterized profiles can be translated directly into constraints on $Y_{500c}$ for halos in the mass range that we have used to infer the constraints on the matter power suppression from the trained random forest model, as described in the previous section. Note that the shear-$y$ correlation has sensitivity across the mass range relevant to our trained model ($M > 5 \times 10^{12} M_{\odot}/h$), but the sensitivity is reduced towards the low end of this range.

Fig.~\ref{fig:Pk_data_forecast} shows the results of feeding the inferred $Y_{\rm 500c}$ constraints from \citet{Pandey:2022} into our random forest model to infer the impact of baryonic feedback on the matter power spectrum (black points with error bars). Note that in this inference we fix the matter density parameter to $\Omega_{m} = 0.3$, the same value as used by the CAMELS CV simulations, as we use these to estimate the halo mass function. The shear-tSZ correlation analysis provides constraints on the parameters of the 3D pressure profile of halos and its evolution with mass and redshift. We first use these parameter constraints to generate 400 samples of the inferred 3D profiles of the halos at $z=0$ in 10 logarithmic mass bins in the range $12.7 < \log_{10}(M) < 14$. Then we perform the volume integral of these profiles to infer $Y_{\rm 500c}(M, z)$ (see Eq.~\ref{eq:Y500_from_Pe}). Afterwards, we generate the stacked normalized integrated pressure for each sample $j$ by integrating over the halo masses as:
\begin{equation}\label{eq:Pe_stacked_data} \bigg\langle \frac{Y_{\rm 500c}}{Y^{\rm SS}} \bigg\rangle^j = \frac{1}{\bar{n}^j} \int dM \bigg(\frac{dn}{dM}\bigg)^j_{\rm CAMELS} \frac{Y^j_{\rm 500c}(M)}{Y^{\rm SS}} \end{equation}
where $\bar{n}^j = \int dM (dn/dM)^j_{\rm CAMELS}$ and $(dn/dM)^j_{\rm CAMELS}$ is a randomly chosen halo mass function from the CV set of boxes of TNG, SIMBA or Astrid.
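A short sketch of this stacking step (implementing Eq.~\ref{eq:Pe_stacked_data}; the function and array names are our own illustrative choices):
\begin{verbatim}
import numpy as np

def stacked_y_ratio(m_grid, dndm_j, y500_j, y_ss):
    """HMF-weighted average of Y_500c / Y^SS for one sample j.

    m_grid : halo mass grid [Msun/h] covering 12.7 < log10(M) < 14
    dndm_j : dn/dM drawn from a random CAMELS CV box, on m_grid
    y500_j : sampled Y_500c(M) from the pressure-profile posterior
    y_ss   : self-similar Y^SS(M) on the same mass grid
    """
    nbar = np.trapz(dndm_j, m_grid)
    return np.trapz(dndm_j * y500_j / y_ss, m_grid) / nbar
\end{verbatim}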
Drawing the halo mass function in this way incorporates the impact (and its corresponding uncertainty) of the small box size on the halo mass function. Note that due to the small box size there is a deficit of high-mass halos, and hence the functional form differs from fitting functions in the literature \citep[e.g.][]{Tinker:2008:ApJ:}. Thereafter, we feed these stacked pressure profiles to the random forest regressor, jointly trained on TNG, SIMBA and Astrid. For each input pressure profile, we recover the value of the matter power suppression, $\Delta P/P_{\rm DMO}$. Finally, in Fig.~\ref{fig:Pk_data_forecast}, we plot the mean and the 16th and 84th percentiles of the recovered $\Delta P/P_{\rm DMO}$ distribution from the 400 samples. We note that our inference of the uncertainties is robust to the number of samples considered.

In the same figure, we also show the constraints from \citet{Chen:2022:MNRAS:} and \citet{Schneider:2022:MNRAS:} obtained from the analysis of complementary datasets. \citet{Chen:2022:MNRAS:} analyze the small-scale cosmic shear measurements from the DES Year-3 data release using a baryon correction model. Note that in this analysis, they only use a limited range of cosmologies, particularly restricting to a high $\sigma_8$ range due to the emulator calibration. Moreover, they also impose cosmology constraints from the large-scale analysis of the DES data. Note that unlike the procedure presented here, their modeling and constraints are sensitive to the priors on $\sigma_8$. Therefore, their constraints might be optimistic in this case. \citet{Schneider:2022:MNRAS:} analyze X-ray data (as presented in \citealt{Giri:2021:JCAP:}), kSZ data from ACT and SDSS \citep{Schaan:2021:PhRvD:} and the cosmic shear measurements from KiDS \citep{Asgari:2021}, using another version of the baryon correction model. A joint analysis of these complementary datasets leads to crucial degeneracy breaking in the parameters. It would be interesting to include the tSZ observations presented here in the same framework, as this can potentially make the constraints more precise.

Several caveats about our analysis with data are in order. First, the lensing-SZ correlation is most sensitive to halos in the mass range $M_{\rm halo} \gtrsim 10^{13} M_{\odot}/h$. However, our RF model operates on halos with mass in the range $5 \times 10^{12} \leq M_{\rm halo}\,(M_{\odot}/h) \leq 10^{14}$, with the limited volume of the simulations restricting the number of halos above $10^{13} M_{\odot}/h$. We have attempted to account for this selection effect by using the halo mass function from the CV sims of the CAMELS simulations when calculating the stacked profile. However, using a larger-volume simulation suite would be a better alternative (also see the discussion in Appendix~\ref{app:volume_res_comp}). Moreover, the CAMELS simulation suite fixes $\Omega_b$ to a fiducial value. There might be some non-trivial effects on the inferences when varying that parameter, as it would impact the distribution of baryons, especially in low-mass halos, and its interplay with the changing baryonic feedback. In order to shift the sensitivity of the data correlations to lower halo masses, it would be preferable to analyze galaxy-SZ and halo-SZ correlations.
In \citet{Pandey:2020} we forecast the constraints on the inferred 3D pressure profile from future halo-SZ correlations using DESI halos and CMB-S4 SZ maps for a wide range of halo masses. In Fig.~\ref{fig:Pk_data_forecast} we also show the expected constraints on the matter power suppression using the halo-SZ correlations from halos with $M_h > 5\times 10^{12} M_{\odot}/h$. We again follow the same methodology as described above to create a stacked normalized integrated pressure (see Eq.~\ref{eq:Pe_stacked_data}). Moreover, we again fix $\Omega_m=0.3$ to predict the matter power suppression. Note that we shift the mean value of $\Delta P/P_{\rm DMO}$ to the value recovered from the BAHAMAS high-AGN simulations \citep{McCarthy:2017:MNRAS:}. As can be seen in Fig.~\ref{fig:Pk_data_forecast}, we can expect to obtain significantly more precise constraints from these future observations. \section{Conclusions} \label{sec:conclusion} We have shown that the tSZ signals from low-mass halos contain significant information about the impacts of baryonic feedback on the small-scale matter distribution. Using models trained on hydrodynamical simulations with a wide range of feedback implementations, we demonstrate that information about baryonic effects on the power spectrum and bispectrum can be robustly extracted. By applying these same models to measurements with ACT and DES, we have shown that current tSZ measurements already constrain the impact of feedback on the matter distribution. Our results suggest that using simulations to learn the relationship between halo gas observables and baryonic effects on the matter distribution is a promising way forward for constraining these effects with data. Our main findings from our explorations with the CAMELS simulations are the following: \begin{itemize} \item In agreement with \citet{vanDaalen:2020}, we find that the baryon fraction in halos correlates with the power spectrum suppression. We find that the correlation is especially robust on small scales. \item We find that there can be significant scatter in the relationship between baryon fraction and power spectrum suppression at low halo mass, and that the relationship varies to some degree with feedback implementation. However, the bulk trends appear to be consistent regardless of feedback implementation. \item We propose a simple model that qualitatively (and in some cases quantitatively) captures the broad features of the relationships between feedback, $\Delta P/P_{\rm DMO}$ (at different values of $k$), and halo gas-related observables like $f_b$ and $Y$ (at different halo masses). \item Despite significant scatter in the relations between $Y$ and $\Delta P/P$ at low halo mass, we find that simple random forest models yield tight and robust constraints on $\Delta P/P$ given information about $Y$ in low-mass halos and $\Omega_m$. \item Using the pressure profile instead of just the integrated $Y_{\rm 500c}$ signal provides additional information about $\Delta P/P_{\rm DMO}$, leading to 20-50\% improvements when not using any cosmological information. When additionally providing the $\Omega_m$ information, the improvements in constraints on power or bispectrum suppression are modest when using the full pressure profile relative to the integrated quantities.
\item The pressure profiles and baryon fractions also carry information about baryonic effects on the bispectrum. \end{itemize} Our main findings from our analysis of constraints from the DES$\times$ACT shear-$y$ correlation are: \begin{itemize} \item The electron pressure profiles measured from data analyses of tSZ and LSS cross-correlations can be used to infer the matter power suppression. We infer competitive constraints on this suppression using the shear-$y$ correlation measurements from DES and ACT data. \item We also show that the constraints should improve significantly in the future, particularly using halo catalogs from DESI and tSZ maps from CMB Stage 4. \end{itemize} With data from future galaxy and CMB surveys, we expect constraints on the tSZ signal from halos across a wide mass and redshift range to improve significantly. These improvements will come from both the galaxy side (e.g. halos detected over larger areas of the sky and out to higher redshifts) and the CMB side (more sensitive tSZ maps over larger areas of the sky). Our forecast for DESI and CMB Stage 4 in Fig.~\ref{fig:Pk_data_forecast} suggests that very tight constraints can be obtained on the impact of baryonic feedback on the matter power spectrum. By combining these results with weak lensing constraints on the small-scale matter distribution, we expect to be able to extract significantly more cosmological information. \bibliographystyle{mnras} \section{Introduction}\label{sec:intro} The statistics of the matter distribution on scales $k \gtrsim 0.1\,h{\rm Mpc}^{-1}$ are tightly constrained by current weak lensing surveys \citep[e.g.][]{Asgari:2021,DESY3cosmo}. However, modeling the matter distribution on small scales $k \gtrsim 1\,h{\rm Mpc}^{-1}$ to extract cosmological information is complicated by the effects of baryonic feedback \citep{Rudd:2008}. Energetic output from active galactic nuclei (AGN) and stellar processes (e.g. winds and supernovae) directly impacts the distribution of gas on small scales, thereby changing the total matter distribution \citep[e.g.][]{Chisari:2019}.\footnote{Changes to the gas distribution can also gravitationally influence the dark matter distribution, further modifying the total matter distribution.} The coupling between these processes and the large-scale gas distribution is challenging to model theoretically and in simulations because of the large dynamic range involved, from the scales of individual stars to the scales of galaxy clusters. While it is generally agreed that feedback leads to a suppression of the matter power spectrum on scales $0.1\,h{\rm Mpc}^{-1} \lesssim k \lesssim 20\,h{\rm Mpc}^{-1}$, the amplitude of this suppression remains uncertain by tens of percent \citep{vanDaalen:2020, Villaescusa-Navarro:2021:ApJ:} (see also Fig.~\ref{fig:Pk_Bk_CV}). This systematic uncertainty limits constraints on cosmological parameters from current weak lensing surveys \citep[e.g.][]{DESY3cosmo,Asgari:2021}. For future surveys, such as the Vera Rubin Observatory LSST \citep{TheLSSTDarkEnergyScienceCollaboration:2018:arXiv:} and \textit{Euclid} \citep{EuclidCollaboration:2020:A&A:}, the problem will become even more severe given expected increases in statistical precision.
In order to reduce the uncertainties associated with feedback, we would like to identify observable quantities that carry information about the impact of feedback on the matter distribution, and to develop approaches to extract this information \citep[e.g.][]{Nicola:2022:JCAP:}. Recently, \citet{vanDaalen:2020} showed that the halo baryon fraction, $f_b$, in halos with $M \sim 10^{14}\,M_{\odot}$ carries significant information about the suppression of the matter power spectrum caused by baryonic feedback. Notably, they found that the relation between $f_b$ and the matter power suppression was robust to changes in the feedback prescription. Note that $f_b$ as defined by \citet{vanDaalen:2020} counts baryons both in the intracluster medium and in stars. The connection between $f_b$ and feedback is expected, since one of the main drivers of feedback's impact on the matter distribution is the ejection of gas from halos by AGN. Therefore, when feedback is strong, halos will be depleted of baryons and $f_b$ will be lower. The conversion of baryons into stars --- which will not significantly impact the matter power spectrum on large scales --- does not impact $f_b$, since $f_b$ includes baryons in stars as well as the ICM. \citet{vanDaalen:2020} specifically consider the measurement of $f_b$ in halos with $6\times 10^{13} M_{\odot} \lesssim M_{500c} \lesssim 10^{14}\,M_{\odot}$. In much more massive halos, the energy output of AGN is small compared to the binding energy of the halo, preventing gas from being expelled. In smaller halos, \citet{vanDaalen:2020} find that the correlation between power spectrum suppression and $f_b$ is less clear. Although $f_b$ carries information about feedback, it is somewhat unclear how one would measure $f_b$ in practice. Observables such as the kinematic Sunyaev-Zel'dovich (kSZ) effect can be used to constrain the gas density; combined with some estimate of stellar mass, $f_b$ could then be inferred. However, measuring the kSZ is challenging, and current measurements have low signal-to-noise \citep{Hand:2012,Hill:2016,Soergel:2016}. Moreover, \citet{vanDaalen:2020} consider a relatively limited range of feedback prescriptions. It is unclear whether a broader range of feedback models could lead to a greater spread in the relationship between $f_b$ and baryonic effects on the power spectrum. In any case, it is worthwhile to consider other potential observational probes of feedback. Another potentially powerful probe of baryonic feedback is the thermal SZ (tSZ) effect. The tSZ effect is caused by inverse Compton scattering of CMB photons with a population of electrons at high temperature. This scattering process leads to a spectral distortion in the CMB that can be reconstructed from multi-frequency CMB observations. The amplitude of this distortion is sensitive to the line-of-sight integral of the electron pressure. Since feedback changes the distribution and thermodynamics of the gas, it stands to reason that it could impact the tSZ signal. Indeed, several works using both data \citep[e.g.][]{Pandey:2019,Pandey:2022,Gatti:2022} and simulations \citep[e.g.][]{Scannapieco:2008,Bhattacharya:2008,Moser:2022,Wadekar:2022} have shown that the tSZ signal from low-mass (group scale) halos is sensitive to feedback. Excitingly, the sensitivity of tSZ measurements is expected to increase dramatically in the near future due to high-sensitivity CMB measurements from e.g.
SPT-3G \citep{Benson:2014:SPIE:}, Advanced ACTPol \citep{Henderson:2016:JLTP:}, Simons Observatory \citep{Ade:2019:JCAP:}, and CMB Stage 4 \citep{CMBS4}. The goal of this work is to investigate what information the tSZ signals from low-mass halos contain about the impact of feedback on the small-scale matter distribution. The tSZ signal, which we denote with the Compton $y$ parameter, carries different information from $f_b$. For one, $y$ is sensitive only to the gas and not to stellar mass. Moreover, $y$ carries sensitivity to both the gas density and pressure, unlike $f_b$, which depends only on the gas density.\footnote{Of course, sensitivity to gas temperature does not necessarily mean that the tSZ is a more useful probe of feedback.} The $y$ signal is also easier to measure than $f_b$, since it can be estimated simply by cross-correlating halos with a tSZ map. The signal-to-noise of such cross-correlation measurements is already high with current data, on the order of 10s of $\sigma$ \citep{Vikram:2017,Pandey:2019,Pandey:2022,Sanchez:2022}. In this paper, we investigate the information content of the tSZ signal using the Cosmology and Astrophysics with MachinE Learning Simulations (CAMELS). As we describe in more detail in \S\ref{sec:camels}, CAMELS is a suite of many hydrodynamical simulations run across a range of different feedback prescriptions and different cosmological parameters. The relatively small volume of the CAMELS simulations ($(25/h)^3\,{\rm Mpc^3}$) means that we are somewhat limited in the halo masses and scales that we can probe. We therefore view our analysis as an exploratory work that investigates the information content of low-mass halos for constraining feedback and how to extract this information; more accurate results over a wider range of halo mass and $k$ may be obtained in the future using the same methods applied to larger volume simulations. By training statistical models on the CAMELS simulations, we explore what information about feedback exists in tSZ observables, and how robust this information is to changes in subgrid prescriptions. We consider three very different prescriptions for feedback based on the SIMBA \citep{Dave:2019:MNRAS:}, Illustris-TNG \citep{Pillepich:2018:MNRAS:} and Astrid \citep{Bird:2022:MNRAS:, Ni:2022:MNRAS:} models across a wide range of possible parameter values (including variations in cosmology). The flexibility of the statistical models we employ means that it is possible to uncover more complex relationships between e.g. $f_b$, $y$, and the baryonic suppression of the power spectrum than considered in \citet{vanDaalen:2020}. The work presented here is complementary to \citet{Delgado:23}, which explores the information content of the baryon fraction of halos over a broader mass range ($M > 10^{10} M_{\odot}/h$), finding a broad correlation with the matter power suppression. Finally, we apply our trained statistical models to recently published measurements of the $y$ signal in low-mass halos. In particular, we consider the inferred values of $Y$ from the lensing-tSZ correlation analysis of Atacama Cosmology Telescope (ACT) and Dark Energy Survey (DES) \citep{Madhavacheril:2020:PhRvD:, Amon:2022:PhRvD:, Secco:2022:PhRvD:b} data presented in \citet{Gatti:2022} and \citet{Pandey:2022}. In addition to providing interesting constraints on the impact of feedback, these results highlight the potential of future similar analyses with e.g.
the Dark Energy Spectroscopic Instrument (DESI; \citealt{DESI}) and CMB Stage 4 \citep{CMBS4}. Two recent works --- \citet{Moser:2022} and \citet{Wadekar:2022} --- have used the CAMELS simulations to explore the information content of the tSZ signal for constraining feedback. These works focus on the ability of tSZ observations to constrain the parameters of subgrid feedback models in hydrodynamical simulations. Here, in contrast, we attempt to connect the observable quantities directly to the impact of feedback on the matter power spectrum. Additionally, unlike some of the results presented in \citet{Moser:2022} and \citet{Wadekar:2022}, we consider the full parameter space explored by the CAMELS simulations rather than the small variations around a fiducial point that are relevant to the calculation of the Fisher matrix. Finally, we focus only on the intra-halo gas profiles of halos in the mass range captured by the CAMELS simulations (cf. \citealt{Moser:2022}). We do not expect the inter-halo gas pressure to be captured by the small boxes used here, as it may be sensitive to higher halo masses \citep{Pandey:2020}. Nonlinear evolution of the matter distribution induces non-Gaussianity, and hence there is additional information to be recovered beyond the power spectrum. Recent measurements detect higher-order matter correlations at cosmological scales at $\mathcal{O}(10\sigma)$ \citep{Secco:2022:PhRvD:b, Gatti:2022:PhRvD:}, and the significance of these measurements is expected to rapidly increase with upcoming surveys \citep{Pyne:2021:MNRAS:}. Jointly analyzing two-point and three-point correlations of the matter field can help with self-calibration of systematic parameters and improve cosmological constraints. As described in \citet{Foreman:2020:MNRAS:}, the matter bispectrum is expected to be impacted by baryonic physics at the $\mathcal{O}(10\%)$ level over the scales of interest. With these considerations in mind, we also investigate whether the SZ observations carry information about the impact of baryonic feedback on the matter bispectrum. The plan of the paper is as follows. In \S\ref{sec:camels} we discuss the CAMELS simulations and the data products that we use in this work. In \S\ref{sec:results_sims}, we present the results of our explorations with the CAMELS simulations, focusing on the information content of the tSZ signal for inferring the amount of matter power spectrum suppression. In \S\ref{sec:results_data}, we apply our analysis to the DES and ACT measurements. We summarize our results and conclude in \S\ref{sec:conclusion}.
\section{CAMELS simulations and observables} \label{sec:camels} \subsection{Overview of CAMELS simulations} \begin{table*} \begin{tabular}{@{}|c|c|l|@{}} \toprule Simulation & Type/Code & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Astrophysical parameters varied\\ \& their meanings\end{tabular}} \\ \midrule IllustrisTNG & \begin{tabular}[c]{@{}c@{}}Magneto-hydrodynamic/\\ AREPO\end{tabular} & \begin{tabular}[c]{@{}l@{}}$A_{\rm SN1}$: (Energy of galactic winds)/SFR \\ $A_{\rm SN2}$: Speed of galactic winds\\ $A_{\rm AGN1}$: Energy/(BH accretion rate)\\ $A_{\rm AGN2}$: Jet ejection speed or burstiness\end{tabular} \\ \midrule SIMBA & Hydrodynamic/GIZMO & \begin{tabular}[c]{@{}l@{}}$A_{\rm SN1}$: Mass loading of galactic winds\\ $A_{\rm SN2}$: Speed of galactic winds\\ $A_{\rm AGN1}$: Momentum flux in QSO and jet modes of feedback\\ $A_{\rm AGN2}$: Jet speed in kinetic mode of feedback\end{tabular} \\ \midrule Astrid & Hydrodynamic/pSPH & \begin{tabular}[c]{@{}l@{}}$A_{\rm SN1}$: (Energy of galactic winds)/SFR \\ $A_{\rm SN2}$: Speed of galactic winds\\$A_{\rm AGN1}$: Energy/(BH accretion rate)\\ $A_{\rm AGN2}$: Thermal feedback efficiency\end{tabular} \\ \bottomrule \end{tabular} \caption{Summary of the three varieties of simulations used in this analysis. In addition to the four astrophysical parameters listed, all simulations vary two cosmological parameters, $\Omega_{\rm m}$ and $\sigma_8$. \label{tab:feedback}} \end{table*} We investigate the use of SZ signals for constraining the impact of feedback on the matter distribution using approximately 3000 cosmological simulations run by the CAMELS collaboration \citep{Villaescusa-Navarro:2021:ApJ:}. One half of these are gravity-only N-body simulations and the other half are hydrodynamical simulations with matching initial conditions. The simulations are run using three different hydrodynamical sub-grid codes, Illustris-TNG \citep{Pillepich:2018:MNRAS:}, SIMBA \citep{Dave:2019:MNRAS:} and Astrid \citep{Bird:2022:MNRAS:, Ni:2022:MNRAS:}. As detailed in \citet{Villaescusa-Navarro:2021:ApJ:}, for each sub-grid implementation six parameters are varied: two cosmological parameters ($\Omega_m$ and $\sigma_8$) and four parameters dealing with baryonic astrophysics. Of these, two deal with supernova feedback ($A_{\rm SN1}$ and $A_{\rm SN2}$) and two deal with AGN feedback ($A_{\rm AGN1}$ and $A_{\rm AGN2}$). The meanings of these parameters for each subgrid model are summarized in Table~\ref{tab:feedback}. Note that the astrophysical parameters might have somewhat different physical meanings for different subgrid prescriptions, and there is usually a complex interplay between them regarding their impact on the properties of galaxies and gas. For example, the parameter $A_{\rm SN1}$ approximately corresponds to the prefactor for the overall energy output in galactic wind feedback per unit star formation in both the TNG \citep{Pillepich:2018:MNRAS:} and Astrid \citep{Bird:2022:MNRAS:} simulations. However, in the SIMBA simulations it corresponds to the wind mass outflow rate per unit star formation, calibrated from the Feedback In Realistic Environments (FIRE) zoom-in simulations \citep{Angles-Alcazar:2017:MNRAS:}. Similarly, the $A_{\rm AGN2}$ parameter controls the burstiness and the temperature of the heated gas during AGN bursts in the TNG simulations \citep{Weinberger:2017:MNRAS:}.
In the SIMBA suite, by contrast, it corresponds to the speed of the kinetic AGN jets with constant momentum flux \citep{Angles-Alcazar:2017:MNRAS:, Dave:2019:MNRAS:}, while in the Astrid suite it corresponds to the thermal feedback efficiency. As we describe in \S~\ref{sec:fbY}, this can result in a counter-intuitive impact on the matter power spectrum in the Astrid simulation relative to TNG and SIMBA. For each of the sub-grid physics prescriptions, three varieties of simulations are provided. These include 27 sims for which the parameters are fixed and the initial conditions are varied (cosmic variance, or CV, set), 66 simulations varying only one parameter at a time (1P set) and 1000 sims varying parameters in a six-dimensional Latin hypercube (LH set). We use the CV simulations to estimate the variance expected in the matter power suppression due to stochasticity (see Fig.~\ref{fig:Pk_Bk_CV}). We use the 1P sims to understand how the matter suppression responds to variations in each parameter individually. Finally, we use the full LH set to effectively marginalize over the full parameter space varying all six parameters. We use publicly available power spectrum and bispectrum measurements for these simulation boxes.\footnote{\url{https://www.camel-simulations.org/data}} Where unavailable, we calculate the power spectrum and bispectrum using the publicly available code \texttt{Pylians}.\footnote{\url{https://github.com/franciscovillaescusa/Pylians3}} \subsection{Baryonic effects on the power spectrum and bispectrum} \begin{figure*} \includegraphics[width=\textwidth]{figs/figs_new/Pk_Bk_CV.pdf} \caption[]{Far left: baryonic suppression of the matter power spectrum, $\Delta P/P_{\rm DMO}$, in the CAMELS simulations. The dark-blue, red and orange shaded regions correspond to the $1\sigma$ error estimated with the CV suites of TNG, SIMBA and Astrid, respectively. The light-blue region corresponds to the $1\sigma$ error estimated with the LH suite of TNG, showing a significantly larger spread. Middle and right panels: the impact of baryonic physics on the matter bispectrum for the same set of simulations, for equilateral and squeezed triangle configurations, respectively. } \label{fig:Pk_Bk_CV} \end{figure*} The left panel of Fig.~\ref{fig:Pk_Bk_CV} shows the measurement of the power spectrum suppression caused by baryonic effects in the TNG, SIMBA, and Astrid simulations for their fiducial feedback settings. To compute the suppression, we use the matter power spectra and bispectra of the hydrodynamical simulations (hydro) and the dark-matter-only (DMO) simulations generated with varying initial conditions (ICs). The power spectra and bispectra for all the simulations are provided by the CAMELS collaboration and are publicly available \citep{Villaescusa-Navarro:2021:ApJ:}. For each of the 27 unique IC runs, we calculate the ratios $\Delta P/P_{\rm DMO} = (P_{\rm hydro} - P_{\rm DMO})/P_{\rm DMO}$ and $\Delta B/B_{\rm DMO} = (B_{\rm hydro} - B_{\rm DMO})/B_{\rm DMO}$. As the hydrodynamical and the N-body simulations are run with the same initial conditions, the ratios $\Delta P/P_{\rm DMO}$ and $\Delta B/B_{\rm DMO}$ are independent of sample variance. It is clear that the amplitude of suppression of the small-scale matter power spectrum can be significant: suppression on the order of tens of percent is reached for all three simulations. It is also clear that the impact differs significantly between the three simulations.
Even for the simulations in closest agreement (Illustris-TNG and Astrid), the measurements of $\Delta P/P_{\rm DMO}$ disagree by a factor of five at $k = 5\,h/{\rm Mpc}$. The width of the curves in Fig.~\ref{fig:Pk_Bk_CV} represents the standard deviation measured across the cosmic variance simulations, which all have the same parameter values but different initial conditions. For the bispectrum, we show both equilateral and squeezed triangle configurations, with the cosine of the angle between the long sides fixed to $\mu = 0.9$. Interestingly, the spread in $\Delta P/P_{\rm DMO}$ and $\Delta B/B_{\rm DMO}$ increases with increasing $k$ over the range $0.1 \,h/{\rm Mpc} \lesssim k \lesssim 10\,h/{\rm Mpc}$. This increase is driven by stochasticity arising from baryonic feedback. The middle and right panels show the impact of feedback on the bispectrum for the equilateral and squeezed triangle configurations, respectively. Throughout this work, we will focus on the regime $0.3\,h/{\rm Mpc}< k < 10\,h/{\rm Mpc}$. Larger-scale modes are not present in the $(25\,{\rm Mpc}/h)^3$ CAMELS simulations, and in any case, the impact of feedback on large scales is typically small. Much smaller scales, on the other hand, are difficult to model even in the absence of baryonic feedback \citep{Schneider:2016:JCAP:}. In Appendix~\ref{app:volume_res_comp} we show how the matter power suppression changes when changing the resolution and the volume of the simulation boxes. When comparing with the original IllustrisTNG boxes, we find that while the box size does not change the measured power suppression significantly, the resolution of the boxes has a non-negligible impact. This is expected, since the physical effect of feedback mechanisms depends on the resolution of the simulations. Note that the errorbars presented in Fig.~\ref{fig:Pk_Bk_CV} will also depend on the resolution and size of the simulation box, as well as on the feedback parameter values assumed. We defer a detailed study of the covariance dependence on the simulation properties to a future study. \subsection{Measuring gas profiles around halos} We use 3D grids of various fields (e.g. gas density and pressure) made available by the CAMELS team to extract the profiles of these fields around dark matter halos. The grids are generated with a resolution of 0.05 Mpc/$h$. Following \citet{vanDaalen:2020}, we define $f_b$ as $(M_{\rm gas} + M_{\rm stars})/M_{\rm total}$, where $M_{\rm gas}$, $M_{\rm stars}$ and $M_{\rm total}$ are the masses in gas, stars and all components, respectively. The gas mass is computed by integrating the gas number density profile around each halo. We typically measure $f_b$ within the spherical overdensity radius $r_{\rm 500c}$.\footnote{We define the spherical overdensity radius ($r_{\Delta c}$, where $\Delta = 200, 500$) and overdensity mass ($M_{\Delta c}$) such that the mean density within $r_{\Delta c}$ is $\Delta$ times the critical density $\rho_{\rm crit}$, i.e. $M_{\Delta c} = \Delta \frac{4}{3} \pi r^3_{\Delta c} \rho_{\rm crit}$.} The SZ effect is sensitive to the electron pressure. We compute the electron pressure profiles, $P_e$, using $P_e = 2(X_{\rm H} + 1)/(5X_{\rm H} + 3)P_{\rm th}$, where $P_{\rm th}$ is the total thermal pressure, and $X_{\rm H}= 0.76$ is the primordial hydrogen fraction.
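As an illustration of this step, the following is a minimal sketch of how a spherically averaged profile can be extracted from such a gridded field. The profile-measurement helper is a simplified stand-in (periodic boundaries and halo-centering details are ignored), not the actual CAMELS tooling.
\begin{verbatim}
import numpy as np

X_H = 0.76  # primordial hydrogen fraction

def electron_pressure(P_th):
    # P_e = 2 (X_H + 1) / (5 X_H + 3) * P_th
    return 2.0 * (X_H + 1.0) / (5.0 * X_H + 3.0) * P_th

def radial_profile(grid, center, r_edges, cell=0.05):
    # Spherically averaged profile of a 3D gridded field
    # (e.g. P_th) around a halo center. Positions in Mpc/h;
    # cell = 0.05 Mpc/h is the CAMELS grid resolution.
    # Periodic boundaries are ignored for brevity.
    n = grid.shape[0]
    ax = (np.arange(n) + 0.5) * cell
    r = np.sqrt((ax[:, None, None] - center[0])**2 +
                (ax[None, :, None] - center[1])**2 +
                (ax[None, None, :] - center[2])**2)
    return np.array([grid[(r >= lo) & (r < hi)].mean()
                     for lo, hi in zip(r_edges[:-1], r_edges[1:])])
\end{verbatim}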
Given the electron pressure profile, we measure the integrated SZ signal within $r_{\rm 500c}$ as: \begin{equation}\label{eq:Y500_from_Pe} Y_{\rm 500c} = \frac{\sigma_{\rm T}}{m_e c^2}\int_0^{r_{\rm 500c}} 4\pi r^2 \, P_e(r) \, dr, \end{equation} where $\sigma_{\rm T}$ is the Thomson scattering cross-section, $m_{e}$ is the electron mass and $c$ is the speed of light. We normalize the SZ observables by the self-similar expectation \citep{Battaglia:2012:ApJ:a},\footnote{Note that we use the spherical overdensity mass corresponding to $\Delta = 500$ and hence adjust the coefficients accordingly, while keeping the other approximations used in their derivation the same.} \begin{equation} \label{eq:y_ss} Y^{\rm SS} = 131.7 h^{-1}_{70} \,\bigg( \frac{M_{500c}}{10^{15} h^{-1}_{70} M_{\odot}} \bigg)^{5/3} \frac{\Omega_b}{0.043} \frac{0.25}{\Omega_m} \, {\rm kpc^2}, \end{equation} where $M_{500c}$ is the mass inside $r_{500c}$ and $h_{70} = h/0.7$. This calculation, which scales as $M^{5/3}$, assumes hydrostatic equilibrium and that the baryon fraction is equal to the cosmic baryon fraction. Hence deviations from this self-similar scaling provide a probe of the effects of baryonic feedback. Our final SZ observable is defined as $Y_{500c}/Y^{\rm SS}$. The amplitude of the pressure profile, on the other hand, approximately scales as $M^{2/3}$. Therefore, when considering the pressure profile as the observable, we factor out an $M^{2/3}$ scaling. \section{Results I: Simulations} \label{sec:results_sims} \subsection{Inferring feedback parameters from $f_b$ and $y$} \label{sec:fisher} We first consider how the halo $Y$ signal can be used to constrain the parameters describing the subgrid physics models. This question has been previously investigated using the CAMELS simulations by \citet{Moser:2022} and \citet{Wadekar:2022}. The rest of our analysis will focus on constraining changes to the power spectrum and bispectrum, and our intention here is mainly to provide a basis of comparison for those results. Similar to \citet{Wadekar:2022}, we treat the mean $\log Y$ value in two mass bins ($10^{12} < M \,(M_{\odot}/h) < 5\times 10^{12}$ and $5 \times 10^{12} < M \,(M_{\odot}/h) < 10^{14}$) as our observable, which we refer to as $\vec{d}$. Here and throughout our investigations with CAMELS we ignore the contributions of measurement uncertainty, since our intention is mainly to assess the information content of the SZ signals. We therefore use the CV simulations to determine the covariance, $\mathbf{C}$, of $\vec{d}$. Note that the level of cosmic variance will depend on the volume probed, and can be quite large for the CAMELS simulations. The Fisher matrix, $F_{ij}$, is then given by \begin{equation} F_{ij} = \frac{\partial \vec{d}^T}{\partial \theta_i} \mathbf{C}^{-1} \frac{\partial \vec{d}}{\partial \theta_j}, \end{equation} where $\theta_i$ refers to the $i$th parameter. Calculation of the derivatives $\partial \vec{d}/\partial \theta_i$ is complicated by the large amount of stochasticity between the CAMELS simulations. To perform the derivative calculation, we use a radial basis function interpolation method based on \citet{Moser:2022,Cromer:2022}. We show an example of the derivative calculation in Appendix~\ref{app:emulation}. We additionally assume priors of $\sigma(\ln p) = 1$ on the feedback parameters and $\sigma(p) = 1$ on the cosmological parameters. The parameter constraints corresponding to our calculated Fisher matrix are shown in Fig.~\ref{fig:fisher}.
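A minimal sketch of this Fisher computation is given below, with central finite differences standing in for the RBF-based derivatives used in the actual calculation; \texttt{d\_of\_theta} is a hypothetical emulator of the observable vector, not part of the CAMELS tooling.
\begin{verbatim}
import numpy as np

def fisher_matrix(d_of_theta, theta0, C, step=0.05):
    # F_ij = (dd/dtheta_i)^T C^{-1} (dd/dtheta_j), with simple
    # central finite differences in place of the RBF derivatives
    Cinv = np.linalg.inv(C)
    derivs = []
    for i in range(len(theta0)):
        tp = np.array(theta0, float)
        tm = np.array(theta0, float)
        tp[i] += step
        tm[i] -= step
        derivs.append((d_of_theta(tp) - d_of_theta(tm)) / (2 * step))
    D = np.array(derivs)        # shape: (n_params, n_data)
    F = D @ Cinv @ D.T
    # Gaussian priors add 1/sigma^2 to the diagonal, e.g.
    # sigma(ln p) = 1 for feedback and sigma(p) = 1 for cosmology
    return F
\end{verbatim}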
We show results only for $\Omega_m$, $A_{\rm SN1}$ and $A_{\rm AGN2}$, but additionally marginalize over $\sigma_8$, $A_{\rm SN2}$ and $A_{\rm AGN1}$. The degeneracy directions seen in our results are consistent with those in \citet{Wadekar:2022}. We find a weaker constraint on $A_{\rm AGN2}$, likely owing to the large sample variance contribution to our calculation. It is clear from Fig.~\ref{fig:fisher} that the marginalized constraints on the feedback parameters are weak. If information about $\Omega_m$ is not used, we effectively have no information about the feedback parameters. Even when $\Omega_m$ is fixed, the constraints on the feedback parameters are not very precise. This finding is consistent with \citet{Wadekar:2022}, for which measurement uncertainty was the main source of variance rather than sample variance. Part of the reason for the poor constraints is the degeneracy between the AGN and SN parameters. Degeneracies between the impacts of feedback parameters and cosmology on $Y$, as well as the potentially complex relation between the feedback parameters and the changing matter distribution, motivate us to consider instead direct inference of changes to the statistics of the matter distribution from the $Y$ observables. Note, however, that the conclusions here will depend upon the simulation volume, which would change the covariance and would capture effects like super-sample covariance. \begin{figure} \centering \includegraphics[scale=0.6]{figs/Y500_log_dec7.pdf} \caption{Forecast parameter constraints on the feedback parameters when $\log Y$ in two halo mass bins is treated as the observable. We assume that the only contribution to the variance of this observable is sample variance coming from the finite volume of the CAMELS simulations. } \label{fig:fisher} \end{figure} \subsection{$f_b$ and $y$ as probes of baryonic effects on the matter power spectrum} \label{sec:fbY} As discussed above, \citet{vanDaalen:2020} observed a tight correlation between the suppression of the matter power spectrum and the baryon fraction, $f_b$, in halos with $6\times 10^{13} M_{\odot} \lesssim M_{500c} \lesssim 10^{14}\,M_{\odot}$. That relation was found to hold regardless of the details of the feedback implementation, suggesting that by measuring $f_b$ in high-mass halos, one could robustly infer the impact of baryonic feedback on the power spectrum. We begin by investigating the connection between the matter power spectrum suppression and the integrated tSZ parameter in low-mass, $M \sim 10^{13}\,M_{\odot}$, halos to test whether a similar correlation exists (cf. \citealt{Delgado:23} for a similar figure relating $f_b$ and $\Delta P/P_{\rm DMO}$). We also consider a wider range of feedback models than \citet{vanDaalen:2020}, including the SIMBA and Astrid models. \begin{figure} \includegraphics[width=0.95\columnwidth]{figs/figs_new/vanDaleen+19_with_camels_SIMBA_all_params_Y500c.pdf} \caption[]{The relation between the matter power suppression at $k=2\, h/{\rm Mpc}$ and the integrated tSZ, $Y_{500c}/Y^{\rm SS}$, of halos in the mass range $10^{13} < M\,(M_{\odot}/h) < 10^{14}$ in the SIMBA simulation suite. In each of the six panels, the points are colored according to the parameter value given in the associated colorbar. } \label{fig:Pk_SIMBA_allparams} \end{figure} Fig.~\ref{fig:Pk_SIMBA_allparams} shows the impact of cosmological and feedback parameters on the relationship between the power spectrum suppression ($\Delta P/P_{\rm DMO}$) and the ratio $Y_{\rm 500c}/Y^{\rm SS}$ for the SIMBA simulations.
Each point corresponds to a single simulation, taking the average over all halos with $10^{13} < M \,(M_{\odot}/h) < 10^{14}$ when computing $Y_{\rm 500c}/Y^{\rm SS}$. We observe that the largest suppression (i.e. more negative $\Delta P/P_{\rm DMO}$) occurs when $A_{\rm AGN2}$ is large. This is caused by powerful AGN jet-mode feedback ejecting gas from halos, leading to a significant reduction in the matter power spectrum, as described by e.g. \citet{vanDaalen:2020, Borrow:2020:MNRAS:, Gebhardt:23}. For SIMBA, the parameter $A_{\rm AGN2}$ controls the velocity of the ejected gas, with higher velocities (i.e. higher $A_{\rm AGN2}$) leading to more ejected gas. On the other hand, when $A_{\rm SN2}$ is large, $\Delta P/P_{\rm DMO}$ is small. This is because efficient supernova feedback prevents the formation of the massive galaxies which host AGN, and hence reduces the strength of the AGN feedback. The parameter $A_{\rm AGN1}$, on the other hand, controls the radiative quasar mode of feedback, which has slower gas outflows and thus a smaller impact on the matter distribution. It is also clear from Fig.~\ref{fig:Pk_SIMBA_allparams} that increasing $\Omega_{\rm m}$ reduces $|\Delta P/P_{\rm DMO}|$, relatively independently of the other parameters. By increasing $\Omega_m$, the ratio $\Omega_b/\Omega_m$ decreases, meaning that halos of a given mass have fewer baryons, and the impact of feedback is therefore reduced. We propose a very simple toy model for this effect in \S\ref{sec:simple_model}. The impact of $\sigma_8$ in Fig.~\ref{fig:Pk_SIMBA_allparams} is less clear. For halos in the mass range shown, we find that increasing $\sigma_8$ leads to a roughly monotonic decrease in $Y_{500c}$ (and $f_b$), presumably because higher $\sigma_8$ means that there are more halos amongst which the same amount of baryons must be distributed. This effect would not occur for cluster-scale halos, because these are rare and large enough to gravitationally dominate their local environments, giving them $f_b \sim \Omega_b/\Omega_m$ regardless of $\sigma_8$. In any case, no clear trend with $\sigma_8$ is seen in Fig.~\ref{fig:Pk_SIMBA_allparams} because $\sigma_8$ does not correlate strongly with $\Delta P/P_{\rm DMO}$. Fig.~\ref{fig:Y_fb_DeltaP} shows the relationship between $\Delta P/P_{\rm DMO}$ and $f_b$ or $Y_{500}$ in different halo mass bins and for different feedback models, colored by the value of $A_{\rm AGN2}$. As in Fig.~\ref{fig:Pk_SIMBA_allparams}, each point represents an average over all halos in the indicated mass range for a particular CAMELS simulation (i.e. at fixed values of the cosmological and feedback parameters). Note that the meaning of $A_{\rm AGN2}$ is not exactly the same across the different feedback models, as noted in \S\ref{sec:camels}. For TNG and SIMBA, we expect increasing $A_{\rm AGN2}$ to lead to stronger AGN feedback, driving more gas out of halos and leading to more power suppression. For Astrid, however, increasing the $A_{\rm AGN2}$ parameter more strongly regulates and suppresses black hole growth in the box. This drastically reduces the number of high-mass black holes, effectively reducing the amount of feedback that can push gas out of halos and leading to less matter power suppression. Therefore, in Fig.~\ref{fig:Y_fb_DeltaP}, we redefine the $A_{\rm AGN2}$ parameter for Astrid to be $1/A_{\rm AGN2}$ when plotting.
For the highest mass bin ($10^{13} < M \,(M_{\odot}/h) < 10^{14}$, rightmost column) our results are in agreement with \citet{vanDaalen:2020}: we find that there is a robust correlation between $f_b/(\Omega_b/\Omega_m)$ and the matter power suppression (see also \citealt{Delgado:23}). This relation is roughly consistent across different feedback subgrid models, although the different models appear to populate different parts of this relation. Moreover, varying $A_{\rm AGN2}$ appears to move points along this relation, rather than broadening the relation. This is in contrast to $\Omega_m$, which, as shown in Fig.~\ref{fig:Pk_SIMBA_allparams}, tends to move simulations in the direction orthogonal to the narrow $f_b$-$\Delta P/P_{\rm DMO}$ locus. For this reason, and given current constraints on $\Omega_m$, we restrict this plot to simulations with $0.2 < \Omega_m < 0.4$. The dashed curves shown in Fig.~\ref{fig:Y_fb_DeltaP} correspond to the toy model discussed in \S\ref{sec:simple_model}. At low halo mass, the relation between $f_b/(\Omega_b/\Omega_m)$ and $\Delta P/P_{\rm DMO}$ appears similar to the high mass bin, although it is somewhat flatter at high $f_b$, and somewhat steeper at low $f_b$. Again the results are fairly consistent across the different feedback prescriptions, although points with high $f_b/(\Omega_b/\Omega_m)$ are largely absent for SIMBA. This is largely because the feedback mechanisms are highly efficient in SIMBA, driving gas out of the parent halos. The relationships between $Y$ and $\Delta P/P_{\rm DMO}$ appear quite similar to those between $\Delta P/P_{\rm DMO}$ and $f_b/(\Omega_b/\Omega_m)$. This is not too surprising, because $Y$ is sensitive to the gas density, which dominates $f_b/(\Omega_b/\Omega_m)$. However, $Y$ is also sensitive to the gas temperature. Our results suggest that variations in the gas temperature are not significantly impacting the $Y$-$\Delta P/P_{\rm DMO}$ relation. These results suggest the possibility of using the tSZ signal rather than $f_b/(\Omega_b/\Omega_m)$ to infer the impact of feedback on the matter distribution. This will be the focus of the remainder of the paper. \begin{figure*} \includegraphics[width=0.95\textwidth]{figs/figs_new/vanDaleen+19_with_camels_A_AGN2.pdf} \caption[]{Impact of baryonic physics on the matter power spectrum at $k=2\, h/{\rm Mpc}$ for the Illustris, SIMBA and Astrid simulations (top, middle, and bottom rows). Each point corresponds to an average across halos in the indicated mass ranges in a different CAMELS simulation. We restrict the figure to simulations that have $0.2 < \Omega_{\rm m} < 0.4$. The dashed curves illustrate the behavior of the model described in \S\ref{sec:simple_model} in the regime where the radius to which gas is ejected by AGN is larger than the halo radius and larger than $2\pi/k$. Note that for the Astrid simulations (marked by an asterisk), we take the inverse of the $A_{\rm AGN2}$ parameter when plotting, as described in the main text. } \label{fig:Y_fb_DeltaP} \end{figure*} Fig.~\ref{fig:scatter_plot_all_ks} shows the same quantities as Fig.~\ref{fig:Y_fb_DeltaP}, but now for a fixed halo mass range ($10^{13} < M\,(M_{\odot}/h) < 10^{14}$), a fixed subgrid prescription (Astrid), and varying values of $k$. We find roughly similar results when using the other subgrid physics prescriptions. At low $k$, we find that there is a regime at high $f_b/(\Omega_b/\Omega_m)$ for which $\Delta P /P_{\rm DMO}$ changes negligibly.
It is only when $f_b/(\Omega_b/\Omega_m)$ becomes very low that $\Delta P/P_{\rm DMO}$ begins to change. On the other hand, at high $k$, there is a near-linear relation between $f_b/(\Omega_b/\Omega_m)$ and $\Delta P/P_{\rm DMO}$. \begin{figure*} \includegraphics[width=0.95\textwidth]{figs/figs_new/vanDaleen+19_with_camels_Astrid_all_ks.pdf} \caption[]{Similar to Fig.~\ref{fig:Y_fb_DeltaP}, but for different values of $k$. For simplicity, we show only the Astrid simulations for halos in the mass range $10^{13} < M \,(M_{\odot}/h) < 10^{14}$. The dashed curves illustrate the behavior of the model described in \S\ref{sec:simple_model} in the regime where the radius to which gas is ejected by AGN is larger than the halo radius and larger than $2\pi/k$. As expected, this model performs best in the limit of high $k$ and large halo mass. } \label{fig:scatter_plot_all_ks} \end{figure*} \subsection{A toy model for power suppression} \label{sec:simple_model} We now describe a simple model for the effects of feedback on the relation between $f_b$ or $Y$ and $\Delta P/P_{\rm DMO}$ that explains some of the features seen in Figs.~\ref{fig:Pk_SIMBA_allparams}, \ref{fig:Y_fb_DeltaP} and \ref{fig:scatter_plot_all_ks}. Following expectations from the literature, we assume in this model that it is the ejection of gas from halos by AGN feedback that is responsible for changes to the matter power spectrum. SN feedback, on the other hand, prevents gas from accreting onto the SMBH, and therefore reduces the impact of AGN feedback \citep{Angles-Alcazar:2017:MNRAS:, Habouzit:2017:MNRAS:}. This scenario is consistent with the fact that at high SN feedback, $\Delta P/P_{\rm DMO}$ goes to zero (second panel from the bottom in Fig.~\ref{fig:Pk_SIMBA_allparams}). We identify three relevant scales: (1) the halo radius, $R_h$; (2) the distance to which gas is ejected by the AGN, $R_{\rm ej}$; and (3) the scale at which the power spectrum is measured, $2\pi/k$. If $R_{\rm ej} < 2\pi/k$, then there will be no impact on $\Delta P$ at $k$: this corresponds to a rearrangement of the matter distribution on scales below where we measure the power spectrum. If, on the other hand, $R_{\rm ej} < R_h$, then there will be no impact on $f_b$ or $Y$, since the gas is not ejected out of the halo. Thus, we can consider four different regimes: \begin{itemize} \item Regime 1: $R_{\rm ej} < R_h$ and $R_{\rm ej} < 2\pi /k$. In this regime, changes to the feedback parameters have no impact on $f_b$ or $\Delta P$. \item Regime 2: $R_{\rm ej} > R_h$ and $R_{\rm ej} < 2\pi/k$. In this regime, changes to the feedback parameters result in movement along the $f_b$ or $Y$ axis without changing $\Delta P$. Gas is being removed from the halo, but the resultant changes to the matter distribution are below the scale at which we measure the power spectrum. Note that Regime 2 cannot occur when $R_h > 2\pi/k$ (i.e. high-mass halos at large $k$). \item Regime 3: $R_{\rm ej} > R_h$ and $R_{\rm ej} > 2\pi/k$. In this regime, changing the feedback amplitude directly changes the amount of gas ejected from halos as well as $\Delta P/P_{\rm DMO}$. \item Regime 4: $R_{\rm ej} < R_h$ and $R_{\rm ej} > 2 \pi/k$. In this regime, gas is not ejected out of the halo, so $f_b$ and $Y$ should not change. In principle, the redistribution of gas within the halo could lead to changes in $\Delta P/P_{\rm DMO}$. However, as we discuss below, this does not seem to happen in practice.
\end{itemize} We note that in this simple model we have neglected the impact of potentially important baryonic processes, such as preventive feedback \citep{Pandya:2020:ApJ:, Pandya:2021:MNRAS:} and the impact of feedback exclusively on the temperature of the hot gas \citep{Ostriker:2005:ApJ:}, which can break hydrostatic equilibrium. Let us now consider the behavior of $\Delta P/P_{\rm DMO}$ and $f_b$ or $Y$ as the feedback parameters are varied in Regime 3. A halo of mass $M$ is associated with an overdensity $\delta_m$ in the absence of feedback, which is changed to $\delta'_m$ due to the ejection of baryons as a result of feedback. In Regime 3, some amount of gas, $M_{\rm ej}$, is completely removed from the halo. This changes the size of the overdensity associated with the halo to \begin{eqnarray} \frac{\delta_m'}{\delta_m} &=& 1 - \frac{M_{\rm ej}} {M}. \end{eqnarray} The change to the power spectrum is then \begin{eqnarray} \label{eq:deltap_over_p} \frac{\Delta P}{P_{\rm DMO}} &\sim& \left( \frac{\delta_m'}{\delta_m} \right)^2 -1 \approx -2\frac{M_{\rm ej}}{M}, \end{eqnarray} where we have assumed that $M_{\rm ej}$ is small compared to $M$. We have ignored the $k$ dependence here, but in Regime 3 the ejection radius is larger than the scale of interest, so the calculated $\Delta P/P_{\rm DMO}$ should apply across a range of $k$ in this regime. The ejected gas mass can be related to the gas mass in the absence of feedback. We write the gas mass in the absence of feedback as $f_c (\Omega_b/\Omega_m) M$, where $f_c$ encapsulates non-feedback processes that result in the halo having less than the cosmic baryon fraction. We then have \begin{eqnarray} M_{\rm ej} &=& f_c(\Omega_b/\Omega_m)M - f_{b} M - M_0, \end{eqnarray} where $M_0$ is the mass that has been removed from the gaseous halo but does not change the power spectrum, e.g. by the conversion of gas into stars. Substituting into Eq.~\ref{eq:deltap_over_p}, we have \begin{eqnarray}\label{eq:DelP_P_fb} \frac{\Delta P}{P_{\rm DMO}} = -2 \frac{f_c\Omega_b}{\Omega_m} \left( 1 -\frac{f_{b}\Omega_m}{f_c \Omega_b} - \frac{\Omega_m M_0}{f_c \Omega_b M} \right). \end{eqnarray} In other words, for Regime 3, we find a linear relation between $\Delta P/P_{\rm DMO}$ and $f_b \Omega_m/\Omega_b$. For high-mass halos, we should have $f_c \approx 1$ and $M_0/M \approx 0$. In this limit, the relationship between $f_b$ and $\Delta P/P_{\rm DMO}$ becomes \begin{eqnarray}\label{eq:DelP_P_fb_lin} \frac{\Delta P}{P_{\rm DMO}} = -2 \frac{\Omega_b}{\Omega_m} \left( 1 -\frac{f_{b}\Omega_m}{\Omega_b} \right), \end{eqnarray} which is linear between $(\Delta P/P_{\rm DMO},f_b \Omega_m/\Omega_b) = (-2\Omega_b/\Omega_m,0)$ and $(\Delta P/P_{\rm DMO},f_b \Omega_m/\Omega_b) = (0,1)$. We show this relation as the dashed line in the $f_b$ columns of Figs.~\ref{fig:Y_fb_DeltaP} and \ref{fig:scatter_plot_all_ks}. We can repeat the above argument for $Y$. Unlike the case with $f_b$, processes other than the removal of gas may reduce $Y$; these include, e.g., changes to the gas temperature in the absence of AGN feedback, or nonthermal pressure support. We account for these with a term $Y_0$, defined such that when $M_{\rm ej} = M_0 = 0$, we have $Y + Y_0 = f_c (\Omega_b/\Omega_m) MT /\alpha$, where we have assumed constant gas temperature, $T$, and $\alpha$ is a dimensionful constant of proportionality. We then have \begin{eqnarray} \frac{\alpha(Y+Y_0)}{T} = f_c (\Omega_b / \Omega_m)M - M_{\rm ej} - M_0.
\end{eqnarray} Substituting the above equation into Eq.~\ref{eq:deltap_over_p}, we have \begin{eqnarray} \frac{\Delta P}{P_{\rm DMO}} &=& -2\frac{f_c\Omega_b}{\Omega_m} \left(1 - \frac{\alpha (Y+Y_0) \Omega_m}{f_c TM \Omega_b} - \frac{\Omega_m M_0}{f_c \Omega_b M} \right) . \end{eqnarray} Following Eq.~\ref{eq:y_ss}, we define the self-similar value of $Y$, $Y^{\rm SS}$, via \begin{eqnarray} \alpha Y^{\rm SS}/T = (\Omega_b/\Omega_m)M, \end{eqnarray} leading to \begin{eqnarray} \frac{\Delta P}{P_{\rm DMO}} &=& -2\frac{f_c\Omega_b}{\Omega_m} \left(1 - \frac{(Y+Y_0)}{f_c Y^{\rm SS}} - \frac{\Omega_m M_0}{f_c \Omega_b M}\right). \end{eqnarray} Again taking the limit that $f_c \approx 1$ and $M_0/M \approx 0$, we have \begin{eqnarray} \frac{\Delta P}{P_{\rm DMO}} &=& -2\frac{\Omega_b}{\Omega_m} \left(1 - \frac{(Y+Y_0)}{ Y^{\rm SS}} \right). \end{eqnarray} Thus, we see that in Regime 3 the relation between $Y/Y^{\rm SS}$ and $\Delta P/P_{\rm DMO}$ is linear. The $Y/Y^{\rm SS}$ columns of Fig.~\ref{fig:Y_fb_DeltaP} show this relationship, assuming $Y_0 = 0$. In summary, we interpret the results of Figs.~\ref{fig:Y_fb_DeltaP} and \ref{fig:scatter_plot_all_ks} in the following way. Starting at low feedback amplitude, we are initially in Regime 1. In this regime, the simulations cluster around $f_b \Omega_m/(f_c \Omega_b) \approx 1$ (or $Y \approx Y_0$) and $\Delta P/P_{\rm DMO} \approx 0$, since changing the feedback parameters in this regime does not impact $f_b$ or $\Delta P/P_{\rm DMO}$. For high-mass halos, we have $f_c \approx 1$ and $Y_0 \approx 0$ (although SIMBA appears to have $Y_0 >0$, even at high mass); for low-mass halos, $f_c < 1$ and $Y_0 >0$. As we increase the AGN feedback amplitude, the behavior is different depending on halo mass and $k$: \begin{itemize} \item For low halo masses or low $k$, increasing the AGN feedback amplitude leads the simulations into Regime 2. Increasing the feedback amplitude in this regime moves points to lower $Y/Y^{\rm SS}$ (or $f_b \Omega_m/\Omega_b$) without significantly impacting $\Delta P/P_{\rm DMO}$. Eventually, when the feedback amplitude is sufficiently strong, these halos enter Regime 3, and we see a roughly linear decline in $\Delta P/P_{\rm DMO}$ with decreasing $Y/Y^{\rm SS}$ (or $f_b\Omega_m/\Omega_b$), as discussed above. \item For high-mass halos and high $k$, we never enter Regime 2, since it is not possible to have $R_{\rm ej} > R_h$ and $R_{\rm ej} < 2\pi/k$ when $R_h$ is very large. In this case, we eventually enter Regime 3, leading to a linear trend of decreasing $\Delta P/P_{\rm DMO}$ with decreasing $Y/Y^{\rm SS}$ or $f_b \Omega_m/\Omega_b$, as predicted by the above discussion. This behavior is especially clear in Fig.~\ref{fig:scatter_plot_all_ks}: at high $k$, the trend closely follows the predicted linear relation. At low $k$, on the other hand, we see a more prominent Regime 2 region. The transition between these two regimes is expected to occur when $k \sim 2\pi/R_h$, which is roughly $5\,h\,{\rm Mpc}^{-1}$ for the halo mass regime shown in the figure. This expectation is roughly confirmed in the figure. \end{itemize} Interestingly, we never see Regime 4 behavior: when the halo mass is large and $k$ is large, we do not see rapid changes in $\Delta P/P_{\rm DMO}$ with little change to $f_b$ and $Y$. This could be because this regime corresponds to movement of the gas entirely within the halo. If the gas has time to re-equilibrate, it makes sense that we would see little change to $\Delta P/P_{\rm DMO}$.
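As a quick numerical illustration of the Regime 3 limit (the dashed lines in Figs.~\ref{fig:Y_fb_DeltaP} and \ref{fig:scatter_plot_all_ks}), the following minimal sketch evaluates Eq.~\ref{eq:DelP_P_fb}; the parameter values are illustrative choices, not fits.
\begin{verbatim}
def dP_over_P(x, Omega_b=0.045, Omega_m=0.3, f_c=1.0, M0_frac=0.0):
    # Regime 3 toy model, Eq. (eq:DelP_P_fb), with
    # x = f_b * Omega_m / Omega_b and M0_frac = M_0 / M
    return -2.0 * f_c * Omega_b / Omega_m * (
        1.0 - x / f_c - (Omega_m / Omega_b) * M0_frac / f_c)

# e.g. a halo retaining half the cosmic baryon fraction
# (x = 0.5, f_c = 1, M_0 = 0) with Omega_b/Omega_m = 0.15
# gives Delta P / P_DMO = -2 * 0.15 * 0.5 = -0.15
print(dP_over_P(0.5))  # -0.15
\end{verbatim}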
\subsection{Predicting the power spectrum suppression from the halo observables} While the toy model described above roughly captures the trends between $Y$ (or $f_b$) and $\Delta P/P_{\rm DMO}$, it of course does not capture all of the physics associated with feedback. It is also clear that there is significant scatter in the relationships between the observable quantities and $\Delta P$. It is possible that this scatter is reduced in some higher-dimensional space that includes more observables. To address both of these issues, we now train statistical models to learn the relationships between observable quantities and $\Delta P/P_{\rm DMO}$. We focus on results obtained with random forest regression \citep{Breiman2001}. We have also tried using neural networks to infer these relationships, but have not found any significant improvement with respect to the random forest results, presumably because the space is low-dimensional (i.e. we consider at most about five observable quantities at a time). We leave a detailed comparison with other decision-tree-based approaches, such as gradient boosted trees \citep{Friedman_boosted_tree:01}, to a future study. \begin{figure*} \includegraphics[width=0.95\textwidth]{figs/figs_new/train_test_variantions_updated.pdf} \caption[]{ The random forest regressor was used to predict the power suppression, represented by $\Delta P/P_{\rm DMO}$, in the LH suite of simulations at four different scales $k$ using the subgrid physics models TNG, SIMBA, and Astrid. The model was trained using the average $f_b$ of halos with masses in the range $5\times10^{12} < M \,(M_{\odot}/h) < 10^{14}$ and the cosmological parameter $\Omega_m$. The errorbars and gray band indicate the uncertainty in the predictions normalized by the uncertainty in the CV suite at each scale, with the former showing the 16-84 percentile error on the test set and the latter representing the expected 1$\sigma$ error from the CV suite. The model performs well when the training and test simulations are the same. When tested on an independent simulation, it remains robust at high $k$ but becomes biased at low $k$. Results in the remainder of the paper are based on training the model on all three simulations. The data points at each scale are staggered for clarity. } \label{fig:Pk_Y_CV} \end{figure*} We train a random forest model to go from observable quantities (e.g. $f_b/(\Omega_b/\Omega_m)$ and $Y_{500}/Y^{\rm SS}$) to a prediction for $\Delta P/P_{\rm DMO}$ at multiple $k$ values. The random forest model uses 100 trees with a maximum depth of 10.\footnote{We use the publicly available code: \url{https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html}. We also verified that our conclusions are robust to changing the settings of the random forest.} In this section we analyze the halos in the mass bin $5\times 10^{12} < M_{\rm halo} \,(M_{\odot}/h) < 10^{14}$, but we also show the results for halos with lower masses in Appendix~\ref{app:low_mass}. We also consider supplying the value of $\Omega_{\rm m}$ as input to the random forest, since it can be constrained precisely through other observations (e.g. CMB observations), and, as we showed in \S\ref{sec:fbY}, the cosmological parameters can impact the observables.\footnote{One might worry that using cosmological information to constrain $\Delta P/P_{\rm DMO}$ defeats the whole purpose of constraining $\Delta P/P_{\rm DMO}$ in order to improve cosmological constraints.
However, observations, such as those of the CMB primary anisotropies, already provide precise constraints on the matter density without using information in the small-scale matter distribution. } Ultimately, we are interested in making predictions for $\Delta P/P_{\rm DMO}$ using observable quantities. However, the sample variance in the CAMELS simulations limits the precision with which we can measure $\Delta P/P_{\rm DMO}$. It is not possible to predict $\Delta P/P_{\rm DMO}$ to better than this precision. We will therefore normalize the uncertainties in the RF predictions by the cosmic variance error. In order to obtain the uncertainty in the prediction, we randomly split the data into a 70\% training and a 30\% test set. After training the RF regressor using the training set and a given observable, we compute the 16th and 84th percentiles of the distribution of prediction errors evaluated on the test set. This constitutes our assessment of prediction uncertainty. \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figs/figs_new/plot1_y500_fb_comp_FINAL_v2.pdf} \caption{ Similar to Fig.~\ref{fig:Pk_Y_CV}, but when training the RF model on different observables from all three simulations (TNG, SIMBA and Astrid) to predict $\Delta P/P_{\rm DMO}$ for a random subset of the three simulations not used in training. We find that jointly training on the deviation of the integrated SZ profile from the self-similar expectation, $Y_{500c}/Y^{\rm SS}$, and $\Omega_m$ results in inference of the power suppression that is comparable to cosmic variance errors, with small improvements when additionally adding the baryon fraction ($f_b$) of halos in the above mass range. } \label{fig:predict_y500_fb} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figs/figs_new/plot2_yprof_comp_FINAL_v2.pdf} \caption{ Same as Fig.~\ref{fig:predict_y500_fb}, but when using the full pressure profile and electron number density profiles instead of the integrated quantities. We again find that with the pressure profile and $\Omega_m$ information we can recover robust and precise constraints on the matter power suppression. } \label{fig:predict_profiles} \end{figure*} Fig.~\ref{fig:Pk_Y_CV} shows the accuracy of the RF predictions for $\Delta P/P_{\rm DMO}$ when trained on the stacked $f_b$ (for halos with $5\times 10^{12} < M_{\rm halo} \,(M_{\odot}/h) < 10^{14}$) and $\Omega_m$, normalized to the sample variance error in $\Delta P/P_{\rm DMO}$. As we will show later in this section, this combination of inputs results in precise constraints on the matter power suppression. Specifically, to obtain the constraints, after training the RF regressor on the training simulations, we predict $\Delta P/P_{\rm DMO}$ for the test simulation boxes at four scales. Thereafter, we create a histogram of the difference between the true and predicted $\Delta P/P_{\rm DMO}$, normalized by the variance obtained from the CV set of simulations, for each respective suite of simulations (see Fig.~\ref{fig:Pk_Bk_CV}). In Fig.~\ref{fig:Pk_Y_CV}, each errorbar corresponds to the 16th and 84th percentiles of this histogram and the marker corresponds to its peak. We show the results of training and testing on a single simulation suite, and also the results of training/testing across different simulation suites. It is clear that when training and testing on the same simulation suite, the RF learns a model that comes close to the best possible uncertainty on $\Delta P/P_{\rm DMO}$ (i.e. cosmic variance).
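The training and evaluation procedure described above amounts to the following minimal sketch, shown here with synthetic placeholder inputs standing in for the actual CAMELS measurements:
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Placeholder inputs: one row per LH simulation. In the actual
# analysis, X holds e.g. the stacked f_b/(Omega_b/Omega_m) and
# Omega_m, and y holds Delta P/P_DMO at a given k.
rng = np.random.default_rng(0)
X = rng.uniform(size=(1000, 2))
y = -0.5 * (1.0 - X[:, 0])           # toy stand-in target
sigma_CV = 0.01                      # stand-in CV scatter

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)
rf = RandomForestRegressor(n_estimators=100, max_depth=10)
rf.fit(X_tr, y_tr)

# prediction errors on the held-out 30%, normalized by the
# cosmic-variance scatter; quote the 16th/84th percentiles
err = (rf.predict(X_te) - y_te) / sigma_CV
lo, hi = np.percentile(err, [16, 84])
\end{verbatim}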
When training on one or two simulation suites and testing on another, however, the predictions show bias at low $k$. This suggests that the model learned from one simulation does not generalize very well to another in this regime. This result is somewhat different from the findings of \citet{vanDaalen:2020}, where it was found that the relationship between $f_b$ and $\Delta P/P_{\rm DMO}$ \textit{did} generalize to different simulations. This difference may result from the fact that we are considering a wider range of feedback prescriptions than in \citet{vanDaalen:2020}, as well as considering significant variations in cosmological parameters.

Fig.~\ref{fig:Pk_Y_CV} also shows the results of testing and training on all three simulations (black points with errorbars). Encouragingly, we find that in this case, the predictions are of comparable accuracy to those obtained from training and predicting on the same simulation suite. This suggests that there is a general relationship across all feedback models that can be learned to go from $\Omega_m$ and $f_b$ to $\Delta P/P_{\rm DMO}$. Henceforth, we will show results trained and tested on all simulation suites. Of course, this result does not imply that our results will generalize to some completely different feedback prescription.

In Fig.~\ref{fig:predict_y500_fb} we show the results of training the random forest on different combinations of $f_b$, $Y_{500}$ and $\Omega_m$. Consistent with the findings of \citet{vanDaalen:2020}, we find that $f_b/(\Omega_b/\Omega_m)$ results in robust constraints on the matter power suppression (blue points with errors). These constraints come close to the cosmic variance limit across a wide range of $k$. We additionally find that providing $f_b$ and $\Omega_m$ as separate inputs to the RF improves on the combination $f_b/(\Omega_b/\Omega_m)$, yielding smaller variance in the predicted $\Delta P/P_{\rm DMO}$, with the largest improvement at small scales. This is not surprising given the predictions of our simple model, for which it is clear that $\Delta P/P_{\rm DMO}$ can be impacted by both $\Omega_m$ and $f_b/(\Omega_b/\Omega_m)$ independently. Similarly, it is clear from Fig.~\ref{fig:Pk_SIMBA_allparams} that changing $\Omega_m$ changes the relationship between $\Delta P/P_{\rm DMO}$ and the halo gas-derived quantities (like $Y$ and $f_b$).

We next consider a model trained on $Y_{500c}/Y^{\rm SS}$ (orange points in Fig.~\ref{fig:predict_y500_fb}). This model yields reasonable predictions for $\Delta P/P_{\rm DMO}$, although not quite as good as the model trained on $f_b/(\Omega_b/\Omega_m)$: it yields somewhat larger errorbars, and the distribution of $\Delta P/P_{\rm DMO}$ predictions is highly asymmetric. When we train the RF model jointly on $Y_{500c}/Y^{\rm SS}$ and $\Omega_m$ (green points), we find that the predictions improve considerably, particularly at high $k$. In this case, the predictions are typically symmetric around the true $\Delta P/P_{\rm DMO}$, have smaller uncertainty compared to the model trained on $f_b/(\Omega_b/\Omega_m)$, and comparable uncertainty to the model trained on $\{f_b/(\Omega_b/\Omega_m), \Omega_m\}$. We thus conclude that when combined with matter density information, $Y/Y^{\rm SS}$ provides a powerful probe of baryonic effects on the matter power spectrum. Above we have considered the integrated tSZ signal from halos, $Y_{500c}$.
Measurements in data, however, can potentially probe the tSZ profiles rather than only the integrated tSZ signal (although the instrumental resolution may limit the extent to which this is possible). In Fig.~\ref{fig:predict_profiles} we consider RF models trained on the stacked full electron density and pressure profiles in the same halo mass range, instead of just the integrated quantities. The electron pressure and number density profiles are measured in eight logarithmically spaced bins between $0.1 < r/r_{200c} < 1$. We find that while the ratio $P_e(r)/P^{\rm SS}$ results in robust constraints, jointly providing the information on $\Omega_{\rm m}$ makes them more precise. Similar to the integrated case, we find that additionally providing the electron density profile information only marginally improves the constraints. We also show the results when jointly using the measured pressure profiles of both the low and high mass halos to infer the matter power suppression. We find that this leads to only marginal improvements in the constraints. This suggests that the deviation of the thermal pressure from the expected self-similar relation already captures the strength of feedback adequately, with minimal gains from adding information from lower-mass halos. Note that we have input the 3D pressure and electron density profiles in this case. Even though observed SZ maps are projected quantities, we can infer the 3D pressure profiles from the model used to analyze the projected correlations.

\begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figs/figs_new/plot4_Bkeq_comp_all.pdf} \caption{Same as Fig.~\ref{fig:predict_y500_fb}, but for the bispectrum suppression for equilateral triangle configurations at different scales. We find that having pressure profile information results in unbiased constraints here as well. } \label{fig:predict_Bk_eq} \end{figure*}

\subsection{Predicting the bispectrum suppression with $f_b$ and electron pressure}

In Fig.~\ref{fig:predict_Bk_eq}, we test our methodology on the bispectrum suppression, $\Delta B(k)/B(k)$. As for the matter power spectrum, we train and test our model on a combination of the three simulations, using equilateral triangle bispectrum configurations at different scales $k$. We again see that information about the electron pressure and $\Omega_m$ results in precise and unbiased constraints on the bispectrum suppression. The constraints improve as we go to smaller scales. In Appendix~\ref{app:Bk_sq} we apply a similar methodology to squeezed bispectrum configurations. Several caveats are important to consider here, however. The bispectrum is more sensitive to higher mass ($M > 5\times 10^{13} M_{\odot}/h$) halos, which are missing from the CAMELS simulations; as a result, estimates of the bispectrum suppression may be biased in these small boxes. We also note that there may additionally be important dependencies on the simulation resolution, which are beyond the scope of this study. However, we expect the qualitative methodology applied here to remain valid for larger simulation suites. Finally, there will be some degeneracy between the power spectrum suppression and the bispectrum suppression, as they both stem from the same underlying physics; we defer a study of this degeneracy to future work.
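To make the random forest pipeline used throughout this section concrete, we sketch below how such a regressor can be trained and its uncertainty quantified. This is a minimal illustration assuming the stacked observables and measured suppressions have already been assembled into arrays; the file names, feature ordering and CV scatter value are placeholders, not part of the CAMELS data products.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Placeholder inputs: one row per LH simulation.
# Columns of X: e.g. [<Y_500c/Y^SS>, Omega_m]; y: Delta P/P_DMO at one k.
X = np.load("observables.npy")      # shape (n_sims, n_features)
y = np.load("delta_P_over_P.npy")   # shape (n_sims,)
sigma_cv = 0.01                     # CV-suite scatter at this k (placeholder)

# 70/30 train/test split, as described in the text.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Settings quoted in the text: 100 trees, max_depth = 10.
rf = RandomForestRegressor(n_estimators=100, max_depth=10, random_state=0)
rf.fit(X_tr, y_tr)

# Prediction errors on the test set, normalized by the cosmic variance error.
err = (rf.predict(X_te) - y_te) / sigma_cv
lo, hi = np.percentile(err, [16, 84])
print(f"16th/84th percentiles of normalized error: {lo:.2f}, {hi:.2f}")
\end{verbatim}
The same pattern applies when the inputs are the binned pressure and electron density profiles, or when the target is the bispectrum suppression $\Delta B/B$.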
\section{Results II: ACTxDES measurements and forecast}
\label{sec:results_data}

\begin{figure*} \includegraphics[scale = 0.45]{figs/figs_new/power_supp_data_forecast_v2.pdf} \caption[]{Constraints on the matter power suppression obtained from the inferred $Y_{\rm 500c}/Y^{\rm SS}$ (fixing $\Omega_{\rm m} = 0.3$) from the shear-$y$ correlations measured in the DESxACT analysis \citep{Pandey:2022}. We also show the expected improvements from future halo-$y$ correlations from DESIxSO using the constraints in \citet{Pandey:2020}. We compare these to the inferred constraints obtained using cosmic shear \citep{Chen:2022:MNRAS:} and additionally including X-ray and kSZ data \citep{Schneider:2022:MNRAS:}. We also compare with the results from larger simulations: OWLS \citep{Schaye:2010:MNRAS:}, BAHAMAS \citep{McCarthy:2017:MNRAS:} and TNG-300 \citep{Springel:2018:MNRAS:}. } \label{fig:Pk_data_forecast} \end{figure*}

Our analysis above has resulted in a statistical model (random forest) that predicts $\Delta P/P_{\rm DMO}$ (and the corresponding uncertainties) given values of $Y_{500c}$ for low-mass halos and $\Omega_m$. This model is robust to significant variations in the feedback prescription, at least across the SIMBA, TNG and Astrid models. We now apply this model to constraints on $Y_{500c}$ coming from the cross-correlation of galaxy lensing shear with tSZ maps. \citet{Gatti:2022} and \citet{Pandey:2022} measured cross-correlations of DES galaxy lensing with Compton $y$ maps from a combination of Advanced-ACT \citep{Madhavacheril:2020:PhRvD:} and {\it Planck} data \citep{PlanckCollaboration:2016:A&A:} over an area of 400 sq. deg. They analyze these cross-correlations using a halo model framework, where the pressure profile in halos is parameterized using a generalized Navarro-Frenk-White profile \citep{Navarro:1996:ApJ:, Battaglia:2012:ApJ:b}. This pressure profile is described using four free parameters, allowing for scaling with mass, redshift and distance from the halo center. A tomographic analysis of this shear-$y$ correlation constrains these parameters and hence the pressure profiles of halos across a wide range of masses. The constraints on these parameterized profiles can be translated directly into constraints on $Y_{500c}$ for halos in the mass range that we have used to infer the constraints on the matter power suppression from the trained random forest model, as described in the previous section. Note that the shear-$y$ correlation has sensitivity across the mass range relevant to our trained model ($M > 5 \times 10^{12} M_{\odot}/h$), but that the sensitivity is reduced towards the low end of this range.

Fig.~\ref{fig:Pk_data_forecast} shows the results of feeding the inferred $Y_{\rm 500c}$ constraints from \citet{Pandey:2022} into our random forest model to infer the impact of baryonic feedback on the matter power spectrum (black points with errorbars). Note that in this inference we fix the matter density parameter to $\Omega_{\rm m} = 0.3$, the same value as used by the CAMELS CV simulations, since we use these to estimate the halo mass function. The shear-tSZ correlation analysis provides constraints on the parameters of the 3D pressure profile of halos and its evolution with mass and redshift. We use these parameter constraints to generate 400 samples of the inferred 3D profiles of the halos at $z=0$ in 10 logarithmic mass bins in the range $12.7 < \log_{10}(M \, [M_{\odot}/h]) < 14$. Then we perform the volume integral of these profiles to infer $Y_{\rm 500c}(M, z)$ (see Eq.~\ref{eq:Y500_from_Pe}).
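A minimal numerical version of this step, assuming the sampled 3D profiles are available on a radial grid, is sketched below; it evaluates the volume integral of Eq.~\ref{eq:Y500_from_Pe} and the self-similar normalization of Eq.~\ref{eq:y_ss}. The unit handling is schematic, and the inputs are placeholders for the profile samples described above.
\begin{verbatim}
import numpy as np
from scipy.integrate import simpson

sigma_T = 6.6524587e-29   # Thomson cross-section [m^2]
m_e_c2  = 8.1871057e-14   # electron rest-mass energy [J]

def Y500c(r, P_e, r500c):
    """Eq. (Y500_from_Pe): r [m], P_e [J/m^3]; returns Y_500c in m^2."""
    m = r <= r500c
    integrand = 4.0 * np.pi * r[m]**2 * P_e[m]
    return sigma_T / m_e_c2 * simpson(integrand, x=r[m])

def Y_SS(M500c, Omega_b, Omega_m, h=0.7):
    """Eq. (y_ss): self-similar expectation in kpc^2; M500c in M_sun."""
    h70 = h / 0.7
    return (131.7 / h70) * (M500c * h70 / 1e15)**(5.0 / 3.0) \
        * (Omega_b / 0.043) * (0.25 / Omega_m)

# NB: convert Y500c from m^2 to kpc^2 before forming Y_500c / Y^SS.
\end{verbatim}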
Afterwards, we generate the stacked normalized integrated pressure for each sample $j$ by integrating over the halo masses as:
\begin{equation}\label{eq:Pe_stacked_data}
\bigg\langle \frac{Y_{\rm 500c}}{Y^{\rm SS}} \bigg\rangle^j = \frac{1}{\bar{n}^j} \int dM \bigg(\frac{dn}{dM}\bigg)^j_{\rm CAMELS} \frac{Y^j_{\rm 500c}(M)}{Y^{\rm SS}}
\end{equation}
where $\bar{n}^j = \int dM (dn/dM)^j_{\rm CAMELS}$ and $(dn/dM)^j_{\rm CAMELS}$ is a randomly chosen halo mass function from the CV set of boxes of TNG, SIMBA or Astrid. This incorporates the impact (and the corresponding uncertainty) of the small box size on the halo mass function. Note that due to the small box size, there is a deficit of high-mass halos, and hence the functional form differs from fitting functions in the literature \citep[e.g.][]{Tinker:2008:ApJ:}. Thereafter, we feed these stacked, normalized integrated pressures into the random forest regressor, jointly trained on TNG, SIMBA and Astrid. For each input sample, we recover the value of the matter power suppression, $\Delta P/P_{\rm DMO}$. Finally, in Fig.~\ref{fig:Pk_data_forecast}, we plot the mean and the 16th and 84th percentiles of the recovered $\Delta P/P_{\rm DMO}$ distribution from the 400 samples. We note that our inference of the uncertainties is robust to the number of samples considered.

In the same figure, we also show the constraints from \citet{Chen:2022:MNRAS:} and \citet{Schneider:2022:MNRAS:} obtained from the analysis of complementary datasets. \citet{Chen:2022:MNRAS:} analyze the small-scale cosmic shear measurements from the DES Year-3 data release using a baryon correction model. Note that this analysis uses only a limited range of cosmologies, in particular restricting to a high-$\sigma_8$ range due to the emulator calibration. Moreover, they impose cosmology constraints from the large-scale analysis of the DES data. Unlike the procedure presented here, their modeling and constraints are therefore sensitive to the priors on $\sigma_8$, and their constraints might be optimistic in this case. \citet{Schneider:2022:MNRAS:} analyze X-ray data (as presented in \citealt{Giri:2021:JCAP:}), kSZ data from ACT and SDSS \citep{Schaan:2021:PhRvD:} and the cosmic shear measurements from KiDS \citep{Asgari:2021}, using another version of the baryon correction model. A joint analysis of these complementary datasets leads to crucial degeneracy breaking among the parameters. It would be interesting to include the tSZ observations presented here in the same framework, as this could potentially make the constraints more precise.

Several caveats about our analysis with data are in order. First, the lensing-SZ correlation is most sensitive to halos with $M_{\rm halo} \geq 10^{13} M_{\odot}/h$. However, our RF model operates on halos with masses in the range $5 \times 10^{12} \leq M_{\rm halo} \, (M_{\odot}/h) \leq 10^{14}$, with the limited volume of the simulations restricting the number of halos above $10^{13} M_{\odot}/h$. We have attempted to account for this selection effect by using the halo mass function from the CV sims of the CAMELS simulations when calculating the stacked profile. However, using a larger volume simulation suite would be a better alternative (see also the discussion in Appendix~\ref{app:volume_res_comp}). Moreover, the CAMELS simulation suites fix $\Omega_{\rm b}$ to a fiducial value.
There might be non-trivial effects on the inferences when varying this parameter, as it would impact the distribution of baryons, especially in low-mass halos, and its interplay with the changing baryonic feedback.

In order to shift the sensitivity of the data correlations to lower halo masses, it would be preferable to analyze galaxy-SZ and halo-SZ correlations. In \citet{Pandey:2020} we forecast the constraints on the inferred 3D pressure profile from future halo-SZ correlations using DESI halos and CMB-S4 SZ maps for a wide range of halo masses. In Fig.~\ref{fig:Pk_data_forecast} we also show the expected constraints on the matter power suppression using the halo-SZ correlations from halos with $M_h > 5\times 10^{12} M_{\odot}/h$. We again follow the same methodology as described above to create a stacked normalized integrated pressure (see Eq.~\ref{eq:Pe_stacked_data}), and we again fix $\Omega_{\rm m}=0.3$ to predict the matter power suppression. Note that we shift the mean value of $\Delta P/P_{\rm DMO}$ to the recovered value from the BAHAMAS high-AGN simulation \citep{McCarthy:2017:MNRAS:}. As we can see in Fig.~\ref{fig:Pk_data_forecast}, we can expect to obtain significantly more precise constraints from these future observations.

\section{Conclusions}
\label{sec:conclusion}

We have shown that the tSZ signals from low-mass halos contain significant information about the impacts of baryonic feedback on the small-scale matter distribution. Using models trained on hydrodynamical simulations with a wide range of feedback implementations, we demonstrate that information about baryonic effects on the power spectrum and bispectrum can be robustly extracted. By applying these same models to measurements with ACT and DES, we have shown that current tSZ measurements already constrain the impact of feedback on the matter distribution. Our results suggest that using simulations to learn the relationship between halo gas observables and baryonic effects on the matter distribution is a promising way forward for constraining these effects with data.

Our main findings from our explorations with the CAMELS simulations are the following: \begin{itemize} \item In agreement with \citet{vanDaalen:2020}, we find that the baryon fraction in halos correlates with the power spectrum suppression, and that the correlation is especially robust on small scales. \item We find (in agreement with \citealt{Delgado:23}) that there can be significant scatter in the relationship between baryon fraction and power spectrum suppression at low halo mass, and that the relationship varies to some degree with feedback implementation. However, the bulk trends appear to be consistent regardless of feedback implementation. \item We propose a simple model that qualitatively (and in some cases quantitatively) captures the broad features in the relationships between feedback, $\Delta P/P_{\rm DMO}$ (at different values of $k$), and halo gas-related observables like $f_b$ and $Y$ (at different halo masses). \item Despite significant scatter in the relations between $Y$ and $\Delta P/P_{\rm DMO}$ at low halo mass, we find that simple random forest models yield tight and robust constraints on $\Delta P/P_{\rm DMO}$ given information about $Y$ in low-mass halos and $\Omega_m$. \item Using the pressure profile instead of just the integrated $Y_{\rm 500c}$ signal provides additional information about $\Delta P/P_{\rm DMO}$, leading to 20--50\% improvements when not using any cosmological information.
When additionally providing the $\Omega_m$ information, the improvements from using the full pressure profile relative to the integrated quantities are modest, for both the power spectrum and bispectrum suppression. \item The pressure profiles and baryon fractions also carry information about baryonic effects on the bispectrum. \end{itemize}

Our main findings from our analysis of constraints from the DESxACT shear-$y$ correlation analysis are: \begin{itemize} \item The electron pressure profile measured from joint analyses of tSZ and LSS data can be used to infer the matter power suppression. We infer competitive constraints on this suppression using the measurements of the shear-$y$ correlation from DES and ACT data. \item We also show that the constraints will improve significantly in the future, particularly using the halo catalog from DESI and tSZ maps from CMB Stage 4. \end{itemize}

With data from future galaxy and CMB surveys, we expect constraints on the tSZ signal from halos across a wide mass and redshift range to improve significantly. These improvements will come from both the galaxy side (e.g. halos detected over larger areas of the sky, down to lower masses and out to higher redshifts) and the CMB side (more sensitive tSZ maps over larger areas of the sky). Our forecast for DESI and CMB Stage 4 in Fig.~\ref{fig:Pk_data_forecast} suggests that very tight constraints can be obtained on the impact of baryonic feedback on the matter power spectrum. By combining these results with weak lensing constraints on the small-scale matter distribution, we expect to be able to extract significantly more cosmological information.

\bibliographystyle{mnras}

\section{Introduction}\label{sec:intro}

The statistics of the matter distribution on scales $k \gtrsim 0.1\,h{\rm Mpc}^{-1}$ are tightly constrained by current weak lensing surveys \citep[e.g.][]{Asgari:2021,DESY3cosmo}. However, modeling the matter distribution on these scales to extract cosmological information is complicated by the effects of baryonic feedback \citep{Rudd:2008}. Energetic output from active galactic nuclei (AGN) and stellar processes (e.g. winds and supernovae) directly impacts the distribution of gas on small scales, thereby changing the total matter distribution \citep[e.g.][]{Chisari:2019}.\footnote{Changes to the gas distribution can also gravitationally influence the dark matter distribution, further modifying the total matter distribution.} The coupling between these processes and the large-scale gas distribution is challenging to model theoretically and in simulations because of the large dynamic range involved, from the scales of individual stars to the scales of galaxy clusters. While it is generally agreed that feedback leads to a suppression of the matter power spectrum on scales $0.1\,h{\rm Mpc}^{-1} \lesssim k \lesssim 20\,h{\rm Mpc}^{-1}$, the amplitude of this suppression remains uncertain by tens of percent \citep{vanDaalen:2020, Villaescusa-Navarro:2021:ApJ:} (see also Fig.~\ref{fig:Pk_Bk_CV}). This systematic uncertainty limits constraints on cosmological parameters from current weak lensing surveys \citep[e.g.][]{DESY3cosmo,Asgari:2021}. For future surveys, such as the Vera Rubin Observatory LSST \citep{TheLSSTDarkEnergyScienceCollaboration:2018:arXiv:} and \textit{Euclid} \citep{EuclidCollaboration:2020:A&A:}, the problem will become even more severe given expected increases in statistical precision.
In order to reduce the systematic uncertainties associated with feedback, we would like to identify observable quantities that carry information about the impact of feedback on the matter distribution and develop approaches to extract this information \citep[e.g.][]{Nicola:2022:JCAP:}. Recently, \citet{vanDaalen:2020} showed that the halo baryon fraction, $f_b$, in halos with $M \sim 10^{14}\,M_{\odot}$ carries significant information about the suppression of the matter power spectrum caused by baryonic feedback. They found that the relation between $f_b$ and matter power suppression was robust to at least some changes in the subgrid prescriptions for feedback physics. Note that $f_b$ as defined by \citet{vanDaalen:2020} counts baryons in both the intracluster medium (ICM) and in stars. The connection between $f_b$ and feedback is expected, since one of the main drivers of feedback's impact on the matter distribution is the ejection of gas from halos by AGN. Therefore, when feedback is strong, halos will be depleted of baryons and $f_b$ will be lower. The conversion of baryons into stars --- which will not significantly impact the matter power spectrum on large scales --- does not impact $f_b$, since $f_b$ includes baryons in stars as well as in the ICM. \citet{vanDaalen:2020} specifically consider the measurement of $f_b$ in halos with $6\times 10^{13} M_{\odot} \lesssim M_{500c} \lesssim 10^{14}\,M_{\odot}$. In much more massive halos, the energy output of AGN is small compared to the binding energy of the halo, preventing gas from being expelled. In smaller halos, \citet{vanDaalen:2020} found that the correlation between power spectrum suppression and $f_b$ is less clear.

Although $f_b$ carries information about feedback, it is somewhat unclear how one would measure $f_b$ in practice. Observables such as the kinematic Sunyaev-Zel'dovich (kSZ) effect can be used to constrain the gas density; combined with some estimate of stellar mass, $f_b$ could then be inferred. However, measuring the kSZ is challenging, and current measurements have low signal-to-noise \citep{Hand:2012,Hill:2016,Soergel:2016}. Moreover, \citet{vanDaalen:2020} consider a relatively limited range of feedback prescriptions. It is unclear whether a broader range of feedback models could lead to a greater spread in the relationship between $f_b$ and baryonic effects on the power spectrum. In any case, it is worthwhile to consider other potential observational probes of feedback.

Another potentially powerful probe of baryonic feedback is the thermal SZ (tSZ) effect. The tSZ effect is caused by inverse Compton scattering of CMB photons with a population of electrons at high temperature. This scattering process leads to a spectral distortion in the CMB that can be reconstructed from multi-frequency CMB observations. The amplitude of this distortion is sensitive to the line-of-sight integral of the electron pressure. Since feedback changes the distribution and thermodynamics of the gas, it stands to reason that it could impact the tSZ signal. Indeed, several works using both data \citep[e.g.][]{Pandey:2019,Pandey:2022,Gatti:2022} and simulations \citep[e.g.][]{Scannapieco:2008,Bhattacharya:2008,Moser:2022,Wadekar:2022} have shown that the tSZ signal from low-mass (group scale) halos is sensitive to feedback. Excitingly, the sensitivity of tSZ measurements is expected to increase dramatically in the near future due to high-sensitivity CMB measurements from e.g.
SPT-3G \citep{Benson:2014:SPIE:}, Advanced ACTPol \citep{Henderson:2016:JLTP:}, Simons Observatory \citep{Ade:2019:JCAP:}, and CMB Stage 4 \citep{CMBS4}.

The goal of this work is to investigate what information the tSZ signals from low-mass halos contain about the impact of feedback on the small-scale matter distribution. The tSZ signal, which we denote with the Compton $y$ parameter, carries different information from $f_b$. For one, $y$ is sensitive only to the gas and not to the stellar mass. Moreover, $y$ carries sensitivity to both the gas density and temperature, unlike $f_b$, which depends only on the gas density. The $y$ signal is also easier to measure than $f_b$, since it can be estimated simply by cross-correlating halos with a tSZ map. The signal-to-noise of such cross-correlation measurements is already high with current data, on the order of tens of $\sigma$ \citep{Vikram:2017,Pandey:2019,Pandey:2022,Sanchez:2022}.

In this paper, we investigate the information content of the tSZ signal from group-scale halos using the Cosmology and Astrophysics with MachinE Learning Simulations (CAMELS). As we describe in more detail in \S\ref{sec:camels}, CAMELS is a suite of many hydrodynamical simulations run across a range of different feedback prescriptions and different cosmological parameters. The relatively small volume of the CAMELS simulations ($(25/h)^3\,{\rm Mpc^3}$) means that we are somewhat limited in the halo masses and scales that we can probe. We therefore view our analysis as an exploratory work that investigates the information content of low-mass halos for constraining feedback and how to extract this information; more accurate results over a wider range of halo mass and $k$ may be obtained in the future using the same methods applied to larger volume simulations. By training statistical models on the CAMELS simulations, we explore what information about feedback exists in tSZ observables, and how robust this information is to changes in subgrid feedback prescriptions. We consider three very different prescriptions for feedback based on the SIMBA \citep{Dave:2019:MNRAS:}, Illustris-TNG (\citealt{Pillepich:2018:MNRAS:}, henceforth TNG) and Astrid \citep{Bird:2022:MNRAS:, Ni:2022:MNRAS:} models across a wide range of possible parameter values, including variations in cosmology. The flexibility of the statistical models we employ means that it is possible to uncover more complex relationships between e.g. $f_b$, $y$, and the baryonic suppression of the power spectrum than considered in \citet{vanDaalen:2020}. The work presented here is complementary to \citet{Delgado:23}, which explores the information content of the baryon fraction of halos over a broader mass range ($M > 10^{10} M_{\odot}/h$), finding a broad correlation with the matter power suppression.

Finally, we apply our trained statistical models to recent measurements of the $y$ signal from low-mass halos by \citet{Gatti:2022} and \citet{Pandey:2022}. These analyses inferred the halo-integrated $y$ signal from the cross-correlation of galaxy lensing and the tSZ effect, using lensing data from the Dark Energy Survey (DES) \citep{Amon:2022:PhRvD:, Secco:2022:PhRvD:b} and tSZ measurements from the Atacama Cosmology Telescope (ACT) \citep{Madhavacheril:2020:PhRvD:}. In addition to providing interesting constraints on the impact of feedback, these results highlight the potential of future similar analyses with e.g.
the Dark Energy Spectroscopic Instrument (DESI; \citealt{DESI}), Simons Observatory \citep{Ade:2019:JCAP:}, and CMB Stage 4 \citep{CMBS4}.

Two recent works --- \citet{Moser:2022} and \citet{Wadekar:2022} --- have used the CAMELS simulations to explore the information content of the tSZ signal for constraining feedback. These works focus on the ability of tSZ observations to constrain the parameters of subgrid feedback models in hydrodynamical simulations. Here, in contrast, we attempt to connect the observable quantities directly to the impact of feedback on the matter power spectrum and bispectrum. Additionally, unlike some of the results presented in \citet{Moser:2022} and \citet{Wadekar:2022}, we consider the full parameter space explored by the CAMELS simulations rather than the small variations around a fiducial point that are relevant to the calculation of the Fisher matrix. Finally, we focus only on the intra-halo gas profiles of the halos in the mass range captured by the CAMELS simulations (cf. \citealt{Moser:2022}). We do not expect the inter-halo gas pressure to be captured by the small boxes used here, as it may be sensitive to higher halo masses \citep{Pandey:2020}.

Nonlinear evolution of the matter distribution induces non-Gaussianity, and hence there is additional information to be recovered beyond the power spectrum. Recent measurements detect higher-order matter correlations at cosmological scales at $\mathcal{O}(10\sigma)$ \citep{Secco:2022:PhRvD:b, Gatti:2022:PhRvD:}, and the significance of these measurements is expected to rapidly increase with upcoming surveys \citep{Pyne:2021:MNRAS:}. Jointly analyzing two-point and three-point correlations of the matter field can help with self-calibration of systematic parameters and improve cosmological constraints. As described in \citet{Foreman:2020:MNRAS:}, the matter bispectrum is expected to be impacted by baryonic physics at the $\mathcal{O}(10\%)$ level over the scales of interest. With these considerations in mind, we also investigate whether the SZ observations carry information about the impact of baryonic feedback on the matter bispectrum.

The plan of the paper is as follows. In \S\ref{sec:camels} we discuss the CAMELS simulations and the data products that we use in this work. In \S\ref{sec:results_sims}, we present the results of our explorations with the CAMELS simulations, focusing on the information content of the tSZ signal for inferring the impact of feedback on the matter distribution. In \S\ref{sec:results_data}, we apply our analysis to the DES and ACT measurements. We summarize our results and conclude in \S\ref{sec:conclusion}.
\section{CAMELS simulations and observables}
\label{sec:camels}

\subsection{Overview of CAMELS simulations}

\begin{table*} \begin{tabular}{@{}|c|c|l|@{}} \toprule Simulation & Type/Code & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Astrophysical parameters varied\\ \& their meanings\end{tabular}} \\ \midrule TNG & \begin{tabular}[c]{@{}c@{}}Magneto-hydrodynamic/\\ AREPO\end{tabular} & \begin{tabular}[c]{@{}l@{}}$A_{\rm SN1}$: (Energy of galactic winds)/SFR \\ $A_{\rm SN2}$: Speed of galactic winds\\ $A_{\rm AGN1}$: Energy/(BH accretion rate)\\ $A_{\rm AGN2}$: Jet ejection speed or burstiness\end{tabular} \\ \midrule SIMBA & Hydrodynamic/GIZMO & \begin{tabular}[c]{@{}l@{}}$A_{\rm SN1}$: Mass loading of galactic winds\\ $A_{\rm SN2}$: Speed of galactic winds\\ $A_{\rm AGN1}$: Momentum flux in QSO and jet modes of feedback\\ $A_{\rm AGN2}$: Jet speed in kinetic mode of feedback\end{tabular} \\ \midrule Astrid & Hydrodynamic/pSPH & \begin{tabular}[c]{@{}l@{}}$A_{\rm SN1}$: (Energy of galactic winds)/SFR \\ $A_{\rm SN2}$: Speed of galactic winds\\ $A_{\rm AGN1}$: Energy/(BH accretion rate)\\ $A_{\rm AGN2}$: Thermal feedback efficiency\end{tabular} \\ \bottomrule \end{tabular} \caption{Summary of the three feedback models used in this analysis. For each model, four feedback parameters are varied: $A_{\rm AGN 1}$, $A_{\rm AGN 2}$, $A_{\rm SN 1}$, and $A_{\rm SN 2}$. The meanings of these parameters are different for each model, and are summarized in the rightmost column. In addition to these four astrophysical parameters, the cosmological parameters $\Omega_{\rm m}$ and $\sigma_8$ are also varied. \label{tab:feedback}} \end{table*}

We investigate the use of SZ signals for constraining the impact of feedback on the matter distribution using approximately 3000 cosmological simulations run by the CAMELS collaboration \citep{Villaescusa-Navarro:2021:ApJ:}. One half of these are gravity-only N-body simulations and the other half are hydrodynamical simulations with matching initial conditions. The simulations are run using three different hydrodynamical sub-grid codes: TNG \citep{Pillepich:2018:MNRAS:}, SIMBA \citep{Dave:2019:MNRAS:} and Astrid \citep{Bird:2022:MNRAS:, Ni:2022:MNRAS:}. As detailed in \citet{Villaescusa-Navarro:2021:ApJ:}, for each sub-grid implementation six parameters are varied: two cosmological parameters ($\Omega_{\rm m}$ and $\sigma_8$) and four parameters dealing with baryonic astrophysics. Of these, two deal with supernova feedback ($A_{\rm SN1}$ and $A_{\rm SN2}$) and two deal with AGN feedback ($A_{\rm AGN1}$ and $A_{\rm AGN2}$). The meanings of the feedback parameters for each subgrid model are summarized in Table~\ref{tab:feedback}. Note that the astrophysical parameters have somewhat different physical meanings for the different subgrid prescriptions, and there is usually a complex interplay between the parameters and their impact on the properties of galaxies and gas. For example, the parameter $A_{\rm SN1}$ approximately corresponds to the prefactor for the overall energy output in galactic wind feedback per unit star formation in both the TNG \citep{Pillepich:2018:MNRAS:} and Astrid \citep{Bird:2022:MNRAS:} simulations. However, in the SIMBA simulations it corresponds to the wind-driven mass outflow rate per unit star formation, calibrated from the Feedback In Realistic Environments (FIRE) zoom-in simulations \citep{Angles-Alcazar:2017:MNRAS:b}.
Similarly, the $A_{\rm AGN2}$ parameter controls the burstiness and the temperature of the heated gas during AGN bursts in the TNG simulations \citep{Weinberger:2017:MNRAS:}. In the SIMBA suite, it corresponds to the speed of the kinetic AGN jets with constant momentum flux \citep{Angles-Alcazar:2017:MNRAS:a, Dave:2019:MNRAS:}. In the Astrid suite, however, it corresponds to the efficiency of the thermal mode of AGN feedback. As we describe in \S~\ref{sec:fbY}, this can result in a counter-intuitive impact on the matter power spectrum in the Astrid simulation relative to TNG and SIMBA.

For each of the sub-grid physics prescriptions, three varieties of simulations are provided. These include 27 simulations for which the parameters are fixed and the initial conditions are varied (the cosmic variance, or CV, set), 66 simulations varying only one parameter at a time (the 1P set) and 1000 simulations varying parameters in a six-dimensional Latin hypercube (the LH set). We use the CV simulations to estimate the variance expected in the matter power suppression due to stochasticity (see Fig.~\ref{fig:Pk_Bk_CV}). We use the 1P simulations to understand how the matter suppression responds to variation in each parameter individually. Finally, we use the full LH set to effectively marginalize over the full parameter space varying all six parameters. We use publicly available power spectrum and bispectrum measurements for these simulation boxes \citep{Villaescusa-Navarro:2021:ApJ:}.\footnote{See also \url{https://www.camel-simulations.org/data}.} Where unavailable, we calculate the power spectrum and bispectrum using the publicly available code \texttt{Pylians}.\footnote{\url{https://github.com/franciscovillaescusa/Pylians3}}

\subsection{Baryonic effects on the power spectrum and bispectrum}

\begin{figure*} \includegraphics[width=\textwidth]{figs/figs_new/Pk_Bk_CV.pdf} \caption[]{Left: baryonic suppression of the matter power spectrum, $\Delta P/P_{\rm DMO}$, in the CAMELS simulations. The dark-blue, red and orange shaded regions correspond to the $1\sigma$ range of the cosmic variance (CV) suite of the TNG, SIMBA and Astrid simulations, respectively. The light-blue region corresponds to the $1\sigma$ range associated with the Latin hypercube (LH) suite of TNG, illustrating the range of feedback models explored across all parameter values. Middle and right panels: the impact of baryonic feedback on the matter bispectrum for equilateral and squeezed triangle configurations, respectively. } \label{fig:Pk_Bk_CV} \end{figure*}

The left panel of Fig.~\ref{fig:Pk_Bk_CV} shows the measurement of the power spectrum suppression caused by baryonic effects in the TNG, SIMBA, and Astrid simulations for their fiducial feedback settings. The right two panels of the figure show the impact of baryonic effects on the bispectrum for two different triangle configurations (equilateral and squeezed). To compute these quantities, we use the matter power spectra and bispectra of the hydrodynamical (hydro) simulations and the dark-matter only (DMO) simulations generated at varying initial conditions (ICs). For each of the 27 unique IC runs, we calculate the ratios $\Delta P/P_{\rm DMO} = (P_{\rm hydro} - P_{\rm DMO})/P_{\rm DMO}$ and $\Delta B/B_{\rm DMO} = (B_{\rm hydro} - B_{\rm DMO})/B_{\rm DMO}$. As the hydrodynamical and the N-body simulations are run with the same initial conditions, the ratios $\Delta P/P_{\rm DMO}$ and $\Delta B/B_{\rm DMO}$ are roughly independent of sample variance.
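As a concrete illustration of this step, the sketch below computes the ratio for a matched pair of overdensity fields using the \texttt{Pylians} power spectrum routine referenced above; the field variables are placeholders, and we assume the standard \texttt{Pk\_library} interface.
\begin{verbatim}
import Pk_library as PKL  # from the Pylians3 package

def delta_P_over_P(delta_hydro, delta_dmo, BoxSize=25.0, MAS='CIC'):
    """Fractional power spectrum difference between hydro and DMO
    fields run from the same initial conditions. The fields are
    float32 3D overdensity grids; BoxSize is in Mpc/h."""
    Pk_h = PKL.Pk(delta_hydro, BoxSize, axis=0, MAS=MAS)
    Pk_d = PKL.Pk(delta_dmo, BoxSize, axis=0, MAS=MAS)
    # Column 0 of the .Pk attribute holds the monopole P(k).
    return Pk_h.k3D, (Pk_h.Pk[:, 0] - Pk_d.Pk[:, 0]) / Pk_d.Pk[:, 0]
\end{verbatim}
Computing this ratio for each of the matched IC realizations, and taking its mean and scatter, yields curves analogous to those shown in Fig.~\ref{fig:Pk_Bk_CV}.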
It is clear that the amplitude of suppression of the small-scale matter power spectrum can be significant: suppression on the order of tens of percent is reached for all three simulations. It is also clear that the impact is significantly different between the three simulations. Even for the simulations in closest agreement (TNG and Astrid), the measurements of $\Delta P/P_{\rm DMO}$ disagree by more than a factor of two at $k = 5\,h/{\rm Mpc}$. The width of the curves in Fig.~\ref{fig:Pk_Bk_CV} represents the standard deviation measured across the cosmic variance simulations, which all have the same parameter values but different initial conditions. For the bispectrum, we show both the equilateral and squeezed triangle configurations, with the cosine of the angle between the long sides of the squeezed triangles fixed to $\mu = 0.9$. Interestingly, the spread in $\Delta P/P_{\rm DMO}$ and $\Delta B/B_{\rm DMO}$ increases with increasing $k$ over the range $0.1 \,h/{\rm Mpc} \lesssim k \lesssim 10\,h/{\rm Mpc}$. This increase is driven by stochasticity arising from baryonic feedback. The middle and right panels show the impact of feedback on the bispectrum for the equilateral and squeezed triangle configurations, respectively.

Throughout this work, we will focus on the regime $0.3\,h/{\rm Mpc}< k < 10\,h/{\rm Mpc}$. Larger-scale modes are not present in the $(25\,{\rm Mpc}/h)^3$ CAMELS simulations, and in any case, the impact of feedback on large scales is typically small. Much smaller scales, on the other hand, are difficult to model even in the absence of baryonic feedback \citep{Schneider:2016:JCAP:}. In Appendix~\ref{app:volume_res_comp} we show how the matter power suppression changes when varying the resolution and volume of the simulation boxes. When comparing with the original TNG boxes, we find that while the box sizes do not change the measured power suppression significantly, the resolution of the boxes has a non-negligible impact. This is expected, since the physical effects of the feedback mechanisms depend on the resolution of the simulations. Note that the errorbars presented in Fig.~\ref{fig:Pk_Bk_CV} will also depend on the default choice of feedback values assumed.

\subsection{Measuring gas profiles around halos}

We use 3D grids of various fields (e.g. gas density and pressure) made available by the CAMELS team to extract the profiles of these fields around dark matter halos. The grids are generated with a resolution of $0.05\,{\rm Mpc}/h$. Following \citet{vanDaalen:2020}, we define $f_b$ as $(M_{\rm gas} + M_{\rm stars})/M_{\rm total}$, where $M_{\rm gas}$, $M_{\rm stars}$ and $M_{\rm total}$ are the masses in gas, stars and all components, respectively. The gas mass is computed by integrating the gas number density profile around each halo. We typically measure $f_b$ within the spherical overdensity radius $r_{\rm 500c}$.\footnote{We define the spherical overdensity radius ($r_{\Delta c}$, where $\Delta = 200, 500$) and overdensity mass ($M_{\Delta c}$) such that the mean density within $r_{\Delta}$ is $\Delta$ times the critical density $\rho_{\rm crit}$: $M_{\Delta} = \Delta \frac{4}{3} \pi r^3_{\Delta} \rho_{\rm crit}$.} The SZ effect is sensitive to the electron pressure. We compute the electron pressure profiles, $P_e$, using $P_e = 2(X_{\rm H} + 1)/(5X_{\rm H} + 3)\,P_{\rm th}$, where $P_{\rm th}$ is the total thermal pressure and $X_{\rm H}= 0.76$ is the primordial hydrogen fraction.
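For reference, these two definitions are straightforward to implement; a minimal sketch, with the masses and thermal pressure as placeholder inputs, is:
\begin{verbatim}
X_H = 0.76  # primordial hydrogen mass fraction

def electron_pressure(P_th):
    """P_e = 2 (X_H + 1) / (5 X_H + 3) P_th for a fully ionized
    plasma of primordial composition."""
    return 2.0 * (X_H + 1.0) / (5.0 * X_H + 3.0) * P_th

def baryon_fraction(M_gas, M_stars, M_total):
    """f_b = (M_gas + M_stars) / M_total, with all masses measured
    within the same aperture (here r_500c)."""
    return (M_gas + M_stars) / M_total
\end{verbatim}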
Given the electron pressure profile, we measure the integrated SZ signal within $r_{\rm 500c}$ as:
\begin{equation}\label{eq:Y500_from_Pe}
Y_{\rm 500c} = \frac{\sigma_{\rm T}}{m_e c^2}\int_0^{r_{\rm 500c}} 4\pi r^2 \, P_e(r) \, dr,
\end{equation}
where $\sigma_{\rm T}$ is the Thomson scattering cross-section, $m_{e}$ is the electron mass and $c$ is the speed of light. We normalize the SZ observables by the self-similar expectation \citep{Battaglia:2012:ApJ:a},\footnote{Note that we use the spherical overdensity mass corresponding to $\Delta = 500$ and hence adjust the coefficients accordingly, while keeping the other approximations used in their derivation the same.}
\begin{equation} \label{eq:y_ss}
Y^{\rm SS} = 131.7 h^{-1}_{70} \,\bigg( \frac{M_{500c}}{10^{15} h^{-1}_{70} M_{\odot}} \bigg)^{5/3} \frac{\Omega_{\rm b}}{0.043} \frac{0.25}{\Omega_{\rm m}} \, {\rm kpc^2},
\end{equation}
where $M_{500c}$ is the mass inside $r_{500c}$ and $h_{70} = h/0.7$. This calculation, which scales as $M^{5/3}$, assumes hydrostatic equilibrium and that the baryon fraction is equal to the cosmic baryon fraction. Hence, deviations from this self-similar scaling provide a probe of the effects of baryonic feedback. Our final SZ observable is defined as $Y_{500c}/Y^{\rm SS}$. The amplitude of the pressure profile, on the other hand, approximately scales as $M^{2/3}$. Therefore, when considering the pressure profile as the observable, we factor out the $M^{2/3}$ scaling.

\section{Results I: Simulations}
\label{sec:results_sims}

\subsection{Inferring feedback parameters from $f_b$ and $y$}
\label{sec:fisher}

We first consider how the halo $Y$ signal can be used to constrain the parameters describing the subgrid physics models. This question has been previously investigated using the CAMELS simulations by \citet{Moser:2022} and \citet{Wadekar:2022}. The rest of our analysis will focus on constraining changes to the power spectrum and bispectrum, and our intention here is mainly to provide a basis of comparison for those results. Similar to \citet{Wadekar:2022}, we treat the mean $\log(Y_{500c}/M^{5/3})$ value of all the halos in two mass bins ($10^{12} < M (M_{\odot}/h) < 5\times 10^{12}$ and $5 \times 10^{12} < M (M_{\odot}/h) < 10^{14}$) as our observable; we refer to this observable as $\vec{d}$. In this section, we restrict our analysis to the TNG simulations only. Here and throughout our investigations with CAMELS, we ignore the contributions of measurement uncertainty, since our intention is mainly to assess the information content of the SZ signals. We therefore use the CV simulations to determine the covariance, $\mathbf{C}$, of $\vec{d}$. Note that the level of cosmic variance will depend on the volume probed, and can be quite large for the CAMELS simulations. Given this covariance, we use the Fisher matrix formalism to forecast the precision with which the feedback and cosmological parameters can be constrained. The Fisher matrix, $F_{ij}$, is given by
\begin{equation}
F_{ij} = \frac{\partial \vec{d}^T}{\partial \theta_i} \mathbf{C}^{-1} \frac{\partial \vec{d}}{\partial \theta_j},
\end{equation}
where $\theta_i$ refers to the $i$th parameter value. Calculation of the derivatives $\partial \vec{d}/\partial \theta_i$ is complicated by the large amount of stochasticity between the CAMELS simulations. To perform the derivative calculation, we use a radial basis function interpolation method based on \citet{Moser:2022,Cromer:2022}. We show an example of the derivative calculation in Appendix~\ref{app:emulation}.
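For concreteness, a minimal sketch of this forecast is given below. It assumes the interpolated derivatives and the CV covariance are already in hand, and it includes the Gaussian priors described in the following text.
\begin{verbatim}
import numpy as np

def fisher_covariance(dd_dtheta, C, prior_sigma):
    """Forecast parameter covariance C_p = F^{-1}, where
    F_ij = (dd/dtheta_i)^T C^{-1} (dd/dtheta_j), plus Gaussian priors.

    dd_dtheta   : (n_par, n_obs) derivatives of the observable vector d
    C           : (n_obs, n_obs) covariance of d from the CV suite
    prior_sigma : (n_par,) prior widths (on ln p for feedback parameters)
    """
    F = dd_dtheta @ np.linalg.inv(C) @ dd_dtheta.T
    F += np.diag(1.0 / np.asarray(prior_sigma) ** 2)
    return np.linalg.inv(F)
\end{verbatim}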
We additionally assume a Gaussian prior on each parameter $p$, with $\sigma(\ln p) = 1$ for the feedback parameters and $\sigma(p) = 1$ for the cosmological parameters. The forecast parameter covariance matrix, $\mathbf{C}_p$, is then related to the Fisher matrix by $\mathbf{C}_p = \mathbf{F}^{-1}$.

The parameter constraints corresponding to our calculated Fisher matrix are shown in Fig.~\ref{fig:fisher}. We show results only for $\Omega_{\rm m}$, $A_{\rm SN1}$ and $A_{\rm AGN2}$, but additionally marginalize over $\sigma_8$, $A_{\rm SN2}$ and $A_{\rm AGN1}$. The degeneracy directions seen in our results are consistent with those in \citet{Wadekar:2022}. We find a weaker constraint on $A_{\rm AGN2}$, likely owing to the large sample variance contribution to our calculation. It is clear from Fig.~\ref{fig:fisher} that the marginalized constraints on the feedback parameters are weak. If information about $\Omega_{\rm m}$ is not used, we effectively have no information about the feedback parameters. Even when $\Omega_{\rm m}$ is fixed, the constraints on the feedback parameters are not very precise. This finding is consistent with \citet{Wadekar:2022}, for which measurement uncertainty was the main source of variance rather than sample variance. Part of the reason for the poor constraints is the degeneracy between the AGN and SN parameters. As we show below, SN and AGN feedback can have opposite impacts on the $Y$ signal; moreover, even $A_{\rm AGN1}$ and $A_{\rm AGN2}$ can have opposite impacts on $Y$. These degeneracies, as well as degeneracies with cosmological parameters like $\Omega_m$, make it difficult to extract tight constraints on the feedback parameters from measurements of $Y$. However, for the purposes of cosmology, we are ultimately most interested in the impact of feedback on the matter distribution, and not the values of the feedback parameters themselves. These considerations motivate us to instead explore direct inference of changes to the statistics of the matter distribution from the $Y$ observables. This will be the focus of the rest of the paper.

\begin{figure} \centering \includegraphics[scale=0.5]{figs/Y500_log_dec7.pdf} \caption{Forecast constraints on the feedback parameters when $\log Y_{500c}/Y^{\rm SS}$ in two halo mass bins is treated as the observable. Even when the cosmological model is fixed (red contours), the AGN parameters (e.g. $A_{\rm AGN2}$) remain effectively unconstrained (note that we impose a Gaussian prior with $\sigma(\ln p) = 1$ on all feedback parameters, $p$). When the cosmological model is free (blue contours), all feedback parameters are unconstrained. We assume that the only contribution to the variance of the observable is sample variance coming from the finite volume of the CAMELS simulations. } \label{fig:fisher} \end{figure}

\subsection{$f_b$ and $y$ as probes of baryonic effects on the matter power spectrum}
\label{sec:fbY}

As discussed above, \citet{vanDaalen:2020} observed a tight correlation between suppression of the matter power spectrum and the baryon fraction, $f_b$, in halos with $6\times 10^{13} M_{\odot} \lesssim M_{500c} \lesssim 10^{14}\,M_{\odot}$. That relation was found to hold regardless of the details of the feedback implementation, suggesting that by measuring $f_b$, one could robustly infer the impact of baryonic feedback on the power spectrum.
We begin by investigating the connection between the matter power spectrum suppression and the integrated tSZ signal of low-mass ($M \sim 10^{13}\,M_{\odot}$) halos, to test whether a similar correlation exists (cf. \citealt{Delgado:23} for a similar figure relating $f_b$ and $\Delta P/P_{\rm DMO}$). We also consider a wider range of feedback models than \citet{vanDaalen:2020}, including the SIMBA and Astrid models.

\begin{figure} \includegraphics[width=0.95\columnwidth]{figs/figs_new/vanDaleen+19_with_camels_SIMBA_all_params_Y500c.pdf} \caption[]{The relation between the matter power suppression at $k=2 h/{\rm Mpc}$ and the integrated tSZ signal, $Y_{500c}/Y^{\rm SS}$, of halos in the mass range $10^{13} < M\,(M_{\odot}/h) < 10^{14}$ in the SIMBA simulation suite. In each of the six panels, the points are colored according to the parameter value given in the associated colorbar. } \label{fig:Pk_SIMBA_allparams} \end{figure}

Fig.~\ref{fig:Pk_SIMBA_allparams} shows the impact of cosmological and feedback parameters on the relationship between the power spectrum suppression ($\Delta P/P_{\rm DMO}$) and the ratio $Y_{\rm 500c}/Y^{\rm SS}$ for the SIMBA simulations. Each point corresponds to a single simulation, taking the average over all halos with $10^{13} < M (M_{\odot}/h) < 10^{14}$ when computing $Y_{\rm 500c}/Y^{\rm SS}$. Note that since the halo mass function declines rapidly at high masses, the average will be dominated by the low-mass halos. We observe that the largest suppression (i.e. more negative $\Delta P/P_{\rm DMO}$) occurs when $A_{\rm AGN2}$ is large. This is caused by powerful AGN jet-mode feedback ejecting gas from halos, leading to a significant reduction in the matter power spectrum, as described by e.g. \citet{vanDaalen:2020, Borrow:2020:MNRAS:, Gebhardt:23}. For SIMBA, the parameter $A_{\rm AGN2}$ controls the velocity of the ejected gas, with higher velocities (i.e. higher $A_{\rm AGN2}$) leading to gas being ejected to larger distances. On the other hand, when $A_{\rm SN2}$ is large, $\Delta P/P_{\rm DMO}$ is small. This is because efficient supernova feedback prevents the formation of the massive galaxies which host AGN, and hence reduces the strength of the AGN feedback. The parameter $A_{\rm AGN1}$, on the other hand, controls the radiative quasar mode of feedback, which has slower gas outflows and thus a smaller impact on the matter distribution.

It is also clear from Fig.~\ref{fig:Pk_SIMBA_allparams} that increasing $\Omega_{\rm m}$ reduces $|\Delta P/P_{\rm DMO}|$, relatively independently of the other parameters. By increasing $\Omega_{\rm m}$, the ratio $\Omega_{\rm b}/\Omega_{\rm m}$ decreases, meaning that halos of a given mass have fewer baryons, and the impact of feedback is therefore reduced. We propose a very simple toy model for this effect in \S\ref{sec:simple_model}. The impact of $\sigma_8$ in Fig.~\ref{fig:Pk_SIMBA_allparams} is less clear. For halos in the mass range shown, we find that increasing $\sigma_8$ leads to a roughly monotonic decrease in $Y_{500c}$ (and $f_b$), presumably because higher $\sigma_8$ means that there are more halos amongst which the same amount of baryons must be distributed. This effect would not occur for cluster-scale halos, because these are rare and large enough to gravitationally dominate their local environments, giving them $f_b \sim \Omega_{\rm b}/\Omega_{\rm m}$ regardless of $\sigma_8$.
In any case, no clear trend with $\sigma_8$ is seen in Fig.~\ref{fig:Pk_SIMBA_allparams}, because $\sigma_8$ does not correlate strongly with $\Delta P/P_{\rm DMO}$.

Fig.~\ref{fig:Y_fb_DeltaP} shows the relationship between $\Delta P/P_{\rm DMO}$ at $k = 2\,h/{\rm Mpc}$ and $f_b$ or $Y_{500c}$ in different halo mass bins and for different amounts of feedback, colored by the value of $A_{\rm AGN2}$. As in Fig.~\ref{fig:Pk_SIMBA_allparams}, each point represents an average over all halos in the indicated mass range for a particular CAMELS simulation (i.e. at fixed values of the cosmological and feedback parameters). Note that the meaning of $A_{\rm AGN2}$ is not exactly the same across the different feedback models, as noted in \S\ref{sec:camels}. For TNG and SIMBA, we expect increasing $A_{\rm AGN2}$ to lead to stronger AGN feedback driving more gas out of the halos, leading to more power suppression without strongly regulating the growth of black holes. For Astrid, however, increasing the $A_{\rm AGN2}$ parameter more strongly regulates and suppresses black hole growth in the box, since it controls the efficiency of the thermal mode of AGN feedback \citep{Ni:2022:MNRAS:}. This drastically reduces the number of high-mass black holes, effectively reducing the amount of feedback that can push gas out of the halos and leading to less matter power suppression. We see this difference reflected in Fig.~\ref{fig:Y_fb_DeltaP}, where for the Astrid simulations the points corresponding to high $A_{\rm AGN2}$ result in $\Delta P/P_{\rm DMO} \sim 0$, in contrast to the TNG and SIMBA suites of simulations.

For the highest mass bin ($10^{13} < M (M_{\odot}/h) < 10^{14}$, rightmost column of Fig.~\ref{fig:Y_fb_DeltaP}), our results are in agreement with \citet{vanDaalen:2020}: we find that there is a robust correlation between $f_b/(\Omega_{\rm b}/\Omega_{\rm m})$ and the matter power suppression (see also \citealt{Delgado:23}). This relation is roughly consistent across different feedback subgrid models, although the different models appear to populate different parts of this relation. Moreover, varying $A_{\rm AGN2}$ appears to move points along this relation, rather than broadening it. This is in contrast to $\Omega_{\rm m}$, which, as shown in Fig.~\ref{fig:Pk_SIMBA_allparams}, tends to move simulations in the direction orthogonal to the narrow $Y_{500c}$-$\Delta P/P_{\rm DMO}$ locus. For this reason, and given current constraints on $\Omega_{\rm m}$, we restrict Fig.~\ref{fig:Y_fb_DeltaP} to simulations with $0.2 < \Omega_{\rm m} < 0.4$. The dashed curves shown in Fig.~\ref{fig:Y_fb_DeltaP} correspond to the toy model discussed in \S\ref{sec:simple_model}. At low halo mass, the relation between $f_b/(\Omega_{\rm b}/\Omega_{\rm m})$ and $\Delta P/P_{\rm DMO}$ appears similar to that for the high-mass bin, although it is somewhat flatter at high $f_b$ and somewhat steeper at low $f_b$. Again, the results are fairly consistent across the different feedback prescriptions, although points with high $f_b/(\Omega_{\rm b}/\Omega_{\rm m})$ are largely absent for SIMBA. This is because the feedback mechanisms are highly efficient in SIMBA, driving gas out of its parent halos. The relationships between $Y$ and $\Delta P/P_{\rm DMO}$ appear quite similar to those between $\Delta P/P_{\rm DMO}$ and $f_b/(\Omega_{\rm b}/\Omega_{\rm m})$. This is not too surprising, because $Y$ is sensitive to the gas density, which dominates $f_b/(\Omega_{\rm b}/\Omega_{\rm m})$.
However, $Y$ is also sensitive to the gas temperature. Our results suggest that variations in gas temperature do not significantly impact the $Y_{500c}$-$\Delta P/P_{\rm DMO}$ relation. The possibility of using the tSZ signal, rather than $f_b/(\Omega_{\rm b}/\Omega_{\rm m})$, to infer the impact of feedback on the matter distribution is therefore appealing. This will be the focus of the remainder of the paper.

\begin{figure*} \includegraphics[width=0.95\textwidth]{figs/figs_new/vanDaleen+19_with_camels_A_AGN2_nast.pdf} \caption[]{Impact of baryonic physics on the matter power spectrum at $k=2 h/{\rm Mpc}$ for the TNG, SIMBA and Astrid simulations (top, middle, and bottom rows). Each point corresponds to an average across halos in the indicated mass ranges in a different CAMELS simulation. We restrict the figure to simulations that have $0.2 < \Omega_{\rm m} < 0.4$. The dashed curves illustrate the behavior of the model described in \S\ref{sec:simple_model} when the gas ejection distance is large compared to the halo radius and $2\pi/k$. } \label{fig:Y_fb_DeltaP} \end{figure*}

Fig.~\ref{fig:scatter_plot_all_ks} shows the same quantities as Fig.~\ref{fig:Y_fb_DeltaP}, but now for a fixed halo mass range ($10^{13} < M (M_{\odot}/h) < 10^{14}$), a fixed subgrid prescription (TNG), and varying values of $k$. We find roughly similar results when using the different subgrid physics prescriptions. At low $k$, we find that there is a regime at high $f_b/(\Omega_{\rm b}/\Omega_{\rm m})$ for which $\Delta P /P_{\rm DMO}$ changes negligibly. It is only when $f_b/(\Omega_{\rm b}/\Omega_{\rm m})$ becomes very low that $\Delta P/P_{\rm DMO}$ begins to change. At high $k$, on the other hand, there is a near-linear relation between $f_b/(\Omega_{\rm b}/\Omega_{\rm m})$ and $\Delta P/P_{\rm DMO}$.

\begin{figure*} \includegraphics[width=0.95\textwidth]{figs/figs_new/vanDaleen+19_with_camels_TNG_all_ks.pdf} \caption[]{Similar to Fig.~\ref{fig:Y_fb_DeltaP}, but for different values of $k$. For simplicity, we show only the TNG simulations for halos in the mass range $10^{13} < M (M_{\odot}/h) < 10^{14}$. The dashed curves illustrate the behavior of the model described in \S\ref{sec:simple_model} in the regime where the radius to which gas is ejected by the AGN is larger than the halo radius and larger than $2\pi/k$. As expected, this model performs best in the limit of high $k$ and large halo mass. } \label{fig:scatter_plot_all_ks} \end{figure*}

\subsection{A toy model for power suppression}
\label{sec:simple_model}

We now describe a simple model for the effects of feedback on the relation between $f_b$ or $Y$ and $\Delta P/P_{\rm DMO}$ that explains some of the features seen in Figs.~\ref{fig:Pk_SIMBA_allparams}, \ref{fig:Y_fb_DeltaP} and \ref{fig:scatter_plot_all_ks}. We assume in this model that it is the removal of gas from halos by AGN feedback that is responsible for changes to the matter power spectrum. SN feedback, on the other hand, can prevent gas from accreting onto the supermassive black hole (SMBH) and therefore reduce the impact of AGN feedback \citep{Angles-Alcazar:2017:MNRAS:c, Habouzit:2017:MNRAS:}. This scenario is consistent with the fact that at high SN feedback, $\Delta P/P_{\rm DMO}$ goes to zero (second panel from the bottom in Fig.~\ref{fig:Pk_SIMBA_allparams}). Stellar feedback can also prevent gas from accreting onto low-mass halos \citep{Pandya:2020:ApJ:, Pandya:2021:MNRAS:}.
In some sense, the distinction between gas that is ejected by AGN and gas that is prevented from accreting onto halos by stellar feedback does not matter for our simple model. Rather, all that matters is that some amount of gas that would otherwise be in the halo is instead outside of the halo as a result of feedback effects, and it is this gas which is responsible for changes to the matter power spectrum. We identify three relevant scales: (1) the halo radius, $R_h$, (2) the distance to which gas is ejected by the AGN, $R_{\rm ej}$, and (3) the scale at which the power spectrum is measured, $2\pi/k$. If $R_{\rm ej} \ll 2\pi/k$, then there will be no impact on $\Delta P$ at $k$: this corresponds to a rearrangement of the matter distribution on scales well below where we measure the power spectrum. If, on the other hand, $R_{\rm ej} \ll R_h$, then there will be no impact on $f_b$ or $Y$, since the gas is not ejected out of the halo. We therefore consider four regimes defined by the relative amplitudes of $R_h$, $R_{\rm ej}$, and $2\pi/k$, as described below. Note that there is not a one-to-one correspondence between physical scale in configuration space and $2\pi/k$; therefore, the inequalities below should be considered as approximate. The four regimes are: \begin{itemize} \item Regime 1: $R_{\rm ej} < R_h$ and $R_{\rm ej} < 2\pi /k$. In this regime, changes to the feedback parameters have no impact on $f_b$ or $\Delta P$. \item Regime 2: $R_{\rm ej} > R_h$ and $R_{\rm ej} < 2\pi/k$. In this regime, changes to the feedback parameters result in movement along the $f_b$ or $Y$ axis without changing $\Delta P$. Gas is being removed from the halo, but the resultant changes to the matter distribution are below the scale at which we measure the power spectrum. Note that Regime 2 cannot occur when $R_h > 2\pi/k$ (i.e. high-mass halos at large $k$). \item Regime 3: $R_{\rm ej} > R_h$ and $R_{\rm ej} > 2\pi/k$. In this regime, changing the feedback amplitude directly changes the amount of gas ejected from halos as well as $\Delta P/P_{\rm DMO}$. \item Regime 4: $R_{\rm ej} < R_h$ and $R_{\rm ej} > 2 \pi/k$. In this regime, gas is not ejected out of the halo, so $f_b$ and $Y$ should not change. In principle, the redistribution of gas within the halo could lead to changes in $\Delta P/P_{\rm DMO}$. However, as we discuss below, this does not seem to happen in practice. \end{itemize} Let us now consider the behavior of $\Delta P/P_{\rm DMO}$ and $f_b$ or $Y$ as the feedback parameters are varied in Regime 3. A halo of mass $M$ is associated with an overdensity $\delta_m$ in the absence of feedback, which is changed to $\delta'_m$ due to ejection of baryons as a result of feedback. In Regime 3, some amount of gas, $M_{\rm ej}$, is completely removed from the halo. This changes the size of the overdensity associated with the halo to \begin{eqnarray} \frac{\delta_m'}{\delta_m} &=& 1 - \frac{M_{\rm ej}} {M}. \end{eqnarray} The change to the power spectrum is then \begin{eqnarray} \label{eq:deltap_over_p} \frac{\Delta P}{P_{\rm DMO}} &\sim& \left( \frac{\delta_m'}{\delta_m} \right)^2 -1 \approx -2\frac{M_{\rm ej}}{M}, \end{eqnarray} where we have assumed that $M_{\rm ej}$ is small compared to $M$. We have ignored the $k$ dependence here, but in Regime 3, the ejection radius is larger than the scale of interest, so the calculated $\Delta P/P_{\rm DMO}$ should apply across a range of $k$ in this regime. The ejected gas mass can be related to the gas mass in the absence of feedback. 
We write the gas mass in the absence of feedback as $f_c (\Omega_{\rm b}/\Omega_{\rm m}) M$, where $f_c$ encapsulates non-feedback processes that result in the halo having less than the cosmic baryon fraction. We then have \begin{eqnarray} M_{\rm ej} &=& f_c(\Omega_{\rm b}/\Omega_{\rm m})M - f_{b} M - M_0, \end{eqnarray} where $M_0$ is the mass that has been removed from the gaseous halo but that does not change the power spectrum, e.g. via the conversion of gas to stars. Substituting into Eq.~\ref{eq:deltap_over_p}, we have \begin{eqnarray}\label{eq:DelP_P_fb} \frac{\Delta P}{P_{\rm DMO}} = -2 \frac{f_c\Omega_{\rm b}}{\Omega_{\rm m}} \left( 1 -\frac{f_{b}\Omega_{\rm m}}{f_c \Omega_{\rm b}} - \frac{\Omega_{\rm m} M_0}{f_c \Omega_{\rm b} M} \right). \end{eqnarray} In other words, for Regime 3, we find a linear relation between $\Delta P/P_{\rm DMO}$ and $f_b \Omega_{\rm m}/\Omega_{\rm b}$. For high mass halos, we should have $f_c \approx 1$ and $M_0/M \approx 0$. In this limit, the relationship between $f_b$ and $\Delta P/P_{\rm DMO}$ becomes \begin{eqnarray}\label{eq:DelP_P_fb_2} \frac{\Delta P}{P_{\rm DMO}} = -2 \frac{\Omega_{\rm b}}{\Omega_{\rm m}} \left( 1 -\frac{f_{b}\Omega_{\rm m}}{\Omega_{\rm b}} \right), \end{eqnarray} which is linear between endpoints at $(\Delta P/P_{\rm DMO},f_b \Omega_{\rm m}/\Omega_{\rm b}) = (-2\Omega_{\rm b}/\Omega_{\rm m},0)$ and $(\Delta P/P_{\rm DMO},f_b \Omega_{\rm m}/\Omega_{\rm b}) = (0,1)$. We show this relation as the dashed line in the $f_b$ columns of Figs.~\ref{fig:Y_fb_DeltaP} and \ref{fig:scatter_plot_all_ks}. We can repeat the above argument for $Y$. Unlike the case with $f_b$, processes other than the removal of gas may reduce $Y$; these include, e.g., changes to the gas temperature in the absence of AGN feedback, or nonthermal pressure support. We account for these with a term $Y_0$, defined such that when $M_{\rm ej} = M_0 = 0$, we have $Y + Y_0 = f_c (\Omega_{\rm b}/\Omega_{\rm m}) MT /\alpha$, where we have assumed constant gas temperature, $T$, and $\alpha$ is a dimensionful constant of proportionality. We ignore detailed modeling of variations in the temperature of the gas due to feedback and departures from hydrostatic equilibrium \citep{Ostriker:2005:ApJ:}. We then have \begin{eqnarray} \frac{\alpha(Y+Y_0)}{T} = f_c (\Omega_{\rm b} / \Omega_{\rm m})M - M_{\rm ej} - M_0. \end{eqnarray} Substituting the above equation into Eq.~\ref{eq:deltap_over_p} we have \begin{eqnarray} \frac{\Delta P}{P_{\rm DMO}} &=& -2\frac{f_c\Omega_{\rm b}}{\Omega_{\rm m}} \left(1 - \frac{\alpha (Y+Y_0) \Omega_{\rm m}}{f_c TM \Omega_{\rm b}} - \frac{\Omega_{\rm m} M_0}{f_c \Omega_{\rm b} M} \right) . \nonumber \\ \end{eqnarray} Following Eq.~\ref{eq:y_ss}, we define the self-similar value of $Y$, $Y^{\rm SS}$, via \begin{eqnarray} \alpha Y^{\rm SS}/T = (\Omega_{\rm b}/\Omega_{\rm m})M, \end{eqnarray} leading to \begin{eqnarray} \frac{\Delta P}{P_{\rm DMO}} &=& -2\frac{f_c\Omega_{\rm b}}{\Omega_{\rm m}} \left(1 - \frac{(Y+Y_0)}{f_c Y^{\rm SS}} - \frac{\Omega_{\rm m} M_0}{f_c \Omega_{\rm b} M}\right). \end{eqnarray} Again taking the limit that $f_c \approx 1$ and $M_0/M \approx 0$, we have \begin{eqnarray} \frac{\Delta P}{P_{\rm DMO}} &=& -2\frac{\Omega_{\rm b}}{\Omega_{\rm m}} \left(1 - \frac{(Y+Y_0)}{ Y^{\rm SS}} \right). \end{eqnarray} Thus, we see that in Regime 3, the relation between $Y/Y^{\rm SS}$ and $\Delta P/P_{\rm DMO}$ is linear. The $Y/Y^{\rm SS}$ columns of Fig.~\ref{fig:Y_fb_DeltaP} show this relationship, assuming $Y_0 = 0$.
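To make the Regime 3 scalings above concrete, the following minimal Python sketch (our own illustration rather than code from the CAMELS analysis; the function names and fiducial parameter values are ours) classifies the regime from the three scales of the model and evaluates Eq.~\ref{eq:DelP_P_fb}:
\begin{verbatim}
import numpy as np

# Illustrative fiducial values; the CAMELS LH suite varies Omega_m
OMEGA_B, OMEGA_M = 0.049, 0.3

def regime(R_ej, R_h, k):
    """Classify the feedback regime from the toy-model scales.
    R_ej, R_h in Mpc/h; k in h/Mpc. Inequalities are approximate."""
    scale = 2.0 * np.pi / k
    if R_ej < R_h and R_ej < scale:
        return 1  # no change to f_b, Y, or Delta P
    if R_ej > R_h and R_ej < scale:
        return 2  # f_b and Y drop, Delta P unchanged
    if R_ej > R_h and R_ej > scale:
        return 3  # linear relation between f_b and Delta P
    return 4      # gas rearranged inside the halo

def delta_p_regime3(fb, f_c=1.0, m0_over_m=0.0):
    """Regime-3 prediction of Eq. (eq:DelP_P_fb); with f_c = 1 and
    M0/M = 0 this reduces to Eq. (eq:DelP_P_fb_2)."""
    prefac = -2.0 * f_c * OMEGA_B / OMEGA_M
    return prefac * (1.0 - (fb + m0_over_m) * OMEGA_M / (f_c * OMEGA_B))

# Endpoints of the high-mass limit: Delta P/P_DMO runs from
# -2 Omega_b/Omega_m at f_b = 0 to 0 at f_b = Omega_b/Omega_m
for fb in np.linspace(0.0, OMEGA_B / OMEGA_M, 3):
    print(f"f_b = {fb:.3f}  ->  dP/P = {delta_p_regime3(fb):+.3f}")
\end{verbatim}
The analogous relation for $Y$ follows by replacing $f_b\Omega_{\rm m}/\Omega_{\rm b}$ with $(Y+Y_0)/Y^{\rm SS}$ in \texttt{delta\_p\_regime3}.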
In summary, we interpret the results of Figs.~\ref{fig:Y_fb_DeltaP} and \ref{fig:scatter_plot_all_ks} in the following way. Starting at low feedback amplitude, we are initially in Regime 1. In this regime, the simulations cluster around $f_b \Omega_{\rm m}/(f_c \Omega_{\rm b}) \approx 1$ (or $(Y+Y_0)/Y^{\rm SS} \approx f_c$) and $\Delta P/P_{\rm DMO} \approx 0$, since changing the feedback parameters in this regime does not impact $f_b$ or $\Delta P/P_{\rm DMO}$. For high mass halos, we have $f_c \approx 1$ and $Y_0 \approx 0$ (although SIMBA appears to have $Y_0 >0$, even at high mass); for low mass halos, $f_c < 1$ and $Y_0 >0$. As we increase the AGN feedback amplitude, the behavior is different depending on halo mass and $k$: \begin{itemize} \item For low halo masses or low $k$, increasing the AGN feedback amplitude leads the simulations into Regime 2. Increasing the feedback amplitude in this regime moves points to lower $Y/Y^{\rm SS}$ (or $f_b \Omega_{\rm m}/\Omega_{\rm b}$) without significantly impacting $\Delta P/P_{\rm DMO}$. Eventually, when the feedback amplitude is sufficiently strong, these halos enter Regime 3, and we see a roughly linear decline in $\Delta P/P_{\rm DMO}$ with decreasing $Y/Y^{\rm SS}$ (or $f_b\Omega_{\rm m}/\Omega_{\rm b}$), as discussed above. \item For high mass halos and high $k$, we never enter Regime 2 since it is not possible to have $R_{\rm ej} > R_h$ and $R_{\rm ej} < 2\pi/k$ when $R_h$ is very large. In this case, we eventually enter Regime 3, leading to a linear trend of decreasing $\Delta P/P_{\rm DMO}$ with decreasing $Y/Y^{\rm SS}$ or $f_b \Omega_{\rm m}/\Omega_{\rm b}$, as predicted by the above discussion. This behavior is especially clear in Fig.~\ref{fig:scatter_plot_all_ks}: at high $k$, the trend closely follows the predicted linear relation. At low $k$, on the other hand, we see a more prominent Regime 2 region. The transition between these two regimes is expected to occur when $k \sim 2\pi/R_h$, which is roughly $5\,h/{\rm Mpc}$ for the halo mass regime shown in the figure. This expectation is roughly confirmed in the figure. \end{itemize} Interestingly, we never see Regime 4 behavior: when the halo mass is large and $k$ is large, we do not see rapid changes in $\Delta P/P_{\rm DMO}$ with little change to $f_b$ and $Y$. This could be because this regime corresponds to movement of the gas entirely within the halo. If the gas has time to re-equilibrate, it makes sense that we would see little change to $\Delta P/P_{\rm DMO}$ in this regime. \subsection{Predicting the power spectrum suppression from the halo observables} While the toy model described above roughly captures the trends between $Y$ (or $f_b$) and $\Delta P/P_{\rm DMO}$, it of course does not capture all of the physics associated with feedback. It is also clear that there is significant scatter in the relationships between observable quantities and $\Delta P$. It is possible that this scatter is reduced in some higher dimensional space that includes more observables. To address both of these issues, we now train statistical models to learn the relationships between observable quantities and $\Delta P/P_{\rm DMO}$. We will focus on results obtained with random forest regression \citep{Breiman2001}. We have also tried using neural networks to infer these relationships, but have not found any significant improvement with respect to the random forest results, presumably because the space is low-dimensional (i.e. we consider at most about five observable quantities at a time).
We leave a detailed comparison with other decision tree based approaches, such as gradient boosted trees \citep{Friedman_boosted_tree:01}, to a future study. \begin{figure*} \includegraphics[width=0.95\textwidth]{figs/figs_new/train_test_variantions_updated.pdf} \caption[]{ We show the results of the random forest regressor predictions for the baryonic power suppression, represented by $\Delta P/P_{\rm DMO}$, across the LH suite of simulations at four different scales $k$ using the subgrid physics models for TNG, SIMBA, and Astrid. The model was trained using the average $f_b$ of halos with masses in the range $5\times10^{12} < M/(M_{\odot}/h) < 10^{14}$ and the cosmological parameter $\Omega_{\rm m}$. The errorbars indicate the uncertainty in the predictions normalized by the uncertainty in the CV suite at each scale, showing the 16-84 percentile error on the test set. The gray band represents the expected 1$\sigma$ error from the CV suite. The model performs well when the training and test simulations are the same. When tested on an independent simulation, it remains robust at high $k$ but becomes biased at low $k$. The results presented in the remainder of the paper are based on training the model on all three simulations. The data points at each scale are staggered for clarity. } \label{fig:Pk_Y_CV} \end{figure*} We train a random forest model to go from observable quantities (e.g. $f_b/(\Omega_{\rm b}/\Omega_{\rm m})$ and $Y_{500c}/Y^{\rm SS}$) to a prediction for $\Delta P/P_{\rm DMO}$ at multiple $k$ values. The random forest model uses 100 trees with \texttt{max\_depth} $= 10$.\footnote{We use a publicly available code: \url{https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html}. We also verified that our conclusions are robust to changing the settings of the random forest.} In this section we analyze the halos in the mass bin $5\times 10^{12} < M_{\rm halo}/(M_{\odot}/h) < 10^{14}$, but we also show the results for halos with lower masses in Appendix~\ref{app:low_mass}. We also consider supplying the value of $\Omega_{\rm m}$ as input to the random forest, since it can be constrained precisely through other observations (e.g. primary CMB observations), and as we showed in \S\ref{sec:fbY}, the cosmological parameters can impact the observables.\footnote{One might worry that using cosmological information to constrain $\Delta P/P_{\rm DMO}$ defeats the whole purpose of constraining $\Delta P/P_{\rm DMO}$ in order to improve cosmological constraints. However, observations, such as those of CMB primary anisotropies, already provide precise constraints on the matter density without using information in the small-scale matter distribution. } Ultimately, we are interested in making predictions for $\Delta P/P_{\rm DMO}$ using observable quantities. However, the sample variance in the CAMELS simulations limits the precision with which we can measure $\Delta P/P_{\rm DMO}$. It is not possible to predict $\Delta P/P_{\rm DMO}$ to better than this precision. We will therefore normalize the uncertainties in the RF predictions by the cosmic variance error. In order to obtain the uncertainty in the predictions, we randomly split the data into a 70\% training and 30\% test set. After training the RF regressor using the training set and a given observable, we compute the 16th and 84th percentiles of the distribution of prediction errors evaluated on the test set. This constitutes our assessment of prediction uncertainty.
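As a concrete sketch of this regression setup (with synthetic placeholder data standing in for the actual CAMELS measurements; the array names and the mock $f_b$-$\Delta P/P_{\rm DMO}$ relation are ours), the procedure can be reproduced with scikit-learn as follows:
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder inputs per CAMELS LH box: stacked f_b/(Omega_b/Omega_m)
# of halos in the stated mass bin, and Omega_m
n_sims = 1000
fb_norm = rng.uniform(0.1, 1.0, n_sims)
omega_m = rng.uniform(0.2, 0.4, n_sims)
X = np.column_stack([fb_norm, omega_m])

# Mock Delta P/P_DMO loosely following the Regime-3 toy model, with scatter
y = -2.0 * (0.049 / omega_m) * (1.0 - fb_norm)
y += rng.normal(0.0, 0.01, n_sims)

# 70/30 train/test split, as described in the text
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

rf = RandomForestRegressor(n_estimators=100, max_depth=10, random_state=0)
rf.fit(X_tr, y_tr)

# Prediction errors on the test set, summarized by their 16th/84th
# percentiles; in the paper these are further normalized by the
# sample-variance error measured from the CV suite
err = rf.predict(X_te) - y_te
p16, p84 = np.percentile(err, [16, 84])
print(f"16th/84th percentile error: {p16:+.4f} / {p84:+.4f}")
\end{verbatim}
In practice the target values are the measured $\Delta P/P_{\rm DMO}$ of each LH box at a given $k$, and the resulting percentiles are divided by the CV-suite sample variance as described above.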
\begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figs/figs_new/plot1_y500_fb_comp_FINAL_v2.pdf} \caption{ Similar to Fig.~\ref{fig:Pk_Y_CV}, but showing results when training the RF model on different observables from all three simulations (TNG, SIMBA and Astrid) to predict $\Delta P/P_{\rm DMO}$ for a random subset of the three simulations not used in training. We find that jointly training on the deviation of the integrated SZ profile from the self-similar expectation, $Y_{500c}/Y^{\rm SS}$, and $\Omega_{\rm m}$ results in inference of the power suppression with uncertainties comparable to cosmic variance errors, with small improvements when additionally adding the baryon fraction ($f_b$) of halos in the above mass range. } \label{fig:predict_y500_fb} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figs/figs_new/plot2_yprof_comp_FINAL_v2.pdf} \caption{ Same as Fig.~\ref{fig:predict_y500_fb} but showing results from using the full pressure profile, $P_e(r)$, and electron number density profile, $n_e(r)$, instead of the integrated quantities. We again find that with pressure profile and $\Omega_{\rm m}$ information we can recover robust and precise constraints on the matter power suppression. } \label{fig:predict_profiles} \end{figure*} Fig.~\ref{fig:Pk_Y_CV} shows the accuracy of the RF predictions for $\Delta P/P_{\rm DMO}$ when trained on stacked $f_b$ (for halos in $5\times 10^{12} < M_{\rm halo}/(M_{\odot}/h) < 10^{14}$) and $\Omega_{\rm m}$, normalized to the sample variance error in $\Delta P/P_{\rm DMO}$. As we will show later in this section, this combination of inputs results in precise constraints on the matter power suppression. Specifically, to obtain the constraints, after training the RF regressor on the training simulations, we predict $\Delta P/P_{\rm DMO}$ for the test simulation boxes at four scales. Thereafter, we create a histogram of the difference between the true and predicted $\Delta P/P_{\rm DMO}$, normalized by the variance obtained from the CV set of simulations, for each respective suite of simulations (see Fig.~\ref{fig:Pk_Bk_CV}). In Fig.~\ref{fig:Pk_Y_CV}, each errorbar corresponds to the 16th and 84th percentiles from this histogram and the marker corresponds to its peak. We show the results of training and testing on a single simulation suite, and also the results of training/testing across different simulation suites. It is clear that when training and testing on the same simulation suite, the RF learns a model that comes close to the best possible uncertainty on $\Delta P/P_{\rm DMO}$ (i.e. cosmic variance). When training on one or two simulation suites and testing on another, however, the predictions show bias at low $k$. This suggests that the model learned from one simulation does not generalize very well to another in this regime. This result is somewhat different from the findings of \citet{vanDaalen:2020}, where it was found that the relationship between $f_b$ and $\Delta P/P_{\rm DMO}$ did generalize to different simulations. This difference may result from the fact that we are considering a wider range of feedback prescriptions than in \citet{vanDaalen:2020}, as well as considering significant variations in cosmological parameters. Fig.~\ref{fig:Pk_Y_CV} also shows the results of testing and training on all three simulations (black points with errorbars). Encouragingly, we find that in this case, the predictions are of comparable accuracy to those obtained from training and predicting on the same simulation suite.
This suggests that there is a general relationship across all feedback models that can be learned to go from $\Omega_{\rm m}$ and $f_b$ to $\Delta P/P_{\rm DMO}$. Henceforth, we will show results trained on all simulation suites and tested on all simulation suites. Of course, this result does not imply that our results will generalize to some completely different feedback prescription. In Fig.~\ref{fig:predict_y500_fb} we show the results of training the random forest on different combinations of $f_b$, $Y_{500c}$ and $\Omega_{\rm m}$. Consistent with the findings of \citet{vanDaalen:2020}, we find that $f_b/(\Omega_{\rm b}/\Omega_{\rm m})$ results in robust constraints on the matter power suppression (blue points with errorbars). These constraints come close to the cosmic variance limit across a wide range of $k$. We additionally find that providing $f_b$ and $\Omega_{\rm m}$ as separate inputs to the RF improves the precision of the predictions for $\Delta P/P_{\rm DMO}$ relative to using just the combination $f_b/(\Omega_{\rm b}/\Omega_{\rm m})$, with the largest improvement coming at small scales. This is not surprising given the predictions of our simple model, for which it is clear that $\Delta P/P_{\rm DMO}$ can be impacted by both $\Omega_{\rm m}$ and $f_b / (\Omega_{\rm b} /\Omega_{\rm m})$ independently. Similarly, it is clear from Fig.~\ref{fig:Pk_SIMBA_allparams} that changing $\Omega_{\rm m}$ changes the relationship between $\Delta P/P_{\rm DMO}$ and the halo gas-derived quantities (like $Y$ and $f_b$). We next consider a model trained on $Y_{500c}/Y^{\rm SS}$ (orange points in Fig.~\ref{fig:predict_y500_fb}). This model yields reasonable predictions for $\Delta P/P_{\rm DMO}$, although not quite as good as the model trained on $f_b/(\Omega_{\rm b}/\Omega_{\rm m})$. The $Y/Y^{\rm SS}$ model yields somewhat larger errorbars, and the distribution of $\Delta P/P_{\rm DMO}$ predictions is highly asymmetric. When we train the RF model jointly on $Y_{500c}/Y^{\rm SS}$ and $\Omega_{\rm m}$ (green points), we find that the predictions improve considerably, particularly at high $k$. In this case, the predictions are typically symmetric around the true $\Delta P/P_{\rm DMO}$, have smaller uncertainty compared to the model trained on $f_b/(\Omega_{\rm b}/\Omega_{\rm m})$, and comparable uncertainty to the model trained on $\{ f_b/(\Omega_{\rm b}/\Omega_{\rm m})$,$\Omega_{\rm m} \}$. We thus conclude that when combined with matter density information, $Y/Y^{\rm SS}$ provides a powerful probe of baryonic effects on the matter power spectrum. Above we have considered the integrated tSZ signal from halos, $Y_{500c}$. Measurements in data, however, can potentially probe the tSZ profiles rather than only the integrated tSZ signal (although the instrumental resolution may limit the extent to which this is possible). In Fig.~\ref{fig:predict_profiles} we consider RF models trained on the stacked full electron density and pressure profiles in the same halo mass range, instead of just the integrated quantities. The electron pressure and number density profiles are measured in eight logarithmically spaced bins between $0.1 < r/r_{200c} < 1$. We find that while the ratio $P_e(r)/P^{\rm SS}$ results in robust predictions for $\Delta P/P_{\rm DMO}$, simultaneously providing $\Omega_{\rm m}$ makes the predictions more precise. Similar to the integrated profile case, we find that additionally providing the electron density profile information only marginally improves the constraints.
We also show the results when jointly using the measured pressure profiles of both the low and high mass halos to infer the matter power suppression. We find that this leads to only marginal improvements in the constraints. Note that we have input the 3D pressure and electron density profiles in this case. Even though observed SZ maps are projected quantities, we can infer the 3D pressure profiles from the model used to analyze the projected correlations. \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figs/figs_new/plot4_Bkeq_comp_all.pdf} \caption{Same as Fig.~\ref{fig:predict_y500_fb}, but for the impact of feedback on the bispectrum in equilateral triangle configurations. We find that the inclusion of pressure profile information results in unbiased constraints on feedback effects on the bispectrum. } \label{fig:predict_Bk_eq} \end{figure*} \subsection{Predicting baryonic effects on the bispectrum with $f_b$ and the electron pressure} In Fig.~\ref{fig:predict_Bk_eq}, we repeat our analysis from above to make predictions for baryonic effects on the matter bispectrum, $\Delta B(k)/B(k)$. Similar to the matter power spectrum, we train and test our model on a combination of the three simulations. We train and test on equilateral triangle bispectrum configurations at different scales $k$. We again see that information about the electron pressure and $\Omega_{\rm m}$ results in precise and unbiased constraints on the impact of baryonic physics on the bispectrum. The constraints improve as we go to smaller scales. In Appendix~\ref{app:Bk_sq} we show a similar methodology applied to squeezed bispectrum configurations. However, there are several important caveats to these results. The bispectrum is sensitive to high-mass ($M > 5\times 10^{13} M_{\odot}/h$) halos \citep{Foreman:2020:MNRAS:}, which are missing from the CAMELS simulations. Consequently, our measurements of baryonic effects on the bispectrum can be biased when using CAMELS. The simulation resolution can also impact the bispectrum significantly. A future analysis with larger-volume simulations at high resolution could use the methodology introduced here to obtain more robust results. Finally, there is likely to be covariance between the power spectrum suppression and baryonic effects on the bispectrum, as they both stem from the same underlying physics. We defer a complete exploration of these effects to future work. \section{Results II: ACTxDES measurements and forecast} \label{sec:results_data} \begin{figure*} \includegraphics[scale = 0.45]{figs/figs_new/power_supp_data_forecast_v2.pdf} \caption[]{Constraints on the impact of feedback on the matter power spectrum obtained using our trained random forest model applied to measurements of $Y_{\rm 500c}/Y^{\rm SS}$ from the DESxACT analysis of \citet{Pandey:2022} (black points with errorbars). We also show the expected improvements from future halo-$y$ correlations from DESIxSO using the constraints in \citet{Pandey:2020}. We compare these to the inferred constraints obtained using cosmic shear \citep{Chen:2023:MNRAS:} and additionally including X-ray and kSZ data \citep{Schneider:2022:MNRAS:}. We also compare with the results from larger simulations: OWLS \citep{Schaye:2010:MNRAS:}, BAHAMAS \citep{McCarthy:2017:MNRAS:} and TNG-300 \citep{Springel:2018:MNRAS:}. } \label{fig:Pk_data_forecast} \end{figure*} Our analysis above has resulted in a statistical model (i.e.
a random forest regressor) that predicts the matter power suppression $\Delta P/P_{\rm DMO}$ given values of $Y_{500c}$ for low-mass halos. This model is robust to significant variations in the feedback prescription, at least across the SIMBA, TNG and Astrid models. We now apply this model to constraints on $Y_{500c}$ coming from the cross-correlation of galaxy lensing shear with tSZ maps measured using Dark Energy Survey (DES) and Atacama Cosmology Telescope (ACT) data. \citet{Gatti:2022} and \citet{Pandey:2022} measured the cross-correlations of DES galaxy lensing with Compton $y$ maps from a combination of Advanced ACT \citep{Madhavacheril:2020:PhRvD:} and {\it Planck} data \citep{PlanckCollaboration:2016:A&A:} over an area of 400 sq. deg. They analyze these cross-correlations using a halo model framework, where the pressure profile in halos is parameterized using a generalized Navarro-Frenk-White profile \citep{Navarro:1996:ApJ:, Battaglia:2012:ApJ:b}. This pressure profile is described using four free parameters, allowing for scaling with mass, redshift and distance from the halo center. The constraints on the parameterized pressure profiles can be translated directly into constraints on $Y_{500c}$ for halos in the mass range relevant to our random forest models. We use the parameter constraints from \citet{Pandey:2022} to generate 400 samples of the inferred 3D profiles of halos at $z=0$ (i.e. the redshift at which the RF models are trained) in ten logarithmically-spaced mass bins in the range $12.7 < \log_{10}(M/h^{-1} M_{\odot}) < 14$. We then perform the volume integral of these profiles to infer $Y_{\rm 500c}(M, z)$ (see Eq.~\ref{eq:Y500_from_Pe}). Next, we generate a halo-averaged value of $Y_{500c}/Y^{\rm SS}$ for the $j$th sample by integrating over the halo mass distribution in CAMELS: \begin{equation}\label{eq:Pe_stacked_data} \bigg\langle \frac{Y_{\rm 500c}}{Y^{\rm SS}} \bigg\rangle^j = \frac{1}{\bar{n}^j} \int dM \bigg(\frac{dn}{dM}\bigg)^j_{\rm CAMELS} \frac{Y^j_{\rm 500c}(M)}{Y^{\rm SS}} \end{equation} where $\bar{n}^j = \int dM (dn/dM)^j_{\rm CAMELS}$ and $(dn/dM)^j_{\rm CAMELS}$ is a randomly chosen halo mass function from the CV set of boxes of TNG, SIMBA or Astrid. This procedure allows us to incorporate the impact and uncertainties of the CAMELS box size on the halo mass function. Note that due to the small box size of CAMELS, there is a deficit of high mass halos and hence the functional form of the mass function differs somewhat from other fitting functions in the literature, e.g. \cite{Tinker:2008:ApJ:}. Fig.~\ref{fig:Pk_data_forecast} shows the results of feeding the $Y_{500c}/Y^{\rm SS}$ values calculated above into our trained RF model to infer the impact of baryonic feedback on the matter power spectrum (black points with errorbars). The RF model used is that trained on the TNG, SIMBA and Astrid simulations. The errorbars represent the 16th and 84th percentiles of the recovered $\Delta P/P_{\rm DMO}$ distribution using the 400 samples described above. Note that in this inference we fix the matter density parameter to $\Omega_{\rm m} = 0.3$, the same value as used by the CAMELS CV simulations, since we use these to estimate the halo mass function. In the same figure, we also show the constraints from \citet{Chen:2023:MNRAS:} and \citet{Schneider:2022:MNRAS:} obtained using the analysis of complementary datasets. \citet{Chen:2023:MNRAS:} analyze the small-scale cosmic shear measurements from the DES Year-3 data release using a baryon correction model.
Note that in this analysis, they only use a limited range of cosmologies, particularly restricting to high $\sigma_8$ due to the requirements of emulator calibration. Moreover, they also impose cosmological constraints from the large-scale analysis of the DES data. Note that unlike the procedure presented here, their modeling and constraints are sensitive to the priors on $\sigma_8$. \citet{Schneider:2022:MNRAS:} analyze the X-ray data (as presented in \citealt{Giri:2021:JCAP:}) and kSZ data from ACT and SDSS \citep{Schaan:2021:PhRvD:} and the cosmic shear measurement from KiDS \citep{Asgari:2021}, using another version of the baryon correction model. A joint analysis of these complementary datasets leads to crucial degeneracy breaking among the parameters. It would be interesting to include the tSZ observations presented here in the same framework, as they could potentially make the constraints more precise. Several caveats about our analysis with data are in order. First, the lensing-SZ correlation is most sensitive to halos in the mass range $M_{\rm halo} \geq 10^{13} M_{\odot}/h$. However, our RF model operates on halos with mass in the range $5 \times 10^{12} \leq M_{\rm halo}/(M_{\odot}/h) \leq 10^{14}$, with the limited volume of the simulations restricting the number of halos above $10^{13} M_{\odot}/h$. We have attempted to account for this selection effect by using the halo mass function from the CV sims of the CAMELS simulations when calculating the stacked profile. However, using a larger-volume simulation suite would be a better alternative (also see discussion in Appendix~\ref{app:volume_res_comp}). Moreover, the CAMELS simulation suite also fixes the value of $\Omega_{\rm b}$. There may be a non-trivial impact on the inference of $\Delta P/P_{\rm DMO}$ when varying that parameter. Note, though, that $\Omega_b$ is tightly constrained by other cosmological observations. Lastly, the sensitivity of the lensing-SZ correlations using DES galaxies peaks in the range $0.1 < z < 0.6$. However, in this study we extrapolate those constraints to $z=0$ using the pressure profile model of \citet{Battaglia:2012:ApJ:b}. We note that inference obtained at the peak sensitivity redshift would be a better alternative, but we do not expect this to have a significant impact on the conclusions here. In order to shift the sensitivity of the data correlations to lower halo masses, it would be preferable to analyze the galaxy-SZ and halo-SZ correlations. In \citet{Pandey:2020} we forecast the constraints on the inferred 3D pressure profile from future halo-SZ correlations using DESI halos and CMB-S4 SZ maps for a wide range of halo masses. In Fig.~\ref{fig:Pk_data_forecast} we also show the expected constraints on the matter power suppression using the halo-SZ correlations from halos in the range $M_{500c} > 5\times 10^{12} M_{\odot}/h$. We again follow the same methodology as described above to create a stacked normalized integrated pressure (see Eq.~\ref{eq:Pe_stacked_data}). Moreover, we also fix $\Omega_{\rm m}=0.3$ to predict the matter power suppression. Note that we shift the mean value of $\Delta P/P_{\rm DMO}$ to the recovered value from the BAHAMAS high-AGN simulations \citep{McCarthy:2017:MNRAS:}. As we can see in Fig.~\ref{fig:Pk_data_forecast}, we can expect to obtain significantly more precise constraints from these future observations.
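For reference, the mass-function weighting of Eq.~\ref{eq:Pe_stacked_data} used in this section amounts to the following minimal sketch (the power-law form of $Y_{500c}(M)/Y^{\rm SS}$ and the mass-function shape below are invented placeholders; in practice they come from the sampled gNFW profiles and a randomly chosen CAMELS CV box):
\begin{verbatim}
import numpy as np

# Ten log-spaced mass bins: 12.7 < log10(M / (Msun/h)) < 14
m_grid = np.logspace(12.7, 14.0, 10)

def y500_over_yss(m):
    """Placeholder Y_500c/Y^SS vs. halo mass; in practice obtained by
    volume-integrating each sampled pressure profile (Eq. Y500_from_Pe)."""
    return 0.8 * (m / 1e13) ** 0.1

def dn_dm_camels(m):
    """Placeholder halo mass function; in the paper this is measured
    from a randomly chosen CV box of TNG, SIMBA or Astrid."""
    return m ** -1.9

# Halo-averaged <Y_500c/Y^SS> for one sample j, Eq. (eq:Pe_stacked_data)
w = dn_dm_camels(m_grid)
nbar = np.trapz(w, m_grid)
y_stacked = np.trapz(w * y500_over_yss(m_grid), m_grid) / nbar
print(f"<Y_500c/Y^SS> = {y_stacked:.3f}")
\end{verbatim}
Repeating this for each of the 400 profile samples and feeding the results into the trained RF model yields the distribution of $\Delta P/P_{\rm DMO}$ whose percentiles set the errorbars in Fig.~\ref{fig:Pk_data_forecast}.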
\section{Conclusions} \label{sec:conclusion} We have shown that the tSZ signals from low-mass halos contain significant information about the impacts of baryonic feedback on the small-scale matter distribution. Using models trained on hydrodynamical simulations with a wide range of feedback implementations, we demonstrate that information about baryonic effects on the power spectrum and bispectrum can be robustly extracted. By applying these same models to measurements with ACT and DES, we have shown that current tSZ measurements already constrain the impact of feedback on the matter distribution. Our results suggest that using simulations to learn the relationship between halo gas observables and baryonic effects on the matter distribution is a promising way forward for constraining these effects with data. Our main findings from our explorations with the CAMELS simulations are the following: \begin{itemize} \item In agreement with \citet{vanDaalen:2020}, we find that the baryon fraction in halos correlates with the power spectrum suppression. We find that the correlation is especially robust at small scales. \item We find (in agreement with \citealt{Delgado:23}) that there can be significant scatter in the relationship between baryon fraction and power spectrum suppression at low halo mass, and that the relationship varies to some degree with feedback implementation. However, the bulk trends appear to be consistent regardless of feedback implementation. \item We propose a simple model that qualitatively (and in some cases quantitatively) captures the broad features in the relationships between the impact of feedback on the power spectrum, $\Delta P/P_{\rm DMO}$, at different values of $k$, and halo gas-related observables like $f_b$ and $Y_{500c}$ at different halo masses. \item Despite significant scatter in the relations between $Y_{500c}$ and $\Delta P/P_{\rm DMO}$ at low halo mass, we find that simple random forest models yield tight and robust constraints on $\Delta P/P_{\rm DMO}$ given information about $Y_{500c}$ in low-mass halos and $\Omega_{\rm m}$. \item Using the pressure profile instead of just the integrated $Y_{\rm 500c}$ signal provides additional information about $\Delta P/P_{\rm DMO}$, leading to 20-50\% improvements when not using any cosmological information. When additionally providing the $\Omega_{\rm m}$ information, the improvements in constraints on baryonic changes to the power spectrum or bispectrum are modest when using the full pressure profile relative to integrated quantities like $Y_{500c}$. \item The pressure profiles and baryon fractions also carry information about baryonic effects on the bispectrum. \end{itemize} Our main results from our analysis of the DESxACT shear-$y$ correlation measurements are: \begin{itemize} \item We have used the DES-ACT measurement of the shear-tSZ correlation from \cite{Gatti:2022} and \cite{Pandey:2022} to infer $Y_{500c}$ for halos in the mass range relevant to our random forest models. Feeding the measured $Y_{500c}$ into these models, we have inferred the impact of baryonic effects on the power spectrum, as shown in Fig.~\ref{fig:Pk_data_forecast}. \item We show that constraints on baryonic effects on the power spectrum will improve significantly in the future, particularly using halo catalogs from DESI and tSZ maps from CMB-S4. \end{itemize} With data from future galaxy and CMB surveys, we expect constraints on the tSZ signal from halos across a wide mass and redshift range to improve significantly.
These improvements will come from both the galaxy side (e.g. halos detected over larger areas of the sky, down to lower halo masses, and out to higher redshifts) and the CMB side (more sensitive tSZ maps over larger areas of the sky). Our forecast for DESI and CMB Stage 4 in Fig.~\ref{fig:Pk_data_forecast} suggests that very tight constraints can be obtained on the impact of baryonic feedback on the matter power spectrum. We expect that these constraints on the impact of baryonic feedback will enable the extraction of more cosmological information from the small-scale matter distribution. \section{Acknowledgements} DAA acknowledges support by NSF grants AST-2009687 and AST-2108944, CXO grant TM2-23006X, and Simons Foundation award CCA-1018464. \section{Data Availability} The TNG and SIMBA simulations used in this work are part of the CAMELS public data release \citep{Villaescusa-Navarro:2021:ApJ:} and are available at \url{https://camels.readthedocs.io/en/latest/}. The Astrid simulations used in this work will be made public before the end of the year 2023. The data used to make the plots presented in this paper are available upon request. \bibliographystyle{mnras}
\section*{Acknowledgements} This work was supported by the CNPq-FAPEAL grant of Brazil. The author would like to thank Professor A.M. Kamchatnov for invaluable discussions.
\section{Introduction} One of the oldest baryonic remnants of the early universe in our Galaxy is the massive black hole in Sgr A*. At redshifts higher than $z\sim 4$, black holes are thought to grow rapidly through radiatively inefficient accretion (Inayoshi et al 2016) and the merger of subsystems harbouring lower mass black holes (Volonteri 2010). After that time, the growth is regulated by the infall of gas, stars and dark matter. The last few e-folds of mass over 10 Gyr are grown via radiatively {\it efficient} accretion (Rees \& Volonteri 2007). The conversion efficiency must be $\epsilon\approx 10\%$ to explain the UV/x-ray background (Soltan 1982; Yu \& Tremaine 2002; Zhang \& Lu 2019). A black hole with mass $M_\bullet$ today has converted $\epsilon M_\bullet c^2$ of its rest mass into emergent energy. Over the past 10 Gyr, Sgr A*, for which $M_{\bullet} = 4.15\times 10^6$$M_{\odot}$\ (Gravity Collaboration 2019), must have released $\sim 10^{60}$ erg in relativistic particles and electromagnetic radiation to get to its current state. In the Milky Way, we observe the x-ray/$\gamma$-ray bubbles with an inferred energy 10$^{56-57}$ erg. The first evidence of a kiloparsec-scale outflow in the Galaxy came from bipolar {\it Rosat} 1.5 keV x-ray emission inferred to be associated with the Galactic Centre (Bland-Hawthorn \& Cohen 2003). In Fig.~\ref{f:fermi}, this same component is directly associated with the {\it Fermi} $\gamma$-ray bubbles (1-100 GeV) discovered by Su et al (2010). Star formation activity fails on energetic grounds by a factor of 400 based on what we see today (Miller \& Bregman 2016), or $\sim$100 if we allow for past starbursts within the limits imposed by the resolved stellar population (Nataf 2016; Bland-Hawthorn et al 2013, hereafter BH2013). The source of the x-ray/$\gamma$-ray bubbles can only be from nuclear activity: all contemporary leptonic models of the x-ray/$\gamma$-ray bubbles agree on this point, with timescales for the event falling in the range 2 to 8 Myr (Guo \& Mathews 2012; Miller \& Bregman 2016; Narayanan \& Slatyer 2017; cf. Carretti et al 2012). These must be driven by the AGN (jet and/or accretion-disk wind) on a timescale of order a few Myr; for a comprehensive review, see Yang et al (2018). AGN jets are remarkably effective at blowing bubbles regardless of the jet orientation because the jet head is diffused or deflected by each interaction with density anomalies in a fractal ISM (Zovaro et al 2019). The evidence for an active jet today in the Galactic Centre is weak (Bower \& Backer 1998). Su \& Finkbeiner (2012) found a jet-like feature in $\gamma$-rays extending from $({\ell,b})$ $\approx$ (-11\hbox{${}^\circ$}, 44\hbox{${}^\circ$}) to (11\hbox{${}^\circ$}, -44\hbox{${}^\circ$}); this axis is indicated in Fig.~\ref{f:nidever}. In recent simulations, the AGN jet drills its way through the multiphase ISM with a speed of roughly 1 kpc per Myr (Mukherjee et al 2018, Appendix A). If the tentative claims are not confirmed, this may indicate that either the AGN outflow was not accompanied by a jet, or the jet has already pushed through the inner disk gas and has now dispersed. Absorption-line UV spectroscopy of background AGN and halo stars reveals cool gas clouds entrained in the outflow (Fox et al 2015; Bordoloi et al 2017; Savage et al 2017; Karim et al 2018); ${\rm H\:\scriptstyle\rm I}$\ clouds are also seen (Di Teodoro et al 2018). Modeling of the cloud kinematics yields similar timescales for the wind ($\sim$6--9 Myr; Fox et al.
2015, Bordoloi et al. 2017). An updated shock model of the ${\rm O\:\scriptstyle VII}$\ and ${\rm O\:\scriptstyle VIII}$\ x-ray emission over the bubble surfaces indicates the initial burst took place $4\pm 1$ Myr ago (Miller \& Bregman 2016). A number of authors (e.g. Zubovas \& Nayakshin 2012) tie the {\it localized} x-ray/$\gamma$-ray activity to the formation of the young stellar annulus ($M_\star \sim 10^4$$M_{\odot}$) in orbit about the Galactic Centre (q.v. Koyama 2018). These stars, with uncertain ages in the range 3-8 Myr, are mostly on elliptic orbits and stand out in a region dominated by an old stellar population (Paumard et al 2006; Yelda et al 2014). A useful narrative of how this situation can arise is given by Lucas et al (2013): a clumpy prolate cloud with a dimension of order the impact radius, and oriented perpendicular to the accretion plane, sets up accretion timescales that can give rise to high-mass stars in elliptic prograde and retrograde orbits. Nuclear activity peaked during the golden age of galaxy formation ($z=1-3$; Hopkins \& Beacom 2006), but it is observed to occur in a few percent of galaxies at lower levels today. Given that most galaxies possess nuclear black holes, this activity may be ongoing and stochastic in a significant fraction, even if only detectable for a small percentage of sources at a given epoch (Novak et al 2011). If most of the activity occurred after $z \sim 1$, this argues for \textit{Fermi} bubble-like outbursts roughly every $\sim$10 Myr. Each burst may have lasted up to $\sim$1 Myr at a time (Guo \& Mathews 2012), flickering on shorter timescales. This argues that $\sim$10\% of all galaxies are undergoing a Seyfert phase at any time, although most outcomes, like the \textit{Fermi} bubbles, are not easily detectable (cf. Sebastian et al 2019). Independent of the mechanical timescales, BH2013 show that the high levels of \Ha\ emission along the Magellanic Stream are consistent with a Seyfert ionizing flare event 2-3 Myr ago (see Fig.~\ref{f:nidever}); starburst-driven radiation fails by two orders of magnitude. Ionisation cones are not uncommon in active galaxies today (e.g. Pogge 1988; Tsvetanov et al 1996) and can extend to $\sim$100 kpc distances (Kreimeyer \& Veilleux 2013). Here we revisit our earlier work in light of new observations and a better understanding of the Magellanic Stream's distance from the Galaxy. In \S 2, we update what has been learnt about the ionization, metallicity and gas content of the Magellanic Stream and its orbit properties. \S 3 builds up a complete model of the Galactic UV radiation field and includes a major AGN contribution to illustrate the impact of nuclear activity. In \S 4, we carry out time-dependent ionization calculations to update the likely timescale for the Seyfert flare. \S 5 concludes with suggested follow-up observations and discusses the implications of our findings. \section{New observations} \subsection{Magellanic Stream: gas and metal content} Since its discovery in the 1970s, many authors have studied the physical properties of the gas along the Magellanic Stream (Putman et al 1998; Br\"{u}ns et al 2005; Kalberla et al 2005; Stanimirovic et al 2008; Fox et al 2010; Nigra et al 2012). The Stream lies along a great arc that spans more than half the sky (e.g. Nidever et al 2010).
Its metallicity content is generally about one tenth of the solar value, consistent with the idea that the gas came from the SMC and/or the outer regions of the LMC (Fox et al 2013), although a filament tracing back to the inner LMC has an elevated level of metal enrichment (Richter et al 2013). The inferred total mass of the Magellanic Stream is ultimately linked to its distance $D$ from the Galactic Centre. The total ${\rm H\:\scriptstyle\rm I}$\ mass of the Magellanic gas system (corrected for He) is $5\times 10^8\; (D/55\; {\rm kpc})^2$$M_{\odot}$\ (Br\"{u}ns et al 2005) but this may not even be the dominant mass component (Bland-Hawthorn et al 2007; d'Onghia \& Fox 2016). Fox et al (2014) find that the plasma content may dominate over the neutral gas by a factor of a few such that the Stream's total gas mass is closer to $2.0\times 10^9\; (D/55\; {\rm kpc})^2$$M_{\odot}$. We discuss the likely value of $D$ measured along the South Galactic Pole (SGP) in the next section. \subsection{Magellanic Stream: orbit trajectory} The precise origin of the trailing and leading arms of the Magellanic Stream is unclear. Theoretical models for the Stream date back to at least the early seminal work of Fujimoto \& Sofue (1976, 1977). For three decades, in the absence of a distance indicator, the Stream's distance over the SGP was traditionally taken to be the midpoint of the LMC and SMC distances, i.e. $D=55$ kpc, a distance which is now thought to be too small. In a series of papers, Kallivayalil and collaborators show that the proper motions of the LMC and SMC are 30\% higher than the original longstanding estimates (e.g. Kallivayalil et al 2006, 2013). Over the same period, mass estimates of the Galaxy have decreased to $M_{\rm vir} = 1.3\pm 0.3\times 10^{12}$$M_{\odot}$\ (q.v. Bland-Hawthorn \& Gerhard 2016; McMillan 2017). Thus the orbit of the Magellanic System must be highly elliptic. Contemporary models consider the LMC and SMC to be on their first infall with an orbital period of order a Hubble time (Besla et al 2007, 2012; Nichols et al 2011). The Stream is a consequence of the tidal interaction between both dwarfs. The models move most of the trailing Stream material to beyond 75 kpc over the SGP. Here we take a representative model for the Stream particles from a recent hydrodynamical simulation (Guglielmo et al 2014), adopting a smooth fit to the centroid of the particle trajectory and some uncertainty about that trajectory. In passing, we note that while the trailing Stream is understood in these models, the `leading arm' is unlikely to be explained as a tidal extension in the same way because of the strong ram-pressure confinement imposed by the Galactic corona ahead of the Magellanic Clouds (Tepper-Garcia et al 2019). It can be debris arising from an earlier interaction of the LMC-SMC system protected by a Magellanic corona, for example. Thus, the origin of the `leading arm' is unclear and its distance is poorly constrained. Most of the cool gas ahead of the Clouds lies outside of the ionization cones in Fig.~\ref{f:nidever}. \subsection{Magellanic Stream: ionization} Weiner \& Williams (1996) first discovered elevated levels of H$\alpha$ emission along the Magellanic Stream, detections that have been confirmed and extended through follow-up observations (Weiner et al 2002; Putman et al 2003; BH2013; Barger et al 2017).
There have been several attempts to understand this emission over the past two decades in terms of Galactic sources (Bland-Hawthorn \& Maloney 1999), particle trapping by magnetic fields (Konz et al 2001), thermal conduction and mixing (Fox et al 2005), cloud-halo (Weiner \& Williams 1996) and cloud-cloud interactions (Bland-Hawthorn et al 2007). While these sources can contribute to ionization and heating along the Magellanic Stream, in light of new evidence, we believe that only the Seyfert flare model (BH2013) survives as a likely candidate for the brightest emission. Further evidence for non-thermal photons being the source of the ionization comes from a UV spectroscopic study carried out with the {\it Hubble Space Telescope} ({\it HST}) of distant quasars that lie behind the Magellanic Stream. Fox et al (2014) infer ionization levels along the Stream from UV absorption features arising from ${\rm H\:\scriptstyle\rm I}$, ${\rm Si\:\scriptstyle II}$, ${\rm Si\:\scriptstyle III}$, ${\rm Si\:\scriptstyle IV}$, \ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi\ and \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi. They find that there are three patches along the Stream that require enhanced levels of hard ionization (30-50 eV photons) relative to stellar photons. One is highly localized at the LMC; the other regions lie towards the NGP and SGP. We argue below that these regions fall within the `ionization cones' of the Seyfert event. These data are presented and modelled in \S 4. \begin{figure*}[hbtp] \centering \includegraphics[scale=0.4]{Figs/galactic_model_zoom.png} \caption{Our model for the ionizing radiation field over the South Galactic Hemisphere arising from the opaque Galactic disk and the Large Magellanic cloud (\S 3). The units of the contours are $\log$(phot cm$^{-2}$ s$^{-1}$). Small contributions from the hot Galactic corona and the cosmic UV background are also included. The $X-Z$ plane runs through the LMC, the SGP and the Galactic Centre defined by the plane of Magellanic longitude. The ionizing flux contours are spaced in equal log intervals.} \label{f:Galaxy} \end{figure*} \begin{figure*}[hbtp] \centering \includegraphics[scale=0.5]{Figs/agn_fe03_paralleltracks.png} \caption{The ionizing field presented in Fig.~\ref{f:Galaxy} with the added contribution of a Seyfert flare event, but shown on a larger physical scale. The units of the contours are $\log$(phot cm$^{-2}$ s$^{-1}$). For illustration, we show the impact of a sub-Eddington flare ($f_E=0.3$). This flux level is needed to reproduce what we observe but is inconsistent with Sgr A* activity today. A more likely scenario is that the event occurred in the past and what is seen today is the fading recombination of this flare (BH2013). The black trajectory is a fit to the orbit path of the Magellanic Stream particles (Guglielmo et al 2014) that uses updated parameters for the Galaxy and is quite typical of modern simulations. The blue and red tracks represent the $3\sigma$ uncertainties for the distribution of Stream particles. The ionizing flux contours are spaced in equal log intervals. A schematic movie of pulsing AGN radiation is available at \url{http://www.physics.usyd.edu.au/~jbh/share/Movies/SgrA*_ionized_cone.gif}. 
We also include a simulation of flickering AGN radiation (Novak et al 2011) impinging on the Magellanic Stream at \url{http://www.physics.usyd.edu.au/~jbh/share/Movies/MilkyWay_ionized_cone.mp4}; the movie ends when the Magellanic Clouds reach their observed position today. } \label{f:Seyfert} \end{figure*} \section{New models} \subsection{Galactic ionization model} We model the Magellanic Stream H$\alpha$ emission and carbon absorption features using the Galactic ionization model presented by Bland-Hawthorn \& Maloney (1999, 2002), updated with the time-dependent calculations in BH2013. A cross-section through the 3D model across the South Galactic Hemisphere passing through the Galactic Centre and the LMC is shown in Fig.~\ref{f:Galaxy}. The Galactic disk parameters remain unchanged from earlier work where we considered the expected emission arising from stars. The total flux at a frequency $\nu$ reaching an observer located at a distance $D$ is obtained from integrating the specific intensity $I_\nu$ over the surface of a disk, i.e. \begin{equation} F_\nu = \int_A I_\nu({\bf n})({\bf n}.{\bf N}) {{dA}\over{D^2}} \label{e:flux} \end{equation} where ${\bf n}$ and ${\bf N}$ are the directions of the line of sight and the outward normal to the surface of the disk, respectively. In order to convert readily to an \Ha\ surface brightness, we transform equation~(\ref{e:flux}) to a photon flux after including the effect of disk opacity $\tau_{D}$ at the Lyman limit such that \begin{equation} \varphi_\star = \int_\nu {{F_\nu}\over{h\nu}} \exp(-\tau_{D}/\cos\theta)\ \cos \theta\ d\nu \label{e:star} \end{equation} for which $\vert \theta \vert \neq \pi/2$ and where $\varphi_\star$ is the photoionizing flux from the stellar population, ${\bf n}.{\bf N} = \cos\theta$ and $h$ is Planck's constant. This is integrated over frequency above the Lyman limit ($\nu=13.6\; {\rm eV}/h$) to infinity to convert to units of photon flux (phot cm$^{-2}$ s$^{-1}$). The mean vertical opacity of the disk over the stellar spectrum is $\tau_{D}=2.8\pm 0.4$, equivalent to a vertical escape fraction of $f_{\star,{\rm esc}} \approx 6$\% perpendicular to the disk (${\bf n}.{\bf N} = 1$). The photon spectrum of the Galaxy is a complex time-averaged function of energy $N_\star$ (photon rate per unit energy) such that $4\pi D^2 \varphi_\star = \int_0^{\infty} N_\star(E)\: dE$. For a given ionizing luminosity, we can determine the expected H$\alpha$ surface brightness at the distance of the Magellanic Stream. For an optically thick cloud ionized on one side, we relate the emission measure to the ionizing photon flux using ${\cal E}_m = 1.25\times 10^{-6} \varphi_\star$ cm$^{-6}$ pc (Bland-Hawthorn \& Maloney 1999). In Appendix A, we relate ${\cal E}_m$ to the more familiar milliRayleigh units (mR) used widely in diffuse detection work. The Galactic UV contribution at the distance $D$ of the Magellanic Stream is given by \begin{equation} \mu_{\star,{\rm H}\alpha} = 10\zeta \left({f_{\star,{\rm esc}}}\over{0.06} \right) \left({D}\over{{\rm 75\ kpc}}\right)^{-2} \ \ \ {\rm mR} . \label{f:Gal} \end{equation} The correction factor $\zeta \approx 2$ is included to accommodate weakly constrained ionizing contributions from old stellar populations and fading supernova bubbles in the disk (BH2013). After Barger et al (2013) and Fox et al (2014), we incorporate the UV contribution from the Large Magellanic Cloud (LMC) but with an important modification. 
Barger et al (2013) showed how the LMC UV ionizing intensity is sufficient to ionize the Magellanic Bridge in close proximity; the SMC UV radiation field can be neglected. This is assisted by the orientation of the LMC disk with respect to the Bridge. In our treatment, the LMC's greater distance and orientation do not assist the ionization of the Magellanic Stream. We treat the LMC as a point source with a total ionizing luminosity reduced by a factor $\exp(-\tau_L)$; $\tau_L=1.7$ is the mean LMC disk opacity which we scale from the Galactic disk opacity ($\tau_D=2.8$) by the ratio of their metallicities (Fox et al 2014). We stress that the LMC cannot be the source of the Magellanic Stream ionization. Its imprint over the local ${\rm H\:\scriptstyle\rm I}$\ is clearly seen in Barger et al (2013, Fig. 16). One interesting prospect, suggested by the referee, is that one or more ultraluminous x-ray sources (ULXs) in the LMC have produced a flash of hard UV/x-ray radiation in the recent past. In fact, a few such sources are observed there (Kaaret et al 2017) and may be responsible for the localized \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi/\ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi\ enhancement at the LMC (Fox et al 2014). We include the super-Eddington accretion spectrum in our later models (\S 4.3, \S 4.4.1) to emphasize this point. \smallskip \noindent {\sl Other sources.} We have used updated parameters for the Galactic corona from Miller \& Bregman (2016), but the UV emission from the halo remains negligible (i.e. a few percent at most) compared to the Galactic disk ($\varphi_\star \sim 5\times 10^4$ phot cm$^{-2}$ s$^{-1}$ at 75 kpc along the SGP). The cosmic ionizing intensity is taken from Weymann et al (2001) but this is of the same order as the hot corona ($\varphi_\star\lesssim 10^{3.5}$ phot cm$^{-2}$ s$^{-1}$ at 75 kpc). An earlier model attempted to explain the emission in terms of the Stream's interaction with the halo (Bland-Hawthorn et al 2007). The direct interaction of the clouds with the coronal gas is too weak to generate sufficient emission through collisional processes, but these authors show that a `shock cascade' arises if sufficient gas is stripped from the clouds such that the following clouds collide with the ablated material. This can be made to work if the Stream is moving through comparatively dense coronal material ($n_{\rm hot}\sim 10^{-4}$ cm$^{-3}$). But the greater Stream distance ($D \gtrsim 75$ kpc; e.g. Jin \& Lynden-Bell 2008) makes this less likely (Tepper-Garcia et al 2015). Barger et al (2017) adopt a massive hot halo in order to maximise the contribution from the shock cascade; whether such a corona is possible is still an open question (cf. Faerman et al 2017; Bregman et al 2018). The shock cascade model as presented above struggles to produce a Stream H$\alpha$ background of $\sim 100$ mR, although there are other factors to consider in future models. The respective roles of magnetic fields (Konz et al 2001), thermal conduction (Vieser \& Hensler 2007) and turbulent mixing (Li et al 2019) have not been considered together in a dynamic turbulent boundary layer. They can work for or against each other in amplifying the observed recombination emission. Radiative/particle MHD models on galactic scales are in their infancy (Sutherland 2010; Bland-Hawthorn et al 2015) but will need to be addressed in future years.
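As a numerical cross-check of the Galactic terms above, a minimal Python sketch of equation~(\ref{f:Gal}) and the emission-measure conversion (the function names are ours):
\begin{verbatim}
def mu_halpha_galactic(d_kpc, zeta=2.0, f_esc=0.06):
    """Galactic stellar H-alpha contribution at the Stream in mR,
    equation (f:Gal); zeta ~ 2 absorbs old stellar populations and
    fading supernova bubbles (BH2013)."""
    return 10.0 * zeta * (f_esc / 0.06) * (d_kpc / 75.0) ** -2.0

def emission_measure(phi):
    """E_m in cm^-6 pc for an optically thick cloud ionized on one side
    by a photon flux phi in phot cm^-2 s^-1,
    E_m = 1.25e-6 * phi (Bland-Hawthorn & Maloney 1999)."""
    return 1.25e-6 * phi

print(mu_halpha_galactic(75.0))  # ~20 mR over the SGP at D = 75 kpc
print(emission_measure(5e4))     # E_m for the quoted SGP disk flux
\end{verbatim}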
\begin{figure*}[hbtp] \centering \includegraphics[scale=0.3]{Figs/Madau.png} \caption{Accretion disk model (Madau 1988; Madau et al 2014) for high mass accretion rates. The thick disk is defined within $r \lta 500\: r_g \approx 20$ AU, where $r_g = GM_\bullet/c^2$ is the black hole gravitational radius. It produces an ionizing radiation field that is strongly dependent on viewing angle and photon energy. The vertical dashed line indicates the cut-off imposed by the dusty torus on much larger physical scales. (Left) Specific luminosity as a function of angle from the SGP evaluated at two different photon energies. (Right) The same model as the LHS now plotted on a linear scale, normalized at the Lyman limit, to emphasize the self truncation of the disk radiation field, particularly at higher energies. } \label{f:Madau} \end{figure*} \begin{figure*}[hbtp] \centering \includegraphics[scale=0.4]{Figs/em_lms_log_dots_fe03.png} \includegraphics[scale=0.4]{Figs/em_lms_log_dots_fe10.png} \caption{The predicted H$\alpha$ intensity along the Magellanic Stream ($D=75$ kpc) as a function of Magellanic longitude. The data points are taken from Weiner \& Williams (1996; W96), Weiner et al (2002; W02), WHAM Survey (BH2013), and Putman et al (2003; P03). W96 and W02 are small aperture measurements within a 10\hbox{${}^\circ$}\ window at unpublished sky positions. The longitudes of the LMC, SMC and SGP are indicated. The topmost continuous black track corresponds to the middle track shown in Fig.~\ref{f:Seyfert} for $f_E=0.3$; this is the instantaneous H$\alpha$ emission in the flash at $T_o = 0$ for optically thick gas. But since Sgr A* is in a dormant state, what we see today must have faded from the initial flash. We show the predicted trend for the H$\alpha$ emission after 0.8 and 1.5 Myr (which includes 0.5 Myr for the light crossing time to the Stream and back) for an assumed density of $n_H$ $=$ 0.1 cm$^{-3}$. The density cannot be much lower if we are to produce the desired H$\alpha$ emission; a higher density reduces the fading time. The downward arrows indicate where the in-cone predicted emission drops to the Galactic model outside of the cones. This is shown as dashed lines along the bottom. } \label{f:WHAM} \end{figure*} \subsection{Seyfert ionization model} If the Galaxy went through a Seyfert phase in the recent past, it could conceivably have been so UV-bright that it lit up the Magellanic Stream over the SGP through photoionization (Fig. 4). The Magellanic Stream has detectable H$\alpha$ emission along its length five times more luminous than can be explained by UV escaping from the Galactic stellar population or an earlier starburst (BH2013, Appendix B). The required star-formation rate is at least two orders of magnitude larger than allowed by the recent star formation history of the Galactic Centre (see \S 2). An accretion flare from Sgr A$^*$ is a much more probable candidate for the ionization source because (a) an accretion disk converts gas to ionizing radiation with much greater efficiency than star formation, thus minimizing the fuelling requirements; (b) there is an abundance of material in the vicinity of Sgr A* to fuel such an outburst. We now consider the impact of past Seyfert activity using arguments that are independent of the x-ray/$\gamma$-ray mechanical timescales (\S 1), but consistent with them. We derive the likely radiation field of an accretion disk around a supermassive black hole. 
BH2013 show how a Seyfert flare with an AGN spectrum that is 10\% of the Eddington luminosity ($f_E=0.1$) for a $4\times 10^6$M$_\odot$ black hole can produce sufficient UV radiation to ionize the Magellanic Stream ($D \gtrsim 50$ kpc). But since Sgr A* is quiescent today, what we see has faded significantly from the original flash. Hydrogen recombines faster than the gas cools for realistic gas densities ($n_e \sim 0.1-1$ cm$^{-3}$) and the well established Stream metallicity ($Z\approx 0.1\: Z_\odot$; Fox et al 2013). Thus, they find the event must have happened within the last few million years, consistent with jet-driven models of the 10 kpc bipolar bubbles. This timescale includes the double-crossing time (the time for the flare radiation to hit the Stream $+$ the time for the recombination flux to arrive at Earth), the time for the ionization front to move into the neutral gas and the recombination time. \smallskip\noindent {\sl Accretion disk model.} The Shakura-Sunyaev treatment for sub-critical accretion produces a thin Keplerian disk that can cool on an infall timescale, leading to a wide-angle thermal broadband emitter. They assumed an unknown source of turbulent stress generated the viscosity, e.g. through strong shearing in the disk. But magnetorotational instability has since supplanted hydrodynamical turbulence as the favoured source of viscosity, because even a weak magnetic field threaded through the disk suffices to trigger its onset (Balbus \& Hawley 1991). The maximum temperature of the thin disk is \begin{equation} T_{\rm max}(r) \approx 54\:(r/r_s)^{-3/4} M_{\bullet,8}^{-1/4} f_E^{1/4} \ \ {\rm eV} \end{equation} where $M_{\bullet,8}$ is the mass of the black hole in units of 10$^8$$M_{\odot}$\ (Novikov \& Thorne 1973). Thus the continuum radiation peaks above 100 eV for a sub-critical accretion disk orbiting the black hole in Sgr A*, which is sufficiently hardened to account for the anomalous Stream ionization observed in UV absorption lines (Fox et al 2014). Strictly speaking, $T_{\rm max}$ is for a maximally rotating (Kerr) black hole; we need to halve this value for a stationary black hole. In order to account for the mechanical luminosity of the x-ray/$\gamma$-ray bubbles, various authors (e.g. Zubovas \& Nayakshin 2012; Nakashima et al 2013) argue for an even more powerful outburst of order the Eddington luminosity ($f_E \approx 1$). A quasi- or super-Eddington event in fact helps all aspects of our work. This is more likely to generate sufficient UV to explain the Stream's ionization while providing sufficient mechanical luminosity to drive a powerful jet or wind. But the Shakura-Sunyaev algebraic formalism breaks down at high mass accretion rates ($f_E > 0.3$), forming a geometrically thick, radiation-supported torus. Such tori develop a central funnel around the rotation axis from which most of the radiation arises. In Fig.~4, the radiation field and spectral hardness now have a strong dependence on polar angle (e.g. Paczynski \& Wiita 1980; Madau 1988). The hot funnels may help to accelerate material along collimated jets (Abramowicz \& Piran 1980) which could further harden the radiation field and constrict the ionization pattern. In Fig.~\ref{f:Madau}, we adopt the thick accretion disk model of Madau (1988) that ventures into the domain of mildly super-Eddington accretion rates. (A supplementary discussion of this model is provided by Acosta-Pulido et al 1990.)
The specific intensity of the thick disk (in units of erg cm$^{-2}$ s$^{-1}$ Hz$^{-1}$) is given by \begin{equation} 4\pi I_\nu = 1.0\times 10^{-14} T_s^{11/4}[\beta/(1-\beta)]^{1/2} x^{3/2} e^{-x}(1-e^{-x})^{-1/2} \end{equation} where $x=h\nu/kT_s$, $\beta$ is the ratio of the gas pressure to the total pressure ($\sim 10^{-4}$), and $T_s$ is the disk surface temperature, which has a weak dependence on the black hole mass and other factors, i.e. $T_s \propto M_\bullet^{-4/15}\beta^{-2/15}$. This parametric model allows us to compute the ionizing spectrum for different viewing orientations of the disk. The most important attributes of an accretion disk model for our work are the photon flux and the primary geometric parameters (e.g. inner and outer cut-off radii), with other considerations like spectral shape being of secondary importance (Tarter 1969; Dove \& Shull 1994; Maloney 1999). Madau (1988) includes a correction for scattering off the inner funnel which tends to harden the ionizing spectrum and boost its intensity. But it does not necessarily generate highly super-Eddington luminosities due to advection of heat onto the black hole (Madau et al 2014, Fig. 1). In \S 4.3, we consider a broad range of ionizing continua to uncover how the spectral hardness in the 10$-$100 eV window influences the predicted UV diagnostics. \section{UV ionization of the Magellanic Stream} \subsection{Expected emission from an active nucleus} An accreting black hole converts rest-mass energy with an efficiency factor $\epsilon$ ($\sim 10\%$) into radiation with a luminosity $L_\bullet = \epsilon \dot m c^2 = 2\epsilon G \dot{m} M_\bullet/ r_s$, for which $\dot m$ is the mass accretion rate and $r_s$ is the Schwarzschild radius; for a recent review, see Zhang \& Lu (2019). The accretion disk luminosity can limit the accretion rate through radiation pressure; the Eddington limit is given by $L_E = 4\pi G M_\bullet m_p c\sigma_T^{-1}$ where $\sigma_T$ is the Thomson cross-section for electron scattering. For the condition $L_\bullet = L_E$, radiation pressure from the accretion disk at the Galactic Centre limits the maximum accretion rate to $\dot m \sim 0.2$ M$_\odot$ yr$^{-1}$. Active galactic nuclei appear to spend most of their lives operating at a fraction $f_E$ of the Eddington limit with rare bursts arising from accretion events (Hopkins \& Hernquist 2006). The orbital period of the Magellanic System is of order a Hubble time (Besla et al 2012) so we can consider the Stream to be a stationary target relative to ionization timescales. BH2013 show that for an absorbing cloud that is optically thick, the ionizing flux can be related directly to an H$\alpha$ surface brightness. The former is given by \begin{equation} \varphi_\bullet = 1.1\times 10^6 \left({{f_E}\over{0.1}}\right) \left({f_{\bullet,{\rm esc}}}\over{1.0} \right)\left(D\over{\rm 75\ kpc}\right)^{-2} \ \ \ {\rm phot\ cm}^{-2}\ {\rm s}^{-1} . \label{e:phi_agn} \end{equation} The dust levels are very low in the Stream, consistent with its low metallicity (Fox et al 2013). We have included a term for the UV escape fraction from the AGN accretion disk $f_{\bullet,{\rm esc}}$ (${\bf n}.{\bf N}=1$). The spectacular evacuated cavities observed at 21cm by Lockman \& McClure-Griffiths (2016) suggest there is little to impede the radiation along the poles, at least on large scales (Fig.~\ref{f:fermi}). Some energy is lost due to Thomson scattering, but this is only a few percent in the best constrained sources (e.g. NGC 1068; Krolik \& Begelman 1988).
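Since several estimates below scale directly from these relations, a minimal numerical sketch may be useful. The Python fragment below is ours and purely illustrative (constants in cgs units); it evaluates the Eddington luminosity for the Sgr A* black hole mass and the photon flux of equation~(\ref{e:phi_agn}):

\begin{verbatim}
# Illustrative sketch; helper names are ours. Constants in cgs.
import math

G, C_LIGHT = 6.674e-8, 2.998e10
M_P, SIGMA_T = 1.673e-24, 6.652e-25
M_SUN = 1.989e33

def L_edd(M_bh):
    """Eddington luminosity (erg/s) for black-hole mass M_bh (g)."""
    return 4.0 * math.pi * G * M_bh * M_P * C_LIGHT / SIGMA_T

def phi_agn(f_E=0.1, f_esc=1.0, D_kpc=75.0):
    """Ionizing photon flux at the Stream (phot cm^-2 s^-1)."""
    return 1.1e6 * (f_E / 0.1) * f_esc * (D_kpc / 75.0)**-2

print(f"{L_edd(4e6 * M_SUN):.1e}")  # ~5.0e44 erg/s for Sgr A*
for f_E in (0.1, 0.3, 1.0):
    print(f_E, f"{phi_agn(f_E):.1e}")
\end{verbatim}

Even at $f_E=0.1$, the nuclear photon flux at the Stream exceeds the Galactic stellar contribution ($\varphi_\star \sim 5\times 10^4$ phot cm$^{-2}$ s$^{-1}$) by more than an order of magnitude.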
In principle, the high value of $f_{\bullet,{\rm esc}}$ can increase $f_{\star,{\rm esc}}$ but the stellar bulge is not expected to make more than a 10-20\% contribution to the total stellar budget (Bland-Hawthorn \& Maloney 2002); a possible contribution is accommodated by the factor $\zeta$ (equation~[\ref{f:Gal}]). \begin{figure*}[htbp] \centering \includegraphics[scale=0.35]{Figs/TimeDepRecomb.png} \caption{The decline in the \Ha\ surface brightness with time since the Seyfert flare event for gas clouds at a distance of $D=75$ kpc. The flare ends abruptly at time zero and the recombination signal declines at a rate that depends on the cloud density. The light travel time there and back (roughly 0.5 Myr; BH2013) is not included here. Three tracks are shown (solid lines) for an Eddington fraction $f_E=0.1$ representing gas ionized at three different densities ($0.03-0.3$ cm$^{-3}$). The dotted lines show three tracks for an Eddington fraction of $f_E=1$. The value of the ionization parameter is shown at the time of the flare event. The hatched horizontal band is the observed \Ha\ surface brightness over the SGP. The red tracks are plausible models that explain the \Ha\ emission and these all fall within the red hatched region. The denser blue hatching indicates a shorter duration, fully consistent with timescales derived from the UV diagnostics (see text). Consistency between the independent diagnostics argues for $\log u=-3$ (independent of $f_E$) as characteristic of the \Ha\ emission along the Stream. } \label{f:fading} \end{figure*} The expected surface brightness for clouds that lie within an `ionization cone' emanating from the Galactic Centre is given by \begin{equation} \mu_{\bullet,{\rm H}\alpha} = 440 \; \left({{f_E}\over{0.1}}\right) \left({f_{\bullet,{\rm esc}}}\over{1.0} \right)\left(D\over{\rm 75\ kpc}\right)^{-2} \ \ \ {\rm mR} . \label{e:mu_agn} \end{equation} This provides us with an upper limit or `peak brightness' along the spin axis of the accretion disk, assuming our model is correct. A few of the clumps in Fig. 6 exceed $\mu$(H$\alpha$) $\approx$ 440 mR by about a factor of 2, but our model parameters are only approximate. Equation~\ref{e:mu_agn} is also applicable to isotropic emission within the ionization cone from an unresolved point source if the restriction is caused by an external screen, e.g. a dusty torus on scales much larger than the accretion disk. But here we consider thick accretion disk models that have highly angle-dependent radiation fields. This is evident for Madau's radiation model in Fig.~\ref{f:Madau} with its footprint on the halo ionizing field shown in Fig.~\ref{f:Seyfert}. Here the obscuring torus has a half-opening angle $\theta_T = 30^\circ$; the accretion disk isophotes are seen to taper at $\theta_A = 20^\circ$. Both of these values are illustrative and not well constrained by the present observations. \subsection{Time-dependent analytical model of H recombination} Thus far, we have assumed that some finite depth on the outer surface of a distant gas cloud comes into ionization equilibrium with the impinging radiation field. But what if the source of the ionizing radiation fades with time, consistent with the low Eddington fraction inferred today in the vicinity of Sgr A*? Then the ionization rate will decrease from the initial value for which equilibrium was established.
We can treat the time-dependence of the H recombination lines analytically (BH2013); due to the presence of metal-line cooling, the C and Si ions require a more complex treatment with the time-dependent {\sl Mappings V} code. This analysis is covered in the next section where we find, in fact, that the \Ha\ and UV diagnostics arise in different regions of the Magellanic Stream. After Sharp \& Bland-Hawthorn (2010), we assume an exponential decline for $\varphi_i$, with a characteristic timescale for the ionizing source $\tau_s$. The time-dependent equation for the electron fraction $x_e = n_e/n_H$ is \begin{eqnarray} {dx_e\over dt} &=& -\alpha n_H x_e^2 + \zeta_0 e^{-t/\tau_s}(1-x_e) \label{e:dxdt} \end{eqnarray} where $\zeta_0$ is the initial ionization rate per atom. This was solved in BH2013 (Appendix A). If we let $\tau_s \rightarrow 0$, so that $\varphi_i$ declines instantaneously to zero, we are left with \begin{equation} {dx_e\over dt} = -\alpha n_H x_e^2 \end{equation} For the initial condition $x_e=1$ at $t=0$, we get \begin{equation} x_e = \left(1+t/\tau_{\rm rec}\right)^{-1} \label{e:ionfrac} \end{equation} where the recombination time is $\tau_{\rm rec} = 1/(\alpha n_H)$, with $\alpha = \alpha_B = 2.6\times 10^{-13}$ cm$^3$ s$^{-1}$ the case~B recombination coefficient (appropriate for hydrogen at $10^4$ K). Thus the emission measure is \begin{equation} {\cal E}_m = 1.25 \varphi_6 x_e^2(t)\; {\rm cm^{-6}\;pc} \end{equation} where $\varphi_6$ is the ionizing photon flux in units of $10^6$ phot cm$^{-2}$ s$^{-1}$. It follows from equation~\ref{e:mu_agn} that \begin{equation} \mu_{\bullet}(t) = 440 \; \left({{f_E}\over{0.1}}\right) \left({f_{\bullet,{\rm esc}}}\over{1.0} \right)\left(D\over{\rm 75\ kpc}\right)^{-2} (1+t/\tau_{\rm rec})^{-2} \ \ \ {\rm mR} . \label{e:mu_agn_t} \end{equation} \begin{figure}[htbp] \centering \includegraphics[scale=0.43]{Figs/carbon-agn_5.eps} \caption{ {\sl Mappings V} ionization calculation for a {\it continuously radiating} power-law source in Fig.~\ref{f:spec}(e) ($\alpha=-1$). At the front face, the radiation hits a cold slab of gas with sub-solar metallicity ($Z=0.1 Z_\odot$) and ionization parameter $\log u = -3.0$. In the upper panel, the change in the log ratio of two carbon (C) ions is shown as a function of depth into the slab. The log column densities on the horizontal axis are total H columns (${\rm H\:\scriptstyle\rm I}$+${\rm H}^+$) and are much larger than for the fading models. The dot-dashed line is the electron temperature $T_e$ of the gas as a function of depth, as indicated on the RHS. The lower panel shows the log ratio of each C ion to the total carbon content as a function of depth. The \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi/\ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi\ model track is to be compared to the data points in Fig.~\ref{f:CIV}. } \label{f:carbon} \medskip \end{figure} \begin{figure*}[htbp] \centering \includegraphics[scale=0.4]{Figs/Fox.png} \caption{The column-density ratio of \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi/\ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi\ (Fox et al 2014) along the Magellanic Leading Arm (left of shaded band) and trailing Stream (right of shaded band) presented as a function of Magellanic longitude $\ell_{\rm M}$. Detections are shown as solid symbols with typical $1\sigma$ errors being twice the size of the symbol; upper limits are shown as blue triangles, lower limits as magenta triangles.
The NGP, LMC longitude and SGP are all indicated as vertical long-dashed lines. The measured values within the shaded vertical band fall along the LMC sight line. The domain of the ionization cones (NGP and SGP; see Fig. 2) is indicated by the two $\cap$-shaped curves; the dotted lines indicate $\pm 0.25$ dex in $\log u$. The specific Madau accretion disk model used is discussed in \S 3. Note that some of the enhanced \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi/\ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi\ values -- seen against the `Leading Arm' of the Stream -- fall within the NGP cone. The slightly elevated values in the LMC's vicinity may be due to hard (e.g. ULX) ionizing sources within the dwarf galaxy. The red sinusoid (\S 5) is an attempt to force-fit the distribution of \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi/\ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi\ line ratios with spherical harmonics as a function of the sky coordinates. } \label{f:CIV} \medskip \end{figure*} Equations (\ref{e:mu_agn}) and (\ref{e:mu_agn_t}) have several important implications. Note that the peak brightness of the emission depends only on the AGN parameters and the Stream distance, not the local conditions within the Stream. (This assumes that the gas column density is large enough to absorb all of the incident ionizing flux, a point we return to in \S \ref{s:critical}.) Hence, in our flare model, the Stream gas just before the ionizing photon flux switches off may not be uniform in density or column density, but it would appear uniformly bright in \Ha. After the ionizing source turns off, this ceases to be true: the highest-density regions fade first, because they have the shortest recombination times; the differential fading scales as $1/(1+t/\tau_{\rm rec})^2$. This is clearly seen in BH2013 (Fig. 6) which shows the \Ha\ surface brightness versus $n_e$ for fixed times after the flare has ended: at any given time, it is the lowest density gas that has the brightest \Ha\ emission, even as all of the Stream is decreasing in brightness. In Fig.~\ref{f:fading}, we show two sets of three fading curves defined by two values of $f_E$, 0.1 and 1, the range suggested by most AGN models of the x-ray/$\gamma$-ray bubbles (e.g. Guo \& Mathews 2012), although higher super-Eddington values have been proposed (e.g. Zubovas \& Nayakshin 2012). The three curves cover the most likely range of cloud density $n_H$ (derived in the next section). The model sets overlap for different combinations of $f_E$ and $n_H$. The hatched horizontal band is the median \Ha\ surface brightness over the SGP as discussed in BH2013. The horizontal axis is the elapsed time since the Seyfert flare event. The red tracks are reasonable models that explain the \Ha\ emission and these all fall within the red hatched region. The denser hatching in blue is a more restricted duration to explain the \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi/\ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi\ values at the SGP (\S 4.4). With the UV diagnostic constraints from {\it HST}, we find that $\log u_o \sim -3$ is a reasonable estimate of the initial ionization parameter that gave rise to the \Ha\ emission we see today.
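The fading law is easy to tabulate. The sketch below is a minimal illustration (assuming case~B recombination at $10^4$ K and an instantaneous turn-off; the helper names are ours) of equation~(\ref{e:mu_agn_t}) for the three densities plotted in Fig.~\ref{f:fading}:

\begin{verbatim}
# Minimal sketch of the fading law; assumes case B at 1e4 K
# and an instantaneous turn-off. Helper names are ours.
ALPHA_B = 2.6e-13               # cm^3 s^-1
MYR = 3.156e13                  # seconds per Myr

def tau_rec_myr(n_H):
    """Hydrogen recombination time (Myr) at density n_H (cm^-3)."""
    return 1.0 / (ALPHA_B * n_H) / MYR

def mu_fading(t_myr, n_H, f_E=0.1, D_kpc=75.0):
    """H-alpha surface brightness (mR) a time t_myr after turn-off."""
    return (440.0 * (f_E / 0.1) * (D_kpc / 75.0)**-2
            * (1.0 + t_myr / tau_rec_myr(n_H))**-2)

for n_H in (0.03, 0.1, 0.3):
    print(n_H, round(tau_rec_myr(n_H), 2),
          round(mu_fading(1.0, n_H)), round(mu_fading(3.0, n_H)))
\end{verbatim}

At $f_E=0.1$, gas with $n_H=0.1$ cm$^{-3}$ fades from 440 mR to roughly 130 mR after 1 Myr and to below 40 mR after 3 Myr, while the densest gas ($n_H=0.3$ cm$^{-3}$) has already dropped well below the observed band.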
In Fig.~\ref{f:carbon}, we see that a {\it continuous} radiation field fixed at $u_o$ can produce the UV diagnostics observed, but such models require very large ${\rm H\:\scriptstyle\rm I}$\ column densities (\S 4.3). This situation is unrealistic given the weak AGN activity observed today. Thus, to accommodate the fading intensity of the source, we must start at a much higher $u$ to account for the UV diagnostics. Below, we find that the observed C and Si absorption lines are unlikely to arise from the same gas that produces \Ha. \subsection{Critical column density associated with flare ionization} \label{s:critical} We have assumed until now that the Magellanic Stream has sufficient hydrogen everywhere within the observed solid angle to absorb the ionizing UV radiation from the Seyfert nucleus (e.g. Nidever et al 2008). For a continuous source of radiation (e.g. Fig.~\ref{f:carbon}), this requires an H column density greater than a critical column density \ifmmode{N_{\rm cr}}\else{$N_{\rm cr}$}\fi\ given by \begin{equation} \ifmmode{N_{\rm cr}}\else{$N_{\rm cr}$}\fi \approx 3.9\times 10^{19} \varphi_6 (\ifmmode{\langle n_{\rm H} \rangle}\else{$\langle n_{\rm H} \rangle$}\fi/0.1)^{-1}\;\; {\rm cm}^{-2} \label{e:Ncrit0} \end{equation} where $\varphi_6$ is the ionizing photon flux in units of 10$^6$ phot cm$^{-2}$ s$^{-1}$. For simplicity, we set $D=75$ kpc and $f_{\rm esc}=1$. By substituting from equation~\ref{e:phi_agn}, we find \begin{equation} \ifmmode{N_{\rm cr}}\else{$N_{\rm cr}$}\fi \approx 4.2\times 10^{19}\; (\ifmmode{f_{\rm E}}\else{$f_{\rm E}$}\fi/0.1)(\ifmmode{\langle n_{\rm H} \rangle}\else{$\langle n_{\rm H} \rangle$}\fi/0.1)^{-1}\;\; \rm{cm^{-2}} \label{e:Ncrit1} \end{equation} where \ifmmode{f_{\rm E}}\else{$f_{\rm E}$}\fi\ is the Eddington fraction and \ifmmode{\langle n_{\rm H} \rangle}\else{$\langle n_{\rm H} \rangle$}\fi\ is the local hydrogen volume density in units of cm$^{-3}$. Thus \begin{equation} \ifmmode{N_{\rm cr}}\else{$N_{\rm cr}$}\fi \approx 1\times 10^{20}\; u_{-3}\;\; \rm{cm^{-2}} \label{e:Ncrit2} \end{equation} where $u_{-3}$ is the ionization parameter in units of $10^{-3}$, consistent with Fig.~\ref{f:fading}, and where it follows \begin{equation} u_{-3} = 0.37 (\ifmmode{f_{\rm E}}\else{$f_{\rm E}$}\fi/0.1)(\ifmmode{\langle n_{\rm H} \rangle}\else{$\langle n_{\rm H} \rangle$}\fi/0.1)^{-1} . \label{e:Ncrit3} \end{equation} Barger et al (2017, Fig. 12) plot the ${\rm H\:\scriptstyle\rm I}$\ column densities (averaged over the same 1 degree beam as their WHAM \Ha\ observations) versus the \Ha\ intensity. The measured values suggest that the total ${\rm H\:\scriptstyle\rm I}$\ column may fall below that set by equation~\ref{e:Ncrit1} except in high-density regions ($\ifmmode{\langle n_{\rm H} \rangle}\else{$\langle n_{\rm H} \rangle$}\fi \geq 1$). In this case, the peak \Ha\ surface brightness will be reduced by a factor $\sim N_{\rm H}/N_{\rm cr}$ from the value predicted by equation \ref{e:mu_agn}. This will contribute to, and could even dominate (see \S \ref{s:patchy}), the spread in the observed $\mu(\Ha)$. For lines of sight with $N_{\rm H} > N_{\rm cr}$, the recombination rate integrated through the ionized column is constant, balancing the incident ionizing flux.
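These relations can be verified numerically; the sketch below (illustrative only, with the case~B coefficient assumed and helper names ours) evaluates \ifmmode{N_{\rm cr}}\else{$N_{\rm cr}$}\fi\ and $\log u$ for the $f_E=0.1$ photon flux of equation~(\ref{e:phi_agn}):

\begin{verbatim}
# Sketch: critical column N_cr = phi/(n_H * alpha_B) and the
# ionization parameter u = phi/(c * n_H). Helper names are ours.
import math

ALPHA_B = 2.6e-13               # cm^3 s^-1 (case B)
C_LIGHT = 2.998e10              # cm s^-1

def N_crit(phi, n_H):
    return phi / (n_H * ALPHA_B)            # cm^-2

def log_u(phi, n_H):
    return math.log10(phi / (C_LIGHT * n_H))

phi = 1.1e6                     # f_E = 0.1 at D = 75 kpc
for n_H in (0.01, 0.1, 1.0):
    print(n_H, f"{N_crit(phi, n_H):.1e}", round(log_u(phi, n_H), 2))
\end{verbatim}

For $\ifmmode{\langle n_{\rm H} \rangle}\else{$\langle n_{\rm H} \rangle$}\fi = 0.1$ cm$^{-3}$ this returns $\ifmmode{N_{\rm cr}}\else{$N_{\rm cr}$}\fi = 4.2\times 10^{19}$ cm$^{-2}$ and $\log u = -3.4$, consistent with equations~(\ref{e:Ncrit1})$-$(\ref{e:Ncrit3}).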
At the time of the Seyfert flash, once ionization equilibrium is reached (note that equations~\ref{e:Ncrit0} to \ref{e:Ncrit3} only apply when the central source is switched on), the ionized layer in regions of lower gas density will extend deeper along the line of sight (and hence to larger \ifmmode{N_{\rm p}}\else{$N_{\rm p}$}\fi) to compensate for the lower \ifmmode{\langle n_{\rm e} \rangle}\else{$\langle n_{\rm e} \rangle$}\fi. \begin{figure*}[htbp] \centering \includegraphics[scale=0.7]{Figs/sources-2.pdf} \caption{ The broad distribution of ionizing continua explored within the current work using {\sl Mappings V}. The offsets along the vertical axis are arbitrary: all model spectra are normalized to the same photon number in the window indicated by the vertical dashed lines (1, 2), important for the production of H, Si and C ions. From top to bottom: generic (a) broad-line and (b) narrow-line Seyfert spectra from OPTXAGNF code (Done et al 2012; Jin et al 2012) where the dot-dashed line includes a 0.2 keV `soft Compton' corona $-$ both are scaled to $M_{\bullet}$ in Sgr A* ($R_c=60 R_g$, $f_{\rm PL}=0.4$, $\Gamma\approx 2$); (c) Seyfert spectrum derived by BH2013 from NGC 1068 observations; (d) Starburst99 spectra for impulsive burst (red) and extended 4~Myr phase (black) assuming a Kroupa IMF; (e) power-law spectra with $f_\nu \propto \nu^{\alpha}$ for which $\alpha=-1.0, -1.5, -2.0$; (f) a total of four ULX spectra from OPTXAGNF code split between $M_\bullet=100$$M_{\odot}$\ (dotted line) and $M_\bullet=1000$$M_{\odot}$\ (solid line), both models with an inner disk ($R_c=6 R_g$), and one case each with an extended component ($R_c=20 R_g$, $f_{\rm PL}=0.2$, $\Gamma\approx 2$) $-$ all models are fed for 1 Myr at $f_E=1$; (g) hot star from CMFGEN code with solar metallicity, surface temperature 41,000 K and surface gravity $\log\;g=3.75$ (Hillier 2012).} \label{f:spec} \end{figure*} \subsection{Time-dependent Mappings model of C, Si recombination} We use the {\sl Mappings V} code (Sutherland \& Dopita 2017) to study the ionization, recombination and cooling of the C and Si ions at the surface of Magellanic Stream clouds. To determine the expected column depths of the different ionization states, we explore a broad range of {\sl Mappings V} photoionization models extending across black-hole accretion disk, starburst and individual stellar sources. The full range of models is illustrated in Fig.~\ref{f:spec}. The vertical dashed lines at 13.6 eV and 64.5 eV\footnote{The \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi\ ionization potential is 47.9 eV but \ifmmode{{\rm C\:\scriptstyle V}}\else{${\rm C\:\scriptstyle V}$}\fi\ at 64.5 eV is important for reducing \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi\ in the presence of a hard ionizing continuum.} delimit the most important energy range in the production of the H, Si and C ions in our study. For both the AGN and starburst/stellar photoionization models, we assume: (i) a constant density ionization-bounded slab with $n_{\rm H}=0.01$ cm$^{-3}$; (ii) a gas-phase metallicity of $Z=0.1 Z_{\odot}$ (Fox et al 2013) made consistent for all elements with concordance abundances (Asplund 2005); (iii) negligible depletion onto dust grains at these low metallicities; (iv) a gas column density large enough everywhere to absorb all of the incident ionizing flux (\S 4.3).
The results are only weakly dependent on $n_{\rm H}$ but have a strong dependence on the ionization parameter $u=\varphi/(c n_H)=10^6\varphi_6/(c n_H)$, which is how we choose to discuss the main results. The setup for all photoionization models is given in Table~\ref{t:models} where the required ionized and neutral column densities are listed in columns 4 and 5. In column 3, we show the instantaneous electron temperature $T_e$ after the flash occurs; values indicated in italics are generally too low for sustained enhancements in all of the UV diagnostics (ion ratios, column densities, etc.). We explore each of the models below but, in summary, we find that for a fading source, only the accretion-disk driven radiation fields at high ionization parameter ($\log u \;\lower 0.5ex\hbox{$\buildrel > \over \sim\ $} -2$) generate the high temperatures required to reproduce the observed UV diagnostics. We show illustrative plots for each diagnostic below, but the results are tabulated in Appendix B (Tables 2-3). \begin{figure*}[htbp] \centering \includegraphics[scale=0.4]{Figs/star_U-1.pdf} \includegraphics[scale=0.4]{Figs/ULX_U-1.pdf} \caption{ {\sl Mappings V} time-dependent ionization calculation (top) for the fading star cluster model in Fig.~\ref{f:spec}(d); (bottom) for the fading ULX model with the hardest spectrum in Fig.~\ref{f:spec}(f). We show the time evolution of the ${\rm Si\:\scriptstyle IV}$/${\rm Si\:\scriptstyle II}$\ and \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi/\ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi\ ratios (left) and projected column density of all four ions (right). The light travel time there and back (roughly 0.5 Myr; BH2013) is not included here. The grey horizontal band encloses most of the high-ionization data points along the Magellanic Stream (Fig.~\ref{f:CIV}; Fox et al 2014). A stellar or starburst photoionizing spectrum fails to produce sufficient \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi\ or ${\rm Si\:\scriptstyle IV}$\ absorption regardless of its bolometric luminosity. In principle, a ULX spectrum can produce the observed UV diagnostic ratios and column densities; this may account for the enhanced \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi/\ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi\ ratio localised around the LMC (Fig.~\ref{f:CIV}). The initial ionization parameter at the front face of the slab is $\log u_o = -1$ for both models ($Z=0.1Z_\odot$). At $\log u_o=-2$, the tracks in the top panels fall below the grey band, and the tracks in the bottom panels cross the grey band in half the time. The results for more ions are presented in Appendix B. } \label{f:stellarfading} \medskip \end{figure*} \begin{figure*}[htbp] \centering \includegraphics[scale=0.5]{Figs/jbh2013-fading.pdf} \caption{ {\sl Mappings V} time-dependent ionization calculation for the fading AGN model in Fig.~\ref{f:spec}(c). The initial ionization parameter at the front face of the slab is $\log u_o = -1$ (upper) and $\log u_o = -2$ (lower) where $Z=0.1Z_\odot$. On the LHS, the grey band refers to the observed \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi/\ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi\ and ${\rm Si\:\scriptstyle IV}$/${\rm Si\:\scriptstyle II}$\ ratios; there is no UV constraint for ${\rm H}^+$/${\rm H\:\scriptstyle\rm I}$.
The grey horizontal bands enclose most of the high-ionization data points along the Magellanic Stream (Fig.~\ref{f:CIV}; Fox et al 2014). The AGN models considered (Table~1) give essentially the same results with only small differences in the trends. On the RHS, the evolution in projected column density is shown for four metal ions and ${\rm H\:\scriptstyle\rm I}$\ determined from UV spectroscopy. The top grey band refers to ${\rm H\:\scriptstyle\rm I}$\ for which most values quoted in Fox et al (2014) are upper limits; the bottom grey band refers to the metal ions. For the LHS tracks to fall within the grey band simultaneously, over the allowed range of $u_o$ ($-2 < \log u_o < -1$), the estimated time span is $2-4$ Myr. The light travel time there and back (roughly 0.5 Myr; BH2013) is not included here. The results for more ions are quantified in Appendix B. } \label{f:AGNfading} \medskip \end{figure*} \input{initialconditions} \subsubsection{Stellar, starburst and ULX models} Even though star-forming regions in either the LMC or the Galaxy do not contribute significantly to the ionization of the Magellanic Stream, for completeness we include in Fig.~\ref{f:stellarfading} (top) a {\it Mappings V} time-dependent ionization calculation for the fading star cluster model in Fig.~\ref{f:spec}(d). We present the evolution of the ${\rm Si\:\scriptstyle IV}$/${\rm Si\:\scriptstyle II}$\ and \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi/\ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi\ ratio (left) and the evolution in projected column density of ${\rm H\:\scriptstyle\rm I}$\ and all four metal ions (right). The grey horizontal band encloses most of the data points along the Magellanic Stream (Fox et al 2014). A comparison of the two panels shows that the gas layer is cooling down through metal-line (and H) recombination. A stellar or starburst photoionizing spectrum fails to produce sufficient \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi\ or ${\rm Si\:\scriptstyle IV}$\ absorption (Tables 1-3), regardless of its bolometric luminosity. For the incident ionizing radiation field, we also explore the CMFGEN O-star grid of Hillier (2012), and settle on an O-star with $T_{\rm eff}=41,000$ K and $\log\:g=3.75$, which represents a somewhat harder version of the typical Milky Way O-star. Importantly, this ionizing spectrum is unable to excite appreciable amounts of \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi\ or ${\rm Si\:\scriptstyle IV}$. The same holds true for static photoionization models. Lower $u$ values ($\log\,u < -2.0$) and stellar photoionization both fall short of producing the high columns and column ratios which, taken together, are a serious challenge for any model. Typical \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi\ columns from the hard stellar spectra rarely exceed $10^{10}$ cm$^{-2}$ for a reasonable range of $u$. But there is a special case we need to consider that is not factored into the existing Starburst99 models. Ultraluminous x-ray sources (ULX) are known to be associated with vigorous star-forming regions and, indeed, a few have been observed in the LMC (Kaaret et al 2017). In Fig.~\ref{f:stellarfading} (bottom), we show that the hard spectrum of the ULX source can in principle achieve the observed UV diagnostics along the Magellanic Stream.
We do not believe one or more ULX sources account for the enhanced values over the SGP, although they could account for the slightly elevated levels observed near the LMC (Fig.~\ref{f:CIV}). There are numerous problems with an LMC explanation, as explored in earlier work. Barger et al (2013) show that the mutual ionization of the LMC and SMC on their local gas is well established, as are their respective orientations. The \Ha\ surface brightness declines with radius for both sources. Furthermore, the \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi/\ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi\ and ${\rm Si\:\scriptstyle IV}$/${\rm Si\:\scriptstyle II}$\ ratios rise dramatically as we move {\it away} from the LMC in Magellanic longitude $\ell_M$ (Fig.~\ref{f:CIV}), which is entirely inconsistent with the LMC being responsible. The extent of the LMC ionization is illustrated in Fig.~3. \subsubsection{AGN models} \label{s:agn} The bursty stellar ionizing radiation from the LMC or from the Galaxy fails by two orders of magnitude to explain the Stream (BH2013, Appendix B). We believe the most reasonable explanation is the fading radiation field of a Seyfert flare event. In Fig.~\ref{f:fading}, the incident AGN radiation field strength is defined in terms of the initial ionization parameter $u$, and we explore three tracks that encompass the range expected across the Magellanic Stream: $\log\,u=-3.5, -3.0, -2.5$. As argued in \S 4.2, this range can account for the Stream \Ha\ emissivity but the UV signatures likely arise under different conditions. We now investigate the C and Si diagnostics because of their potential to provide an independent estimate of when the Seyfert flare occurred. Here, we explore a wide range of models summarised in Fig.~\ref{f:spec}, including generic models of Seyfert galaxies, power-law spectra and the ionizing Seyfert spectrum that includes a `big blue bump' based on the BH2013 model (Appendix C therein), where we assumed a hot component (power-law) fraction of 10\% relative to the big blue bump ($k_2=k_1$ in equation 3 of BH2013). The time-dependent models were run by turning on the source of ionization, waiting for the gas to reach ionization/thermal equilibrium, and then turning off the ionizing photon flux. The sound crossing time of the warm ionized layers is too long ($\;\lower 0.5ex\hbox{$\buildrel > \over \sim\ $}$10 Myr) in the low density regime relevant to our study for isobaric conditions to prevail; essentially all of our results are in the isochoric limit. We provide a synopsis of our extensive modelling in Fig.~\ref{f:AGNfading} and Table~\ref{t:models}. In order to account for the Si and C ion ratios and projected column densities, we must `over-ionize' the gas, at least initially. This pushes us to a higher initial ionization parameter at the front face of the slab. Given that the fading source must also account for the \Ha\ emissivity along the Magellanic Stream, we can achieve the higher values of $u$, specifically $\log u > -3$, by considering gas at even lower density (\ifmmode{\langle n_{\rm H} \rangle}\else{$\langle n_{\rm H} \rangle$}\fi $<$ 0.01 cm$^{-3}$) consistent with the Stream's properties. In particular, for the UV diagnostic sight lines, the ${\rm H\:\scriptstyle\rm I}$\ column is in the range $\log$\ifmmode{\rm N}_{\scriptscriptstyle H}\else{N$_{\scriptscriptstyle H}$}\fi\ $=$ 17.8-18.3 when detected (Fox et al 2014, Fig.
4), although for most sight lines, only an upper limit in that range is possible. For our canonical Stream depth of $L\sim 1$ kpc, this leads to \ifmmode{\langle n_{\rm H} \rangle}\else{$\langle n_{\rm H} \rangle$}\fi\ $\sim$ 0.001 cm$^{-3}$. Such low densities lead to initially higher gas temperatures, and slower cool-down and recombination rates. The range of densities derived in this way is reasonable. The high end of the range explains the presence of both ${\rm H\:\scriptstyle\rm I}$\ and H$_2$ in absorption along the Stream (Richter et al 2013). More generally, for a spherical cloud, its mass is approximately $M_c \sim f_n\rho_c d_c^3/2$ where the subscript $n$ denotes that the filling factor refers to the neutral cloud prior to external ionization. From the projected ${\rm H\:\scriptstyle\rm I}$\ and \Ha\ data combined, the Magellanic Stream clouds rarely exceed $d_c \approx 300$ pc in depth and $N_c\approx 10^{21}$ cm$^{-2}$ in column, indicating total gas densities of $n_H = \rho_c/m_p \lta $ a few atoms cm$^{-3}$ in the densest regions, but extending to a low density tail (reaching to 3 dex smaller values) for most of the projected gas distribution. In Fig.~\ref{f:AGNfading}, we see that the higher ionization parameters (upper: $\log u=-1$, lower: $\log u=-2$) are ideal for reproducing the UV diagnostics (grey bands). The very high photon fluxes (relative to the adopted \ifmmode{\langle n_{\rm H} \rangle}\else{$\langle n_{\rm H} \rangle$}\fi\ $\sim$ 0.001 cm$^{-3}$) generate high temperatures in the gas ($\sim 20,000-30,000$ K depending on $\log u$; Table~\ref{t:models}) and the harder spectrum ensures the higher ion columns (see Fig.~\ref{f:carbon}). This gas cools in time, creating enhanced amounts of lower ionization states like \ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi. The lower initial densities ensure that the cooling is not too rapid. UV diagnostics like \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi/\ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi\ and ${\rm Si\:\scriptstyle IV}$/${\rm Si\:\scriptstyle II}$\ decline on timescales of order a few Myr. Note that AGN models run at higher $u$ ($\log u > -1$) are unphysical within the context of our framework. This would either require even lower gas densities in the slab, which are inconsistent with the observed column densities in the Stream, or an AGN source at Sgr A* that has super-Eddington accretion ($f_E > 1$). While such sources appear to exist around low-mass black holes (e.g. Kaaret et al 2017), we are unaware of a compelling argument for super-Eddington accretion in Seyfert nuclei (cf. Begelman \& Bland-Hawthorn 1997). In any event, going to an arbitrarily high $u$ with a hard ionizing spectrum overproduces \ifmmode{{\rm C\:\scriptstyle V}}\else{${\rm C\:\scriptstyle V}$}\fi\ and higher states at the expense of \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi. \subsection{Constraining the lookback time of the Seyfert flash} \label{s:UVtwo} If Sgr A* was radiating at close to the Eddington limit within the last 0.5 Myr, the entire Magellanic Stream over the SGP would be almost fully ionized (e.g. Fig.~\ref{f:carbon}) -- this is not observed. Instead, we are witnessing the Stream at a time when the central source has switched off and the gas is cooling down.
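Two timescales anchor the argument that follows: the light-travel (double-crossing) time and the hydrogen recombination clock. A minimal sketch (hydrogen only, case~B at $10^4$ K; the helper names are ours):

\begin{verbatim}
# Sketch of the two clocks in the lookback argument; ours.
KPC, C_LIGHT, MYR = 3.086e21, 2.998e10, 3.156e13   # cgs + s/Myr
ALPHA_B = 2.6e-13                                  # cm^3 s^-1

def double_crossing_myr(D_kpc=75.0):
    """2 T_c: flare reaches the Stream, H-alpha returns to us."""
    return 2.0 * D_kpc * KPC / C_LIGHT / MYR

def tau_rec_myr(n_H):
    """Hydrogen recombination time (Myr) at density n_H (cm^-3)."""
    return 1.0 / (ALPHA_B * n_H) / MYR

print(f"2 T_c = {double_crossing_myr():.2f} Myr")  # ~0.49 Myr
for n_H in (0.1, 0.01, 0.001):
    print(f"n_H = {n_H}: tau_rec = {tau_rec_myr(n_H):.0f} Myr")
\end{verbatim}

The \Ha-bright gas (\ifmmode{\langle n_{\rm H} \rangle}\else{$\langle n_{\rm H} \rangle$}\fi\ $\sim$ 0.1 cm$^{-3}$) recombines within $\sim 1$ Myr of turn-off, whereas the low-density gas probed by the UV absorption lines evolves one to two orders of magnitude more slowly; this separation underpins the joint constraint developed below.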
The different ions (H, \ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi, ${\rm Si\:\scriptstyle II}$, \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi, ${\rm Si\:\scriptstyle IV}$) recombine and cool at different rates depending on the local gas conditions. We can exploit the relative line strengths to determine a unique timescale while keeping in mind that the observed diagnostics probably arise in more than one environment. Taken together, the \Ha\ surface brightness and the UV diagnostic ratios observed along the Stream tell a consistent story about the lookback time to the last major ionizing event from Sgr A* (cf. Figs. 6, 7, 11). These timescales are inferred from detailed modelling but the model parameters are well motivated. For a Stream distance of 75 kpc or more, the Eddington fraction is in the range $0.1 < f_E < 1$. For our model to work, we require the \Ha\ emission and UV absorption lines to arise from different regions. For the same burst luminosity, the initial ionization parameter $u_o^{\rm H\alpha}$ to account for the \Ha\ emission is $\log u_o^{\rm H\alpha} \sim -3$ impinging on gas densities above \ifmmode{\langle n_{\rm H} \rangle}\else{$\langle n_{\rm H} \rangle$}\fi\ $\sim$ 0.1 cm$^{-3}$. The initial conditions $u_o^{\rm UV}$ for the UV diagnostics are somewhat different with $\log u_o^{\rm UV} \sim -1$ to $-2$ operating with gas densities above \ifmmode{\langle n_{\rm H} \rangle}\else{$\langle n_{\rm H} \rangle$}\fi\ $\sim$ 0.001 cm$^{-3}$. In Fig.~\ref{f:AGNfading}, the AGN models are able to account for the UV diagnostics. The time span is indicated by when both the \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi/\ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi\ and ${\rm Si\:\scriptstyle IV}$/${\rm Si\:\scriptstyle II}$\ tracks fall within the grey band accommodating most of the `ionization cone' data points in Fig.~\ref{f:CIV}. The lower time limit is defined by $\log u=-2$ (both AGN model tracks in band) and the upper time limit by $\log u=-1$. Taken together, this indicates a lookback time for the AGN flash of about $2.5-4.5$ Myr. As shown in Fig. 7, the UV diagnostics are more restrictive than the \Ha\ constraint. When looking at both figures, we must include the double-crossing time of $2 T_c \approx 0.5$ Myr (BH2013) to determine the total lookback time. \subsection{Fading source: rapid cut-off or slow decay?} Our model assumption that the flare abruptly turned off is not necessarily correct, and the behaviour of the \Ha\ emission and the UV diagnostics can be different when the flare decay time is non-zero. To understand this behaviour, in Fig.~\ref{f:muHalpha}, we reproduce for the reader's convenience Fig. 8 from BH2013. This shows the \Ha\ surface brightness relative to the peak value as a function of $\tau$, the time since the source's flare began to decline in units of the recombination time. Each curve is labeled with the ratio of the recombination time to the $e$-folding timescale for the flare decay, $\tau_s$. Note that the limiting case $\tau_{\rm rec}/\tau_s= \infty$ is for a source that instantly turns off. We refer the reader to Appendix A of BH2013 for mathematical details, but the important point is the following. If $\tau_{\rm rec}/\tau_s$ is small, say 0.2, the surface brightness does not begin declining until $\tau\approx 20$. 
This is just a reflection of the fact that if the recombination time is short compared to the source decay time, the ionization equilibrium tracks the instantaneous incident ionizing photon flux, and the flare decline takes many recombination times. If $\tau_{\rm rec}/\tau_s$ is greater than $\sim$ a few, on the other hand, then the results are nearly indistinguishable from the instant turn-off case, except for $\tau < 1$. \begin{figure}[htbp] \centering \includegraphics[scale=0.58]{Figs/muHalpha_evolve2_jbh.eps} \caption{The predicted \Ha\ surface brightness relative to the peak value versus time $\tau$ measured in units of the recombination time. The individual curves are labeled with the ratio of the recombination time $\tau_{\rm rec}$ to the flare $e$-folding time, $\tau_s$. } \label{f:muHalpha} \medskip \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale=0.5]{Figs/MSRecomb.pdf} \caption{ {\it Mappings V} calculations for the ionization fraction (F) for different ions as a function of the product of gas density \ifmmode{\langle n_{\rm H} \rangle}\else{$\langle n_{\rm H} \rangle$}\fi\ (cm$^{-3}$) and time (years). The ionization source is our AGN power-law ($f_\nu \propto \nu^{-1}$) model (\S 3); the addition of the `big blue bump' increases the timescale by a small factor. The radiation is hitting a cold slab of gas with sub-solar metallicity ($Z=0.1 Z_\odot$). At the front face, the ionization parameter is $\log u = -3.0$ (top) and $\log u = -2.0$ (bottom). } \label{f:ion_recomb} \medskip \end{figure} Although Fig.~\ref{f:muHalpha} shows the normalized \Ha\ surface brightness, it applies to any measure of the ionization state of the gas, in particular to \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi/\ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi. In Fig.~\ref{f:ion_recomb}, we use {\it Mappings V} to compute the time dependence of the relevant carbon ion ratios after the Stream gas has been hit by a Seyfert flare. In this model the gas has been allowed to come into photoionization equilibrium, and then the source was turned off. Results are presented for two different ionization parameters, $\log u = -2.0, -3.0$. We scale out the density dependence by using $\ifmmode{\langle n_{\rm H} \rangle}\else{$\langle n_{\rm H} \rangle$}\fi t$ as the horizontal axis. Note that this is equivalent to plotting time $\tau$ in recombination times, as in Fig.~\ref{f:muHalpha}. This figure illustrates two important points. First, once \ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi\ becomes the dominant carbon ion, at $\log \ifmmode{\langle n_{\rm H} \rangle}\else{$\langle n_{\rm H} \rangle$}\fi t\approx 4.4$, it has a recombination time that exceeds that of ${\rm H}^+$. Secondly, and more importantly for the Stream UV absorption line observations, \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi\ is abundant only for a very limited range in $\log \ifmmode{\langle n_{\rm H} \rangle}\else{$\langle n_{\rm H} \rangle$}\fi t$, due to its rapid recombination. Since the UV observations show that \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi\ and \ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi\ are comparable in abundance (see Figure \ref{f:CIV}), this places a strict upper limit on the age of the burst once the gas density is known. 
(Note that in the regime where the \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi/\ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi\ ratio is near the observed values, ${\rm C\:\scriptstyle III}$\ is the dominant carbon ion in the gas.) For \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi, $\tau_{\rm rec}/\tau_s$ is always much smaller (for gas of similar density) than it is for ${\rm H}^+$; this is why the \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi\ abundance declines so much more rapidly compared to ${\rm H}^+$\ in Fig.~\ref{f:ion_recomb}. It is plausible that $\tau_{\rm rec}$ for \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi\ is short compared to $\tau_s$ (indeed, this is the likely case unless the flare decay was very abrupt or the Stream densities are unexpectedly low). Hence the carbon ionization balance will closely track the photoionization equilibrium corresponding to the instantaneous value of $\phi$ (and hence $u$), while the \Ha\ emission will reflect an earlier, larger ionizing flux. In summary, the flare could still be decaying at the lookback time at which we observe the Stream (approximately half a million years), and the carbon absorption lines (in particular, \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi/\ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi) measure the strength of the ionizing flux at that time. The brightest \Ha\ emission then reflects the peak intensity of the ionizing flux, or at least something closer to that value than what is indicated by the carbon ion ratios. \begin{figure}[htbp] \centering \includegraphics[scale=0.8]{Figs/Carbon_Silicon_Temp.png} \caption{{\it Mappings V} calculations (assuming $Z=0.1Z_\odot$) for the ratio of two ions in a cooling gas, shown for \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi/\ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi\ and ${\rm Si\:\scriptstyle IV}$/${\rm Si\:\scriptstyle II}$. The tracks cross at 10$^4$ K and above 10$^5$ K, relevant to photoionization and moderate shock ($v_s \sim 100$ km s$^{-1}$) zones respectively. } \label{f:ion_temp} \medskip \end{figure} \subsection{Other potential sources of ionization} \subsubsection{Explosive shock signatures} We find that a \textit{Fermi} bubble-like explosion in the distant past ($\sim$ 150 Myr $(v_s/500\; {\rm km\; s^{-1}})^{-1}$) $-$ moving through the Magellanic Stream today with a shock velocity $v_s$ $-$ cannot explain either the UV diagnostics or the \Ha\ emissivity, even considering the additional contribution from the photoionized precursor. The detailed modelling of Miller \& Bregman (2016) makes that very clear when extrapolated from 10 kpc (tip of the bubble) to a distance of 75 kpc or more. The intrinsic wind velocity creating the pressure in the bubbles is of order 3000$-$10,000\ km s$^{-1}$\ (Guo \& Mathews 2012) but the wind must push aside the hot Galactic corona to reach the Magellanic Stream. Today, there is a strong pressure gradient across the bubbles, with a thermal pressure ($P_{\rm th}/k$) of roughly 6000 cm$^{-3}$ K at the base dropping to about 1000 cm$^{-3}$ K at the tip. The hot shell has an outflow (shock) velocity of $v_s \approx 490$\ km s$^{-1}$\ (Mach number, ${\cal M}\approx 2-3$) pushing into an external Galactic corona with $P/k \approx 200$ cm$^{-3}$ K.
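Such shock velocities map onto post-shock temperatures through the strong-shock jump condition, $T_{\rm ps} \approx (3/16)\,\mu m_{\rm p} v_s^2/k$. The sketch below makes the scaling concrete; the fully ionized mean molecular weight $\mu \approx 0.6$ and the helper names are our assumptions, for illustration only:

\begin{verbatim}
# Strong-shock post-shock temperature; mu = 0.6 is an assumption
# (fully ionized gas). Helper names are ours.
M_P, K_B = 1.673e-24, 1.381e-16     # g, erg/K

def T_postshock(v_s_kms, mu=0.6):
    v = v_s_kms * 1e5               # cm/s
    return 3.0 / 16.0 * mu * M_P * v * v / K_B

for v in (20, 60, 100, 490):
    print(v, f"{T_postshock(v):.1e}")   # K
\end{verbatim}

A $v_s \approx 490$ km s$^{-1}$\ shell shock corresponds to a few $\times 10^6$ K, but, as we show next, the shock actually driven into a Stream cloud is far slower and only attains $\sim 5\times 10^4$ K, below the $\gtrsim 10^5$ K regime where \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi\ and ${\rm Si\:\scriptstyle IV}$\ are abundant (Fig.~\ref{f:ion_temp}).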
If a cloud exists at the tip of the \textit{Fermi} bubbles, the combined thermal and ram pressure drives a shock of velocity $v_s \approx 60$\ km s$^{-1}$\ into the lowest density gas, too weak to account for \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi, ${\rm Si\:\scriptstyle IV}$\ or the \Ha\ emissivity. The same holds true for the weak x-ray emission emanating from the cooling bubbles. In reality, the \textit{Fermi} bubbles are expected to expand and diffuse into the Galactic corona after only a few tens of kiloparsecs, such that the hot shell never reaches the Stream. To date, diffuse x-ray emission associated directly with the Magellanic Stream has never been observed and would not be expected in our scenario. Thus we do {\it not} believe an energetic bubble (or jet) has ever swept past the Magellanic Stream and, even if it were possible, the shock front would be too weak to leave its mark. For completeness, we use {\it Mappings V} to explore plausible time-dependent shock scenarios for exciting \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi\ and ${\rm Si\:\scriptstyle IV}$. Once again, we treat metal-poor gas and assume a 1D planar geometry at the working surface. If the shock is allowed to run indefinitely, it cools down to a mostly neutral phase near 100~K. Under these conditions, \ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi\ and ${\rm Si\:\scriptstyle II}$\ ionization fractions steadily rise with respect to the higher ionization states. If we truncate the cooling shock at 10$^4$ K, \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi/\ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi\ and ${\rm Si\:\scriptstyle IV}$/${\rm Si\:\scriptstyle II}$\ are both less than 0.1 for fast shocks ($v_s \;\lower 0.5ex\hbox{$\buildrel > \over \sim\ $} 100$ km s$^{-1}$), but diverge for slow shocks, e.g. $v_s = 60$ km s$^{-1}$ gives \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi/\ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi\ $\approx$ 0.01, and ${\rm Si\:\scriptstyle IV}$/${\rm Si\:\scriptstyle II}$\ $\approx$ 25. These are manifestly inconsistent with observations. In Fig.~\ref{f:ion_temp}, we compute the \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi/\ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi\ and ${\rm Si\:\scriptstyle IV}$/${\rm Si\:\scriptstyle II}$\ ion ratios vs. the ionized gas temperature, $T_e$. These ratios are {\it not} independent of the gas abundances for metal-poor gas; the calculation is undertaken with $Z = 0.1 Z_\odot$. The ion ratios reach parity at $T_e \approx 10^4$ K and for $T_e > 10^{5.3}$ K. Taken together, the ion ratios are certainly consistent with photoionization but their convergence at higher temperature suggests another possible origin. The \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi/\ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi\ ratio, like the ${\rm Si\:\scriptstyle IV}$/${\rm Si\:\scriptstyle II}$\ ratio (both with up to 0.5 dex of scatter), is of order unity and is enhanced in a region over the South and North Galactic Poles (see Fig.~\ref{f:CIV}). So are there other ways to increase the gas temperature without photoionization or shocks from blast waves? We address this issue in the next section.
\subsubsection{Shock cascade and turbulent mixing} Bland-Hawthorn et al (2007) and Tepper-Garcia et al (2015) consider the case of the Magellanic ${\rm H\:\scriptstyle\rm I}$\ Stream being ablated by the diffuse hot halo. They show that the post-shock cooling gas ($v_s < 20$ km s$^{-1}$) in a `shock cascade' is generally too weak along the Magellanic Stream to power the \Ha\ emission, particularly at the newly established distance of $D > 75$ kpc (cf. Barger et al 2017). The post-shock temperature ($<10^4$K) is too low to produce high-ionization species, even in the high-energy tail of the particle distribution (cf. Fig.~\ref{f:ion_temp}). But a shock cascade can still be important even if it does not account for the observed spectral signatures directly. For example, it can help to break down the cold gas and enable interchange with the hot halo. A major uncertainty along the Stream is the degree of mixing between the cold clouds and the hot coronal gas; a shearing boundary layer can give rise to intermediate gas phases with a mean temperature of order $\sqrt{T_{\rm hot} T_{\rm cold}}$ and therefore a broad range of ionization states (Ji et al 2019; Begelman \& Fabian 1990). This process is driven by either Kelvin-Helmholtz (KH) instabilities at the hot/cold interface, or turbulence in the hot corona for which there are few constraints presently. The outcome depends on the fraction of mass of hot gas deposited into the mixing layer, and the efficiency of hydrodynamic mixing. To our knowledge, there have only been two hydrodynamic studies of this turbulent regime that incorporate consistent non-equilibrium ionization, i.e. Esquivel et al (2006; MHD) and Kwak \& Shelton (2010; HD). Notably, Kwak \& Shelton (2010) find, much like for conductive interfaces (see below), that the low and high ionization states arise from very low column gas ($\lesssim 10^{13}$ cm$^{-2}$). While mixing in sheared layers surely exists at the contact surface of the \textit{Fermi} bubbles (Gronke \& Oh 2018; Cooper et al 2008), it is unclear if these processes are possible at the Stream's distance over the South Galactic Pole where the coronal density is low ($\sim$ a few $\times$ $10^{-5}$ cm$^{-3}$). Several authors have discussed the idea of conductive interfaces in which cool/warm clouds evaporate and hot gas condenses at a common surface where colliding electrons transport heat across a boundary (Gnat, Sternberg \& McKee 2010; Armillotta et al 2017). The gas tends to be `under-ionized' compared to gas in ionization equilibrium, which enhances cooling in the different ions. But Gnat et al (2010) show that the non-equilibrium columns are always small ($\lesssim 10^{13}$ cm$^{-2}$) and an order of magnitude below the median columns detected by Fox et al (2014). For full consistency, the shock cascade model is an appropriate framework for a mixing layer calculation but a self-consistent radiative MHD code to achieve this has yet to be developed. Our first models predict projected line broadening up to $\sigma \approx 20$\ km s$^{-1}$\ in ${\rm H\:\scriptstyle\rm I}$\ or warm ion transitions (Bland-Hawthorn et al 2007). It is possible that, by running models with intrinsically higher resolution, one could broaden the absorption line kinematics and increase the column densities further through line-of-sight projections. An important future constraint is to map the relative distributions of warm ionized, warm neutral and cold neutral hydrogen gas at high spectral/spatial resolution along the Stream.
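For orientation, the characteristic mixing-layer temperature invoked above is easy to evaluate; the coronal and cloud temperatures in the sketch below are assumptions of ours, adopted purely for illustration:

\begin{verbatim}
# Characteristic mixing temperature ~ sqrt(T_hot * T_cold);
# T_hot and T_cold are assumed values, for illustration only.
import math

T_hot, T_cold = 2.0e6, 1.0e4        # K (assumed corona / cloud)
print(f"{math.sqrt(T_hot * T_cold):.1e} K")   # ~1.4e5 K
\end{verbatim}

A layer near $10^{5.2}$ K lies close to the temperatures where the \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi\ and ${\rm Si\:\scriptstyle IV}$\ fractions peak, which is why turbulent mixing remains an interesting, if poorly constrained, alternative.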
Presently, we do not find a compelling case for dominant processes beyond static photoionization from a distant source. All of these processes may have more relevance to the \textit{Fermi} bubbles and to high-velocity clouds (HVCs) much lower in the Galactic halo ($D \ll 75$ kpc). For the HVCs, such arguments have been made (Fox et al 2005). Before an attempt is made to understand the Stream in this context, it will be crucial to first demonstrate how turbulent mixing has contributed to the UV diagnostics observed towards low-latitude clouds. \subsection{Correlations between the observed diagnostics along the Magellanic Stream} \subsubsection{The scatter in the \Ha\ emission relative to ${\rm H\:\scriptstyle\rm I}$} \label{s:patchy} Ideally, we would be able to bring together all spectroscopic information within a cohesive framework for the Magellanic Stream in terms of its origin, internal structure, ionization and long-term evolution (e.g. Esquivel et al 2006; Tepper-Garcia et al 2015). As implied in the last section, various parts of the problem have been tackled in isolation, but an overarching scheme covering all key elements does not exist today. For such a complex interaction, we must continue to gather rich data sets across the full electromagnetic spectrum (Fox et al 2019). Our work has concentrated on both absorption and emission lines observed with very different techniques, effective beam sizes and sensitivities. We now consider what one might learn in future when both absorption and emission measures have comparable sensitivities and angular resolution. This may be possible in the era of ELTs, at least for the \Ha-bright regions. Figure 12 of Barger et al (2017) shows the lack of any correlation between the \Ha\ detections and the projected ${\rm H\:\scriptstyle\rm I}$\ column density. The emission measures mostly vary by about a factor of five, from $\sim 30$ to 160 mR; there are two exceptionally bright knots along the Stream with $400 \lesssim \mu(\Ha) \lesssim 600$ mR. The total H column (${\rm H\:\scriptstyle\rm I}$\ + ${\rm H}^+$) today is high enough to absorb a significant fraction of incident UV photons across much of the Stream {\it if the Sgr A* source currently radiates far below the Eddington limit}. This simple observation is consistent with the nuclear flare having shut down and the Stream's recombination emission fading at a rate that depends only on the local gas density. For completeness, we mention one more possibility which is somewhat fine-tuned and therefore less plausible. It is possible that at the lookback time ($2T_c\approx 0.5$ Myr) at which we observe the Stream emission (for a distance of 75 kpc), the Galaxy's nuclear emission is still far above the present-day value and the spread in emission measures is dominated by column density variations along the lines of sight. Assume for a moment that variations in $N/N_{\rm cr}$ are unimportant. In principle, the power spectrum of the \Ha\ patchiness constrains both the gas densities and the time since the radiation field switched off, since the scatter increases with the passage of time (up until the recombination time for the lowest density gas is reached), due to the spread in $\tau_{\rm rec}$; see Figure 4 in BH2013. However, there are several complications. The predicted range of $\mu_{{\rm H}\alpha}$ as a function of time depends on the distribution of gas densities within the Stream.
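The relevant clock here is the hydrogen recombination time. With the case B coefficient $\alpha_B \approx 2.6\times10^{-13}$ cm$^3$ s$^{-1}$ at $10^4$ K,
\begin{equation}
\tau_{\rm rec} = \frac{1}{n_e\, \alpha_B} \approx 1.2 \left(\frac{n_e}{0.1\ {\rm cm^{-3}}}\right)^{-1} {\rm Myr},
\end{equation}
so an order-of-magnitude spread in gas density maps directly onto an order-of-magnitude spread in fading times, which is what drives the growing \Ha\ scatter just described.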
At present, however, the observable range in \Ha\ surface brightness is limited by the moderate S/N of most of the detections. An additional complication is that, at fixed density \ifmmode{\langle n_{\rm H} \rangle}\else{$\langle n_{\rm H} \rangle$}\fi, lines of sight with $N < \ifmmode{N_{\rm cr}}\else{$N_{\rm cr}$}\fi$ will be fainter in \Ha\ by the ratio $N/\ifmmode{N_{\rm cr}}\else{$N_{\rm cr}$}\fi$, as discussed above. Finally, the observed patchiness is likely to be heavily filtered by the angular resolution of the observer's beam (Tepper-Garcia et al 2015). In future, it may be possible to sort out these issues with knowledge of the {\it total} hydrogen column density along the Stream from independent sources of information, e.g., soft x-ray shadowing by the Stream projected against the cosmic x-ray background (e.g. Wang \& Taisheng 1996). \subsubsection{The scatter in the UV absorption lines relative to ${\rm H\:\scriptstyle\rm I}$} \label{s:UVone} For absorption lines, it is the column density $N_p$ that matters, not the product $n_e N_p$. In other words, the \Ha\ emission from low-density regions with large columns is, in effect, being scaled down by their low densities, but this is not true for the UV absorption lines. So, in this model, the lowest-density regions will be even more prominent in the absorption-line observations than they are in the \Ha\ emission: they not only stay more highly ionized for longer, because of the longer recombination times, but they also arise in the largest H column densities (Fig.~\ref{f:carbon}), and that is what the absorption-line diagnostics are sensitive to. What this argument does not determine is whether the carbon ionization state (as measured by the \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi/\ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi\ ratio) resembles what we are seeing for some reasonable period of time after the source turns off, or whether the only applicable models are ones in which the ionization state has not really had time to change. That still favors the lowest-density regions, however, for the reasons just outlined. In general, for the assumed tubular geometry of the Magellanic Stream, we expect higher densities to roughly correspond to larger column densities. However, in the flare ionization model, as noted above, the densest regions recombine the fastest, and thus fade quickly in \Ha\ and lose their \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi\ rapidly once the flare has switched off. In the flare model, as long as the gas column densities along the Stream are greater than the critical column needed to soak up all of the ionizing photons, the density/column density anticorrelation (lower density regions have larger ionized columns) is baked in by the physics, and so in this case we anticipate a positive correlation between the \Ha\ emission and the \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi\ absorption strength.
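Before turning to the caveats of this prediction, the contrast between the two diagnostics can be summarized schematically (an idealized, uniform-sightline simplification) as
\begin{equation}
\mu_{{\rm H}\alpha} \propto \int n_e\, n_{{\rm H}^+}\, dl \sim \langle n_e \rangle\, N_{{\rm H}^+}, \qquad N_{\rm ion} = \int n_{\rm ion}\, dl,
\end{equation}
i.e. the emission measure carries an extra weighting by the local density that the absorbing columns do not.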
There are two caveats: the correlation (1) only arises if the low-density regions still have significant \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi\ fractions (i.e., they have not had time to recombine to low ionization states); (2) would not hold if the \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi\ is coming mostly from regions where the density is so low that the total column is lower than the critical column, i.e., density-bounded rather than radiation-bounded sightlines. In the latter case, the \Ha\ emission will also be weaker than our model predicts, by the ratio of the actual column to the critical column. The ${\rm H\:\scriptstyle\rm I}$/\Ha\ comparison above was possible because of the comparable ($0.1-1$ degree) beam size for both sets of observations. Unfortunately, the UV absorption lines have an effective beam size that is orders of magnitude smaller than either the optical or radio detections. An additional problem is the short timescale associated with \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi\ recombination relative to \Ha\ and \ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi, as we discuss below. \section{Discussion} There is nothing new about the realisation of powerful episodic behaviour erupting from the nuclei of disk galaxies (q.v. Mundell et al 2009). Some of these events could be close analogues to what we observe today in the Milky Way (cf. NGC 3079: Li et al 2019; Sebastian et al 2019). Since 2003, many papers have presented evidence for a powerful Galactic Centre explosion from radio, mid-infrared, UV, x-ray and $\gamma$ ray emission. The remarkable discovery of the $\gamma$-ray bubbles (Su et al 2010) emphasized the extraordinary power of the event. The dynamical ($2-8$ Myr) and radiative ($2.5-4.5$ Myr) timescales overlap, with possible evidence that the jet/wind break-out (Miller \& Bregman 2016) preceded the radiative event (this work; BH2013). Conceivably, if the error estimates are reliable, this time difference is real, i.e. the explosive event was needed to clear a path for the ionizing radiation. In the search for a singular event that may have triggered Sgr A* to undergo a Seyfert phase, we find the link to the central star streams and young clusters made by Zubovas \& King (2012) to be compelling. Against a backdrop of ancient stars, Paumard et al (2006) review the evidence for a young stellar ring with well constrained ages of $4-6$ Myr. The same connection may extend to the circumnuclear star clusters that fall within the same age range (Simpson 2018). Intriguingly, Koposov et al (2019) have recently discovered a star travelling at 1750\ km s$^{-1}$\ that was ejected from the Galactic Centre some 4.8 Myr ago; at that speed, and neglecting deceleration, the star has since travelled $\sim 1750\ {\rm km\ s^{-1}} \times 4.8\ {\rm Myr} \approx 8.6$ kpc, placing it well within the inner halo today. It is tempting to suggest this was also somehow connected with the major gas accretion event at that time, i.e. through stars close to the black hole being dislodged. This could reasonably be made to fit with the shorter timescale ($T_o=3.5\pm 1$ Myr) for the flare if the event was sufficiently cataclysmic in the vicinity of Sgr A* to directly fuel the inner accretion disk. Accretion timescales of infalling gas being converted to radiative output can be as short as 0.1$-$1 Myr (Novak et al 2011), although Hopkins et al (2016) argue for a longer viscosity timescale. We now consider how the field can advance in future years with sufficient observational resources.
\smallskip\noindent{\sl Towards a complete 3D map of halo clouds.} The most successful approach for absorption line detections along the Magellanic Stream has been to target UV-bright ($B < 14.5$) background AGN and quasars (Fox et al 2013, 2014). In future, all-sky high-precision photometric imaging (e.g. LSST) will allow us to easily identify a population of UV-bright, metal-poor halo stars with well established photometric distances. Targeting some stars ahead of and behind the Stream will improve distance brackets for the Stream and provide more information on the nature of the recent Seyfert outburst. There are many potential targets across the sky. The {\sl Galaxia} model of the Galaxy (Sharma et al 2011) indicates there is one metal-poor giant per square degree brighter than $B=14.5$ in the Galactic halo out to the distance of the Stream, with a factor of six more at $B=16$ which can be exploited in an era of ELTs. In principle, it will be possible to determine good distances to all neutral and ionized HVCs from distance bracketing across the entire halo, particularly within 50 kpc or so. The high-velocity ${\rm H\:\scriptstyle\rm I}$\ clouds lie almost exclusively close to the Galactic Plane, i.e. outside the ${\rm H\:\scriptstyle\rm I}$-free cones identified by Lockman \& McClure-Griffiths (2016). There are highly ionized HVCs seen all over the sky found in ${\rm O\:\scriptstyle VI}$\ absorption but not in ${\rm H\:\scriptstyle\rm I}$\ emission (Sembach et al 2003). The ${\rm O\:\scriptstyle VI}$\ sky covering fraction is in the range 60-80\%, compared to the ${\rm H\:\scriptstyle\rm I}$\ covering fraction of about 40\%. The use of near-field clouds to trace the ionization cones is hampered by the presence of ionized gas entrained by the x-ray/$\gamma$ ray bubbles (Fox et al 2015; Bordoloi et al 2017; Savage et al 2017; Karim et al 2018). But we anticipate that the ionization cones (Fig.~\ref{f:nidever}) and the \textit{Fermi} bubbles (Fig.~\ref{f:fermi}) are filled with hundreds of distinct, fully ionized HVCs. \smallskip\noindent{\sl Magellanic Screen - viewing the AGN along many sight lines.} The Magellanic Stream provides us with a fortuitous absorber for intersecting ionizing radiation escaping from the Galactic Centre. This `Magellanic Screen' extends over 11,000 square degrees (Fox et al 2014) and enables us to probe the complexity of the emitter over wide solid angles. Our simple adoption of the Madau model predicts a centrosymmetric pattern along some arbitrary axis. But many models produce anisotropic radiation fields, e.g. jets (Wilson \& Tsvetanov 1994), thick accretion disks (Madau 1988), warped accretion disks (Phinney 1989; Pringle 1996), dusty tori (Krolik \& Begelman 1988; Nenkova et al 2008) and binary black holes. More measurements along the Stream may ultimately shed more light on the recent outburst from Sgr A* and its immediate surrounds. The strongest constraint comes from variations in the ionization parameter $u$ (Tarter et al 1969), but detecting second-order effects from the spectral slope may be possible (e.g. Acosta-Pulido et al 1990), although time-dependent ionization complicates matters (\S~\ref{s:agn}). This suggests a future experiment. Consider an ionization pattern defined by an axis tilted with respect to the Galactic poles.
Here we are assuming something like the \ifmmode{{\rm C\:\scriptstyle IV}}\else{${\rm C\:\scriptstyle IV}$}\fi/\ifmmode{{\rm C\:\scriptstyle II}}\else{${\rm C\:\scriptstyle II}$}\fi\ line ratio to measure the spectral `hardness' $\mathcal{H}$ or ionization parameter $u$ over the sky. We can now fit spherical harmonics to the all-sky distribution to establish the dominant axis of a centrosymmetric pattern (e.g. Fixsen et al 1994). For illustration, we project our crude fit in Fig.~\ref{f:CIV} as a sine wave in Magellanic longitude. To be useful, we need many more sight lines over the sky (a minimal numerical sketch of such a fit is given below). We are as far from a convincing narrative for Sgr A* as we are for any supermassive black hole. These fascinating sources are seeded and grow rapidly in the early universe, and then accrete more slowly with the galaxy's evolution over billions of years. Just how they interact with and influence that evolution is an outstanding problem in astrophysics. We live in hope that this new work may encourage accretion disk modellers (e.g. GR-R-MHD codes; McKinney et al 2014) to consider the UV outburst in more detail, and to predict the emergent radiation and timescale to aid future comparisons with observations. Ultimately, such models will need to be integrated into fully cosmological models of galaxy formation and evolution. \section{Acknowledgments} JBH is supported by a Laureate Fellowship from the Australian Research Council. JBH also acknowledges a 2018 Miller Professorship at UC Berkeley where the first draft was completed. MG acknowledges funding from the University of Sydney. WHL is supported by China-Australia Scholarship funds for short-term internships. We are particularly grateful to James Josephides (Swinburne University) and Ingrid McCarthy (ANU) for the movie rendition of the Magellanic Stream being ionized by the accretion disk around Sgr A* (see the caption to Fig. 4). Over the past few years, we have benefitted from key insights and suggestions: we thank Chris McKee, Luis Ho, Jenny Greene, Will Lucas, Carole Mundell, Dipanjan Mukherjee, Roger Blandford, Lars Hernquist, Jerry Ostriker, Ramesh Narayan and an anonymous referee, assuming of course they are not in the list already.
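As promised above, the sketch below illustrates the harmonic-fitting experiment: it recovers the dominant axis of a centrosymmetric pattern from scattered sight lines by fitting the $\ell = 0, 1$ harmonics with linear least squares. It is a minimal sketch only, not the fit used for Fig.~\ref{f:CIV}: the sight-line coordinates and `hardness' values are invented placeholders, and a real analysis would fit higher multipoles with proper error weighting.
\begin{verbatim}
import numpy as np

# Galactic (l, b) in degrees and a hardness proxy, e.g. log10(CIV/CII).
# These values are placeholders, not data.
l = np.array([305.0, 280.0, 45.0, 120.0, 200.0])
b = np.array([-45.0, -60.0, 30.0, 70.0, -20.0])
h = np.array([0.3, 0.5, -0.1, 0.4, -0.3])

# Unit vectors toward each sight line.
lr, br = np.radians(l), np.radians(b)
n = np.column_stack([np.cos(br) * np.cos(lr),
                     np.cos(br) * np.sin(lr),
                     np.sin(br)])

# Fit h ~ a0 + d.n (monopole + dipole) by linear least squares.
A = np.column_stack([np.ones(len(h)), n])
coeff, *_ = np.linalg.lstsq(A, h, rcond=None)
d = coeff[1:]
print("dominant axis (Galactic cartesian):", d / np.linalg.norm(d))
\end{verbatim}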
\section{Introduction} The classification of gauge symmetries that can arise in the string landscape is an important problem that is comparatively easy to address for vacua with high supersymmetry. A particularly interesting set of vacua of this kind is provided by heterotic strings compactified on $T^d$, realizing models with 16 supercharges and gauge groups of rank $r \leq 16+d$. Although the basic mechanism governing the allowed gauge groups was determined a long time ago by Narain \cite{Narain:1985jj}, a full classification of all possibilities, together with their respective moduli, was not carried out in the intervening years except for the case $d = 1$ \cite{Cachazo:2000ey}. For $d = 2$ the list of possible gauge groups was implicitly known due to the duality between the heterotic string on $T^2$ and $F$-theory on elliptically fibered K3 surfaces, whose gauge groups were classified in \cite{SZ,Shimada2000OnEK}. Carrying out an exhaustive classification of the allowed gauge groups in $d = 2$ was the main goal of \cite{Font:2020rsk}, where the results were found to match exactly those of \cite{Shimada2000OnEK}, and where the moduli for a representative model with each gauge group were also given. It turned out that the most effective strategy for the classification consists in moving from singular points of maximal enhancement in moduli space to others via controlled manipulations of the root lattices, in a process which is best described as an exploration algorithm. This algorithm is at heart a tool for finding embeddings of lattices into other lattices, which is precisely the context in which the mechanism of gauge symmetry enhancement is best formulated for the heterotic string on $T^d$. However, it turns out that other sectors in the moduli space of theories with 16 supercharges can be treated in this way. By means of the asymmetric orbifold construction of \cite{Narain:1986qm}, theories with 16 supercharges but with gauge groups of reduced rank can be obtained. In particular, one finds the so-called CHL string sector \cite{Chaudhuri:1995fk,Chaudhuri:1995bf} in 9d, for which the momentum lattice was constructed by Mikhailov in \cite{Mikhailov:1998si}. In 8d the story is the same, but in 7d one finds four extra sectors (six in total, including the Narain and CHL sectors). These were constructed together with their momentum lattices in \cite{deBoer:2001wca}. Classifying the possible gauge groups in the CHL string by means of the exploration algorithm was the main goal of \cite{Font:2021uyw}, where the respective topologies of the groups were obtained using results of \cite{Cvetic:2021sjm}. In this paper we extend this work to the six aforementioned sectors in 7d. To this end it is necessary to state precisely how the enhanced symmetry groups can be obtained from the momentum lattices, which we do by a natural generalization of the case of the CHL string \cite{Mikhailov:1998si,Font:2021uyw,Cvetic:2021sjm}. We see that the lattice alone is not sufficient to determine the allowed gauge groups; rather, one must impose a constraint on the embeddings characterized by an integer, which comes from the string theory but is ad hoc from the point of view of the lattice (see Proposition \ref{propTrip}). Implementing this constraint in our algorithm we obtain a list of maximally enhanced gauge algebras for each sector. We find 1035, 407, 50, 9, 3 and 3 such algebras for the $\mathbb{Z}_m$-triples with $m = 1, 2, 3, 4, 5, 6$, respectively.
If we distinguish the enhancements by the global data of the gauge group, the corresponding numbers are 1232, 429, 52, 18, 3 and 3. On the other hand, it is well known that the heterotic string on $T^3$ is dual to M-theory on K3. Gauge groups with reduced rank are realized in the latter when there are so-called partially frozen singularities on the K3 \cite{deBoer:2001wca,Atiyah:2001qf,Tachikawa:2015wka}. It is then natural to ask how this mechanism of partial freezing appears in the heterotic string. We study this problem by exploiting relations between the reduced rank momentum lattices and the Narain lattice, and find a match with the known results on the M-theory side. General freezing rules involving the topology of the gauge groups are obtained, generalizing the results of \cite{Cvetic:2021sjm} for the 8d CHL string. This paper is organized as follows. In Section \ref{s:review} we review the construction of rank-reduced heterotic theories in nine to seven dimensions, emphasizing the role of outer automorphisms of the gauge lattice in the framework of asymmetric orbifolds. Then in Section \ref{s:lattices} we state the criteria for gauge groups being realized in the relevant theories in terms of lattice embeddings, and briefly review how the exploration algorithm works. The problem of singularity freezing is studied in Section \ref{s:frozen}. Finally, the main results obtained with the exploration algorithm are presented and discussed in Section \ref{s:results}. In Appendix \ref{app} we collect some comments regarding the role of certain technicalities of lattice embeddings in the heterotic string, which may help the reader who is not used to thinking in these terms. \section{Basic constructions with rank reduction} \label{s:review} In this section we review how rank-reduced theories with 16 supercharges are constructed from the heterotic string in nine to seven dimensions. The idea is to get an intuitive understanding of these constructions through the manipulation of Dynkin diagrams, illustrating the asymmetric orbifold construction with an outer automorphism. This complements the more general (and abstract) treatment in \cite{deBoer:2001wca}. We go through the CHL string, the ${\mathrm{Spin}}(32)/\mathbb{Z}_2$ heterotic theory compactified without vector structure, and the $\mathbb{Z}_m$-triples. \subsection{CHL string} \label{chl} The CHL string in 9d can be realized as the ${\mathrm{E}}_8 \times {\mathrm{E}}_8$ heterotic string compactified on an orbifold of a circle involving the outer automorphism $\theta$ which exchanges both ${\mathrm{E}}_8$'s and a half-period shift $a$ along the circle \cite{Chaudhuri:1995bf}. The resulting target space has a holonomy $\theta$ along the compact direction which breaks the gauge group ${\mathrm{E}}_8 \times {\mathrm{E}}_8$ to its diagonal ${\mathrm{E}}_8$. The shift $a$ obstructs the recovery of the broken ${\mathrm{E}}_8$ in the twisted sector and so ensures that the rank of the total gauge group is reduced. Since $\theta$ is an outer automorphism of a gauge group, its implementation as an orbifold symmetry naturally leads to a picture of Dynkin diagram folding. In the case of the CHL string, one ``folds one ${\mathrm{E}}_8$ into the other'', and finds that the gauge group of the resulting theory is ${\mathrm{E}}_8$ (with an extra ${\mathrm{U}(1)}$ for arbitrary radius).
Turning on a Wilson line does not change this picture, since it must break both ${\mathrm{E}}_8$'s in the same way, and one then just folds one of the broken groups into the other. Even though the length of a root is not by itself a meaningful concept, it is helpful to think of the nodes that get superposed in folding a diagram as corresponding to shortened roots. The reason is that this maps naturally to an increase in the level of the associated gauge algebra by a factor equal to the order of the automorphism $\theta$. In this case, the ${\mathrm{E}}_8 \times {\mathrm{E}}_8$ at level 1 becomes an ${\mathrm{E}}_8$ at level 2. On the other hand, connected diagrams containing invariant nodes correspond to algebras at level 1. In the 9d CHL string there are no states in the gauge sector invariant under the orbifold symmetry, and so there are no gauge groups at level 1. Compactifying on a circle to 8d, one gets an extra ${\mathrm{SU}}(2)$ at the self-dual radius which is unaffected by the folding, and finds that indeed there are level 1 gauge symmetries (namely symplectic algebras of rank $\leq 10$). The main idea here is that using the symmetry $(a,\theta)$ one constructs a vacuum of the heterotic string with a holonomy that in particular projects out Cartan generator states. Such a holonomy cannot be implemented in the theory by merely turning on Wilson lines, as outer automorphisms are not connected to the identity element in the gauge group. However, the set of holonomies that can be obtained by orbifolding the target manifold is larger and includes those of this type. Together with the diagram folding picture, this story generalizes to the other constructions reviewed below. \subsection{Compactification without vector structure} There is a theory dual to the 8d CHL string which is obtained from the ${\mathrm{Spin}}(32)/\mathbb{Z}_2$ heterotic string by compactifying it on a $T^2$ without vector structure \cite{Witten:1997bs}. The basic idea is that the spectrum of the 10d theory does not contain vector representations of ${\mathrm{Spin}}(32)$, and so one should consider topologies of the gauge bundle which do not admit such representations. An obstruction of this type is measured by a mod two cohomology class $\tilde w_2$, analogous to the second Stiefel-Whitney class $w_2$ which obstructs spin structure. This compactification is characterized by the fact that the two holonomies $g_1, g_2$ on the torus commute as elements of ${\mathrm{Spin}}(32)/\mathbb{Z}_2$, but do \textit{not} commute when lifted to elements of the double cover ${\mathrm{Spin}}(32)$. In other words, the commutator of these holonomies lifts to a nontrivial element in ${\mathrm{Spin}}(32)$ which is identified with the identity upon quotienting by one of the spinor classes in its center. The lifting ${\mathrm{Spin}}(32)/\mathbb{Z}_2 \to {\mathrm{Spin}}(32)$ is therefore obstructed and no vector representations are allowed. Two such holonomies cannot be put simultaneously on a maximal torus of the gauge group. Similarly to the CHL string, one of them has to be realized by orbifolding the theory. The difference in this case is that the 10d gauge group ${\mathrm{Spin}}(32)/\mathbb{Z}_2$ does not have any outer automorphism. One can, however, turn on a Wilson line along one of the compact directions such that, from the point of view of the remaining dimensions, the gauge group is broken to one which does in fact have an outer automorphism.
Concretely, we turn on a Wilson line $A = (\tfrac12^8,0^8)$ which breaks ${\mathrm{Spin}}(32)/\mathbb{Z}_2 \to {\mathrm{Spin}}(16)^2/\mathbb{Z}_2$. This can be represented diagrammatically as \begin{equation}\label{diag1} \begin{tikzpicture}[scale = 0.9] \draw(0,0)--(7,0); \draw(6.5,0)--(6.5,0.5); \draw(6.5,0.5)--(6.5,1); \draw[fill=white](0,0) circle (0.1); \draw[fill=white](0.5,0) circle (0.1); \draw[fill=white](1,0) circle (0.1); \draw[fill=white](1.5,0) circle (0.1); \draw[fill=white](2,0) circle (0.1); \draw[fill=white](2.5,0) circle (0.1); \draw[fill=white](3,0) circle (0.1); \draw[fill=white](3.5,0) circle (0.1); \draw[fill=white](4,0) circle (0.1); \draw[fill=white](4.5,0) circle (0.1); \draw[fill=white](5,0) circle (0.1); \draw[fill=white](5.5,0) circle (0.1); \draw[fill=white](6,0) circle (0.1); \draw[fill=white](6.5,0) circle (0.1); \draw[fill=white](7,0) circle (0.1); \draw[fill=white](6.5,0.5) circle (0.1); \draw[fill=black](6.5,1) circle (0.1); \draw[red,->,>=stealth](7.5,0.5)--(8.5,0.5) node[above=0.3,left]{$A$}; \draw[blue,->,>=stealth](7.5,-1.25)--(8.5,-1.25) node[above=0.3,left]{$\theta$}; \begin{scope}[shift={(9,0)}] \draw(0,0)--(3,0); \draw(4,0)--(7,0); \draw(0.5,0)--(0.5,0.5); \draw(6.5,0)--(6.5,0.5); \draw(0.5,0.5)--(3.5,1); \draw(6.5,0.5)--(3.5,1); \draw[fill=white](0,0) circle (0.1); \draw[fill=white](0.5,0) circle (0.1); \draw[fill=white](1,0) circle (0.1); \draw[fill=white](1.5,0) circle (0.1); \draw[fill=white](2,0) circle (0.1); \draw[fill=white](2.5,0) circle (0.1); \draw[fill=white](3,0) circle (0.1); \draw[fill=white](4,0) circle (0.1); \draw[fill=white](4.5,0) circle (0.1); \draw[fill=white](5,0) circle (0.1); \draw[fill=white](5.5,0) circle (0.1); \draw[fill=white](6,0) circle (0.1); \draw[fill=white](6.5,0) circle (0.1); \draw[fill=white](7,0) circle (0.1); \draw[fill=white](0.5,0.5) circle (0.1); \draw[fill=white](6.5,0.5) circle (0.1); \draw[fill=black](3.5,1) circle (0.1); \end{scope} \begin{scope}[shift={(11,-1.5)}] \draw(0,0)--(3,0); \draw(0.5,0)--(0.5,0.5); \draw[fill=white](0,0) circle (0.1); \draw[fill=white](0.5,0) circle (0.1); \draw[fill=white](1,0) circle (0.1); \draw[fill=white](1.5,0) circle (0.1); \draw[fill=white](2,0) circle (0.1); \draw[fill=white](2.5,0) circle (0.1); \draw[fill=white](3,0) circle (0.1); \draw[fill=white](0.5,0.5) circle (0.1); \end{scope} \end{tikzpicture} \end{equation} where the white nodes are simple roots and the black nodes represent the fundamental weight which generates the $\mathbb{Z}_2$ in each case. We see that the RHS corresponds to a group with outer automorphism $\theta$. Orbifolding the theory by this symmetry and a half-period shift along the second compact direction, we obtain a theory with gauge group ${\mathrm{Spin}}(16) \times {\mathrm{U}(1)}^2$ (for arbitrary values of the torus metric and B-field). We note that the fundamental weight gets projected out by the orbifold symmetry, hence the resulting gauge group is simply connected. The commutator of the holonomies chosen is the exponential of \begin{equation}\label{comcond} A - \theta(A) = (\tfrac12^8,0^8) - (0^8,\tfrac12^8) = (\tfrac12^8,-\tfrac12^8), \end{equation} which does not yield the identity in ${\mathrm{Spin}}(32)$ but rather the element which gets identified with it in ${\mathrm{Spin}}(32)/\mathbb{Z}_2$. This corresponds to the discussion above. More generally, one can deform this Wilson line by adding vectors symmetric in the first and last eight components, i.e. those of the form $(\delta,\delta)$, so as to respect condition \eqref{comcond}.
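As a quick numerical cross-check of \eqref{comcond}, one can verify that $(\tfrac12^8,-\tfrac12^8)$ represents the spinor conjugacy class of the $\mathrm{D}_{16}$ weight lattice, i.e. a weight of ${\mathrm{Spin}}(32)$ that becomes trivial only after the $\mathbb{Z}_2$ quotient. The sketch below is our own illustration and encodes the standard class tests, with the convention that all-half-integral entries with an even number of minus signs select the spinor class $s$:
\begin{verbatim}
import numpy as np

v = np.array([0.5] * 8 + [-0.5] * 8)   # A - theta(A) from eq. (comcond)

# All entries half-odd-integral: spinor or cospinor class of D16.
half_integral = bool(np.all(np.mod(2 * v, 2) == 1))
# Even number of minus signs: the spinor class s used in Spin(32)/Z2.
class_s = int((v < 0).sum()) % 2 == 0
# 2v lies in the D16 root lattice (integer entries with even sum),
# so the class of v is 2-torsion, matching the Z2 quotient.
two_v = 2 * v
torsion = bool(np.all(two_v == np.round(two_v))) and int(two_v.sum()) % 2 == 0

print(half_integral, class_s, torsion)   # True True True
\end{verbatim}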
One can also turn on another Wilson line $A'$ in the second compact direction such that $\theta(A') = A'$, since the product of two holonomies along the same direction should commute. Together with deformations of the metric and the B-field we reach other points in moduli space exhibiting different gauge symmetries (classified in \cite{Font:2021uyw}). This moduli space is equivalent to that of the 8d CHL string, where the equivalence is given by T-duality \cite{deBoer:2001wca}. \subsection{Holonomy triples in 7d} \label{ss:triples} The basic idea behind the construction just described can be applied to the heterotic string on a circle, further compactified on a two-torus. This comes from the fact that there are various 9d gauge groups analogous to the 10d ${\mathrm{Spin}}(32)/\mathbb{Z}_2$. It is enough to consider the following five: \begin{equation} \footnotesize \frac{({\mathrm{E}}_7 \times {\mathrm{SU}}(2))^2}{\mathbb{Z}_2}, ~~~ \frac{({\mathrm{E}}_6 \times {\mathrm{SU}}(3))^2}{\mathbb{Z}_3}, ~~~ \frac{({\mathrm{Spin}}(10) \times {\mathrm{SU}}(4))^2}{\mathbb{Z}_4}, ~~~ \frac{{\mathrm{SU}}(5)^4}{\mathbb{Z}_5}, ~~~\frac{({\mathrm{SU}}(2) \times {\mathrm{SU}}(3) \times {\mathrm{SU}}(6))^2}{\mathbb{Z}_6}\,. \end{equation} These correspond to breakings of ${\mathrm{E}}_8 \times {\mathrm{E}}_8$ by a Wilson line $A$, so that it is most natural to work in the framework of the ${\mathrm{E}}_8 \times {\mathrm{E}}_8$ string. Natural choices for these Wilson lines are, respectively, \begin{equation}\label{1sthol} A = \begin{cases} (0^6,-\tfrac12,\tfrac12)\times(\tfrac12,-\tfrac12,0^6)\, & \quad(\mathbb{Z}_2)\\ (0^5,-\tfrac13^2,\tfrac23)\times(\tfrac23,-\tfrac23^2,0^5)\, & \quad(\mathbb{Z}_3)\\ (0^4,-\tfrac14^3,\tfrac34)\times (-\tfrac34,\tfrac14^3,0^4)\, & \quad(\mathbb{Z}_4)\\ (0^3,-\tfrac15^4,\tfrac45)\times(-\tfrac45,\tfrac15^4,0^3)\, & \quad(\mathbb{Z}_5)\\ (0^2,-\tfrac16^5,\tfrac56)\times(-\tfrac56,\tfrac16^5,0^2)\, & \quad(\mathbb{Z}_6) \end{cases}\,. \end{equation} The $\mathbb{Z}_m$'s correspond not only to the fundamental group of each broken gauge group but also to the cyclic group generated by the outer automorphism $\theta$ to be implemented. The name `$\mathbb{Z}_m$-triple' refers to this group together with the three holonomies consisting of \eqref{1sthol} and the pair analogous to the one discussed in the previous section, which we now discuss. \subsubsection{$\mathbb{Z}_2$-triple} \label{ss:z2trip} Consider first the $\mathbb{Z}_2$-triple. From the point of view of the $T^2$ on which the 9d theory is compactified, the gauge group is $({\mathrm{E}}_7 \times {\mathrm{SU}}(2))^2/\mathbb{Z}_2$, which indeed has an order two outer automorphism, exchanging the ${\mathrm{E}}_7 \times {\mathrm{SU}}(2)$ factors. However, using this symmetry to orbifold the theory just gives us the CHL string, as discussed in section \ref{chl}. Consider instead turning on a Wilson line $A'$ on one of the $T^2$ directions ($x^1$), of the form \begin{equation} A' = (0^5,-\tfrac12,\tfrac12,0)\times (0,-\tfrac12,\tfrac12,0^5)\,. \end{equation} It has the effect of further breaking the gauge group to $({\mathrm{E}}_6\times {\mathrm{U}(1)}^2)^2$. From the point of view of the other $T^2$ direction ($x^2$), the gauge group then has an order 2 outer automorphism corresponding to the symmetry of each ${\mathrm{E}}_6$ diagram.
To get a consistent theory (meaning that the partition function is modular invariant), however, we have to take into account how the orbifold symmetry acts on the 16 internal directions and not only on the 12 corresponding to the ${\mathrm{E}}_6$'s. Fortunately, it is not hard to find such a consistent automorphism. One just has to take the one corresponding to the symmetry of the affine diagram of the original gauge algebra $2{\mathrm{E}}_7 + 2{\mathrm{A}}_1$: \begin{equation} \begin{tikzpicture}[scale = 0.9] \draw(0,0)--(2,0); \draw(0.5,0)--(0.5,1.5); \draw(2.9,0)--(2.9,0.5); \draw(3.1,0)--(3.1,0.5); \draw[fill=white](0,0) circle (0.1); \draw[fill=white](0.5,0) circle (0.1); \draw[fill=white](1,0) circle (0.1); \draw[fill=white](1.5,0) circle (0.1); \draw[fill=white](2,0) circle (0.1); \draw[fill=white](3,0) circle (0.1); \draw[fill=white](3,0.5) circle (0.1); \draw[fill=white](0.5,0.5) circle (0.1); \draw[fill=white](0.5,1) circle (0.1); \draw[fill=white](0.5,1.5) circle (0.1); \draw[blue,<->,>=stealth](1.5,0.5) arc (0:90:0.5); \draw[blue,<->,>=stealth](2.7,0)--(2.7,0.5); \begin{scope}[xscale = -1, xshift = -7cm] \draw(0,0)--(2,0); \draw(0.5,0)--(0.5,1.5); \draw(2.9,0)--(2.9,0.5); \draw(3.1,0)--(3.1,0.5); \draw[fill=white](0,0) circle (0.1); \draw[fill=white](0.5,0) circle (0.1); \draw[fill=white](1,0) circle (0.1); \draw[fill=white](1.5,0) circle (0.1); \draw[fill=white](2,0) circle (0.1); \draw[fill=white](3,0) circle (0.1); \draw[fill=white](3,0.5) circle (0.1); \draw[fill=white](0.5,0.5) circle (0.1); \draw[fill=white](0.5,1) circle (0.1); \draw[fill=white](0.5,1.5) circle (0.1); \draw[blue,<->,>=stealth](1.5,0.5) arc (0:90:0.5); \draw[blue,<->,>=stealth](2.7,0)--(2.7,0.5); \end{scope} \end{tikzpicture} \end{equation} It can then be shown that, together with an order 2 shift in $x^2$, one obtains a consistent theory with a holonomy that breaks 8 Cartan generators, and the gauge group is ${\mathrm{F}}_4 \times {\mathrm{F}}_4$ at level 1 times ${\mathrm{U}(1)}^3$, for arbitrary metric and B-field. The former is due to the automorphism having an associated projector $P_\theta = 1 + \theta$ of rank 8. The latter comes from the fact that each ${\mathrm{E}}_6$ folds into an ${\mathrm{F}}_4$, where two nodes are left invariant (cf. discussion in section \ref{chl}).
As in the previous construction, we can represent this breaking diagrammatically: \begin{equation} \begin{tikzpicture}[scale = 0.9] \draw(0,0)--(2,0); \draw(5,0)--(7,0); \draw(0.5,0)--(0.5,1); \draw(6.5,0)--(6.5,0.5); \draw(6.5,0.5)--(6.5,1); \draw(2,0)--(3.5,1); \draw(3,0)--(3.5,1); \draw(4,0)--(3.5,1); \draw(5,0)--(3.5,1); \draw[fill=white](0,0) circle (0.1); \draw[fill=white](0.5,0) circle (0.1); \draw[fill=white](1,0) circle (0.1); \draw[fill=white](1.5,0) circle (0.1); \draw[fill=white](2,0) circle (0.1); \draw[fill=white](3,0) circle (0.1); \draw[fill=white](4,0) circle (0.1); \draw[fill=white](5,0) circle (0.1); \draw[fill=white](5.5,0) circle (0.1); \draw[fill=white](6,0) circle (0.1); \draw[fill=white](6.5,0) circle (0.1); \draw[fill=white](7,0) circle (0.1); \draw[fill=white](0.5,0.5) circle (0.1); \draw[fill=white](6.5,0.5) circle (0.1); \draw[fill=white](6.5,1) circle (0.1); \draw[fill=white](0.5,1) circle (0.1); \draw[fill=black](3.5,1) circle (0.1); \draw[red,->,>=stealth](7.5,0.5)--(8.5,0.5) node[above=0.3,left]{$A'$}; \draw[blue,->,>=stealth](7.5,-1.25)--(8.5,-1.25) node[above=0.3,left]{$\theta$}; \begin{scope}[shift={(9,0)}] \draw(0,0)--(1.5,0); \draw(5.5,0)--(7,0); \draw(0.5,0)--(0.5,1); \draw(6.5,0)--(6.5,0.5); \draw(6.5,0.5)--(6.5,1); \draw[fill=white](0,0) circle (0.1); \draw[fill=white](0.5,0) circle (0.1); \draw[fill=white](1,0) circle (0.1); \draw[fill=white](1.5,0) circle (0.1); \draw[fill=white](5.5,0) circle (0.1); \draw[fill=white](6,0) circle (0.1); \draw[fill=white](6.5,0) circle (0.1); \draw[fill=white](7,0) circle (0.1); \draw[fill=white](0.5,0.5) circle (0.1); \draw[fill=white](6.5,0.5) circle (0.1); \draw[fill=white](6.5,1) circle (0.1); \draw[fill=white](0.5,1) circle (0.1); \end{scope} \begin{scope}[shift={(9,-1.5)}] \draw(0,0)--(0.5,0); \draw(0.5,0.05)--(1,0.05); \draw(0.5,-0.05)--(1,-0.05); \draw(1,0)--(1.5,0); \draw(5.5,0)--(6,0); \draw(6,0.05)--(6.5,0.05); \draw(6,-0.05)--(6.5,-0.05); \draw(6.5,0)--(7,0); \draw[fill=white](0,0) circle (0.1); \draw[fill=white](0.5,0) circle (0.1); \draw[fill=white](1,0) circle (0.1); \draw[fill=white](1.5,0) circle (0.1); \draw[fill=white](5.5,0) circle (0.1); \draw[fill=white](6,0) circle (0.1); \draw[fill=white](6.5,0) circle (0.1); \draw[fill=white](7,0) circle (0.1); \end{scope} \end{tikzpicture} \end{equation} Let us now consider the commutator of the holonomies along the $T^2$. We find that \begin{equation} \theta(A') - A' = (0^5,1,-1,0)\times(0,1,-1,0^5)\,, \end{equation} which is just the fundamental weight represented as a black node in the above diagram. Its exponential is a nontrivial element of $({\mathrm{E}}_7 \times {\mathrm{SU}}(2))^2$ which gets identified with the identity in the quotient $({\mathrm{E}}_7 \times {\mathrm{SU}}(2))^2/\mathbb{Z}_2$, mirroring the situation in the compactification without vector structure, as expected. One may also deform the Wilson lines along all directions by adding vectors invariant under $\theta$. This restriction reduces the degrees of freedom of the theory with respect to the Narain moduli space in the appropriate way. Finally we note that here we have obtained a particular gauge group, ${\mathrm{F}}_4 \times {\mathrm{F}}_4 \times {\mathrm{U}(1)}^3$, out of the many possibilities that exist in the moduli space of the theory. The general construction carried out in \cite{deBoer:2001wca} leads to a momentum lattice analogous to the Narain lattice, with which we may systematically explore this moduli space (as we discuss in the next section).
In this case, the momentum lattice is just the Mikhailov lattice in 7d and the theory is equivalent to the 7d CHL string. We emphasize that the $\mathbb{Z}_2$-triple does not involve the exchange of the ${\mathrm{E}}_8$'s (or subgroups thereof), and so strictly speaking it does not correspond to the CHL string. Indeed, one can construct the CHL string but not the $\mathbb{Z}_2$-triple in 9d. When they both exist, they are equivalent by T-duality. \subsubsection{$\mathbb{Z}_3$-triple} Starting with the $\mathbb{Z}_3$-triple we find genuinely new rank-reduced moduli space components with respect to the 8d case. Here the gauge group from the point of view of the $T^2$ is $({\mathrm{E}}_6 \times {\mathrm{SU}}(3))^2/\mathbb{Z}_3$. We turn on a Wilson line along $x^1$ of the form \begin{equation} A' = (0^4,-\tfrac13,\tfrac23,\tfrac13,0)\times (0,-\tfrac13,-\tfrac23,\tfrac13,0^4). \end{equation} This breaks the gauge group to $({\mathrm{SO}}(8)\times {\mathrm{U}(1)}^4)^2$. To get the order 3 automorphism we again consider the symmetry of the affine diagram of the original group: \begin{equation} \begin{tikzpicture}[scale = 0.9] \draw(-0.5,-0.76)--(0.5,0); \draw(1.5,-0.76)--(0.5,0); \draw(2.5,0)--(3,0); \draw(0.5,0)--(0.5,1); \draw(3,0)--(2.75,0.36); \draw(2.5,0)--(2.75,0.36); \begin{scope}[shift={(0,-0.05)}] \draw[blue,->,>=stealth](0,-0.75)to[out=-45,in=225](1,-0.75); \draw[blue,->,>=stealth,rotate around={120:(0.5,0)}](0,-0.75)to[out=-45,in=225](1,-0.75); \draw[blue,->,>=stealth,rotate around={240:(0.5,0)}](0,-0.75)to[out=-45,in=225](1,-0.75); \end{scope} \draw[fill=white](0,-0.38) circle (0.1); \draw[fill=white](0.5,0) circle (0.1); \draw[fill=white](1,-0.38) circle (0.1); \draw[fill=white](1.5,-0.76) circle (0.1); \draw[fill=white](-0.5,-0.76) circle (0.1); \draw[fill=white](2.5,0) circle (0.1); \draw[fill=white](3,0) circle (0.1); \draw[fill=white](2.75,0.36) circle (0.1); \draw[fill=white](0.5,0.5) circle (0.1); \draw[fill=white](0.5,1) circle (0.1); \draw[blue,->,>=stealth](2.55,0.5)--(2.25,0.1); \draw[blue,->,>=stealth](2.5,-0.25)--(3,-0.25); \draw[blue,->,>=stealth](3.25,0.1)--(2.95,0.5); \begin{scope}[xscale = -1, xshift = -7cm] \draw(-0.5,-0.76)--(0.5,0); \draw(1.5,-0.76)--(0.5,0); \draw(2.5,0)--(3,0); \draw(0.5,0)--(0.5,1); \draw(3,0)--(2.75,0.36); \draw(2.5,0)--(2.75,0.36); \begin{scope}[shift={(0,-0.05)}] \draw[blue,->,>=stealth](0,-0.75)to[out=-45,in=225](1,-0.75); \draw[blue,->,>=stealth,rotate around={120:(0.5,0)}](0,-0.75)to[out=-45,in=225](1,-0.75); \draw[blue,->,>=stealth,rotate around={240:(0.5,0)}](0,-0.75)to[out=-45,in=225](1,-0.75); \end{scope} \draw[fill=white](0,-0.38) circle (0.1); \draw[fill=white](0.5,0) circle (0.1); \draw[fill=white](1,-0.38) circle (0.1); \draw[fill=white](1.5,-0.76) circle (0.1); \draw[fill=white](-0.5,-0.76) circle (0.1); \draw[fill=white](2.5,0) circle (0.1); \draw[fill=white](3,0) circle (0.1); \draw[fill=white](2.75,0.36) circle (0.1); \draw[fill=white](0.5,0.5) circle (0.1); \draw[fill=white](0.5,1) circle (0.1); \draw[blue,->,>=stealth](2.55,0.5)--(2.25,0.1); \draw[blue,->,>=stealth](2.5,-0.25)--(3,-0.25); \draw[blue,->,>=stealth](3.25,0.1)--(2.95,0.5); \end{scope} \end{tikzpicture} \end{equation} This descends to the triality of each ${\mathrm{SO}}(8)$ and folds them into ${\mathrm{G}}_2 \times {\mathrm{G}}_2$ at level 1. The projector $P_\theta = 1 + \theta + \theta^2$ is of rank 4, eliminating 12 Cartan generators, and so the resulting gauge group is ${\mathrm{G}}_2 \times {\mathrm{G}}_2 \times {\mathrm{U}(1)}^3$ for arbitrary metric and B-field.
Again, the orbifold includes an order 3 shift in $x^2$. The corresponding breaking diagram is \begin{equation} \begin{tikzpicture}[scale = 0.9] \draw(0,0)--(1.5,0); \draw(2.5,0)--(3,0); \draw(4,0)--(4.5,0); \draw(5.5,0)--(7,0); \draw(0.5,0)--(0.5,1); \draw(6.5,0)--(6.5,0.5); \draw(6.5,0.5)--(6.5,1); \draw(1.5,0)--(3.5,1); \draw(2.5,0)--(3.5,1); \draw(4.5,0)--(3.5,1); \draw(5.5,0)--(3.5,1); \draw[fill=white](0,0) circle (0.1); \draw[fill=white](0.5,0) circle (0.1); \draw[fill=white](1,0) circle (0.1); \draw[fill=white](1.5,0) circle (0.1); \draw[fill=white](2.5,0) circle (0.1); \draw[fill=white](3,0) circle (0.1); \draw[fill=white](4,0) circle (0.1); \draw[fill=white](4.5,0) circle (0.1); \draw[fill=white](5.5,0) circle (0.1); \draw[fill=white](6,0) circle (0.1); \draw[fill=white](6.5,0) circle (0.1); \draw[fill=white](7,0) circle (0.1); \draw[fill=white](0.5,0.5) circle (0.1); \draw[fill=white](6.5,0.5) circle (0.1); \draw[fill=white](6.5,1) circle (0.1); \draw[fill=white](0.5,1) circle (0.1); \draw[fill=black](3.5,1) circle (0.1); \draw[red,->,>=stealth](7.5,0.5)--(8.5,0.5) node[above=0.3,left]{$A'$}; \draw[blue,->,>=stealth](7.5,-1.25)--(8.5,-1.25) node[above=0.3,left]{$\theta$}; \begin{scope}[shift={(9,0)}] \draw(0,0)--(1,0); \draw(6,0)--(7,0); \draw(0.5,0)--(0.5,0.5); \draw(6.5,0)--(6.5,0.5); \draw[fill=white](0,0) circle (0.1); \draw[fill=white](0.5,0) circle (0.1); \draw[fill=white](1,0) circle (0.1); \draw[fill=white](6,0) circle (0.1); \draw[fill=white](6.5,0) circle (0.1); \draw[fill=white](7,0) circle (0.1); \draw[fill=white](0.5,0.5) circle (0.1); \draw[fill=white](6.5,0.5) circle (0.1); \end{scope} \begin{scope}[shift={(9,-1.5)}] \draw(0.5,0.07)--(1,0.07); \draw(0.5,0)--(1,0); \draw(0.5,-0.07)--(1,-0.07); \draw(6,0.07)--(6.5,0.07); \draw(6,0)--(6.5,0); \draw(6,-0.07)--(6.5,-0.07); \draw[fill=white](0.5,0) circle (0.1); \draw[fill=white](1,0) circle (0.1); \draw[fill=white](6,0) circle (0.1); \draw[fill=white](6.5,0) circle (0.1); \end{scope} \end{tikzpicture} \end{equation} The commutator of $A'$ and $\theta$ is given by \begin{equation} \theta(A')-A' = (0^4,1,-1,0^2)\times(0^2,1,-1,0^4)\,, \end{equation} corresponding to the weight represented by the black node in the diagram above, and the story is the same as before for the $\mathbb{Z}_2$-triple. In this case one can deform the three Wilson lines with four degrees of freedom each, which is the rank of the projector $P_\theta$. Together with the nine degrees of freedom coming from the metric and B-field, the dimension of the moduli space is 21, and its local geometry is given by the coset \begin{equation} {\mathrm{SO}}(7,3,\mathbb{R})\big/({\mathrm{SO}}(7,\mathbb{R})\times {\mathrm{SO}}(3,\mathbb{R})). \end{equation} In \cite{deBoer:2001wca} it was proposed that the global structure is given by the automorphism group of the momentum lattice of the theory, which was determined to be \begin{equation} \Lambda_3 = {\mathrm{II}}_{3,3}\oplus {\mathrm{A}}_2 \oplus {\mathrm{A}}_2, \end{equation} extending the results for the first two components of the moduli space where the Narain and the Mikhailov lattice respectively play this role. \subsubsection{$\mathbb{Z}_4$-triple} For the $\mathbb{Z}_4$-triple we start with the 9d gauge group $({\mathrm{Spin}}(10)\times {\mathrm{SU}}(4))^2/\mathbb{Z}_4$ and turn on the Wilson line \begin{equation} A' = \frac{1}{8}(-1,-1,-3,-3,5,3,1,-1)\times(1,-1,-3,-5,3,3,1,1), \end{equation} which breaks it to $({\mathrm{SU}}(2)^8/\mathbb{Z}_2)\times {\mathrm{U}(1)}^8$.
The affine diagram of the original group has an order 4 symmetry: \begin{equation} \begin{tikzpicture}[scale = 0.9] \draw(-0.5,-0.5)--(0,0); \draw(0.5,0)--(1,-0.5); \draw(-0.5,0.5)--(0,0); \draw(0.5,0)--(1,0.5); \draw(0,0)--(0.5,0); \begin{scope}[shift={(0,-0.25)}] \draw(2.5,0)--(3,0); \draw(3,0)--(3,0.5); \draw(3,0.5)--(2.5,0.5); \draw(2.5,0)--(2.5,0.5); \draw[fill=white](2.5,0.5) circle (0.1); \draw[fill=white](2.5,0) circle (0.1); \draw[fill=white](3,0) circle (0.1); \draw[fill=white](3,0.5) circle (0.1); \draw[blue,->,>=stealth](2.25,0.5)--(2.25,0); \draw[blue,->,>=stealth](2.5,-0.25)--(3,-0.25); \draw[blue,->,>=stealth](3.25,0)--(3.25,0.5); \draw[blue,->,>=stealth](3,0.75)--(2.5,0.75); \end{scope} \draw[fill=white](0,0) circle (0.1); \draw[fill=white](0.5,0) circle (0.1); \draw[fill=white](1,-0.5) circle (0.1); \draw[fill=white](-0.5,-0.5) circle (0.1); \draw[fill=white](1,0.5) circle (0.1); \draw[fill=white](-0.5,0.5) circle (0.1); \draw[blue,->,>=stealth](0.75,0.75)to[out=135,in=45](-0.25,0.75); \draw[blue,->,>=stealth](-0.5,0.25)to[out=270,in=180](0.75,-0.5); \draw[blue,->,>=stealth](0.75,-0.75)to[out=-135,in=-45](-0.25,-0.75); \draw[blue,->,>=stealth](-0.5,-0.25)to[out=-270,in=-180](0.75,0.5); \draw[blue,<->,>=stealth](0,-0.25)--(0.5,-0.25); \begin{scope}[xscale = -1, xshift = -7cm] \draw(-0.5,-0.5)--(0,0); \draw(0.5,0)--(1,-0.5); \draw(-0.5,0.5)--(0,0); \draw(0.5,0)--(1,0.5); \draw(0,0)--(0.5,0); \begin{scope}[shift={(0,-0.25)}] \draw(2.5,0)--(3,0); \draw(3,0)--(3,0.5); \draw(3,0.5)--(2.5,0.5); \draw(2.5,0)--(2.5,0.5); \draw[fill=white](2.5,0.5) circle (0.1); \draw[fill=white](2.5,0) circle (0.1); \draw[fill=white](3,0) circle (0.1); \draw[fill=white](3,0.5) circle (0.1); \draw[blue,->,>=stealth](2.25,0.5)--(2.25,0); \draw[blue,->,>=stealth](2.5,-0.25)--(3,-0.25); \draw[blue,->,>=stealth](3.25,0)--(3.25,0.5); \draw[blue,->,>=stealth](3,0.75)--(2.5,0.75); \end{scope} \draw[fill=white](0,0) circle (0.1); \draw[fill=white](0.5,0) circle (0.1); \draw[fill=white](1,-0.5) circle (0.1); \draw[fill=white](-0.5,-0.5) circle (0.1); \draw[fill=white](1,0.5) circle (0.1); \draw[fill=white](-0.5,0.5) circle (0.1); \draw[blue,->,>=stealth](0.75,0.75)to[out=135,in=45](-0.25,0.75); \draw[blue,->,>=stealth](-0.5,0.25)to[out=270,in=180](0.75,-0.5); \draw[blue,->,>=stealth](0.75,-0.75)to[out=-135,in=-45](-0.25,-0.75); \draw[blue,->,>=stealth](-0.5,-0.25)to[out=-270,in=-180](0.75,0.5); \draw[blue,<->,>=stealth](0,-0.25)--(0.5,-0.25); \end{scope} \end{tikzpicture} \end{equation} The surviving ${\mathrm{SU}}(2)$'s under the action of $A'$ correspond to the outermost nodes of the affine ${\mathrm{Spin}}(10)$'s, and they get identified under $\theta$ into ${\mathrm{SU}}(2)\times {\mathrm{SU}}(2)$ at level 4. The rank of the projector $P_\theta = 1 + \theta + \theta^2 + \theta^3$ is 2, and so 14 Cartan generators are eliminated.
There is again an order 4 shift in $x^2$ in the orbifold symmetry, and we get the gauge group $\big(({\mathrm{SU}}(2)\times {\mathrm{SU}}(2))/(\mathbb{Z}_2\times \mathbb{Z}_2)\big)\times {\mathrm{U}(1)}^3$ for generic metric and B-field. Here the ${\mathrm{SU}}(2)^2$ factor is quotiented by its center, a fact which is derived at the end of Section \ref{ss:proj}. The breaking diagram is \begin{equation} \begin{tikzpicture}[scale = 0.9] \draw(0,0)--(1,0); \draw(2,0)--(3,0); \draw(4,0)--(5,0); \draw(6,0)--(7,0); \draw(0.5,0)--(0.5,1); \draw(6.5,0)--(6.5,0.5); \draw(6.5,0.5)--(6.5,1); \draw(1,0)--(3.5,1); \draw(2,0)--(3.5,1); \draw(5,0)--(3.5,1); \draw(6,0)--(3.5,1); \draw[fill=white](0,0) circle (0.1); \draw[fill=white](0.5,0) circle (0.1); \draw[fill=white](1,0) circle (0.1); \draw[fill=white](2,0) circle (0.1); \draw[fill=white](2.5,0) circle (0.1); \draw[fill=white](3,0) circle (0.1); \draw[fill=white](4,0) circle (0.1); \draw[fill=white](4.5,0) circle (0.1); \draw[fill=white](5,0) circle (0.1); \draw[fill=white](6,0) circle (0.1); \draw[fill=white](6.5,0) circle (0.1); \draw[fill=white](7,0) circle (0.1); \draw[fill=white](0.5,0.5) circle (0.1); \draw[fill=white](6.5,0.5) circle (0.1); \draw[fill=white](6.5,1) circle (0.1); \draw[fill=white](0.5,1) circle (0.1); \draw[fill=black](3.5,1) circle (0.1); \draw[red,->,>=stealth](7.5,0.5)--(8.5,0.5) node[above=0.3,left]{$A'$}; \draw[blue,->,>=stealth](7.5,-1.25)--(8.5,-1.25) node[above=0.3,left]{$\theta$}; \begin{scope}[shift={(9,0)}] \draw(1,0)--(3.5,1); \draw(7,0)--(3.5,1); \draw(0,0.5)--(3.5,1); \draw(0.5,1)--(3.5,1); \draw(6.5,1)--(3.5,1); \draw(7,0.5)--(3.5,1); \draw(0,0)--(3.5,1); \draw(6,0)--(3.5,1); \draw[fill=black](3.5,1) circle (0.1); \draw[fill=white](0,0) circle (0.1); \draw[fill=white](1,0) circle (0.1); \draw[fill=white](6,0) circle (0.1); \draw[fill=white](7,0) circle (0.1); \draw[fill=yellow](0,0.5) circle (0.1); \draw[fill=yellow](7,0.5) circle (0.1); \draw[fill=white](6.5,1) circle (0.1); \draw[fill=white](0.5,1) circle (0.1); \end{scope} \begin{scope}[shift={(9,-1.5)}] \draw[fill=white](1,0) circle (0.1); \draw[fill=white](6,0) circle (0.1); \end{scope} \end{tikzpicture} \end{equation} The nodes in yellow are just the lowest roots of each ${\mathrm{Spin}}(10)$. Note that the black node in the RHS represents a different weight than the one in the LHS. Indeed, it corresponds to an order two element in the center of ${\mathrm{SU}}(2)^8$, while the latter corresponds to an order four element of the LHS group. We find that \begin{equation} \theta(A')-A' = (0^3,1,-1,0^3)\times (0^3,1,-1,0^3)\,, \end{equation} which is the weight in the LHS of the diagram above. The moduli space is of dimension 15, locally of the form \begin{equation} {\mathrm{SO}}(5,3,\mathbb{R})\big/({\mathrm{SO}}(5,\mathbb{R})\times {\mathrm{SO}}(3,\mathbb{R})), \end{equation} and the momentum lattice is \begin{equation} \Lambda_4 = {\mathrm{II}}_{3,3} \oplus {\mathrm{A}}_1 \oplus {\mathrm{A}}_1. \end{equation} \subsubsection{$\mathbb{Z}_{5}$ and $\mathbb{Z}_6$-triples} For the $\mathbb{Z}_5$-triple we use the Wilson line \begin{equation} A' = \frac{1}{10}(-5,1,-3,7,5,3,1,-1)\times(1,-1,-3,-5,-7,3,-1,5)\,, \end{equation} which breaks ${\mathrm{SU}}(5)^4/\mathbb{Z}_5$ to ${\mathrm{U}(1)}^{16}$.
The automorphism $\theta$ corresponds to the symmetry \begin{equation} \begin{tikzpicture}[scale = 0.9] \draw(0,0)--(1,0); \draw(1,0)--(1.31,0.95); \draw(1.31,0.95)--(0.51,1.54); \draw(0.51,1.54)--(-0.31,0.95); \draw(-0.31,0.95)--(0,0); \draw[fill=white](0,0) circle (0.1); \draw[fill=white](1,0) circle (0.1); \draw[fill=white](1.31,0.95) circle (0.1); \draw[fill=white](0.51,1.54) circle (0.1); \draw[fill=white](-0.31,0.95) circle (0.1); \draw[blue,->,>=stealth](0,-0.25)--(1,-0.25); \draw[blue,->,>=stealth](1.23,-0.08)--(1.54,0.87); \draw[blue,->,>=stealth](1.46,1.15)--(0.66,1.74); \draw[blue,->,>=stealth](0.34,1.74)--(-0.46,1.15); \draw[blue,->,>=stealth](-0.54,0.87)--(-0.23,-0.08); \begin{scope}[shift={(3,0)}] \draw(0,0)--(1,0); \draw(1,0)--(1.31,0.95); \draw(1.31,0.95)--(0.51,1.54); \draw(0.51,1.54)--(-0.31,0.95); \draw(-0.31,0.95)--(0,0); \draw[fill=white](0,0) circle (0.1); \draw[fill=white](1,0) circle (0.1); \draw[fill=white](1.31,0.95) circle (0.1); \draw[fill=white](0.51,1.54) circle (0.1); \draw[fill=white](-0.31,0.95) circle (0.1); \draw[blue,->,>=stealth](0,-0.25)--(1,-0.25); \draw[blue,->,>=stealth](1.23,-0.08)--(1.54,0.87); \draw[blue,->,>=stealth](1.46,1.15)--(0.66,1.74); \draw[blue,->,>=stealth](0.34,1.74)--(-0.46,1.15); \draw[blue,->,>=stealth](-0.54,0.87)--(-0.23,-0.08); \end{scope} \begin{scope}[xscale = -1 , xshift = -10cm] \draw(0,0)--(1,0); \draw(1,0)--(1.31,0.95); \draw(1.31,0.95)--(0.51,1.54); \draw(0.51,1.54)--(-0.31,0.95); \draw(-0.31,0.95)--(0,0); \draw[fill=white](0,0) circle (0.1); \draw[fill=white](1,0) circle (0.1); \draw[fill=white](1.31,0.95) circle (0.1); \draw[fill=white](0.51,1.54) circle (0.1); \draw[fill=white](-0.31,0.95) circle (0.1); \draw[blue,->,>=stealth](0,-0.25)--(1,-0.25); \draw[blue,->,>=stealth](1.23,-0.08)--(1.54,0.87); \draw[blue,->,>=stealth](1.46,1.15)--(0.66,1.74); \draw[blue,->,>=stealth](0.34,1.74)--(-0.46,1.15); \draw[blue,->,>=stealth](-0.54,0.87)--(-0.23,-0.08); \begin{scope}[shift={(3,0)}] \draw(0,0)--(1,0); \draw(1,0)--(1.31,0.95); \draw(1.31,0.95)--(0.51,1.54); \draw(0.51,1.54)--(-0.31,0.95); \draw(-0.31,0.95)--(0,0); \draw[fill=white](0,0) circle (0.1); \draw[fill=white](1,0) circle (0.1); \draw[fill=white](1.31,0.95) circle (0.1); \draw[fill=white](0.51,1.54) circle (0.1); \draw[fill=white](-0.31,0.95) circle (0.1); \draw[blue,->,>=stealth](0,-0.25)--(1,-0.25); \draw[blue,->,>=stealth](1.23,-0.08)--(1.54,0.87); \draw[blue,->,>=stealth](1.46,1.15)--(0.66,1.74); \draw[blue,->,>=stealth](0.34,1.74)--(-0.46,1.15); \draw[blue,->,>=stealth](-0.54,0.87)--(-0.23,-0.08); \end{scope} \end{scope} \end{tikzpicture} \end{equation} and has projector $P_\theta = 0$. The rank of the gauge group is reduced by 16 and only the Cartans coming from the $T^3$ compactification are present. We have that \begin{equation} \theta(A')-A' = (0^2,1,-1,0^4)\times (0^4,1,-1,0^2)\,, \end{equation} which is the weight associated to the $\mathbb{Z}_5$ quotient. The moduli space has dimension 9 and is locally of the form \begin{equation} {\mathrm{SO}}(3,3,\mathbb{R})\big/({\mathrm{SO}}(3,\mathbb{R})\times {\mathrm{SO}}(3,\mathbb{R})), \end{equation} and the momentum lattice is just ${\mathrm{II}}_{3,3}$.
The story for the $\mathbb{Z}_6$-triple is basically the same, the only differences being that the Wilson line used is \begin{equation} A' = \frac{1}{12}(1,-5,7,5,3,1,-1,-3)\times(3,1,-1,-3,-5,7,5,-1), \end{equation} the automorphism $\theta$ corresponds to the symmetry of the affine $({\mathrm{SU}}(2)\times{\mathrm{SU}}(3)\times {\mathrm{SU}}(6))^2$ diagram, \begin{equation} \begin{tikzpicture}[scale = 0.9] \begin{scope}[shift={(0,0)},scale=0.67] \draw(0,0)--(1,0); \draw(1,0)--(1.5,0.866); \draw(1.5,0.866)--(1,1.71); \draw(1,1.71)--(0,1.71); \draw(0,1.71)--(-0.5,0.866); \draw(-0.5,0.866)--(0,0); \draw[fill=white](0,0) circle (0.15); \draw[fill=white](1,0) circle (0.15); \draw[fill=white](1.5,0.866) circle (0.15); \draw[fill=white](1,1.71) circle (0.15); \draw[fill=white](0,1.71) circle (0.15); \draw[fill=white](-0.5,0.866) circle (0.15); \draw[blue,->,>=stealth](0,0-0.375)--(1,0-0.375); \draw[blue,->,>=stealth](1+0.375,0-0.1875)--(1.5+0.375,0.866-0.1875); \draw[blue,->,>=stealth](1.5+0.375,0.866+0.1875)--(1+0.375,1.71+0.1875); \draw[blue,->,>=stealth](1,1.71+0.375)--(0,1.71+0.375); \draw[blue,->,>=stealth](0-0.375,1.71+0.1875)--(-0.5-0.375,0.866+0.1875); \draw[blue,->,>=stealth](-0.5-0.375,0.866-0.1875)--(0-0.375,0-0.1875); \end{scope} \begin{scope}[shift={(2.5,0)}] \draw(0,0)--(.5,0); \draw(.5,0)--(.25,0.36); \draw(0,0)--(.25,0.36); \draw[blue,->,>=stealth](.05,0.5)--(-.25,0.1); \draw[blue,->,>=stealth](0,-0.25)--(.5,-0.25); \draw[blue,->,>=stealth](.75,0.1)--(.45,0.5); \draw[fill=white](0,0) circle (0.1); \draw[fill=white](.5,0) circle (0.1); \draw[fill=white](.25,0.36) circle (0.1); \end{scope} \begin{scope}[shift={(4.5,0)}] \draw[fill=white](0,0) circle (0.1); \draw[fill=white](0,0.5) circle (0.1); \draw[blue,<->,>=stealth](-0.3,0)--(-0.3,0.5); \draw(-0.1,0)--(-0.1,0.5); \draw(0.1,0)--(0.1,0.5); \end{scope} \begin{scope}[shift={(5.5,0)}] \draw[fill=white](0,0) circle (0.1); \draw[fill=white](0,0.5) circle (0.1); \draw[blue,<->,>=stealth](--0.3,0)--(--0.3,0.5); \draw(--0.1,0)--(--0.1,0.5); \draw(-0.1,0)--(-0.1,0.5); \end{scope} \begin{scope}[shift={(7.5,0)}] \draw(0,0)--(-.5,0); \draw(-.5,0)--(-.25,0.36); \draw(0,0)--(-.25,0.36); \draw[blue,->,>=stealth](-.05,0.5)--(--.25,0.1); \draw[blue,->,>=stealth](-0,-0.25)--(-.5,-0.25); \draw[blue,->,>=stealth](-.75,0.1)--(-.45,0.5); \draw[fill=white](0,0) circle (0.1); \draw[fill=white](-.5,0) circle (0.1); \draw[fill=white](-.25,0.36) circle (0.1); \end{scope} \begin{scope}[shift={(10,0)},scale=0.67] \draw(-0,0)--(-1,0); \draw(-1,0)--(-1.5,0.866); \draw(-1.5,0.866)--(-1,1.71); \draw(-1,1.71)--(-0,1.71); \draw(-0,1.71)--(--0.5,0.866); \draw(--0.5,0.866)--(-0,0); \draw[fill=white](-0,0) circle (0.15); \draw[fill=white](-1,0) circle (0.15); \draw[fill=white](-1.5,0.866) circle (0.15); \draw[fill=white](-1,1.71) circle (0.15); \draw[fill=white](-0,1.71) circle (0.15); \draw[fill=white](--0.5,0.866) circle (0.15); \draw[blue,->,>=stealth](-0,0-0.375)--(-1,0-0.375); \draw[blue,->,>=stealth](-1-0.375,0-0.1875)--(-1.5-0.375,0.866-0.1875); \draw[blue,->,>=stealth](-1.5-0.375,0.866+0.1875)--(-1-0.375,1.71+0.1875); \draw[blue,->,>=stealth](-1,1.71+0.375)--(-0,1.71+0.375); \draw[blue,->,>=stealth](-0--0.375,1.71+0.1875)--(--0.5--0.375,0.866+0.1875); \draw[blue,->,>=stealth](--0.5--0.375,0.866-0.1875)--(-0--0.375,0-0.1875); \end{scope} \end{tikzpicture} \end{equation} and \begin{equation} \theta(A')-A' = (0,1,-1,0^5)\times (0^5,1,-1,0)\,.
\end{equation} As in the previous case there are no Wilson line degrees of freedom, and the local and global data for the moduli space are the same. One should note however that the groups which are realized at level 5 in the $\mathbb{Z}_5$-triple are realized in this case at level 6. In fact, this information is not contained in the momentum lattice itself. \section{7d Heterotic String and Momentum Lattices} \label{s:lattices} Here we explain the basic machinery of how gauge symmetry groups can be obtained from the momentum lattices corresponding to certain 7d heterotic string compactifications with 16 supercharges. These include the Narain lattice for $T^3$ compactifications, the Mikhailov lattice for the 7d CHL string, and the four extra momentum lattices for sectors with further rank reduction obtained in \cite{deBoer:2001wca}. \subsection{The Narain construction} It was shown in \cite{Narain:1985jj} that the perturbative spectrum of the heterotic string on $T^d$ can be put in correspondence with an even self-dual Lorentzian lattice ${\mathrm{II}}_{16+d,d}$ of signature $(+^{16+d},-^{d})$. This lattice is spanned by vectors $(P,p_L;p_R)$, where $P$ is the left gauge lattice momentum and $p_{L,R}$ are the left and right internal space momenta. The only massless states in the spectrum have $p_R = 0$, and those which realize the adjoint representation of the gauge algebra $\mathfrak g$ also have $P^2 + p_L^2 = 2$. They correspond therefore to a set of length $\sqrt{2}$ vectors in ${\mathrm{II}}_{16+d,d}$ spanning a positive definite sublattice $L$, which is just the root lattice of $\mathfrak g$. The question of what gauge algebras can be realized in the theory is then equivalent to the question of what root lattices $L$ can be embedded in the Narain lattice. Note that this embedding has to be such that the intersection of the real span of $L$ with ${\mathrm{II}}_{16+d,d}$ does not contain a larger root lattice $L'$, since this would leave out extra states that do form part of the massless spectrum. We can be more precise about the relation between gauge symmetries and lattice embeddings and in this way gain more information. As discussed in \cite{Font:2021uyw}, relaxing the condition $P^2 + p_L^2 = 2$ while keeping $p_R = 0$ defines an overlattice $M \supseteq L$ corresponding to the weight lattice of the global gauge group $G$. In this case, $M$ is such that the intersection of its real span with ${\mathrm{II}}_{16+d,d}$ is $M$ itself, i.e. it is \textit{primitively} embedded in ${\mathrm{II}}_{16+d,d}$ (see Appendix \ref{app:prim} for an extended discussion regarding these embeddings). The full statement regarding the possibility of some gauge group $G$ being realized in the heterotic string on $T^d$ is as follows: \\ \\ \fbox{\parbox{\textwidth}{ \begin{prop} Let $G = \tilde G/H$ be some semisimple group of rank $r \leq 16+d$, where $\tilde G$ and $H$ are respectively the universal cover and the fundamental group. $G \times U(1)^{16+d-r}$ is realized in the heterotic string on $T^d$ as a gauge symmetry group if and only if its weight lattice $M$ admits a primitive embedding in the Narain lattice ${\mathrm{II}}_{16+d,d}$ such that the vectors in $M$ of length $\sqrt2$ are roots. \end{prop}\label{propNar} }} \\ \\ At the end of the day, the classification of the possible gauge groups that can be obtained in the heterotic string on $T^d$ turns out to be a (conceptually) simple problem of lattice embeddings.
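To make the lattice-embedding perspective concrete, the following minimal Python sketch (our own illustration, not the code used for the classification below; the function name and the brute-force coefficient bound are our choices) enumerates the length-$\sqrt 2$ vectors of a lattice directly from its Gram matrix. For the ${\mathrm{D}}_4$ root lattice it recovers the 24 roots of $\mathfrak{so}(8)$.
\begin{verbatim}
# Minimal illustration: the roots of a gauge algebra are the norm-2
# vectors of its root lattice L. We enumerate them by brute force over
# bounded integer coordinates in a basis with Gram matrix G.
import itertools
import numpy as np

def norm2_vectors(gram, bound=3):
    """All lattice vectors v (in basis coordinates) with v.G.v == 2."""
    G = np.asarray(gram)
    rank = G.shape[0]
    roots = []
    for c in itertools.product(range(-bound, bound + 1), repeat=rank):
        v = np.array(c)
        if v @ G @ v == 2:
            roots.append(v)
    return roots

# Gram matrix of the D4 root lattice in a simple-root basis
# (equal to its Cartan matrix, since D4 is simply laced).
D4 = [[ 2, -1,  0,  0],
      [-1,  2, -1, -1],
      [ 0, -1,  2,  0],
      [ 0, -1,  0,  2]]

print(len(norm2_vectors(D4)))  # 24, the number of roots of so(8)
\end{verbatim}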
\subsubsection{Exploration algorithm} \label{sss:alg} There exist many useful theorems and techniques, mainly due to Nikulin \cite{MR525944,Taylor:2011wt}, for determining if some lattice $M$ admits a primitive embedding in another lattice $\Lambda$. However, even if they can yield insight into the structure of the theory, they do not by themselves give an efficient method for obtaining a thorough classification of the allowed gauge groups. We are therefore led to develop more constructive methods which can be easily turned into computer algorithms. Such an algorithm was presented in \cite{Font:2020rsk}, and it works roughly as follows (see \cite{Font:2021uyw} for a detailed explanation): \begin{enumerate} \item Take a point in the moduli space of $T^d$ compactifications with maximally enhanced gauge group, say ${\mathrm{Spin}}(32 + 2d)$, such that the embedding of the root lattice $L \hookrightarrow {\mathrm{II}}_{16+d,d}$ is explicitly known. \item Break the gauge group by removing a simple root. This relaxes a constraint on the moduli such that the rank-reduced semisimple part of the gauge group generically corresponds to a $d$-dimensional subvariety $V$ of the moduli space. \item Enhance the group with a different simple root than the one previously removed. This generically selects a different point of maximal enhancement contained in $V$. \end{enumerate} This procedure is repeated for different choices of breakings-enhancements for a given starting point, and then repeated again starting from the newly found gauge groups, until it no longer yields new results. It is natural to assume that all points of maximal enhancement in moduli space can be reached in this way, and this is in fact true for the cases $d = 1, 2$ \cite{Font:2020rsk}. Non-maximal enhancements can be obtained from the maximal ones by simply removing an arbitrary number of roots. Remarkably, for $d = 1,2$ there is in each case only one gauge group which cannot be obtained in this way, namely ${\mathrm{Spin}}(16)^2/\mathbb{Z}_2$ for $d = 1$ and ${\mathrm{Spin}}(8)^4/\mathbb{Z}_2^2$ for $d = 2$. This pattern suggests that for $d = 3$ this will be the case for ${\mathrm{Spin}}(4)^8/\mathbb{Z}_2^4 \simeq {\mathrm{SU}}(2)^{16}/\mathbb{Z}_2^4$, but our results indicate that this group is not actually realized in the theory, and so the likelihood of an analog of the two aforementioned groups in $d = 3$ is very low. In this paper we are interested in the possible gauge groups that can be realized in the heterotic string on $T^3$. Using the exploration algorithm just described, we have collected a set of points of maximal enhancement characterized by their root lattices $L$, i.e. their gauge algebras $\mathfrak g$. For each point we compute the weight lattice $M$ and from it the generators of the fundamental group $H$, using the methods described in \cite{Font:2021uyw}. The results are presented in Section \ref{s:results}. \subsection{The CHL string and Mikhailov lattice} Now we wish to extend the discussion of the previous subsection to the CHL string on $T^d$, which can be realized as an asymmetric orbifold of the heterotic string on $T^d$. The analog of the Narain lattice for this theory was constructed by Mikhailov in \cite{Mikhailov:1998si} and can be written as \begin{equation} {\mathrm{II}}_{(d)} = {\mathrm{II}}_{d-1,d-1}(2)\oplus {\mathrm{II}}_{1,1} \oplus {\mathrm{E}}_8\,, \end{equation} where the $(2)$ indicates that ${\mathrm{II}}_{d-1,d-1}$ is scaled by a factor of $\sqrt{2}$.
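For concreteness, the Gram matrix of this lattice can be assembled as a block sum; the following short Python sketch (an illustration under our conventions, not the authors' code) does this for $d = 3$ and checks the rank, signature and determinant.
\begin{verbatim}
# Gram matrix of the d = 3 Mikhailov lattice II_{2,2}(2) + II_{1,1} + E8.
import numpy as np
from scipy.linalg import block_diag

U = np.array([[0, 1], [1, 0]])   # hyperbolic plane II_{1,1}
E8 = np.array([                  # Cartan matrix of E8 (det = 1)
    [ 2,  0, -1,  0,  0,  0,  0,  0],
    [ 0,  2,  0, -1,  0,  0,  0,  0],
    [-1,  0,  2, -1,  0,  0,  0,  0],
    [ 0, -1, -1,  2, -1,  0,  0,  0],
    [ 0,  0,  0, -1,  2, -1,  0,  0],
    [ 0,  0,  0,  0, -1,  2, -1,  0],
    [ 0,  0,  0,  0,  0, -1,  2, -1],
    [ 0,  0,  0,  0,  0,  0, -1,  2]])

# II_{2,2}(2) is two hyperbolic planes with all products scaled by 2.
gram = block_diag(2 * U, 2 * U, U, E8)

eig = np.linalg.eigvalsh(gram)
print(gram.shape[0], (eig > 0).sum(), (eig < 0).sum())  # 14, 11, 3
print(round(np.linalg.det(gram)))  # -16: discriminant group of order 16
\end{verbatim}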
Depending on the dimension $d$, this lattice may be rewritten in different ways using lattice isomorphisms. For $d = 3$, we have \begin{equation} {\mathrm{II}}_{2,2}(2)\oplus {\mathrm{II}}_{1,1} \oplus {\mathrm{E}}_8 ~~\simeq~~ {\mathrm{II}}_{3,3} \oplus {\mathrm{D}}_4 \oplus {\mathrm{D}}_4 ~~\simeq~~ {\mathrm{II}}_{3,3} \oplus {\mathrm{F}}_4 \oplus {\mathrm{F}}_4 \,. \end{equation} Here we have used the root lattice isomorphism ${\mathrm{D}}_4 \simeq {\mathrm{F}}_4$ (the corresponding root \textit{systems} are of course not isomorphic, see Appendix \ref{app:iso}) to reflect the fact that the `canonical' point in the theory has gauge algebra $2{\mathrm{F}}_4$ and not $2 {\mathrm{D}}_4$, as shown in Section \ref{ss:z2trip}. The relation between lattice embeddings and realizability of gauge groups in the CHL string is more complicated than for the usual heterotic string on tori. In the latter, the roots of the gauge algebra correspond to the length $\sqrt2$ vectors in some positive definite lattice $\Lambda$ primitively embedded into ${\mathrm{II}}_{16+d,d}$. In the CHL string the mass formulas are such that it is also possible for some but not all vectors of length $2$ to give roots. In order for such a vector $v$ to correspond to a root, it must satisfy the condition that its inner product with all other vectors in the whole Mikhailov lattice is even \cite{Mikhailov:1998si}. In this case we say that $v$ is a level $2$ vector (not to be confused with the level of the Kac-Moody algebra for the gauge group). More generally, a vector $v$ in a lattice $\Lambda$ is said to be at embedding level $\ell$ if the product of $v$ with every vector in $\Lambda$ is divisible by $\ell$. On the other hand, the statement that the global structure of the gauge group is given by the primitively embedded weight overlattice $M$ does not generalize to the case where the momentum lattice is not self-dual and the gauge algebras are not of ADE type. The problem of obtaining this global data was studied in detail in \cite{Cvetic:2021sjm}. It was shown in particular that the fundamental group $\pi_1(G)$ of the gauge group $G$ is given by the quotient of the cocharacter lattice $M^\vee$ by the coroot lattice $L^\vee$, where the latter is embedded in the dual momentum lattice ${\mathrm{II}}_{(d)}^*$ and the former is the corresponding overlattice which is primitively embedded in ${\mathrm{II}}_{(d)}^*$. One strategy to obtain all the possible gauge groups in the theory is to apply the exploration algorithm described above to the dual lattice ${\mathrm{II}}_{(d)}^*$ (which usually has to be rescaled to be made even) and compute the lattices $L$ and $M$ the same way as for the Narain lattice, but dualizing the algebra $\mathfrak g \to \mathfrak g^\vee$ at the end. It can be shown that the embedding level conditions for vectors to be roots are the same as for the original lattice ${\mathrm{II}}_{(d)}$. This corresponds to the method employed in \cite{Font:2021uyw} to obtain the list of gauge groups for the CHL string in 8d. Having dealt with this subtlety, a statement generalizing Proposition \ref{propNar} for the usual heterotic string to the CHL string on $T^d$ can be made as follows: \\ \\ \fbox{\parbox{\textwidth}{ \begin{prop} Let $G = \tilde G/H$ be some semisimple group of rank $r \leq d+8$, where $\tilde G$ and $H$ are respectively the universal cover and the fundamental group.
$G \times U(1)^{d+8-r}$ is realized in the CHL string on $T^d$ as a gauge symmetry group if and only if the weight lattice $M^\vee$ of the dual group $G^\vee$ admits a primitive embedding in the dual Mikhailov lattice ${\mathrm{II}}_{(d)}^*(2)$ such that the vectors in $M^\vee$ of length $\sqrt{2 \ell}$ at embedding level $\ell = 1,2$ in ${\mathrm{II}}_{(d)}^*(2)$ belong to $L^\vee$. \end{prop}\label{propCHL} }} \\ \\ We see that the embedding level $\ell$ plays an important role in the theory, allowing one to treat the problem of finding the possible gauge groups without reference to the string theory itself, as in the case of the original heterotic string. Finally, let us recall that the simple factors in $G$ have associated Kac-Moody algebras at level $\mathfrak m = 1,2$, where $2/\mathfrak m$ is the squared length of the corresponding longest root. For $d = 2$ there are only ADE groups at level 2 and symplectic groups at level 1 (including ${\mathrm{Sp}}(1) = {\mathrm{SU}}(2)$). This moduli space was exhaustively explored in \cite{Font:2021uyw} using an extension of the algorithm discussed in Section \ref{sss:alg}. For $d = 3$ there are more interesting possibilities including ${\mathrm{B}}_3$ and ${\mathrm{F}}_4$ at level 1. \subsection{Momentum lattices from Triples} Let us now turn to the $\mathbb{Z}_m$-triples reviewed in Section \ref{ss:triples}. The respective momentum lattices are given in Table \ref{tab:lattices}, where we also show the rank reduction of the corresponding gauge groups. Here again we have chosen to write the lattices in terms of the gauge groups at the canonical points, using the lattice isomorphisms ${\mathrm{D}}_4 \simeq {\mathrm{F}}_4$ and ${\mathrm{A}}_2 \simeq {\mathrm{G}}_2$. We also record the frozen singularity for each lattice $\Lambda_m$, which in this context corresponds to the orthogonal complement of the embedding $\Lambda_m \hookrightarrow {\mathrm{II}}_{19,3}$. This point is discussed in more detail in the next section. \begin{table}[htb] \begin{center} \begin{tabular}{|c|c|c|c|}\hline $m$ & $\Lambda_m$ & Frozen Singularity &$r_-$\\ \hline 1&${\mathrm{II}}_{3,3}\oplus {\mathrm{E}}_8 \oplus {\mathrm{E}}_8$&$\emptyset$&0\\ \hline 2&${\mathrm{II}}_{3,3} \oplus {\mathrm{F}}_4 \oplus {\mathrm{F}}_4$&${\mathrm{D}}_4\oplus {\mathrm{D}}_4$&8\\ \hline 3&${\mathrm{II}}_{3,3}\oplus {\mathrm{G}}_2 \oplus {\mathrm{G}}_2$&${\mathrm{E}}_6 \oplus {\mathrm{E}}_6$&12\\ \hline 4&${\mathrm{II}}_{3,3}\oplus {\mathrm{A}}_1 \oplus {\mathrm{A}}_1$&${\mathrm{E}}_7 \oplus {\mathrm{E}}_7$&14\\ \hline 5&${\mathrm{II}}_{3,3}$&${\mathrm{E}}_8\oplus {\mathrm{E}}_8$&16\\\hline 6&${\mathrm{II}}_{3,3}$&${\mathrm{E}}_8 \oplus {\mathrm{E}}_8$&16 \\ \hline \end{tabular} \caption{Momentum lattices $\Lambda_m$ for the moduli spaces of heterotic $\mathbb{Z}_m$-triples. For $m = 1$ the gauge group rank is 19; this case is just the Narain component. The case $m = 2$ is dual to but not the same as the CHL component \cite{deBoer:2001wca}. The frozen singularities correspond to the orthogonal complements of $\Lambda_m \hookrightarrow {\mathrm{II}}_{19,3}$.} \label{tab:lattices} \end{center} \end{table} It is natural to ask whether we can extend Propositions \ref{propNar} and \ref{propCHL} to these lattices. An obvious ansatz is the following: \\ \\ \fbox{\parbox{\textwidth}{ \begin{prop} Let $G = \tilde G/H$ be some semisimple group of rank $r \leq r_m$, where $\tilde G$ and $H$ are respectively the universal cover and the fundamental group, and $r_m = 19, 11, 7, 5, 3, 3$ respectively for $m = 1,...,6$.
$G \times U(1)^{19-r_m}$ is realized in the $\mathbb{Z}_m$-triple as a gauge symmetry group if and only if the weight lattice $M^\vee$ of the dual group $G^\vee$ admits a primitive embedding in the dual momentum lattice $\Lambda_m^*(m)$ such that the vectors in $M^\vee$ of length $\sqrt{2\ell}$ at embedding level $\ell = 1,m$ in $\Lambda_m^*(m)$ belong to $L^\vee$. Simple factors are realized at level $\mathfrak m = 2m/\alpha_\text{long}^2$, where $\alpha_\text{long}$ is a long root in $L \hookrightarrow \Lambda_m$. \end{prop}\label{propTrip} }} \\ \\ The key ingredient is that the vectors of length $\sqrt{2m}$ at embedding level $m$ correspond to massless states and give e.g. long roots for non-ADE gauge groups. This can in fact be explicitly proved in the particular construction used in \cite{deBoer:2001wca} to obtain the momentum lattices. This roughly corresponds to the fact that in this construction there is a rescaling by a factor of $\sqrt{m}$ involved, such that the product of long roots, coming from invariant states in the parent theory of the orbifold, with all other vectors is scaled by a factor of $m$. We will however confirm this for the general case by showing in Section \ref{s:frozen} that assuming this ansatz one can reproduce the mechanism of singularity freezing in the dual M-theory on K3 from the heterotic side. We note however that there is a subtlety in the case $m = 4$ that makes some gauge groups not conform to Proposition \ref{propTrip}. The lattice $\Lambda_4 = {\mathrm{II}}_{3,3} \oplus {\mathrm{A}}_1 \oplus {\mathrm{A}}_1$ has dual lattice $\Lambda_4^*(4) = {\mathrm{II}}_{3,3}(4)\oplus {\mathrm{A}}_1 \oplus {\mathrm{A}}_1$. The ``canonical'' maximal enhancement in the theory has root lattice $L = 5 {\mathrm{A}}_1$, so that the coroot lattice $L^\vee$ found in $\Lambda_4^*(4)$ should be $L^\vee = 5 {\mathrm{A}}_1 (4)$. The canonical point in this lattice, however, has lattice $L^\vee = 3 {\mathrm{A}}_1 (4) \oplus 2{\mathrm{A}}_1$, which does not match the correct coroot lattice. Indeed, the fundamentally correct approach is not to naively explore the dual lattices in the same way as the original ones. One should instead explicitly look for the coroot lattices of those root lattices obtained from the original lattice and take every other vector in its overlattice as part of the cocharacter lattice. We note however that both procedures give the exact same results except in the $m = 4$ case where these two ${\mathrm{A}}_1$'s are involved. An extension of the exploration algorithm used for the CHL string to these lattices is straightforward and produces the results presented in Section \ref{ss:restrip}. In Section \ref{ss:proj} we will see that these can be reproduced by applying an appropriate projection map to the Narain sector. \section{Frozen singularities from the heterotic side} \label{s:frozen} It was already noted by Mikhailov in \cite{Mikhailov:1998si} that the momentum lattice for the CHL string is primitively embedded in the Narain lattice such that its orthogonal complement corresponds to the frozen singularity on the dual F/M-theories on K3 (for $d = 2,3$, respectively). This observation was extended in \cite{deBoer:2001wca} to the $\mathbb{Z}_m$-triples in 7d. Here we make use of it together with Proposition \ref{propTrip} to determine precisely how the ADE singularities are partially frozen (usually to give non-ADE algebras) and recover the known ``freezing rules'' on the K3 side.
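Since the embedding level is the computational workhorse of Propositions \ref{propCHL} and \ref{propTrip}, we record a minimal Python sketch of its computation (our own illustration; the toy lattice and function name are our choices): the level of $v$ is the gcd of its products with a basis of the lattice.
\begin{verbatim}
# Embedding level of a lattice vector v: the largest l such that <v, w>
# is divisible by l for all w, i.e. the gcd of v's products with a basis.
from math import gcd
import numpy as np
from scipy.linalg import block_diag

def embedding_level(v, gram):
    prods = np.asarray(gram) @ np.asarray(v)
    return gcd(*(abs(int(p)) for p in prods))

# Toy lattice II_{1,1} + A1.
G = block_diag([[0, 1], [1, 0]], [[2]]).astype(int)

print(embedding_level([1, 0, 0], G))  # 1: a level-1 null vector
print(embedding_level([0, 0, 1], G))  # 2: the A1 root has norm 2 and
                                      # level 2, i.e. it is of the kind
                                      # counted as a (long) root above
\end{verbatim}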
\subsection{Freezing rules in 8d} Let us first demonstrate the general method of obtaining the freezing rules in the $d = 2$ case, which map gauge groups in the Narain component to the CHL component of the moduli space. We start by considering an embedding of the Mikhailov lattice ${\mathrm{II}}_{2,2} \oplus {\mathrm{D}}_8 \simeq {\mathrm{II}}_{2,2} \oplus {\mathrm{C}}_8$ into the Narain lattice ${\mathrm{II}}_{18,2}$. This is done in practice by taking the orthogonal complement of any primitively embedded ${\mathrm{D}}_8$ lattice in ${\mathrm{II}}_{18,2}$, which is unique modulo automorphisms of the latter. We then consider in turn an embedding of a ${\mathrm{C}}_n$ root lattice in the Mikhailov sublattice (cf. Proposition \ref{propCHL}), which will therefore be also embedded in the Narain lattice, \begin{equation} {\mathrm{C}}_n \hookrightarrow {\mathrm{II}}_{2,2} \oplus {\mathrm{C}}_8 \hookrightarrow {\mathrm{II}}_{18,2}, ~~~~~ n \leq 10 \,. \end{equation} This will correspond to an embedding ${\mathrm{C}}_n \oplus {\mathrm{D}}_8 \hookrightarrow {\mathrm{II}}_{18,2}$ which will however neither be primitive, nor conform to the rules of Proposition \ref{propNar} due to the long roots. It does however define an $(n+8)$-plane in the ambient space of ${\mathrm{II}}_{18,2}$ which in turn defines some primitively embedded weight lattice $M$. One may choose to focus only on the root sublattice $L \subseteq M$, which is enough to make the comparison with singularity freezing in F-theory. In this case we find that to the naive embedding of ${\mathrm{C}}_n \oplus {\mathrm{D}}_8$ inherited from the Mikhailov lattice and the frozen singularity there corresponds an actual embedding \begin{equation} {\mathrm{D}}_{n+8} \hookrightarrow {\mathrm{II}}_{18,2}, \end{equation} which may require extra weights (but not roots) to be made primitive. With this we recover the freezing rule for F-theory on K3 in reverse. Indeed, applying these rules to all the possible gauge algebras in the Narain component gives those in the CHL component \cite{Font:2021uyw,Hamada:2021bbz}. \subsection{Freezing rules in 7d} In 7d there are more possibilities for freezing singularities, each one defining a different momentum lattice as shown in Table \ref{tab:lattices}. The process outlined above can be repeated in this case and we obtain the following patterns \begin{equation}\label{frules} \begin{split} m = 2: ~~~&~~~{\mathrm{C}}_p + {\mathrm{C}}_q \to {\mathrm{D}}_{p+4} + {\mathrm{D}}_{q+4}\,,~~~ p,q \geq 0\,,\\ ~~~&~~~{\mathrm{C}}_p + {\mathrm{F}}_q \to {\mathrm{D}}_{p+4} + {\mathrm{E}}_{q+4}\,,~~~ p \geq 0,~q = 2,3,4\,,\\ ~~~&~~~{\mathrm{F}}_p + {\mathrm{F}}_q \to {\mathrm{E}}_{p+4} + {\mathrm{E}}_{q+4}\,,~~~ p,q = 2,3,4\,,\\ m = 3: ~~~&~~~{\mathrm{G}}_p + {\mathrm{G}}_q \to {\mathrm{E}}_{6+p} + {\mathrm{E}}_{6+q}\,,~~~ p,q = 0,1,2\,,\\ m = 4: ~~~&~~~{\mathrm{A}}_p + {\mathrm{A}}_q \to {\mathrm{E}}_{7+p} + {\mathrm{E}}_{7+q}\,, ~~~ p,q = 0,1\,,\\ m = 5,6:~~~&~~~\emptyset \to {\mathrm{E}}_8 + {\mathrm{E}}_8\,, \end{split} \end{equation} where we have defined \begin{equation}\label{newname} {\mathrm{C}}_1 \equiv {\mathrm{A}}_1\,, ~ {\mathrm{F}}_2 \equiv {\mathrm{A}}_2\,, ~ {\mathrm{F}}_3 \equiv {\mathrm{B}}_3\,, ~ {\mathrm{G}}_1 \equiv {\mathrm{A}}_1\,, \end{equation} with the RHS algebras always at level 1. We note that for $m = 4$, the ${\mathrm{A}}_1$'s are not uniquely embedded in the momentum lattice ${\mathrm{II}}_{3,3}\oplus {\mathrm{A}}_1 \oplus {\mathrm{A}}_1$.
The ones appearing in the above formulas correspond to those explicitly shown in this lattice, while those embedded in ${\mathrm{II}}_{3,3}$ remain unaffected upon unfreezing. However, all of them are realized at level 4. The converse rules agree perfectly with the freezing mechanism in M-theory on K3 \cite{deBoer:2001wca,Tachikawa:2015wka}. When applied to the enhancements found in the Narain moduli space one reproduces the results, at the level of the algebras, obtained with the exploration algorithm applied to the remaining momentum lattices, as expected. \subsection{Full projection map} \label{ss:proj} It was shown in \cite{Cvetic:2021sjm} that the list of gauge groups found in the heterotic string on $T^2$, together with their fundamental groups, can be projected to that of the 8d CHL string, generalizing the freezing rules for the algebras discussed above. Namely, consider a gauge group, obtained from the Narain lattice, of the form \begin{equation} G = \tilde G/H = G_1 \times \cdots \times G_s \times {\mathrm{Spin}}(2n+16)/H\,, \end{equation} where $H$ is generated by an element $k = (k_1,...,k_s, \hat k)$ of the center $Z(\tilde G)$. The corresponding group in the CHL string will be of the form \begin{equation} G' = G_1 \times \cdots \times G_s \times {\mathrm{Sp}}(n)/H'\,, \end{equation} with $H'$ generated by the element $k' = (k_1,...,k_s,\hat k')$ of the center $Z(\tilde G')$. As can be expected, only the contribution of the partially frozen factor will change. Indeed the center of ${\mathrm{Spin}}(2n+16)$ and that of ${\mathrm{Sp}}(n)$ are different. For $n$ odd, we have $\hat k \in \mathbb{Z}_4$ and $\hat k' \in \mathbb{Z}_2$, and the projection reads \begin{equation} \hat k \to \hat k' = \hat k \mod 2 ~~~~~ \left(\{0,1,2,3\} \to \{0,1,0,1\}\right)\,, ~~~n = \text{odd}\,. \end{equation} For $n$ even, we have $\hat k \equiv (\hat k^{(1)},\hat k^{(2)}) \in \mathbb{Z}_2 \times \mathbb{Z}_2$ and again $\hat k' \in \mathbb{Z}_2$, and the projection reads \begin{equation} \hat k \to \hat k' = \hat k^{(1)}+\hat k^{(2)} \mod 2 ~~~~~ \left(\{0,\mathrm{s},\mathrm{c},\mathrm{v}\} \to \{0,1,1,0\}\right)\,, ~~~n = \text{even}\,, \end{equation} where $\{0,\mathrm{s},\mathrm{c},\mathrm{v}\} \equiv \{(0,0),(1,0),(0,1),(1,1)\}$. As a simple example, the gauge group ${\mathrm{Spin}}(32)/\mathbb{Z}_2$ is mapped to ${\mathrm{Sp}}(8)/\mathbb{Z}_2$ \cite{Cvetic:2021sjm,Font:2021uyw}, since the quotient of the former corresponds to a spinor class in the center. In the case that $n = 0$, we lose a simple factor and $(k_1,...,k_s,\hat k)$ goes to $(k_1,...,k_s)$. This map can be directly generalized to all the different sectors in the moduli space of 7d theories treated here. Similarly, only the contributions to the fundamental group coming from the partially frozen factors change. In the 7d CHL string the rules for going from ${\mathrm{D}}_{n+4}$ to ${\mathrm{C}}_{n}$ are equivalent to those for going from ${\mathrm{D}}_{n+8}$ to ${\mathrm{C}}_{n}$ described above. For example, we find that $({\mathrm{Spin}}(24)/\mathbb{Z}_2)\times {\mathrm{Spin}}(14)$ maps to $({\mathrm{Sp}}(8)/\mathbb{Z}_2) \times {\mathrm{Sp}}(2)$. For the freezing ${\mathrm{E}}_{4+n} \to {\mathrm{F}}_n$ (cf. \eqref{frules}), the center of the gauge group is unaltered and so is the corresponding contribution to the fundamental group, i.e. $\hat k \to \hat k' = \hat k$. This is also true for the freezing ${\mathrm{E}}_{6+n} \to {\mathrm{G}}_n$ in the $m = 3$ case. 
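The projection of the center contributions can be phrased as a small function. The following Python sketch (a hypothetical helper of our own, simply encoding the rules just stated) reproduces, for example, the map ${\mathrm{Spin}}(32)/\mathbb{Z}_2 \to {\mathrm{Sp}}(8)/\mathbb{Z}_2$.
\begin{verbatim}
# Freezing Spin(2n+16) -> Sp(n): project the center contribution k_hat.
def project_spin_factor(n, k_hat):
    if n % 2 == 1:           # n odd: Z4 -> Z2, i.e. {0,1,2,3} -> {0,1,0,1}
        return k_hat % 2
    k1, k2 = k_hat           # n even: Z2 x Z2 -> Z2, with
    return (k1 + k2) % 2     # {0,s,c,v} = {(0,0),(1,0),(0,1),(1,1)}

# Spin(32)/Z2 -> Sp(8)/Z2: 2n+16 = 32 gives n = 8 (even); the quotient
# is by a spinor class s = (1,0), which maps to 1, so a Z2 survives.
print(project_spin_factor(8, (1, 0)))  # 1
print(project_spin_factor(3, 3))       # 1: an odd-n example, 3 mod 2
\end{verbatim}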
For $m = 4$, however, the only partial freezing ${\mathrm{E}}_8 \to {\mathrm{A}}_1$ does present a change in the center of the gauge group. Our results indicate that the contribution to the fundamental group changes from $\hat k = 0$ to $\hat k' = 1$. In other words, ${\mathrm{E}}_8 \to {\mathrm{SU}}(2)/\mathbb{Z}_2 \simeq {\mathrm{SO}}(3)$. This can be seen from the fact that the coroot lattice of the rightmost ${\mathrm{A}}_1$'s in $\Lambda_4 = {\mathrm{II}}_{3,3} \oplus {\mathrm{A}}_1 \oplus {\mathrm{A}}_1$ embeds into $\Lambda_4^* = {\mathrm{II}}_{3,3} \oplus {\mathrm{A}}_1^* \oplus {\mathrm{A}}_1^*$ such that its real span contains the fundamental weights of each ${\mathrm{A}}_1$. For $m = 5,6$, the rule ${\mathrm{E}}_8 \to \emptyset$ has no effect on the fundamental group other than shortening $(k_1,...,k_s,\hat k)$ to $(k_1,...,k_s)$. With these generalized freezing rules, one can project the enhancements in the Narain component of the moduli space to the other five components treated in this paper to reproduce the results found with our exploration algorithm. \section{Classification of gauge groups} \label{s:results} Now we present the main results of this work and expand on the methods used to obtain them. The full tables with maximal enhancements and their global data are given in Appendix \ref{app:tab}. Here we give tables with the counting of the different gauge symmetries which are realized in each sector. \subsection{Narain Component} \label{ss:resnar} Obtaining the gauge groups for the Narain component is done with a straightforward extension of the original exploration algorithm developed in \cite{Font:2020rsk}. Here we have however also computed the complete global data for each group, giving the explicit generators for the fundamental groups using the methods of \cite{Font:2021uyw} based on \cite{Cvetic:2021sjm}. We have for example the gauge group (\# 421 of Table \ref{tab:algebrasT3}) \begin{equation} \frac{{\mathrm{SU}}(8) \times {\mathrm{SU}}(8) \times {\mathrm{Spin}}(10)}{\mathbb{Z}_8}, \end{equation} where the fundamental group $\mathbb{Z}_8$ is generated by the element $(1,1,3)$ of the center $\mathbb{Z}_8 \times \mathbb{Z}_8 \times \mathbb{Z}_4$ of the universal cover ${\mathrm{SU}}(8) \times {\mathrm{SU}}(8) \times {\mathrm{Spin}}(10)$. All the maximally enhanced groups in this sector are listed in Table \ref{tab:algebrasT3} in Appendix \ref{app:tab1}. The data includes the ADE type of the gauge group and the corresponding fundamental group. The generators of the fundamental group are listed in Table \ref{tab:groupsT3} in Appendix \ref{app:tab2}. For each generator we give a sequence of numbers representing the contribution from the center of each simple factor. In the example just given, the generator is 113. Note that the ordering of the sequence corresponds to the ordering of the listed ADE type. To properly read the sequence one must write expressions of the form ${\mathrm{A}}_3^2 {\mathrm{D}}_4^3$ as $({\mathrm{A}}_3,{\mathrm{A}}_3,{\mathrm{D}}_4,{\mathrm{D}}_4,{\mathrm{D}}_4)$, i.e., assigning each number in the sequence to each subsequent ADE factor. For ${\mathrm{D}}_{2n}$ factors the center has four elements, denoted v, c, s and 1, corresponding to the vector class, the two spinor classes and the identity, respectively. Note that in some cases the fundamental group has more than one generator.
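For instance, the following short Python sketch (an illustrative parser of ours, following the table conventions just described; for ${\mathrm{D}}_{2n}$ factors the symbols v, c, s simply pass through as characters) expands an ADE type and pairs it with a generator sequence.
\begin{verbatim}
# Decode a generator string against a listed ADE type.
import re

def expand_type(ade):
    """'A3^2 D4^3' -> ['A3', 'A3', 'D4', 'D4', 'D4']"""
    factors = []
    for tok in ade.split():
        m = re.fullmatch(r'([A-G]\d+)(?:\^(\d+))?', tok)
        factors += [m.group(1)] * int(m.group(2) or 1)
    return factors

def read_generator(ade, gen):
    """Pair each symbol of the generator with its simple factor."""
    return list(zip(expand_type(ade), gen))

# Example #421: SU(8) x SU(8) x Spin(10), i.e. A7^2 D5, generator 113.
print(read_generator('A7^2 D5', '113'))
# [('A7', '1'), ('A7', '1'), ('D5', '3')]
\end{verbatim}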
The total number of distinct gauge algebras and distinct gauge groups for different ranks of the semisimple part are listed in Table \ref{tab:numbersT3}. These have been obtained by deleting nodes in the Dynkin Diagrams of the maximally enhanced groups, and we assume that this gives all the possibilities, as discussed in Section \ref{sss:alg}.\\ {\centering\scriptsize \setlength{\tabcolsep}{2.5pt}% \begin{tabular} { | >{$}c<{$}| >{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$}| >{$}c<{$} >{$}c<{$}|} \hline \text{Rank} & 1 & \mathbb{Z}_2 & \mathbb{Z}_2{}^2 & \mathbb{Z}_3 & \mathbb{Z}_4 & \mathbb{Z}_2{}^3 & \mathbb{Z}_2{}^4 & \mathbb{Z}_5 & \mathbb{Z}_6 & \mathbb{Z}_3{}^2 & \mathbb{Z}_2 \mathbb{Z}_4 & \mathbb{Z}_7 & \mathbb{Z}_2 \mathbb{Z}_6 & \mathbb{Z}_4{}^2 & \mathbb{Z}_8 & \text{Algebras} & \text{Groups} \\ \hline 19 & 652 & 381 & 68 & 51 & 37 & 5 & 1 & 6 & 16 & 3 & 2 & 1 & 2 & 1 & 2 & 1035 & 1232 \\ \hline 18 & 852 & 492 & 89 & 52 & 35 & 9 & 1 & 4 & 10 & 3 & 6 & 1 & 1 & 1 & 1 & 1180 & 1557 \\ \hline 17 & 827 & 442 & 73 & 39 & 23 & 8 & 1 & 2 & 4 & 2 & 3 & \text{} & \text{} & \text{} & \text{} & 1024 & 1424 \\ \hline 16 & 694 & 334 & 47 & 25 & 12 & 4 & 1 & 1 & 1 & 1 & 1 & \text{} & \text{} & \text{} & \text{} & 793 & 1121 \\ \hline 15 & 528 & 217 & 24 & 12 & 4 & 2 & 1 & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & 567 & 788 \\ \hline 14 & 389 & 128 & 11 & 6 & 1 & 1 & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & 403 & 536 \\ \hline 13 & 272 & 66 & 3 & 2 & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & 276 & 343 \\ \hline 12 & 192 & 33 & 1 & 1 & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & 193 & 227 \\ \hline 11 & 128 & 14 & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & 128 & 142 \\ \hline 10 & 88 & 6 & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & 88 & 94 \\ \hline 9 & 57 & 2 & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & 57 & 59 \\ \hline 8 & 39 & 1 & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & 39 & 40 \\ \hline 7 & 24 & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & 24 & 24 \\ \hline 6 & 16 & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & 16 & 16 \\ \hline 5 & 9 & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & 9 & 9 \\ \hline 4 & 6 & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & 6 & 6 \\ \hline 3 & 3 & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & 3 & 3 \\ \hline 2 & 2 & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & 
\text{} & \text{} & 2 & 2 \\ \hline 1 & 1 & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & 1 & 1 \\ \hline \text{All} & 4779 & 2116 & 316 & 188 & 112 & 29 & 5 & 13 & 31 & 9 & 12 & 2 & 3 & 2 & 3 & 5844 & 7624 \\ \hline \end{tabular} \begin{table}[H] \caption{Number of algebras and groups of each rank with a certain fundamental group for the heterotic string on $T^3$.} \label{tab:numbersT3} \end{table} } We note that there are many cases in which two gauge groups have isomorphic fundamental groups with inequivalent inclusions in the center of the universal covering (meaning that they are not related by outer automorphisms of the group, as is the case e.g. for ${\mathrm{SO}}(2n)$ versus ${\mathrm{Spin}}(2n)/\mathbb{Z}_2$ for $n \neq 4$). These are not distinguished in Table \ref{tab:algebrasT3}, so that the numbering goes only up to 1163. The inequivalence is taken into account in Table \ref{tab:groupsT3} by putting primes on the corresponding numbering. \subsection{Triples} \label{ss:restrip} The results for the components of the moduli space with rank reduction are obtained by an extension of the exploration algorithm taking into account Proposition \ref{propTrip}. The gauge groups are recorded in Tables \ref{tab:algebrasZ2} to \ref{tab:algebrasZ5and6} in Appendix \ref{app:tab1}, while the generators for the fundamental groups are recorded in Tables \ref{tab:groupsZ2} and \ref{tab:groupsZ3} in Appendix \ref{app:tab2}. In the case of the $\mathbb{Z}_5$ and $\mathbb{Z}_6$-triples all of the gauge groups are simply connected and so no global data is required to specify them. The data is presented with the same conventions as for the Narain component, together with the notation defined in eq. \eqref{newname}. As explained in Section \ref{ss:proj}, all the gauge groups for the non-trivial $\mathbb{Z}_m$ triples can be obtained from those of the Narain component using a projection map generalizing the one obtained in \cite{Cvetic:2021sjm} for the 8d CHL string. 
The total number of distinct gauge algebras and distinct gauge groups are listed in Table \ref{tab:numbersTriples}.\\ {\centering\scriptsize \setlength{\tabcolsep}{1.5pt}% \begin{tabular}[b] { | >{$}c<{$}| >{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$}|} \multicolumn{10}{c}{\small$\mathbb{Z}_2$ triple}\\ \hline \text{Rank} & 1 & \mathbb{Z}_2 & \mathbb{Z}_2{}^2 & \mathbb{Z}_3 & \mathbb{Z}_4 & \mathbb{Z}_2{}^3 & \mathbb{Z}_2{}^4 & \text{Algebras} & \text{Groups} \\ \hline 11 & 224 & 143 & 44 & 7 & 3 & 7 & 1 & 407 & 429 \\ \hline 10 & 307 & 192 & 51 & 5 & 3 & 8 & 1 & 473 & 567 \\ \hline 9 & 284 & 161 & 37 & 2 & 2 & 4 & 1 & 372 & 491 \\ \hline 8 & 214 & 101 & 18 & 1 & 1 & 2 & \text{} & 244 & 337 \\ \hline 7 & 137 & 45 & 5 & \text{} & \text{} & \text{} & \text{} & 143 & 187 \\ \hline 6 & 84 & 17 & 1 & \text{} & \text{} & \text{} & \text{} & 85 & 102 \\ \hline 5 & 46 & 4 & \text{} & \text{} & \text{} & \text{} & \text{} & 46 & 50 \\ \hline 4 & 26 & 1 & \text{} & \text{} & \text{} & \text{} & \text{} & 26 & 27 \\ \hline 3 & 12 & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & 12 & 12 \\ \hline 2 & 6 & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & 6 & 6 \\ \hline 1 & 2 & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & 2 & 2 \\ \hline \text{All} & 1342 & 664 & 156 & 15 & 9 & 21 & 3 & 1816 & 2210 \\ \hline \end{tabular} \begin{tabular}[b] { | >{$}c<{$}| >{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$}|} \multicolumn{6}{c}{\small$\mathbb{Z}_3$ triple}\\ \hline \text{Rank} & 1 & \mathbb{Z}_2 & \mathbb{Z}_3 & \text{Algebras} & \text{Groups} \\ \hline 7 & 41 & 6 & 5 & 50 & 52 \\ \hline 6 & 37 & 5 & 4 & 41 & 46 \\ \hline 5 & 24 & 2 & 2 & 24 & 28 \\ \hline 4 & 15 & 1 & 1 & 15 & 17 \\ \hline 3 & 8 & \text{} & \text{} & 8 & 8 \\ \hline 2 & 5 & \text{} & \text{} & 5 & 5 \\ \hline 1 & 2 & \text{} & \text{} & 2 & 2 \\ \hline \text{All} & 132 & 14 & 12 & 145 & 158 \\ \hline \multicolumn{6}{c}{}\\ \multicolumn{6}{c}{}\\ \end{tabular} \begin{tabular}[b] { | >{$}c<{$}| >{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$} >{$}c<{$}|} \multicolumn{6}{c}{\small$\mathbb{Z}_4$ triple}\\ \hline \text{Rank} & 1 & \mathbb{Z}_2& \mathbb{Z}_2^2 & \text{Algebras} & \text{Groups} \\ \hline 5 & 5 & 10 & 3 & 9 & 18 \\ \hline 4 & 5 & 7 & 2 & 6 & 14 \\ \hline 3 & 3 & 4 & 1 & 3 & 8 \\ \hline 2 & 2 & 2 & 1 & 2 & 5 \\ \hline 1 & 1 & 1 & \text{} & 1 & 2 \\ \text{All} & 16 & 24 & 7 & 21 & 47 \\ \hline \multicolumn{6}{c}{\small$\mathbb{Z}_5$ and $\mathbb{Z}_6$ triples}\\ \hline \text{Rank} && 1 && \text{Algebras} & \text{Groups} \\ \hline 3 && 3 && 3 & 3 \\ \hline 2 && 2 && 2 & 2 \\ \hline 1 && 1 && 1 & 1 \\ \hline \text{All} && 6 && 6 & 6 \\ \hline \end{tabular} \begin{table}[H] \caption{Number of algebras and groups of each rank with a certain fundamental group for the heterotic $\mathbb{Z}_2$, $\mathbb{Z}_3$, $\mathbb{Z}_4$, $\mathbb{Z}_5$ and $\mathbb{Z}_6$ triples.} \label{tab:numbersTriples} \end{table} } \section{Conclusions} In this paper we have advanced the classification of possible gauge groups realized in the string landscape to three-dimensional internal manifolds when the number of supercharges is 16. This was done by finding embeddings of weight lattices into the momentum lattices constructed in \cite{deBoer:2001wca}, taking into account an extra constraint on the role of the lattice vectors as stated in Proposition \ref{propTrip}. We have however ignored other sectors with further rank reduction (see e.g. 
\cite{Aharony:2007du,Dabholkar:1996pc,Montero:2020icj}), which to our knowledge have so far no heterotic description. We have also studied the mechanism of partial singularity freezing found in M-theory on K3 from the heterotic side, using lattice embedding techniques. We have moreover constructed the more general freezing rules which indicate how the topology of the gauge group changes, generalizing the results of \cite{Cvetic:2021sjm} for the 8d CHL string. It is clear that this mechanism can be generalized in the heterotic string to compactifications to lower dimensions, e.g. on $T^4$. We plan on investigating this point further in a future work. Finally, we note that our results may serve to test Swampland conjectures \cite{Vafa:2005ui}, which are also easier to study in high-dimensional theories with high supersymmetry (see e.g. \cite{Montero:2020icj,Hamada:2021bbz,Cvetic:2020kuw}). More generally it would be interesting to determine with more confidence whether or not there are more components in the moduli space of heterotic strings with 16 supercharges using the techniques of asymmetric orbifolds. Recent results \cite{Montero:2020icj,Hamada:2021bbz} indicate that this is not the case in 9d and 8d, but the 7d case remains open. \subsection*{Acknowledgements} We are grateful to Ruben Minasian and Miguel Montero for interesting and stimulating conversations, and Mariana Gra\~na for valuable guidance and supervision in the development of this project. We also thank Anamaría Font and Carmen Nuñez for their collaboration in initiating and developing this research direction in previous works. This work was partially supported by the ERC Consolidator Grant 772408-Stringlandscape, PIP-CONICET-11220150100559CO, UBACyT 2018-2021 and ANPCyT-PICT-2016-1358 (2017-2020). \newpage
\section{Introduction} This paper aims at implementing an efficient and effective brain-computer interface (BCI) based system with bluetooth-based indoor user localization that enables users to control various home appliances without actual physical interaction. For a wide range of people, from those who are almost completely paralyzed to completely healthy people, we provide a hardware-based implementation with no graphical user interface (GUI) that frees them from the hassle of pre-training or intermittently staring at a screen. Our implementation consists of a combination of two responses - Steady-State Visually Evoked Potential (SSVEP) and Eye-Blink artifacts. \section{The Algorithm} In this section, we discuss the algorithms for the detection of both the SSVEP response and the eye-blink artifact. \subsection{SSVEP Detection} The useful frequency range to induce an adequate SSVEP response is limited to an interval of 6-24Hz, and the inter-stimulus frequency gap has to be at least 0.2Hz. This frequency interval is nevertheless enough to assign an independent frequency to a large number of devices. When a user focuses on a light source flickering at a predetermined constant frequency within the aforementioned interval, the evoked EEG signal can be used to determine the device associated with that frequency. The successful selection of this device can be used to trigger the desired action. The EEG data sample of 2 seconds at the sampling rate of 512Hz for SSVEP detection is obtained by making $O_{2}$ the signal electrode, $P_{3}$ the ground electrode and $F_{4}$ the reference electrode. We define a set of target frequencies $F$, given by, \begin{equation} F = \{f_1, f_2, \ldots, f_m\} \end{equation} Each $f_k$ is the target frequency for controlling the $k\textsuperscript{th}$ device out of a total of $m$ devices. After sampling for 2 seconds, autocorrelation is applied to the raw EEG data to reduce the noise. Fast Fourier Transform (FFT) is then applied to the results obtained after the autocorrelation step. We then calculate $A_k$ as the sum of the power amplitudes $|P|$ around the target frequency $f_k$ and its second harmonic $2f_k$, for each $k \in [1,m]$. For the purposes of this experiment, $m = 2$ (one for the table lamps and one for the table fans). We calculate the sum of power amplitudes around the two chosen target frequencies of 6Hz for the lamps and 8.2Hz for the fans, respectively. $A_k$ is given by, \begin{equation} A_k = \sum\nolimits_{f_k-0.05}^{f_k+0.05} |P| + \sum\nolimits_{2f_k-0.05}^{2f_k+0.05} |P| \end{equation} We consider the second harmonic because second harmonics are known to elicit a response equal to or stronger than that of the fundamental frequency \cite{A19}. In order to detect SSVEP responses, we use \emph{adaptive thresholding} on the values of $A_k$. The adaptive threshold $\tau$ is given by, \begin{equation} \tau = c\left\{\frac{1}{m}\sum_{k=1}^{{m}}A_k\right\} \end{equation} where $c \in \mathbb{R}$ can be taken as a parameter to adjust the threshold sensitivity. For the purposes of this experiment, we take the value of $c$ as 2. It must be noted here that this threshold parameter $\tau$ is calculated over a 4 second sample, rather than the standard 2 second sample used for the calculation of $A_k$. This is done so that the peaks detected in this sample are not localized.
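As a concrete illustration, the following minimal Python sketch (our reconstruction, not the system's actual implementation; the FFT length, the synthetic test signal and the use of a separate baseline for $\tau$ are our assumptions) computes the $A_k$ and the adaptive threshold, and applies the selection rule stated next.
\begin{verbatim}
import numpy as np

FS = 512                 # sampling rate (Hz)
TARGETS = [6.0, 8.2]     # target frequencies f_k (lamp, fan)
C = 2.0                  # threshold sensitivity parameter c
NFFT = 8192              # zero-padded FFT length, chosen so that every
                         # +/- 0.05 Hz band contains at least one bin

def band_power(P, freqs, f0, hw=0.05):
    return P[(freqs >= f0 - hw) & (freqs <= f0 + hw)].sum()

def amplitudes(x):
    """A_k: power near f_k plus power near its second harmonic 2 f_k."""
    ac = np.correlate(x, x, mode='full')   # autocorrelation denoising
    P = np.abs(np.fft.rfft(ac, n=NFFT))
    freqs = np.fft.rfftfreq(NFFT, d=1.0 / FS)
    return np.array([band_power(P, freqs, f) + band_power(P, freqs, 2 * f)
                     for f in TARGETS])

rng = np.random.default_rng(0)
t2 = np.arange(2 * FS) / FS
# 4 s baseline for the threshold (per the paper), then a 2 s sample
# containing an 8.2 Hz SSVEP component plus noise.
baseline = 0.5 * rng.standard_normal(4 * FS)
sample = np.sin(2 * np.pi * 8.2 * t2) + 0.5 * rng.standard_normal(2 * FS)

tau = C * amplitudes(baseline).mean()
A = amplitudes(sample)
k = int(np.argmax(A))
if A[k] > tau:           # selection rule described in the text
    print('selected device %d (%.1f Hz)' % (k, TARGETS[k]))
\end{verbatim}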
The frequency $f_k$ corresponding to the maximum value of $A_k$ ($A_{max}$) obtained for each sample is selected as the target frequency, under the condition that the value of $A_{max}$ is greater than the threshold $\tau$ for the sample. The sensitivity parameter $c$ can therefore be adjusted accordingly. \subsection{Eye-Blink Artifact Detection} After SSVEP is detected, which indicates that device selection has occurred, a 4 second window is provided in which the user blinks thrice in order to confirm his device selection. If such a sequence is detected, then the state of the selected device is toggled. This adds an extra layer of error resistance to the system, since the user has the option to not blink thrice in case SSVEP has been erroneously evoked. In order to detect voluntary eye blinking in the raw EEG data, we first apply a Butterworth 4\textsuperscript{th} order digital band pass filter in the range of 1-10Hz to the signal, which is obtained from the $F_{p2}$ electrode position. Then, using the same argument as for SSVEP, we apply adaptive thresholding to detect signal peaks. This threshold $\sigma$ is given by, \begin{equation} \sigma = c'\left\{\frac{1}{n}\sum_{j=1}^{{n}}S_j\right\} \end{equation} where $S_j$ is the input signal potential at the $j\textsuperscript{th}$ time instant, $n$ is the number of samples (1024 samples in a 2 second window at a sampling rate of 512Hz) and $c' \in \mathbb{R}$ can be thought of as a sensitivity parameter for $\sigma$. For the purposes of this experiment, the value of $c'$ has been taken as 5. After identifying the peaks above the threshold $\sigma$, we then check whether they have peak width greater than 200ms. This is done to distinguish between a voluntary eye blink and random noise which could include involuntary eye blinking. \subsection{User localization} The limited set of stimulating frequencies that can be used for SSVEP detection limits the number of devices that can be controlled using this system. One technique to increase the number of controllable devices is to use indoor user localization. Localization allows reuse of frequencies among multiple areas. Let us assume, for instance, that the number of detectable SSVEP frequencies is $N$. Without localization, mapping one frequency to one device allows control of $N$ devices. If the coarse localization system can resolve the user's location into $n$ discrete sub-areas within the indoor environment, we can effectively increase the total number of controllable devices to $n \times N$. Coarse room-level indoor user localization is performed by comparing the Received Signal Strength Indication (RSSI) reported by bluetooth beacons placed in each room, each of which tries to establish an L2CAP layer connection with the bluetooth radio on the neuroheadset, as described in \cite{A22}. This particular technique has the advantage of not requiring bluetooth pairing between the beacons and the radio on the headset. The beacons themselves are networked with the main processing computer, and relay these RSSI values to the computer. The system localizes the user to be in the room where the beacon corresponding to that room reports the highest RSSI value. \section{The Experiment} In this experiment, we use our algorithm for controlling two devices - a table fan and a table lamp - in each room. Thus there are a total of 2 fans and 2 lamps being controlled in this experiment. Also, we use a cluster of six 5mm LEDs for generating the stimulus used to control each device, and HC-04 bluetooth modules for localization.
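Before turning to the experimental setup in detail, the eye-blink confirmation step described above admits an equally short Python sketch (again our reconstruction; the synthetic blink waveform, the use of $|S_j|$ in the threshold since the mean of the band-passed signal itself is essentially zero, and the height at which the 200ms width is measured are our assumptions).
\begin{verbatim}
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 512
C_PRIME = 5.0                      # sensitivity parameter c'

def count_blinks(x):
    b, a = butter(4, [1, 10], btype='bandpass', fs=FS)
    y = filtfilt(b, a, x)
    sigma = C_PRIME * np.abs(y).mean()      # adaptive threshold
    peaks, _ = find_peaks(y, height=sigma,  # peaks above sigma that
                          width=0.2 * FS,   # stay wide for > 200 ms
                          rel_height=0.75)  # width measured near the
    return len(peaks)                       # base (an assumption)

# Synthetic 4 s confirmation window with three blink-like deflections.
rng = np.random.default_rng(1)
t = np.arange(4 * FS) / FS
x = 0.05 * rng.standard_normal(t.size)
for t0 in (1.0, 2.0, 3.0):
    x += np.exp(-0.5 * ((t - t0) / 0.15) ** 2)

print('toggle device' if count_blinks(x) == 3 else 'not confirmed')
\end{verbatim}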
We limit ourselves to two frequencies in this experiment, 6Hz and 8.2Hz. The cluster of LEDs for the fans in each room is made to flicker at a frequency of 8.2Hz and the LEDs for both the lamps flicker at 6Hz. In order to select a device, the subject needs to look at the respective LED cluster so that the EEG data corresponding to that period in time may show an SSVEP response. The system also provides feedback to the user by pausing the flicker action for the duration of the selection, so that the LEDs stop flickering. If this feedback appears at time $t$ seconds, the subject can confirm his device selection by blinking his eyes 3 times by time ($t + 4$) seconds. This is done to ensure that the SSVEP-based device selection was not done accidentally by either the subject or the algorithm, thus making the system more robust. The localization system distinguishes between the two fans, and between the two lamps. \section{Results} \begin{center} \begin{tabular}{|p{0.6cm}|p{1.45cm}|p{1.55cm}|p{1.4cm}|p{1.45cm}|} \hline \bfseries{Sub} & \bfseries{SSVEP Accuracy (\%)} & \bfseries{Eye-Blink Accuracy (\%)} & \bfseries{Response Time (sec)} & \bfseries{Transfer Rate (cmd/min)}\\ \hline Sub1 & 96.34 & 100 & 4.8 & 12.5\\ Sub2 & 91.87 & 100 & 5.5 & 10.9\\ Sub3 & 89.26 & 100 & 5.7 & 10.52\\ Sub4 & 99.21 & 100 & 4.8 & 12.5\\ \hline \bfseries{Avg} & \bfseries{94.17} & \bfseries{100} & \bfseries{5.2} & \bfseries{11.6}\\ \hline \end{tabular} \vspace*{0.1cm} \captionof{table}{\textbf{Detection accuracy of SSVEP and eye-blink artifact and response time for all subjects.}} \end{center} The experiments were performed on four subjects, three males (with ages 21, 21 and 35 years) and one female (aged 20 years). The subjects were all healthy with no physical or mental disabilities. Table 1 summarizes our results. Our results are better than the state-of-the-art, giving an average of 11.6 commands per minute, which is significantly higher than \cite{A13}, the previous best result. Also, the detection accuracy for SSVEP is on par with or better than the current state-of-the-art, while detection accuracy of the eye-blink artifact is 100\% in our experiments. \section{Conclusion} We have successfully implemented a robust BCI for home automation with coarse bluetooth-based indoor user localization. This system detects the SSVEP response to identify the device to be controlled and uses the eye-blink artifact to confirm device selection. The system is GUI-independent and uses LEDs which flicker at distinct frequencies for each device. These LEDs provide the stimuli for SSVEP detection. This system has been tested by toggling the state of a table lamp and a table fan and has exhibited detection accuracy on par with and transfer rate better than the current state-of-the-art. {\small \bibliographystyle{ieee}
\section{Introduction} A (pseudo-)Riemannian metric $g_{ab}$ is said to be \textit{Einstein} if its Ricci curvature $R_{ab}$ is a multiple of~$g_{ab}$. The problem of determining whether a given conformal structure (locally) contains an Einstein metric has a rich history, and dates at least to Brinkmann's seminal investiga\-tions~\mbox{\cite{BrinkmannMapped, BrinkmannRiemann}} in the 1920s. Other signif\/icant contributions have been made by, among others, Haantjes and Wrona~\cite{HaantjesWrona}, Sasaki~\cite{Sasaki}, Wong~\cite{Wong}, Yano~\cite{Yano}, Schouten~\cite{Schouten}, Szekeres~\cite{Szekeres}, Kozameh, Newman, and Tod~\cite{KNT}, Bailey, Eastwood, and Gover~\cite{BEG}, Fef\/ferman and Graham~\cite{FeffermanGraham}, and Gover and Nurowski~\cite{GoverNurowski}. Developments in this topic in the last quarter century in particular have stimulated substantial development both within conformal geometry and far beyond it. In the watershed article \cite{BEG}, Bailey, Eastwood, and Gover showed that the existence of such a~metric in a conformal structure (here, and always in this article, of dimension $n \geq 3$) is governed by a second-order, conformally invariant linear dif\/ferential operator $\smash{\Theta_0^{\mathcal{V}}}$ that acts on sections of a natural line bundle $\mathcal{E}[1]$ (we denote by $\mathcal{E}[k]$ the $k$th power of $\mathcal{E}[1]$): Every conformal structure $(M, \mathbf{c})$ is equipped with a canonical bilinear form $\mathbf{g} \in \Gamma(S^2 T^*M \otimes \mathcal{E}[2])$, and a nowhere-vanishing section $\sigma$ in the kernel of $\Theta_0^{\mathcal{V}}$ determines an Einstein metric $\sigma^{-2} \mathbf{g}$ in $\mathbf{c}$ and vice versa. Writing the dif\/ferential equation $\smash{\Theta_0^{\mathcal{V}}(\sigma) = 0}$ as a f\/irst-order system and prolonging once yields a closed system and hence determines a conformally invariant connection~$\nabla^{\mathcal{V}}$ on a natural vector bundle $\mathcal{V}$, called the (standard) \textit{tractor bundle}. (The conformal structure determines a parallel \textit{tractor metric} $H \in \Gamma(S^2 \mathcal{V}^*)$.) By construction, this establishes a bijective correspondence between Einstein metrics in $\mathbf{c}$ and parallel sections of this bundle satisfying a~genericity condition. This framework immediately suggests a natural relaxation of the Einstein condition: A~section of the kernel of~$\Theta_0^{\mathcal{V}}$ is called an \textit{almost Einstein scale}, and it determines an Einstein metric on the complement of its zero locus and becomes singular along that locus. A~conformal structure that admits a~nonzero almost Einstein scale is itself said to be \textit{almost Einstein}, and, somewhat abusively, the metric it determines is sometimes called an \textit{almost Einstein metric} on the original manifold. The generalization to the almost Einstein setting has substantial geometric consequences: The zero locus itself inherits a geometric structure, which can be realized as a natural limiting geometric structure of the metric on the complement. This arrangement leads to the notion of conformal compactif\/ication, which has received substantial attention in its own right, including in the physics literature~\cite{Susskind}.
\looseness=-1 This article investigates the problem of existence of almost Einstein scales, as well as the geo\-metric consequences of existence of such a scale, for a fascinating class of conformal structures that arise naturally from another geometric structure: A \textit{$(2, 3, 5)$ distribution} is a $2$-plane distribution $\mathbf{D}$ on a $5$-manifold which is maximally nonintegrable in the sense that $[\mathbf{D}, [\mathbf{D}, \mathbf{D}]] = TM$. This geometric structure has attracted substantial interest, especially in the last few decades, for numerous reasons: $(2, 3, 5)$ distributions are deeply connected to the exceptional simple Lie algebra of type $\G_2$ (in fact, the study of these distributions dates to 1893, when Cartan~\cite{CartanModel} and Engel~\cite{EngelModel} simultaneously realized that Lie algebra as the inf\/initesimal symmetry algebra of a~distribution of this type), they are the subject of Cartan's most involved application of his ce\-lebrated method of equivalence~\cite{CartanFiveVariables}, they comprise a f\/irst class of distributions with continuous local invariants, they arise na\-turally from a class of second-order Monge equations, they arise naturally from mechanical systems entailing one surface rolling on another without slipping or \mbox{twisting}~\cite{AgrachevSachkov, AnNurowski, BorMontgomery}, they can be used to construct pseudo-Riemannian metrics whose holonomy group is~$\G_2$~\cite{GrahamWillse, LeistnerNurowski}, they are natural compactifying structures for indef\/inite-signature nearly K\"{a}hler geometries in dimension~$6$~\cite{GPW}, and they comprise an interesting example of a broad class of so-called \textit{parabolic geometries}~\cite[Section~4.3.2]{CapSlovak}. (Here and henceforth, the symbol~$\G_2$ refers to the algebra automorphism group of the split octonions; this is a split real form of the complex simple Lie group of type $\G_2$.) For our purposes their most important feature is the natural construction, due to Nurowski \cite[Section~5]{Nurowski}, that associates to any $(2, 3, 5)$ distribution~$(M, \mathbf{D})$ a~conformal structure $\mathbf{c}_{\mathbf{D}}$ of signature $(2, 3)$ on $M$, and it is these structures, which we call \textit{$(2, 3, 5)$ conformal structures}, whose almost Einstein geometry we investigate. For expository convenience, we restrict ourselves to the oriented setting: A~$(2, 3, 5)$ distribution $(M, \mathbf{D})$ is \textit{oriented} if\/f $\mathbf{D} \to M$ is an oriented bundle; an orientation of $\mathbf{D}$ determines an orientation of~$M$ and vice versa. 
The key ingredient in our analysis is that, like almost Einstein conformal structures, $(2, 3, 5)$ conformal structures can be characterized in terms of the holonomy group of the normal tractor connection, $\nabla^{\mathcal{V}}$ (for any oriented conformal structure of signature $(2, 3)$, $\nabla^{\mathcal{V}}$ has holonomy contained in $\SO(H) \cong \SO(3, 4)$): An oriented conformal structure $\mathbf{c}$ of signature $(2, 3)$ coincides with $\mathbf{c}_{\mathbf{D}}$ for some $(2, 3, 5)$ distribution $\mathbf{D}$ if\/f the holonomy group of $\nabla^{\mathcal{V}}$ is contained inside $\G_2$, or equivalently, if\/f there is a parallel tractor $3$-form, that is, a section $\Phi \in \Gamma(\Lambda^3 \mathcal{V}^*)$, compatible with the conformal structure in the sense that the pointwise stabilizer of $\Phi_p$ in $\GL(\mathcal{V}_p)$ (at any, equivalently, every point $p$) is isomorphic to $\G_2$ and is contained inside $\SO(H_p)$ \cite{HammerlSagerschnig, Nurowski}. While the construction $\mathbf{D} \rightsquigarrow \mathbf{c}_{\mathbf{D}}$ depends at each point on the $4$-jet of $\mathbf{D}$, the corresponding compatibility condition in the tractor setting is algebraic (pointwise), which reduces many of our considerations and arguments to properties of the algebra of $\G_2$. With these facts in hand, it is immediate that whether an oriented conformal structure of signature $(2, 3)$ is both $(2, 3, 5)$ and almost Einstein is characterized by the admission of both a~compatible tractor $3$-form $\Phi$ and a (nonzero) parallel tractor $\mathbb{S} \in \Gamma(\mathcal{V})$, which we may just as well frame as a reduction of the holonomy of $\nabla^{\mathcal{V}}$ to the $8$-dimensional common stabilizer~$S$ in~$\SO(3, 4)$ of a nonzero vector in the standard representation $\mathbb{V}$ of~$\SO(3, 4)$ and a $3$-form in $\Lambda^3 \mathbb{V}^*$ compatible with the conformal structure. The isomorphism type of $S$ depends on the causality type of $\mathbb{S}$: If the vector is spacelike, then $S \cong \SU(1, 2)$; if it is timelike, then $S \cong \SL(3, \mathbb{R})$; if it is isotropic, then $S \cong \SL(2, \mathbb{R}) \ltimes Q_+$, where $Q_+ < \G_2$ is the connected, nilpotent subgroup of~$\G_2$ def\/ined via Sections~\ref{subsubsection:parabolic-geometry} and~\ref{subsubsection:235-distributions}.\footnote{Cf.~\cite[Corollary~2.4]{Kath}.} \begin{propositionA} An oriented conformal structure of signature $(2, 3)$ is both $(2, 3, 5)$ and almost Einstein iff it admits a holonomy reduction to the common stabilizer $S$ of a $3$-form in $\Lambda^3 \mathbb{V}^*$ compatible with the conformal structure and a nonzero vector in $\mathbb{V}$. \end{propositionA} Throughout this article, $S$, $\SU(1, 2)$, $\SL(3, \mathbb{R})$, and $\SL(2, \mathbb{R}) \ltimes Q_+$ refer to the common stabilizer of the data described above~-- that is, to any subgroup in a particular conjugacy class in~$\SO(3, 4)$ (and not just to a subgroup of~$\SO(3, 4)$ of the respective isomorphism types). Given an almost Einstein $(2, 3, 5)$ conformal structure, algebraically combining $\Phi$ and $\mathbb{S}$ yields other parallel tractor objects. The simplest of these is the contraction $\mathbb{K} := -\mathbb{S} \hook \Phi$, which we may identify with a skew endomorphism of the standard tractor bundle, $\mathcal{V}$. Underlying this endomorphism is a conformal Killing f\/ield $\xi$ of the induced conformal structure $\mathbf{c}_{\mathbf{D}}$ that does not preserve any distribution that induces that structure.
Thus, if $\xi$ is complete (or alternatively, if we content ourselves with a suitable local statement) the images of $\mathbf{D}$ under the f\/low of $\xi$ comprise a $1$-parameter family of distinct $(2, 3, 5)$ distributions that all induce the same conformal structure. This suggests~-- and connects with the problem of existence of an almost Einstein scale~-- a natural question that we call the \textit{conformal isometry problem} for $(2, 3, 5)$ distributions: Given a~$(2, 3, 5)$ distribution~$(M, \mathbf{D})$, what are the $(2, 3, 5)$ distributions~$\mathbf{D}'$ on~$M$ that induce the same conformal structure, that is, for which $\mathbf{c}_{\mathbf{D}'} = \mathbf{c}_{\mathbf{D}}$? Put another way, what are the f\/ibers of the map~$\mathbf{D} \rightsquigarrow \mathbf{c}_{\mathbf{D}}$? By our previous observation, working in the tractor setting essentially reduces this to an algebraic problem, which we resolve in Proposition~\ref{proposition:compatible-g2-structures} (and which extends to the split real form of~$\G_2$ an analogous result of Bryant \cite[Remark~4]{Bryant} for the compact real form). Translating this result to the tractor setting and then reinterpreting it in terms of the underlying data gives a complete description of all $(2, 3, 5)$ distributions $\mathbf{D}'$ that induce the conformal structure $\mathbf{c}_{\mathbf{D}}$. In order to formulate it, we note f\/irst that, given a f\/ixed oriented conformal structure $\mathbf{c}$ of signature $(2, 3)$, underlying any compatible parallel tractor $3$-form, and hence corresponding to a $(2, 3, 5)$ distribution $\mathbf{D}$, is a conformally weighted $2$-form $\phi \in \Gamma(\Lambda^2 T^*M \otimes \mathcal{E}[3])$, which in particular is a solution to the conformally invariant \textit{conformal Killing $2$-form equation}. The weighted $2$-forms that arise this way are called \textit{generic}. This solution turns out to satisfy $\phi \wedge \phi = 0$~-- so it is locally decomposable~-- but vanishes nowhere. Hence, it def\/ines an oriented $2$-plane distribution, and this distribution is $\mathbf{D}$. \begin{theoremB}\label{theorem:conformally-isometric-distribution-parameterization} Fix an oriented $(2, 3, 5)$ distribution $(M, \mathbf{D})$, denote by $\phi$ the corresponding generic conformal Killing $2$-form, by $\Phi \in \Gamma(\Lambda^3 \mathcal{V}^*)$ the corresponding parallel tractor $3$-form, and by $H_{\Phi} \in \Gamma(S^2 \mathcal{V}^*)$ the parallel tractor metric associated to $\mathbf{c}_{\mathbf{D}}$. \begin{enumerate}\itemsep=0pt \item[$1.$] Suppose $(M, \mathbf{c}_{\mathbf{D}})$ admits the nonzero almost Einstein scale $\sigma \in \Gamma(\mathcal{E}[1])$, and denote by $\mathbb{S} \in \Gamma(\mathcal{V})$ the corresponding parallel tractor; by rescaling, we may assume that $\varepsilon := -H_{\Phi}(\mathbb{S}, \mathbb{S}) \in \{-1, 0, +1\}$. Then, for any $(\bar A, B) \in \mathbb{R}^2$ such that $-\varepsilon \bar A^2 + 2 \bar A + B^2 = 0$ $($there is a~$1$-parameter family of such pairs$)$ the weighted $2$-form \begin{gather} \phi'_{ab} := \phi_{ab} + \bar{A} \left[\tfrac{1}{5} \sigma^2 \left(\tfrac{1}{3} \phi_{ab, c}{}^c + \tfrac{2}{3} \phi_{c[a, b]}{}^c + \tfrac{1}{2} \phi_{c [a,}{}^c{}_{b]} + 4 \mathsf{P}^c{}_{[a} \phi_{b] c} \right) - \sigma \sigma^{,c} \phi_{[ca, b]} \right. 
\label{equation:family-conformal-Killing-2-forms}\\ \left.\hphantom{\phi'_{ab}:= }{} - \tfrac{1}{2} \sigma \sigma_{, [a} \phi_{b] c,}{}^c - \tfrac{1}{5} \sigma \sigma_{,c}{}^c \phi_{ab} + 3 \sigma^{,c} \sigma_{,[c} \phi_{ab]} \right] + B [-\tfrac{1}{4} \sigma \phi^{cd,}{}_d \phi_{[ab, c]} + \tfrac{3}{4} \sigma^{,c} \phi_{[ab} \phi_{c]d,}{}^d]\nonumber \end{gather} is a generic conformal Killing $2$-form, and the oriented $(2, 3, 5)$ distribution $\mathbf{D}'$ it determines induces the same oriented conformal structure that $\mathbf{D}$ does, that is, $\mathbf{c}_{\mathbf{D}'} = \mathbf{c}_{\mathbf{D}}$. \item[$2.$] Conversely, all conformally isometric oriented $(2, 3, 5)$ distributions arise this way: If an oriented $(2, 3, 5)$ distribution $\mathbf{D}'$ satisf\/ies $\mathbf{c}_{\mathbf{D}'} = \mathbf{c}_{\mathbf{D}}$ $($this condition is equality of \textit{oriented} conformal structures$)$, then there is an almost Einstein scale $\sigma$ of $\mathbf{c}_{\mathbf{D}}$ $($we may assume that the corresponding parallel tractor $\mathbb{S}$ satisf\/ies $\varepsilon := -H_{\Phi}(\mathbb{S}, \mathbb{S}) \in \{-1, 0, +1\})$ and \mbox{$(\bar A, B) \in \mathbb{R}^2$} satisfying $-\varepsilon \bar A^2 + 2 \bar A + B^2 = 0$ such that the normal conformal Killing $2$-form $\phi'$ corresponding to $\mathbf{D}'$ is given by~\eqref{equation:family-conformal-Killing-2-forms}. \end{enumerate} \end{theoremB} Herein, a comma $_,$ denotes the covariant derivative with respect to (any) representative $g \in \mathbf{c}_{\mathbf{D}}$, and $\mathsf{P}_{ab}$ denotes the Schouten tensor~\eqref{equation:definition-Schouten} of~$g$. We say that the distributions in the $1$-parameter family $\mathcal{D}$ determined by~$\mathbf{D}$ and $\sigma$ as in the theorem are \textit{related by $\sigma$}. Theorem~B is proved in Section~\ref{subsection:conformally-isometric-235-distributions}. Dif\/ferent signs of the Einstein constant of the Einstein metric $\sigma^{-2} \mathbf{g}$, or equivalently, dif\/ferent causality types of the corresponding parallel tractor $\mathbb{S}$, determine families of distributions with dif\/ferent qualitative behaviors. Section~\ref{subsubsection:parameterizations-isometric} gives simple parameterizations of the $1$-parameter families of conformally isometric distributions, and Section~\ref{subsubsection:recovering-Einstein-scale} gives an explicit algorithm for recovering an almost Einstein scale $\sigma$ of $\mathbf{c}_{\mathbf{D}}$ relating $\mathbf{D}$ and $\mathbf{D}'$ whose existence is guaranteed by Part~(2) of Theorem~B. An immediate corollary of Theorem~B is a natural geometric characterization of almost Einstein oriented $(2, 3, 5)$ distributions: \begin{theoremC} The conformal structure $\mathbf{c}_{\mathbf{D}}$ induced by an oriented $(2, 3, 5)$ distribution $(M, \mathbf{D})$ is almost Einstein iff there is a distribution $(M, \mathbf{D}')$, $\mathbf{D}' \neq \mathbf{D}$, such that $\mathbf{c}_{\mathbf{D}} = \mathbf{c}_{\mathbf{D}'}$. \end{theoremC} Now, f\/ix an oriented conformal structure $\mathbf{c}$ of signature $(2, 3)$ and a nonzero almost Einstein scale $\sigma$ of $\mathbf{c}$.
The conformal Killing f\/ield $\xi$ of $\mathbf{c}$ determined together by $\sigma$ and a choice of distribution $\mathbf{D}$ in the $1$-parameter family $\mathcal{D}$ of oriented $(2, 3, 5)$ distributions inducing $\mathbf{c}$ and related by $\sigma$ turns out not to depend on the choice of~$\mathbf{D}$ (Proposition~\ref{proposition:conformal-Killing-field-distribution-family}), and we can ask for all of the geometric objects that (like $\xi$) are determined by~$\sigma$ and $\mathcal{D}$. The almost Einstein scale $\sigma$ alone partitions $M$ into three subsets, $M_+$, $\Sigma$, $M_-$, according to the sign $+$, $0$, $-$ of~$\sigma$ at each point. By construction, $\sigma$ determines an Einstein metric on the complement $M - \Sigma = M_+ \cup M_-$. If the Einstein metric determined by $\sigma$ is not Ricci-f\/lat (that is, if the parallel tractor $\mathbb{S}$ corresponding to $\sigma$ is nonisotropic), the boundary $\Sigma$ itself inherits a conformal structure $\mathbf{c}_{\Sigma}$ that is suitably compatible with, and that can be regarded as a natural compactifying structure for, $(M_{\pm}, g_{\pm})$ along $\partial M_{\pm} = \Sigma$~\cite{Gover}. Something similar but more involved occurs in the Ricci-f\/lat case. This decomposition of $M$ according to the geometry of the object~$\sigma$~-- equivalently, the holonomy reduction of $\nabla^{\mathcal{V}}$ determined by the parallel standard tractor~$\mathbb{S}$~-- along with descriptions of the geometry induced on each subset in the decomposition, is formalized by the theory of \textit{curved orbit decompositions} \cite{CGH}. Here, the involved geometric structures are encoded as \textit{Cartan geometries} (Section~\ref{subsubsection:Cartan-geometry}) of an appropriate type, which are geometric structures modeled on appropriate homogeneous spaces $G / P$ endowed with $G$-invariant geometric structures, and the decomposition of $M$ in the presence of a holonomy reduction to a group $H \leq G$ is a natural generalization of the $H$-orbit decomposition of $G / P$; the subsets in the decomposition are accordingly termed the \textit{curved orbits} of the reduction. The curved orbits are parameterized by the intersections of $H$ and $P$ up to conjugacy in $G$, and $H$ together with these intersections determines the respective geometric structures on each curved orbit. Section~\ref{section:curved-orbit-decomposition} carries out this decomposition for the Cartan geometry canonically associated to $(M, \mathbf{c})$ determined by $\sigma$ and $\mathcal{D}$, that is, by a holonomy reduction to the group $S$. Besides elucidating the geometry of almost Einstein $(2, 3, 5)$ conformal structures for its own sake, this serves three purposes: First, this documents an example of a curved orbit decomposition for which the decomposition is relatively involved. Second, and more importantly, we will see that several classical geometries occur in the curved orbit decompositions, establishing novel and nonobvious links between $(2, 3, 5)$ distributions and those structures. Third, we can then exploit these connections to give new methods for construction of almost Einstein $(2, 3, 5)$ conformal structures from classical geometries, and using these we produce, for the f\/irst time, examples both with negative (Example~\ref{example:distinguished-rolling-distribution}) and positive (Example~\ref{example:Dirichlet-Ricci-positive}) Einstein constants.
Dif\/ferent signs of the Einstein constant (equivalently, dif\/ferent causality types of the parallel tractor $\mathbb{S}$ corresponding to $\sigma$) lead to qualitatively dif\/ferent curved orbit decompositions, so we treat them separately. We say that an almost Einstein scale is \textit{Ricci-negative}, \textit{-positive}, or \textit{-f\/lat} if the Einstein constant of the Einstein metric it determines is negative, positive, or zero, respectively. See also Appendix~\ref{appendix}, which summarizes the results here and records geometric characterizations of the curved orbits. In the Ricci-negative case, the decomposition of a manifold into submanifolds is the same as that determined by $\sigma$ alone, but the family $\mathcal{D}$ determines additional structure on each curved orbit. (Herein, for readability we often suppress notation denoting restriction to a curved orbit.) \begin{theoremDminus}\label{theorem:Dminus} Let $(M, \mathbf{c})$ be an oriented conformal structure of signature $(2, 3)$. A holonomy reduction of $\mathbf{c}$ to $\SU(1, 2)$ determines a $1$-parameter family $\mathcal{D}$ of oriented $(2, 3, 5)$ distributions related by a Ricci-negative almost Einstein scale such that $\mathbf{c} = \mathbf{c}_{\mathbf{D}}$ for all $\mathbf{D} \in \mathcal{D}$, as well as a~decomposition $M = M_5^+ \cup M_5^- \cup M_4$: \begin{itemize}\itemsep=0pt \item $($Section~{\rm \ref{subsection:open-curved-orbits})} The orbits $M_5^{\pm}$ are open, and $M_5 := M_5^+ \cup M_5^-$ is equipped with a Ricci-negative Einstein metric $g := \sigma^{-2} \mathbf{g}\vert_{M_5}$. The pair $(-g, \xi)$ is a Sasaki structure $($see Section~{\rm \ref{subsubsection:vareps-Sasaki-structure})} on $M_5$. Locally, $M_5$ f\/ibers along the integral curves of $\xi$, and the leaf space $L_4$ inherits a K\"{a}hler--Einstein structure $(\hatg, \hatK)$. \item $($Section~{\rm \ref{subsubsection:curved-orbit-negative-hypersurface})} The orbit $M_4$ is a smooth hypersurface, and inherits a Fef\/ferman conformal structure $\mathbf{c}_{\mathbf{S}}$, which has signature $(1, 3)$: Locally, $\mathbf{c}_{\mathbf{S}}$ arises from the classical Fef\/ferman construction {\rm \cite{CapGoverHolonomyCharacterization,Fefferman,LeitnerHolonomyCharacterization}}, which $($in this dimension$)$ canonically associates to any $3$-dimensional CR structure $(L_3, \mathbf{H}, \mathbf{J})$ a conformal structure on a circle bundle over $L_3$. Again in the local setting, the f\/ibers of the f\/ibration $M_4 \to L_3$ are the integral curves of~$\xi$. \end{itemize} \end{theoremDminus} The Ricci-positive case is similar to the Ricci-negative case but entails $2$-dimensional curved orbits that have no analogue there. \begin{theoremDplus} Let $(M, \mathbf{c})$ be an oriented conformal structure of signature $(2, 3)$. A holonomy reduction of $\mathbf{c}$ to $\SL(3, \mathbb{R})$ determines a $1$-parameter family $\mathcal{D}$ of oriented $(2, 3, 5)$ distributions related by a Ricci-positive almost Einstein scale such that $\mathbf{c} = \mathbf{c}_{\mathbf{D}}$ for all $\mathbf{D} \in \mathcal{D}$, as well as a~decomposition $M = M_5^+ \cup M_5^- \cup M_4 \cup M_2^+ \cup M_2^-$: \begin{itemize}\itemsep=0pt \item $($Section~{\rm \ref{subsection:open-curved-orbits})} The orbits $M_5^{\pm}$ are open, and $M_5 := M_5^+ \cup M_5^-$ is equipped with a Ricci-positive Einstein metric $g := \sigma^{-2} \mathbf{g}\vert_{M_5}$. The pair $(-g, \xi)$ is a para-Sasaki structure $($see Section~{\rm \ref{subsubsection:vareps-Sasaki-structure})} on~$M_5$.
Locally, $M_5$ f\/ibers along the integral curves of~$\xi$, and the leaf space~$L_4$ inherits a~para-K\"{a}hler--Einstein structure $(\hatg, \hatK)$. \item $($Section~{\rm \ref{subsubsection:curved-orbit-positive-hypersurface})} The orbit $M_4$ is a smooth hypersurface, and inherits a para-Fef\/ferman conformal structure $\mathbf{c}_{\mathbf{S}}$, which has signature $(2, 2)$: Locally, $\mathbf{c}_{\mathbf{S}}$ arises from the paracomplex analogue of the classical Fef\/ferman construction, which (in this dimension) canonically associates to any Legendrean contact structure $(L_3, \mathbf{H}_+ \oplus \mathbf{H}_-)$~-- or, locally equivalently, to a point equivalence class of second-order ODEs $\ddot y = F(x, y, \dot y)$~-- a~conformal structure on an $\SO(1, 1)$-bundle over $L_3$. Again in the local setting, the f\/ibers of the f\/ibration $M_4 \to L_3$ are the integral curves of~$\xi$. \item $($Section~{\rm \ref{subsubsection:curved-orbit-M2pm})} The orbits $M_2^{\pm}$ are $2$-dimensional and inherit oriented projective structures. \end{itemize} \end{theoremDplus} The descriptions of the geometric structures in the above two cases are complete in the sense that any other geometric data determined by the holonomy reduction to $S$ can be recovered from the indicated data. We do not claim the same for the descriptions in the Ricci-f\/lat case, which we can view as a sort of degenerate analogue of the other two cases. \begin{theoremDzero} Let $(M, \mathbf{c})$ be an oriented conformal structure of signature $(2, 3)$. A holonomy reduction of $\mathbf{c}$ to $\SL(2, \mathbb{R}) \ltimes Q_+$ determines a $1$-parameter family $\mathcal{D}$ of oriented $(2, 3, 5)$ distributions related by a Ricci-f\/lat almost Einstein scale such that $\mathbf{c} = \mathbf{c}_{\mathbf{D}}$ for all $\mathbf{D} \in \mathcal{D}$, as well as a decomposition $M = M_5^+ \cup M_5^- \cup M_4 \cup M_2 \cup M_0^+ \cup M_0^-$. \begin{itemize}\itemsep=0pt \item $($Section~{\rm \ref{subsection:open-curved-orbits})} The orbits $M_5^{\pm}$ are open, and $M_5 := M_5^+ \cup M_5^-$ is equipped with a Ricci-f\/lat metric $g := \sigma^{-2} \mathbf{g}\vert_{M_5}$. The pair $(-g, \xi)$ is a null-Sasaki structure $($see Section~{\rm \ref{subsubsection:vareps-Sasaki-structure})} on~$M_5$. Locally, $M_5$ f\/ibers along the integral curves of $\xi$, and the leaf space $L_4$ inherits a~null-K\"{a}hler--Einstein structure $(\hatg, \hatK)$. \item $($Section~{\rm \ref{subsubsection:curved-orbit-flat-hypersurface})} The orbit $M_4$ is a smooth hypersurface, and it locally f\/ibers over a $3$-mani\-fold $\wt L$ that carries a conformal structure $\mathbf{c}_{\wt L}$ of signature $(1, 2)$ and an isotropic line f\/ield. \item $($Section~{\rm \ref{subsubsection:curved-orbit-M2})} The orbit $M_2$ has dimension $2$ and is equipped with a preferred line f\/ield. \item $($--$)$ The orbits $M_0^{\pm}$ consist of isolated points and so carry no intrinsic geometry. \end{itemize} \end{theoremDzero} The statements of these theorems involve an expository choice that entails some subtle consequences: In each case, the holonomy reduction determines an almost Einstein scale $\sigma$ and $1$-parameter family $\mathcal{D}$ of oriented $(2, 3, 5)$ distributions, but this reduction does not distinguish a distribution within this family. Alternatively, we could specify for $\mathbf{c}$ an almost Einstein scale and a distribution $\mathbf{D}$ such that $\mathbf{c} = \mathbf{c}_{\mathbf{D}}$.
Such a specif\/ication determines a holonomy reduction to $S$ as above, but the choice of a preferred $\mathbf{D}$ is additional data, and this is ref\/lected in the induced geometries on the curved orbits. Proposition~\ref{proposition:Einstein-Sasaki-to-235} gives a partial converse to the statements in Theorems $D_-$ and $D_+$ about the $\varepsilon$-Sasaki--Einstein structures $(-g, \xi)$ induced on the open orbits by the corresponding holonomy reductions: Any $\varepsilon$-Sasaki--Einstein structure $(-g, \xi)$ (here restricting to $\varepsilon = \pm 1$) determines around each point a $1$-parameter family of oriented $(2, 3, 5)$ distributions related by the almost Einstein scale for $[g]$ corresponding to $g$, and by construction the $\varepsilon$-Sasaki structure is the one induced by the corresponding holonomy reduction. We also brief\/ly present a generalized Fef\/ferman construction that essentially inverts the projection $M_5 \to L_4$ along the leaf space f\/ibration in the non-Ricci-f\/lat cases, in a way that emphasizes the role of almost Einstein $(2, 3, 5)$ conformal structures (see Section~\ref{subsubsection:twistor-construction}). In particular, any non-Ricci-f\/lat $\varepsilon$-K\"{a}hler--Einstein metric of signature $(2, 2)$ gives rise to a $1$-parameter family of $(2, 3, 5)$ distributions. We treat this construction in more detail in an article currently in preparation \cite{SagerschnigWillseTwistor}. As mentioned above, we have for convenience formulated our results for oriented $(2, 3, 5)$ distributions and conformal structures, but all the results herein have analogues for unoriented distributions, and many of our considerations are anyway local. Alternatively, one could further restrict attention to space- and time-oriented conformal structures (see Remark~\ref{remark:space-and-time-oriented}) or work with conformal spin structures, the latter of which would connect the considerations here more closely with those in \cite{HammerlSagerschnigSpinor}. Finally, we mention brief\/ly one aspect of this geometry we do not discuss here, but which will be taken up in a shorter article currently in preparation: One can construct for any oriented $(2, 3, 5)$ distribution~$\mathbf{D}$ an invariant second-order linear dif\/ferential operator, closely related to $\Theta_0^{\mathcal{V}}$, that acts on sections of $\mathcal{E}[1] \cong \Lambda^2 \mathbf{D}$~\cite{SagerschnigWillseOperator}. Its kernel can again be interpreted as the space of almost Einstein scales of $\mathbf{c}_{\mathbf{D}}$, but it is a simpler object than $\Theta_0^{\mathcal{V}}$, enough so that one can use it to construct new explicit examples of almost Einstein $(2, 3, 5)$ distributions. Among other things, the existence of such an operator emphasizes that almost Einstein geometry of the induced conformal structure is a fundamental feature of the geometry of $(2, 3, 5)$ distributions. For simplicity of statements of results, we assume that all given manifolds are connected; we do not include this hypothesis explicitly in our statements of results. We use both index-free and Penrose index notation throughout, according to convenience.
\section{Preliminaries}\label{section:preliminaries} \subsection[epsilon-complex structures]{$\boldsymbol{\varepsilon}$-complex structures} The \textit{$\varepsilon$-complex numbers}, $\varepsilon \in \{-1, 0, +1\}$, form the ring $\mathbb{C}_{\varepsilon}$ generated over $\mathbb{R}$ by the generator $i_{\varepsilon}$, which satisf\/ies precisely the relations generated by $i_{\varepsilon}^2 = \varepsilon$. An \textit{$\varepsilon$-complex structure} on a~real vector space $\mathbb{W}$ (necessarily of even dimension, say, $2m$) is an endomorphism $\mathbb{K} \in \End(\mathbb{W})$ such that $\mathbb{K}^2 = \varepsilon \id_{\mathbb{W}}$;\footnote{In the case $\varepsilon = 0$, some references require additionally that a null-complex structure satisfy $\rank \mathbb{K} = m$~\cite{DunajskiPrzanowski}; this anyway holds for the null-complex structures that appear in this article.} if $\varepsilon = +1$, we further require that the $(\pm 1)$-eigenspaces both have dimension~$m$. This identif\/ies $\mathbb{W}$ with $\mathbb{C}_{\varepsilon}^m$ (as a free $\mathbb{C}_{\varepsilon}$-module) so that the action of $\mathbb{K}$ coincides with multiplication by $i_{\varepsilon}$, and the pair $(\mathbb{W}, \mathbb{K})$ is an \textit{$\varepsilon$-complex vector space}. One specializes the names of structures to particular values of $\varepsilon$ by omitting $(-1)$-, replacing $(+1)$- with the pref\/ix \textit{para-}, and replacing $0$- with the modif\/ier \textit{null}-. See \cite[Section~1]{SchulteHengesbach}. \subsection[The group G2]{The group $\boldsymbol{\G_2}$} \subsubsection{Split cross products in dimension 7} The geometry studied in this article depends critically on the algebraic features of a so-called (split) cross product $\times$ on a $7$-dimensional real vector space $\mathbb{V}$. One can realize this explicitly using the algebra $\wtbbO$ of split octonions; we follow \cite[Section~2]{GPW}, and see also~\cite{Sagerschnig}. This is a~composition ($\mathbb{R}$-)algebra and so is equipped with a unit $1$ and a nondegenerate quadratic form $N$ multiplicative in the sense that $N(xy) = N(x) N(y)$ for all $x, y \in \wtbbO$. In particular $N(1) = 1$, and polarizing $N$ yields a nondegenerate symmetric bilinear form, which turns out for~$\wtbbO$ to have signature $(4, 4)$. So, the $7$-dimensional vector subspace $\mathbb{I} = \langle 1 \rangle^{\perp}$ of \textit{imaginary split octonions} inherits a nondegenerate symmetric bilinear form $H$ of signature $(3, 4)$, as well as a~map $\times\colon \mathbb{I} \times \mathbb{I} \to \mathbb{I}$ def\/ined by \begin{gather*} x \times y := xy + H(x, y) 1 ; \end{gather*} this is just the orthogonal projection of $xy$ onto $\mathbb{I}$. This map is a (binary) cross product in the sense of \cite{BrownGray}, that is, it satisf\/ies \begin{align}\label{equation:cross-product-totally-skew} H(x \times y, x) = 0 \qquad \textrm{and} \qquad H(x \times y, x \times y) = H(x, x) H(y, y) - H(x, y)^2 \end{align} for all $x, y \in \mathbb{I}$. \begin{Definition} We say that a bilinear map $\times\colon \mathbb{V} \times \mathbb{V} \to \mathbb{V}$ on a $7$-dimensional real vector space $\mathbb{V}$ is a \textit{split cross product} if\/f there is a linear isomorphism $A \colon \mathbb{I} \to \mathbb{V}$ such that $A(x \times y) = A(x) \times A(y)$. \end{Definition} A split cross product $\times$ determines a bilinear form \begin{gather*} H_{\times}(x, y) := -\tfrac{1}{6} \tr(x \times (y \times \,\cdot\,)) \end{gather*} of signature $(3, 4)$ on the underlying vector space.
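We pause to record the standard, elementary reason that the f\/irst identity in \eqref{equation:cross-product-totally-skew} upgrades to total skewness, since we invoke this just below: polarizing that identity in $x$ gives
\begin{gather*}
0 = H((x + z) \times y, x + z) = H(x \times y, z) + H(z \times y, x) ,
\end{gather*}
so the trilinear map $(x, y, z) \mapsto H(x \times y, z)$ is skew under exchange of its f\/irst and third arguments; moreover, since $x \in \mathbb{I}$ is trace-free, $x^2 = -N(x) 1$, so $x \times x = x^2 + H(x, x) 1 = 0$, and the map is also skew in its f\/irst two arguments, hence totally antisymmetric.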
For the split cross product $\times$ on $\mathbb{I}$, $H_{\times} = H$. We say that a cross product $\times$ is \textit{compatible} with a bilinear form $H$ if\/f $\times$ induces $H$, that is, if\/f \mbox{$H = H_{\times}$}. It follows from the alternativity identity $(x x) y = x (x y)$ satisf\/ied by the split octonions that \begin{gather}\label{equation:iterated-cross-product} x \times (x \times y) = -H_{\times}(x, x) y + H_{\times}(x, y) x . \end{gather} By \eqref{equation:cross-product-totally-skew}, $\times$ is totally $H_{\times}$-skew, so lowering its upper index with $H_{\times}$ yields a $3$-form $\Phi \in \Lambda^3 \mathbb{V}^*$: \begin{gather*} \Phi(x, y, z) := H_{\times}(x \times y, z) . \end{gather*} A $3$-form is said to be \textit{split-generic} if\/f it arises this way, and such forms comprise an open $\GL(\mathbb{V})$-orbit under the standard action on $\Lambda^3 \mathbb{V}^*$. One can recover from any split-generic $3$-form the split cross product $\times$ that induces it. A split cross product $\times$ also determines a nonzero volume form $\epsilon_{\times} \in \Lambda^7 \mathbb{V}^*$ for $H_{\times}$: \begin{gather*} (\epsilon_{\times})_{ABCDEFG} := \tfrac{1}{42} {\Phi}_{K[AB} {\Phi}^K{}_{CD} {\Phi}_{EFG]} . \end{gather*} Thus, $\times$ determines an orientation $[\epsilon_{\times}]$ on $\mathbb{V}$ and Hodge star operators $\ast\colon \Lambda^k \mathbb{V}^* \to \Lambda^{7 - k} \mathbb{V}^*$. If $\mathbb{V}$ is a vector space endowed with a bilinear form $H$ of signature $(p, q)$ and an orienta\-tion~$\Omega$, the subgroup of $\GL(\mathbb{V})$ preserving the pair $(H, \Omega)$ is $\SO(p, q)$, so we refer to such a~pair as an~$\SO(p, q)$-structure on $\mathbb{V}$. We say that a cross product $\times$ on $\mathbb{V}$ is \textit{compatible} with an $\SO(p, q)$-structure $(H, \Omega)$ if\/f $\times$ induces $H$ and $\Omega$, that is, if\/f $H = H_{\times}$ and $\Omega = [\epsilon_{\times}]$. A split-generic $3$-form $\Phi$ satisf\/ies various contraction identities, including \cite[equations~(2.8) and~(2.9)]{Bryant}: \begin{gather} \Phi^E{}_{AB} \Phi_{ECD} = (\astPhi \Phi)_{ABCD} + H_{AC} H_{BD} - H_{AD} H_{BC}, \label{equation:contraction-Phi-Phi} \\ \Phi^F{}_{AB} (\astPhi \Phi)_{FCDE} = 3 (H_{A[C} \Phi_{DE]B} - H_{B[C} \Phi_{DE]A}). \label{equation:contraction-Phi-PhiStar} \end{gather} \subsubsection[The group G2]{The group $\boldsymbol{\G_2}$}\label{subsubsection:G2} The (algebra) automorphism group of $\wtbbO$ is a connected, split real form of the complex Lie group of type $\G_2$, and so we denote it by $\G_2$. One can recover the algebra structure of $\wtbbO$ from $(\mathbb{I}, \times)$, so $\G_2$ is also the automorphism group of $\times$, and equivalently, the stabilizer subgroup in $\GL(\mathbb{V})$ of a split-generic $3$-form on a $7$-dimensional real vector space $\mathbb{V}$. For much more about~$\G_2$, see~\cite{Kath}. The action of $\G_2$ on $\mathbb{V}$ def\/ines the smallest nontrivial, irreducible representation of $\G_2$, which is sometimes called the \textit{standard representation}. This action stabilizes a unique split cross product, or equivalently, a unique split-generic $3$-form (up to a positive multiplicative constant).
Thus, by a \textit{$\G_2$-structure} on a $7$-dimensional real vector space $\mathbb{V}$ we mean either (1)~a~representation of $\G_2$ on $\mathbb{V}$ isomorphic to the standard one, or (2)~slightly abusively (on account of the above multiplicative ambiguity), a~split cross product $\times$ on $\mathbb{V}$, or equivalently, a~split-generic $3$-form $\Phi$ on $\mathbb{V}$. Since a split cross product $\times$ on a vector space $\mathbb{V}$ determines algebraically both a bilinear form~$H_{\times}$ and an orientation $[\epsilon_{\times}]$ on $\mathbb{V}$, the induced actions of $\G_2$ preserve both, def\/ining a~natural embedding \begin{gather*} \G_2 \hookrightarrow \SO(H_{\times}) \cong \SO(3, 4) . \end{gather*} Moreover, $H_{\times}$ realizes $\mathbb{V}$ as the standard representation of $\SO(3, 4)$, and its restriction to $\G_2$ is the standard representation $\times$ def\/ines. Like that of $\SO(3, 4)$, the $\G_2$-action on the ray projectivization of $\mathbb{V}$ has exactly three orbits, namely the sets of spacelike, isotropic, and timelike rays \cite[Theorem~3.1]{Wolf}. It is convenient for our purposes to use the $\G_2$-structure on $\mathbb{V}$ def\/ined in a basis $(E_a)$ (with dual basis, say, $(e^a)$) via the $3$-form \begin{gather}\label{equation:3-form-basis} \Phi := - e^{147} + \sqrt{2} e^{156} + \sqrt{2} e^{237} + e^{245} + e^{346}, \end{gather} where $e^{a_1 \cdots a_k} := e^{a_1} \wedge \cdots \wedge e^{a_k}$ (cf.~\cite[equation~(23)]{HammerlSagerschnig}). With respect to the basis~$(E_a)$, the induced bilinear form $H_{\Phi}$ has matrix representation \begin{gather}\label{equation:bilinear-form-basis} [H_{\Phi}] = \begin{pmatrix} 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & I_2 & 0 \\ 0 & 0 & -1 & 0 & 0 \\ 0 & I_2 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \end{pmatrix} , \end{gather} where $I_2$ denotes the $2 \times 2$ identity matrix, and the induced volume form is \begin{gather*} \epsilon_{\Phi} = -e^{1234567}. \end{gather*} Let $\mfg_2$ denote the Lie algebra of $\G_2$. Dif\/ferentiating the inclusion $\G_2 \hookrightarrow \GL(\mathbb{V})$ yields a Lie algebra representation $\mfg_2 \hookrightarrow \mfgl(\mathbb{V}) \cong \End(\mathbb{V})$, and with respect to the basis $(E_a)$ its elements are precisely those of the form \begin{gather}\label{equation:g2-matrix-representation} \begin{pmatrix} \tr A & Z & s & W^{\top} & 0 \\ X & A & \sqrt{2} J Z^{\top} & \frac{s}{\sqrt{2}} J & -W \\ r & -\sqrt{2} X^{\top} J & 0 & -\sqrt{2} Z J & s \\ Y^{\top} & -\frac{r}{\sqrt{2}} J & \sqrt{2} J X & -A^{\top} & -Z^{\top} \\ 0 & -Y & r & -X^{\top} & -\tr A \end{pmatrix} , \end{gather} where $A \in \mfgl(2, \mathbb{R})$, $W, X \in \mathbb{R}^2$, $Y, Z \in (\mathbb{R}^2)^*$, $r, s \in \mathbb{R}$, and $J := \begin{pmatrix}0 & -1\\1 & 0 \end{pmatrix}$. \subsubsection[Some G2 representation theory]{Some $\boldsymbol{\G_2}$ representation theory}\label{subsubsection:g2-representation-theory} Fix a $7$-dimensional real vector space $\mathbb{V}$ and a $\G_2$-structure $\Phi \in \Lambda^3 \mathbb{V}^*$. We brief\/ly record the decompositions of the $\G_2$-representations $\Lambda^2 \mathbb{V}^*$, $\Lambda^3 \mathbb{V}^*$, and $S^2 \mathbb{V}^*$ into irreducible subrepresentations; we use and extend the notation of \cite[Section~2.6]{Bryant}. In each case, $\Lambda^k_l$ denotes the irreducible subrepresentation of $\Lambda^k \mathbb{V}^*$ of dimension~$l$, which is unique up to isomorphism.
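As a quick plausibility check on the decompositions recorded in this subsubsection, one can compare dimensions (this uses nothing beyond $\dim \mathbb{V} = 7$ and $\dim \G_2 = 14$):
\begin{gather*}
\dim \Lambda^2 \mathbb{V}^* = 21 = 7 + 14 , \qquad \dim \Lambda^3 \mathbb{V}^* = 35 = 1 + 7 + 27 , \qquad \dim S^2 \mathbb{V}^* = 28 = 1 + 27 .
\end{gather*}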
The representation $\Lambda^2 \mathbb{V}^* \cong \mfso(H_{\Phi})$ decomposes into irreducible subrepresentations as \begin{gather*} \Lambda^2 \mathbb{V}^* = \Lambda^2_7 \oplus \Lambda^2_{14} ; \end{gather*} \looseness=-1 $\Lambda^2_7 \cong \mathbb{V}$ and $\Lambda^2_{14}$ is isomorphic to the adjoint representation $\mfg_2$. Def\/ine the map $\iota^2_7\colon \mathbb{V} \to \Lambda^2 \mathbb{V}^*$~by \begin{gather}\label{equation:iota-2-7} \iota^2_7 \colon \ \mathbb{S}^A \mapsto \mathbb{S}^C \Phi_{CAB} ; \end{gather} by raising an index we can view $\iota^2_7$ as the map $\mathbb{V} \to \mfso(H_{\Phi})$ given by $\mathbb{S} \mapsto (\mathbb{T} \mapsto \mathbb{T} \times \mathbb{S})$. It is evidently nontrivial, so by Schur's lemma its (isomorphic) image is $\Lambda^2_7$. Conversely, consider the map $\pi^2_7 \colon \Lambda^2 \mathbb{V}^* \to \mathbb{V}$ def\/ined by \begin{gather}\label{equation:pi-2-7} \pi^2_7 \colon \ \mathbb{A}_{AB} \mapsto \tfrac{1}{6} \mathbb{A}_{BC} \Phi^{BCA} . \end{gather} Raising indices gives a map $\Lambda^2 \mathbb{V} \to \mathbb{V}$, which, up to the multiplicative constant, is the descent of $\times\colon \mathbb{V} \times \mathbb{V} \to \mathbb{V}$ via the wedge product. In particular it is nontrivial, so it has kernel $\Lambda^2_{14}$ and restricts to an isomorphism $\smash{\pi^2_7\vert_{\Lambda^2_7}\colon \Lambda^2_7 \to \mathbb{V}}$; we have chosen the coef\/f\/icient so that $\pi^2_7 \circ \iota^2_7 = \id_{\mathbb{V}}$ and $\smash{\iota^2_7 \circ \pi^2_7 \vert_{\Lambda^2_7} = \id_{\Lambda^2_7}}$. Since $\G_2$ is the stabilizer subgroup in $\SO(H_{\Phi})$ of $\Phi$, $\mfg_2$ is the annihilator in $\mfso(H_{\Phi}) \cong \Lambda^2 \mathbb{V}^*$ of $\Phi$. Expanding $\id_{\Lambda^2 \mathbb{V}^*} - \iota^2_7 \circ \pi^2_7$ using \eqref{equation:iota-2-7} and \eqref{equation:pi-2-7} and applying \eqref{equation:contraction-Phi-Phi} gives that under this identif\/ication, the corresponding projection $\pi^2_{14} \colon \mfso(\mathbb{V}) \cong \Lambda^2 \mathbb{V}^* \to \mfg_2$ is \begin{gather*} \pi^2_{14} \colon \ \mathbb{A}^A{}_B \mapsto \tfrac{2}{3} \mathbb{A}^A{}_B - \tfrac{1}{6} (\astPhi \Phi)_D{}^{EA}{}_B \mathbb{A}^D{}_E . \end{gather*} The representation $\Lambda^3 \mathbb{V}^*$ decomposes into irreducible subrepresentations as \begin{gather*} \Lambda^3 \mathbb{V}^* = \Lambda^3_1 \oplus \Lambda^3_7 \oplus \Lambda^3_{27} . \end{gather*} Here, $\Lambda^3_1$ is just the trivial representation spanned by $\Phi$, and the map $\pi^3_1 \colon \Lambda^3 \mathbb{V}^* \to \mathbb{R}$ def\/ined by \begin{gather}\label{equation:pi-3-1} \pi^3_1\colon \ \Psi_{ABC} \mapsto \tfrac{1}{42} \Phi^{ABC} \Psi_{ABC} \end{gather} is a left inverse for the map $\mathbb{R} \stackrel{\cong}{\to} \Lambda^3_1 \hookrightarrow \Lambda^3 \mathbb{V}^*$ def\/ined by $a \mapsto a \Phi$. The map $\iota^3_7 \colon \mathbb{V} \to \Lambda^3 \mathbb{V}^*$ def\/ined by \begin{gather*} \iota^3_7 \colon \ \mathbb{S}^A \mapsto -\mathbb{S}^D (\astPhi \Phi)_{DABC} = [\ast_{\Phi}(\mathbb{S} \wedge \Phi)]_{ABC} \end{gather*} is nonzero, so it def\/ines an isomorphism $\mathbb{V} \cong \Lambda^3_7$. The map $\pi^3_7 \colon \Lambda^3 \mathbb{V}^* \to \mathbb{V}$ def\/ined by \begin{gather}\label{equation:pi-3-7} \pi^3_7 \colon \ \Psi_{ABC} \mapsto \tfrac{1}{24} (\astPhi \Phi)^{BCDA} \Psi_{BCD} = \tfrac{1}{ 4} [\astPhi (\Phi \wedge \Psi)]^A \end{gather} is scaled so that $\pi^3_7 \circ \iota^3_7 = \id_{\mathbb{V}}$ and $\iota^3_7 \circ \pi^3_7 = \id_{\Lambda^3_7}$.
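To illustrate how the contraction identities of the previous subsubsection enter, we verify that $\pi^2_7 \circ \iota^2_7 = \id_{\mathbb{V}}$ with the coef\/f\/icient chosen above: contracting \eqref{equation:contraction-Phi-Phi} with $H^{BD}$ kills the $4$-form term (two arguments of an alternating form are contracted with a symmetric one) and, after rearranging indices by the total antisymmetry of $\Phi$, leaves $\Phi_{ABC} \Phi_D{}^{BC} = (7 - 1) H_{AD} = 6 H_{AD}$. Hence, for $\mathbb{S} \in \mathbb{V}$,
\begin{gather*}
\pi^2_7\big(\iota^2_7(\mathbb{S})\big)^A = \tfrac{1}{6} \mathbb{S}^D \Phi_{DBC} \Phi^{BCA} = \tfrac{1}{6} \mathbb{S}^D \big(6 \delta_D{}^A\big) = \mathbb{S}^A .
\end{gather*}
A further trace gives $\Phi_{ABC} \Phi^{ABC} = 42$, which similarly accounts for the coef\/f\/icient in \eqref{equation:pi-3-1}.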
The $\G_2$-representation $S^2 \mathbb{V}^*$ decomposes into irreducible modules as $\mathbb{R} \oplus S^2_{\circ} \mathbb{V}^*$, namely, into its $H_{\Phi}$-trace and $H_{\Phi}$-tracefree components, respectively. The linear map $i\colon S^2 \mathbb{V}^* \to \Lambda^3 \mathbb{V}^*$ def\/ined by \begin{gather}\label{equation:i} i \colon \ \mathbb{A}_{AB} \mapsto 6 \Phi^D{}_{[AB} \mathbb{A}_{C] D} \end{gather} satisf\/ies $i(H_{\Phi}) = 6 \Phi$ and is nonzero on $S^2_{\circ} \mathbb{V}^*$, so $i \vert_{S^2_{\circ}}$ is an isomorphism $\smash{S^2_{\circ} \stackrel{\cong}{\to} \Lambda^3_{27}}$. The projection $\pi^3_{27} \colon \Lambda^3 \mathbb{V}^* \to S^2_{\circ} \mathbb{V}^*$ def\/ined by \begin{gather}\label{equation:pi-3-27} \pi^3_{27} \colon \ \Psi_{ABC} \mapsto -\tfrac{1}{8} \astPhi[(\,\cdot \, \hook \Phi) \wedge (\,\cdot \, \hook \Phi) \wedge \Psi]_{AB} - \tfrac{3}{4} \pi^3_1(\Psi) H_{AB} \end{gather} is scaled so that $\pi^3_{27} \circ i \vert_{S^2_{\circ} \mathbb{V}^*} = \id_{S^2_{\circ} \mathbb{V}^*}$ and $i \circ \pi^3_{27}\vert_{\Lambda^3_{27}} = \id_{\Lambda^3_{27}}$. \subsection{Cartan and parabolic geometry} \subsubsection{Cartan geometry} \label{subsubsection:Cartan-geometry} In this subsubsection we follow \cite{CapSlovak, Sharpe}. Given a $P$-principal bundle $\pi \colon \mathcal{G} \to M$, we denote the (right) action $\mathcal{G} \times P \to \mathcal{G}$ by $R^p(u) = u \cdot p$ for $u \in \mathcal{G}$, $p \in P$. For each $V \in \mfp$, the corresponding \textit{fundamental vector f\/ield} $\eta_V \in \Gamma(T\mathcal{G})$ is $(\eta_V)_u := \partial_t \vert_0 [u \cdot \exp (tV)]$. {\samepage \begin{Definition} For a Lie group $G$ and a closed subgroup $P$ (with respective Lie algebras~$\mfg$ and~$\mfp$), a \textit{Cartan geometry} of type $(G, P)$ on a manifold $M$ is a pair $(\mathcal{G} \to M, \omega)$, where $\mathcal{G} \to M$ is a~$P$-principal bundle and $\omega$ is a \textit{Cartan connection}, that is, a~section of $T^*\mathcal{G} \otimes \mfg$ satisfying \begin{enumerate}\itemsep=0pt \item[1)] (right equivariance) $\omega_{u \cdot p}(T_u R^p \cdot \eta) = \Ad(p^{-1})(\omega_u(\eta))$ for all $u \in \mathcal{G}$, $p \in P$, $\eta \in T_u \mathcal{G}$, \item[2)] (reproduction of fundamental vector f\/ields) $\omega(\eta_V) = V$ for all $V \in \mfp$, and \item[3)] (absolute parallelism) $\omega_u \colon T_u \mathcal{G} \to \mfg$ is an isomorphism for all $u \in \mathcal{G}$. \end{enumerate} \end{Definition} } The \textit{(f\/lat) model} of Cartan geometry of type $(G, P)$ is the pair $(G \to G / P, \omega_{\textrm{MC}})$, where $\omega_{\textrm{MC}}$ is the Maurer--Cartan form on $G$ def\/ined by $(\omega_{\textrm{MC}})_u := T_u L_{u^{-1}}$ (here $L_{u^{-1}} \colon G \to G$ denotes left multiplication by $u^{-1}$). This form satisf\/ies the identity $d\omega_{\textrm{MC}} + \tfrac{1}{2} [\omega_{\textrm{MC}}, \omega_{\textrm{MC}}] = 0$. We def\/ine the \textit{curvature $($form$)$} of a Cartan geometry $(\mathcal{G}, \omega)$ to be the section $\Omega := d\omega + \tfrac{1}{2} [\omega, \omega] \in \Gamma(\Lambda^2 T^* \mathcal{G} \otimes \mfg),$ and say that $(\mathcal{G}, \omega)$ is \textit{f\/lat} if\/f $\Omega = 0$. This is the case if\/f around any point $u \in \mathcal{G}$ there is a local bundle isomorphism between $G$ and $\mathcal{G}$ that pulls back $\omega$ to $\omega_{\textrm{MC}}$.
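For orientation, we recall the classical motivating example, treated in detail in~\cite{Sharpe} (it is not needed in the sequel, and our notation for it is ad hoc): a Riemannian metric $g$ on an $n$-manifold $M$ determines a Cartan geometry of type $(\mathrm{Euc}(n), \mathrm{O}(n))$, where $\mathrm{Euc}(n) := \mathbb{R}^n \rtimes \mathrm{O}(n)$, on the orthonormal frame bundle $\mathcal{F} \to M$, with Cartan connection
\begin{gather*}
\omega := \theta \oplus \gamma \in \Gamma\big(T^* \mathcal{F} \otimes (\mathbb{R}^n \oplus \mfso(n))\big) ,
\end{gather*}
where $\theta$ is the solder form and $\gamma$ is the connection form of the Levi-Civita connection of $g$. The $\mathbb{R}^n$-component of the curvature $\Omega$ is the torsion, which vanishes, and the $\mfso(n)$-component encodes the Riemann curvature tensor, so this Cartan geometry is f\/lat if\/f $(M, g)$ is locally isometric to Euclidean space.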
One can show that the curvature $\Omega$ of any Cartan geometry $(\mathcal{G} \to M, \omega)$ is horizontal (it is annihilated by vertical vector f\/ields), so invoking the absolute parallelism and passing to the quotient def\/ines an equivalent object $\kappa \colon \mathcal{G} \to \Lambda^2 (\mfg / \mfp)^* \otimes \mfg$, which we also call the \textit{curvature}. \subsubsection{Holonomy}\label{subsubsection:holonomy} Given any Cartan geometry $(\mathcal{G} \to M, \omega)$ of type $(G, P)$, we can extend the Cartan connection $\omega$ to a unique principal connection $\smash{\hat\omega}$ on $\smash{\hat\mathcal{G} := \mathcal{G} \times_P G}$ characterized by (1) $G$-equivariance and (2) $\smash{\iota^* \hat\omega = \omega}$, where $\smash{\iota\colon \mathcal{G} \hookrightarrow \hat\mathcal{G}}$ is the natural inclusion $u \mapsto [u, e]$. Then, to any point $\smash{\hat u \in \hat\mathcal{G}}$ we can associate the holonomy group $\smash{\Hol_{\hat u}(\hat\omega) \leq G}$. Dif\/ferent choices of $\smash{\hat u}$ lead to conjugate subgroups of~$G$, so the conjugacy class $\smash{\Hol(\hat\omega)}$ thereof is independent of $\smash{\hat u}$, and we def\/ine the \textit{holonomy} of~$\omega$ (or just as well, of $(\mathcal{G} \to M, \omega)$) to be this class. \subsubsection{Tractor geometry}\label{subsubsection:tractor-geometry} Fix a pair $(G, P)$ as in Section~\ref{subsubsection:Cartan-geometry}, denote the Lie algebra of $G$ by $\mfg$, and f\/ix a $G$-representa\-tion~$\mathbb{U}$. Then, for any Cartan geometry $(\pi \colon \mathcal{G} \to M, \omega)$ of type $(G, P)$, we can form the associated \textit{tractor bundle} $\mathcal{U} := \mathcal{G} \times_P \mathbb{U} \to M$, which we can also view as the associated bundle $\smash{\hat\mathcal{G} \times_G \mathbb{U} \to M}$. Then, the principal connection $\smash{\hat\omega}$ on $\smash{\hat\mathcal{G}}$ determined by $\omega$ induces a vector bundle connection~$\nabla^{\mathcal{U}}$ on~$\mathcal{U}$. Of distinguished importance is the \textit{adjoint tractor bundle} $\mathcal{A} := \mathcal{G} \times_P \mfg$. The canonical map $\Pi_0^{\mathcal{A}} \colon \mathcal{A} \to TM$ def\/ined by $(u, V) \mapsto T_u \pi \cdot \omega_u^{-1}(V)$ descends to a natural isomorphism $\smash{\mathcal{G} \times_P (\mfg / \mfp) \stackrel{\cong}{\to} TM}$, and via this identif\/ication $\Pi_0^{\mathcal{A}}$ is the bundle map associated to the canonical projection $\mfg \to \mfg / \mfp$. Since the curvature $\Omega$ of $(\mathcal{G}, \omega)$ is also $P$-equivariant, we may regard it as a section $K \in \Gamma(\Lambda^2 T^*M \otimes \mathcal{A})$, and again we call it the \textit{curvature} of $(\mathcal{G} \to M, \omega)$. \subsubsection{Parabolic geometry}\label{subsubsection:parabolic-geometry} In this article we will mostly (but not exclusively) work with geometries that can be realized as a special class of Cartan geometries that enjoy additional properties, most importantly suitable normalization conditions on $\omega$ that guarantee (subject to a usually satisf\/ied cohomological condition) a correspondence between Cartan geometries satisfying those conditions and geometric structures on the underlying manifold. We say that a Cartan geometry $(\mathcal{G} \to M, \omega)$ of type $(G, P)$ is a \textit{parabolic geometry} if\/f $G$ is semisimple and $P$ is a parabolic subgroup. For a detailed survey of parabolic geometry, including details of the below, see the standard reference \cite{CapSlovak}. 
Recall that a parabolic subgroup $P < G$ determines a so-called $|k|$-grading on the Lie algebra~$\mfg$ of $G$: This is a vector space decomposition $\mfg = \mfg_{-k} \oplus \cdots \oplus \mfg_{+k}$ compatible with the Lie bracket in the sense that $[\mfg_a, \mfg_b] \subseteq \mfg_{a + b}$ and minimal in the sense that none of the summands~$\mfg_a$, $a = -k, \ldots, k$, is zero. The grading induces a $P$-invariant f\/iltration~$(\mfg^a)$ of~$\mfg$, where $\mfg^a := \mfg_a \oplus \cdots \oplus \mfg_{+k}$. In particular, $\mfp = \mfg^0 = \mfg_0 \oplus \cdots \oplus \mfg_{+k}$. We denote by $G_0 < P$ the subgroup of elements $p \in P$ for which $\Ad(p)$ preserves the grading $(\mfg_a)$ of $\mfg$, and by $P_+ < P$ the subgroup of elements $p \in P$ for which $\Ad(p) \in \End(\mfg)$ has homogeneity of degree $> 0$ with respect to the f\/iltration $(\mfg^a)$; in particular, the Lie algebra of $P_+$ is $\mfp_+ = \mfg^{+1} = \mfg_{+1} \oplus \cdots \oplus \mfg_{+k}$. Since $\mfg$ is semisimple, its Killing form is nondegenerate, and it induces a $P$-equivariant identif\/ication $(\mfg / \mfp)^* \leftrightarrow \mfp_+$. Via this identif\/ication, for any $G$-representation $\mathbb{U}$ we may compute the Lie algebra homology $H_{\bullet}(\mfp_+, \mathbb{U})$ as the homology of the chain complex \begin{gather*} \cdots \to \Lambda^{i + 1} (\mfg / \mfp)^* \otimes \mathbb{U} \stackrel{\partial^{\ast}}{\to} \Lambda^ i (\mfg / \mfp)^* \otimes \mathbb{U} \to \cdots . \end{gather*} The \textit{Kostant codif\/ferential} $\partial^*$ is $P$-equivariant, so it induces bundle maps $\partial^* \colon \Lambda^{i + 1} T^* M \otimes \mathcal{U} \to \Lambda^i T^* M \otimes \mathcal{U}$ between the associated bundles. The normalization conditions for a parabolic geometry $(\mathcal{G} \to M, \omega)$ are that \begin{enumerate}\itemsep=0pt \item[1)] (normality) the curvature $\kappa$ satisf\/ies $\partial^* \kappa = 0$, and \item[2)] (regularity) the curvature $\kappa$ satisf\/ies $\kappa(u)(\mfg^i, \mfg^j) \subseteq \mfg^{i + j + 1}$ for all $u \in \mathcal{G}$ and all $i$, $j$. \end{enumerate} Finally, tractor bundles associated to parabolic geometries inherit additional natural structure: Given a $G$-representation $\mathbb{U}$, $P$ determines a natural f\/iltration $(\mathbb{U}^a)$ of $\mathbb{U}$ by successive action of the nilpotent Lie subalgebra $\mfp_+ < \mfg$, namely \begin{gather}\label{equation:general-representation-filtration} \mathbb{U} \supseteq \mfp_+ \cdot \mathbb{U} \supseteq \mfp_+ \cdot (\mfp_+ \cdot \mathbb{U}) \supseteq \cdots \supseteq \{ 0 \} . \end{gather} Since the f\/iltration $(\mathbb{U}^a)$ of $\mathbb{U}$ is $P$-invariant, it determines a bundle f\/iltration $(\mathcal{U}^a)$ of the tractor bundle $\mathcal{U} = \mathcal{G} \times_P \mathbb{U}$. For the adjoint representation $\mfg$ itself, this f\/iltration (appropriately indexed) is just $(\mfg^a)$, and the images of the f\/iltrands $\mathcal{A} = \mathcal{G} \times_P \mfg^{-k} \supsetneq \cdots \supsetneq \mathcal{G} \times_P \mfg^{-1}$ under the projection $\Pi_0^{\mathcal{A}}$ comprise a canonical f\/iltration $TM = T^{-k} M \supsetneq \cdots \supsetneq T^{-1} M$ of the tangent bundle. \subsubsection{Oriented conformal structures}\label{subsubsection:oriented-conformal-structures} The group $\SO(p + 1, q + 1)$, $p + q \geq 3$, acts transitively on the space of isotropic rays in the standard representation $\mathbb{V}$, and the stabilizer subgroup $\bar P$ of such a ray is parabolic.
There is an equivalence of categories between regular, normal parabolic geometries of type $(\SO(p + 1, q + 1), \bar P)$ and oriented conformal structures of signature $(p, q)$ \cite[Section~4.1.2]{CapSlovak}. \begin{Definition} A \textit{conformal structure} $(M, \mathbf{c})$ is an equivalence class $\mathbf{c}$ of metrics on $M$, where we declare two metrics to be equivalent if one is a positive, smooth multiple of the other. The signature of $\mathbf{c}$ is the signature of any (equivalently, every) $g \in \mathbf{c}$, and we say that $(M, \mathbf{c})$ is \textit{oriented} if\/f $M$ is oriented. The \textit{conformal holonomy} of an oriented conformal structure $\mathbf{c}$ is $\Hol(\mathbf{c}) := \Hol(\omega)$, where $\omega$ is the normal Cartan connection corresponding to $\mathbf{c}$. \end{Definition} We can choose a basis of $\mathbb{V}$ for which the nondegenerate, symmetric bilinear form $H$ preserved by $\SO(p + 1, q + 1)$ has block matrix representation \begin{gather}\label{equation:bilinear-form-parabolic-adapted} \begin{pmatrix} 0 & 0 & 1 \\ 0 & \Sigma & 0 \\ 1 & 0 & 0 \end{pmatrix}. \end{gather} (With respect to the basis $(E_a)$, the matrix representation $[H_{\Phi}]$ \eqref{equation:bilinear-form-basis} of the bilinear form $H_{\Phi}$ determined by the explicit expression \eqref{equation:3-form-basis} for $\Phi$ has the form \eqref{equation:bilinear-form-parabolic-adapted}.) The Lie algebra $\mfso(p + 1,$ $q + 1)$ consists precisely of the elements \begin{gather*} \begin{pmatrix} b & Z & 0 \\ X & B & -\Sigma^{-1} Z^{\top} \\ 0 & -X^{\top} \Sigma & -b \end{pmatrix} , \end{gather*} where $B \in \mfso(\Sigma)$, $X \in \mathbb{R}^{p + q}$, $Z \in (\mathbb{R}^{p + q})^*$. The f\/irst element of the basis is isotropic, and if we choose the preferred isotropic ray in $\mathbb{V}$ to be the one determined by that element, the corresponding Lie algebra grading on $\mfso(p + 1, q + 1)$ is the one def\/ined by the labeling \begin{gather}\label{equation:conformal-grading} \begin{pmatrix} \mfg_0 & \mfg_{+1} & 0 \\ \mfg_{-1} & \mfg_0 & \mfg_{+1} \\ 0 & \mfg_{-1} & \mfg_0 \end{pmatrix} . \end{gather} Since the grading on $\mfso(p + 1, q + 1)$ induced by $\bar P$ has the form $\mfg_{-1} \oplus \mfg_0 \oplus \mfg_{+1}$, any parabolic geometry of this type is regular. The normality condition coincides with Cartan's normalization condition for what is now called a Cartan geometry of this type \cite[Section~4.1.2]{CapSlovak}. \subsubsection[Oriented (2,3,5) distributions]{Oriented $\boldsymbol{(2, 3, 5)}$ distributions}\label{subsubsection:235-distributions} The group $\G_2$ acts transitively on the space of $H_{\Phi}$-isotropic rays in $\mathbb{V}$, and the stabilizer subgroup~$Q$ of such a ray is parabolic \cite{Sagerschnig}. The subgroup $Q$ is the intersection of $\G_2$ with the stabilizer subgroup $\bar P < \SO(3, 4)$ of the preferred isotropic ray in Section~\ref{subsubsection:oriented-conformal-structures}.
In particular, the f\/irst basis element is isotropic, and if we again choose the preferred isotropic ray to be the one determined by that element, the corresponding Lie algebra grading on~$\mfg_2$ is the one def\/ined by the block decomposition \eqref{equation:g2-matrix-representation} and the labeling \begin{gather*} \begin{pmatrix} \mfg_0 & \mfg_{+1} & \mfg_{+2} & \mfg_{+3} & 0 \\ \mfg_{-1} & \mfg_0 & \mfg_{+1} & \mfg_{+2} & \mfg_{+3} \\ \mfg_{-2} & \mfg_{-1} & 0 & \mfg_{+1} & \mfg_{+2} \\ \mfg_{-3} & \mfg_{-2} & \mfg_{-1} & \mfg_0 & \mfg_{+1} \\ 0 & \mfg_{-3} & \mfg_{-2} & \mfg_{-1} & \mfg_0 \end{pmatrix} . \end{gather*} There is an equivalence of categories between regular, normal parabolic geometries of type $(\G_2, Q)$ and so-called oriented $(2, 3, 5)$ distributions \cite[Section~4.3.2]{CapSlovak}. On a manifold $M$, def\/ine the bracket of distributions $\mathbf{E}, \mathbf{F} \subseteq TM$ to be the set $[\mathbf{E}, \mathbf{F}] := \{[\alpha, \beta]_x \colon x \in M; \alpha \in \Gamma(\mathbf{E}), \beta \in \Gamma(\mathbf{F})\} \subseteq TM$. \begin{Definition} A \textit{$(2, 3, 5)$ distribution} is a $2$-plane distribution $\mathbf{D}$ on a $5$-manifold $M$ that is maximally nonintegrable in the sense that (1) $[\mathbf{D}, \mathbf{D}]$ is a $3$-plane distribution, and (2) $[\mathbf{D}, [\mathbf{D}, \mathbf{D}]] = TM$. A $(2, 3, 5)$ distribution is \textit{oriented} if\/f the bundle $\mathbf{D} \to M$ is oriented. \end{Definition} An orientation of $\mathbf{D}$ determines an orientation of $M$ and vice versa. The appropriate restrictions of the Lie bracket of vector f\/ields descend to natural vector bundle isomorphisms $\smash{\mathcal{L}\colon \Lambda^2 \mathbf{D} \stackrel{\cong}{\to} [\mathbf{D}, \mathbf{D}] / \mathbf{D}}$ and $\smash{\mathcal{L} \colon \mathbf{D} \otimes ([\mathbf{D}, \mathbf{D}] / \mathbf{D}) \stackrel{\cong}{\to} TM / [\mathbf{D}, \mathbf{D}]}$; these are components of the \textit{Levi bracket}. For a regular, normal parabolic geometry $(\mathcal{G}, \omega)$ of type $(\G_2, Q)$, the underlying $(2, 3, 5)$ distribution $\mathbf{D}$ is $T^{-1} M = \mathcal{G} \times_Q (\mfg^{-1} / \mfq)$, and $[\mathbf{D}, \mathbf{D}]$ is $T^{-2} M = \mathcal{G} \times_Q (\mfg^{-2} / \mfq)$. \subsection{Conformal geometry}\label{subsection:conformal-tractor-geometry} In this subsection we partly follow \cite{BEG}. \subsubsection{Conformal density bundles} A conformal structure $(M, \mathbf{c})$ of signature $(p, q)$ (denote $n := p + q$) determines a family of natural \textit{$($conformal$)$ density bundles} on $M$: Denote by $\mathcal{E}[1]$ the positive $(2n)$th root of the canonically oriented line bundle $(\Lambda^n TM)^2$, and its respective $w$th integer powers by $\mathcal{E}[w]$; $\mathcal{E} := \mathcal{E}[0]$ is the trivial bundle with f\/iber $\mathbb{R}$, and there are natural identif\/ications $\mathcal{E}[w] \otimes \mathcal{E}[w'] \cong \mathcal{E}[w + w']$. Given any vector bundle $B \to M$, we denote $B[w] := B \otimes \mathcal{E}[w]$, and refer to the sections of $B[w]$ as \textit{sections of $B$ of conformal weight $w$}. We may view $\mathbf{c}$ itself as the canonical \textit{conformal metric}, $\mathbf{g}_{ab} \in \Gamma(S^2 T^*M[2])$. Contraction with $\mathbf{g}_{ab}$ determines an isomorphism $TM \to T^*M[2]$, which we may use to raise and lower indices of objects on the tangent bundle at the cost of an adjustment of conformal weight.
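For instance (a trivial but frequently used case): lowering an index sends a vector f\/ield $\xi^a \in \Gamma(TM)$ to
\begin{gather*}
\xi_a := \mathbf{g}_{ab} \xi^b \in \Gamma(T^*M[2]) ,
\end{gather*}
a $1$-form of conformal weight $2$.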
By construction, the Levi-Civita connection $\nabla^g$ of any metric $g \in \mathbf{c}$ preserves $\mathbf{g}_{ab}$ and its inverse, $\mathbf{g}^{ab} \in \Gamma(S^2 TM [-2])$. We call a nowhere zero section $\tau \in \Gamma(\mathcal{E}[1])$ a \textit{scale} of $\mathbf{c}$. A scale determines \textit{trivializations} $\smash{B[w] \stackrel{\cong}{\to} B}$, $b \mapsto \ul b := \tau^{-w} b$, of all conformally weighted bundles, and in particular a representative metric $\tau^{-2} \mathbf{g} \in \mathbf{c}$. \subsubsection{Conformal tractor calculus} \label{conformal-tractor-calculus} For an oriented conformal structure $(M, \mathbf{c})$ of signature $(p, q)$, $n := p + q \geq 3$, the tractor bundle~$\mathcal{V}$ associated to the standard representation $\mathbb{V}$ of $\SO(p + 1, q + 1)$ is the \textit{standard tractor bundle}. It inherits from the normal parabolic geometry corresponding to~$\mathbf{c}$ a vector bundle connection~$\nabla^{\mathcal{V}}$. The $\SO(p + 1, q + 1)$-action preserves a canonical nondegenerate, symmetric bilinear form $H \in S^2 \mathbb{V}^*$ and a volume form $\epsilon \in \Lambda^{n + 2} \mathbb{V}^*$; these respectively induce on~$\mathcal{V}$ a~parallel \textit{tractor metric} $H \in \Gamma(S^2 \mathcal{V}^*)$ and parallel volume form $\epsilon \in \Gamma(\Lambda^{n + 2} \mathcal{V}^*)$. Consulting the block structure \eqref{equation:conformal-grading} of $\bar \mfp_+ < \mfso(p + 1, q + 1)$ gives that the f\/iltration \eqref{equation:general-representation-filtration} of the standard representation $\mathbb{V}$ of $\SO(p + 1, q + 1)$ determined by $\bar P$ is \begin{gather}\label{equation:conformal-standard-tractor-filtration} \left\{\tractorT{\ast}{\ast}{\ast}\right\} \supset \left\{\tractorT{ 0}{\ast}{\ast}\right\} \supset \left\{\tractorT{ 0}{ 0}{\ast}\right\} \supset \left\{\tractorT{ 0}{ 0}{ 0}\right\} . \end{gather} We may identify the composition series of the corresponding f\/iltration of $\mathcal{V}$ as \begin{gather*} \mathcal{V} \cong \mathcal{E}[1] \flplus TM[-1] \flplus \mathcal{E}[-1] . \end{gather*} We denote elements and sections of $\mathcal{V}$ using uppercase Latin indices, $A, B, C, \ldots$, as $\mathbb{S}^A \in \Gamma(\mathcal{V})$, and those of the dual bundle $\mathcal{V}^*$ with lower indices, as $\mathbb{S}_A \in \Gamma(\mathcal{V}^*)$; we freely raise and lower indices using $H$. The bundle inclusion $\mathcal{E}[-1] \hookrightarrow \mathcal{V}$ determines a canonical section $X^A \in \Gamma(\mathcal{V}[1])$. Any scale $\tau$ determines an identif\/ication of $\mathcal{V}$ with the associated graded bundle determined by the above f\/iltration, that is, an isomorphism $\mathcal{V} \cong \mathcal{E}[1] \oplus TM[-1] \oplus \mathcal{E}[-1]$ \cite{BEG}. So, $\tau$ also determines (non-invariant, that is, scale-dependent) inclusions $TM[-1] \hookrightarrow \mathcal{V}$ and $\mathcal{E}[1] \hookrightarrow \mathcal{V}$, which we can respectively regard as sections $Z^A{}_a \in \Gamma(\mathcal{V} \otimes T^*M[1])$ and $Y^A \in \Gamma(\mathcal{V}[-1])$. So, for any choice of $\tau$ we can decompose a section $\mathbb{S} \in \Gamma(\mathcal{V})$ uniquely as $\smash{\mathbb{S}^A \stackrel{\tau}{=} \sigma Y^A + \mu^a Z^A{}_a + \rho X^A}$, where the notation $\stackrel{\tau}{=}$ indicates that $Y^A$ and $Z^A{}_a$ are the inclusions determined by $\tau$.
Reusing the notation of the f\/iltration of $\mathbb{V}$, we write \begin{gather}\label{equation:standard-tractor-structure-splitting} \mathbb{S}^A \stackrel{\tau}{=} \tractorT{\sigma}{\mu^a}{\rho} . \end{gather} With respect to any scale $\tau$, the tractor metric has the form (cf.~\eqref{equation:bilinear-form-parabolic-adapted}) \begin{gather*} H_{AB} \stackrel{\tau}{=} \begin{pmatrix} 0 & 0 & 1 \\ 0 & \mathbf{g}_{ab} & 0 \\ 1 & 0 & 0 \end{pmatrix} . \end{gather*} In particular, the f\/iltration \eqref{equation:conformal-standard-tractor-filtration} of $\mathcal{V}$ is $\mathcal{V} \supset \langle X \rangle^{\perp} \supset \langle X \rangle \supset \{ 0 \}$. The normal tractor connection $\nabla^{\mathcal{V}}$ on $\mathcal{V}$ is \cite{BEG} \begin{gather*} \nabla^{\mathcal{V}}_b \tractorT{\sigma}{\mu^a}{\rho} \stackrel{\tau}{=} \tractorT {\sigma_{,b} - \mu_b} {\mu^a{}_{,b} + \mathsf{P}^a{}_b \sigma + \delta^a{}_b \rho} {\rho_{,b} - \mathsf{P}_{bc} \mu^c} \in \Gamma\left(\tractorT{\mathcal{E}[1]}{TM[-1]}{\mathcal{E}[-1]} \otimes T^*M\right) . \end{gather*} The subscript $_{,b}$ denotes the covariant derivative with respect to $g := \tau^{-2} \mathbf{g}$, and $\mathsf{P}_{ab}$ is the \textit{Schouten tensor} of $g$, which is a particular trace adjustment of the Ricci tensor $R_{ab}$: \begin{gather}\label{equation:definition-Schouten} \mathsf{P}_{ab} := \frac{1}{n - 2} \left( R_{ab} - \frac{1}{2 (n - 1)} R^c{}_c g_{ab} \right) . \end{gather} A section $\mathbb{A}_{A_1 \cdots A_k}$ of the tractor bundle $\Lambda^k \mathcal{V}^*$ associated to the alternating representa\-tion~$\Lambda^k \mathbb{V}^*$ decomposes uniquely as \begin{gather*} \mathbb{A}_{A_1 \cdots A_k}\stackrel{\tau}{=} k \phi_{a_2 \cdots a_k} Y_{\smash{[}A_1} Z_{A_2}{}^{a_2} \cdots Z_{A_k\smash{]}}{}^{a_k} + \chi_{a_1 \cdots a_k} Z_{\smash{[}A_1}{}^{a_1} \cdots Z_{A_k\smash{]}}{}^{a_k} \\ \hphantom{\mathbb{A}_{A_1 \cdots A_k}\stackrel{\tau}{=}}{} + k (k - 1) \theta_{a_3 \cdots a_k} Y_{\smash{[}A_1} X_{A_2} Z_{A_3}{}^{a_3} \cdots Z_{A_k\smash{]}}{}^{a_k} + k \psi_{a_2 \cdots a_k} X_{\smash{[}A_1} Z_{A_2}{}^{a_2} \cdots Z_{A_k\smash{]}}{}^{a_k}, \end{gather*} which we write more compactly as \begin{gather*} \mathbb{A}_{A_1 \cdots A_k}\stackrel{\tau}{=} \tractorQ{\phi_{a_2 \cdots a_k}}{\chi_{a_1 \cdots a_k}}{\theta_{a_3 \cdots a_k}}{\psi_{a_2 \cdots a_k}} \in \Gamma\tractorQ{\Lambda^{k - 1} T^*M [k]}{\Lambda^k T^*M [k]}{\Lambda^{k - 2} T^*M [k - 2]}{\Lambda^{k - 1} T^*M [k - 2]} . \end{gather*} The tractor connection $\nabla^{\mathcal{V}}$ induces a connection on $\Lambda^k \mathcal{V}^*$, and we denote this connection again by $\nabla^{\mathcal{V}}$. In the special case $k = 2$, raising an index using $H$ gives $\Lambda^2 \mathbb{V}^* \cong \mfso(p + 1, q + 1)$, so we can identify $\Lambda^2 \mathcal{V}^* \cong \mathcal{A}$. Any section $\mathbb{A}^A{}_B \in \Gamma(\mathcal{A})$ decomposes uniquely as \begin{gather*} \mathbb{A}^A{}_B \stackrel{\tau}{=} \xi^a \big(Y^A Z_{Ba} - Z^A{}_a Y_B\big) + \zeta^a{}_b Z^A{}_a Z_B{}^b \\ \hphantom{\mathbb{A}^A{}_B \stackrel{\tau}{=}}{} + \alpha \big(Y^A X_B - X^A Y_B\big) + \nu_b \big(X^A Z_B{}^b - Z^{Ab} X_B\big) , \end{gather*} which we write as \begin{gather*} \mathbb{A}^A{}_B \stackrel{\tau}{=} \tractorQ{\xi^b}{\zeta^a{}_b}{\alpha}{\nu_b} \in \Gamma\tractorQ{TM}{\End_{\skewOp}(TM)}{\mathcal{E}}{T^* M} . \end{gather*} Finally, $\bar\mfp_+$ annihilates $\Lambda^{n + 2} \mathbb{V}^* \cong \mathbb{R}$, yielding a natural bundle isomorphism $\Lambda^{n + 2} \mathcal{V}^* \cong \Lambda^n T^*M [n]$.
This identif\/ies the tractor volume form $\epsilon$ with the conformal volume form~$\epsilon_{\mathbf{g}}$ of~$\mathbf{g}$. \subsubsection{Canonical quotients of conformal tractor bundles} For any irreducible $\SO(p + 1, q + 1)$-representation $\mathbb{U}$, the canonical Lie algebra cohomology quotient map $\mathbb{U} \mapsto H_0 := H_0(\bar{\mfp}_+, \mathbb{U}) = \mathbb{U} / (\bar{\mfp}_+ \cdot \mathbb{U})$ is $\bar P$-invariant and so induces a canonical bundle quotient map $\Pi_0^{\mathcal{U}}\colon \mathcal{U} \to \mathcal{H}_0$ between the corresponding associated $\bar{P}$-bundles. (We reuse the notation $\Pi_0^{\mathcal{U}}$ for the induced map $\Gamma(\mathcal{U}) \to \Gamma(\mathcal{H}_0)$ on sections.) Given a section $\mathbb{A} \in \Gamma(\mathcal{U})$, its image $\Pi_0^{\mathcal{U}}(\mathbb{A}) \in \Gamma(\mathcal{H}_0)$ is its \textit{projecting part}. For the standard representation $\mathbb{V}$ this quotient map is $\Pi_0^{\mathcal{V}} \colon \mathcal{V} \to \mathcal{E}[1]$, \begin{gather*} \Pi_0^{\mathcal{V}} \colon \ \tractorT{\sigma}{\ast}{\ast} \mapsto \sigma . \end{gather*} For the alternating representation $\Lambda^k \mathbb{V}^*$, the quotient map is $\smash{\Pi_0^{\Lambda^k \mathcal{V}^*}} \colon \Lambda^k \mathcal{V}^* \to \Lambda^{k - 1} T^*M [k]$, \settowidth{\splittingWidth}{$\phi_{b_2 \cdots b_k}$} \begin{gather} \label{equation:projection-operator-alternating} \def\arraystretch{1.1} \Pi_0^{\Lambda^k \mathcal{V}^*} \colon \ \left( \begin{array}{C{0.2\splittingWidth}cC{0.2\splittingWidth}} \multicolumn{3}{c}{\ast} \\ \ast & | & \ast \\ \multicolumn{3}{c}{\phi_{a_2 \cdots a_k}} \end{array} \right) \mapsto \phi_{a_2 \cdots a_k} . \end{gather} For the adjoint representation $\mfso(p + 1, q + 1)$, the quotient map coincides with the map $\Pi_0^{\mathcal{A}} \colon \mathcal{A} \to TM$ def\/ined in Section~\ref{subsubsection:tractor-geometry}; in a splitting, it is \begin{gather*} \Pi_0^{\mathcal{A}} \colon \ \tractorQ{\xi^a}{\ast}{\ast}{\ast} \mapsto \xi^a . \end{gather*} \subsubsection{Conformal BGG splitting operators} \label{subsubsection:BGG-splitting-operators} Conversely, for each irreducible $\SO(p + 1, q + 1)$-representation $\mathbb{U}$ there is a canonical dif\/ferential \textit{BGG splitting operator} $L_0^{\mathcal{U}}\colon \Gamma(\mathcal{H}_0) \to \Gamma(\mathcal{U})$ characterized by the properties (1) $\Pi_0^{\mathcal{U}} \circ L_0^{\mathcal{U}} = \id_{\mathcal{H}_0}$ and (2) $\partial^{\ast} \circ \nabla^{\mathcal{U}} \circ L_0^{\mathcal{U}} = 0$ \cite{CalderbankDiemer,CSS}. The only property of the operators $L_0^{\mathcal{U}}$ we need here follows immediately from this characterization: If $\mathbb{A} \in \Gamma(\mathcal{U})$ is $\nabla^{\mathcal{U}}$-parallel, then $L_0^{\mathcal{U}}(\Pi_0^{\mathcal{U}}(\mathbb{A})) = \mathbb{A}$. \subsection{Almost Einstein scales} \label{subsection:almost-Einstein} The BGG splitting operator $L_0^{\mathcal{V}} \colon \Gamma(\mathcal{E}[1]) \to \Gamma(\mathcal{V})$ corresponding to the standard representation is \cite[equation~(114)]{Hammerl} \begin{gather}\label{equation:splitting-operator-standard} L_0^{\mathcal{V}}\colon \ \sigma \mapsto \tractorT{\sigma}{\sigma^{,a}}{- \tfrac{1}{n} (\sigma_{,b}{}^b + \mathsf{P}^b{}_b \sigma)} .
\end{gather} Computing gives \begin{gather}\label{equation:nabla-L0-standard} \nabla^{\mathcal{V}}_b L_0^{\mathcal{V}}(\sigma)^A \stackrel{\tau}{=} \tractorT{0}{(\sigma_{,ab} + \mathsf{P}_{ab} \sigma)_{\circ}}{\ast} \in \Gamma\left(\tractorT{\mathcal{E}[1]}{T^*M[1]}{\mathcal{E}[-1]} \otimes T^* M\right) , \end{gather} where $(T_{ab})_{\circ}$ denotes the tracefree part $\smash{T_{ab} - \tfrac{1}{n} T^c{}_c \mathbf{g}_{ab}}$ of the (possibly weighted) covariant $2$-tensor $T_{ab}$, and where $\ast$ is some third-order dif\/ferential expression in $\sigma$. Since the bottom component of $\nabla^{\mathcal{V}} L_0^{\mathcal{V}}(\sigma)$ is zero, the middle component, regarded as a (second-order) linear dif\/ferential operator $\Theta_0^{\mathcal{V}} \colon \Gamma(\mathcal{E}[1]) \to \Gamma(S^2 T^*M [1])$, \begin{gather*} \Theta_0^{\mathcal{V}} \colon \ \sigma \mapsto (\sigma_{,ab} + \mathsf{P}_{ab} \sigma)_{\circ} , \end{gather*} is conformally invariant. The operator $\Theta_0^{\mathcal{V}}$ is the \textit{first BGG operator} \cite{CSS} associated to the standard representation $\mathbb{V}$ for (oriented) conformal geometry. We can readily interpret a solution $\sigma \in \ker \Theta_0^{\mathcal{V}}$ geometrically: If we restrict to the complement $M - \Sigma$ of the zero locus $\Sigma := \{x \in M \colon \sigma_x = 0\}$, we can work in the scale of the solution $\sigma$ itself: We have $\smash{\sigma \stackrel{\sigma}{=} 1}$ and hence $0 = \Theta_0^{\mathcal{V}}(\sigma) = \mathsf{P}_{\circ}$. This says simply that the Schouten tensor, $\mathsf{P}$, of $g := \sigma^{-2} \mathbf{g} \vert_{M - \Sigma}$ is a multiple of $g$, and hence so is its Ricci tensor, that is, that $g$ is Einstein. This motivates the following def\/inition \cite{GoverAlmostEinstein}: \begin{Definition} An \textit{almost Einstein scale}\footnote{Our terminology follows that of the literature on almost Einstein scales, but this consistency entails a mild perversity, namely that, since they may vanish, almost Einstein scales need not be scales.} of an (oriented) conformal structure of dimension $n \geq 3$ is a solution $\sigma \in \Gamma(\mathcal{E}[1])$ of the operator $\Theta_0^{\mathcal{V}}$. A conformal structure is \textit{almost Einstein} if it admits a nonzero almost Einstein scale. \end{Definition} We denote the set $\ker \Theta_0^{\mathcal{V}}$ of almost Einstein scales of a given conformal structure $\mathbf{c}$ by $\aEs(\mathbf{c})$. Since $\Theta_0^{\mathcal{V}}$ is linear, $\aEs(\mathbf{c})$ is a vector subspace of $\Gamma(\mathcal{E}[1])$. The vanishing of the component $\ast$ in \eqref{equation:nabla-L0-standard} turns out to be a dif\/ferential consequence of the vanishing of the middle component, $\Theta_0^{\mathcal{V}}(\sigma)$. So, $\nabla^{\mathcal{V}}$ is a prolongation connection for the opera\-tor~$\Theta_0^{\mathcal{V}}$: \begin{Theorem}[{\cite[Section~2]{BEG}}] \label{theorem:almost-Einstein-bijection} For any conformal structure $(M, \mathbf{c})$, $\dim M \geq 3$, the restrictions of $L_0^{\mathcal{V}} \colon \Gamma(\mathcal{E}[1]) \to \Gamma(\mathcal{V})$ and $\Pi_0^{\mathcal{V}} \colon \Gamma(\mathcal{V}) \to \Gamma(\mathcal{E}[1])$ comprise a natural bijective correspondence between almost Einstein scales and parallel standard tractors: \begin{gather*} \aEs(\mathbf{c})\mathrel{\mathop{\rightleftarrows}^{L_0^{\mathcal{V}}}_{\Pi_0^{\mathcal{V}}}} \big\{\text{$\nabla^{\mathcal{V}}$-parallel sections of $\mathcal{V}$}\big\} .
\end{gather*} \end{Theorem} In particular, if $\sigma$ is an almost Einstein scale and vanishes on some nonempty open set, then $\sigma = 0$. In fact, the zero locus $\Sigma$ of $\sigma$ turns out to be a smooth hypersurface~\cite{CGH}; see Example~\ref{example:curved-orbit-decomposition-almost-Einstein}. We def\/ine the \textit{Einstein constant} of an almost Einstein scale $\sigma$ to be \begin{gather}\label{equation:Einstein-constant} \lambda := -\tfrac{1}{2} H(L_0^{\mathcal{V}}(\sigma), L_0^{\mathcal{V}}(\sigma)) = \tfrac{1}{n} \sigma (\sigma_{,a}{}^a + \mathsf{P}^a{}_a \sigma) - \tfrac{1}{2} \sigma_{,a} \sigma^{,a} . \end{gather} This def\/inition is motivated by the following computation: On $M - \Sigma$ the Schouten tensor of the representative metric $g := \sigma^{-2} \mathbf{g} \vert_{M - \Sigma} \in \mathbf{c}\vert_{M - \Sigma}$ determined by the scale $\sigma\vert_{M - \Sigma}$ is $\mathsf{P} = \lambda g$. Thus, the Ricci tensor of $g$ is $R_{ab} = 2 (n - 1) \lambda g_{ab}$, so we say that $\sigma$ (or the metric $g$ it induces) is \textit{Ricci-negative}, \textit{-flat}, or \textit{-positive} respectively if\/f $\lambda < 0$, $\lambda = 0$, or $\lambda > 0$.\footnote{The def\/inition here of \textit{Einstein constant} is consistent with some of the literature on almost Einstein conformal structures, but elsewhere this term is sometimes used for the quantity $2 (n - 1) \lambda$.} \subsection[Conformal Killing f\/ields and (k-1)-forms]{Conformal Killing f\/ields and $\boldsymbol{(k - 1)}$-forms}\label{subsection:conformal-Killing} The BGG splitting operator $L_0^{\Lambda^k \mathcal{V}^*} \colon \Gamma(\Lambda^{k - 1} T^*M [k]) \to \Gamma(\Lambda^k \mathcal{V}^*)$ determined by the alternating representation $\Lambda^k \mathbb{V}^*$, $1 < k < n + 1$, is \cite[equation~(134)]{Hammerl} \settowidth{\splittingWidth}{$\left(\!\!\!\!{\def\arraystretch{1.1}\begin{array}{@{}c@{}} \!\!\!\tfrac{1}{n} \big[ {-}\tfrac{1}{k} \omega_{a_2 \ldots a_k, b}{}^b + \tfrac{k - 1}{k} \omega_{b[a_3 \cdots a_k, a_2]}{}^b + \tfrac{k - 1}{n - k + 2} \omega_{b[a_3 \cdots a_k,}{}^b{}_{a_2]} \\ + 2 (k - 1) \mathsf{P}^b{}_{[a_2} \omega_{|b| a_3 \cdots a_k]} - \mathsf{P}^b{}_b \omega_{a_2 \cdots a_k} \big] \end{array}}\!\!\!\!\right)$} \begin{gather}\label{equation:splitting-operator-alternating} \def\arraystretch{1.1} L_0^{\Lambda^k \mathcal{V}^*} \colon \ \phi_{a_2 \cdots a_k} \mapsto \left(\!\! \begin{array}{C{0.46\splittingWidth}cC{0.46\splittingWidth}} \multicolumn{3}{c}{ \left({\def\arraystretch{1.1}\begin{array}{@{}c@{}} \tfrac{1}{n} \big[ {}-\tfrac{1}{k} \phi_{a_2 \ldots a_k, b}{}^b + \tfrac{k - 1}{k} \phi_{b[a_3 \cdots a_k, a_2]}{}^b + \tfrac{k - 1}{n - k + 2} \phi_{b[a_3 \cdots a_k,}{}^b{}_{a_2]} \\ + 2 (k - 1) \mathsf{P}^b{}_{[a_2} \phi_{|b| a_3 \cdots a_k]} - \mathsf{P}^b{}_b \phi_{a_2 \cdots a_k} \big] \end{array}}\right) } \\ {\phi_{[a_2 \cdots a_k, a_1]}} & | & {- \tfrac{1}{n - k + 2} \phi_{b [a_3 \cdots a_k a_2],}{}^b} \\ \multicolumn{3}{c}{\phi_{a_2 \cdots a_k}} \end{array}\!\! \right) \! .\!\!\!\!
\end{gather} Proceeding as in Section~\ref{subsection:almost-Einstein}, we f\/ind that \settowidth{\splittingWidth}{$\phi_{a_2 \cdots a_k, b} - \phi_{[a_2 \cdots a_k, b]} - \tfrac{k - 1}{n - k + 2} \mathbf{g}_{b [a_2} \phi_{|c| a_3 \cdots a_k]},{}^c$} \begin{gather*} \nabla^{\mathcal{V}}_b L_0^{\Lambda^k \mathcal{V}^*}(\phi)_{A_1 \cdots A_k} = \left( \begin{array}{C{0.44\splittingWidth}cC{0.44\splittingWidth}} \multicolumn{3}{c}{\ast} \\ \ast & | & \ast \\ \multicolumn{3}{c}{\phi_{a_2 \cdots a_k, b} - \phi_{[a_2 \cdots a_k, b]} - \tfrac{k - 1}{n - k + 2} \mathbf{g}_{b [a_2} \phi_{|c| a_3 \cdots a_k],}{}^c} \end{array} \right) , \end{gather*} where each $\ast$ denotes some dif\/ferential expression in $\phi$. The bottom component def\/ines an inva\-riant conformal dif\/ferential operator $\smash{\Theta_0^{\Lambda^k \mathcal{V}^*} \colon \Gamma(\Lambda^{k - 1} T^*M [k]) \to \Gamma(\Lambda^{k - 1} T^*M \odot T^*M [k])}$ (here $\odot$ denotes the Cartan product) and elements of its kernel are called \textit{conformal Killing $(k - 1)$-forms}~\cite{Semmelmann}. Unlike in the case of almost Einstein scales, vanishing of $\smash{\Theta_0^{\Lambda^k \mathcal{V}^*}(\phi)}$ does not in general imply the vanishing of the remaining components $\ast$; if they do vanish, that is, if $\smash{\nabla^{\mathcal{V}} L_0^{\Lambda^k \mathcal{V}^*}(\phi) = 0}$, $\phi$ is called a~\textit{normal} conformal Killing $(k - 1)$-form \cite[Section~6.2]{Hammerl}, \cite{LeitnerNormalConformalKillingForms}. The BGG splitting operator $L_0^{\mathcal{A}} \colon \Gamma(TM) \to \Gamma(\mathcal{A})$ for the adjoint representation $\mfso(p + 1, q + 1)$ is \cite[equation~(119)]{Hammerl} \settowidth{\splittingWidth}{$\tfrac{1}{n} \left( -\tfrac{1}{2} \xi_{b, c}{}^c + \tfrac{1}{2} \xi^c{}_{,bc} + \tfrac{1}{n} \xi^c{}_{,c b} + 2 \mathsf{P}_{bc} \xi^c - \mathsf{P}^c{}_c \xi_b\right)$} \begin{gather*} \def\arraystretch{1.1} L_0^{\mathcal{A}} \colon \ \xi^b \mapsto \left(\!\! \begin{array}{C{0.46\splittingWidth}cC{0.46\splittingWidth}} \multicolumn{3}{c}{ \tfrac{1}{n} \left( -\tfrac{1}{2} \xi_{b, c}{}^c + \tfrac{1}{2} \xi^c{}_{,bc} + \tfrac{1}{n} \xi^c{}_{,c b} + 2 \mathsf{P}_{bc} \xi^c - \mathsf{P}^c{}_c \xi_b\right) } \\ \tfrac{1}{2}(-\xi^a{}_{,b} + \xi_{b,}{}^a) & | & -\tfrac{1}{n} \xi^c{}_{,c} \\ \multicolumn{3}{c}{\xi^b} \end{array}\!\! \right) . \end{gather*} So viewed, $\Theta_0^{\mathcal{A}}$ is the map $\Gamma(TM) \to \Gamma(S^2_{\circ} T^*M [2])$, $\smash{\xi^a \mapsto (\xi_{(a, b)})_{\circ}} = (\mathcal{L}_{\xi} \mathbf{g})_{ab}$. Thus, the elements of $\ker \Theta_0^{\mathcal{A}}$ are precisely the vector f\/ields whose f\/low preserves $\mathbf{c}$, and so these are called \textit{conformal Killing fields}. If $\nabla^{\mathcal{V}} L_0^{\mathcal{A}}(\xi) = 0$, we say $\xi$ is a \textit{normal} conformal Killing f\/ield. \subsection[(2,3,5) conformal structures]{$\boldsymbol{(2, 3, 5)}$ conformal structures}\label{subsection:235-conformal-structure} About a decade ago, Nurowski observed the following: \begin{Theorem} A $(2, 3, 5)$ distribution $(M, \mathbf{D})$ canonically determines a conformal structure $\mathbf{c}_{\mathbf{D}}$ of signature $(2, 3)$ on~$M$.
\end{Theorem} This construction has since been recognized as a special case of a \textit{Fefferman construction}, so named because it likewise generalizes a classical construction of Fef\/ferman that canonically assigns to any nondegenerate hypersurface-type CR structure on a manifold~$N$ a conformal structure on a natural circle bundle over~$N$~\cite{Fefferman}. In fact, this latter construction arises in our setting, too; see Section~\ref{subsubsection:curved-orbit-negative-hypersurface}. We use the following terminology: \begin{Definition} A conformal structure $\mathbf{c}$ is a \textit{$(2, 3, 5)$ conformal structure} if\/f $\mathbf{c} = \mathbf{c}_{\mathbf{D}}$ for some $(2, 3, 5)$ distribution $\mathbf{D}$. \end{Definition} An oriented $(2, 3, 5)$ distribution $\mathbf{D}$ determines an orientation of $TM$, and hence $\mathbf{c}_{\mathbf{D}}$ is oriented (henceforth, that symbol refers to an oriented conformal structure). Because we will need some of the ingredients anyway, we brief\/ly sketch a construction of $\mathbf{c}_{\mathbf{D}}$ using the framework of parabolic geometry: Fix an oriented $(2, 3, 5)$ distribution $(M, \mathbf{D})$, and per Section~\ref{subsubsection:235-distributions} let $(\mathcal{G} \to M, \omega)$ be the corresponding regular, normal parabolic geometry of type $(\G_2, Q)$. Form the extended bundle $\bar\mathcal{G} := \mathcal{G} \times_Q \bar P$, and let $\bar\omega$ denote the Cartan connection equivariantly extending $\omega$ to $\bar\mathcal{G}$. By construction $(\bar\mathcal{G}, \bar\omega)$ is a parabolic geometry of type $(\SO(3, 4), \bar P)$ (for which $\bar\omega$ turns out to be normal, see \cite[Proposition 4]{HammerlSagerschnig}), and hence def\/ines an oriented conformal structure on $M$. For any $(2, 3, 5)$ distribution $(M, \mathbf{D})$ and for any representation $\mathbb{U}$ of $\SO(p + 1, q + 1)$, we may identify the associated tractor bundle $\mathcal{G} \times_Q \mathbb{U}$ (here regarding $\mathbb{U}$ as a $Q$-representation) with the conformal tractor bundle $\bar\mathcal{G} \times_{\bar P} \mathbb{U}$, and so denote both of these bundles by $\mathcal{U}$. Since $\bar\omega$ is itself normal, the (normal) tractor connections that $\omega$ and $\bar\omega$ induce on $\mathcal{U}$ coincide. \subsubsection[Holonomy characterization of oriented (2,3,5) conformal structures]{Holonomy characterization of oriented $\boldsymbol{(2, 3, 5)}$ conformal structures} An oriented $(2, 3, 5)$-distribution $\mathbf{D}$ corresponds to a regular, normal parabolic geometry $(\mathcal{G}, \omega)$ of type $(\G_2, Q)$. In particular, this determines on the tractor bundle $\mathcal{V} = \mathcal{G} \times_Q \mathbb{V}$ a $\G_2$-structure $\Phi \in \Gamma(\Lambda^3 \mathcal{V}^*)$ parallel with respect to the induced normal connection on $\mathcal{V}$, and again we may identify $\mathcal{V}$ and the normal connection thereon with the standard conformal tractor bundle $\smash{\bar\mathcal{G} \times_{\bar P} \mathbb{V}}$ of $\mathbf{c}_{\mathbf{D}}$ and the normal conformal tractor connection. The $\G_2$-structure determines f\/iberwise a bilinear form $H_{\Phi} \in \Gamma(S^2 \mathcal{V}^*)$. Since this construction is algebraic, $H_{\Phi}$ is parallel, and by construction it coincides with the conformal tractor metric on $\mathcal{V}$ determined by $\mathbf{c}_{\mathbf{D}}$. 
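We remark in passing that the f\/iberwise dependence of $H_{\Phi}$ on $\Phi$ can be written down explicitly, though below we use only its existence: for a $\G_2$-structure $\Phi$ on a $7$-dimensional vector space and vectors $v$, $w$, one has \begin{gather*} H_{\Phi}(v, w)\, \epsilon_{\Phi} = c\, (v \hook \Phi) \wedge (w \hook \Phi) \wedge \Phi \end{gather*} for a nonzero constant $c$ whose value depends on the normalization conventions f\/ixed for $\Phi$ and $\epsilon_{\Phi}$ (with common conventions, $c = \tfrac{1}{6}$); in particular, the $\SO(3, 4)$-structure $(H_{\Phi}, [\epsilon_{\Phi}])$ is manifestly algebraic in $\Phi$.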
Conversely, if an oriented, signature $(2, 3)$ conformal structure $\mathbf{c}$ admits a parallel tractor $\G_2$-structure $\Phi$ whose restriction to each f\/iber $\mathcal{V}_x$ is compatible with the restriction $H_x$ of the tractor metric (in which case we simply say that $\Phi$ is compatible with~$H$), the distribution $\mathbf{D}$ underlying $\Phi$ satisf\/ies $\mathbf{c} = \mathbf{c}_{\mathbf{D}}$. This recovers a correspondence stated in the original work of Nurowski~\cite{Nurowski} and worked out in detail in~\cite{HammerlSagerschnig}: \begin{Theorem}\label{theorem:2-3-5-holonomy-characterization} An oriented conformal structure $(M, \mathbf{c})$ $($necessarily of signature $(2, 3))$ is induced by some $(2, 3, 5)$ distribution $\mathbf{D}$ $($that is, $\mathbf{c} = \mathbf{c}_{\mathbf{D}})$ iff the normal conformal tractor connection admits a holonomy reduction to~$\G_2$, or equivalently, iff $\mathbf{c}$ admits a parallel tractor $\G_2$-structure~$\Phi$ compatible with the tractor metric~$H$. \end{Theorem} \subsubsection[The conformal tractor decomposition of the tractor G2-structure]{The conformal tractor decomposition of the tractor $\boldsymbol{\G_2}$-structure} Fix an oriented $(2, 3, 5)$ distribution $(M, \mathbf{D})$, let $\Phi \in \Gamma(\Lambda^3 \mathcal{V}^*)$ denote the corresponding parallel tractor $\G_2$-structure, and denote its components with respect to any scale $\tau$ of the induced conformal structure~$\mathbf{c}_{\mathbf{D}}$ according to \begin{gather}\label{equation:G2-structure-splitting} \Phi_{ABC} \stackrel{\tau}{=} \tractorQ{\phi_{bc}}{\chi_{abc}}{\theta_c}{\psi_{bc}} \in \Gamma\tractorQ{\Lambda^2 T^*M [3]}{\Lambda^3 T^*M [3]}{T^*M [1]}{\Lambda^2 T^*M [1]} . \end{gather} In the language of Section~\ref{subsection:conformal-Killing}, $\smash{\phi = \Pi_0^{\Lambda^3 \mathcal{V}^*}(\Phi)}$ is a normal conformal Killing $2$-form, and $\smash{\Phi = L_0^{\Lambda^3 \mathcal{V}^*}(\phi)}$. An argument analogous to that in the proof of Proposition \ref{proposition:identites-g2-structure-components}(5) below shows that $\phi$ is locally decomposable, and Proposition \ref{proposition:identites-g2-structure-components}(8) shows that it vanishes nowhere, so the (weighted) bivector f\/ield $\phi^{ab} \in \Gamma(\Lambda^2 TM [-1])$ determines a $2$-plane distribution on $M$, and this is precisely $\mathbf{D}$ \cite{HammerlSagerschnig}. We collect for later use some geometric facts about $\mathbf{D}$ and encode them in algebraic identities in the tractor components $\phi, \chi, \theta, \psi$. Parts (1) and (2) of Proposition~\ref{proposition:identites-g2-structure-components} are well-known features of $(2, 3, 5)$ distributions. \begin{Proposition} \label{proposition:identites-g2-structure-components} Let $(M, \mathbf{D})$ be an oriented $(2, 3, 5)$ distribution, let $\Phi \in \Gamma(\Lambda^3 \mathcal{V}^*)$ denote the corresponding parallel tractor $\G_2$-structure, and denote its components with respect to an arbitrary scale $\tau$ as in~\eqref{equation:G2-structure-splitting}. Then: \begin{enumerate}\itemsep=0pt \item[$1.$] The distribution $\mathbf{D}$ is totally $\mathbf{c}_{\mathbf{D}}$-isotropic; equivalently, $\phi^{ac} \phi_{cb} = 0$. \item[$2.$] The annihilator of $\phi_{ab}$ (in $TM$) is $[\mathbf{D}, \mathbf{D}]$, and hence $\mathbf{D}^{\perp} = [\mathbf{D}, \mathbf{D}]$ $($here, $\mathbf{D}^{\perp}$ is the subbundle of $TM$ orthogonal to $\mathbf{D}$ with respect to $\mathbf{c}_{\mathbf{D}})$; equivalently, $\phi^{bc} \chi_{bca} = 0$.
\item[$3.$] The weighted vector field $\theta^b \in \Gamma(TM[-1])$ is a section of $[\mathbf{D}, \mathbf{D}][-1]$, or equivalently, the line field $\mathbf{L}$ that $\theta$ determines $($which depends on $\tau)$ is orthogonal to $\mathbf{D}$; equivalently, $\theta^b \phi_{ba} = 0$. \item[$4.$] The weighted vector field $\theta^b$ satisfies $\theta_b \theta^b = -1$. In particular, the line field $\mathbf{L}$ is timelike. \item[$5.$] Like $\phi$, the weighted $2$-form $\psi$ is locally decomposable, that is, $(\psi \wedge \psi)_{abcd} = 6 \psi_{[ab} \psi_{cd]} = 0$. Since $($by equation \eqref{eq-item:conformal-volume-form-identity}$)$ it vanishes nowhere, it determines a $2$-plane distribution $\mathbf{E}$ $($which depends on $\tau)$. \item[$6.$] The distribution $\mathbf{E}$ is totally $\mathbf{c}_{\mathbf{D}}$-isotropic; equivalently, $\psi^{ac} \psi_{cb} = 0$. \item[$7.$] The line field $\mathbf{L}$ is orthogonal to $\mathbf{E}$; equivalently, $\theta^b \psi_{ba} = 0$. \item[$8.$] The $($weighted$)$ conformal volume form $\epsilon_{\mathbf{g}} \in \Gamma(\Lambda^5 T^*M [5])$ satisfies \begin{gather}\label{eq-item:conformal-volume-form-identity} (\epsilon_{\mathbf{g}})_{abcde} \stackrel{\tau}{=} \tfrac{1}{2} (\phi \wedge \theta \wedge \psi)_{abcde} = 15 \phi_{[ab} \theta_c \psi_{de]} . \end{gather} \end{enumerate} In particular, {\rm (8)} implies that $\mathbf{D}$, $\mathbf{L}$, and $\mathbf{E}$ are pairwise transverse and hence span~$TM$. Moreover, {\rm (2)} and {\rm (3)} imply that $\mathbf{D} \oplus \mathbf{L} = [\mathbf{D}, \mathbf{D}]$ and so $\mathbf{D} \oplus \mathbf{L} \oplus \mathbf{E}$ is a splitting of the canonical filtration $\mathbf{D} \subset [\mathbf{D}, \mathbf{D}] \subset TM$.\footnote{The splitting $\mathbf{D} \oplus \mathbf{L} \oplus \mathbf{E}$ determined by $\tau$ is a special case of a general feature of parabolic geometry, in which a choice of Weyl structure yields a splitting of the canonical f\/iltration of the tangent bundle of the underlying structure \cite[Section~5.1]{CapSlovak}.} \end{Proposition} It is possible to give abstract proofs of the identities in Proposition~\ref{proposition:identites-g2-structure-components}, but it is much faster to use frames of the standard tractor bundle suitably adapted to the parallel tractor $\G_2$-structure~$\Phi$. \begin{proof}[Proof of Proposition \ref{proposition:identites-g2-structure-components}] Call a local frame $(E_a)$ of $\mathcal{V}$ \textit{adapted} to $\Phi$ if\/f (1) $E_1$ is a local section of the line subbundle $\langle X \rangle$ determined by $X$, and (2) the representation of $\Phi$ in the dual coframe $(e^a)$ is given by \eqref{equation:3-form-basis}; it follows from \cite[Theorem 3.1]{Wolf} that such a local frame exists in some neighborhood of any point in~$M$. Any adapted local frame determines a (local) choice of scale: Since $X \in \Gamma(\mathcal{V}[1])$, we have $\tau := e^7(X) \in \mathcal{E}[1]$, and by construction it vanishes nowhere. Then, since $\langle X \rangle^{\perp} = \langle E_1, \ldots, E_6 \rangle$, the (weighted) vector f\/ields $F_a := E_a + \langle E_1 \rangle$, $a = 2, \ldots, 6$ comprise a frame of $\langle X \rangle^{\perp} / \langle X \rangle$ which by Section~\ref{conformal-tractor-calculus} is canonically isomorphic to $TM[-1]$. Trivializing these frame f\/ields (by multiplying by $\tau$) yields a local frame $(\ul F_2, \ldots, \ul F_6)$ of $TM$; denote the dual coframe by $(f^2, \ldots, f^6)$.
One can read immediately from \eqref{equation:3-form-basis} that in an adapted local frame, (the trivialized) components of~$\Phi$ are \begin{gather*} \ul \phi \stackrel{\tau}{=} \sqrt{2} f^5 \wedge f^6, \qquad \ul \chi \stackrel{\tau}{=} f^2 \wedge f^4 \wedge f^5 + f^3 \wedge f^4 \wedge f^6, \qquad \ul\theta \stackrel{\tau}{=} f^4, \qquad \ul \psi \stackrel{\tau}{=} \sqrt{2} f^2 \wedge f^3 , \end{gather*} and consulting the form of equation \eqref{equation:bilinear-form-basis} gives that the (trivialized) conformal metric is \begin{gather*} \ul\mathbf{g} \stackrel{\tau}{=} f^2 f^5 + f^3 f^6 - \big(f^4\big)^2 . \end{gather*} In an adapted frame, $\epsilon_{\Phi}$ is given by $-e^1 \wedge \cdots \wedge e^7$, so the (trivialized) conformal volume form is $\ul\epsilon_{\mathbf{g}} = f^2 \wedge \cdots \wedge f^6$. All of the identities follow immediately from computing in this frame. For example, to compute (1), we see that raising indices gives $\smash{\ul\phi^{\sharp \sharp} = \sqrt{2} \ul F_2 \wedge \ul F_3}$, and that contracting an index of this bivector f\/ield with $\ul\phi = \sqrt{2} f^5 \wedge f^6$ yields $0$. It remains to show that the geometric assertions are equivalent to the corresponding identities; these are nearly immediate for all but the f\/irst two parts. For both parts, pick a local frame $(\alpha, \beta)$ of $\mathbf{D}$ around an arbitrary point; by scaling we may assume that $\ul \phi^{\sharp\sharp} = \alpha \wedge \beta$. \begin{enumerate}\itemsep=0pt \item The identity implies that the trace over the second and third indices of the tensor product $\smash{\ul\phi^{\sharp\sharp} \otimes \ul\phi = (\alpha \wedge \beta) \otimes (\alpha^{\flat} \wedge \beta^{\flat})}$ is zero, or, expanding, that \begin{gather*} 0 = -\ul\mathbf{g}(\alpha, \alpha) \beta \otimes \beta^{\flat} + \ul\mathbf{g}(\alpha, \beta) \alpha \otimes \beta^{\flat} + \ul\mathbf{g}(\alpha, \beta) \beta \otimes \alpha^{\flat} - \ul\mathbf{g}(\beta, \beta) \alpha \otimes \alpha^{\flat} . \end{gather*} Since $\alpha$, $\beta$ are linearly independent, the four coef\/f\/icients on the right-hand side vanish separately, but up to sign these are the components of the restriction of $\mathbf{c}_{\mathbf{D}}$ to $\mathbf{D}$ in the given frame. \item By Part~(1), $\phi(\alpha, \,\cdot\,) = \phi(\beta, \,\cdot\,) = 0$. Any local section $\eta \in \Gamma([\mathbf{D}, \mathbf{D}])$ can be written as $\eta = A \alpha + B \beta + C [\alpha, \beta]$ for some smooth functions $A$, $B$, $C$, giving $\phi(\eta, \gamma) = C \phi([\alpha, \beta], \gamma)$ for any $\gamma \in \Gamma(TM)$. The invariant formula for the exterior derivative of a $2$-form then gives $\phi([\alpha, \beta], \gamma) = -d\phi(\alpha, \beta, \gamma)$. Now, in the chosen scale, $d\phi = \chi$ so \begin{gather*} -[d\phi(\alpha, \beta, \,\cdot\,)]_a = -\chi_{bca} \alpha^b \beta^c = -\tfrac{1}{2} \chi_{bca} \cdot 2 \alpha^{[b} \beta^{c]} = -\tfrac{1}{2} \chi_{bca} \phi^{bc} . \tag*{\qed} \end{gather*} \end{enumerate}\renewcommand{\qed}{} \end{proof} Since the tractor Hodge star operator $\ast_{\Phi}$ is algebraic, $\ast_{\Phi} \Phi \in \Gamma(\Lambda^4 \mathcal{V}^*)$ is parallel.
We can express its components with respect to a scale $\tau$ in terms of those of $\Phi$ and the weighted Hodge star operators $\ast \colon \Lambda^l T^*M [w] \to \Lambda^{5 - l} T^*M [7 - w]$ determined by $\mathbf{g}$ \cite{LeitnerNormalConformalKillingForms}: \begin{gather}\label{equation:Hodge-star-3-form} (\ast_{\Phi} \Phi)_{ABCD} \stackrel{\tau}{=} \tractorQ { - (\ast \phi)_{ fgh}} { - (\ast \theta)_{efgh}} { (\ast \chi)_{ gh}} { (\ast \psi)_{ fgh}} \in \Gamma\tractorQ {\Lambda^3 T^*M [4]} {\Lambda^4 T^*M [4]} {\Lambda^2 T^*M [2]} {\Lambda^3 T^*M [2]} . \end{gather} Computing in an adapted frame as in the proof of Proposition \ref{proposition:identites-g2-structure-components} yields some useful identities relating the components of $\Phi$ and their images under~$\ast$: \begin{alignat}{3} & (\ast \phi)_{ fgh} = 3 \phi_{[fg} \theta_{h]}, \qquad && (\ast \chi)_{ gh} = \theta^i \chi_{igh}, &\nonumber\\ & (\ast \theta)_{efgh} = -3 \phi_{[ef} \psi_{gh]}, \qquad && (\ast \psi)_{ fgh} = 3 \psi_{[fg} \theta_{h]} . &\label{equation:Hodge-star-3-form-components} \end{alignat} \section[The global geometry of almost Einstein (2,3,5) distributions]{The global geometry of almost Einstein $\boldsymbol{(2, 3, 5)}$ distributions} \label{section:global-geometry} In this section we investigate the global geometry of $(2, 3, 5)$ distributions $(M, \mathbf{D})$ that induce almost Einstein conformal structures $\mathbf{c}_{\mathbf{D}}$; naturally, we call such distributions themselves \textit{almost Einstein}. Almost Einstein $(2, 3, 5)$ distributions are special among $(2, 3, 5)$ conformal structures: In a sense that can be made precise \cite[Theorem 1.2, Proposition 5.1]{GrahamWillse}, for a generic $(2, 3, 5)$ distribution $\mathbf{D}$ the holonomy of $\mathbf{c}_{\mathbf{D}}$ is equal to $\G_2$ and hence $\mathbf{c}_{\mathbf{D}}$ admits no nonzero almost Einstein scales. Via the identif\/ication of the standard tractor bundles of $\mathbf{D}$ and $\mathbf{c}_{\mathbf{D}}$, Theorem \ref{theorem:almost-Einstein-bijection} gives that an oriented $(2, 3, 5)$ distribution is almost Einstein if\/f its standard tractor bundle~$\mathcal{V}$ admits a~nonzero parallel standard tractor $\mathbb{S} \in \Gamma(\mathcal{V})$, or equivalently, if\/f it admits a holonomy reduction from~$\G_2$ to the stabilizer subgroup $S$ of a~nonzero vector in the standard representation~$\mathbb{V}$. \subsection[Distinguishing a vector in the standard representation V of G2]{Distinguishing a vector in the standard representation $\boldsymbol{\mathbb{V}}$ of $\boldsymbol{\G_2}$} In this subsection, let $\mathbb{V}$ denote the standard representation of $\G_2$ and $\Phi \in \Lambda^3 \mathbb{V}^*$ the corresponding $3$-form. We establish some of the algebraic consequences of f\/ixing a nonzero vector~$\mathbb{S} \in \mathbb{V}$. 
\subsubsection{Stabilizer subgroups} Recall from the introduction that the stabilizer group in $\G_2$ of $\mathbb{S} \in \mathbb{V}$ is as follows: \begin{Proposition}\label{proposition:stabilizer-subgroup} The stabilizer subgroup of a nonzero vector $\mathbb{S}$ in the standard representa\-tion~$\mathbb{V}$ of~$\G_2$ is isomorphic to: \begin{enumerate}\itemsep=0pt \item[$1)$] $\SU(1, 2)$, if $\mathbb{S}$ is spacelike, \item[$2)$] $\SL(3, \mathbb{R})$, if $\mathbb{S}$ is timelike, and \item[$3)$] $\SL(2, \mathbb{R}) \ltimes Q_+$, where $Q_+ < \G_2$ is the connected, nilpotent subgroup of $\G_2$ defined via Sections~{\rm \ref{subsubsection:parabolic-geometry}} and~{\rm \ref{subsubsection:235-distributions}}, if~$\mathbb{S}$ is isotropic. \end{enumerate} \end{Proposition} \subsubsection[An varepsilon-Hermitian structure]{An $\boldsymbol{\varepsilon}$-Hermitian structure}\label{subsubsection:vareps-hermitian-structure} Contracting a nonzero vector $\mathbb{S} \in \mathbb{V}$ with $\Phi$ determines an endomorphism: \begin{gather}\label{equation:definition-K} \mathbb{K}^A{}_B := -\iota^2_7(\mathbb{S})^A{}_B = -\mathbb{S}^C \Phi_C{}^A{}_B \in \mfso(3, 4) . \end{gather} We can identify $\mathbb{K}$ with the map $\mathbb{T} \mapsto \mathbb{S} \times \mathbb{T}$, so if we scale $\mathbb{S}$ so that $\varepsilon := -H_{\Phi}(\mathbb{S}, \mathbb{S}) \in \{-1, 0, 1\}$, identity \eqref{equation:iterated-cross-product} becomes \begin{gather}\label{equation:K-squared-identity} \mathbb{K}^2 = \varepsilon {\id_{\mathbb{V}}} + \mathbb{S} \otimes \mathbb{S}^{\flat} . \end{gather} By skewness, $H_{AC} \mathbb{S}^A \mathbb{K}^C{}_B = -\mathbb{S}^A \mathbb{S}^D \Phi_{DAB} = 0$, so the image of $\mathbb{K}$ is contained in $\mathbb{W} := \langle \mathbb{S} \rangle^{\perp}$, and hence we can regard $\mathbb{K}\vert_{\mathbb{W}}$ as an endomorphism of $\mathbb{W}$, which by abuse of notation we also denote~$\mathbb{K}$. Restricting~\eqref{equation:K-squared-identity} to $\mathbb{W}$ gives that this latter endomorphism is an $\varepsilon$-complex structure on that space: $\mathbb{K}^2 = \varepsilon \id_{\mathbb{W}}$. Thus, $(H_{\Phi}\vert_{\mathbb{W}}, \mathbb{K})$ is an \textit{$\varepsilon$-Hermitian structure on~$\mathbb{W}$}: this is a~pair~$(g, \mathbb{K})$, where $g \in S^2 \mathbb{W}^*$ is a symmetric, nondegenerate, bilinear form and $\mathbb{K}$ is an $\varepsilon$-complex structure on $\mathbb{W}$ compatible in the sense that $g(\,\cdot\,, \mathbb{K} \,\cdot\,)$ is skew-symmetric. If~$\mathbb{K}$ is complex, $g$ has signature $(2p, 2q)$ for some integers $p$, $q$; if $\mathbb{K}$ is paracomplex, $g$ has signature $(m, m)$. \subsubsection{Induced splittings and f\/iltrations} If $\mathbb{S}$ is nonisotropic, it determines an orthogonal decomposition $\mathbb{V} = \mathbb{W} \oplus \langle \mathbb{S} \rangle$.
If $\mathbb{S}$ is isotropic, it determines a f\/iltration $(\mathbb{V}^a_{\mathbb{S}})$ \cite[Proposition 2.5]{GPW}: \begin{gather}\label{equation:isotropic-filtration} \begin{array}{@{}cccccccccccc@{}} ^{-2} & & ^{-1} & & ^0 & & ^{+1} & & ^{+2} & & ^{+3} \\ \eqmakebox[F]{$\mathbb{V}$} & \supset & \eqmakebox[F]{$\mathbb{W}$} & \supset & \eqmakebox[F]{$\im \mathbb{K}$} & \supset & \eqmakebox[F]{$\ker \mathbb{K}$} & \supset & \eqmakebox[F]{$\langle \mathbb{S} \rangle$} & \supset & \eqmakebox[F]{$\{ 0 \}$} \\ _7 & & _6 & & _4 & & _3 & & _1 & & _0 \end{array} \end{gather} The number above each f\/iltrand is its f\/iltration index $a$ (the indices are canonical only up to the addition of a f\/ixed integer to each) and the number below is its dimension. Moreover, $\im \mathbb{K} = (\ker \mathbb{K})^{\perp}$ (so~$\ker \mathbb{K}$ is totally isotropic). If we take~$Q$ to be the stabilizer subgroup of the ray spanned by~$\mathbb{S}$, then the f\/iltration is $Q$-invariant, and checking the (representation-theoretic) weights of~$\mathbb{V}$ as a $Q$-representation shows that it coincides with the f\/iltration \eqref{equation:general-representation-filtration} determined by~$Q$. The map~$\mathbb{K}$ satisf\/ies $\mathbb{K}(\mathbb{V}_{\mathbb{S}}^a) = \mathbb{V}_{\mathbb{S}}^{a + 2}$, where we set $\mathbb{V}_{\mathbb{S}}^a = 0$ for all $a > 2$. \subsubsection{The family of stabilized 3-forms}\label{subsubsection:stabilized-3-forms} For nonzero $\mathbb{S} \in \mathbb{V}$, elementary linear algebra gives that the subspace of $3$-forms in $\Lambda^3 \mathbb{V}^*$ f\/ixed by the stabilizer subgroup $S$ of $\mathbb{S}$ has dimension $3$ and contains \begin{gather} \Phi_I := \mathbb{S} \hook (\mathbb{S}^{\flat} \wedge \Phi) \in \Lambda^3_1 \oplus \Lambda^3_{27}, \label{equation:definition-Phi-I} \\ \Phi_J := -\iota^3_7(\mathbb{S}) = -\astPhi\big(\mathbb{S}^{\flat} \wedge \Phi\big) = \mathbb{S} \hook \astPhi \Phi \in \Lambda^3_7, \label{equation:definition-Phi-J} \\ \Phi_K := \mathbb{S}^{\flat} \wedge (\mathbb{S} \hook \Phi) \in \Lambda^3_1 \oplus \Lambda^3_{27} . \label{equation:definition-Phi-K} \end{gather} The containment $\Phi_K \in \Lambda^3_1 \oplus \Lambda^3_{27}$ follows from the fact that $\Phi_K = \tfrac{1}{2} i(\mathbb{S}^{\flat} \circ \mathbb{S}^{\flat})$, where $i$ is the $\G_2$-invariant map def\/ined in \eqref{equation:i}. The containment $\Phi_I \in \Lambda^3_1 \oplus \Lambda^3_{27}$ follows from that containment, the identity \begin{gather}\label{equation:Phi-I-plus-Phi-K} \Phi_I + \Phi_K = \mathbb{S} \hook \big(\mathbb{S}^{\flat} \wedge \Phi\big) + \mathbb{S}^{\flat} \wedge (\mathbb{S} \hook \Phi) = H_{\Phi}(\mathbb{S}, \mathbb{S}) \Phi , \end{gather} and the fact that $\Phi \in \Lambda^3_1$. It follows immediately from the def\/initions that \begin{gather}\label{equation:contraction-S-PhiIJK} \mathbb{S} \hook \Phi_I = \mathbb{S} \hook \Phi_J = 0 \qquad \textrm{and} \qquad \mathbb{S} \hook \Phi_K = H_{\Phi}(\mathbb{S}, \mathbb{S}) \mathbb{S} \hook \Phi . \end{gather} Since $\mathbb{S}$ annihilates $\Phi_I$ but not $\Phi$, the containments in \eqref{equation:definition-Phi-I}, \eqref{equation:definition-Phi-J} show that $\{\Phi, \Phi_I, \Phi_J\}$ is a~basis of the subspace of stabilized $3$-forms. If $H_{\Phi}(\mathbb{S}, \mathbb{S}) \neq 0$, then \eqref{equation:Phi-I-plus-Phi-K} implies that $\{\Phi_I, \Phi_J, \Phi_K\}$ is also a basis of that space. If $H_{\Phi}(\mathbb{S}, \mathbb{S}) = 0$ then $\Phi_K = -\Phi_I$.
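We also record, since it is invoked repeatedly below, the one-line verif\/ication of \eqref{equation:Phi-I-plus-Phi-K}: interior multiplication is a graded derivation and $\mathbb{S} \hook \mathbb{S}^{\flat} = H_{\Phi}(\mathbb{S}, \mathbb{S})$, so \begin{gather*} \Phi_I = \mathbb{S} \hook \big(\mathbb{S}^{\flat} \wedge \Phi\big) = \big(\mathbb{S} \hook \mathbb{S}^{\flat}\big) \Phi - \mathbb{S}^{\flat} \wedge (\mathbb{S} \hook \Phi) = H_{\Phi}(\mathbb{S}, \mathbb{S}) \Phi - \Phi_K . \end{gather*}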
It is convenient to abuse notation and denote by $\Phi_I, \Phi_J$ the pullbacks to $\mathbb{W}$ of the $3$-forms of the same names via the inclusion $\mathbb{W} \hookrightarrow \mathbb{V}$. For nonisotropic $\mathbb{S}$, def\/ine $\mathbb{W}^{1, 0} \subset \mathbb{W} \otimes_{\mathbb{R}} \mathbb{C}_{\varepsilon}$ to be the $(+i_{\varepsilon})$-eigenspace of (the extension of)~$\mathbb{K}$, and an \textit{$\varepsilon$-complex volume form} to be an element of $\Lambda^m_{\mathbb{C}_{\varepsilon}} := \Lambda^m (\mathbb{W}^{1, 0})^*$. \begin{Proposition} \label{proposition:vareps-complex-volume-forms} Suppose $\varepsilon := -H_{\Phi}(\mathbb{S}, \mathbb{S}) \in \{\pm 1\}$. For each $(A, B)$ such that $A^2 - \varepsilon B^2 = 1$, \begin{gather*} \Psi_{(A, B)} := [A \Phi_I + \varepsilon B \Phi_J] + i_{\varepsilon} [B \Phi_I + A \Phi_J] \in \Lambda^3_{\mathbb{C}_{\varepsilon}} \mathbb{W} \end{gather*} is an $\varepsilon$-complex volume form for the $\varepsilon$-Hermitian structure $(H_{\Phi} \vert_{\mathbb{W}}, \mathbb{K})$ on $\mathbb{W}$. \end{Proposition} \begin{Proposition}\label{proposition:epsilon-complex-volume-form-g2-structure} Suppose $\mathbb{V}'$ is a $7$-dimensional real vector space and $H \in S^2 (\mathbb{V}')^*$ is a symmetric bilinear form of signature $(3, 4)$. Now, fix a vector $\mathbb{S} \in \mathbb{V}'$ such that $-\varepsilon := H(\mathbb{S}, \mathbb{S}) \in \{\pm 1\}$, denote $\mathbb{W} := \langle \mathbb{S} \rangle^{\perp}$, fix an $\varepsilon$-complex structure $\mathbb{K} \in \End(\mathbb{W})$ such that $(H\vert_{\mathbb{W}}, \mathbb{K})$ is an $\varepsilon$-Hermitian structure on $\mathbb{W}$, and fix a compatible $\varepsilon$-complex volume form $\Psi \in \Lambda^3_{\mathbb{C}_{\varepsilon}} \mathbb{W}^*$ satisfying the normalization condition \begin{gather*} \Psi \wedge \bar\Psi = -\tfrac{4}{3}i_{\varepsilon}\mathbb{K} \wedge \mathbb{K}\wedge \mathbb{K}. \end{gather*} Then, the $3$-form \begin{gather*} \Re \Psi + \varepsilon \mathbb{S}^{\flat} \wedge \mathbb{K} \in \Lambda^3 (\mathbb{V}')^* \end{gather*} is a $\G_2$-structure on $\mathbb{V}'$ compatible with $H$. Here, $\Re \Psi$ and $\mathbb{K}$ are regarded as objects on $\mathbb{V}'$ via the decomposition $\mathbb{V}' = \mathbb{W} \oplus \langle \mathbb{S} \rangle$. \end{Proposition} This proposition can be derived, for example, from \cite[Proposition~1.12]{CLSS}, since, using the terminology of the article, $(\Re \Psi, \mathbb{K})$ is a~compatible and normalized pair of stable forms. \subsection[The canonical conformal Killing field xi]{The canonical conformal Killing f\/ield $\boldsymbol{\xi}$} \label{subsection:conformal-Killing-field} For this subsection, f\/ix an oriented $(2, 3, 5)$ distribution $\mathbf{D}$, let $\Phi \in \Gamma(\Lambda^3 \mathcal{V}^*)$ denote the corresponding parallel tractor $\G_2$-structure, and denote its components with respect to an arbitrary scale $\tau$ as in \eqref{equation:G2-structure-splitting}; in particular, $\phi := \Pi_0^{\Lambda^3 \mathcal{V}^*}(\Phi)$ is the underlying normal conformal Killing $2$-form. Also, f\/ix a nonzero almost Einstein scale $\sigma \in \Gamma(\mathcal{E}[1])$ of $\mathbf{c}_{\mathbf{D}}$, denote the cor\-respon\-ding parallel standard tractor by $\mathbb{S} := L_0^{\mathcal{V}}(\sigma)$, and denote its components with respect to $\tau$ as in~\eqref{equation:standard-tractor-structure-splitting}. By scaling, we assume that $-\varepsilon := H_{\Phi}(\mathbb{S}, \mathbb{S}) \in \{-1, 0, +1\}$; this is possible because $H_{\Phi}$ and $\mathbb{S}$ are parallel, so $H_{\Phi}(\mathbb{S}, \mathbb{S})$ is constant.
We view the adjoint tractor \begin{gather*} \mathbb{K}^A{}_B := -\mathbb{S}^C \Phi_C{}^A{}_B \in \Gamma(\mathcal{A}) \end{gather*} as a bundle endomorphism of $\mathcal{V}$ (cf.~\eqref{equation:definition-K}), and computing gives that the components of $\mathbb{K}$ with respect to $\tau$ are \begin{gather}\label{equation:components-of-K} \mathbb{K}^A{}_B\stackrel{\tau}{=} \tractorQ{\sigma \theta^a + \mu_c \phi^{ca}}{-\sigma \psi^a{}_b - \mu_c \chi^{ca}{}_b - \rho \phi^a{}_b}{\mu^c \theta_c}{\mu^c \psi_{ca} - \rho \theta_a} . \end{gather} We denote the projecting part of $\mathbb{K}^A{}_B$ by \begin{gather}\label{equation:definition-xi} \xi^a := \Pi_0^{\mathcal{A}}(\mathbb{K})^a = \sigma \theta^a + \mu_b \phi^{ba} \in \Gamma(TM) ; \end{gather} because $\mathbb{K}$ is parallel, $\xi$ is a normal conformal Killing f\/ield for $\mathbf{c}_{\mathbf{D}}$. By~\eqref{equation:K-squared-identity} $\mathbb{K}$ is not identically zero, and hence neither is $\xi$: since $\mathbb{K}$ is parallel, $\mathbb{K} = L_0^{\mathcal{A}}(\Pi_0^{\mathcal{A}}(\mathbb{K})) = L_0^{\mathcal{A}}(\xi)$, so $\xi = 0$ would force $\mathbb{K} = 0$. This immediately gives a simple geometric obstruction to the existence of a nonzero almost Einstein scale for an oriented $(2, 3, 5)$ conformal structure, namely, the nonexistence of a nonzero conformal Killing f\/ield. By construction, $\xi = \iota_7(\sigma)$, where $\iota_7$ is the manifestly invariant dif\/ferential operator $\iota_7 := \Pi_0^{\mathcal{A}} \circ (-\iota^2_7) \circ L_0^{\mathcal{V}} \colon \Gamma(\mathcal{E}[1]) \to \Gamma(TM)$. Here, $\iota^2_7$ is the bundle map $\mathcal{V} \to \Lambda^2 \mathcal{V}^*$ associated to the algebraic map~\eqref{equation:iota-2-7} of the same name, and we have implicitly raised an index with $H_{\Phi}$. Computing gives $\xi^a = \iota_7(\sigma)^a = -\phi^{ab} \sigma_{,b} + \tfrac{1}{4} \phi^{ab}{}_{,b} \sigma$.\footnote{This formula corrects a sign error in \cite[equation~(41)]{HammerlSagerschnig}, and~\eqref{equation:g2-conformal-Killing-field-decomposition} below corrects a corresponding sign error in equation~(40) of that reference.} \begin{Proposition}\label{proposition:containment-xi-DD} Given an oriented $(2, 3, 5)$ distribution $\mathbf{D}$, let $\phi$ denote the corresponding normal conformal Killing $2$-form, and suppose the induced conformal class $\mathbf{c}_{\mathbf{D}}$ admits an almost Einstein scale $\sigma$. The corresponding vector field $\xi := \iota_7(\sigma)$ is a section of $[\mathbf{D}, \mathbf{D}]$. \end{Proposition} \begin{proof} By \eqref{equation:definition-xi}, $\phi_{ba} \xi^b = \phi_{ba} (\sigma \theta^b + \mu_c \phi^{cb}) = \sigma \phi_{ba} \theta^b + \mu_c \phi^{cb} \phi_{ba}$, but the f\/irst and second terms vanish respectively by Proposition~\ref{proposition:identites-g2-structure-components}(3) and~(1). Thus, $\xi \in \ker \phi$, which by Part~(2) of that proposition is $[\mathbf{D}, \mathbf{D}]$. \end{proof} On the set $M_{\xi} := \{x \in M \colon \xi_x \neq 0\}$, $\xi$ spans a canonical line f\/ield \begin{gather*} \mathbf{L} := \langle \xi \rangle\vert_{M_{\xi}} , \end{gather*} and by Proposition \ref{proposition:containment-xi-DD}, $\mathbf{L}$ is a subbundle of $[\mathbf{D}, \mathbf{D}]\vert_{M_{\xi}}$. Henceforth we often suppress the restriction notation $\vert_{M_{\xi}}$. We will see in Proposition~\ref{proposition:coincidence-line-fields} that~$\mathbf{L}$ coincides with the line f\/ield of the same name determined via Proposition~\ref{proposition:identites-g2-structure-components} by the preferred scale~$\sigma$ (on the complement of its zero locus).
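As a quick illustration of how $\xi$ interacts with the objects of Proposition~\ref{proposition:identites-g2-structure-components} (a sketch, carried out in the scale determined by $\sigma$ itself and hence valid only on the complement $M - \Sigma$ of the zero locus of $\sigma$): there $\smash{\sigma \stackrel{\sigma}{=} 1}$, so $\mu^a = \sigma^{,a} \stackrel{\sigma}{=} 0$ and \eqref{equation:definition-xi} collapses to \begin{gather*} \xi^a \stackrel{\sigma}{=} \sigma \theta^a . \end{gather*} Since $\theta_b \theta^b = -1$ by Proposition~\ref{proposition:identites-g2-structure-components}(4), $\theta$ vanishes nowhere, so in particular $M - \Sigma \subseteq M_{\xi}$, and on $M - \Sigma$ the line f\/ield $\mathbf{L}$ is spanned by $\theta$ in the scale~$\sigma$.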
\subsection[Characterization of conformal Killing fields induced by almost Einstein scales]{Characterization of conformal Killing f\/ields induced \\ by almost Einstein scales}\label{subsection:symmetry-decomposition} Hammerl and Sagerschnig showed that for any oriented $(2, 3, 5)$ distribution $\mathbf{D}$, the Lie algebra $\mathfrak{aut}(\mathbf{c}_{\mathbf{D}})$ of conformal Killing f\/ields of the induced conformal structure $\mathbf{c}_{\mathbf{D}}$ admits a natural (vector space) decomposition, corresponding to the $\G_2$-module decomposition $\mfso(3, 4) \cong \mfg_2 \oplus \mathbb{V}$ into irreducible submodules, that encodes features of the geometry of the underlying distribution. Given an oriented $(2, 3, 5)$ distribution $(M, \mathbf{D})$, a vector f\/ield $\eta \in \Gamma(TM)$ is an \textit{infinitesimal symmetry} of $\mathbf{D}$ if\/f $\mathbf{D}$ is invariant under the f\/low of $\eta$, and the inf\/initesimal symmetries of $\mathbf{D}$ comprise a Lie algebra $\mathfrak{aut}(\mathbf{D})$ under the usual Lie bracket of vector f\/ields. The construction $\mathbf{D} \rightsquigarrow \mathbf{c}_{\mathbf{D}}$ is functorial, so $\mathfrak{aut}(\mathbf{D}) \subseteq \mathfrak{aut}(\mathbf{c}_{\mathbf{D}})$. By construction, the map $\pi_7$ given in \eqref{equation:pi-7} below is a left inverse for $\iota_7$, so in particular $\iota_7$ is injective. \begin{Theorem}[{\cite[Theorem~B]{HammerlSagerschnig}}] \label{theorem:conformal-Killing-field-decomposition} If $(M, \mathbf{D})$ is an oriented $(2, 3, 5)$ distribution, the Lie algebra $\mathfrak{aut}(\mathbf{c}_{\mathbf{D}})$ of conformal Killing fields of the induced conformal structure $\mathbf{c}_{\mathbf{D}}$ admits a natural $($vector space$)$ decomposition \begin{gather}\label{equation:g2-conformal-Killing-field-decomposition} \mathfrak{aut}(\mathbf{c}_{\mathbf{D}}) = \mathfrak{aut}(\mathbf{D}) \oplus \iota_7(\aEs(\mathbf{c}_{\mathbf{D}})) \end{gather} and hence an isomorphism $\mathfrak{aut}(\mathbf{c}_{\mathbf{D}}) \cong \mathfrak{aut}(\mathbf{D}) \oplus \aEs(\mathbf{c}_\mathbf{D})$. The projection $\mathfrak{aut}(\mathbf{c}_{\mathbf{D}}) \to \aEs(\mathbf{c}_{\mathbf{D}})$ is $($the restriction of$)$ the invariant differential operator $\pi_7 := \Pi_0^{\mathcal{V}} \circ (-\pi^2_7) \circ L_0^{\mathcal{A}} \colon \Gamma(TM) \to \Gamma(\mathcal{E}[1])$, which is given by \begin{gather}\label{equation:pi-7} \pi_7 \colon \ \eta^a \mapsto \tfrac{1}{6} \phi^{ab} \eta_{a, b} - \tfrac{1}{12} \phi_{ab,}{}^b \eta^a . \end{gather} The canonical projection $\mathfrak{aut}(\mathbf{c}_{\mathbf{D}}) \to \mathfrak{aut}(\mathbf{D})$ is $($the restriction of$)$ the invariant differential operator $\pi_{14} := \id_{\Gamma(TM)} - \iota_7 \circ \pi_7 \colon \Gamma(TM) \to \Gamma(TM)$. \end{Theorem} The map $\pi^2_7$ is the bundle map $\mathcal{A} \cong \Lambda^2 \mathcal{V}^* \to \mathcal{V}^*$ associated to the algebraic map~\eqref{equation:pi-2-7} of the same name. The conformal Killing f\/ields in the distinguished subspace $\iota_7(\aEs(\mathbf{c}_{\mathbf{D}}))$, that is, those corresponding to almost Einstein scales, admit a simple geometric characterization: \begin{Proposition} Let $(M, \mathbf{D})$ be an oriented $(2, 3, 5)$ distribution. Then, a conformal Killing field of $\mathbf{c}_{\mathbf{D}}$ is in the subspace $\iota_7(\aEs(\mathbf{c}_{\mathbf{D}}))$ iff it is a section of $[\mathbf{D}, \mathbf{D}]$.
Hence, the indicated restrictions of $\iota_7$ and $\pi_7$ comprise a natural bijective correspondence \begin{gather*} \aEs(\mathbf{c}_{\mathbf{D}}) \mathrel{\mathop{\rightleftarrows}^{\iota_7}_{\pi_7}} \mathfrak{aut}(\mathbf{c}_{\mathbf{D}}) \cap \Gamma([\mathbf{D}, \mathbf{D}]) . \end{gather*} \end{Proposition} \begin{proof} Let $q_{-3}$ denote the canonical projection $TM \to TM / [\mathbf{D}, \mathbf{D}]$ and the map on sections it induces. It follows from a general fact about inf\/initesimal symmetries of parabolic geometries \cite{Cap} that an inf\/initesimal symmetry $\xi$ of $\mathbf{D}$ can be recovered from its image $q_{-3}(\xi) \in \Gamma(TM / [\mathbf{D}, \mathbf{D}])$ via a natural linear dif\/ferential operator $\Gamma(TM / [\mathbf{D}, \mathbf{D}]) \to \Gamma(TM)$; in particular, if $q_{-3}(\xi) = 0$ then $\xi = 0$, so $\mathfrak{aut}(\mathbf{D})$ intersects trivially with $\ker q_{-3}$. On the other hand, Proposition \ref{proposition:containment-xi-DD} gives that the image of $\iota_7$ is contained in $\ker q_{-3} = [\mathbf{D}, \mathbf{D}]$. The claim now follows from the decomposition in Theorem \ref{theorem:conformal-Killing-field-decomposition}. \end{proof} \subsection[The weighted endomorphisms $I$, $J$, $K$]{The weighted endomorphisms $\boldsymbol{I}$, $\boldsymbol{J}$, $\boldsymbol{K}$}\label{subsection:I-J-K} Since they are algebraic combinations of parallel tractors, the $3$-forms $\Phi_I, \Phi_J, \Phi_K \in \Gamma(\Lambda^3 \mathcal{V}^*)$ respectively def\/ined pointwise by \eqref{equation:definition-Phi-I}, \eqref{equation:definition-Phi-J}, \eqref{equation:definition-Phi-K} are themselves parallel. Thus, their respective projecting parts, $I_{ab} := \Pi_0^{\Lambda^3 \mathcal{V}^*}(\Phi_I)_{ab}$, $J_{ab} := \Pi_0^{\Lambda^3 \mathcal{V}^*}(\Phi_J)_{ab}$, $K_{ab} := \Pi_0^{\Lambda^3 \mathcal{V}^*}(\Phi_K)_{ab}$, are normal conformal Killing $2$-forms. The def\/initions of $\Phi_I$, $\Phi_J$, $\Phi_K$, together with \eqref{equation:Hodge-star-3-form} and \eqref{equation:Hodge-star-3-form-components} give \begin{gather} I_{ab} = -\sigma^2 \psi_{ab} - \sigma \mu^c \chi_{cab} - 2 \sigma \mu_{[a} \theta_{b]} + \sigma \rho \phi_{ab} + 3 \mu^c \mu_{[c} \phi_{ab]}, \label{equation:I} \\ J_{ab} = -\sigma \theta^c \chi_{cab} + 3 \mu^c \phi_{[ca} \theta_{b]}, \label{equation:J}\\ K_{ab} = \sigma^2 \psi_{ab} + \sigma \mu^c \chi_{cab} + 2 \sigma \mu_{[a} \theta_{b]} + \sigma \rho \phi_{ab} - 2 \mu^c \mu_{[a} \phi_{b] c}.
\label{equation:K} \end{gather} Using the splitting operators $L_0^{\mathcal{V}}$ \eqref{equation:splitting-operator-standard} and $L_0^{\Lambda^3 \mathcal{V}^*}$ \eqref{equation:splitting-operator-alternating}, we can write these normal conformal Killing $2$-forms as dif\/ferential expressions in $\phi$ and $\sigma$: \begin{gather} I_{ab}= \tfrac{1}{5} \sigma^2 \left(\tfrac{1}{3} \phi_{ab, c}{}^c + \tfrac{2}{3} \phi_{c[a, b]}{}^c + \tfrac{1}{2} \phi_{c [a,}{}^c{}_{b]} + 4 \mathsf{P}^c{}_{[a} \phi_{b] c} \right) \nonumber \\ \hphantom{I_{ab}=}{} - \sigma \sigma^{,c} \phi_{[ca, b]} - \tfrac{1}{2} \sigma \sigma_{, [a} \phi_{b] c,}{}^c - \tfrac{1}{5} \sigma \sigma_{,c}{}^c \phi_{ab} + 3 \sigma^{,c} \sigma_{,[c} \phi_{ab]}, \label{equation:I-sigma-phi} \\ J_{ab} = -\tfrac{1}{4} \sigma \phi^{cd,}{}_d \phi_{[ab, c]} + \tfrac{3}{4} \sigma^{,c} \phi_{[ab} \phi_{c]d,}{}^d, \label{equation:J-sigma-phi} \\ K_{ab}= -\tfrac{1}{5} \sigma^2 \left(\tfrac{1}{3} \phi_{ab, c}{}^c + \tfrac{2}{3} \phi_{c [a, b]}{}^c + \tfrac{1}{2} \phi_{c [a,}{}^c{}_{b]} + 4 \mathsf{P}^c{}_{[a} \phi_{b] c} + 2 \mathsf{P}^c{}_c \phi_{ab}\right) \nonumber \\ \hphantom{K_{ab}=}{} + \sigma \sigma^{,c} \phi_{[ab, c]} + \tfrac{1}{2} \sigma \sigma_{,[a} \phi_{b] c,}{}^c - \tfrac{1}{5} \sigma \sigma_{, c}{}^c \phi_{ab} - 2 \sigma^{,c} \sigma_{, [a} \phi_{b] c} . \label{equation:K-sigma-phi} \end{gather} Raising indices gives weighted $\mathbf{g}$-skew endomorphisms $I^a{}_b, J^a{}_b, K^a{}_b \in \Gamma(\End_{\skewOp}(TM)[1])$. \subsection{The (local) leaf space}\label{subsection:local-leaf-space} Let $L$ denote the space of integral curves of $\xi$ in $M_{\xi} := \{x \in M \colon \xi_x \neq 0\}$, and denote by $\pi_L\colon M_{\xi} \to L$ the projection that maps a point to the integral curve through it. Since $\xi$ vanishes nowhere, around any point in $M_{\xi}$ there is a neighborhood such that the restriction of~$\pi_L$ thereto is a trivial f\/ibration over a smooth $4$-manifold; henceforth in this subsection, we will assume we have replaced $M_{\xi}$ with such a neighborhood. \subsubsection{Descent of the canonical objects}\label{subsubsection:descent} Some of the objects we have already constructed on $M$ descend to $L$ via the projection $\pi_L$. One can determine which do by computing the Lie derivatives of the various tensorial objects with respect to the generating vector f\/ield $\xi$, but again it turns out to be much more ef\/f\/icient to compute derivatives in the tractor setting. Since any conformal tractor bundle $\mathcal{U} \to M$ is a~natural bundle in the category of conformal manifolds, one may pull back any section $\mathbb{A} \in \Gamma(\mathcal{U})$ by the f\/low $\Xi_t$ of $\xi$ and def\/ine the Lie derivative $\mathcal{L}_{\xi} \mathbb{A}$ to be $\mathcal{L}_{\xi} \mathbb{A} := \partial_t\vert_0 \Xi_t^* \mathbb{A}$ \cite{KMS}. Since the tractor projection $\Pi_0^{\mathcal{U}}$ is associated to a canonical vector space projection, it commutes with the Lie derivative. We exploit the following identity: \begin{Lemma}[{\cite[Appendix~A.3]{CurryGover}}]\label{lemma:Lie-derivative-tractor} Let $(M, \mathbf{c})$ be a conformal structure of signature $(p, q)$, \mbox{$p + q \geq 3$}, let $\mathbb{U}$ be a $\SO(p + 1, q + 1)$-representation, and denote by $\mathcal{U} \to M$ the tractor bundle it induces. 
If $\xi \in \Gamma(TM)$ is a conformal Killing field for $\mathbf{c}$, and $\mathbb{A} \in \Gamma(\mathcal{U})$, then $\mathcal{L}_{\xi} \mathbb{A} = \nabla_{\xi} \mathbb{A} - L_0^{\mathcal{A}}(\xi) \cdot \mathbb{A} ,$ where $\cdot$ denotes the action on sections induced by the action $\mfso(p + 1, q + 1) \times \mathbb{U} \to \mathbb{U}$. In particular, if $\mathbb{A}$ is parallel, then \begin{gather}\label{equation:Lie-derivative-parallel-tractor} \mathcal{L}_{\xi} \mathbb{A} = - L_0^{\mathcal{A}}(\xi) \cdot \mathbb{A}. \end{gather} \end{Lemma} \begin{Proposition}\label{proposition:descent} Suppose $(M, \mathbf{D})$ is an oriented $(2, 3, 5)$ distribution and let $\phi \in \Gamma(\Lambda^2 T^*M [3])$ denote the corresponding normal conformal Killing $2$-form. Suppose moreover that the conformal structure $\mathbf{c}_{\mathbf{D}}$ induced by $\mathbf{D}$ admits a nonzero almost Einstein scale $\sigma \in \Gamma(\mathcal{E}[1])$, let $\xi \in \Gamma(TM)$ denote the corresponding normal conformal Killing field, and let $I$, $J$, $K$ denote the normal conformal Killing $2$-forms defined in Section~{\rm \ref{subsection:I-J-K}}. Then, \begin{gather} \mathcal{L}_{\xi} \sigma = 0 ,\qquad (\mathcal{L}_{\xi} \phi)_{ab} = 3 J_{ab},\qquad (\mathcal{L}_{\xi} I)_{ab} = -3 \varepsilon J_{ab},\nonumber\\ (\mathcal{L}_{\xi} J)_{ab} = 3 I_{ab},\qquad (\mathcal{L}_{\xi} K)_{ab} = 0 .\label{equation:Lie-derivatives-objects} \end{gather} As usual, we scale $\sigma$ so that $\varepsilon := -H_{\Phi}(\mathbb{S}, \mathbb{S}) \in \{-1, 0, +1\}$, where $\mathbb{S}^A := L_0^{\mathcal{V}}(\sigma)^A$ is the parallel standard tractor corresponding to $\sigma$. In particular, $\sigma$ and $K$ descend via $\pi_L \colon M_{\xi} \to L$ to well-defined objects $\hat{\sigma}$ and $\hat{K}$, but~$\phi$ and~$J$ do not descend, and when $\varepsilon \neq 0$ neither does $I$ (recall that when $\varepsilon = 0$, $I = -K$). \end{Proposition} \begin{proof} As usual, denote $\smash{\mathbb{K} := L_0^{\mathcal{A}}(\xi)}$, $\smash{\Phi := L_0^{\Lambda^3 \mathcal{V}^*}(\phi)}$, $\smash{\Phi_I := L_0^{\Lambda^3 \mathcal{V}^*}(I)}$, $\smash{\Phi_J := L_0^{\Lambda^3 \mathcal{V}^*}(J)}$, and $\smash{\Phi_K := L_0^{\Lambda^3 \mathcal{V}^*}(K)}$. Since $\xi$ is a conformal Killing f\/ield, by \eqref{equation:Lie-derivative-parallel-tractor} $(\mathcal{L}_{\xi} \mathbb{S})^A = -(L_0^{\mathcal{A}}(\xi) \cdot \mathbb{S})^A = -\mathbb{K}^A{}_B \mathbb{S}^B = \mathbb{S}^C \Phi_C{}^A{}_B \mathbb{S}^B = 0$. Applying $\Pi_0^{\mathcal{V}}$ yields $\mathcal{L}_{\xi} \sigma = \mathcal{L}_{\xi} \Pi_0^{\mathcal{V}}(\mathbb{S}) = \Pi_0^{\mathcal{V}}(\mathcal{L}_{\xi} \mathbb{S}) = \Pi_0^{\mathcal{V}}(0) = 0$. The proofs for $J$ and $\phi$ are similar, and use the identities \eqref{equation:contraction-Phi-Phi}, \eqref{equation:contraction-Phi-PhiStar}. By def\/inition, $\mathcal{L}_{\xi} \Phi_K = \mathcal{L}_{\xi} [\mathbb{S}^{\flat} \wedge (\mathbb{S} \hook \Phi)]$, and since $\mathcal{L}_{\xi} \mathbb{S} = 0$, we have $\mathcal{L}_{\xi} \Phi_K = \mathbb{S}^{\flat} \wedge (\mathbb{S} \hook \mathcal{L}_{\xi} \Phi) = \mathbb{S}^{\flat} \wedge [\mathbb{S} \hook (3 \Phi_J)]$, but by \eqref{equation:definition-Phi-J} this is $3 \mathbb{S}^{\flat} \wedge [\mathbb{S} \hook (\mathbb{S} \hook \astPhi \Phi)],$ which is zero by antisymmetry. App\-lying~$\smash{\Pi_0^{\Lambda^3 \mathcal{V}^*}}$ gives $\mathcal{L}_{\xi} K = 0$.
Finally, \eqref{equation:Phi-I-plus-Phi-K} gives $\mathcal{L}_{\xi} \Phi_I = \mathcal{L}_{\xi} (-\varepsilon \Phi - \Phi_K) = -\varepsilon \mathcal{L}_{\xi} \Phi - \mathcal{L}_{\xi} \Phi_K = -\varepsilon (3 \Phi_J) - (0) = -3 \varepsilon \Phi_J$, and applying $\smash{\Pi_0^{\Lambda^3 \mathcal{V}^*}}$ gives $\mathcal{L}_{\xi} I = -3 \varepsilon J$. \end{proof} \section{The conformal isometry problem} \label{section:conformally-isometric} In this section we consider the problem of determining when two distributions $(M, \mathbf{D})$ and $(M, \mathbf{D}')$ induce the same oriented conformal structure; we say two such distributions are \textit{conformally isometric}. This problem turns out to be intimately related to existence of a nonzero almost Einstein scale for $\mathbf{c}_{\mathbf{D}}$. Approaching this question at the level of underlying structures is prima facie dif\/f\/icult: The value of the conformal structure $\mathbf{c}_{\mathbf{D}}$ at a point $x \in M$ induced by a $(2, 3, 5)$ distribution~$(M, \mathbf{D})$ depends on the $4$-jet of $\mathbf{D}$ at $x$ \cite[equation~(54)]{Nurowski} (or, essentially equivalently, multiple prolongations and normalizations), so analyzing directly the dependence of $\mathbf{c}_{\mathbf{D}}$ on $\mathbf{D}$ involves apprehending high-order dif\/ferential expressions that turn out to be cumbersome. We have seen that in the tractor bundle setting, however, this construction is essentially algebraic: At each point, the parallel tractor $\G_2$-structure $\Phi \in \Gamma(\Lambda^3 \mathcal{V}^*)$ determined by an oriented $(2, 3, 5)$ distribution $(M, \mathbf{D})$ determines the parallel tractor bilinear form $H_{\Phi} \in \Gamma(S^2 \mathcal{V}^*)$ and orientation $[\epsilon_{\Phi}]$ canonically associated to the oriented conformal structure $\mathbf{c}_{\mathbf{D}}$. So, the problem of determining the distributions $(M, \mathbf{D}')$ such that $\mathbf{c}_{\mathbf{D}'} = \mathbf{c}_{\mathbf{D}}$ amounts to the corresponding algebraic problem of identifying for a $\G_2$-structure $\Phi$ on a $7$-dimensional real vector space $\mathbb{V}$ the $\G_2$-structures $\Phi'$ on $\mathbb{V}$ such that $(H_{\Phi'}, [\epsilon_{\Phi'}]) = (H_{\Phi}, [\epsilon_{\Phi}])$. We solve this algebraic problem in Section~\ref{subsection:compatible-G2-structures} and then transfer the result to the setting of parallel sections of conformal tractor bundles to resolve the conformal isometry problem in Section~\ref{subsection:conformally-isometric-235-distributions}. \subsection[The space of G2-structures compatible with an SO(3,4)-structure]{The space of $\boldsymbol{\G_2}$-structures compatible with an $\boldsymbol{\SO(3,4)}$-structure}\label{subsection:compatible-G2-structures} In this subsection, which consists entirely of linear algebra, we characterize explicitly the space of $\G_2$-structures compatible with a given $\SO(3, 4)$-structure on a $7$-dimensional real vector space~$\mathbb{V}$, or more precisely, the $\SO(3, 4)$-structure determined by a reference $\G_2$-structure $\Phi$. This characterization is essentially equivalent to that in \cite[Remark~4]{Bryant} for the analogous inclusion of the compact real form of $\G_2$ into $\SO(7, \mathbb{R})$. The following proposition can be readily verif\/ied by computing in an adapted frame. (Computer assistance proved particularly useful in this verif\/ication.) 
\begin{Proposition}\label{proposition:compatible-g2-structures} Let $\mathbb{V}$ be a $7$-dimensional real vector space and fix a $\G_2$-structure $\Phi \in \Lambda^3 \mathbb{V}^*$. \begin{enumerate}\itemsep=0pt \item[$1.$] Fix a nonzero vector $\mathbb{S} \in \mathbb{V}$; by rescaling we may assume that $\varepsilon := - H_{\Phi}(\mathbb{S}, \mathbb{S}) \in \{-1, 0, +1\}$. For any $(\bar A, B) \in \mathbb{R}^2$ such that $-\varepsilon \bar A^2 + 2 \bar A + B^2 = 0$ $($there is a $1$-parameter family of such pairs$)$ the $3$-form \begin{gather}\label{equation:family-G2-structures} \Phi' := \Phi + \bar A \Phi_I + B \Phi_J \in \Lambda^3 \mathbb{V}^* \end{gather} is a $\G_2$-structure compatible with the $\SO(3, 4)$-structure $(H_{\Phi}, [\epsilon_{\Phi}])$, that is, $(H_{\Phi'}, [\epsilon_{\Phi'}]) = (H_{\Phi}, [\epsilon_{\Phi}])$. \item[$2.$] Conversely, all compatible $\G_2$-structures arise this way: If a $\G_2$-structure $\Phi'$ on $\mathbb{V}$ satisfies $(H_{\Phi}, [\epsilon_{\Phi}]) = (H_{\Phi'}, [\epsilon_{\Phi'}])$, there is a vector $\mathbb{S} \in \mathbb{V}$ $($we may assume that $\varepsilon := -H_{\Phi}(\mathbb{S}, \mathbb{S}) \in \{-1, 0, +1\})$ and $(\bar A, B) \in \mathbb{R}^2$ satisfying $-\varepsilon \bar A^2 + 2 \bar A + B^2 = 0$ such that $\Phi'$ is given by~\eqref{equation:family-G2-structures}. \end{enumerate} \end{Proposition} \begin{Remark}\label{remark:determined-by-S} Let $\mathbb{V}$ be the standard representation of $\SO(3, 4)$, and let $S$ denote the intersection of (a copy of) $\G_2$ in $\SO(3, 4)$ and the stabilizer subgroup in $\SO(3, 4)$ of a nonzero vector $\mathbb{S} \in \mathbb{V}$. By Section~\ref{subsubsection:stabilized-3-forms} the space of $3$-forms in $\Lambda^3 \mathbb{V}^*$ stabilized by $S$ is $\langle \Phi, \Phi_I, \Phi_J \rangle$. By Proposition~\ref{proposition:compatible-g2-structures}, this determines a $1$-parameter family of copies of $\G_2$ containing $S$ and contained in $\SO(3, 4)$, or equivalently, a $1$-parameter family $\mathcal{F}$ of $\G_2$-structures $\Phi'$ compatible with the $\SO(3, 4)$-structure, but $S$ does not distinguish a $\G_2$-structure in this family. \end{Remark} \begin{Remark}\label{remark:homogeneous-space-compatible-G2-structures} We may identify the space of $\G_2$-structures that induce a particular $\SO(3, 4)$-structure with the homogeneous space $\SO(3, 4) / \G_2$. Since $\SO(3, 4)$ has two components but $\G_2$ is connected, this homogeneous space has two components; the $\G_2$-structures in one component determine space and time orientations opposite to those determined by the $\G_2$-structures in the other. We can identify one component with the projectivization $\mathbb{P}(\mathbb{S}^{3, 4})$ of the cone of spacelike elements in $\mathbb{R}^{4, 4}$ and the other with the projectivization $\mathbb{P}(\mathbb{S}^{4, 3})$ of the cone of timelike elements, and the homogeneous space $\SO(3, 4) / \G_2$ (the union of these projectivizations) as the complement of the neutral null quadric in $\mathbb{P}(\mathbb{R}^{4, 4})$ \cite[Theorem 2.1]{Kath}. \end{Remark} Henceforth denote by $\mathcal{F}[\Phi; \mathbb{S}]$ the $1$-parameter family of $\G_2$-structures compatible with the $\SO(3, 4)$-structure $(H_{\Phi}, [\epsilon_{\Phi}])$ def\/ined by Proposition \ref{proposition:compatible-g2-structures}. By construction, if $\Phi' \in \mathcal{F}[\Phi; \mathbb{S}]$, then $\mathcal{F}[\Phi'; \mathbb{S}] = \mathcal{F}[\Phi; \mathbb{S}]$. \begin{Proposition}\label{proposition:compatible-3-form-Xi} For any $\G_2$-structure $\Phi'\! \in\!
\mathcal{F}[\Phi; \mathbb{S}]$, the endomorphism $(\mathbb{K}')^A{}_B := -\mathbb{S}^C (\Phi')_C{}^A{}_B\!$ coincides with $\mathbb{K}^A{}_B := -\mathbb{S}^C \Phi_C{}^A{}_B$. \end{Proposition} \begin{proof}Proposition \ref{proposition:compatible-g2-structures} gives that $\Phi' = \Phi + \bar A \Phi_I + B \Phi_J$ for some constants $\bar A, B$, so \eqref{equation:contraction-S-PhiIJK} gives that $\mathbb{K}' := -\mathbb{S} \hook \Phi' = -\mathbb{S} \hook (\Phi + \bar A \Phi_I + B \Phi_J) = -\mathbb{S} \hook \Phi = \mathbb{K}$. \end{proof} We can readily parameterize the families $\mathcal{F}[\Phi; \mathbb{S}]$ of $\G_2$-structures. It is convenient henceforth to split cases according to the causality type of $\mathbb{S}$, that is, according to $\varepsilon$. If $\varepsilon \neq 0$, then $\Phi = -\varepsilon (\Phi_I + \Phi_K)$, so in terms of $A := \bar A - \varepsilon$ and $B$, $\Phi' = A \Phi_I + B \Phi_J - \varepsilon \Phi_K$ and the condition on the coef\/f\/icients is $A^2 - \varepsilon B^2 = 1$. If $\varepsilon = -1$, then $A^2 + B^2 = 1$, and so we can parameterize $\mathcal{F}[\Phi; \mathbb{S}]$ by{\samepage \begin{gather}\label{equation:parameterization-Phi-upsilon} \Phi_{\upsilon} := (\cos \upsilon) \Phi_I + (\sin \upsilon) \Phi_J + \Phi_K . \end{gather} The parameterization descends to a bijection $\mathbb{R} / 2\pi \mathbb{Z} \cong \mathbb{S}^1 \leftrightarrow \mathcal{F}[\Phi; \mathbb{S}]$, and $\Phi_0 = \Phi$.} If $\varepsilon = +1$, then $A^2 - B^2 = 1$, and so we can parameterize $\mathcal{F}[\Phi; \mathbb{S}]$ by \begin{gather}\label{equation:parameterization-Phi-t} \Phi_t^{\mp} := (\mp \cosh t) \Phi_I + (\sinh t) \Phi_J - \Phi_K \end{gather} and $\Phi_0^- = \Phi$. If $\varepsilon = 0$, the compatibility conditions simplify to $2 \bar A + B^2 = 0$, so we can parameterize $\mathcal{F}[\Phi; \mathbb{S}]$ by \begin{gather}\label{equation:parameterization-Phi-s} \Phi_s := \Phi - \tfrac{1}{2} s^2 \Phi_I + s \Phi_J \end{gather} and $\Phi_0 = \Phi$. Each of the above parameterizations $\Phi_u$ satisf\/ies $\smash{\left.\frac{d}{du}\right\vert_0 \Phi_u = \Phi_J}$, and the parameterizations in the latter two cases are bijective. In the nonisotropic cases, we can encode the compatible $\G_2$-structures ef\/f\/iciently in terms of the $\varepsilon$-complex volume forms in Proposition \ref{proposition:vareps-complex-volume-forms}. \begin{Proposition} Suppose $\mathbb{S}$ is nonisotropic, and let $\Psi_{(A, B)} \in \Gamma(\Lambda^3_{\mathbb{C}_{\varepsilon}} \mathbb{W})$, $A^2 - \varepsilon B^2 = 1$, denote the corresponding $1$-parameter family of $\varepsilon$-complex volume forms defined pointwise in Proposition~{\rm \ref{proposition:vareps-complex-volume-forms}}. Then, $\mathcal{F}[\Phi; \mathbb{S}]$ consists of the $\G_2$-structures \begin{gather*} \Phi_{(A, B)} := \Pi_{\mathbb{W}}^* \Re \Psi_{(A, B)} + \varepsilon \Phi_K , \end{gather*} $A^2 - \varepsilon B^2 = 1$. $($Here, $\Pi_{\mathbb{W}}$ is the orthogonal projection $\mathbb{V} \to \mathbb{W}.)$ \end{Proposition} \begin{proof} This follows immediately from the appearance of the condition $A^2 - \varepsilon B^2 = 1$ in the discussion after the proof of Proposition~\ref{proposition:compatible-3-form-Xi} and the form of $\Psi_{(A, B)}$. 
\end{proof} \subsection[Conformally isometric (2,3,5) distributions]{Conformally isometric $\boldsymbol{(2, 3, 5)}$ distributions}\label{subsection:conformally-isometric-235-distributions} We now transfer the results of Proposition~\ref{proposition:compatible-g2-structures} to the level of parallel sections of conformal tractor bundles, thereby proving Theorem~B: \begin{proof}[Proof of Theorem~B] The oriented $(2, 3, 5)$ distributions $\mathbf{D}'$ conformally isometric to $\mathbf{D}$ are precisely those for which the corresponding parallel tractor $3$-form $\Phi'$ is compatible with the parallel tractor metric $H_{\Phi}$ and orientation $[\epsilon_{\Phi}]$. Transferring the content of Proposition~\ref{proposition:compatible-g2-structures} to the tractor setting gives that these are precisely the $3$-forms $\Phi' := \Phi + \bar A \Phi_I + B \Phi_J \in \Gamma(\Lambda^3 \mathcal{V}^*)$, with $(\bar A, B)$ constrained as in Proposition~\ref{proposition:compatible-g2-structures}, and applying $\Pi_0^{\Lambda^3 \mathcal{V}^*}$ \eqref{equation:projection-operator-alternating} gives that the underlying normal conformal Killing $2$-forms are $\phi' := \phi + \bar A I + B J \in \Gamma(\Lambda^2 T^* M[3])$. Substituting for~$I$,~$J$ respectively using \eqref{equation:I-sigma-phi} and~\eqref{equation:J-sigma-phi} yields the formula~\eqref{equation:family-conformal-Killing-2-forms}. \end{proof} We reuse the notation $\mathcal{F}[\Phi; \mathbb{S}]$ for the $1$-parameter family of parallel tractor $\G_2$-structures def\/ined pointwise by~\eqref{equation:family-G2-structures} by a parallel tractor $3$-form $\Phi$ and a parallel, nonzero standard tractor~$\mathbb{S}$. By analogy, we denote by $\mathcal{D}[\mathbf{D}; \sigma]$ the $1$-parameter family of conformally isometric oriented $(2, 3, 5)$ distributions determined by $\mathbf{D}$ and $\sigma$ as in Theorem~B; we say that the distributions in the family are \textit{related by $\sigma$}. Again, if $\mathbf{D}' \in \mathcal{D}[\mathbf{D}; \sigma]$, then by construction $\mathcal{D}[\mathbf{D}'; \sigma] = \mathcal{D}[\mathbf{D}; \sigma]$. Henceforth, $\mathcal{D}$ denotes a family $\mathcal{D}[\mathbf{D}; \sigma]$ for some $\mathbf{D}$ and $\sigma$. \begin{Proposition}\label{proposition:conformal-Killing-field-distribution-family}\looseness=-1 Let $\mathbf{D}$ be an oriented $(2, 3, 5)$ distribution and $\sigma$ an almost Einstein scale of $\mathbf{c}_{\mathbf{D}}$. For any $\mathbf{D}' \in \mathcal{D}[\mathbf{D}; \sigma]$, the conformal Killing fields~$\xi$ and~$\xi'$ respectively determined by $(\mathbf{D}, \sigma)$ and $(\mathbf{D}', \sigma)$ coincide. In particular, $\xi$, the line field $\mathbf{L} \subset TM \vert_{M_{\xi}}$ spanned by its restriction, and the orthogonal hyperplane field $\mathbf{C} := \mathbf{L}^{\perp} \subset TM \vert_{M_{\xi}}$ depend only on the family $\mathcal{D}[\mathbf{D}; \sigma]$ and not on~$\mathbf{D}$. \end{Proposition} \begin{proof}Let $\Phi, \Phi'$ denote the parallel tractor $\G_2$-structures corresponding respectively to $\mathbf{D}, \mathbf{D}'$. Translating Proposition~\ref{proposition:compatible-3-form-Xi} to the tractor bundle setting gives that $\mathbb{K}' := -L_0^{\mathcal{V}}(\sigma) \hook \Phi'$ and $\mathbb{K} := -L_0^{\mathcal{V}}(\sigma) \hook \Phi$ coincide, and hence $\xi' = \Pi_0^{\mathcal{A}}(\mathbb{K}') = \Pi_0^{\mathcal{A}}(\mathbb{K}) = \xi$. \end{proof} \begin{Corollary}\label{corollary:containment-family-hyperplane-distribution} Let $\mathcal{D}$ be a $1$-parameter family of conformally isometric oriented $(2, 3, 5)$ distributions related by an almost Einstein scale.
Every distribution $\mathbf{D} \in \mathcal{D}$ satisfies $\mathbf{D} \subset \mathbf{C}$. \end{Corollary} \subsubsection{Parameterizations of conformally isometric distributions}\label{subsubsection:parameterizations-isometric} Now, given an oriented $(2, 3, 5)$ distribution $\mathbf{D}$ and a nonzero almost Einstein scale $\sigma$ of $\mathbf{c}_{\mathbf{D}}$, we can explicitly parameterize the family $\mathcal{D}[\mathbf{D}; \sigma]$ they determine by passing to the projecting parts $\smash{\phi' := \Pi_0^{\Lambda^3 \mathcal{V}^*}(\Phi')}$ of the corresponding parallel tractor $\G_2$-structures $\Phi' \in \mathcal{F}[\Phi; \mathbb{S}]$. To do so, it is convenient to split cases according to the sign of the Einstein constant \eqref{equation:Einstein-constant} of $\sigma$. As usual we denote by $\Phi \in \Gamma(\Lambda^3 \mathcal{V}^*)$ the parallel tractor $\G_2$-structure corresponding to~$\mathbf{D}$ and scale~$\sigma$ (by a~constant) so that $\mathbb{S} := L_0^{\mathcal{V}}(\sigma)$ satisf\/ies $\varepsilon := -H_{\Phi}(\mathbb{S}, \mathbb{S}) \in \{-1, 0, +1\}$. For $\varepsilon = -1$, $\mathcal{D}[\mathbf{D}; \sigma]$ consists of the distributions $\mathbf{D}_{\upsilon}$ corresponding to the normal conformal Killing $2$-forms \begin{gather}\label{equation:phi-parameterization-Ricci-negative} \phi_{\upsilon} := \Pi_0^{\Lambda^3 \mathcal{V}^*}(\Phi_{\upsilon}) = (\cos \upsilon) I + (\sin \upsilon) J + K . \end{gather} As for the corresponding family $\mathcal{F}[\Phi; \mathbb{S}]$ of parallel tractor $\G_2$-structures, this parameterization descends to a bijection $\mathbb{R} / 2\pi \mathbb{Z} \cong \mathbb{S}^1 \leftrightarrow \mathcal{D}[\mathbf{D}; \sigma]$. For $\varepsilon = +1$, $\mathcal{D}[\mathbf{D}; \sigma]$ consists of the distributions $\mathbf{D}_t^{\mp}$ corresponding to \begin{gather}\label{equation:phi-parameterization-Ricci-positive} \phi_t^{\mp} := \Pi_0^{\Lambda^3 \mathcal{V}^*}(\Phi_t^{\mp}) = (\mp \cosh t) I + (\sinh t) J - K . \end{gather} Each value of the parameter $(\pm, t)$ corresponds to a distinct distribution. For $\varepsilon = 0$, $\mathcal{D}[\mathbf{D}; \sigma]$ consists of the distributions $\mathbf{D}_s$ corresponding to \begin{gather}\label{equation:phi-parameterization-Ricci-flat} \phi_s := \Pi_0^{\Lambda^3 \mathcal{V}^*}(\Phi_s) = \phi - \tfrac{1}{2} s^2 I + s J . \end{gather} Each value of the parameter $s$ corresponds to a distinct distribution. These parameterizations are distinguished: Locally they agree (up to a constant reparameterization) with the f\/low of the distinguished conformal Killing f\/ield $\xi$ determined by~$\mathbf{D}$ and~$\sigma$. \begin{Proposition} Let $\mathbf{D}$ be an oriented $(2, 3, 5)$ distribution and $\sigma$ an almost Einstein scale of~$\mathbf{c}_{\mathbf{D}}$. Denote $\xi := \iota_7(\sigma)$ and denote its flow by $\Xi_{\bullet}$. Then, for each $x \in M$ there is a neighbor\-hood~$U$ of~$x$ and an interval~$T$ containing $0$ such that: \begin{enumerate}\itemsep=0pt \item[$1)$] $(T \Xi_{\upsilon / 3}) \cdot \mathbf{D} \vert_U = \mathbf{D}_{\upsilon}\vert_U$ for all $\upsilon \in T$, if $\sigma$ is Ricci-negative, \item[$2)$] $(T \Xi_{t / 3}) \cdot \mathbf{D} \vert_U = \mathbf{D}_t^-\vert_U$ and $(T \Xi_{t / 3}) \cdot \mathbf{D}_0^+ \vert_U = \mathbf{D}_t^+\vert_U$ for all $t \in T$, if $\sigma$ is Ricci-positive, and \item[$3)$] $(T \Xi_{s / 3}) \cdot \mathbf{D} \vert_U = \mathbf{D}_s\vert_U$ for all $s \in T$, if~$\sigma$ is Ricci-flat.
\end{enumerate} \end{Proposition} \begin{proof} In the Ricci-negative case this follows immediately from the facts that the normal conformal Killing $2$-form $\phi$ corresponding to $\mathbf{D}$ satisf\/ies $\mathcal{L}_{\xi} \phi = 3 J$~\eqref{equation:Lie-derivatives-objects} and that the $1$-parameter family of normal conformal Killing $2$-forms $\phi_{\upsilon}$ corresponding to the distributions $\mathbf{D}_{\upsilon}$ satisf\/ies $\smash{\left.\frac{d}{d\upsilon}\right\vert_0 \phi_{\upsilon} = J}$. The other cases are analogous. \end{proof} \subsubsection{Additional induced distributions} \label{subsubsection:additional-distributions} An almost Einstein scale for an oriented $(2, 3, 5)$ distribution $(M, \mathbf{D})$ naturally determines one or more additional $2$-plane distributions on $M$, depending on the sign of the Einstein constant. \begin{Definition} We say that two oriented $(2, 3, 5)$ distributions $\mathbf{D}, \mathbf{D}'$ in a given $1$-parameter family $\mathcal{D}$ of conformally isometric oriented $(2, 3, 5)$ distributions are \textit{antipodal} if\/f (1)~they are distinct, and (2)~their respective corresponding parallel tractor $\G_2$-structures, $\Phi$, $\Phi'$, together satisfy $\Phi \wedge \Phi' = 0$. \end{Definition} This condition is visibly symmetric in $\mathbf{D}, \mathbf{D}'$. Note that rearranging~\eqref{equation:pi-3-7} and passing to the tractor bundle setting gives that $\Phi \wedge \Phi' = 4 \astPhi \pi^3_7(\Phi')$, where $\pi^3_7$ denotes the bundle map $\Lambda^3 \mathcal{V}^* \to \mathcal{V}$ associated to the algebraic map of the same name in that equation. \begin{Proposition}\label{proposition:antipodal-distributions} Let $\mathcal{D}$ be a $1$-parameter family of conformally isometric oriented $(2, 3, 5)$ distributions related by an almost Einstein scale and fix $\mathbf{D} \in \mathcal{D}$. \begin{itemize}\itemsep=0pt \item If the almost Einstein scale determining $\mathcal{D}$ is non-Ricci-flat, there is precisely one distribution $\mathbf{E}$ antipodal to~$\mathbf{D}$. \item If the almost Einstein scale determining $\mathcal{D}$ is Ricci-flat, there are no distributions antipodal to~$\mathbf{D}$. \end{itemize} \end{Proposition} \begin{proof}Let $\Phi$ denote the parallel tractor $\G_2$-structure corresponding to $\mathbf{D}$ and let $\mathbb{S}$ denote the parallel standard tractor corresponding to the almost Einstein scale. Any $\mathbf{D}' \in \mathcal{D}$ corresponds to a~compatible parallel tractor $\G_2$-structure $\Phi' \in \mathcal{F}[\Phi; \mathbb{S}]$ and by Proposition~\ref{proposition:compatible-g2-structures} we can write $\Phi' = \Phi + \bar A \Phi_I + B \Phi_J$, so $\Phi \wedge \Phi' = 4 \astPhi \pi^3_7(\Phi + \bar A \Phi_I + B \Phi_J)$. Since $\Phi \in \Lambda^3_1$, $\Phi_I \in \Lambda^3_1 \oplus \Lambda^3_{27}$, and $\Phi_J \in \Lambda^3_7$ (see~\eqref{equation:definition-Phi-I}, \eqref{equation:definition-Phi-J}), where $\Lambda^3_{\bullet} \subset \Lambda^3 \mathcal{V}^*$ denote the subbundles associated to the $\G_2$-representations $\Lambda^3_{\bullet} \subset \Lambda^3 \mathbb{V}^*$, Schur's Lemma implies that $\pi^3_7(\Phi) = \pi^3_7(\Phi_I) = 0$. On the other hand, $\pi^3_7(\Phi_J) = \mathbb{S} \neq 0$, so $\Phi \wedge \Phi' = 4 B \astPhi \mathbb{S}$, which is zero if\/f $B = 0$.
If $\varepsilon = -1$, then in the parameterization $\{\Phi_{\upsilon}\}$ \eqref{equation:parameterization-Phi-upsilon}, the coef\/f\/icient of $\Phi_J$ is $\sin \upsilon$, and this vanishes only for $\Phi_0 = \Phi = \Phi_I + \Phi_K$ and $\Phi_{\pi} = -\Phi_I + \Phi_K$, corresponding to the distributions $\mathbf{D} = \mathbf{D}_0$ and $\mathbf{E} := \mathbf{D}_{\pi}$. If $\varepsilon = +1$, then in the parameterization $\{\Phi_t^{\mp}\}$ \eqref{equation:parameterization-Phi-t}, the coef\/f\/icient is $\sinh t$, and this vanishes only for $\Phi_0^- = -\Phi_I - \Phi_K = \Phi$ and $\Phi_0^+ = \Phi_I - \Phi_K$, corresponding respectively to the distributions $\mathbf{D} = \mathbf{D}_0^-$ and $\mathbf{E} := \mathbf{D}_0^+$. Finally, if $\varepsilon = 0$, then in the parameterization $\{\Phi_s\}$ \eqref{equation:parameterization-Phi-s}, the coef\/f\/icient is $s$, and this vanishes only for $\Phi_0 = \Phi$ itself, corresponding to $\mathbf{D} = \mathbf{D}_0$. \end{proof} Though in the Ricci-f\/lat case there are no antipodal distributions (see Proposition \ref{proposition:antipodal-distributions}), there is in that case a suitable replacement: Given an oriented $(2, 3, 5)$ distribution $(M, \mathbf{D})$ and an almost Einstein scale $\sigma$ of $\mathbf{c}_{\mathbf{D}}$, the family $2 s^{-2} \phi_s$ converges to the normal conformal Killing $2$-form $\phi_{\infty} = -I = K$ as $s \to \pm \infty$. By continuity, $\phi_{\infty}$ is decomposable and hence def\/ines, on the set where $\phi_{\infty}$ does not vanish, a distinguished $2$-plane distribution $\mathbf{E}$ called the \textit{null-complementary distribution} (for $\mathbf{D}$ and $\sigma$), and the distribution so def\/ined is the same for every $\mathbf{D} \in \mathcal{D}[\mathbf{D}; \sigma]$. (This is analogous to the notion of antipodal distribution in that both antipodal and null-complementary distributions are determined, up to a nonzero multiple, by the decomposable conformal Killing form $I - K$.) Corollary \ref{corollary:null-complementary-distribution-set-of-definition} gives a precise description of the set on which $\phi_{\infty}$ does not vanish and hence on which $\mathbf{E}$ is def\/ined; this set turns out to be the complement of a set that (if nonempty) has codimension $\geq 3$. Corollary \ref{corollary:null-complementary-intgrable} below shows that $\mathbf{E}$ is integrable (and hence not a $(2, 3, 5)$ distribution). \begin{Remark}\label{remark:space-and-time-oriented} Since $\G_2$ is connected, it is contained in the connected component $\SO_+(3, 4)$ of the identity of $\SO(3, 4)$, and hence a $\G_2$-structure determines space- and time-orientations on the underlying vector space. If we replace $\SO(3, 4)$ with $\SO_+(3, 4)$ in the description of the construction $\mathbf{D} \rightsquigarrow \mathbf{c}_{\mathbf{D}}$ in Section~\ref{subsection:235-conformal-structure}, the construction assigns to an oriented $(2, 3, 5)$ distribu\-tion~$\mathbf{D}$ the (oriented) conformal structure $\mathbf{c}_{\mathbf{D}}$ along with space and time orientations. Suppose $\mathbf{c}_{\mathbf{D}}$ admits a nonzero almost Einstein scale $\sigma$. If $\sigma$ is Ricci-negative or Ricci-f\/lat, then the family $\mathcal{D} := \mathcal{D}[\mathbf{D}; \sigma]$ (parameterized respectively as in~\eqref{equation:phi-parameterization-Ricci-negative} or~\eqref{equation:phi-parameterization-Ricci-flat}) is connected, so the space and time orientations of~$\mathbf{c}_{\mathbf{D}}$ determined by the distributions in~$\mathcal{D}$ all coincide.
\looseness=-1 If instead $\sigma$ is Ricci-positive, then $\mathcal{D}$ consists of two connected components. Again, by connectedness, the distributions $\mathbf{D}_t^-$, which comprise the component containing $\mathbf{D}_0^- = \mathbf{D}$ (in the notation of Section~\ref{subsubsection:parameterizations-isometric}) all determine the same space and time orientations of $\mathbf{c}_{\mathbf{D}}$, but the distributions $\mathbf{D}_t^+$, which comprise the other component and which include the antipodal distribution $\mathbf{D}_0^+ = \mathbf{E}$, determine the space and time orientations opposite those determined by~$\mathbf{D}$. \end{Remark} In the case that $\sigma$ is Ricci-positive, $\mathbf{D}$ and $\sigma$ determine two additional distinguished distributions: In the notation of Section~\ref{subsubsection:parameterizations-isometric}, the family $(\sech t) \phi_t^{\mp}$ converges to the normal conformal Killing $2$-form $\mp I \pm' J$ as $t \to \pm' \infty$. By continuity $\phi_{\mp \infty} := \pm I + J$ are decomposable and hence determine distributions $\mathbf{D}_{\mp \infty}$ on the sets where they respectively do not vanish. Proposition~\ref{proposition:vanishing-phi-pm-infinity} below describes precisely these sets (their complements, if nonempty, have codimension~$3$). By construction $\mathbf{D}_{\mp \infty}$ depend only on the family $\mathcal{D}[\mathbf{D}; \sigma]$ and not on $\mathbf{D}$ itself. Computing in an adapted frame shows that the corresponding parallel tractor $3$-forms $\smash{\Phi_{\mp \infty} := L_0^{\Lambda^3 \mathcal{V}^*}(\phi_{\mp \infty})}$ are not generic (they both annihilate $L_0^{\mathcal{V}}(\sigma)$); that is, they are not $\G_2$-structures, and hence the distribu\-tions~$\mathbf{D}_{\mp \infty}$ are not $(2, 3, 5)$ distributions. \subsubsection{Recovering the Einstein scale relating conformally isometric distributions}\label{subsubsection:recovering-Einstein-scale} Given two distinct, oriented $(2, 3, 5)$ distributions $\mathbf{D}$, $\mathbf{D}'$ for which $\mathbf{c}_{\mathbf{D}} = \mathbf{c}_{\mathbf{D}'}$, we can reconstruct explicitly an almost Einstein scale $\sigma \in \Gamma(\mathcal{E}[1])$ of $\mathbf{c}_{\mathbf{D}}$ for which $\mathbf{D}' \in \mathcal{D}[\mathbf{D}; \sigma]$. If we require that the corresponding parallel standard tractor $\mathbb{S} := L_0^{\mathcal{V}}(\sigma)$ satisf\/ies $\varepsilon := -H_{\Phi}(\mathbb{S}, \mathbb{S}) \in \{-1, 0, +1\}$, then $\mathbf{D}$ and $\mathbf{D}'$ together determine $\varepsilon$. If $\varepsilon \in \{\pm 1\}$, $\sigma$ is determined up to sign. If $\varepsilon = 0$, then we may choose $\sigma$ so that $\mathbf{D}' = \mathbf{D}_1$, where the right-hand side refers to the parameterization in Section~\ref{subsubsection:parameterizations-isometric}, and this additional condition determines $\sigma$. The maps $\pi^3_1 \colon \Lambda^3 \mathcal{V}^* \to \mathcal{E}$, \mbox{$\pi^3_7 \colon \Lambda^3 \mathcal{V}^* \to \mathcal{V}$}, $\pi^3_{27} \colon \Lambda^3 \mathcal{V}^* \to S^2_{\circ} \mathcal{V}^*$ denote the bundle maps respectively associated to \eqref{equation:pi-3-1}, \eqref{equation:pi-3-7}, \eqref{equation:pi-3-27}. \begin{algorithm}\label{algorithm:recovery} \textit{Input}: Fix distinct, oriented $(2, 3, 5)$ distributions $\mathbf{D}$, $\mathbf{D}'$ on $M$ such that \mbox{$\mathbf{c}_{\mathbf{D}} = \mathbf{c}_{\mathbf{D}'}$}. Let $\Phi, \Phi'$ denote the parallel tractor $\G_2$-structures respectively corresponding to $\mathbf{D}$, $\mathbf{D}'$, and define $\mathbb{T} := \pi^3_7(\Phi')$.
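For orientation, we note a consistency check explaining the case division below; it uses only the identities from the proof of Proposition~\ref{proposition:antipodal-distributions}, and the constant $c$ appearing here depends only on the normalization of $\pi^3_7$, which we do not f\/ix anew. Writing $\Phi' = \Phi + \bar A \Phi_I + B \Phi_J$ as in Proposition~\ref{proposition:compatible-g2-structures}, Schur's Lemma gives $\pi^3_7(\Phi) = \pi^3_7(\Phi_I) = 0$, so $\mathbb{T} = B\, \pi^3_7(\Phi_J) = c B\, \mathbb{S}$ for some nonzero constant $c$, and hence \begin{gather*} H_{\Phi}(\mathbb{T}, \mathbb{T}) = c^2 B^2 H_{\Phi}(\mathbb{S}, \mathbb{S}) = -c^2 \varepsilon B^2 . \end{gather*} In particular, the sign of $H_{\Phi}(\mathbb{T}, \mathbb{T})$ recovers $-\varepsilon$ whenever $B \neq 0$, and $\mathbb{T} = 0$ if\/f $B = 0$, that is, if\/f $\mathbf{D}$ and $\mathbf{D}'$ are antipodal.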
In each case, $\sigma := \Pi_0^{\mathcal{V}}(\mathbb{S})$. \begin{itemize}\itemsep=0pt \item If $H_{\Phi}(\mathbb{T}, \mathbb{T}) < 0$, set $s := \sqrt{-H_{\Phi}(\mathbb{T}, \mathbb{T})}$, so that $\mathbb{S} := s^{-1} \mathbb{T}$ satisfies $-H_{\Phi}(\mathbb{S}, \mathbb{S}) = +1$. Then, $\phi' = \phi_{\arsinh s} = \mp\sqrt{s^2 + 1} I + s J - K$, where $\mp$ is the negative of the sign of $\pi^3_1(\Phi')$. \item If $H_{\Phi}(\mathbb{T}, \mathbb{T}) > 0$, set $s := -\sqrt{H_{\Phi}(\mathbb{T}, \mathbb{T})}$, so that $\mathbb{S} := s^{-1} \mathbb{T}$ satisfies $-H_{\Phi}(\mathbb{S}, \mathbb{S}) = -1$. Then, $\phi' = \phi_{\upsilon} = c I + s J + K$, where $c := \tfrac{1}{4} [7 \pi^3_1(\Phi') - 3]$ and $\upsilon$ is an angle that satisfies $\cos \upsilon = c$, $\sin \upsilon = s$. \item If $H_{\Phi}(\mathbb{T}, \mathbb{T}) = 0$ but $\mathbb{T} \neq 0$, set $\mathbb{S} := -\tfrac{1}{4} \mathbb{T}$, giving $\phi' = \phi_1 = \phi - \tfrac{1}{2} I + J$. \item If $\mathbb{T} = 0$, then $($by definition$)$ the distributions are antipodal. Now, $\pi^3_{27}(\Phi') + \tfrac{1}{7} H_{\Phi} = \pm \mathbb{S}^{\flat} \otimes \mathbb{S}^{\flat}$ for a unique choice of $\pm$ and a parallel tractor $\mathbb{S} \in \Gamma(\mathcal{V})$ determined up to sign. If the equality holds for the sign $+$, then $\varepsilon = -1$ and $\phi' = \phi_{\pi}$. If the equality holds for $-$, then $\varepsilon = 1$ and $\phi' = \phi_0^+$. \end{itemize} \end{algorithm} Since all of the involved tractor objects are parallel, the reconstruction problem is equivalent to the algebraic one of recovering a normalized vector~$\mathbb{S}$ from $\G_2$-structures $\Phi, \Phi'$ on a $7$-dimensional real vector space inducing the same $\SO(3, 4)$-structure such that $\Phi' \in \mathcal{F}[\Phi; \mathbb{S}]$. One can thus verify the algorithm by computing in an adapted basis. \section{The curved orbit decomposition}\label{section:curved-orbit-decomposition} In this section, we treat the curved orbit decomposition of an oriented $(2, 3, 5)$ distribution determined by an almost Einstein scale, that is, the decomposition determined by the corresponding holonomy reduction of a parabolic geometry of type $(\G_2, Q)$ to the stabilizer $S$ of a nonzero ray in the standard representation $\mathbb{V}$ of $\G_2$. In Section~\ref{subsection:primer-curved-orbit-decompositions} we brief\/ly review the general theory of curved orbit decompositions and the decomposition of an oriented conformal manifold determined by an almost Einstein scale. In Section~\ref{subsection:orbit-decomposition-flat-model} we determine the orbit decomposition of the f\/lat model. In Section~\ref{subsection:characterizations-curved-orbits} we state and prove geometric characterizations of the curved orbits, both in terms of tractor data and in terms of data on the base manifold. In the remaining subsections we elaborate on the induced geometry determined on each of the curved orbits, which among other things yields proofs of Theorems~D$_-$, D$_+$, and D$_0$. \subsection{The general theory of curved orbit decompositions}\label{subsection:primer-curved-orbit-decompositions} Here we follow~\cite{CGH}.
If the holonomy $\Hol(\omega)$ of a Cartan geometry $(\mathcal{G}, \omega)$ is a proper subgroup of~$G$, the principal connection $\hat\omega$ extending $\omega$ (see Section~\ref{subsubsection:holonomy}) can be reduced: If $H \leq G$ is a closed subgroup that contains some group in the conjugacy class $\Hol(\omega)$, $\smash{\hat\mathcal{G}} := \mathcal{G} \times_P G$ admits a reduction $j\colon \mathcal{H} \to \smash{\hat\mathcal{G}}$ of structure group to $H$, and $j^* \hat\omega$ is a principal connection on~$\mathcal{H}$. Such a reduction can be viewed equivalently as a section of the associated f\/iber bundle $\smash{\hat\mathcal{G}} / H := \smash{\hat\mathcal{G}} \times_G (G / H)$. We henceforth work with an abstract $G$-homogeneous space $\mathcal{O}$ instead of~$G / H$, which makes some exposition more convenient, and we call the corresponding $G$-equivariant section $s \colon \smash{\hat\mathcal{G}} \to \mathcal{O}$ a~\textit{holonomy reduction of type $\mathcal{O}$}. Note that we can identify $\hat\mathcal{G} \times_G \mathcal{O}$ with $\mathcal{G} \times_P \mathcal{O}$. Given a Cartan geometry $(\mathcal{G} \to M, \omega)$ of type $(G, P)$ and a holonomy reduction thereof of type~$\mathcal{O}$ corresponding to a section $s\colon \hat\mathcal{G} \to \mathcal{O}$, we def\/ine for each $x \in M$ the \textit{$P$-type of $x$ $($with respect to~$s)$} to be the $P$-orbit $s(\mathcal{G}_x) \subseteq \mathcal{O}$. This partitions $M$ by $P$-type into a disjoint union $\smash{\bigcup_{a \in P \backslash \mathcal{O}} M_a}$ of so-called \textit{curved orbits} parameterized by the space $P \backslash \mathcal{O}$ of $P$-orbits of~$\mathcal{O}$. By construction, the $P$-type decomposition of the f\/lat model~$G / P$ coincides with the decomposition of~$G / P$ into $H$-orbits (for any particular choice of conjugacy class representative~$H$). Put another way, the $P$-types correspond to the possible intersections of~$H$ and~$P$ in~$G$ up to conjugacy. The central result of the theory of curved orbit decompositions is that each curved orbit inherits from the Cartan connection $\omega$ and the holonomy reduction $s$ an appropriate Cartan geometry. We need some notation to state the result: Given a~$G$-homogeneous space $\mathcal{O}$ and elements $x, x' \in \mathcal{O}$, we have $x' = g \cdot x$ for some $g \in G$, and their respective stabilizer subgroups~$G_x$,~$G_{x'}$ are related by $G_{x'} = g G_x g^{-1}$. If~$x$,~$x'$ are in the same $P$-orbit, we can choose $g \in P$, and if we denote $P_x := G_x \cap P$, we likewise have $P_{x'} = g P_x g^{-1}$. Thus, as groups endowed with subgroups, $(G_x, P_x) \cong (G_{x'}, P_{x'})$. Given an orbit $a \in P \backslash \mathcal{O}$, we denote by $(H, P_a)$ an abstract representative of the isomorphism class of groups so endowed. \begin{Theorem}[{\cite[Theorem 2.6]{CGH}}]\label{theorem:curved-orbit-decomposition} Let $(\mathcal{G} \to M, \omega)$ be a parabolic $($more generally, Cartan$)$ geometry of type $(G, P)$ with a holonomy reduction of type $\mathcal{O}$. Then, for each orbit $a \in P \backslash \mathcal{O}$, there is a principal bundle embedding $j_a \colon \mathcal{G}_a \hookrightarrow \mathcal{G}\vert_{M_a}$, and $(\mathcal{G}_a \to M_a, \omega_a)$ is a Cartan geometry of type $(H, P_a)$ on the curved orbit~$M_a$, where $\omega_a := j_a^* \omega$.
\end{Theorem} Informally, since each $P$-type corresponds to an intersection of $H$ and $P$ up to conjugacy in~$G$, for each such intersection $H \cap P$ (up to conjugacy) the induced Cartan geometry on the corresponding curved orbit has type~$(H, H \cap P)$. \begin{Example}[almost Einstein scales, {\cite[Theorem~3.5]{CGH}}]\label{example:curved-orbit-decomposition-almost-Einstein} Given a conformal structure $(M, \mathbf{c})$ of signature $(p, q)$, $n := p + q \geq 4$, by Theorem \ref{theorem:almost-Einstein-bijection} a nonzero almost Einstein scale $\sigma \in \Gamma(\mathcal{E}[1])$ corresponds to a nonzero parallel standard tractor $\mathbb{S} := L_0^{\mathcal{V}}(\sigma)$ and hence determines a holonomy reduction to the stabilizer subgroup $\bar{S}$ of a nonzero vector in the standard representation $\mathbb{V}$ of $\SO(p + 1, q + 1)$; the conjugacy class of $\bar{S}$ depends on the causality type of $\mathbb{S}$. If $\mathbb{S}$ is nonisotropic, there are three curved orbits, characterized by the sign of $\sigma$. The union of the open orbits is the complement $M - \Sigma$ of the zero locus \mbox{$\Sigma := \{x \in M \colon \sigma_x = 0\}$}, and the reduced Cartan geometries on these orbits are equivalent to the non-Ricci-f\/lat Einstein met\-ric~$\sigma^{-2} \mathbf{g}$ of signature~$(p, q)$. If $\mathbb{S}$ is spacelike (timelike) the reduced Cartan geometry on the hypersurface curved orbit~$\Sigma$ is a~normal parabolic geometry of type $(\SO(p, q + 1), \bar P)$ ($(\SO(p + 1, q), \bar P)$), which corresponds to an oriented conformal structure $\mathbf{c}_{\Sigma}$ of signature $(p - 1, q)$ \mbox{($(p, q - 1)$)}. If $\mathbb{S}$ is isotropic, then again there are two open orbits, and on the union $M - \Sigma$ of these, the reduced Cartan geometry is equivalent to the Ricci-f\/lat metric $\sigma^{-2} \mathbf{g}$ of signature $(p, q)$. In this case, $\Sigma$ decomposes into three curved orbits: $\{x \in M \colon \sigma_x = 0, (\nabla \sigma)_x \neq 0\}$, $M_0^+ := \{x \in M \colon$ $\sigma_x = 0, (\nabla \sigma)_x = 0, (\Delta \sigma)_x < 0\}$, and $M_0^- := \{x \in M \colon \sigma_x = 0, (\nabla \sigma)_x = 0, (\Delta \sigma)_x > 0\}$; here~$\Delta \sigma$ denotes the Laplacian~$\sigma_{,a}{}^a$. The curved orbits $M_0^{\pm}$ are discrete, but the f\/irst of these curved orbits is a hypersurface that naturally (locally) f\/ibers by the integral curves of the line f\/ield~$\mathbf{S}$ spanned by $\sigma^{,a}$, and the (local) leaf space thereof inherits a conformal structure of signature $(p - 1, q - 1)$. \end{Example} \begin{Example}[$(2, 3, 5)$ conformal structures] By Theorem \ref{theorem:2-3-5-holonomy-characterization}, an oriented conformal structure $(M, \mathbf{c})$ of signature $(2, 3)$ is induced by a $(2, 3, 5)$ distribution if\/f it admits a holonomy reduction to $\G_2$. Since $\G_2$ acts transitively on the f\/lat model $\SO(3, 4) / \bar P \cong \mathbb{S}^2 \times \mathbb{S}^3$, the holonomy reduction to $\G_2$ determines only a single curved orbit. \end{Example} \subsection[The orbit decomposition of the flat model M]{The orbit decomposition of the f\/lat model $\boldsymbol{\mathcal{M}}$}\label{subsection:orbit-decomposition-flat-model} In this subsection, we determine the orbits and stabilizer subgroups of the action of $S$ on the f\/lat model $\mathcal{M} := \G_2 / Q \cong \mathbb{S}^2 \times \mathbb{S}^3$, which by Theorem~\ref{theorem:curved-orbit-decomposition} determines the curved orbit decomposition of a parabolic geometry of type $(\G_2, Q)$.
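For orientation, the case analysis below will show (cf.~Proposition~\ref{proposition:stabilizer-subgroup}) that the isomorphism type of the reduced group $S$ depends only on the causality type of $\mathbb{S}$, that is, on $\varepsilon$: \begin{gather*} \varepsilon = -1\colon \ S \cong \SU(1, 2) , \qquad \varepsilon = +1\colon \ S \cong \SL(3, \mathbb{R}) , \qquad \varepsilon = 0\colon \ S \cong \SL(2, \mathbb{R}) \ltimes Q_+ . \end{gather*}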
\begin{Remark}Alternatively, as in the statements of Theorems D$_-$, D$_+$, and D$_0$, we could f\/ix a~conformal structure~$\mathbf{c}$, that is, a normal parabolic geometry of type $(\SO(3, 4), \bar P)$ (Section~\ref{subsubsection:oriented-conformal-structures}) equipped with a holonomy reduction to the intersection $S$ of a copy of $\G_2$ in $\SO(3, 4)$ and the stabilizer of a nonzero vector $\mathbb{S} \in \mathbb{V}$ (where we now temporarily view $\mathbb{V}$ as the standard representation of $\SO(3, 4)$). By Remark~\ref{remark:determined-by-S}, this determines a $1$-parameter family $\mathcal{F} \subset \Lambda^3 \mathbb{V}^*$ of compatible $\G_2$-structures but does not distinguish an element of this family. Transferring this statement to the setting of tractor bundles and then translating it into the setting of a~tangent bundle, such a~holonomy reduction determines a $1$-parameter family $\mathcal{D}$ of conformally isometric oriented $(2, 3, 5)$ distributions for which $\mathbf{c} = \mathbf{c}_{\mathbf{D}}$ for all $\mathbf{D} \in \mathcal{D}$, but does not distinguish a~distribution among them. \end{Remark} As usual by scaling assume $\mathbb{S}$ satisf\/ies $\varepsilon := -H_{\Phi}(\mathbb{S}, \mathbb{S}) \in \{-1, 0, +1\}$ and denote $\mathbb{W} := \langle \mathbb{S} \rangle^{\perp}$. The parabolic subgroup $Q$ preserving any ray $x \in \mathcal{M}$ (spanned by the isotropic weighted vector $X \in \mathbb{V}[1]$) preserves the f\/iltration $(\mathbb{V}_X^a)$ of $\mathbb{V}$ determined via \eqref{equation:isotropic-filtration} by $X$: Explicitly this is \begin{gather}\label{equation:filtration-X} \begin{array}{@{}cccccccccccc@{}} ^{-2} & & ^{-1} & & ^0 & & ^{+1} & & ^{+2} & & ^{+3} \\ \mathbb{V} & \supset & \langle X \rangle^{\perp} & \supset & \im (X \times \,\cdot\,) & \supset & \ker (X \times \,\cdot\,) & \supset & \langle X \rangle & \supset & \{ 0 \} \end{array} \end{gather} Here, $X \times \,\cdot\,$ is the map $-X^C \Phi_C{}^A{}_B \in \End(\mathbb{V})[1]$, and by the comments after \eqref{equation:isotropic-filtration} it satisf\/ies $X \times \mathbb{V}_X^a = \mathbb{V}_X^{a + 2}$. Since $Q$ preserves this f\/iltration, the corresponding set dif\/ferences $\{x \in \mathcal{M} \colon$ $\mathbb{S}_x \in \mathbb{V}_X^a - \mathbb{V}_X^{a + 1}\}$ are each unions of curved orbits. \subsubsection{Ricci-negative case} In this case, which corresponds to $\mathbb{S}$ spacelike ($\varepsilon = -1$), $S \cong \SU(1, 2)$ (Proposition \ref{proposition:stabilizer-subgroup}), and $\mathbb{W}$ inherits a complex structure $\mathbb{K}$ \eqref{equation:definition-K} and a complex volume form $\Psi$ (Proposition \ref{proposition:vareps-complex-volume-forms}). Since $\ker X$ is isotropic and $H_{\Phi}$ has signature $(3, 4)$, $\im X = (\ker X)^{\perp}$ is negative-semidef\/inite, so the only unions of orbits determined by the f\/iltrations are $\{x \colon \mathbb{S} \in \mathbb{V} - \langle X \rangle^{\perp}\}$ and $\{x \colon \mathbb{S} \in \langle X \rangle^{\perp} - \im X\}$. Pick a nonzero vector $\ul X \in \mathbb{V}$ in the ray determined by $X \in \mathbb{V}[1]$. Since $\mathbb{S}$ is nonisotropic, $\mathbb{V}$~decomposes (as an $\SU(1, 2)$-module) as $\mathbb{W} \oplus \langle \mathbb{S} \rangle$, and with respect to this decomposition, $\ul X$~decomposes as $\ul w + \ul \sigma \mathbb{S} \in \mathbb{W} \oplus \langle \mathbb{S} \rangle$, where $\ul \sigma := H_{\Phi}(\ul X, \mathbb{S})$; so, $0 = H_{\Phi}(\ul X, \ul X) = H_{\Phi}(\ul w, \ul w) + \ul \sigma^2$. 
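Before splitting into cases we record the immediate consequence of this identity that drives the analysis: \begin{gather*} H_{\Phi}(\ul w, \ul w) = -\ul \sigma^2 \leq 0 , \end{gather*} so the component $\ul w$ is timelike if $\ul \sigma \neq 0$ and isotropic if $\ul \sigma = 0$; these are exactly the two subcases treated next.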
If $\ul \sigma > 0$, then $\ul \sigma^{-1} \ul w \in \mathbb{W}$ satisf\/ies $H_{\Phi}(\ul \sigma^{-1} \ul w, \ul \sigma^{-1} \ul w) = -1$. But the set of vectors $\ul w_0 \in \mathbb{W}$ satisfying $H_{\Phi}(\ul w_0, \ul w_0) = -1$ is just the sphere $\mathbb{S}^{2, 3}$, and $\SU(1, 2)$ acts transitively on this space, and hence on the $5$-dimensional space $\smash{\mathcal{M}_5^+} \cong \mathbb{S}^{2, 3}$ of rays in $\mathcal{M}$ it subtends. The isotropy group of the ray spanned by $\ul w_0 + \mathbb{S}$ preserves the appropriate restrictions of $H$, $\mathbb{K}$, $\Psi$ to the four-dimensional subspace $\langle \ul w_0, \mathbb{K} \ul w_0 \rangle^{\perp} \subset \mathbb{W}$, so that space is neutral Hermitian and admits a~complex volume form, and hence the isotropy subgroup is contained in $\SU(1, 1) \cong \SL(2, \mathbb{R})$. On the other hand, the isotropy subgroup has dimension $\dim S - \dim \smash{\mathcal{M}_5^+} = 8 - 5 = 3$, so it must coincide with~$\SU(1, 1)$. If $\ul \sigma = 0$, then $\ul w = \ul X \in \mathbb{W}$ is isotropic. The set of such vectors is the intersection of the null cone of $H$ with $\mathbb{W}$. Again, $\SU(1, 2)$ acts transitively on the ray projectivization $\mathcal{M}_4 \cong \mathbb{S}^3 \times \mathbb{S}^1$ of this space. By construction, the isotropy subgroup $P_-$, which (since $\dim \mathcal{M}_4 = 4$) has dimension four, is contained in the $5$-dimensional stabilizer subgroup $P_{\SU(1, 2)}$ in $\SU(1, 2)$ of the complex line $\langle \ul X, \mathbb{K} \ul X \rangle \subset \mathbb{W}$ generated by $\ul X$; this latter group is (up to connectedness) the only parabolic subgroup of $\SU(1, 2)$. The case $\ul \sigma < 0$ is essentially identical to the case $\ul \sigma > 0$, and we denote the corresponding orbit by $\smash{\mathcal{M}_5^-} \cong \mathbb{S}^{2, 3}$. \subsubsection{Ricci-positive case} In this case, which corresponds to $\mathbb{S}$ timelike ($\varepsilon = +1$), $S \cong \SL(3, \mathbb{R})$, and $\mathbb{W}$ inherits a~paracomplex structure $\mathbb{K}$ and a paracomplex volume form $\Psi$. This case is similar to the Ricci-negative one, and we omit details that are similar thereto. Since all of the vectors in $\im X - \ker X$ are timelike, however, that set dif\/ference corresponds to a union of orbits that has no analogue for the other causality types of $\mathbb{S}$. Similarly to the Ricci-negative case, $\mathbb{V}$ decomposes (as an $\SL(3, \mathbb{R})$-module) as $\mathbb{W} \oplus \langle \mathbb{S} \rangle$; we can decompose $\ul X = \ul w + \ul \sigma \mathbb{S} \in \mathbb{W} \oplus \langle \mathbb{S} \rangle$, where $\ul \sigma := -H_{\Phi}(\ul X, \mathbb{S})$, and we have $0 = H_{\Phi}(\ul X, \ul X) = H_{\Phi}(\ul w, \ul w) - \ul \sigma^2$. If $\ul \sigma > 0$, then the resulting curved orbit is $\smash{\mathcal{M}_5^+} \cong \mathbb{S}^{2, 3}$, and the stabilizer subgroup is isomorphic to $\SL(2, \mathbb{R})$. If $\ul \sigma = 0$, then $\ul w = \ul X \in \mathbb{W}$ is isotropic. As mentioned at the beginning of this subsubsection, unlike in the Ricci-negative case, this subcase entails more than one orbit. To see this, let~$\mathbb{E}$ denote the $(+1)$-eigenspace of $\mathbb{K}$. Using (the restriction of) $H_{\Phi}$ we may identify the $(-1)$-eigenspace with $\mathbb{E}^*$, and so we can write $\ul X$ as $(0, \ul e, \ul \beta) \in \langle \mathbb{S} \rangle \oplus \mathbb{E} \oplus \mathbb{E}^*$.
Since $\ul X$ is isotropic, $0 = \tfrac{1}{2} H_{\Phi}(\ul X, \ul X) = \ul\beta(\ul e)$, and we can identify the ray projectivization of the set of such triples with $\mathbb{S}^2 \times \mathbb{S}^2$. Now, the action of~$S$ preserves whether each of the components $\ul e$, $\ul \beta$ is zero, giving three cases. One can readily compute that $S$ acts transitively on pairs $(\ul e, \ul \beta)$ of nonzero elements with isotropy group $P_+ \cong \mathbb{R}_+ \ltimes \mathbb{R}^3$, which is characterized by its restriction to $\mathbb{E}$, and which in turn is given in a convenient basis by \begin{gather*} \left\{ \begin{pmatrix} a & b & c \\ 0 & 1 & d \\ 0 & 0 & a^{-1} \end{pmatrix} \colon a \in \mathbb{R}_+;\, b, c, d \in \mathbb{R} \right\} . \end{gather*} So, the corresponding orbit $\mathcal{M}_4$ has dimension $4$, and it follows from the remaining two cases that $\mathcal{M}_4 \cong \mathbb{S}^2 \times (\mathbb{S}^2 - \{\pm \ast\}) \cong \mathbb{S}^2 \times \mathbb{S}^1 \times \mathbb{R}$ for some point $\ast \in \mathbb{S}^2$. By construction, $P_+$~is contained in the $5$-dimensional stabilizer subgroup $P_{12}$ in $\SL(3, \mathbb{R})$ of the paracomplex line $\langle \ul X, \mathbb{K} \ul X\rangle \subset \mathbb{W}$ generated by $\ul X$, and we may identify $P_{12}$ with the subgroup of the stabilizer subgroup in $\SL(3, \mathbb{R})$ of a complete f\/lag in $\mathbb{W}$ that preserves either (equivalently, both of) the rays of the $1$-dimensional subspace in the f\/lag. In the case that $\ul e \neq 0$ but $\ul \beta = 0$, $S$ again acts transitively, and this time the isotropy subgroup is isomorphic to the (parabolic) stabilizer subgroup $\smash{P_1 := \GL(2, \mathbb{R}) \ltimes (\mathbb{R}^2)^*}$ of a ray in~$\mathbb{E}$, which is the f\/irst parabolic subgroup in $\SL(3, \mathbb{R})$, so the corresponding orbit is $\smash{\mathcal{M}_2^+} \cong \mathbb{S}^2$. The dual case $\ul e = 0$, $\ul \beta \neq 0$ is similar: $S$ acts transitively, and we can identify the isotropy subgroup with the (parabolic) stabilizer subgroup $P_2 := \GL(2, \mathbb{R}) \ltimes \mathbb{R}^2 \cong P_1$ of a dual ray in~$\mathbb{E}^*$, so the corresponding orbit is $\smash{\mathcal{M}_2^-} \cong \mathbb{S}^2$. Finally, the case $\ul \sigma < 0$ is essentially identical to the case $\ul \sigma > 0$ and we denote the corresponding orbit by $\smash{\mathcal{M}_5^-} \cong \mathbb{S}^{2, 3}$. \subsubsection{Ricci-f\/lat case} In this case, which corresponds to $\mathbb{S}$ isotropic ($\varepsilon = 0$), $S \cong \SL(2, \mathbb{R}) \ltimes Q_+$ and $\mathbb{W}$ inherits an endomorphism $\mathbb{K}$ whose square is zero. Since $\mathbb{S}$ is isotropic, it determines a f\/iltration $(\mathbb{V}_{\mathbb{S}}^a)$ of~$\mathbb{V}$. By symmetry, we may identify the sets $\{x \in \mathcal{M} \colon \mathbb{S}_x \in \mathbb{V}_X^a - \mathbb{V}_X^{a + 1}\}$ that occur in this case with $\{x \in \mathcal{M} \colon X_x \in \mathbb{V} - \langle \mathbb{S} \rangle^{\perp}\}$, $\{x \in \mathcal{M} \colon X_x \in \langle \mathbb{S} \rangle^{\perp} - \im \mathbb{K}\}$, $\{x \in \mathcal{M} \colon X_x \in \ker \mathbb{K} - \langle \mathbb{S} \rangle\}$, and $\{x \in \mathcal{M} \colon X_x \in \langle \mathbb{S} \rangle - \{0\} \}$. (The dif\/ference $\{x \in \mathcal{M} \colon X_x \in \im \mathbb{K} - \ker \mathbb{K}\}$ does not occur here, as every vector in $\im \mathbb{K} - \ker \mathbb{K}$ is timelike.) If $\ul \sigma := H_{\Phi}(\ul X, \mathbb{S}) > 0$, $S$ acts transitively on the $5$-dimensional space of rays.
Computing directly shows that the isotropy subgroup is conjugate to the Levi factor $\SL(2, \mathbb{R}) < \G_2$, so we may identify $\smash{\mathcal{M}_5^+} \cong (\SL(2, \mathbb{R}) \ltimes Q_+) / \SL(2, \mathbb{R}) \cong Q_+ \cong \mathbb{R}^5$. If $\ul \sigma = 0$, we see there are several possibilities. Since every vector in $\im \mathbb{K} - \ker \mathbb{K}$ is timelike, $\{x \in \mathcal{M} \colon X \in \langle \mathbb{S} \rangle^{\perp} - \im \mathbb{K} \}$ is the set of points $x \in \mathcal{M}$ such that $H_{\Phi}(X, \mathbb{S}) = 0$ but $X \times \mathbb{S} \neq 0$. Again, $S$ acts transitively on the $4$-dimensional space $\mathcal{M}_4$ of rays. In this case, computing gives that the isotropy subgroup is isomorphic to $\mathbb{R} \ltimes \mathbb{R}^3$. Next, we consider the set of points $\{x \in \mathcal{M} \colon X \in \ker \mathbb{K} - \langle \mathbb{S} \rangle\}$. Since $\ker \mathbb{K}$ is totally isotropic, $\ker \mathbb{K} - \langle \mathbb{S} \rangle$ is the complement of a $1$-dimensional linear subspace in a $3$-dimensional linear subspace, so the corresponding orbit $\mathcal{M}_2$ of rays is a twice-punctured $2$-sphere. Again, computing directly gives that $S$ acts transitively on this space, and the stabilizer subgroup is a certain $6$-dimensional solvable group. When $X \in \langle \mathbb{S} \rangle$, either $\mathbb{S}$ is in the ray determined by $X$ or its opposite, and these correspond respectively to the $0$-dimensional orbits $\smash{\mathcal{M}_0^+}$ and $\smash{\mathcal{M}_0^-}$. Finally, the case $\ul \sigma < 0$ is again essentially identical to the case $\ul \sigma > 0$, and again we denote the corresponding orbit $\smash{\mathcal{M}_5^-} \cong \mathbb{R}^5$. \subsection{Characterizations of the curved orbits}\label{subsection:characterizations-curved-orbits} In this subsection we give geometric characterizations of the curved orbits $M_{\bullet}$ determined by the holonomy reduction to $S$. For the rest of this section, let $\mathbf{D}$ be an oriented $(2, 3, 5)$ distribution, denote the corresponding parallel tractor $\G_2$-structure by $\Phi$, and denote its components with respect to a~scale~$\tau$ by~$\phi$, $\chi$, $\theta$, $\psi$ as in \eqref{equation:G2-structure-splitting}. Also f\/ix an almost Einstein scale $\sigma$, denote the corresponding parallel tractor by $\mathbb{S} := L_0^{\mathcal{V}}(\sigma)$, and denote its components with respect to $\tau$ by $\sigma$, $\mu$, $\rho$ as in~\eqref{equation:standard-tractor-structure-splitting}. On the zero locus $\Sigma := \{x \in M \colon \sigma_x = 0\}$ of~$\sigma$,~$\mu$ is invariant (that is, independent of the choice of scale $\tau$), so on the set where moreover $\mu \neq 0$, $\mu$ determines a line f\/ield~$\mathbf{S}$. See also Appendix~\ref{appendix}. \begin{Proposition}\label{proposition:curved-orbit-characterization} The curved orbits are characterized $($separately$)$ by the following conditions on tractorial and tangent data. The bullets~$\bullet$ indicate which curved orbits occur for each causality type. For the curved orbits $M_2^{\pm}$, $(\ast)$ indicates the following: On $M_2^+ \cup M_2^-$ we have $\mu_x \in [\mathbf{D}, \mathbf{D}]_x[-1] - \mathbf{D}_x[-1]$, so projecting to $([\mathbf{D} , \mathbf{D}]_x / \mathbf{D}_x)[-1]$ gives a nonzero element and via the Levi bracket we can regard this as an element of $\Lambda^2 \mathbf{D}_x[-1]$. Then, $x \in M_2^-$ $(M_2^+)$ iff this $($weighted$)$ bivector is oriented $($anti-oriented$)$. Also, $\Delta \sigma := \sigma_{,a}{}^a$.
\begin{center} \def\arraystretch{1.2} \begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{$M_a$} & \multicolumn{3}{c|}{$\varepsilon$} & \multirow{2}{*}{\begin{tabular}{c}tractor \\ condition\end{tabular}} & \multirow{2}{*}{\begin{tabular}{c}tangent \\ condition\end{tabular}}\\ \cline{2-4} & \parbox{0.5cm}{\centering $-1$} & \parbox{0.5cm}{\centering $0$} & \parbox{0.5cm}{\centering $+1$} & & \\ \hline\hline $M_5^{\pm}$ & $\bullet$ & $\bullet$ & $\bullet$ & $\pm H_{\Phi}(X, \mathbb{S}) > 0$ & $\pm \sigma > 0$ \\ \hline $M_4 $ & $\bullet$ & $\bullet$ & $\bullet$ & \begin{tabular}{c}$H_{\Phi}(X, \mathbb{S}) = 0$ \\ $(X \times \mathbb{S}) \wedge X \neq 0$\end{tabular} & \begin{tabular}{c}$\sigma = 0$ \\ $\xi \neq 0$\end{tabular} \\ \hline $M_2^{\pm}$ & & & $\bullet$ & $X \times \mathbb{S} = \pm X$ & \begin{tabular}{c}$\xi = 0$ \\ $(\ast)$\end{tabular} \\ \hline $M_2 $ & & $\bullet$ & & \begin{tabular}{c}$X \times \mathbb{S} = 0$ \\ $X \wedge \mathbb{S} \neq 0$\end{tabular} & $\mathbf{S} \subset \mathbf{D}$ \\ \hline $M_0^{\pm}$ & & $\bullet$ & & $\mathbb{S} \in (\pm X \cdot \mathbb{R}_+)[-1]$ & \begin{tabular}{c}$\sigma = 0$ \\ $\nabla \sigma = 0$ \\ $\mp \Delta \sigma > 0$\end{tabular} \\ \hline \end{tabular} \end{center} \end{Proposition} \begin{proof} The characterizations of $M_5^{\pm}$ are immediate from the descriptions in Section~\ref{subsection:orbit-decomposition-flat-model}. Passing to the tractor setting, a point $x \in M$ is in $M_4$ if $\mathbb{S}_x \in \langle X_x \rangle^{\perp} - \im (X_x \times \,\cdot\,)$ (for readability, in this proof we sometimes suppress the subscript $_x$). By the discussion after \eqref{equation:filtration-X}, $X \times (\im X) = \langle X \rangle$, so $\mathbb{S} \in \im (X \times \,\cdot\,)$ if\/f $(X \times \mathbb{S}) \wedge X = 0$, yielding the tractor characterization of $M_4$. Next, $x \in M_2^{\pm}$ if $\mathbb{S} \in \im (X \times \,\cdot\,) - \ker (X \times \,\cdot\,)$, so this curved orbit is characterized by $X \times \mathbb{S} \in \langle X \rangle - \{ 0 \}$; in fact, since $X \times \mathbb{S} = -\mathbb{S} \times X = \mathbb{K}(X)$, $X$ is an eigenvector of $\mathbb{K}$, and we actually have $X \times \mathbb{S} = \pm X$. Finally, $x \in M_0^{\pm}$ if\/f $\mathbb{S} \in \langle X \rangle$, that is, if\/f $X \wedge \mathbb{S} = 0$, so $x \in M_2$ (that is, $\mathbb{S} \in \ker (X \times \,\cdot\,) - \langle X \rangle$) if\/f $X \times \mathbb{S} = 0$ but $X \wedge \mathbb{S} \neq 0$. In the splitting determined by a scale $\tau$, \begin{gather*} (X \times \mathbb{S})^A = \mathbb{K}^A{}_B X^B \stackrel{\tau}{=} \tractorT{0}{\xi^a}{\alpha} = \tractorT{0}{\sigma \theta^a + \mu_c \phi^{ca}}{\mu^c \theta_c} \in \Gamma\tractorT{\mathcal{E}[2]}{TM}{\mathcal{E}}. \end{gather*} So, the only nonzero component of $(X \times \mathbb{S}) \wedge X$ is $\xi^a$, which together with the tractor characterization gives the tangent bundle characterization of~$M_4$. Since $\sigma = 0$ on $M_4$, we have on that curved orbit that $0 \neq \xi^a = \mu_c \phi^{ca}$, so $\mu^b \not\in [\mathbf{D}, \mathbf{D}]$ and hence $\mathbf{S} \cap [\mathbf{D}, \mathbf{D}] = \{ 0 \}$ (including this one, the assertions about the components of~$\mathbb{S}$ and~$\Phi$ in this proof all follow from Proposition~\ref{proposition:identites-g2-structure-components}). For $M_2^{\pm}$, comparing the components of $X \times \mathbb{S} = \pm X$ gives $\xi^a = 0$ and $\alpha = \mu^c \theta_c = \pm 1$.
Together with $\sigma = 0$ the f\/irst condition gives $\phi^{ab} \mu_b = 0$, which is equivalent to $\mu^b \in [\mathbf{D}, \mathbf{D}][-1]$; on the other hand, $\mu^c \theta_c \neq 0$ gives that $\mu^b \not\in \mathbf{D}[-1]$, so $\mu^b$ projects to a nonzero element of $([\mathbf{D}, \mathbf{D}] / \mathbf{D})[-1] \cong \Lambda^2 \mathbf{D}[-1]$ (the isomorphism is the one given by the Levi bracket). Since $\theta^c \in [\mathbf{D}, \mathbf{D}][-1] - \mathbf{D}[-1]$, it also determines a nonzero (and by construction, oriented) element of $\Lambda^2 \mathbf{D}[-1]$. Thus, since $\theta^c \theta_c = -1$, we have $\mu^c \theta_c = -1$ (and hence $x \in M_2^-$) if\/f the element of $\Lambda^2 \mathbf{D}[-1]$ determined by $\mu$ is oriented. (The above gives $\mathbf{S} \subset [\mathbf{D}, \mathbf{D}]$ and $\mathbf{S} \cap \mathbf{D} = \{ 0 \}$.) For $M_2$, if $\mathbb{S} \in \ker(X \times \,\cdot\,)$ we have $\sigma = 0$, $\xi^a = 0$, and $\alpha = 0$, so by the argument in the previous case we have $\mathbf{S} \subset \mathbf{D}$. Since $\mathbb{S} \not\in \langle X \rangle$, we have $\mathbb{S} \wedge X \neq 0$, which is equivalent to $\mu^a \neq 0$, and by \eqref{equation:splitting-operator-standard} $\mu = \nabla \sigma$. Finally, suppose $\mathbb{S} \in \langle X \rangle$; by the previous case, this comprises the points where moreover $\mu = 0$. Again using $L_0^{\mathcal{V}}$ (and that $\sigma = 0$) gives that $\pm \rho = \mp \tfrac{1}{5} \Delta \sigma$, and so $\pm \rho > 0$ (and hence $x \in M_0^{\pm}$) if\/f $\mp \Delta \sigma > 0$. \end{proof} \begin{Corollary} \label{corollary:xi-behavior-curved-orbits} Let $\mathcal{D}$ be a $1$-parameter family of conformally isometric oriented $(2, 3, 5)$ distributions related by the almost Einstein scale $\sigma$, and let $\xi$ denote the corresponding conformal Killing field. Then: \begin{enumerate}\itemsep=0pt \item[$1.$] The set $M_{\xi} := \{x \in M \colon \xi_x \neq 0\}$ is the union of the open and hypersurface curved orbits: \begin{gather*} M_{\xi} = M_5^+ \cup M_4 \cup M_5^- . \end{gather*} In particular, $(a)$ $M_{\xi} \supseteq M_5^+ \cup M_5^-$, $(b)$ the complement $M - M_{\xi}$ $($if nonempty$)$ has codimension $3$, and $(c)$ if $\sigma$ is Ricci-negative then $M_{\xi} = M$. \item[$2.$] The curved orbits $M_5^{\pm}$ and $M_4$ are preserved by the flow of $\xi$. \item[$3.$] If $x \in M_5^+ \cup M_5^-$, then for every distribution $\mathbf{D} \in \mathcal{D}$, $\mathbf{L}_x \subset [\mathbf{D}, \mathbf{D}]_x$ and $\mathbf{L}_x$ is transverse to~$\mathbf{D}_x$. In particular, $\mathbf{L}_x$ is timelike. \item[$4.$] If $x \in M_4$, then $\mathbf{L}_x \subset \mathbf{D}_x$ for every distribution $\mathbf{D} \in \mathcal{D}$. In particular, $\mathbf{L}_x$ is isotropic. \end{enumerate} \end{Corollary} \begin{proof} Only (2) is not immediate: It follows from the characterizations $M_5^{\pm} = \{\pm \sigma > 0\}$ and $M_4 = \{\sigma = 0\} \cap M_{\xi}$ together with the fact~\eqref{equation:Lie-derivatives-objects} that the f\/low of $\xi$ preserves $\sigma$. \end{proof} \begin{Corollary}\label{corollary:null-complementary-distribution-set-of-definition} Let $\mathcal{D}$ be a $1$-parameter family of conformally isometric oriented $(2, 3, 5)$ distributions related by the Ricci-flat almost Einstein scale~$\sigma$.
Then, the limiting normal conformal Killing $2$-form $\phi_{\infty}$ vanishes precisely on $M_2 \cup M_0^+ \cup M_0^-$, which $($if nonempty$)$ has codimension~$3$, and so the null-complementary distribution~$\mathbf{E}$ it determines is defined precisely on $M_{\xi} = M_5^+ \cup M_4 \cup M_5^-$. \end{Corollary} \begin{proof} On $M_5^+ \cup M_5^-$, \eqref{equation:IJK-open-orbit} below gives $\phi_{\infty} = -I \stackrel{\sigma}{=} \psi$, which vanishes nowhere by Proposi\-tion~\ref{proposition:identites-g2-structure-components}(8) (here, $I$ is the $2$-form determined by any $\mathbf{D} \in \mathcal{D}$; in the Ricci-f\/lat case, it does not depend on that choice). On $M - (M_5^+ \cup M_5^-)$, \eqref{equation:I-hypersurface} below gives that $I = \xi^{\flat} \wedge \mu^{\flat}$. In the proof of Proposition~\ref{proposition:curved-orbit-characterization} we saw that on $M_4$, $\xi \in \mathbf{D}$ and $\mu \not\in [\mathbf{D}, \mathbf{D}] \supset \mathbf{D}$, so $\xi^{\flat}$ and $\mu^{\flat}$ are linearly independent there. On $M_2 \cup M_0^+ \cup M_0^-$, that proposition gives $\xi = 0$, and hence $I = 0$ there. \end{proof} \subsection[The open curved orbits M5+/-]{The open curved orbits $\boldsymbol{M_5^{\pm}}$}\label{subsection:open-curved-orbits} In this subsection, we f\/ix an oriented $(2, 3, 5)$ distribution $\mathbf{D}$ and a nonzero almost Einstein scale $\sigma$ of $\mathbf{c}_{\mathbf{D}}$ and restrict our attention to the union \begin{gather*} M_5 := M_5^+ \cup M_5^- = \{x \in M \colon \sigma_x \neq 0\} = M - \Sigma \end{gather*} determined by the corresponding holonomy reduction; in particular $M_5$ is open, and moreover, by the discussion after Theorem~\ref{theorem:almost-Einstein-bijection}, it is dense in $M$. Since $\sigma$ is nowhere zero on $M_5$, we can work in the scale $\sigma\vert_{M_5}$ itself, in which many earlier formulae simplify. (Henceforth in this subsection, we suppress the restriction notation $\vert_{M_5}$.) As usual, by rescaling we may assume that the parallel tractor $\mathbb{S}$ corresponding to $\sigma$ satisf\/ies $\varepsilon := -H_{\Phi}(\mathbb{S}, \mathbb{S}) \in \{-1, 0, +1\}$. Then, from the discussion after Theorem \ref{theorem:almost-Einstein-bijection}, $g_{ab} := \sigma^{-2} \mathbf{g}_{ab}$ is Einstein, with Schouten tensor $\mathsf{P}_{ab} = \frac{1}{2} \varepsilon g_{ab}$, or equivalently, Ricci tensor $R_{ab} = 4 \varepsilon g_{ab}$, and hence scalar curvature $R = 20 \varepsilon$. In the scale $\sigma$, the components of the parallel tractor $\mathbb{S}$ itself are \begin{gather}\label{equation:components-S-scale-sigma} \sigma \stackrel{\sigma}{=} 1, \qquad \mu^e \stackrel{\sigma}{=} 0, \qquad \rho \stackrel{\sigma}{=} -\tfrac{1}{2} \varepsilon , \end{gather} and substituting in \eqref{equation:components-of-K} gives that \begin{gather}\label{equation:K-components-scale-sigma} \mathbb{K}^A{}_B := -\mathbb{S}^C \Phi_C{}^A{}_B \stackrel{\sigma}{=} \tractorQ {\ul \theta^b} {\tfrac{1}{2} (\varepsilon \ul \phi_{ab} + \ul \barphi_{ab})} {0} {\tfrac{1}{2} \varepsilon \ul \theta_b} . \end{gather} We have introduced the $2$-form $\barphi_{ab} := -2 \psi_{ab} \in \Gamma(\Lambda^2 T^*M[1])$ for notational convenience. Note that $\xi^b \stackrel{\sigma}{=} \ul \theta^b$.
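As a check on \eqref{equation:components-S-scale-sigma}, assume the splitting operator \eqref{equation:splitting-operator-standard} takes the usual form $\mathbb{S} \stackrel{\tau}{=} \big(\sigma, \nabla_a \sigma, -\tfrac{1}{5} (\sigma_{,a}{}^{a} + \mathsf{P}^{a}{}_{a} \sigma)\big)$, the convention consistent with the identities $\mu = \nabla \sigma$ and $\rho = -\tfrac{1}{5} \Delta \sigma$ on $\Sigma$ used in the proof of Proposition~\ref{proposition:curved-orbit-characterization}. Trivializing with respect to $\tau = \sigma$ gives $\sigma \stackrel{\sigma}{=} 1$ and $\mu \stackrel{\sigma}{=} \nabla \sigma \stackrel{\sigma}{=} 0$, and since $\mathsf{P}_{ab} = \tfrac{1}{2} \varepsilon g_{ab}$ gives $\mathsf{P}^{a}{}_{a} = \tfrac{5}{2} \varepsilon$, \begin{gather*} \rho \stackrel{\sigma}{=} -\tfrac{1}{5} \big( \sigma_{,a}{}^{a} + \mathsf{P}^{a}{}_{a} \sigma \big) \stackrel{\sigma}{=} -\tfrac{1}{5} \cdot \tfrac{5}{2} \varepsilon = -\tfrac{1}{2} \varepsilon , \end{gather*} recovering \eqref{equation:components-S-scale-sigma}.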
Substituting \eqref{equation:components-S-scale-sigma} in \eqref{equation:I}, \eqref{equation:J}, \eqref{equation:K} gives that $I$, $J$, $K$ simplify to \begin{gather}\label{equation:IJK-open-orbit} \ul I_{ab} = \tfrac{1}{2} \big({-}\varepsilon \ul\phi_{ab} + \ul\barphi_{ab}\big) , \qquad \ul J_{ab} = \xi^c \ul\chi_{cab} , \qquad \ul K_{ab} = \tfrac{1}{2} \big({-}\varepsilon \ul\phi_{ab} - \ul\barphi_{ab}\big) . \end{gather} The endomorphism component of $\mathbb{K}$ in the scale $\sigma$ coincides with $-\ul K^a{}_b$. \subsubsection{The canonical splitting} The components of canonical tractor objects on $M_5$ in the splitting determined by $\sigma$ are themselves canonical (and so are, just as well, their trivializations); in particular this includes the components $\chi_{abc}$, $\theta_c$, $\psi_{bc}$ of~$\Phi_{ABC}$. Moreover, via Proposition \ref{proposition:identites-g2-structure-components}, $\mathbf{D}$ and the scale $\sigma$ together determine a canonical splitting of the canonical f\/iltration $\mathbf{D} \subset [\mathbf{D}, \mathbf{D}] \subset TM_5$: \begin{gather}\label{equation:open-orbit-splitting} TM_5 = \mathbf{D} \oplus \mathbf{L} \oplus \mathbf{E} . \end{gather} The fact that $\xi = \ul\theta$ gives the following: \begin{Proposition} \label{proposition:coincidence-line-fields} The line field $\mathbf{L}$ in the splitting \eqref{equation:open-orbit-splitting} determined by $\sigma$ coincides with $($the restriction to $M_5$ of$)$ the line field of the same name spanned by $\xi$ in Section~{\rm \ref{subsection:conformal-Killing-field}}. \end{Proposition} The splitting \eqref{equation:open-orbit-splitting} entails isomorphisms $\mathbf{L} \cong [\mathbf{D}, \mathbf{D}] / \mathbf{D}$ and $\mathbf{E} \cong TM_5 / [\mathbf{D}, \mathbf{D}]$, and so the components $\smash{\Lambda^2 \mathbf{D} \stackrel{\cong}{\to} [\mathbf{D}, \mathbf{D}] / \mathbf{D}}$ and $\smash{\mathbf{D} \otimes ([\mathbf{D}, \mathbf{D}] / \mathbf{D}) \stackrel{\cong}{\to} TM_5 / [\mathbf{D}, \mathbf{D}]}$ of the Levi bracket $\mathcal{L}$ give rise to isomorphisms $\smash{\Lambda^2 \mathbf{D} \stackrel{\cong}{\to} \mathbf{L}}$ and $\smash{\mathbf{D} \otimes \mathbf{L} \stackrel{\cong}{\to} \mathbf{E}}$. The preferred nonvanishing section $\smash{\xi = \ul\theta \in \Gamma(\mathbf{L})}$ thus yields isomorphisms $\smash{\Lambda^2 \mathbf{D} \stackrel{\cong}{\to} \mathbb{R}}$ (equivalently, a volume form on $\mathbf{D}$) and $\smash{\mathbf{D} \stackrel{\cong}{\to} \mathbf{E}}$. We may use the volume form to identify $\smash{\mathbf{D} \cong \Lambda^2 \mathbf{D} \otimes \mathbf{D}^* \cong \mathbf{D}^*}$ and dualize the previous isomorphism to yield a bilinear pairing $\mathbf{D} \times \mathbf{E} \to \mathbb{R}$. This splitting is closely connected with the notions of antipodal and null-complementary distributions introduced in Section~\ref{subsubsection:additional-distributions}. \begin{Theorem}\label{theorem:canonical-splitting} The canonical distribution $\mathbf{E}$ spanned by the tractor component $\psi$ in the splitting $\mathbf{D} \oplus \mathbf{L} \oplus \mathbf{E}$ determined by $\sigma$ $($equivalently, the distribution determined by~$\barphi)$, is $($the restriction to~$M_5$ of$)$ the distribution antipodal or null-complementary to~$\mathbf{D}$. \end{Theorem} \begin{proof} From Section~\ref{subsubsection:additional-distributions} the antipodal or null-complementary distribution is spanned by $I - K$, and substituting using \eqref{equation:IJK-open-orbit} gives that this is $\ul\barphi$.
\end{proof} \begin{Remark}\label{remark:generalized-path-geometry} Together $\mathbf{D}$ and $\mathbf{L}$ comprise the underlying data of another parabolic geometry on~$M_5$, namely of type $(\SL(4, \mathbb{R}), P_{12})$, where $P_{12}$ is the stabilizer subgroup of a partial f\/lag in~$\mathbb{R}^4$ of signature $(1, 2)$ under the action induced by the standard action. The underlying structure for a regular, normal geometry of this type is a \textit{generalized path geometry} in dimension $5$, which consists of a $5$-manifold $M_5$, a line f\/ield $\mathbf{L} \subset TM_5$, and a $2$-plane distribution $\mathbf{D} \subset TM_5$ such that (1) $\mathbf{L} \cap \mathbf{D} = \{ 0 \}$, (2) $[\mathbf{D}, \mathbf{D}] \subseteq \mathbf{D} \oplus \mathbf{L}$, and (3) if $\eta \in \Gamma(\mathbf{D})$, $\xi' \in \Gamma(\mathbf{L})$, and $x \in M_5$ together satisfy $[\eta, \xi']_x = 0$, then $\eta_x = 0$ or $\xi'_x = 0$ \cite[Section~4.3.3]{CapSlovak} (in our case, these conditions follow from the properties of the Levi bracket $\mathcal{L}$). In this dimension, this geometry is sometimes called XXO geometry, in reference to the marked Dynkin diagram that encodes it. The restriction of~$\mathbf{L}$ to $M_{\xi} - M_5 = M_4$ is contained in $\mathbf{D} \vert_{M_4}$, so~$M_5$ is the largest set on which this construction yields a generalized path geometry. \end{Remark} \subsubsection{The canonical hyperplane distribution} Denote by $\mathbf{C} \subset TM_{\xi}$ the hyperplane distribution orthogonal to $\mathbf{L} := \langle \xi \rangle\vert_{M_{\xi}}$. \begin{Proposition} Let $\mathcal{D}$ be a family of conformally isometric oriented $(2, 3, 5)$ distributions related by an almost Einstein scale $\sigma$. Then, on~$M_5$: \begin{enumerate}\itemsep=0pt \item[$1.$] The pullback $g_{\mathbf{C}}$ of the metric $g := \sigma^{-2} \mathbf{g}$ to the hyperplane distribution $\mathbf{C}$ has neutral signature. \item[$2.$] The hyperplane distribution $\mathbf{C}$ is a contact distribution iff $\sigma$ is not Ricci-flat. \item[$3.$] If we fix $\mathbf{D} \in \mathcal{D}$, then $\mathbf{C} = \mathbf{D} \oplus \mathbf{E}$, where $\mathbf{E}$ is the distribution antipodal or null-complementary to $\mathbf{D}$. \item[$4.$] The canonical pairing $\mathbf{D} \times \mathbf{E} \to \mathbb{R}$ is nondegenerate, and the bilinear form it induces on~$\mathbf{C}$ via the direct sum decomposition $\mathbf{C} = \mathbf{D} \oplus \mathbf{E}$ is~$g_{\mathbf{C}}$. \end{enumerate} \end{Proposition} \begin{proof} (1) The conformal class has signature $(2, 3)$, and Corollary~\ref{corollary:xi-behavior-curved-orbits}(3) gives that $\mathbf{L} = \mathbf{C}^{\perp}$ is timelike. (2) In the scale $\sigma$, $\xi \stackrel{\sigma}{=} \ul\theta$ and $d\xi^{\flat} \stackrel{\sigma}{=} \ul K^{\flat} = - \tfrac{1}{2} (\varepsilon \ul \phi + \ul \barphi)$. The decomposability of~$\phi$ and~$\barphi$ (the latter follows from Proposition~\ref{proposition:identites-g2-structure-components}(5)) implies $\xi^{\flat} \wedge d\xi^{\flat} \wedge d\xi^{\flat} \stackrel{\sigma}{=} -\tfrac{1}{2} \varepsilon \ul \phi \wedge \ul \theta \wedge \ul \barphi$. By Proposition~\ref{proposition:identites-g2-structure-components}(8), $\phi \wedge \theta \wedge \barphi$ is a nonzero multiple of the conformal volume form $\epsilon_{\mathbf{g}}$ and so vanishes nowhere if\/f $\varepsilon \neq 0$.
(3) By Theorem \ref{theorem:canonical-splitting}, $\mathbf{D}$ and $\mathbf{E}$ are transverse, and by Corollary~\ref{corollary:containment-family-hyperplane-distribution} they are both contained in $\mathbf{C}$, so the claim follows from counting dimensions. (4) This follows from computing in an adapted frame. \end{proof} Computing in an adapted frame gives the following pointwise description: \begin{Proposition}Let $\mathcal{D}$ be a $1$-parameter family of conformally isometric oriented $(2, 3, 5)$ distributions related by an almost Einstein scale $\sigma$. Then, for any $x \in M_5 = \{x \in M \colon \sigma_x \neq 0\}$, the family $\mathcal{D}_x := \{\mathbf{D}_x \colon \mathbf{D} \in \mathcal{D}\}$ is precisely the set of totally isotropic $2$-planes in $\mathbf{C}_x$ self-dual with respect to the $($weighted$)$ bilinear form $\mathbf{g}_{\mathbf{C}}$ and the orientation determined by~$\epsilon_{\mathbf{C}}$. \end{Proposition} Computing in an adapted frame shows that the images of $I, J, K \in \Gamma(\End(TM)[1])$ are contained in~$\mathbf{C}[1]$, so they restrict to sections of $\End(\mathbf{C})[1]$, which by mild abuse of notation we denote $I^{\alpha}{}_{\beta}$, $J^{\alpha}{}_{\beta}$, $K^{\alpha}{}_{\beta}$ (here and henceforth, we use lowercase Greek indices $\alpha, \beta, \gamma, \ldots$ for tensorial objects on $\mathbf{C}$). It also gives that these maps satisfy, for example, $I^{\alpha}{}_{\gamma} I^{\gamma}{}_{\beta} = -\varepsilon \sigma^2 \delta^{\alpha}{}_{\beta} \in \Gamma(\End(\mathbf{C})[2])$ and Proposition~\ref{proposition:properties-IJK} below. In the scale $\sigma$, this and the remaining equations become: \begin{alignat}{3} & \ul I^{\alpha}{}_{\gamma} \ul I^{\gamma}{}_{\beta} \stackrel{\sigma}{=} - \varepsilon \delta^{\alpha}{}_{\beta}, \qquad && \ul J^{\alpha}{}_{\gamma} \ul K^{\gamma}{}_{\beta} = -\ul K^{\alpha}{}_{\gamma} \ul J^{\gamma}{}_{\beta} \stackrel{\sigma}{=} \ul I^{\alpha}{}_{\beta},&\nonumber \\ &\ul J^{\alpha}{}_{\gamma} \ul J^{\gamma}{}_{\beta} \stackrel{\sigma}{=} \delta^{\alpha}{}_{\beta},\qquad && \ul K^{\alpha}{}_{\gamma} \ul I^{\gamma}{}_{\beta} = -\ul I^{\alpha}{}_{\gamma} \ul K^{\gamma}{}_{\beta} \stackrel{\sigma}{=} - \varepsilon \ul J^{\alpha}{}_{\beta}, &\nonumber \\ & \ul K^{\alpha}{}_{\gamma} \ul K^{\gamma}{}_{\beta} \stackrel{\sigma}{=} \varepsilon \delta^{\alpha}{}_{\beta},\qquad && \ul I^{\alpha}{}_{\gamma} \ul J^{\gamma}{}_{\beta} = -\ul J^{\alpha}{}_{\gamma} \ul I^{\gamma}{}_{\beta} \stackrel{\sigma}{=} - \ul K^{\alpha}{}_{\beta} .& \label{equation:ijk-compositions} \end{alignat} \begin{Proposition} \label{proposition:properties-IJK} Let $(M, \mathbf{D})$ be an oriented $(2, 3, 5)$ distribution and $\sigma$ an almost Einstein scale for $\mathbf{c}_{\mathbf{D}}$, and let $\mathbf{E}$ be the distribution antipodal or null-complementary to $\mathbf{D}$ determined by $\sigma$. Then, on $M_5$: \begin{enumerate}\itemsep=0pt \item[$1.$] $\ul I \in \End(\mathbf{C})$ is an almost $(-\varepsilon)$-complex structure, and $-\ul I\vert_{\mathbf{D}}$ is the isomorphism $\smash{\mathbf{D} \stackrel{\cong}{\to} \mathbf{E}}$ determined by $\sigma$ introduced at the beginning of the subsection. If $\sigma$ is Ricci-flat, then $\ul I\vert_{\mathbf{E}} = 0$, and if $\sigma$ is not Ricci-flat, then $\ul I\vert_{\mathbf{E}}$ is an isomorphism $\smash{\mathbf{E} \stackrel{\cong}{\to} \mathbf{D}}$.
\item[$2.$] $\ul J \in \End(\mathbf{C})$ is an almost paracomplex structure, and its eigenspaces are $\mathbf{D}$ and $\mathbf{E}$: $\ul J\vert_{\mathbf{D}} = \id_{\mathbf{D}}$, $\ul J\vert_{\mathbf{E}} = -\id_{\mathbf{E}}$. \item[$3.$] $\ul K \in \End(\mathbf{C})$ is an almost $\varepsilon$-complex structure, $\ul K\vert_{\mathbf{D}} = -\ul I\vert_{\mathbf{D}}$, and $\ul K\vert_{\mathbf{E}} = \ul I\vert_{\mathbf{E}}$. If $\varepsilon = +1$, the $(\mp 1)$-eigenspaces of $\ul K$ are the limiting $2$-plane distributions $\mathbf{D}_{\mp}$ defined in Section~{\rm \ref{subsubsection:additional-distributions}}. \end{enumerate} \end{Proposition} In the non-Ricci-f\/lat case, we can identify the pointwise $\U(1, 1)$-structure on $M_5$ as follows: \begin{Proposition} Let $\mathcal{D}$ be a $1$-parameter family of conformally isometric oriented $(2, 3, 5)$ distributions related by a non-Ricci-flat almost Einstein scale $\sigma$. \begin{enumerate}\itemsep=0pt \item[$1.$] For any $\mathbf{D} \in \mathcal{D}$, the endomorphisms $\ul I$, $\ul J$, $\ul K$ determine an almost split-quaternionic structure on $($the restriction to $M_5$ of$)$~$\mathbf{C}$, that is, an injective ring homomor\-phism $\smash{\widetilde{\mathbb{H}} \!\hookrightarrow\! \End(\mathbf{C}_x)}$ for each $x \in M_5$, where $\smash{\widetilde{\mathbb{H}}}$ is the ring of split quaternions. \item[$2.$] The almost split-quaternionic structure in $(1)$ depends only on~$\mathcal{D}$. \end{enumerate} \end{Proposition} \begin{proof} (1) This follows immediately from the identities~\eqref{equation:ijk-compositions}. (2) This follows from computing in an adapted frame. \end{proof} \subsubsection[The varepsilon-Sasaki structure]{The $\boldsymbol{\varepsilon}$-Sasaki structure}\label{subsubsection:vareps-Sasaki-structure} The union $M_5$ of the open orbits turns out also to inherit an $\varepsilon$-Sasaki structure, the odd-dimensional analogue of a K\"ahler structure. \begin{Definition}\label{definition:vareps-Sasaki-structure} For $\varepsilon \in \{-1, 0, +1\}$, an \textit{$\varepsilon$-Sasaki structure} on a (necessarily odd-di\-men\-sio\-nal) manifold $M$ is a pair $(h, \xi)$, where $h \in \Gamma(S^2 T^*M)$ is a pseudo-Riemannian metric on $M$ and $\xi \in \Gamma(TM)$ is a vector f\/ield on $M$ such that \begin{enumerate}\itemsep=0pt \item[1)] $h_{ab} \xi^a \xi^b = 1$, \item[2)] $\xi_{(a, b)} = 0$ (or equivalently, $(\mathcal{L}_{\xi} h)_{ab} = 0$, that is, $\xi$ is a Killing f\/ield for $h$), and \item[3)] $\xi^a{}_{, bc} = \varepsilon (\xi^a h_{bc} - \delta^a{}_c \xi_b)$. \end{enumerate} An $\varepsilon$-Sasaki--Einstein structure is an $\varepsilon$-Sasaki structure $(h, \xi)$ for which $h$ is Einstein. \end{Definition} It follows quickly from the def\/initions that the restriction of $\xi^a{}_{,b}$ is an almost $\varepsilon$-complex structure on the subbundle $\langle \xi \rangle^{\perp}$, and that if $\varepsilon = +1$, the $(\pm 1)$-eigenbundles of this restriction (which have equal, constant rank) are integrable and totally isotropic. \begin{Theorem}\label{theorem:open-orbit-vareps-Sasaki-structure} Let $\mathcal{D}$ be a $1$-parameter family of conformally isometric oriented $(2, 3, 5)$ distributions related by an almost Einstein scale $\sigma$. On $M_5 := \{\sigma \neq 0\}$, the signature-$(3, 2)$ Einstein metric \begin{gather*} -g = -\sigma^{-2} \mathbf{g} \end{gather*} and the Killing field $\xi$ together comprise an $\varepsilon$-Sasaki--Einstein structure.
\end{Theorem} \begin{proof}Substituting in \eqref{equation:K-squared-identity} the components of $\mathbb{K}$ in the scale $\sigma$ \eqref{equation:K-components-scale-sigma}, using that the endomorphism component of $\mathbb{K}$ in that scale is $-\ul K^a{}_b$, and simplifying leaves \begin{gather}\label{equation:K-squared-identity-components-scale-sigma} -\xi_c \xi^c \stackrel{\sigma}{=} 1 , \qquad \ul K^a{}_c \xi^c \stackrel{\sigma}{=} 0, \qquad \ul K^a{}_c \ul K^c{}_b \stackrel{\sigma}{=} \varepsilon (\delta^a{}_b + \xi^a \xi_b) , \end{gather} together with $\xi^c \ul K^b{}_c = 0$, but this last equation follows from the second equation and the $g$-skewness of $\ul K$. Similarly, expanding the left-hand side of $\nabla^{\mathcal{V}} \mathbb{K} = 0$ using \eqref{equation:K-components-scale-sigma} with respect to the scale $\sigma$, eliminating duplicated equations, and rearranging gives \begin{gather}\label{equation:K-parallel-identity-components-scale-sigma} \xi^b{}_{, c} \stackrel{\sigma}{=} \ul K^b{}_c , \qquad \ul K^a{}_{b, c} \stackrel{\sigma}{=} -\varepsilon (\xi^a g_{bc} - \xi_b \delta^a{}_c) . \end{gather} 1.~Rearranging the f\/irst equation in \eqref{equation:K-squared-identity-components-scale-sigma} gives $(-g_{ab}) \xi^a \xi^b = 1$. 2.~Since $\ul K^a{}_c$ is $g$-skew, symmetrizing the f\/irst equation in \eqref{equation:K-parallel-identity-components-scale-sigma} with $g_{ab}$ gives $\xi_{(b, c)} = \ul K_{(bc)} = 0$; equivalently, $\xi$ is a Killing f\/ield for $g$ and hence for $-g$. 3.~This follows immediately from substituting the f\/irst equation in \eqref{equation:K-parallel-identity-components-scale-sigma} into the second. (Here indices are raised and lowered with $g$ and not with the candidate Sasaki metric~$-g$.) \end{proof} \begin{Corollary}\label{corollary:null-complementary-intgrable} Let $\mathbf{D}$ be an oriented $(2, 3, 5)$ distribution and $\sigma$ a nonzero Ricci-flat almost Einstein scale for $\mathbf{c}_{\mathbf{D}}$. The null-complementary distribution $\mathbf{E}$ they determine is integrable. \end{Corollary} \begin{proof}Since $\varepsilon = 0$, the second equation of~\eqref{equation:K-parallel-identity-components-scale-sigma} says that $\ul K$ is parallel. By Section~\ref{subsubsection:additional-distributions}, $\ul K$ is decomposable and (as a decomposable bivector f\/ield) spans $\mathbf{E}\vert_{M_5}$, so that distribution is $\nabla$-parallel and hence integrable. But $M_5$ is dense in $M$, and integrability is a closed condition, so $\mathbf{E}$ is integrable (on all of $M$). \end{proof} \begin{Proposition}\label{proposition:Einstein-Sasaki-to-235} Let $(g, \xi)$ be an oriented $\varepsilon$-Sasaki--Einstein structure $($with $\varepsilon=\pm 1)$ of signature $(3,2)$ on a $5$-manifold~$M$. Then, locally, there is a canonical $1$-parameter family~$\mathcal{D}$ of oriented $(2,3,5)$ distributions related by an almost Einstein scale for which the associated conformal structure is $\mathbf{c}=[-g]$. \end{Proposition} \begin{proof}Set $\mathbf{c}=[-g]$, let $\sigma \in \Gamma(\mathcal{E}[1])$ be the unique section such that $-g = \sigma^{-2} \mathbf{g}$, and $\mathbb{S} := L_0^{\mathcal{V}}(\sigma)$ the corresponding parallel tractor. Def\/ine the adjoint tractor $\mathbb{K} := L_0^{\mathcal{A}}(\xi)$.
It is known that the Einstein metric $-g$ determined by an $\varepsilon$-Sasaki--Einstein structure $(g, \xi)$ satisf\/ies $\mathsf{P}_{ab} = \frac{1}{2} \varepsilon (-g)_{ab}$, and so by \eqref{equation:Einstein-constant} $\mathbb{S}$ satisf\/ies $H(\mathbb{S}, \mathbb{S}) = -\varepsilon$, where~$H$ is the tractor metric determined by $\mathbf{c}$. Thus, the proof of Theorem~\ref{theorem:open-orbit-vareps-Sasaki-structure} gives \begin{align*} \nabla^{\mathcal{V}} \mathbb{K}=0, \qquad\mathbb{K}^2 = {\varepsilon} \,\mathrm{id} + {\mathbb{S} \otimes \mathbb{S}^{\flat}}, \qquad \mathrm{and} \qquad \mathbb{S} \hook\mathbb{K}=0. \end{align*} Transferring the content of Section~\ref{subsubsection:vareps-hermitian-structure} to the tractor bundle setting then shows that the parallel subbundle $\mathcal{W} := \langle \mathbb{S} \rangle^{\perp} \subset \mathcal{V}$ inherits a parallel almost $\varepsilon$-Hermitian structure. Denote the curvature of the normal tractor connection by ${{\Omega_{ab}}^C}_D \in \Gamma(\Lambda^2 T^*M \otimes \End(\mathcal{V}))$. The curvature of the induced connection on the bundle $\smash{\Lambda^3_{\mathbb{C}_{\varepsilon}} \mathcal{W}}$ of $\varepsilon$-complex volume forms on~$\mathcal{W}$ is given by $\smash{\Omega_{ab}{}^C{}_C + \varepsilon i_{\varepsilon} \Omega_{ab}{}^C{}_D \mathbb{K}^D{}_C}$. Now $\Omega_{ab}{}^C{}_C=0$ by skew-symmetry, and, since $\mathbb{K}$ is parallel, $\Omega_{ab}{}^C{}_D\mathbb{K}^D{}_C=0$ by \cite[Proposition~2.1]{CapGoverHolonomyCharacterization}. Thus, the induced connection on $\smash{\Lambda^3_{\mathbb{C}_{\varepsilon}}\mathcal{W}}$ is f\/lat, so it admits local parallel sections. Let $\Psi$ be such a (local) parallel section normalized so that $\Psi \wedge \bar\Psi = -\frac{4}{3}i_{\varepsilon}\mathbb{K} \wedge \mathbb{K}\wedge \mathbb{K}$. Denote by $\mathrm{Re}\, \Psi$ the pullback to $\mathcal{V}$ of the real part of $\Psi$. Then, by Proposition \ref{proposition:epsilon-complex-volume-form-g2-structure}, the parallel tractor $3$-form \begin{gather*} \Phi=\mathrm{Re} \Psi+\varepsilon\, \mathbb{S}^{\flat}\wedge\mathbb{K}\in\Gamma\big(\Lambda^3\mathcal{V}\big) \end{gather*} def\/ines a parallel $\G_2$-structure on $\mathcal{V}$ compatible with $H$. By the discussion before Proposition~\ref{proposition:identites-g2-structure-components}, its projecting slot def\/ines a $(2,3,5)$ distribution with associated conformal structure $\smash{\mathbf{c}=[-g]}$. Finally, parallel sections of $\smash{\Lambda^3_{\mathbb{C}_{\varepsilon}}\mathcal{W}}$ satisfying $\smash{\Psi\wedge \bar{\Psi}= -\frac{4}{3}i_{\varepsilon}\mathbb{K} \wedge \mathbb{K}\wedge \mathbb{K}}$ are parametrized by $\{z\in\mathbb{C}_{\varepsilon}\colon z\bar{z}=1\}$ (that is, $\mathbb{S}^1$ if $\varepsilon=-1$ and $\SO(1, 1)$ if~$\varepsilon=1$). \end{proof} \subsubsection{Projective geometry} On the complement $M_5$ of the zero locus $\Sigma$ of $\sigma$, we may canonically identify (the restriction of) the parallel subbundle $\mathcal{W} := \langle \mathbb{S} \rangle^{\perp}$ with the \textit{projective} tractor bundle of the projective structure~$[\nabla^g]$, where $g$ is the Einstein metric~$\sigma^{-2} \mathbf{g}$, and the connection $\nabla^{\mathcal{W}}$ that $\nabla^{\mathcal{V}}$ induces on $\mathcal{W}$ with the normal projective tractor connection \cite[Section~8]{GoverMacbeth}. This compatibility determines a holonomy reduction of the latter connection to $S$, and one can analyze separately the consequences of this projective reduction.
For example, if $\sigma$ is non-Ricci-f\/lat, then lowering an index of the parallel $\varepsilon$-complex structure $\mathbb{K} \in \Gamma(\End(\mathcal{W}))$ with $H\vert_{\mathcal{W}}$ yields a parallel symplectic form on $\mathcal{W}$. A holonomy reduction of the normal projective tractor connection on a $(2 m + 1)$-dimensional projective manifold $M$ to the stabilizer $\Sp(2 m + 2, \mathbb{R})$ of a~symplectic form on a~$(2 m + 2)$-dimensional real vector space determines precisely a~torsion-free contact projective structure \cite[Section~4.2.6]{CapSlovak} on $M$ suitably compatible with the projective structure~\cite{Fox}. This also leads to an alternative proof that the open curved orbits inherit a Sasaki--Einstein structure in the Ricci-negative case: The holonomy of $\nabla^{\mathcal{W}}$ is reduced to $\SU(1, 2)$, but \cite[Section~4.2.2]{Armstrong} identif\/ies $\mfsu(p', q')$ as the Lie algebra to which the projective holonomy connection determined by a Sasaki--Einstein structure is reduced. The upcoming article \cite{GNW} discusses the consequences of a holonomy reduction of (the normal projective tractor connection of) a $(2m + 1)$-dimensional projective structure to the special unitary group $\SU(p', q')$, $p' + q' = m + 1$. \subsubsection[The open leaf space L4]{The open leaf space $\boldsymbol{L_4}$}\label{subsubsection:open-leaf-space} As in Section~\ref{subsection:local-leaf-space}, we assume that we have replaced $M$ by an open subset so that $\pi_L$ is a locally trivial f\/ibration over a smooth $4$-manifold. Def\/ine $L_4 := \pi_L(M_5)$: By Corollary~\ref{corollary:xi-behavior-curved-orbits}(2) $M_5$ is a~union of $\pi_L$-f\/ibers, so $L_3 := \pi_L(M_4) = L - L_4$ is a hypersurface. Since $\xi\vert_{M_5}$ is a nonisotropic Killing f\/ield, $-g := -\sigma^{-2} \mathbf{g}\vert_{M_5}$ descends to a metric $\hatg$ on~$L_4$ (henceforth in this subsection we sometimes suppress the restriction notation $\vert_{M_5}$). By Proposition~\ref{proposition:descent} $\mathcal{L}_{\xi} \sigma = 0$ and $\mathcal{L}_{\xi} K = 0$, so the trivialization $\ul K \in \Gamma(\End(TM_5))$ is invariant under the f\/low of $\xi$. Since it annihilates~$\xi$, it descends to an endomorphism f\/ield we denote $\hatK \in \Gamma(\End(TL_4))$. Then, \eqref{equation:ijk-compositions} implies $\hatK^2 = \varepsilon \id_{TL_4}$, that is, $\hatK$~is an almost $\varepsilon$-complex structure on~$L_4$. This yields a specialization to our setting of a well-known result in Sasaki geometry. \begin{Theorem}\label{theorem:Kahler-Einstein} The triple $(L_4, \hatg, \hatK)$ is an $\varepsilon$-K\"ahler--Einstein structure with $\smash{\hat R} = -6 \varepsilon \hatg$. \end{Theorem} \begin{proof}Since $\ul K^a{}_b \xi^b = 0$, the $g$-skewness of $\ul K$ implies the $\hatg$-skewness of $\hatK$. Thus, $\hatg$ and $\hatK$ together comprise an almost $\varepsilon$-K\"ahler structure on $L_4$; the integrability of $\hatK$ is proved, for example, in~\cite{Blair}, so they in fact constitute an $\varepsilon$-K\"ahler structure. Since $\pi_L\vert_{M_5}$ is a~(pseudo-)Riemannian submersion, we can relate the curvatures of $-g$ and $\hatg$ via the O'Neill formula, which gives that~$\hatg$ is Einstein and determines the Einstein constant.
\end{proof} \subsubsection[The varepsilon-K\"ahler--Einstein Fef\/ferman construction]{The $\boldsymbol{\varepsilon}$-K\"ahler--Einstein Fef\/ferman construction}\label{subsubsection:twistor-construction} The well-known construction of Sasaki--Einstein structures from K\"ahler--Einstein structures immediately generalizes to the $\varepsilon$-K\"ahler--Einstein setting; see, for example, \cite{HabilKath} (in this subsubsection, we restrict to $\varepsilon \in \{\pm1\}$). Here we brief\/ly describe the passage from $\varepsilon$-K\"ahler--Einstein structures to almost Einstein~$(2,3,5)$ conformal structures as a generalized Fef\/ferman construction \cite[Section~4.5]{CapSlovak} between the respective Cartan geometries. Further details will be discussed in an article in preparation~\cite{SagerschnigWillseTwistor}. An $\varepsilon$-K\"ahler structure $(\hat{g},\hat{K})$ of signature $(2,2)$ on a manifold $L_4$ can be equivalently encoded in a torsion-free Cartan geometry $(\mathcal{S}\to L_4,\hat{\omega})$ of type $(S,A)$, where $(S,A) =(\SU(1,2),\U(1,1) )$ if $\varepsilon = -1$ and $(S,A)=(\SL(3,\mathbb{R}), \GL(2,\mathbb{R}) )$ if $\varepsilon = 1$, see, for example, \cite{CGH} for the K\"ahler case. We realize $A$ within $S$ as the block diagonal matrices $\begin{pmatrix}(\det A)^{-1}&0\\ 0& A\end{pmatrix}$. The action of $A$ preserves the decomposition $\mathfrak{s}=\mathfrak{a}\oplus\mathfrak{m}=\begin{pmatrix}\mathfrak{a}&\mathfrak{m}\\ \mathfrak{m}& \mathfrak{a}\end{pmatrix}$ and is given on $\mathfrak{m}\subset\mathfrak{s}$ by $X\mapsto \det(A)\, AX$; in particular it preserves an $\varepsilon$-Hermitian structure (unique up to multiples) on $\mathfrak{m}$ and we f\/ix a (standard) choice. The $\mathfrak{m}$-part $\theta$ of a Cartan connection $\hat{\omega}$ of type~$(S,A)$ determines an isomorphism $TL_4 \cong \mathcal{S}\times_{A}\mathfrak{m}$ and (via this isomorphism) an $\varepsilon$-Hermitian structure on~$TL_4$. The $\mathfrak{a}$-part $\gamma$ of the Cartan connection def\/ines a linear connection $\nabla$ preserving this $\varepsilon$-Hermitian structure. If $\hat{\omega}$ is torsion-free then $\nabla$ is torsion-free, and thus the $\varepsilon$-Hermitian structure is $\varepsilon$-K\"ahler. Conversely, given an $\varepsilon$-K\"ahler structure, the Cartan bundle $\mathcal{S}\to L_4$ is the reduction of structure group of the frame bundle to $A\subset\mathrm{SO}(2,2)$ def\/ined by the parallel $\varepsilon$-Hermitian structure and the (reductive) Cartan connection $\hat{\omega}\in\Omega^1(\mathcal{S},\mathfrak{s})$ is given by the sum $\hat{\omega}=\gamma+\theta$ of the pullback of the Levi-Civita connection form $\gamma\in\Omega^1(\mathcal{S},\mathfrak{a})$ and the soldering form $\theta\in\Omega^1(\mathcal{S},\mathfrak{m})$. For the construction we f\/irst build the \emph{correspondence space} \begin{gather*}\mathcal{C}L_4 := \mathcal{S}/A_0 \cong \mathcal{S}\times_{A} (A/A_0),\end{gather*} where $A_0= \SU(1,1)$ if $\varepsilon=-1$ and $A_0= \SL(2,\mathbb{R})$ if $\varepsilon=1$. Then, $\mathcal{C}L_4\to L_4$ is an $\mathbb{S}^1$-bundle if $\varepsilon=-1$ and an $\SO(1, 1)$-bundle if $\varepsilon=1$. We can view $\hat{\omega}\in\Omega^1(\mathcal{S},\mathfrak{s})$ as a Cartan connection on the $A_0$-principal bundle $\mathcal{S}\to \mathcal{C}L_4$.
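In these realizations the f\/iber $A/A_0$ can be identif\/ied concretely; the following is a minimal sketch (using only the block realization of $A$ f\/ixed above): the determinant of the $2 \times 2$ block induces isomorphisms \begin{gather*} \U(1, 1) / \SU(1, 1) \stackrel{\cong}{\to} \U(1) \cong \mathbb{S}^1 \qquad \text{and} \qquad \GL(2, \mathbb{R}) / \SL(2, \mathbb{R}) \stackrel{\cong}{\to} \mathbb{R}^{\times} \cong \SO(1, 1), \end{gather*} which recovers the two f\/iber types just described.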
Next we f\/ix inclusions \begin{gather*}S\hookrightarrow \G_2\hookrightarrow \SO(3,4),\end{gather*} such that $S$ stabilizes a vector $\mathbb{S}$ satisfying $H(\mathbb{S},\mathbb{S})=-\varepsilon$ in the standard representation~$\mathbb{V}$ of~$\G_2$ (here $H$ is the bilinear form the representation determines on $\mathbb{V}$), the $S$-orbit in $\G_2/Q\cong \SO(3,4)/\bar P$ is open and $A_0= S\cap Q= S\cap \bar P$. Consider the extended Cartan bundles $\mathcal{G}=\mathcal{S}\times_{A_0}Q $ and $\bar{\mathcal{G}}=\mathcal{S}\times_{A_0}\bar{P} $. There exist unique Cartan connections $\omega\in\Omega^1(\mathcal{G}, \mathfrak{g}_2)$ and $\bar{\omega}\in\Omega^1(\bar{\mathcal{G}}, \mathfrak{so}(3,4))$ extending $\hat{\omega}$~\cite{CapSlovak}. Thus one obtains Cartan geometries of type $(\G_2, Q)$ and $(\SO(3,4),\bar{P})$, respectively, on $\mathcal{C}L_4$ that are non-f\/lat whenever one applies the construction to a torsion-free non-f\/lat Cartan connection $\hat{\omega}$ of type~$(S,A)$. \begin{Proposition} Let $(\hat{g},\hat{K})$ be an $\varepsilon$-K\"ahler--Einstein structure, $\varepsilon \in \{\pm1\}$, of signature $(2,2)$ on~$L_4$ such that $\smash{\hat{R}_{ab}=-6 \varepsilon \hat{g}_{ab}}$. Then the conformal structure $\mathbf{c}$ induced on the correspondence space $\mathcal{C}L_4$ is a $(2,3,5)$ conformal structure equipped with a parallel standard tractor~$\mathbb{S}$, $H(\mathbb{S},\mathbb{S})=-\varepsilon$, which corresponds to a non-Ricci-flat Einstein metric in~$\mathbf{c}$.\ $($Here $H$ is the canonical tractor metric determined by $\mathbf{c}.)$ Conversely, locally, all $(2,3,5)$ conformal structures containing non-Ricci-flat Einstein metrics arise via this construction from $\varepsilon$-K\"ahler--Einstein structures. \end{Proposition} \begin{proof}We f\/irst show that the conformal Cartan geometry $(\bar{\mathcal{G}}\to \mathcal{C}L_4,\bar{\omega})$ on the correspondence space is normal if and only if the $\varepsilon$-K\"ahler structure on $L_4$ is Einstein with $\hat{R}_{ab}=-6 \varepsilon \hat{g}_{ab}$. The curvature of the Cartan connection $\hat{\omega}=\gamma+\theta$ is given by \begin{gather*}\hat{\Omega}=\mathrm{d}\gamma +\tfrac{1}{2}[\gamma,\gamma]+\tfrac{1}{2}[\theta,\theta]\in\Omega^2(\mathcal{S},\mathfrak{a}).\end{gather*} Computing the Lie bracket $[[X,Y],Z]$ for $X,Y,Z\in\mathfrak{m}$, and interpreting the curvature as a tensor f\/ield on $L_4$ shows that it can be expressed as \begin{align*} \hat{\Omega}_{ij}{}^k{}_l=\hat{R}_{ij}{}^k{}_l+\varepsilon \hat{g}_{jl}\delta^k{}_i-\varepsilon \hat{g}_{il}\delta^k{}_j +\hat{K}_{jl}\hat{K}^{k}{}_i -\hat{K}_{il}\hat{K}^k{}_j-2\, \hat{K}_{ij}\hat{K}^k{}_{l}, \end{align*} where $\hat{g}_{ij}$ denotes the metric, $\hat{R}_{ij}{}^k{}_l$ its Riemannian curvature tensor and $\hat{K}_{ij}=\hat{g}_{ik}\hat{K}^k{}_j$ the K\"ahler form. Tracing over $i$ and $k$ shows that \begin{align*} \hat{\Omega}_{kj}{}^k{}_{l}=\hat{R}_{kj}{}^k{}_{l}+ 6 \varepsilon \hat{g}_{jl}. \end{align*} Thus $\hat{\Omega}_{kj}{}^k{}_l=0$ if and only if $\hat{R}_{ab}=-6 \varepsilon \hat{g}_{ab}$. Further, for an $\varepsilon$-K\"ahler--Einstein structure \begin{align*} \hat{R}_{ij}{}^k{}_l \hat{K}^l{}_k=2\, \hat{K}^l{}_i \hat{R}_{kl}{}^k{}_j \end{align*} holds. If $\hat{R}_{kj}{}^k{}_l=- 6 \varepsilon \hat{g}_{jl},$ this implies that $\hat{\Omega}_{ij}{}^k{}_l \hat{K}^l{}_k=0.$ This precisely means that the Cartan curvature takes values in the subalgebra $\mathfrak{a}_0\subset\mathfrak{a}$ of matrices with vanishing complex trace.
Since $\mathfrak{a}_0\subset\bar{\mathfrak{p}},$ the resulting conformal Cartan connection $\bar{\omega}$ is torsion-free. Vanishing of the Ricci-type contraction of $\hat{\Omega}$, i.e., $\hat{\Omega}_{kj}{}^k{}_l=0$, then further implies that the conformal Cartan connection is normal. Conversely, normality of the conformal Cartan connection implies $\hat{\Omega}_{kj}{}^k{}_l=0$ and thus $\hat{R}_{ab}=-6 \varepsilon \hat{g}_{ab}$. By construction and normality of the conformal Cartan geometry $(\bar{\mathcal{G}}\to \mathcal{C}L_4,\bar{\omega})$, the induced conformal structure on $\mathcal{C}L_4$ is a $(2,3,5)$ conformal structure that admits a parallel standard tractor $\mathbb{S}$, $H(\mathbb{S},\mathbb{S})=-\varepsilon$, with underlying nowhere-vanishing Einstein scale $\sigma$. The vertical bundle $V\mathcal{C} L_4$ for $\mathcal{C} L_4\to L_4$ corresponds to the subspace $\mathfrak{a}/\mathfrak{a}_0\subset \mathfrak{s}/\mathfrak{a}_0$, i.e., $V\mathcal{C}L_4=\mathcal{S}\times_{A_0} (\mathfrak{a}/\mathfrak{a}_0)$. Since this is the unique $A_0$-invariant $1$-dimensional subspace in $\mathfrak{s}/\mathfrak{a}_0$, the vertical bundle coincides with the subbundle $\mathbf{L}$ spanned by the Killing f\/ield~$\xi$. We now prove the converse. Let $(\mathcal{G}_5\to M_5, \omega_5)$ be the Cartan geometry of type $(S,A_0)$ on $M_5$ determined (according to Theorem~\ref{theorem:curved-orbit-decomposition}) by the holonomy reduction corresponding to a parallel standard tractor $\mathbb{S}$, $H(\mathbb{S},\mathbb{S})=-\varepsilon$, of a $(2,3,5)$ conformal structure. Restrict to an open subset so that $\pi_L\colon M_5\to L_4$ is a f\/ibration over the leaf space determined by the corresponding Killing f\/ield~$\xi$. Since $\xi$ is a normal conformal Killing f\/ield, it inserts trivially into the curvature of $\omega_5$. Since~$\xi$ spans $VM_5$, by \cite[Theorem~1.5.14]{CapSlovak} this guarantees that on a suf\/f\/iciently small leaf space~$L_4$ one obtains a Cartan geometry of type~$(S,A)$ such that the restriction of $(\mathcal{G}_5\to M_5, \omega_5)$ is locally isomorphic to the canonical geometry on the correspondence space over~$L_4$. Normality of the conformal Cartan connection implies that the Cartan geometry of type $(S,A)$ is torsion-free and the corresponding $\varepsilon$-K\"ahler metric is non-Ricci-f\/lat Einstein. \end{proof} \begin{Remark}It is interesting to note the following geometric interpretation of the correspondence spaces: If $\varepsilon=-1$, the bundle $\mathcal{C} L_4\to L_4$ can be identif\/ied with the twistor bundle $\mathbb{T}L_4\to L_4$ whose f\/iber over a point $x\in L_4$ comprises all self-dual totally isotropic $2$-planes in $T_x L_4$. If $\varepsilon=1$, $\mathcal{C}L_4\to L_4$ can be identif\/ied with the subbundle of the twistor bundle whose f\/iber over a point $x\in L_4$ comprises all self-dual totally isotropic $2$-planes in $T_x L_4$ except the eigenspaces of the endomorphism $\hat{K}_x$. The total space~$\mathcal{C} L_4$ carries a tautological rank-$2$ distribution obtained by lifting each self-dual totally isotropic $2$-plane horizontally to its point in the f\/iber, and it was observed~\cite{AnNurowski, BLN} that, provided the self-dual Weyl tensor of the metric on~$L_4$ vanishes nowhere, this distribution is $(2,3,5)$ almost everywhere. This suggests a relation of the present work to the An--Nurowski twistor construction (and recent work of Bor and Nurowski).
\end{Remark} \subsection[The hypersurface curved orbit M4]{The hypersurface curved orbit $\boldsymbol{M_4}$} On the complement $M - M_5$, $\sigma = 0$ (and so $\mu$ is invariant); this simplif\/ies many formulae there. First, $\varepsilon = -H_{\Phi}(\mathbb{S}, \mathbb{S}) = -\mu_a \mu^a$. Substituting in \eqref{equation:definition-xi}, \eqref{equation:I}, \eqref{equation:J}, \eqref{equation:K} and using that expression for $\varepsilon$ yields (on~$M_4$) \begin{gather} \xi^a = \mu_b \phi^{ba}, \label{equation:xi-hypersurface} \\ I_{ab}= 3 \mu^c \mu_{[c} \phi_{ab]} = -\varepsilon \phi_{ab} - 2 \mu_{[a} \xi_{b]}, \label{equation:I-hypersurface} \\ J_{ab} = 3 \mu^c \phi_{[ca} \theta_{b]} = \mu^c (\ast \phi)_{cab}, \label{equation:J-hypersurface} \\ K_{ab} = -2 \mu^c \mu_{[a} \phi_{b]c} = 2 \mu_{[a} \xi_{b]} . \label{equation:K-hypersurface} \end{gather} Denote by $M_{\mathbf{S}}$ the set on which the line f\/ield $\mathbf{S}$ is def\/ined (recall from Section~\ref{subsection:characterizations-curved-orbits} that $\mathbf{S} := \langle\mu\rangle$ on the space where $\sigma = 0$ and $\mu \neq 0$). By the proof of Proposition \ref{proposition:curved-orbit-characterization}, this is $M_4$ in the Ricci-negative case, $M_4 \cup M_2^+ \cup M_2^-$ in the Ricci-positive case, and $M_4 \cup M_2$ in the Ricci-f\/lat case. \begin{Proposition} On the submanifold $M_{\mathbf{S}}$, $\mathbf{S}^{\perp} = TM_{\mathbf{S}} \subset TM\vert_{M_{\mathbf{S}}}$. \end{Proposition} \begin{proof} The set $M_{\mathbf{S}}$ is precisely where $\sigma = 0$ and $\mu^a = \sigma^{,a} \neq 0$, so $\sigma$ is a def\/ining function for~$M_{\mathbf{S}}$ and hence $TM_{\mathbf{S}} = \ker \mu$ there. \end{proof} \subsubsection{The canonical lattices of hypersurface distributions} Recall that if $\mathbb{S}$ is nonisotropic (if $\sigma$ is not Ricci-f\/lat) it determines a direct sum decomposition $\mathcal{V} = \mathcal{W} \oplus \langle \mathbb{S} \rangle$, where $\mathcal{W} := \langle \mathbb{S} \rangle^{\perp}$, and if $\mathbb{S}$ is isotropic (if $\sigma$ is Ricci-f\/lat), it determines a f\/iltration associated to~\eqref{equation:isotropic-filtration}: $\mathcal{V} \supset \mathcal{W} \supset \im \mathbb{K} \supset \ker \mathbb{K} \supset \langle \mathbb{S} \rangle \supset \{ 0 \}$. On $M_4$, $\mathbb{S} \in \langle X \rangle^{\perp} - \im (X \times \,\cdot\,)$, so $X \times \mathbb{S} \in \ker (X \times \,\cdot\,) - \langle X \rangle$. In particular, $X \times \mathbb{S}$ is isotropic but nonzero, and hence it determines an analogous f\/iltration of~$\mathcal{V}$. Forming the intersections and spans of the components of the f\/iltrations determined by $\mathbb{S}$ and $X \times \mathbb{S}$ gives a lattice of vector subbundles of $\mathcal{V}$ under the operations of span and intersection (in fact, it is a lattice graded by rank). It has $22$ elements in the non-Ricci-f\/lat case and $26$ in the Ricci-f\/lat case, so for space reasons we do not reproduce these here. However, since $\langle X \rangle^{\perp} / \langle X \rangle \cong TM[-1]$, the sublattice of vector bundles $\mathcal{N}$ satisfying $\langle X \rangle \preceq \mathcal{N} \preceq \langle X \rangle^{\perp}$ descends to a natural lattice of subbundles of $TM\vert_{M_4}$. We record these lattices (they are dif\/ferent in the Ricci-f\/lat and non-Ricci-f\/lat cases), which ef\/f\/iciently encode the incidence relationships among the subbundles, in the following proposition. (We omit the proof, which is tedious but straightforward, and which can be achieved by working in an adapted frame.)
\begin{Proposition}\label{proposition:M4-lattice} The bundle $TM\vert_{M_4}$ admits a natural lattice of vector subbundles under the operations of span and intersection: If $\sigma$ is non-Ricci-flat, the lattice is \begin{center} \begin{tikzcd}[row sep = tiny] & & \mathbf{D} \arrow[-]{r} \arrow[-]{rdd} & {[\mathbf{D}, \mathbf{D}]} \arrow[-]{rdd} \\ \phantom{0} \\ & \mathbf{L} \arrow[-]{ruu} \arrow[-]{r} \arrow[-]{rdd} \arrow[-]{rdddd} & (\mathbf{D} + \mathbf{E})^{\perp} \arrow[-, crossing over]{ruu} \arrow[-]{rdd} \arrow[-]{rdddd} & \mathbf{D} + \mathbf{E} \arrow[-]{r} & \mathbf{C} \arrow[-]{rd} \\ 0 \arrow[-]{ru} \arrow[-]{rd} & & & & & TM\vert_{M_4} \\ & \mathbf{S} \arrow[-]{rdd} & \mathbf{E} \arrow[-, crossing over]{ruu} \arrow[-, crossing over]{r} & {[\mathbf{E}, \mathbf{E}]} \arrow[-]{ruu} & TM_4 \arrow[-]{ru} \\ \phantom{0} \\ & & \mathbf{L} \oplus \mathbf{S} \arrow[-, crossing over]{ruuuu} & (\mathbf{L} \oplus \mathbf{S})^{\perp} \arrow[-]{ruuuu} \arrow[-]{ruu} \\ _0 & _1 & _2 & _3 & _4 & _5 \end{tikzcd} . \end{center} In particular, this contains a full flag field \begin{gather*} 0 \subset \mathbf{L} \subset (\mathbf{D} + \mathbf{E})^{\perp} \subset (\mathbf{L} \oplus \mathbf{S})^{\perp} \subset TM_4 \end{gather*} on~$TM_4$. The subbundles in the lattice that depend only on $\mathcal{D}[\mathbf{D}; \sigma]$ and not $\mathbf{D}$ are $0$, $\mathbf{L}$, $\mathbf{S}$, $\mathbf{L} \oplus \mathbf{S}$, $(\mathbf{L} \oplus \mathbf{S})^{\perp}$, $\mathbf{C}$, $TM_4$, $TM\vert_{M_4}$. If $\sigma$ is Ricci-flat, the lattice is \begin{center} \begin{tikzcd}[row sep = tiny] & & \mathbf{D} \arrow[-]{r} \arrow[-]{rdd} & {[\mathbf{D}, \mathbf{D}]} \arrow[-]{rd} \\ & \mathbf{L} \arrow[-]{ru} \arrow[-]{rd} \arrow[-]{rddd} & & & \mathbf{C} \arrow[-]{rd} \\ 0 \arrow[-]{ru} \arrow[-]{rd} & & (\mathbf{D} + \mathbf{E})^{\perp} \arrow[-, crossing over]{ruu} \arrow[-]{r} \arrow[-]{rdd} & \mathbf{D} + \mathbf{E} \arrow[-]{ru} & & TM\vert_{M_4} \\ & \mathbf{S} \arrow[-]{rd} & & & TM_4 \arrow[-]{ru} \\ & & \mathbf{E} \arrow[-, crossing over]{ruu} \arrow[-]{r} & \mathbf{E}^{\perp} \arrow[-]{ruuu} \arrow[-]{ru} \\ _0 & _1 & _2 & _3 & _4 & _5 \end{tikzcd} \end{center} This determines a natural sublattice \begin{center} \begin{tikzcd}[row sep = tiny] & & (\mathbf{D} + \mathbf{E})^{\perp} \arrow[-]{rd} \\ & \mathbf{L} \arrow[-]{ru} \arrow[-]{rd} & & \mathbf{E}^{\perp} \arrow[-]{r} & TM_4 \\ 0 \arrow[-]{ru} \arrow[-]{rd} & & \mathbf{E} \arrow[-]{ru} \\ & \mathbf{S} \arrow[-]{ru} \\ _0 & _1 & _2 & _3 & _4 \end{tikzcd} \end{center} of vector subbundles of $TM_4$. The subbundles in the first lattice that depend only on the $1$-parameter family $\mathcal{D}[\mathbf{D}; \sigma]$ and not $\mathbf{D}$ are $0$, $\mathbf{L}$, $\mathbf{S}$, $\mathbf{E}$, $\mathbf{E}^{\perp}$, $\mathbf{C}$, $TM_4$, $TM\vert_{M_4}$. In both cases, the restriction $\mathbf{L}\vert_{M_4}$ $($which depends only on $\mathcal{D}[\mathbf{D}; \sigma])$ is the intersection of the distribution $\mathbf{D}$ and the tangent space $TM_4$ of the hypersurface. In the lattices, all bundles are implicitly restricted to $M_4$, the numbers indicate the ranks of the bundles in their respective columns, and $($in the case of the two large lattices$)$ the diagram is arranged so that each bundle is positioned horizontally opposite its $\mathbf{g}$-orthogonal bundle. \end{Proposition} \subsubsection[The hypersurface leaf space L3]{The hypersurface leaf space $\boldsymbol{L_3}$}\label{subsubsection:hypersurface-leaf-space} Recall that $L_3 := \pi_L(M_4)$. 
Since $\mathbf{S}$ is spanned by the invariant component $\mu$ of $\mathbb{S}$ and $\mathcal{L}_{\xi} \mathbb{S} = 0$, $\mathbf{S}$ descends to a line f\/ield $\smash{\hat{\mathbf{S}} \subset TL\vert_{L_3}}$. This line f\/ield is contained in $TL_3$ if\/f $\mathbf{S}$ is contained in $TM_4 = \mathbf{S}^{\perp}$, that is (by Proposition \ref{proposition:M4-lattice}) if\/f $\sigma$ is Ricci-f\/lat. Similarly, since the f\/low of $\xi$ preserves $\mathbf{g}$, it also preserves $\mathbf{C} \cap TM_4 = (\mathbf{L} \oplus \mathbf{S})^{\perp}$. Then, because $\mathbf{L} \subset \mathbf{C} \cap TM_4 \subset \mathbf{S}^{\perp} = TM_4$, $\mathbf{C} \cap TM_4$ descends to a $2$-plane distribution $\mathbf{H} \subset TL_3$. \begin{Proposition} The $2$-plane distribution $\mathbf{H} \subset TL_3$ defined as above is contact. \end{Proposition} \begin{proof} Since $\mathbf{C} \cap TM_4 = (\ker \xi^{\flat}) \cap TM_4$, we can write this bundle as $\smash{\ker \iota_{M_4}^* \xi^{\flat}} $, where $\iota_{M_4} \colon$ \mbox{$M_4 \hookrightarrow M$} denotes inclusion. By construction, $\smash{\iota_{M_4}^* \xi^{\flat}}$ (where we have trivialized $\xi^{\flat}$ with respect to an arbitrary scale $\tau$) is also the pullback $\pi_L^* \beta$ of a def\/ining $1$-form $\beta \in \Gamma(T^*L_3)$ for $\mathbf{H}$. Thus, $\smash{\pi_L^* (\beta \wedge d\beta)} = \smash{\pi_L^* \beta \wedge d(\pi_L^* \beta)} = \smash{\iota_{M_4}^* \xi^{\flat} \wedge d(\iota_{M_4}^* \xi^{\flat})} = \smash{\iota_{M_4}^* (\xi^{\flat} \wedge d\xi^{\flat})}$, but computing in an adapted frame shows that $\xi^{\flat} \wedge d\xi^{\flat}$ vanishes nowhere, and hence the same holds for $\beta \wedge d\beta$; equivalently, $\mathbf{H}$ is contact. \end{proof} Now, consider the component $\zeta^a{}_b := -\sigma \psi^a{}_b - \mu_c \chi^{ca}{}_b - \rho \phi^a{}_b\in \Gamma(\End_{\circ}(TM))$ of $\mathbb{K}^A{}_B$ in the splitting \eqref{equation:components-of-K} with respect to a scale $\tau$. Let $\mathbf{J} \colon \mathbf{H} \to TL\vert_{L_3}$ be the map that lifts a vector $\smash{\hat\eta} \in \mathbf{H}_{\hat x}$ to any $\eta \in T_x M_4$ for arbitrary $x \in \pi_L^{-1}(\hat x)$, applies $\zeta$, and then pushes the result forward to~$T_{\hat x} L$ by~$\pi_L$. We show that this map is well-def\/ined, that it is independent of the choice of $\tau$, and that we may regard it as an endomorphism of $\mathbf{H}$: By Lemma~\ref{lemma:Lie-derivative-tractor}, $\mathcal{L}_{\xi} \mathbb{K} = -[L_0^{\mathcal{A}}(\xi), \mathbb{K}] = -[\mathbb{K}, \mathbb{K}] = 0$, so~$\mathbb{K}$, and hence $\zeta$, is invariant under the f\/low of $\xi$; in particular, $\zeta$ is independent of the choice of basepoint $x$ of the lift. Now, any two lifts $\eta, \eta' \in T_x M_4$ dif\/fer by an element of $\ker T_x \pi_L = \langle \xi_x \rangle$; on the other hand, expanding \eqref{equation:K-squared-identity} in terms of the splitting determined by $\tau$, taking a particular component equation, and evaluating at $\sigma = 0$ gives the identity $\zeta^b{}_a \xi^a = \alpha \xi^b$ for some smooth function $\alpha$, so $T_x \pi_L \cdot \zeta(\xi) = 0$, and hence $\mathbf{J}$ is well-def\/ined. Finally, under a change of scale, $\zeta$~is transformed to $\zeta^a{}_b \mapsto \zeta^a{}_b + \Upsilon^a \xi_b - \xi^a \Upsilon_b$ for some form $\Upsilon_a \in \Gamma(T^*M)$~\cite{BEG}. A lift $\eta^b$ of $\hat\eta \in \mathbf{H}$ is an element of $\mathbf{C} \cap TM_4 \subset \mathbf{C} = \ker \xi^{\flat}$, so $\Upsilon^a \xi_b \eta^b = 0$.
The term $\xi^a \Upsilon_b \eta^b$ is again in $\ker T_x \pi_L$, and we conclude that $\mathbf{J}$ is independent of the scale~$\tau$. Now, in the notation of the previous paragraph, we have $\mu_b \zeta^b{}_c \eta^c = \mu_b(-\mu_d \chi^{db}{}_c - \rho \phi^b{}_c) \eta^c = - \rho \mu_b \phi^b{}_c \eta^c$. This is $-\rho \xi_c \eta^c$, and we saw above that $\xi_c \eta^c = 0$, so $\mu_b \zeta^b{}_c \eta^c = 0$, that is, $\zeta^b{}_c \eta^c \in \ker \mu = \mathbf{S}^{\perp}$. Using the $\mathbf{g}$-skewness of $\zeta$ gives $\xi_b \zeta^b{}_c \eta^c = -\eta_b \zeta^b{}_c \xi^c = -\eta_b (\alpha \xi^b) = -\alpha \eta_b \xi^b$, but again $\eta_b \xi^b = 0$, so we also have $\zeta^b{}_c \eta^c \in \ker \xi^{\flat} = \mathbf{L}^{\perp}$. Thus, $\zeta(\eta) \in \mathbf{L}^{\perp} \cap \mathbf{S}^{\perp} = \mathbf{C} \cap TM_4$, and pushing forward by $\pi_L$ gives $\mathbf{J}(\hat\eta) \in \mathbf{H}$, so we may view $\mathbf{J}$ as an endomorphism of~$\mathbf{H}$. \begin{Proposition} The endomorphism $\mathbf{J} \in \Gamma(\End(\mathbf{H}))$ defined as above is an $\varepsilon$-complex structure. \end{Proposition} \begin{proof}In the above notation, unwinding (twice) the def\/inition of $\mathbf{J}$ gives that $\mathbf{J}^2(\hat\eta) = T_x \pi_L \cdot \zeta^2(\eta)$. Now, another component equation of \eqref{equation:K-squared-identity} is $\zeta^a{}_c \zeta^c{}_b - \xi^a \nu_b - \nu^a \xi_b = \varepsilon \delta^a{}_b + \mu^a \mu_b$ for some $\nu \in \Gamma(T^*M)$. The above observations about the terms $\Upsilon^a \xi_b$ and $\xi^a \Upsilon_b$ apply just as well to $\xi^a \nu_b$ and $\nu^a \xi_b$, and since $\eta \in \mathbf{C} \cap TM_4 \subset \mathbf{S}^{\perp} = \ker \mu$, we have $\mu^a \mu_b \eta^b = 0$, so the above component equation implies $\mathbf{J}^2 = \varepsilon \id_{\mathbf{H}}$. In the case $\varepsilon = +1$, one can verify that the $(\pm 1)$-eigenspaces of~$\mathbf{J}$ are both $1$-dimensional, so that $\mathbf{J}$ is indeed an almost paracomplex structure on~$\mathbf{H}$. \end{proof} In the Ricci-negative case, this shows precisely that $(L_3, \mathbf{H}, \mathbf{J})$ is an almost CR structure (in fact, it turns out to be integrable, see the next subsubsection), and one might call the resulting structure in the general case an almost $\varepsilon$-CR structure. The three signs of $\varepsilon$ (equivalently, the three signs of the Einstein constant) give three qualitatively distinct structures, so we treat them separately. \subsubsection{Ricci-negative case: The classical Fef\/ferman conformal structure}\label{subsubsection:curved-orbit-negative-hypersurface} If $\sigma$ is Ricci-negative, then by Example \ref{example:curved-orbit-decomposition-almost-Einstein}, $M - M_5 = M_4$ inherits a conformal structure $\mathbf{c}_{\mathbf{S}}$ of signature $(1, 3)$. We can identify the standard tractor bundle $\mathcal{V}_{\mathbf{S}}$ of $\mathbf{c}_{\mathbf{S}}$ with the restriction $\mathcal{W}\vert_{\Sigma}$ of the $\nabla^{\mathcal{V}}$-parallel subbundle $\mathcal{W} := \langle \mathbb{S} \rangle^{\perp}$, and under this identif\/ication the normal tractor connection on $\mathcal{V}_{\mathbf{S}}$ coincides with the restriction of $\nabla^{\mathcal{V}}$ to $\mathcal{W}\vert_{\Sigma}$ \cite{Gover}.
In particular, $\Hol(\mathbf{c}_{\mathbf{S}}) \leq \SU(1, 2)$, but this containment characterizes (locally) the $4$-dimensional conformal structures that arise from the classical Fef\/ferman conformal construction \cite{CapGoverHolonomyCharacterization, LeitnerHolonomyCharacterization}, which canonically associates to any nondegenerate partially integrable almost CR structure of hypersurface type on a manifold a conformal structure on a natural $\mathbb{S}^1$-bundle over that manifold \cite{Fefferman}, \cite[Example~3.1.7, Section~4.2.4]{CapSlovak}. \begin{Proposition}\label{proposition:Fefferman-conformal-structure} Let $\mathcal{D}$ denote a $1$-parameter family of conformally isometric oriented $(2, 3, 5)$ distributions related by a Ricci-negative almost Einstein scale $\sigma$. \begin{enumerate}\itemsep=0pt \item[$1.$] The conformal structure $\mathbf{c}_{\mathbf{S}}$ of signature $(1, 3)$ determined on the hypersurface curved orbit~$M_4$ is a Fefferman conformal structure. \item[$2.$] The infinitesimal generator of the $($local$)$ $\mathbb{S}^1$-action is $\xi\vert_{M_4} = \iota_7(\sigma)\vert_{M_4}$, so the line field it spans is $\mathbf{L}\vert_{M_4} = \mathbf{D} \cap TM_4$ for every $\mathbf{D} \in \mathcal{D}$. \item[$3.$] The $3$-dimensional CR-structure underlying the Fefferman conformal structure $(M_4, \mathbf{c}_{\mathbf{S}})$ is $(L_3, \mathbf{H}, \mathbf{J})$. \end{enumerate} \end{Proposition} \begin{proof} The f\/irst claim is deduced in the paragraph before the proposition. The latter claims follow from (1), unwinding def\/initions, and the proof of \cite[Corollary~2.3]{CapGoverHolonomyCharacterization}. \end{proof} \subsubsection[Ricci-positive case: A paracomplex analogue of the Fef\/ferman conformal structure]{Ricci-positive case: A paracomplex analogue\\ of the Fef\/ferman conformal structure}\label{subsubsection:curved-orbit-positive-hypersurface} This case is similar to the Ricci-negative case, but dif\/fers qualitatively in two ways. First, the endomorphism $\mathbf{J} \in \End(\mathbf{H})$ is a paracomplex structure rather than a complex one; let $\mathbf{H}_{\pm}$ denote its $(\pm 1)$-eigendistributions, which are both line f\/ields. A contact distribution on a $3$-manifold equipped with a direct sum decomposition into line f\/ields is the $3$-dimensional specialization of a \textit{Legendrean contact structure}, the paracomplex analogue of a partially integrable almost CR structure of hypersurface type \cite[Section~4.2.3]{CapSlovak}. These correspond to regular, normal parabolic geometries of type $(\SL(3, \mathbb{R}), P_{12})$ where $P_{12} < \SL(3, \mathbb{R})$ is a Borel subgroup. The analogue of the classical Fef\/ferman construction associates to any Legendrean contact structure on a manifold $N$ a neutral conformal structure on a natural $\SO(1, 1)$-bundle over the manifold \cite{HSSTZ, NurowskiSparling}. By analogy with the construction discussed in Section~\ref{subsubsection:curved-orbit-negative-hypersurface}, we call a~conformal structure that locally arises this way a \textit{para-Fefferman conformal structure}. Second, the conformal structure $\mathbf{c}_{\mathbf{S}}$ is def\/ined on the union $M_4 \cup M_2^+ \cup M_2^-$, but only its restriction to $M_4$ is induced by the analogue of the classical Fef\/ferman construction (indeed, recall from Proposition~\ref{proposition:curved-orbit-characterization} that the vector f\/ield $\xi$ whose integral curves comprise the leaf space $L$ vanishes on $M_2^{\pm}$).
The paracomplex analogue of Proposition \ref{proposition:Fefferman-conformal-structure} is the following: \begin{Proposition} Let $\mathcal{D}$ denote a $1$-parameter family of conformally isometric $(2, 3, 5)$ distributions related by a Ricci-positive almost Einstein scale $\sigma$. \begin{enumerate}\itemsep=0pt \item[$1.$] The conformal structure $\mathbf{c}_{\mathbf{S}} \vert_{M_4}$ of signature $(2, 2)$ determined on the hypersurface curved orbit $M_4$ is a para-Fefferman conformal structure. \item[$2.$] The infinitesimal generator of the $($local$)$ $\SO(1, 1)$-action is $\xi\vert_{M_4} = \iota_7(\sigma)\vert_{M_4}$, so the line field it spans is $\mathbf{L}\vert_{M_4} = \mathbf{D} \cap TM_4$ for every $\mathbf{D} \in \mathcal{D}$. \item[$3.$] The $3$-dimensional Legendrean contact structure underlying $(M_4, \mathbf{c}_{\mathbf{S}}\vert_{M_4})$ is $(L_3, \mathbf{H}_+ \oplus \mathbf{H}_-)$, where $\mathbf{H}_{\pm}$ are the $(\pm 1)$-eigenspaces of the paracomplex structure~$\mathbf{J}$ on~$\mathbf{H}$. \end{enumerate} \end{Proposition} The geometry of $3$-dimensional Legendrean contact structures admits another concrete, and indeed classical (local) interpretation, namely as that of second-order ordinary dif\/ferential equations (ODEs) modulo point transformations: We can regard a second-order ODE $\ddot{y} = F(x, y, \dot{y})$ as a function $F(x, y, p)$ on the jet space $J^1 := J^1(\mathbb{R}, \mathbb{R})$, and the vector f\/ields $D_x := \partial_x + p \partial_y + F(x, y, p) \partial_p$ and $\partial_p$ span a contact distribution (namely the kernel of $dy - p \,dx \in \Gamma(T^* J^1)$), so $\langle D_x \rangle \oplus \langle \partial_p \rangle$ is a Legendrean contact structure on $J^1$. Point transformations of the ODE, namely those given by prolonging to $J^1$ (local) coordinate transformations of $\mathbb{R}_{xy}^2$, are precisely those that preserve the Legendrean contact structure (up to dif\/feomorphism) \cite{DoubrovKomrakov}. \subsubsection{Ricci-f\/lat case: A f\/ibration over a special conformal structure}\label{subsubsection:curved-orbit-flat-hypersurface} In this case, Example \ref{example:curved-orbit-decomposition-almost-Einstein} gives that the hypersurface curved orbit $M_4$ locally f\/ibers over the space $\smash{\wt L}$ of integral curves of $\mathbf{S}$ (nota bene the f\/ibrations $\pi_L\vert_{M_4} \colon M_4 \to L_3$ in the non-Ricci-f\/lat cases above are instead along the integral curves of $\mathbf{L}$), and that $\smash{\wt L}$ inherits a conformal structure~$\mathbf{c}_{\wt L}$ of signature~$(1, 2)$. Considering the sublattice of the last lattice in Proposition \ref{proposition:M4-lattice} of the distributions containing $\mathbf{S}$ and forming the quotient bundles modulo $\mathbf{S}$ yields a complete f\/lag f\/ield of $T \wt L$ that we write as $0 \subset \mathbf{E} / \mathbf{S} \subset \mathbf{E}^{\perp} / \mathbf{S} \subset T \wt L$; it depends only on $\mathcal{D}$. Since $\mathbf{E}$ is totally $\mathbf{c}$-isotropic, the line f\/ield $\mathbf{E} / \mathbf{S}$ is $\mathbf{c}_{\wt L}$-isotropic, and by construction it is orthogonal to~$\mathbf{E}^{\perp} / \mathbf{S}$ with respect to $\mathbf{c}_{\wt L}$. Thus, we may regard the induced structure on~$\smash{\wt L}$ as a Lorentzian conformal structure equipped with an isotropic line f\/ield. Similarly, the f\/ibration along the integral curves of $\mathbf{L}$ determines a complete f\/lag f\/ield that we denote $0 \subset \mathbf{E} / \mathbf{L} \subset \mathbf{H} \subset TL_3$.
Computing in a local frame gives $\mathbf{E} / \mathbf{L} = \ker \mathbf{J} = \im \mathbf{J}$, and this line is the kernel of the (degenerate, negative-semidef\/inite) conformal bilinear form that $\mathbf{c}$ determines on~$\mathbf{H}$. \subsection[The high-codimension curved orbits M2+/-, M2, M0+/-]{The high-codimension curved orbits $\boldsymbol{M_2^{\pm}}$, $\boldsymbol{M_2}$, $\boldsymbol{M_0^{\pm}}$}\label{subsection:high-codimension-curved-orbits} Recall that on these orbits, $\xi = 0$ and $K = 0$, and hence $\mathbf{E}$ is not def\/ined. Recall also that if $\sigma$ is Ricci-negative, all three of these curved orbits are empty. If $\sigma$ is Ricci-f\/lat, only $M_2$ and $M_0^{\pm}$ occur, and if $\sigma$ is Ricci-positive, only $M_2^{\pm}$ occur. Since the curved orbits $M_0^{\pm}$ are $0$-dimensional, they inherit no structure. \subsubsection[The curved orbits M2+/-: Projective surfaces]{The curved orbits $\boldsymbol{M_2^{\pm}}$: Projective surfaces}\label{subsubsection:curved-orbit-M2pm} By Theorem \ref{theorem:curved-orbit-decomposition} and the orbit decomposition of the f\/lat model in Section~\ref{subsection:orbit-decomposition-flat-model}, the holonomy reduction determines parabolic geometries of type $(\SL(3, \mathbb{R}), P_1)$ and $(\SL(3, \mathbb{R}), P_2)$ on $M_2^{\pm}$. Torsion-freeness of the normal conformal Cartan connection immediately implies that these parabolic geometries are torsion-free and hence determine underlying torsion-free projective structures (that is, equivalence classes of torsion-free af\/f\/ine connections having the same unparametrized geodesics). Again, the formulae for various objects simplify on this orbit: By the proof of Proposition~\ref{proposition:curved-orbit-characterization} we have $\sigma = 0$, $\xi = 0$, and $\mu^c \theta_c = \pm 1$ here, and substituting in~\eqref{equation:I}, \eqref{equation:J}, \eqref{equation:K} gives \begin{gather*} I = -\phi, \qquad J = \pm \phi, \qquad K = 0 . \end{gather*} These specializations immediately give the Ricci-positive analog of Corollary \ref{corollary:null-complementary-distribution-set-of-definition}: \begin{Proposition} \label{proposition:vanishing-phi-pm-infinity} Let $(M, \mathbf{D})$ be an oriented $(2, 3, 5)$ distribution and $\sigma$ a Ricci-positive almost Einstein scale for $\mathbf{c}_{\mathbf{D}}$. Then, the limiting normal conformal Killing forms $\phi_{\mp\infty} := \pm I + J$ respectively vanish precisely on $M_2^{\pm}$, so the distributions $\mathbf{D}_{\mp\infty}$ they determine are respectively defined precisely on $M_5 \cup M_4 \cup M_2^{\mp}$. \end{Proposition} \begin{Proposition}\label{proposition:TM2pm-D} For all $x \in M_2^{\pm}$, $T_x M_2^{\pm} = \mathbf{D}_x$ $($for every $\mathbf{D} \in \mathcal{D})$. \end{Proposition} \begin{proof}By Proposition \ref{proposition:curved-orbit-characterization}, $M_2^{\pm} = \{ x \in M \colon \xi_x = 0 \}$ and so $TM_2^{\pm} \subseteq \ker \nabla \xi$. On the other hand, as in the proof of that proposition we have $\xi^a{}_{,b} = -\zeta^a{}_b - \theta_c \mu^c \delta^a{}_b$, and computing in an adapted frame shows that on $M_2^{\pm}$, $\xi^a{}_{,b}$ has rank $3$. Equivalently, the kernel of $\nabla \xi$ has dimension $2 = \dim T_x M_2^{\pm}$, so $\ker \nabla \xi = T_x M_2^{\pm}$.
Writing $\nabla^{\mathcal{V}}_c \mathbb{K}^A{}_B = 0$ in components gives $\xi^b{}_{,c} = -\zeta^b{}_c - \mu^d \theta_d \delta^b{}_c$, and as in the proof of Proposition \ref{proposition:curved-orbit-characterization}, $-\zeta^b{}_c = \sigma \psi^b{}_c + \mu_d \chi^{db}{}_c + \rho \phi^b{}_c$. Since $x \in M_2^{\pm}$, $\mu^d \theta_d = \pm 1$ and $\sigma = 0$. For $\eta \in \mathbf{D}_x$, Proposition \ref{proposition:identites-g2-structure-components}(2) gives that $\phi^b{}_c \eta^c = 0$, and computing in an adapted frame gives that $\mu_d \chi^{db}{}_c$ restricts to $\id_{\mathbf{D}}$ on $M_2^{\pm}$. Substituting then gives $\xi^b{}_{,c} \eta^c = 0$, so by dimension count $T_x M_2^{\pm} = \mathbf{D}_x$. \end{proof} \subsubsection[The curved orbit M2]{The curved orbit $\boldsymbol{M_2}$}\label{subsubsection:curved-orbit-M2} As for the hypersurface curved orbits, forming the intersections and spans of the components of the f\/iltrations determined by $\mathbb{S}$ and $X \times \mathbb{S}$ in this case yields a lattice of (14) vector subbundles of $\mathcal{V}$, and determining the lattice of (10) vector subbundles of $TM\vert_{M_2}$ this induces shows in particular that one has a distinguished line f\/ield $\mathbf{S} = \mathbf{D} \cap TM_2$ on~$M_2$. Specializing the formulae for $I$, $J$, $K$ as in the previous cases gives that on $M_2$, $I = J = K = 0$. \section{Examples}\label{section:examples} In this section, we give three conformally nonf\/lat examples, one for each sign of the Einstein constant; each is produced using a dif\/ferent method. To the knowledge of the authors, before the present work there were no examples in the literature of nonf\/lat $(2, 3, 5)$ conformal structures known to admit a~non-Ricci-f\/lat almost Einstein scale.\footnote{We recently learned from Bor and Nurowski that they have, in work in progress, also constructed examples~\cite{BorNurowskiPrivate}.} In particular, these examples show that none of the holonomy reductions considered in this article force local f\/latness of the underlying conformal structure. \begin{Example}[a distinguished rolling distribution]\label{example:distinguished-rolling-distribution} We construct a homogeneous Sasaki--Einstein metric of signature $(3, 2)$ whose negative determines a Ricci-negative conformal structure. Each $(2, 3, 5)$ distribution in the corresponding family is dif\/feomorphic to a particular so-called \textit{rolling distribution}. Let $(\mathbb{S}^2, h_+, \mathbb{J}_+)$ and $(\mathbb{H}^2, h_-, \mathbb{J}_-)$ respectively denote the round sphere and hyperbolic plane with their usual K\"ahler structures, rescaled so that their respective scalar curvatures are~$\pm 12$. In the usual respective polar coordinates $(r, \varphi)$ and $(s, \psi)$, \begin{alignat*}{3} &h_+ := \frac{2}{3} \cdot \frac{1}{\big(r^2 + 1\big)^2} \big(dr^2 + r^2 d\varphi^2\big), \qquad&& \mathbb{J}_+:= r \partial_r \otimes d\varphi - \tfrac{1}{r} \partial_{\varphi} \otimes dr ,&\\ & h_- := \frac{2}{3} \cdot \frac{1}{\big(s^2 - 1\big)^2} \big(ds^2 + s^2 d\psi^2\big), \qquad && \mathbb{J}_- := s \partial_s \otimes d\psi - \tfrac{1}{s} \partial_{\psi} \otimes ds . \end{alignat*} Then, the triple $(\mathbb{S}^2 \times \mathbb{H}^2, \hatg, \hatK)$, where $\hatg := h_+ \oplus -h_-$ and $\hatK := \mathbb{J}_+ \oplus \mathbb{J}_-$, is a K\"ahler structure satisfying $\smash{\hat R_{ab} = 6 \hatg_{ab}}$.
The K\"ahler form $\hatg_{ac} \hatK^c{}_b$ is equal to $(d\alpha)_{ab}$, where \begin{gather*} \alpha := \frac{2}{3}\left(-\frac{\varphi r\,dr}{(r^2 + 1)^2} + \frac{\psi s\,ds}{(s^2 - 1)^2}\right) . \end{gather*} The inf\/initesimal symmetries of the K\"ahler structure are spanned by the lifts of the inf\/in\-te\-si\-mal symmetries of $(\mathbb{S}^2, h_+, \mathbb{J}_+)$ and $(\mathbb{H}^2, h_-, \mathbb{J}_-)$, and so the inf\/initesimal symmetry algebra is $\mathfrak{aut}(\hatg, \hatK) \cong \mfso(3, \mathbb{R}) \oplus \mfsl(2, \mathbb{R})$. On the canonical $\mathbb{S}^1$-bundle $\pi \colon M \to \mathbb{S}^2 \times \mathbb{H}^2$ def\/ined in Section~\ref{subsubsection:twistor-construction}, with standard f\/iber coordinate $\lambda$, def\/ine $\beta := d \lambda - 2 \pi^* \alpha$. Then, the associated Sasaki--Einstein structure is $(M, g, \partial_{\lambda})$, where $g := \pi^* \hatg + \beta^2$. The normalizations of the scalar curvatures of the sphere and hyperbolic plane were chosen so that $R_{ab} = 4 g_{ab}$. The $1$-parameter family $\{\mathbf{D}_{\upsilon}\}$ of corresponding oriented $(2, 3, 5)$ distributions, which in particular induce the conformal class $[-g]$, is \begin{gather*} \mathbf{D}_{\upsilon} =\bigg\langle 3 \big(r^2\! + 1\big) s \partial_r + 3 \big(s^2\! - 1\big) s \cos \gamma \partial_s + 3 \big(s^2\! - 1\big) \sin \gamma \partial_{\psi} + 4 s \left(\frac{s \psi \cos \gamma}{s^2 - 1} - \frac{r \varphi}{r^2 + 1}\right) \partial_{\lambda}, \\ \hphantom{\mathbf{D}_{\upsilon} =\bigg\langle}{} 3 \big(r^2 + 1\big) s \partial_{\varphi} + 3 \big(s^2 - 1\big) s \sin \gamma \partial_s - 3 r \big(s^2 - 1\big) \cos \gamma \partial_{\psi} + \frac{4 s}{s^2 - 1} r \psi \sin \gamma \partial_{\lambda} \bigg\rangle, \end{gather*} where \begin{gather*} \gamma := \frac{r^2 - 1}{r^2 + 1} \varphi + \frac{s^2 + 1}{s^2 - 1} \psi - 3 \lambda + \upsilon . \end{gather*} One can compute the tractor connection explicitly (the explicit expression is unwieldy, so we do not reproduce it here) and use it to compute that the conformal holonomy $\Hol([-g])$ is the full group $\SU(1, 2)$. In particular, this shows that in the Ricci-negative case the holonomy reduction considered in this case does not automatically entail a holonomy reduction to a smaller group. Since almost Einstein scales are in bijective correspondence with parallel standard tractors, the space of Einstein scales is $1$-dimensional (as an independent parallel standard tractor would further reduce the holonomy). One can compute that $\mathfrak{aut}(\mathbf{D}_{\upsilon}) \cong \mathfrak{aut}(\hatg, \hatK) \cong \mfso(3, \mathbb{R}) \oplus \mfsl(2, \mathbb{R})$ and $\mathfrak{aut}([-g]) \cong \mathfrak{aut}(g) \cong \mathfrak{aut}(\hatg, \hatK) \oplus \langle \xi \rangle \cong \mfso(3, \mathbb{R}) \oplus \mfsl(2, \mathbb{R}) \oplus \mathbb{R}$. One can show that every distribution $\mathbf{D}_{\upsilon}$ is equivalent to the so-called rolling distribution for the Riemannian surfaces $(\mathbb{S}^2, g_+)$ and $(\mathbb{H}^2, g_-)$. The underlying space of this distribution, which we can informally regard as the space of relative conf\/igurations of $\mathbb{S}^2$ and $\mathbb{H}^2$ in which the surfaces are tangent at a single point, is the twistor bundle \cite{AnNurowski} over $\mathbb{S}^2 \times \mathbb{H}^2$ whose f\/iber over $(x_+, x_-)$ is the circle $\operatorname{Iso}(T_{x_+} \mathbb{S}^2, T_{x_-} \mathbb{H}^2) \cong \mathbb{S}^1$ of isometries. 
The distribution is the one characterized by the so-called no-slip, no-twist conditions on the relative motions of the two surfaces \cite[Section~3]{BryantHsu}. We can produce a para-Sasaki analogue of this example, which in particular has full holonomy group $\SL(3, \mathbb{R})$ and hence shows that the holonomy reduction to that group again does not auto\-matically entail a reduction to a smaller group. Let $(\mathbb{L}^2, h, \mathbb{J})$ denote the para-K\"ahler Lorentzian surface with \begin{gather*} h := \frac{2}{3 \big(r^2 + 1\big)^2}\big({-}dr^2 + r^2 d\varphi^2\big), \qquad \mathbb{J} := r \partial_r \otimes d\varphi + \tfrac{1}{r} \partial_{\varphi} \otimes dr . \end{gather*} Then, the triple $(\mathbb{L}^2 \times \mathbb{L}^2, h \oplus h, \mathbb{J} \oplus \mathbb{J})$ is a suitably normalized para-K\"ahler structure and we can proceed as before. Every $(2, 3, 5)$ distribution in the determined family is dif\/feomorphic to the Lorentzian analogue of the rolling distribution for the surfaces $(\mathbb{L}^2, h)$ and $(\mathbb{L}^2, -h)$.\footnote{This para-K\"ahler--Einstein structure is isometric to \cite[equation~(4.21)]{Chudecki}, which is attributed there to Nurowski.} \end{Example} \begin{Example}[a cohomogeneity $1$ distribution from a homogeneous projective surface]\label{example:Dirichlet-Ricci-positive} We construct an example of a Ricci-positive almost Einstein $(2, 3, 5)$ conformal structure by specifying a para-Fef\/ferman conformal structure $\mathbf{c}_N$ on a $4$-manifold $N$ and solving a natural geometric Dirichlet problem: We produce a conformal structure $\mathbf{c}$ on $N \times \mathbb{R}$ equipped with a holonomy reduction to $\SL(3, \mathbb{R})$ for which the hypersurface curved orbit is $N$ and the induced structure there is $\mathbf{c}_N$. In particular, this yields an example of an almost Einstein $(2, 3, 5)$ distribution for which the zero locus of the almost Einstein scale is nonempty, and hence for which the curved orbit decomposition has more than one nonempty curved orbit. Consider the projective structure $[\nabla]$ on $\mathbb{R}^2_{xy}$ containing the torsion-free connection $\nabla$ characterized by \begin{gather*} \nabla_{\partial_x} \partial_x = 3 x y^2 \partial_x + x^3 \partial_y , \qquad \nabla_{\partial_x} \partial_y = \nabla_{\partial_y} \partial_x = 0 , \qquad \nabla_{\partial_y} \partial_y = x^3 \partial_x - 3 x^2 y \partial_y . \end{gather*} Eliminating the parameter in the geodesic equations for $\nabla$ yields the ODE $\ddot{y} = (x \dot{y} - y)^3$, which corresponds (recall Section~\ref{subsubsection:curved-orbit-positive-hypersurface}) to the function $F(x, y, p) = (x p - y)^3$. The point symmetry algebra of the ODE (that is, the symmetry algebra of the Legendrean contact structure on $J := \{x p - y > 0\} \subset J^1(\mathbb{R}, \mathbb{R})$) is $\mfsl(2, \mathbb{R})$ and acts inf\/initesimally transitively. Hence, we may identify $J$ with an open subset of (some cover of) $\SL(2, \mathbb{R})$. With respect to the left-invariant local frame \begin{gather*} E_X := -(x p - y)^2 \partial_p, \qquad E_H := x \partial_x + y \partial_y, \qquad E_Y := \frac{1}{x p - y} (\partial_x + p \partial_y) \end{gather*} of $J$, the line f\/ields spanning the contact distribution are $\langle \partial_p \rangle = \langle E_X \rangle$, and $\langle D_x \rangle = \langle E_Y - 3 E_X \rangle$.
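One can check mechanically that this frame realizes the claimed $\mfsl(2, \mathbb{R})$-structure. The following short computer algebra sketch (our addition, using the Python library sympy; it is not part of the original computation) verif\/ies the standard bracket relations $[E_H, E_X] = 2 E_X$, $[E_H, E_Y] = -2 E_Y$, $[E_X, E_Y] = E_H$:

\begin{verbatim}
# Sketch (our addition): check that E_X, E_H, E_Y close into
# sl(2,R).  A vector field is encoded by its coefficient list
# [f_x, f_y, f_p] with respect to (d_x, d_y, d_p).
import sympy as sp

x, y, p = sp.symbols('x y p')
crd = [x, y, p]

def bracket(a, b):
    # [a,b]^i = a^j d_j(b^i) - b^j d_j(a^i)
    return [sum(a[j]*sp.diff(b[i], crd[j]) - b[j]*sp.diff(a[i], crd[j])
                for j in range(3)) for i in range(3)]

def is_zero(v):
    return all(sp.simplify(c) == 0 for c in v)

EX = [0, 0, -(x*p - y)**2]
EH = [x, y, 0]
EY = [1/(x*p - y), p/(x*p - y), 0]

assert is_zero([l - 2*m for l, m in zip(bracket(EH, EX), EX)])  # [H,X] = 2X
assert is_zero([l + 2*m for l, m in zip(bracket(EH, EY), EY)])  # [H,Y] = -2Y
assert is_zero([l - m for l, m in zip(bracket(EX, EY), EH)])    # [X,Y] = H
\end{verbatim}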
The Fef\/ferman conformal structure $(N, \mathbf{c}_N)$ is again homogeneous: Its ($5$-dimensional) symmetry algebra $\mathfrak{aut}(\mathbf{c}_N)$ contains an inf\/initesimally transitive subalgebra isomorphic to $\mfgl(2, \mathbb{R})$. A (local) left-invariant frame of $N$ realizing this subalgebra is given by \begin{gather*} \hat E_X = E_X + x (x p - y) \partial_a , \qquad \hat E_H = E_H - \partial_a , \qquad \hat E_Y = E_Y , \qquad \partial_a , \end{gather*} where $a$ is the standard coordinate on the f\/iber of $N \to J$ and our notation uses the natural (local) decomposition $N \cong J \times \mathbb{R}_a$. In the dual left-invariant coframe $\{\chi, \eta, \upsilon, \alpha\}$, the conformal structure $\mathbf{c}_N$ has left-invariant representative $g_N := - \chi \upsilon - \eta^2 + \eta \alpha - \upsilon^2$. The scale $\sigma_N := e^{a / 2} \sqrt{x p - y}$ (given here with respect to the scale corresponding to $g_N$) is an almost Einstein scale, and hence $\smash{g_E := \sigma_N^{-2} g_N}$ is Einstein (in fact, Ricci-f\/lat). The conformal class $\mathbf{c}$ on $M := N \times \mathbb{R}_r$ containing $g' := g_E - dr^2$ admits the almost Einstein scale $r$ (here given with respect to $g'$): $\smash{g := r^{-2} g'\vert_{\{\pm r > 0\}} \in \mathbf{c}\vert_{\{\pm r > 0\}}}$ is a \textit{Poincar\'e--Einstein metric} for $\mathbf{c}_N$, and in particular is Ricci-positive, and $\mathbf{c}_N$ is a \textit{conformal infinity} for $g$; see \cite[Section~4]{FeffermanGraham}. (We suppress the notation for the pullback by the canonical projection $M = N \times \mathbb{R} \to N$.) So, the curved orbits are $M_5^{\pm} = \{(p, r) \in N \times \mathbb{R} \colon \pm r > 0\}$, $M_4 = N \times \{0\} \leftrightarrow N$, and $M_2^{\pm} = \varnothing$. On $M_5$, $g := r^{-2} g'$ is Ricci-positive, and $(N, \mathbf{c}_N)$ is a conformal inf\/inity for either of $\smash{(M_5^{\pm}, g\vert_{M_5^{\pm}})}$. The inf\/initesimal symmetry algebra $\mathfrak{aut}(\mathbf{c})$ of $\mathbf{c}$ has dimension $6$, and is spanned by $\mathcal{X} := y \partial_x - p^2 \partial_p + p \partial_a$, $\mathcal{H} := -x \partial_x + y \partial_y + 2 p \partial_p - \partial_a$, $\mathcal{Y} := x \partial_y + \partial_p$, $\mathcal{Z} := e^{-a}[(x p - y) \partial_p - x \partial_a]$, $\mathcal{A} := - 2 \partial_a + r \partial_r$, $\partial_r$. Now, $\mathcal{X} \wedge \mathcal{H} \wedge \mathcal{Y} \wedge \mathcal{A} \wedge \partial_r = -2 (x p - y)^2 \partial_x \wedge \partial_y \wedge \partial_p \wedge \partial_a \wedge \partial_r$, which vanishes nowhere on $M$, so $(M, \mathbf{c})$ is homogeneous. Computing the compatible parallel tractor $3$-forms, and in particular using \eqref{equation:phi-parameterization-Ricci-positive}, gives that one $1$-parameter family of conformally isometric oriented $(2, 3, 5)$ distributions~$\mathbf{D}_t^{\mp}$ that induce $\mathbf{c}$ and are related by the Einstein scale~$r$ is given on~$M_5$ as \begin{gather*} \mathbf{D}_t^{\mp} := \bigg\langle \pm \frac{r e^{-a \mp t}}{x p - y} \hat E_X + 2 \partial_a ,\mp \left[2 e^{2 a \pm t} (x p - y)^2 + \frac{1}{2} e^{\mp t} r^2\right] \hat E_X \\ \hphantom{\mathbf{D}_t^{\mp} := \bigg\langle}{} + e^a r (x p - y) \hat E_H \pm 2 (x p - y)^2 e^{2 a \pm t} \hat E_Y + \frac{1}{r} \partial_a + \partial_r \bigg\rangle . \end{gather*} Computing the wedge product of the two spanning f\/ields shows that this span extends smoothly across $M_4$ to a $(2, 3, 5)$ distribution on all of $M$. 
By def\/inition this family is $\mathcal{D}(\mathbf{D}_0^-; r)$, and the corresponding conformal Killing f\/ield is $\iota_7(r) = \mathcal{A}$. The inf\/initesimal symmetry algebra of $\smash{\mathbf{D}_t^{\pm}}$ is $\smash{\mathfrak{aut}(\mathbf{D}_t^{\pm})} = \smash{\langle \mathcal{X}, \mathcal{H}, \mathcal{Y}, \pm e^{\pm t} \mathcal{Z} - 2 \partial_r \rangle} \cong \smash{\mfgl(2, \mathbb{R})}$. In particular, this furnishes an example of an inhomogeneous $(2, 3, 5)$ distribution that induces a homogeneous conformal structure. The metric $g'$ is itself Ricci-f\/lat, so the conformal structure $\mathbf{c}$ admits two linearly independent almost Einstein scales. In the scale of $\mathbf{c}$ determined by $g'$, $\aEs(\mathbf{c}) = \langle 1, r \rangle$, and the corresponding conformal Killing f\/ields are spanned by $\iota_7(1) = \mathcal{Z} + \partial_r$ and $\iota_7(r) = \mathcal{A}$. These scales correspond to two linearly independent parallel tractors, which reduces the conformal holonomy $\Hol(\mathbf{c})$ to a proper subgroup of $\SL(3,\mathbb{R})$; computing gives $\Hol(\mathbf{c}) \cong \SL(2, \mathbb{R}) \ltimes \mathbb{R}^2$. \end{Example} \begin{Example}[submaximally symmetric $(2, 3, 5)$ distributions] In Cartan's analysis \cite{CartanFiveVariables} of the equivalence problem for $(2, 3, 5)$ distributions, he showed that if a $(2, 3, 5)$ distribution $\mathbf{D}$ has inf\/initesimal symmetry algebra of dimension $< 14$ (equivalently, if it is not locally f\/lat) and satisf\/ies a natural uniformity condition, then $\dim \mathfrak{aut}(\mathbf{D}) \leq 7$. (It was shown much more recently, in~\cite{KruglikovThe}, that the uniformity condition is unnecessary.) Moreover, equality holds if\/f the distribution is locally equivalent, up to a suitable notion of complexif\/ication, to the distribution \begin{gather}\label{equation:submaximal-distributions} \mathbf{D}_I := \big\langle \partial_q ,\partial_x + p \partial_y + q \partial_p - \tfrac{1}{2}\big[q^2 + \tfrac{10}{3} I p^2 + \big(1 + I^2\big) y^2\big] \partial_z \big\rangle \end{gather} on $\mathbb{R}^5_{xypqz}$ for some constant $I$.\footnote{The coef\/f\/icient $\smash{\frac{10}{3}}$ corrects an arithmetic error in \cite[Section~9, equation~(6)]{CartanFiveVariables}. Also, note that we have specialized the formula given there to constant~$I$.} The almost Einstein geometry of the distributions $\mathbf{D}_I$ is discussed in detail in \cite{Willse}: The induced conformal structure $\mathbf{c}_I := \mathbf{c}_{\mathbf{D}_I}$ contains the representative metric \begin{gather*} g_I := \left[-\tfrac{3}{2} \big(I^2 + 1\big) y^2 + 2 I p^2 - \tfrac{1}{2} q^2\right] \! dx^2 - 4 I p \,dx \,dy + q \,dx \,dp \\ \hphantom{g_I :=}{}- 3 p \,dx \,dq - 3 \,dx \,dz - 3 I \,dy^2 + 3 \,dy \,dq - 2 \,dp^2 . \end{gather*} The trivializations by $g_I$ of the almost Einstein scales of $\mathbf{c}_I$ are the pullbacks by the projection $\mathbb{R}^5_{xypqz} \to \mathbb{R}_x$ of the solutions of the homogeneous ODE $\sigma'' - \tfrac{1}{3} I \sigma = 0$ in $x$, and all of these turn out to be Ricci-f\/lat. In particular the vector space of almost Einstein scales of $\mathbf{c}_I$ is $2$-dimensional, so by Theorem \ref{theorem:conformal-Killing-field-decomposition} $\dim \mathfrak{aut}(\mathbf{c}_I) = \dim \mathfrak{aut}(\mathbf{D}_I) + \dim \aEs(\mathbf{c}_I) = 9$. For all $I$, $\Hol(\mathbf{c}_I)$ is isomorphic to the $5$-dimensional Heisenberg group.
Unlike for the non-Ricci-f\/lat cases, the authors are aware of no example of a $(2, 3, 5)$ distribution $\mathbf{D}$ for which $\mathbf{c}_{\mathbf{D}}$ is equal to the full ($8$-dimensional) stabilizer $\SL(2, \mathbb{R}) \ltimes Q_+$ in $\G_2$ of an isotropic vector in the standard representation. These distributions are contained in the f\/irst class of examples of $(2, 3, 5)$ distributions whose induced conformal structures locally admit Einstein representatives \cite[Example~6]{Nurowski}.\footnote{In that reference, these distributions were given in a form not immediately recognizable as dif\/feomorphic to those in \eqref{equation:submaximal-distributions}. For $I \neq \pm \frac{3}{4}$, the distribution $\mathbf{D}_I$ is dif\/feomorphic to the distribution def\/ined via \cite[equation~(55)]{Nurowski} by the function $F(q) = q^m$, where $k = 2 m - 1$ is any value that satisf\/ies \begin{gather*} I^2 = \frac{(k^2 + 1)^2}{(k^2 - 9)(\tfrac{1}{9} - k^2)} ; \end{gather*} when $I = \pm \frac{3}{4}$, one may take $F(q) = \log q$ \cite{DoubrovKruglikov}.} \end{Example} \begin{landscape}
1,108,101,564,106
arxiv
\section{\label{sec:level1}First-level heading:\protect\\ The line \section{\label{Intro}Introduction} Despite their apparently separated application areas, general relativity and quantum information are not disjoint research fields. On the contrary, following the pioneering work of Alsing and Milburn \cite{Alsingtelep} a wealth of works \cite{TeraUeda2,ShiYu,Alicefalls,AlsingSchul,SchExpandingspace,Adeschul,KBr,LingHeZ,ManSchullBlack,PanBlackHoles,Steeg} has considered different situations in which entanglement was studied in a general relativistic setting, for instance, quantum information tasks in the proximity of black holes \cite{TeraUedaSpd,PanBlackHoles,ManSchullBlack,Adeschul}, entanglement in an expanding universe \cite{SchExpandingspace,Steeg}, entanglement with non-inertial partners \cite{Alicefalls,AlsingSchul,TeraUeda2,KBr} etc. Entanglement behavior in non-inertial frames was first considered in \cite{Alsingtelep} where the fidelity of teleportation between relative accelerated partners was analyzed. After this, occupation number entanglement degradation of scalar \cite{Alicefalls} and Dirac \cite{AlsingSchul} fields due to Unruh effect \cite{Unruh,Crispino} was shown. Recent works studied the effect of the instantaneous Wigner rotations and Thomas spin precession on entanglement \cite{AlsingWigner},\cite{AlsingWignerFot}. The previous work \cite{AlsingSchul} on Unruh effect for Dirac field mode entanglement does not consider the spin of the parties. Hence, only two occupation numbers $n=(0,1)$ are allowed for each mode. Higher values of $n$ are forbidden by Pauli exclusion principle. However, addressing the effect of Unruh decoherence on spin entanglement can only be done by incorporating the spin of the parties in the framework from the very beginning. As a consequence, occupation number $n=2$ is also allowed. This fact will affect occupation number entanglement which has to be reconsidered in this new setting. For this purpose, in this work we will study the case of two parties (Alice and Rob) sharing a general superposition of Dirac vacuum and all the possible one particle spin states for both Alice and Rob. Alice is in an inertial frame while Rob undergoes a constant acceleration $a$. We will show that Rob --when he is accelerated respect to an inertial observer of the Dirac vacuum-- would observe a thermal distribution of fermionic spin $1/2$ particles due to Unruh effect \cite{Unruh}. Next, we will consider that Alice and Rob share spin Bell states in a Minkowski frame. Then, the case in which Alice and Rob share a superposition of the Dirac vacuum and a specific one particle state in a maximally entangled combination. In both cases we analyze the entanglement and mutual information in terms of Rob's acceleration $a$. Finally, we will study the case when the information about spin is erased from our setting by partial tracing, remaining only the occupation number information. Here, entanglement is more degraded than in \cite{AlsingSchul}. This comes about because more accessible levels of occupation number are allowed, so our system has a broader margin to become degraded. This paper is structured in the following sections. In sections \ref{sec2}, and \ref{sec3} we introduce the basic formalism and notation to deal with Dirac fields from the point of view of an accelerated observer taking its spin structure into account. 
In section \ref{sec4} we study how the Minkowski vacuum state is expressed by an accelerated observer when the spin of each mode is included in the setting, discussing the implications of the single-mode approximation often carried out in the literature \cite{AlsingSchul,AlsingMcmhMil}. Also, we build the one particle state with spin $s$ in Rindler coordinates and analyze the Unruh effect when the spin structure is included. Here we discuss the necessity of tracing over Rindler's region IV for Rob, since it is causally disconnected from region I in which we consider Rob's location. In section \ref{sec5} we analyze how entanglement of spin Bell states is degraded due to Unruh effect. We show that, even in the limit of $a\rightarrow\infty$, some degree of entanglement is preserved due to Pauli exclusion principle. Then we analyze Unruh effect on a completely different class of maximally entangled states (like $\ket{00}+\ket{ss'}$ where $s$ and $s'$ are $z$ component of spin labels) comparing it with the spin Bell states. In section \ref{sec6} we show that the erasure of spin information, in order to investigate occupation number entanglement alone, requires considering total spin states for the bipartite system. Finally, our results and conclusions are summarized in section \ref{sec7}. \section{The setting}\label{sec2} We consider a free Dirac field in a Minkowski frame expanded in terms of the positive (particle) and the negative (antiparticle) energy solutions of Dirac equation notated $\psi^+_{k,s}$ and $\psi^-_{k,s}$ respectively: \begin{equation}\label{field}\psi=\sum_{s}\int d^3k\, (a_{k,s}\psi^+_{k,s}+b_{k,s}^\dagger\psi^-_{k,s})\end{equation} Here, the subscript $k$ notates momentum which labels the modes of the same energy and $s=\{\uparrow ,\downarrow\}$ is the spin label that indicates spin-up or spin-down along the quantization axis. $a_{k,s}$ and $b_{k,s}$ are respectively the annihilation operators for particles and antiparticles, and satisfy the usual anticommutation relations. For each mode of frequency $k$ and spin $s$ the positive and negative energy modes have the form \begin{equation}\label{eq2}\psi^\pm_{k,s} =\frac{1}{\sqrt{2\pi k_0}}u^\pm_s(\bm k) e^{\pm i(\bm k\cdot\bm x- k_0t)}\end{equation} where $u^\pm_s(\bm k)$ is a spinor satisfying the normalization relations $\pm \bar u^\pm_s(\bm k)u^\pm_{s'}(\bm k)=(k_0/m)\delta_{ss'},\bar u^{\mp}_s(\bm k)u^\pm_{s'}(\bm k)=0$. The modes are classified as particle or antiparticle respect to $\partial_t$ (Minkowski Killing vector directed to the future). The Minkowski vacuum state is defined by the tensor product of each frequency mode vacuum \begin{equation}\label{vacua}\ket0=\bigotimes_{k,k'}\ket{0_k}^+\ket{0_{k'}}^-\end{equation} such that it is annihilated by $a_{k,s}$ and $b_{k,s}$ for all values of $s$. We will use the same notation as reference \cite{AlsingSchul} where the mode label will be a subscript inside the ket, and the absence of subscript outside the ket indicates a Minkowski Fock state. In this way, and as a difference with previous works, we will consider the spin structure for each mode, and hence, the maximum occupation number is two. 
This introduces the following notation \begin{equation}a^\dagger_{k,s}a^\dagger_{k,s'}\ket0=\ket{ss'_k}\delta_{s,{-s'}}\end{equation} If $s=s'$ the two particles state is not allowed due to Pauli exclusion principle, so our allowed Minkowski states for each mode of particle/antiparticle are \begin{equation}\{\ket{0_k}^\pm,\ket{\uparrow_k}^\pm,\ket{\downarrow_k}^\pm,\ket{\uparrow\downarrow_k}^\pm\}.\end{equation} Consider that we have the following Minkowski bipartite state \begin{eqnarray}\label{gen1} \nonumber\ket{\phi_{k_A,k_R}}&=&\mu\ket{0_{k_A}}^+\ket{0_{k_R}}^++\alpha\ket{\uparrow_{k_A}}^+\ket{\uparrow_{k_R}}^++\beta\ket{\uparrow_{k_A}}^+\\* \nonumber&&\times\ket{\downarrow_{k_R}}^++ \gamma\ket{\downarrow_{k_A}}^+\ket{\uparrow_{k_R}}^++\delta\ket{\downarrow_{k_A}}^+\ket{\downarrow_{k_R}}^+\\* \end{eqnarray} with $\mu=\sqrt{1-|\alpha|^2-|\beta|^2-|\gamma|^2-|\delta|^2}$. The subscripts $A,R$ indicate the modes associated with Alice and Rob respectively. All other modes of the field are unoccupied --that is to say that the complete state would be $\ket\Phi=\ket{\phi_{k_A,k_R}}\otimes[\bigotimes_{(k\neq k_A,k_R),k'}\ket{0_k}^+\ket{0_k}^-]$--. This state generalizes the Bell spin states (for instance, we have $\ket{\phi^+}$ choosing $\alpha=\delta=1/\sqrt2$) or a modes entangled state (for instance choosing $\alpha=\mu=1/\sqrt2$). With this state \eqref{gen1} we will be able to deal with two different and interesting problems at once, 1. Studying the Unruh decoherence of spin entangled states and 2. Investigating the impact of considering the spin structure of the fermion on the occupation number entanglement and its Unruh decoherence. Later on, under the single mode approximation, we will assume that Alice is stationary and has a detector sensitive only to the mode $k_A$, and Rob moves with uniform acceleration $a$ taking with him a detector sensitive to the mode $k_R$. \section{Rindler metric and Bogoliubov coefficients for Dirac fields}\label{sec3} An uniformly accelerated observer viewpoint is described by means of the well-known Rindler coordinates \cite{gravitation}. In order to cover the whole Minkowski space-time, two different set of coordinates are necessary. These sets of coordinates define two causally disconnected regions in Rindler space-time. If we consider that the uniform acceleration $a$ lies on the $z$ axis, the new Rindler coordinates $(t,x,y,z)$ as a function of Minkowski coordinates $(\tilde t,\tilde x,\tilde y,\tilde z)$ are \begin{equation}\label{Rindlcoordreg1} a\tilde t=e^{az}\sinh(at),\; a\tilde z=e^{az}\cosh(at),\; \tilde x= x,\; \tilde y= y \end{equation} for region I, and \begin{equation}\label{Rindlcoordreg2} a\tilde t=-e^{az}\sinh(at),\; a\tilde z=-e^{az}\cosh(at),\; \tilde x= x,\; \tilde y= y \end{equation} for region IV. \begin{figure}\label{fig1} \includegraphics[width=.45\textwidth]{rindler} \caption{Rindler space-time diagram: lines of constant position $z=\text{const.}$ are hyperbolae and all the curves of constant proper time $t$ for the accelerated observer are straight lines that come from the origin. An uniformly accelerated observer Rob travels along a hyperbola constrained to region I} \end{figure} As we can see from fig. 1, although we have covered the whole Minkowski space-time with these sets of coordinates, there are two more regions labeled II and III. To map them we would need to switch $\cosh\leftrightarrow\sinh$ in equations \eqref{Rindlcoordreg1},\eqref{Rindlcoordreg2}. 
In these regions $t$ is a spacelike coordinate and $z$ is a timelike coordinate. However, the solutions of Dirac equation in such regions are not required to discuss entanglement between Alice and an accelerated observer since he would be constrained to either region I or IV, having no possible access to the opposite regions as they are causally disconnected \cite{Birrell,gravitation,Alicefalls,AlsingSchul}. The Rindler coordinates $z,t$ go from $-\infty$ to $\infty$ independently in regions I and IV. It means that each region admits a separate quantization procedure with their corresponding positive and negative energy solutions\footnote{Throughout this work we will consider that the spin of each mode is in the acceleration direction and, hence, spin will not undergo Thomas precession due to instant Wigner rotations \cite{AlsingSchul,Jauregui}.} $\{\psi^{I+}_{k,s},\psi^{I-}_{k,s}\}$ and $\{\psi^{IV+}_{k,s},\psi^{IV-}_{k,s}\}$. Particles and antiparticles will be classified with respect to the future-directed timelike Killing vector in each region. In region I the future-directed Killing vector is \begin{equation}\label{KillingI} \partial_t^I=\frac{\partial \tilde t}{\partial t}\partial_{\tilde t}+\frac{\partial\tilde z}{\partial t}\partial_{\tilde z}=a(\tilde z\partial_{\tilde t}+\tilde t\partial_{\tilde z}), \end{equation} whereas in region IV the future-directed Killing vector is $\partial_t^{IV}=-\partial_t^{I}$. This means that solutions in region I, having time dependence $\psi_k^{I+}\sim e^{-ik_0t}$ with $k_0>0$, represent positive energy solutions, whereas solutions in region IV, having time dependence $\psi_k^{I+}\sim e^{-ik_0t}$ with $k_0>0$, are actually negative energy solutions since $\partial^{IV}_t$ points to the opposite direction of $\partial_{\tilde t} $ \cite{AlsingSchul,Birrell}. As I and IV are causally disconnected $\psi^{IV\pm}_{k,s}$ and $\psi^{I\pm}_{k,s}$ only have support in their own regions, vanishing outside them. Let us denote $(c_{I,k,s},c^{\dagger}_{I,k,s})$ the particle annihilation and creation operators in region I and $(d_{I,k,s},d^{\dagger}_{I,k,s})$ the corresponding antiparticle operators. Analogously we define $(c_{IV,k,s},c^{\dagger}_{IV,k,s}, d_{IV,k,s},d_{IV,k,s}^\dagger)$ the particle/antiparticle operators in region IV. These operators satisfy the usual anticommutation relations $\{c_{\text{R},k,s},c^\dagger_{\text{R}',k',s'}\}=\delta_{\text{R}\text{R}'}\delta_{kk'}\delta_{ss'}$ where the subscript R notates the Rindler region of the operator $\text{R}=\{I,IV\}$. All other anticommutators are zero. That includes the anticommutators between operators in different regions of the Rindler space-time. Taking this into account we can expand the Dirac field in Rindler coordinates analogously to \eqref{field}: \begin{eqnarray}\label{fieldri} \nonumber\psi&=&\sum_{s}\int d^3k\, \left(c_{I,k,s}\psi^{I+}_{k,s}+d_{I,k,s}^\dagger\psi^{I-}_{k,s}+c_{IV,k,s}\psi^{IV+}_{k,s}\right.\\* &&\left.+d_{IV,k,s}^\dagger\psi^{IV-}_{k,s}\right).\end{eqnarray} Equations \eqref{field} and \eqref{fieldri} represent the decomposition of the Dirac field in its modes in Minkowski and Rindler coordinates respectively. We can relate Minkowski and Rindler creation and annihilation operators by taking appropriate inner products \cite{Takagi,Jauregui,Birrell,AlsingSchul}. 
The relationship between Minkowski and Rindler particle/antiparticle operators is linear and the coefficients that relate them are called Bogoliubov coefficients: \begin{eqnarray}\label{Bogoliubov} \nonumber a_{k,s}&=&\cos{r}\,c_{I,k,s}-e^{i\phi}\sin r\,d^\dagger_{IV,-k,-s}\\* b^\dagger_{k,s}&=&\cos{r}\,d^\dagger_{IV,k,s}+e^{-i\phi}\sin r\,c_{I,-k,-s} \end{eqnarray} where \begin{equation}\label{defr} \tan r=e^{-\pi \frac{k_0c}{a}} \end{equation} and $\phi$ is a phase factor that will turn out to be irrelevant for our purposes. Notice that as we are working with two spatial-temporal dimensions and with massless Dirac field, the relation between Rindler modes and Minkowski modes is given in \eqref{Bogoliubov}. We will discuss in the conclusions the implications of considering extra dimensions and massive fields, where Minkowski modes are spread over all positive Rindler frequencies \cite{Takagi}. Notice from Bogoliubov transformations \eqref{Bogoliubov} that the Minkowski particle annihilator $a_{k,s}$ transforms into a Rindler particle annihilator of momentum $k$ and spin $s$ in region I and an antiparticle creator of momentum $-k$ and spin $-s$ in region IV (in region IV all the magnitudes that are not invariant under time reversal change). \section{Unruh effect for fermion fields of spin $1/2$}\label{sec4} Now that we have the relationships between the creation and annihilation operators in Minkowski and Rindler coordinates, we can obtain the expression of the Minkowski vacuum state for each mode $\ket {0_k}$ in Rindler coordinates. For notation simplicity, we will drop the $k$ label in operators/states when it does not give any relevant information, but we will continue writing the spin label. It is useful to introduce some notation for our states. We will denote with a subscript outside the kets if the mode state is referred to region I or IV of the Rindler space-time. The absence of subscript outside the ket will denote Minkowski coordinates. The $\pm$ label of particle/antiparticle will be omitted throughout the paper because, for the cases considered, a ket referred to Minkowski space-time or Rindler's region I will always denote particle states and a ket referred to region IV will always notate antiparticle states. Inside the ket we will write the spin state of the modes as follows \begin{equation}\label{notation1} \ket{s}_I=c^\dagger_{Is}\ket{0}_I,\qquad \ket{s}_{IV}=d^\dagger_{IVs}\ket{0}_{IV} \end{equation} which will notate a particle state in region I and an antiparticle state in region IV respectively, both with spin $s$. We will use the following definitions for our kets \begin{eqnarray}\label{notation2} \nonumber \ket{\uparrow\downarrow}_I&=&c^\dagger_{I\uparrow}c^\dagger_{I\downarrow}\ket{0}_I=-c^\dagger_{I\downarrow}c^\dagger_{I\uparrow}\ket{0}_I\\* \ket{\uparrow\downarrow}_{IV}&=&d^\dagger_{IV\uparrow}d^\dagger_{IV\downarrow}\ket{0}_{IV}=-d^\dagger_{IV\downarrow}d^\dagger_{IV\uparrow}\ket{0}_{IV} \end{eqnarray} and, being consistent with the different Rindler regions operators anticommutation relations, \begin{equation}\label{notation3} \nonumber \ket{s}_I\ket{s'}_{IV}=c^\dagger_{Is}d^\dagger_{IVs'}\ket{0}_{I}\ket{0}_{IV}=-d^\dagger_{IVs'}c^\dagger_{Is}\ket{0}_{I}\ket{0}_{IV} \end{equation} \begin{equation}\label{notation4} d^\dagger_{IVs'}\biket{s}{0}=-\biket{s}{s'}. 
\end{equation} Now, it is useful to note that \eqref{Bogoliubov} could be expressed as two-modes squeezing transformation for each $k$ \cite{Alicefalls,AlsingSchul} \begin{equation}\left(\!\begin{array}{c} a_{s,k}\\ b^\dagger_{k,s} \end{array}\!\right)=S\left(\!\begin{array}{c} c_{I,k,s}\\ d^\dagger_{IV,-k,-s} \end{array}\!\right)S^\dagger\end{equation} where \begin{equation}\label{squeez} S=\exp\left[r\left(c_{I,k,s}^\dagger\, d_{IV,-k,-s}e^{-i\phi}+c_{I,k,s}\, d_{IV,-k,-s}^\dagger e^{i\phi}\right)\right] \end{equation} So, analogously to \cite{Alicefalls,AlsingSchul}, it is reasonable to postulate that the Minkowski vacuum is a Rindler two-mode particles/antiparticles squeezed state with opposite spin and momentum states in I and IV. Contrarily to \cite{AlsingSchul}, considering that the modes have spin, occupation number for each $k$ is allowed to be 2, being higher occupation numbers forbidden by Pauli exclusion principle. In the literature the analysis is restricted only to one mode of the Minkowski field, but we can restrict our analysis to some sector of the Minkowski vacuum \eqref{vacua}, defining for the particles sector \begin{equation}\label{revacua} \ket{\tilde 0}=\bigotimes_{k_1,\dots,k_n}\ket{0_k} \end{equation} such that the particle sector of \eqref{vacua} can be written as \mbox{$\ket 0=\ket{\tilde 0}\otimes\bigotimes_{p\neq k_1,\dots,k_n}\ket{0_p}$.} In this fashion we are considering a discrete number $n$ of different modes $k_1,\dots,k_n$, so Minkowski vacuum should be expressed as a squeezed state in Rindler coordinates which is an arbitrary superposition of spins and momenta. This will be useful to discuss what would happen if we relax the single-mode approximation carried out often in the literature and let our detectors have a small mode spread. \begin{eqnarray}\label{vacuumCOMP} \nonumber \ket{\tilde 0}&=&C^0\biket{0}{0}+\sum_{\substack{s_1\\k_1}}C^1_{s_1,k_1} \biket{\tilde 1}{\tilde 1}\\* \nonumber&&+\sum_{\substack{s_1,s_2\\k_1,k_2}}C^2_{s_1,s_2,k_1,k_2}\xi_{s_1,s_2}^{k_1,k_2} \biket{\tilde 2}{\tilde 2}+\dots\\* \nonumber &&+\!\!\!\!\!\sum_{\substack{s_1,\dots,s_n\\k_1,\dots,k_n}}\!\!\!\!C^n_{s_1,\dots,s_n,k_1,\dots,k_n}\xi_{s_1,\dots,s_n}^{k_1,\dots,k_n} \biket{\tilde n}{\tilde n}+\dots\\* \nonumber&&+\!\!\!\!\!\!\!\sum_{\substack{s_1,\dots,s_{2n}\\k_1,\dots,k_{2n}}}\!\!\!\!C^{2n}_{s_1,\dots,s_{2n},k_1,\dots,k_{2n}}\xi_{s_1,\dots,s_{2n}}^{k_1,\dots,k_{2n}} \biket{\widetilde{2n}}{\widetilde{2n}}\\* \end{eqnarray} Where, here, the notation is \begin{equation}\label{notationmod} \biket{\tilde m}{\tilde m}\!=\!\biket{s_1,\!k_1;\dots;\!s_m,\!k_m}{-s_1,\!-k_1;\dots;\!-s_m,\!-k_m} \end{equation} with \begin{equation}\label{notationmod2} \ket{s_1,\!k_1;\dots;\!s_n,\!k_n}_I=c^\dagger_{I,k_n,s_n}\dots c^\dagger_{I,k_1,s_1}\ket{0}_I \end{equation} being \begin{equation}\label{notationmod} \sum_{\substack{s_1,\dots,s_m\\k_1,\dots,k_m}}\bra{\tilde m'}_{IV}\bra{\tilde m'}_I\biket{\tilde m}{\tilde m}\!=(m!)^2 \end{equation} and the symbol $\xi$ is \begin{equation}\label{gi} \xi_{s_1,\dots,s_m}^{k_1,\dots,k_m}\equiv\left\{\begin{array}{l} 0 \; \text{If } s_i=s_j \text{ and } k_i=k_j \quad i\neq j\\[1mm] 1\; \text{Otherwise} \end{array}\right. \end{equation} which imposes Pauli exclusion principle constraints on the state (quantum numbers of fermions cannot coincide). Notice a pair of aspects of this notation for the multimode case. First, in the series in \eqref{vacuumCOMP} all the possible orders of the operators are implicitly written. 
Due to the anticommutation relations of the fermionic operators, terms with different orderings of the creation operators are related, i.e. \begin{eqnarray}\label{ordering} \nonumber\biket{s_1,k_1;s_2,k_2}{-s_1,-k_1;-s_2,-k_2}&=&\\* \biket{s_2,k_2;s_1,k_1}{-s_2,-k_2;-s_1,-k_1} \end{eqnarray} So, without loss of generality, we could choose not to write all the possible orderings of the operators in \eqref{vacuumCOMP}. The difference between taking all the possible orderings of the operators into account and taking only one representant is a factor $m!$ in the constants $C^m$. From \eqref{ordering} we can also see that the coefficients $C^m$ are symmetric with respect to $s_i,k_i$ index permutations. Second, as there are only $n$ different modes $(k_1,\dots,k_n)$, the last summation in equation \eqref{vacuumCOMP} has only $(2n)!$ terms due to \eqref{gi}. These terms are all the different permutations of the creation operators for pairs of opposed spins for each mode. There would be only one summand--instead of $(2n)!$--in the simplified notation where we do not write all the different permutations of the operators but only one representant. It means that, in this simplified notation, the series of terms with $n$ pairs has the same summands as the series with the vacuum state (i.e. only one). Actually, in this notation--i.e. if we count all the different order permutations as only one-- the series with $C^{2n}$ to $C^{n+1}$ has exactly the same number of summands as the series with $C^{0}$ to $C^{n-1}$. To obtain restrictions on the values of the coefficients $C^m$ we demand that the Minkowski vacuum has to be annihilated by the particle annihilator, $a_{k_0,s_0}\ket0=0$. Translating this into Rindler coordinates we have \begin{equation}\label{anhRindler} \left[\cos r\,c_{I,k_0,s_0}-e^{i\phi}\sin r\,d^\dagger_{IV,-k_0,-s_0}\right]\ket0=0 \end{equation} where the vacuum state should be expressed in Rindler coordinates using \eqref{vacuumCOMP}. As the elements \eqref{notationmod} form an orthogonal set, from \eqref{anhRindler} we see that all the terms proportional to different elements of the set should be zero simultaneously, which gives the following conditions on the coefficients \begin{itemize} \item $C^1_{s,k}$ as a function of $C^0$\\[-9mm] \end{itemize} \begin{eqnarray} \label{01} C^1_{\uparrow,k_0}\cos r-C^0e^{i\phi}\sin r&=&0\\* C^1_{\downarrow,k_0}\cos r-C^0e^{i\phi}\sin r&=&0 \label{02} \end{eqnarray} since equations \eqref{01},\eqref{02} should be satisfied $\forall k_0$, we obtain that $C^1_{\uparrow,k}=C^1_{\downarrow,k}=\text{const.}$ since $C^0$ does not depend on $k$ or $s$. We will denote $C^1_{s,k}\equiv C^1$. \begin{itemize} \item $C^2_{s_1,s_2,k_1,k_2}$ as a function of $C^1$\\[-9mm] \end{itemize} \begin{eqnarray} \label{03} C^1e^{i\phi}\sin r- 2C^2_{ss',k,k_0}\cos r&=&0\\* \label{04} C^1e^{i\phi}\sin r- 2C^2_{ss',k_0,k}\cos r&=&0 \end{eqnarray} since equations \eqref{03}, \eqref{04} should be satisfied $\forall k_0$, we obtain that $C^2_{s_1,s_2,k_1,k_2}=C^2$ where $C^2$ does not depend on spins or momenta since $C^1$ does not depend on $k$ or $s$, the only dependence of the coefficients \eqref{vacuumCOMP} with $k_i$ and $s_i$ is given by the Pauli exclusion principle, this dependence comes through function \eqref{gi}. 
In fact it is very easy to show that all the coefficients are independent of $s_i$ and $k_i$ --apart from the Pauli exclusion principle constraint.-- Using the fact that $C^0$ does not depend on $s_i$ and $k_i$ and noticing that by applying the annihilator on the vacuum state and equalling it to zero, we will always obtain the linear relationship between $C^{n}$ and $C^{n-1}$ given below. \begin{itemize} \item $C^m$ as a function of $C^{m-1}$\\[-9mm] \end{itemize} \begin{eqnarray} \label{05} C^{m-1}e^{i\phi}\sin r- m\,C^m\cos r&=&0\\* \label{06} C^{m-1}e^{i\phi}\sin r- m\,C^m\cos r&=&0 \end{eqnarray} We finally obtain that $C^m$ is a constant which can be expressed as a function of $C^0$ as \begin{equation}\label{coeff2} C^n=\frac{C^0}{m!} e^{im\phi}\tan^m r \end{equation} And then, vacuum state \eqref{vacuumCOMP} can be expressed as \begin{eqnarray}\label{vacuumCOMP2} \nonumber \ket{\tilde 0}&=&C^0\biket{0}{0}+C^1\sum_{\substack{s_1\\k_1}} \biket{\tilde 1}{\tilde 1}\\* \nonumber&&+C^2\sum_{\substack{s_1,s_2\\k_1,k_2}}\xi_{s_1,s_2}^{k_1,k_2} \biket{\tilde 2}{\tilde 2}+\dots\\* \nonumber &&+C^n\!\!\!\!\sum_{\substack{s_1,\dots,s_n\\k_1,\dots,k_n}}\!\!\!\!\xi_{s_1,\dots,s_n}^{k_1,\dots,k_n} \biket{\tilde n}{\tilde n}+\dots\\* &&+C^{2n}\!\!\!\!\sum_{\substack{s_1,\dots,s_{2n}\\k_1,\dots,k_{2n}}}\!\!\!\!\xi_{s_1,\dots,s_{2n}}^{k_1,\dots,k_{2n}} \biket{\tilde n}{\tilde n} \end{eqnarray} where the only parameter not fixed yet is $C^0$. We can now fix $C^0$ by imposing the normalization of the Minkowski vacuum in Rindler coordinates $\braket{0}{0}=1$, \cite{Alicefalls,AlsingSchul} this condition can be written as \begin{equation}\label{series1}|C^0|^2\left[\sum_{m=0}^n\Upsilon_m\tan^{2m}r+\sum_{m=n+1}^{2n}\Upsilon_{2n-m}\tan^{2m}r\right]=1\end{equation} Where \begin{equation}\label{upsilon}\Upsilon_m=\sum_{\substack{s_1,\dots,s_m\\k_1,\dots,k_m}}\!\!\!\!\xi_{s_1,\dots,s_m}^{k_1,\dots,k_m}\end{equation} and we have defined $\Upsilon_0\equiv 1$. This expression gives (formally) the value of $C^0$ (except for a global phase) when considering the populated levels in an arbitrary number of modes of the field \begin{equation}\label{series2}C^0=\left[\sum_{m=0}^n\Upsilon_m\tan^{2m}r+\sum_{m=n+1}^{2n}\Upsilon_{2n-m}\tan^{2m}r\right]^{-1/2}\end{equation} We can see that if we take the limit $a\rightarrow0\Rightarrow r\rightarrow 0$ we recover the Minkowski Vacuum. We will come back to the behavior at the limits below. Notice that the state \eqref{vacuumCOMP2} is only normalizable in this discrete limit. This comes about because the Minkowski and Rindler representations are not unitary equivalent (there is not unitary operator connecting the two vacua)\footnote{this is a old-known problem (see reference \cite{Takagi} chapter 2)}. It prevents us from taking the continuous limit in the expression \eqref{vacuumCOMP} but does not invalidate the treatment as equation \eqref{vacuumCOMP2} can be considered as a superposition of a finite number of individual modes which are perfectly well defined \cite{Takagi}. To address the problem of showing how the presence of spin degrees of freedom affects the entanglement between accelerated observers, it is useful to use the single mode approximation \cite{Alsingtelep,AlsingMcmhMil} in the same way that in \cite{AlsingSchul,Alicefalls}. This is valid if we consider Rob's detector so sensitive to a single particle mode in region I that we can approximate the frequency $\omega_A=k_{0_A}$ observed by Alice to be the same frequency observed by Rob, $\omega_A\sim\omega_R$. 
As a consequence, the populated levels we are looking at are in this single frequency \cite{AlsingSchul} (See also discussion in \cite{AlsingMcmhMil}) so we can consider the sums over $k_i$ in \eqref{vacuumCOMP2} just like a sum of only one mode $k=\omega_A$. This is equivalent to restricting the analysis to the sector $\ket{0_k}$ of the complete vacuum \eqref{vacua}. Since the goal of this work is to show the effect of spin degrees of freedom on the entanglement for non-inertial observers, this approximation allows us to compare our results with previous literature on scalar and spinless fermion fields \cite{Alicefalls,AlsingSchul}. We have to notice that since the observer Rob is accelerated, his possible measurements are affected by a Doppler-like effect. Given that the velocity of the observer is $\hat v = \hat a \hat t = \hat t/\hat z=\tanh(a t)$, the Doppler effect will shift the sensitivity peak of the detector. Namely, if at the instant $t=0$ his detector is sharply tuned to a frequency $\omega_D=\omega_A$, to compensate this Doppler effect at some instant $t=\tau$ the detector should be tuned to the frequency $\omega_D=e^{a\tau}\omega_A$. This implies that any detector will eventually become insensitive to the populated levels of the Minkowski field. In order to do the theoretical analysis in this work, we can consider either that his detector can be sharply tuned to the frequency $e^{a\tau}\omega_A$ for each instant, or that Rob has a set of individual detectors each one sharply tuned to the proper frequency for each instant. Carrying out this single-mode approximation, the Minkowski vacuum for a single mode is a Rindler two-mode particles/antiparticles squeezed state with opposite spin states in I and IV. Considering that the modes have spin, occupation number is allowed to be 2 for each $k$, being higher occupation numbers forbidden by Pauli exclusion principle. As a consequence \begin{eqnarray}\label{vacuum0} \nonumber \ket{0}&=&V\biket{0}{0}+A\biket{\uparrow}{\downarrow}+B\biket{\downarrow}{\uparrow}\\* &&+ C\biket{\uparrow\downarrow}{\uparrow\downarrow}. \end{eqnarray} Notice that $V$ is the analogous to $C^0$, $A$ and $B$ are analogous to $C^1_{\uparrow}$ and $C^1_{\downarrow}$ respectively and $C$ is analogous to $C^2$ in the expression \eqref{vacuumCOMP} but considering only one representant of all the 2 possible orders for the pair . To obtain the values of the coefficients $V,A,B,C$ we demand that the Minkowski vacuum has to be annihilated by the particle annihilator, $a_s\ket0=0$. Translating this into Rindler coordinates we have \begin{eqnarray}\label{annihil2} \nonumber 0&=&\left[\cos r\,c_{I,s}-e^{i\phi}\sin r\,d^\dagger_{IV,-s}\right]\left[V\biket{0}{0}\right.\\* &+&\left.A\biket{\uparrow}{\downarrow}+B\biket{\downarrow}{\uparrow}+ C\biket{\uparrow\downarrow}{\uparrow\downarrow}\right] \end{eqnarray} which implies \begin{eqnarray}\label{annihil2} \nonumber 0&=&\cos r\left[A\biket{0}{\downarrow}+B\delta_{s\downarrow}\biket{0}{\uparrow}\right.\\* \nonumber &&+\left.C\left(\delta_{s\uparrow}\biket{\downarrow}{\uparrow\downarrow}-\delta_{s\downarrow}\biket{\uparrow}{\uparrow\downarrow}\right)\right]\\* \nonumber&&-e^{i\phi}\sin r\left[V\biket{0}{-s}-A\delta_{s\downarrow}\biket{\uparrow}{\uparrow\downarrow}\right.\\* &&\left.+B\delta_{s\uparrow}\biket{\downarrow}{\uparrow\downarrow}\right]. 
\end{eqnarray} This equation gives 4 conditions (two for each value of $s$), although only 3 of them are independent \begin{equation} \left.\begin{array}{lcr} A\cos r - Ve^{i\phi}\sin r&=&0\\ C\cos r - Be^{i\phi}\sin r&=&0\\ B\cos r - Ve^{i\phi}\sin r&=&0\\ C\cos r - Ae^{i\phi}\sin r&=&0\\ \end{array}\right\}\Rightarrow \begin{array}{ll}A=B=V e^{i\phi}\tan r\\ C=V e^{2i\phi}\tan^2 r \end{array} \end{equation} To fix $V$ we impose the normalization relation for each field mode $\braket{0}{0}=1\Rightarrow |V|^2=1-|A|^2-|B|^2-|C|^2$, imposing this we finally obtain the values of the vacuum coefficients. \begin{equation}\label{vaccoef} \begin{array}{lcl} V&=&\cos^2 r\\ A&=&e^{i\phi}\sin r\,\cos r \\ B&=&e^{i\phi}\sin r\,\cos r\\ C&=&e^{2i\phi}\sin^2 r\\ \end{array} \end{equation} Notice that comparing this result with expressions \eqref{coeff2} and \eqref{series2}, as we have truncated the series in \eqref{series2}, the value of $V$ will be different from the case when more than one mode is considered --$C^0$ instead of $V$--. If we restrict the series on $m$ to only one mode $n=1$ in \eqref{series1}, we obtain that $C^0\rightarrow1/(1+\tan^2 r)=\cos^2r$ and we get then proper values for $A=B=C^1$ and $C=2! C^2$. Since from \eqref{defr} $a\rightarrow\infty\Rightarrow r\rightarrow \pi/4$, comparing $V$ with \eqref{series2}, we can see that, while under the single-mode approximation the limit of infinite acceleration leads to a finite distribution of the Minkowski vacuum over Rindler states, when considering the multimode Rindler expression for the vacuum state \eqref{vacuumCOMP2} the combined limit of $n\rightarrow\infty$ and $a\rightarrow\infty\Rightarrow r\rightarrow\pi/4$ leads to a complete fading away of the amplitudes over all the Rindler modes as $C_0\rightarrow 0$, which may not be the case for finite $a$. This is beyond the scope of this article but we will discuss in the conclusion that it may have very strong implications on the entanglement of fermionic fields for accelerated observers. So finally, under the single mode approximation, the Minkowski vacuum state in Rindler coordinates is as follows \begin{eqnarray}\label{vacuum} \nonumber \ket{0}&=&\cos^2 r\,\biket{0}{0}+e^{i\phi}\sin r\,\cos r\left(\biket{\uparrow}{\downarrow}\right.\\* &&\left.+\biket{\downarrow}{\uparrow}\right)+e^{2i\phi}\sin^2 r\,\biket{\uparrow\downarrow}{\uparrow\downarrow} \end{eqnarray} Now we have to build the one particle (of spin $s$) state in Rindler coordinates. It can be readily done by applying the Minkowski particle creation operator to the vacuum state $\ket{s}=a^\dagger_s\ket0$, and translating it into Rindler coordinates: \begin{eqnarray}\label{onepart1} \nonumber \ket{s}&=&\left[\cos r\,c^\dagger_{I,s}-e^{-i\phi}\sin r\,d_{IV,-s}\right]\left[\cos^2 r\biket{0}{0}\right.\\* \nonumber&&+e^{i\phi}\sin r\,\cos r\left(\biket{\uparrow}{\downarrow}+\biket{\downarrow}{\uparrow}\right)\\* &&\left.+e^{2i\phi}\sin^2 r\,\biket{\uparrow\downarrow}{\uparrow\downarrow}\right] \end{eqnarray} That means \begin{eqnarray}\label{onepart2} \nonumber\ket\uparrow&=&\cos r \biket{\uparrow}{0}+e^{i\phi}\sin r\biket{\uparrow\downarrow}{\uparrow}\\* \ket\downarrow&=&\cos r \biket{\downarrow}{0}-e^{i\phi}\sin r\biket{\uparrow\downarrow}{\downarrow} \end{eqnarray} The three Minkowski states $\ket0,\ket\uparrow,\ket\downarrow$ correspond to the particle field of mode $k$ observed by Alice. 
However, since Rob is experiencing a uniform acceleration he will not be able to access to field modes in the causally disconnected region IV, hence, Rob must trace over that inaccessible region as it is unobservable. Specifically, when Rob is in region I of Rindler space-time and Alice observes the vacuum state, Rob could only observe a non-pure partial state given by $\rho_R=\operatorname{Tr}_{IV}\left(\ket{0}\bra{0}\right)$ that is \begin{eqnarray}\label{partialvacuum} \nonumber\rho_R&=&\cos^4 r\ket{0}_I\!\!\bra{0}+\sin^2 r\,\cos^2 r\left(\ket{\uparrow}_I\!\!\bra{\uparrow}\right.\\* &&\left.+\ket{\downarrow}_I\!\!\bra{\downarrow}\right)+\sin^4 r \ket{\uparrow\downarrow}_I\!\bra{\uparrow\downarrow} \end{eqnarray} But while Alice would observe the vacuum state of mode $k$, Rob would observe certain statistical distribution of particles. The expected value of Rob's number operator on the Minkowski vacuum state is given by \begin{eqnarray} \nonumber\langle N_R\rangle&=&\ematriz{0}{N_R}{0}=\operatorname{Tr}_{I,IV}\left(N_R\proj{0}{0}\right)=\operatorname{Tr}_{I}\left(N_R\rho_R\right)\\* &=&\operatorname{Tr}_{I}\left[\left(c_{I\uparrow}^\dagger c_{I\uparrow}+c_{I\downarrow}^\dagger c_{I\downarrow}\right)\rho_R\right] \end{eqnarray} Substituting the expression \eqref{partialvacuum} we obtain \begin{equation} \langle N_R\rangle=2\sin^2 r \end{equation} using \eqref{defr} we obtain that \begin{equation}\label{Unruh} \langle N \rangle=2\frac{1}{e^{2\pi\omega c/a}+1}=2\frac{1}{e^{\hslash\omega/K_BT}+1} \end{equation} where $k_B$ is the Boltzmann's constant and \begin{equation} T=\frac{\hslash\, a}{2\pi k_B c} \end{equation} is the Unruh temperature. Equation \eqref{Unruh} is known as the Unruh effect \cite{DaviesUnr,Unruh}, which shows that, for a two-dimensional space-time, an uniformly accelerated observer in region I detects a thermal Fermi-Dirac distribution when he observes the Minkowski vacuum. We obtain a factor 2 contrarily to Ref. \cite{AlsingSchul} due to the degeneracy factor $2S+1$. \section{Spin entanglement with an accelerated partner}\label{sec5} In previous works \cite{Alicefalls,AlsingSchul} it was studied how Unruh decoherence affects occupation number entanglement in bipartite states as \begin{equation}\label{alsingst} \ket{\Psi}=\frac{1}{\sqrt2}(\ket{00}+\ket{11}) \end{equation}where the figures inside the kets represent occupation number of Alice and Rob modes respectively, barring any reference to the spin of the field modes. Here, where we have included the spin structure of each mode in our setting from the very beginning, it is possible to study the effects of acceleration in spin entanglement decoherence, which is different from the mere occupation number entanglement. First of all, we build a general bipartite state that could be somehow analogous to state \eqref{alsingst} studied in \cite{AlsingSchul}, limiting the occupation number to 1 but including the spins of each mode. \begin{eqnarray}\label{genstate} \nonumber\ket{\Psi}&=&\mu\ket{0_A}\ket{0_R}+\alpha\ket{\uparrow_A}\ket{\uparrow_R}+\beta\ket{\uparrow_A}\ket{\downarrow_R}\\* &&+ \gamma\ket{\downarrow_A}\ket{\uparrow_R}+\delta\ket{\downarrow_A}\ket{\downarrow_{R}} \end{eqnarray} with $\mu=\sqrt{1-|\alpha|^2-|\beta|^2-|\gamma|^2-|\delta|^2}$. The subscripts $A,R$ indicate the modes associated with Alice and Rob respectively. 
We will suppress the labels $A,R$ from now on, and we will understand that the first character in a ket or a bra corresponds to Alice and the second to Rob: $\ket{s,s'}=\ket{s_A}\ket{s'_R}$ This general setting \eqref{genstate} allows us to study in this section what happens with spin entanglement under acceleration of Rob and also what happens with the occupation number entanglement when considering sates analogous to \eqref{alsingst} but taking the spin structure into account. It will also allow us to discuss, in section \ref{sec6}, the implications of tracing over spins and study only the entanglement on the occupation number degree of freedom compared with \cite{AlsingSchul}. The density matrix in Minkowski coordinates for the state \eqref{genstate} is \begin{eqnarray}\label{Minkowdens} \nonumber\rho^M&=&\mu^2\proj{0,0}{0,0}+\mu\alpha^*\proj{0,0}{\uparrow,\uparrow}+\mu\beta^*\proj{0,0}{\uparrow,\downarrow}\\* \nonumber &&+\mu\gamma^*\proj{0,0}{\downarrow,\uparrow}+\mu\delta^*\proj{0,0}{\downarrow,\downarrow}+|\alpha|^2\proj{\uparrow,\uparrow}{\uparrow,\uparrow}\\* \nonumber &&+\alpha\beta^*\proj{\uparrow,\uparrow}{\uparrow,\downarrow}+\alpha\gamma^*\proj{\uparrow,\uparrow}{\downarrow,\uparrow}+\alpha\delta^*\proj{\uparrow,\uparrow}{\downarrow,\downarrow}\\* \nonumber&&+|\beta|^2\proj{\uparrow,\downarrow}{\uparrow,\downarrow}+\beta\gamma^*\proj{\uparrow,\downarrow}{\downarrow,\uparrow}+\beta\delta^*\proj{\uparrow,\downarrow}{\downarrow,\downarrow}\\* \nonumber&&+|\gamma|^2\proj{\downarrow,\uparrow}{\downarrow,\uparrow}+\gamma\delta^*\proj{\downarrow,\uparrow}{\downarrow,\downarrow}+|\delta|^2\proj{\downarrow,\downarrow}{\downarrow,\downarrow}\\* &&+ \text{n.d.H.c.} \end{eqnarray} where n.d.H.c means non-diagonal Hermitian conjugate, and represents the Hermitian conjugate only for the non-diagonal elements. Computing the density matrix, taking into account that Rob is constrained to region I of Rindler space-time, requires to rewrite Rob's mode in terms of Rindler modes and to trace over the unobservable Rindler's region IV. In appendix \ref{appen} we compute each term of \eqref{Minkowdens} in Rindler's coordinates and trace over the unobserved region IV. 
Using \eqref{trazapa1}, \eqref{trazapa2}, \eqref{trazapa3} we can easily compute the density matrix for Alice and Rob from \eqref{Minkowdens}, since $\rho_{AR}=\operatorname{Tr}_{IV}\rho^M$, resulting in the long expression \begin{eqnarray}\label{generaldensmat} \nonumber\rho_{AR}&=&\mu^2\Big[\cos^4r\proj{0,0}{0,0}+\sin^2r\cos^2r\left(\proj{0,\uparrow}{0,\uparrow}\right.\\* \nonumber&&\left.+\proj{0,\downarrow}{0,\downarrow}\right)+\sin^4r\proj{0,\uparrow\downarrow}{0,\uparrow\downarrow}\Big]+\mu\cos^3r\\* \nonumber&&\times\Big[\alpha^*\proj{0,0}{\uparrow,\uparrow}+\beta^*\proj{0,0}{\uparrow,\downarrow}+\gamma^*\proj{0,0}{\downarrow,\uparrow}\\* \nonumber&&+\delta^*\proj{0,0}{\downarrow,\downarrow}\Big]+\mu\sin^2r\,\cos r\Big[\alpha^*\proj{0,\downarrow}{\uparrow,\uparrow\downarrow}\\* \nonumber&&-\beta^*\!\proj{0,\uparrow}{\uparrow,\uparrow\downarrow}\!+\!\gamma^*\!\proj{0,\downarrow}{\downarrow,\uparrow\downarrow}\!-\!\delta^*\!\proj{0,\uparrow}{\downarrow,\uparrow\downarrow}\Big]\\* \nonumber&&+\cos^2 r\Big[|\alpha|^2\proj{\uparrow,\uparrow}{\uparrow,\uparrow}+\alpha\beta^*\proj{\uparrow,\uparrow}{\uparrow,\downarrow}+\alpha\gamma^*\\* \nonumber&&\times\proj{\uparrow,\uparrow}{\downarrow,\uparrow}+\alpha\delta^*\proj{\uparrow,\uparrow}{\downarrow,\downarrow}+|\beta|^2\proj{\uparrow,\downarrow}{\uparrow,\downarrow}\\* \nonumber&&+\beta\gamma^*\proj{\uparrow,\downarrow}{\downarrow,\uparrow}+\beta\delta^*\proj{\uparrow,\downarrow}{\downarrow,\downarrow}+|\gamma|^2\proj{\downarrow,\uparrow}{\downarrow,\uparrow}\\* \nonumber&&+\gamma\delta^*\proj{\downarrow,\uparrow}{\downarrow,\downarrow}+|\delta|^2\proj{\downarrow,\downarrow}{\downarrow,\downarrow}\Big]+\sin^2r\\* \nonumber&&\times\Big[\left(|\alpha|^2+|\beta|^2\right)\proj{\uparrow,\uparrow\downarrow}{\uparrow,\uparrow\downarrow}+\left(|\gamma|^2+|\delta|^2\right)\\* \nonumber&&\times\proj{\downarrow,\uparrow\downarrow}{\downarrow,\uparrow\downarrow}+\left(\alpha\gamma^*+\beta\delta^*\right)\proj{\uparrow,\uparrow\downarrow}{\downarrow,\uparrow\downarrow}\Big]\\* &&+\text{n.d.H.c.} \end{eqnarray} Here the notation is the same as in the r.h.s. of \eqref{trazapa1}: $\proj{a,r}{a',r'}=\ket{a_A}\ket{r_R}_I\bra{a'_A}\,{_{I}\!\!\bra{r'_R}}$. Notice that the state, which in Minkowski coordinates is pure, becomes mixed when the observer Rob is accelerated. Equation \eqref{generaldensmat} will be our starting point, from which we will study different entanglement settings and how Unruh decoherence affects them. To begin with, we will compute how acceleration affects the entanglement of spin Bell states, that is, when Alice and Rob share a maximally entangled spin state and Rob accelerates. In Minkowski coordinates this means choosing specific coefficients in \eqref{genstate}; in particular, for the Bell states we should choose \begin{eqnarray} \ket{\phi^\pm}\Rightarrow \alpha=\pm\delta=\frac{1}{\sqrt{2}}\\* \ket{\psi^\pm}\Rightarrow \beta=\pm\gamma=\frac{1}{\sqrt{2}} \end{eqnarray} with all other coefficients equal to zero.
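For instance, writing the state out explicitly, the choice for $\ket{\phi^+}$ gives
\begin{equation*}
\ket{\Psi}=\frac{1}{\sqrt2}\left(\ket{\uparrow_A}\ket{\uparrow_R}+\ket{\downarrow_A}\ket{\downarrow_R}\right),
\end{equation*}
a state with no vacuum component ($\mu=0$) and exactly one particle in each mode.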
For such states in Minkowski coordinates, the density matrix of Alice and Rob, considering that Rob undergoes an acceleration $a$, is obtained from \eqref{generaldensmat}: \begin{eqnarray}\label{Phibell} \nonumber\rho^{\phi^\pm}_{AR}&=&\frac12\Big[\cos^2 r\Big(\proj{\uparrow,\uparrow}{\uparrow,\uparrow}\pm\proj{\uparrow,\uparrow}{\downarrow,\downarrow}\pm\proj{\downarrow,\downarrow}{\uparrow,\uparrow}\\* \nonumber&&+\proj{\downarrow,\downarrow}{\downarrow,\downarrow}\Big)+\sin^2 r\Big(\proj{\uparrow,\uparrow\downarrow}{\uparrow,\uparrow\downarrow}\\* &&+\proj{\downarrow,\uparrow\downarrow}{\downarrow,\uparrow\downarrow}\Big)\Big] \end{eqnarray} \begin{eqnarray}\label{Psibell} \nonumber\rho^{\psi^\pm}_{AR}&=&\frac12\Big[\cos^2 r\Big(\proj{\uparrow,\downarrow}{\uparrow,\downarrow}\pm\proj{\uparrow,\downarrow}{\downarrow,\uparrow}\pm\proj{\downarrow,\uparrow}{\uparrow,\downarrow}\\* \nonumber&&+\proj{\downarrow,\uparrow}{\downarrow,\uparrow}\Big)+\sin^2 r\Big(\proj{\uparrow,\uparrow\downarrow}{\uparrow,\uparrow\downarrow}\\* &&+\proj{\downarrow,\uparrow\downarrow}{\downarrow,\uparrow\downarrow}\Big)\Big] \end{eqnarray} Notice that, in this case, Alice would have a qubit and Rob would have a qutrit, since his mode admits three possible orthogonal states: particle spin-up, particle spin-down, and particle pair. To characterize the entanglement we will use the negativity \cite{Negat} normalized to one (we can multiply it by a constant so that it equals one for a maximally entangled state). Therefore, to have negativity equal to 1 for a Bell state, we define it as twice the sum of all the negative eigenvalues of the partial transpose density matrix, which is obtained by transposing Rob's qutrit: \begin{eqnarray} \nonumber\rho^{\phi^\pm pT}_{AR}\!\!&=&\frac12\Big[\cos^2 r\Big(\proj{\uparrow,\uparrow}{\uparrow,\uparrow}\pm\proj{\uparrow,\downarrow}{\downarrow,\uparrow}\pm\proj{\downarrow,\uparrow}{\uparrow,\downarrow}\\* \nonumber&&+\proj{\downarrow,\downarrow}{\downarrow,\downarrow}\Big)+\sin^2 r\Big(\proj{\uparrow,\uparrow\downarrow}{\uparrow,\uparrow\downarrow}\\* &&+\proj{\downarrow,\uparrow\downarrow}{\downarrow,\uparrow\downarrow}\Big)\Big] \end{eqnarray} \begin{eqnarray} \nonumber\rho^{\psi^\pm pT}_{AR}\!\!&=&\frac12\Big[\cos^2 r\Big(\proj{\uparrow,\downarrow}{\uparrow,\downarrow}\pm\proj{\uparrow,\uparrow}{\downarrow,\downarrow}\pm\proj{\downarrow,\downarrow}{\uparrow,\uparrow}\\* \nonumber&&+\proj{\downarrow,\uparrow}{\downarrow,\uparrow}\Big)+\sin^2 r\Big(\proj{\uparrow,\uparrow\downarrow}{\uparrow,\uparrow\downarrow}\\* &&+\proj{\downarrow,\uparrow\downarrow}{\downarrow,\uparrow\downarrow}\Big)\Big] \end{eqnarray} We can write $\rho^{\phi^\pm pT}_{AR}$ in matrix form in the basis $\left\{\ket{\uparrow,\uparrow},\ket{\uparrow,\downarrow},\ket{\downarrow,\uparrow},\ket{\downarrow,\downarrow},\ket{\uparrow,\uparrow\downarrow},\ket{\downarrow,\uparrow\downarrow}\right\}$ \begin{equation} \frac12\left(\!\begin{array}{cccccc} \cos^2 r & 0 &0 &0 & 0 & 0\\ 0&0 &\pm\cos^2 r &0 & 0 & 0 \\ 0 &\pm\cos^2 r &0 &0 &0 &0 \\ 0 & 0 &0 &\cos^2 r &0 &0 \\ 0 & 0 &0&0& \sin^2 r &0 \\ 0& 0& 0&0&0& \sin^2 r \\ \end{array}\!\right) \end{equation} which has the same expression as $\rho^{\psi^\pm pT}_{AR}$ in the basis $\left\{\ket{\uparrow,\downarrow},\ket{\uparrow,\uparrow},\ket{\downarrow,\downarrow},\ket{\downarrow,\uparrow},\ket{\uparrow,\uparrow\downarrow},\ket{\downarrow,\uparrow\downarrow}\right\}$.
Therefore the four Bell states will have the same eigenvalues, which are \begin{eqnarray} \nonumber\lambda_1=\lambda_2=\lambda_3=\frac12\cos^2 r\\* \lambda_4=\lambda_5=\frac12\sin^2 r\\* \nonumber\lambda_6=-\frac12\cos^2r \end{eqnarray} Since $r=\arctan\left(e^{-\pi\frac{\omega c}{a}}\right)$, we have $a\rightarrow0\Rightarrow r\rightarrow 0$ and $a\rightarrow\infty\Rightarrow r\rightarrow \pi/4$, so that $\lambda_6$ is negative for all values of the acceleration. This implies, using the Peres criterion \cite{PeresCriterion}, that the spin Bell states will always remain entangled, even in the limit of infinite acceleration. We can readily evaluate the entanglement in the limits $a\rightarrow0$ and $a\rightarrow\infty$ if we compute the negativity (normalized to one for maximally entangled states), that is to say \begin{equation}\label{negativity} \mathcal{N}=2\sum_{\lambda_i<0}|\lambda_i| \end{equation} Applied to our states, we obtain \begin{equation} \mathcal{N}(r)=\cos^2 r \end{equation} In the limit $a\rightarrow0$ we obtain $\mathcal{N}=1$, which is expected since $a\rightarrow0$ is the inertial limit. However, in the limit $a\rightarrow\infty$ we obtain $\mathcal{N}=\frac{1}{2}$, which implies that spin entanglement is degraded by the Unruh effect. Fig. 2 shows the negativity as a function of the acceleration of Rob. \begin{figure} \includegraphics[width=.45\textwidth]{mutualbell} \caption{Negativity and mutual information as a function of the acceleration of Rob when $R$ and $A$ share a maximally entangled state in Minkowski coordinates. The red dashed line is the mutual information for all the spin Bell states, the black dotted line is the mutual information for the Minkowski state $\frac{1}{\sqrt2}\left(\ket{00}+\ket{\uparrow\downarrow}\right)$, and the blue solid line is the negativity for both the spin Bell states and $\frac{1}{\sqrt2}\left(\ket{00}+\ket{\uparrow\downarrow}\right)$.}\label{fig2} \end{figure} The mutual information, which accounts for both quantum and classical correlations, is \begin{equation}\label{Imutua} I_{AR}=S_{A}+S_{R}-S_{AR} \end{equation} where $S_{A,R}$ are the von Neumann entropies of the partial states of Alice and Rob and $S_{AR}$ is the entropy of the whole state. For \eqref{Phibell} and \eqref{Psibell}, the partial states of Alice and Rob ($\rho_A=\operatorname{Tr}_R\rho_{AR}$, $\rho_R=\operatorname{Tr}_A\rho_{AR}$) can be expressed in matrix form as \begin{equation} \rho_A=\frac12\left(\!\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\!\right) \end{equation} in the basis $\{\ket{\uparrow},\ket{\downarrow}\}$, \begin{equation} \rho_R=\frac12\left(\!\begin{array}{ccc} \cos^2 r & 0 & 0 \\ 0 & \cos^2 r & 0\\ 0 & 0 & 2\sin^2 r \end{array}\!\right) \end{equation} in the basis $\{\ket{\uparrow},\ket{\downarrow},\ket{\uparrow\downarrow}\}$ for all the Bell states, and \begin{equation} \rho_{AR}=\frac12\left(\!\begin{array}{cccc} \cos^2 r & \pm\cos^2 r &0 & 0 \\ \pm\cos^2 r & \cos^2 r &0 & 0 \\ 0 &0 & \sin^2 r & 0 \\ 0 &0 & 0& \sin^2 r \\ \end{array}\!\right) \end{equation} for $\phi^{\pm}$ in the basis $\{\ket{\uparrow,\uparrow},\ket{\downarrow,\downarrow},\ket{\uparrow,\uparrow\downarrow},\ket{\downarrow,\uparrow\downarrow}\}$, and the same expression for $\psi^{\pm}$ in the basis $\{\ket{\uparrow,\downarrow},\ket{\downarrow,\uparrow},\ket{\uparrow,\uparrow\downarrow},\ket{\downarrow,\uparrow\downarrow}\}$.
The entropies of these states are \begin{eqnarray}\label{Entrop} \nonumber S_A&=&1\\* \nonumber S_R&=&-\cos^2r\,\log_2\left(\frac12\cos^2r\right)-\sin^2 r\,\log_2\left(\sin^2r\right)\\* S_{AR}&=&-\cos^2r\,\log_2\left(\cos^2r\right)-\sin^2r\,\log_2\left(\frac12\sin^2r\right) \end{eqnarray} and the mutual information is \begin{equation}\label{mutual2} I_{AR}=2\cos^2r \end{equation} Again we see that in the limit $a\rightarrow 0$ the mutual information goes to 2, and in the limit of infinite acceleration it goes to 1. The behavior of the mutual information as a function of $a$ is shown in Fig. 2. In \cite{AlsingSchul} it is argued that the Pauli exclusion principle protects the occupation number entanglement from decoherence, so that some degree of entanglement is preserved even in the limit $a\rightarrow\infty$. Here we have obtained a similar result for the spin Bell states, showing that spin entanglement is also degraded, though not completely destroyed, by the Unruh effect. Next, we will study the case in which Alice and Rob share a different class of maximally entangled state. We consider that in Minkowski coordinates we have \begin{equation}\label{minkstate2} \ket{\Psi}=\frac{1}{\sqrt2}\left(\ket{0_A}\ket{0_R}+\ket{\uparrow_A}\ket{\downarrow_R}\right) \end{equation} which is a maximally entangled state that includes occupation number entanglement along with spin. We study this kind of state as a first analog of the state \eqref{alsingst} considered in the previous literature. This state corresponds to the choice \begin{eqnarray}\label{coef22} \beta&=&\mu=\frac{1}{\sqrt2}\\* \alpha&=&\gamma=\delta=0 \end{eqnarray} in equation \eqref{genstate}. The density matrix of such a state, with Rob accelerated, follows from \eqref{generaldensmat}: \begin{eqnarray}\label{minkstate2R} \nonumber\rho&=&\frac12\Big[\cos^4r\proj{0,0}{0,0}+\sin^2r\cos^2r\left(\proj{0,\uparrow}{0,\uparrow}\right.\\* \nonumber&&\left.+\proj{0,\downarrow}{0,\downarrow}\right)+\sin^4r\proj{0,\uparrow\downarrow}{0,\uparrow\downarrow}+\cos^3r\\* \nonumber&&\times\left(\proj{0,0}{\uparrow,\downarrow}+\proj{\uparrow,\downarrow}{0,0}\right)-\sin^2r\,\cos r\\* \nonumber&&\times\left(\proj{0,\uparrow}{\uparrow,\uparrow\downarrow}+\proj{\uparrow,\uparrow\downarrow}{0,\uparrow}\right)+\cos^2r\proj{\uparrow,\downarrow}{\uparrow,\downarrow}\\* &&+\sin^2r\proj{\uparrow,\uparrow\downarrow}{\uparrow,\uparrow\downarrow}\Big] \end{eqnarray} Notice the significant difference from the spin Bell states: considering that Rob accelerates means that, this time, Alice has a qubit and Rob has a qu4it. Hence, the negativity acts only as a measure of distillable entanglement, and does not account for the possible bound entanglement the system may have \cite{HorodeckiBound}. Since in Rindler coordinates the state \eqref{minkstate2R} is qualitatively different from the Minkowski Bell states \eqref{Phibell}, \eqref{Psibell}, it is worthwhile to study its entanglement and mutual information degradation as Rob accelerates.
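As a consistency check (our own verification), the diagonal terms of \eqref{minkstate2R} add up to
\begin{equation*}
\operatorname{Tr}\rho=\frac12\left[\left(\cos^2r+\sin^2r\right)^2+\cos^2r+\sin^2r\right]=1,
\end{equation*}
where the squared sum collects the four vacuum-block entries $\cos^4r+2\sin^2r\cos^2r+\sin^4r$ and the last two terms come from the $\proj{\uparrow,\downarrow}{\uparrow,\downarrow}$ and $\proj{\uparrow,\uparrow\downarrow}{\uparrow,\uparrow\downarrow}$ entries, so the state remains properly normalized for every $r$.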
The partial transpose $\sigma=\rho^{pT}$ of \eqref{minkstate2R} is \begin{eqnarray}\label{minkstate2R2} \nonumber\sigma&=&\frac12\Big[\cos^4r\proj{0,0}{0,0}+\sin^2r\cos^2r\left(\proj{0,\uparrow}{0,\uparrow}\right.\\* \nonumber&&\left.+\proj{0,\downarrow}{0,\downarrow}\right)+\sin^4r\proj{0,\uparrow\downarrow}{0,\uparrow\downarrow}+\cos^3r\\* \nonumber&&\times\left(\proj{0,\downarrow}{\uparrow,0}+\proj{\uparrow,0}{0,\downarrow}\right)-\sin^2r\,\cos r\\* \nonumber&&\times\left(\proj{0,\uparrow\downarrow}{\uparrow,\uparrow}+\proj{\uparrow,\uparrow}{0,\uparrow\downarrow}\right)+\cos^2r\proj{\uparrow,\downarrow}{\uparrow,\downarrow}\\* &&+\sin^2r\proj{\uparrow,\uparrow\downarrow}{\uparrow,\uparrow\downarrow}\Big] \end{eqnarray} which is an $8\times8$ matrix. $\sigma$ is block diagonal, with eigenvalues \begin{eqnarray}\label{eig2} \nonumber\lambda_1&=&\frac12\cos^4r\\* \nonumber\lambda_2&=&\frac12\cos^2r\sin^2r\\* \lambda_3&=&\frac12 \sin^2 r\\* \nonumber\lambda_4&=&\frac12\cos^2r\\* \nonumber\lambda_{5,6}&=&\frac{1}{4}\left(\sin^2r\cos^2r\pm\sqrt{\sin^4r\cos^4r+4\cos^6r}\right)\\* \nonumber\lambda_{7,8}&=&\frac14\left(\sin^4r\pm\sqrt{\sin^8r+4\sin^4r\,\cos^2r}\right) \end{eqnarray} As we can see, $\lambda_8$ is non-positive and $\lambda_6$ is negative for all values of $a$; therefore the state always preserves some degree of distillable entanglement. Calculating the negativity, we obtain \begin{equation} \mathcal{N}(r)=\cos^2r \end{equation} which means that in this case distillable entanglement behaves exactly as in the previous case, and the negativity curve in Fig. 2 is equally valid for this state. Finally, we compute the mutual information of the system, whose partial states are \begin{equation} \rho_A=\frac12\left(\!\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\!\right) \end{equation} in the basis $\{\ket{0},\ket{\uparrow}\}$ and $\rho_R=$ \begin{equation} \frac12\left(\!\begin{array}{cccc} \cos^4 r & 0 & 0 &0\\ 0 & \!\!\!\sin^2r\,\cos^2 r &0 & 0\\ 0 &0 & \!\!\!\!\cos^2r\left(\sin^2 r+1\right) & 0\\ 0 & 0& 0& \!\!\!\!\!\sin^2r\left(\sin^2r+1\right) \end{array}\!\right) \end{equation} in the basis $\{\ket{0},\ket{\uparrow},\ket{\downarrow},\ket{\uparrow\downarrow}\}$. The eigenvalues of the whole-system $6\times6$ matrix $\rho_{AR}$ are \begin{eqnarray} \nonumber \lambda_1&=&\lambda_2=0\\* \nonumber \lambda_3&=&\frac12\sin^2r\cos^2r\\* \nonumber \lambda_4&=&\frac12\sin^4r\\* \nonumber \lambda_5&=&\frac12\cos^2r\left(1+\cos^2r\right)\\* \lambda_6&=&\frac12\sin^2r\left(1+\cos^2r\right) \end{eqnarray} In this case the mutual information as a function of $a$ is not proportional to the negativity; hence it differs from the Bell-state cases \eqref{Phibell}, \eqref{Psibell}. As can be seen in Fig. 2, the values of the mutual information for \eqref{Phibell}, \eqref{Psibell} and \eqref{minkstate2R} coincide in the limits $a\rightarrow0$ and $a\rightarrow\infty$, but differ in between, with $I_{AR}^{\text{SpinBell}}\ge I_{AR}^{\text{ModeBell}}$. To conclude this section, we stress that the same results would be obtained if the state $\ket{\uparrow,\downarrow}$ in \eqref{minkstate2} were replaced by any other one-particle bipartite spin state $\ket{s,s'}$. \section{Occupation number entanglement with an accelerated partner and spin-$1/2$ fermions}\label{sec6} The previous work \cite{AlsingSchul} on occupation number entanglement between accelerated partners ignored the spin structure of the Dirac field modes.
It is not possible to straightforwardly translate a state like \eqref{genstate} into mere occupation number states. This is because, for a state like \eqref{genstate}, the bipartite vacuum component does not have individual spin degrees of freedom as the other components do. In other words, by including the vacuum state in the superposition \eqref{genstate}, the Hilbert space ceases to be factorable into individual spin and particle occupation number subspaces. On the other hand, the bipartite vacuum is a well-defined total spin singlet. Hence, the Hilbert space is factorable with respect to the total spin of the system $A$--$R$ and the occupation number subspaces. Accordingly, to reduce the spin information in the general density matrix \eqref{generaldensmat}, we are forced to consider a factorization of the Hilbert space as the product of total spin and occupation number subspaces. With such a factorization, we can consider that we are not able to access the information on the total spin of the system $A$--$R$, and we should then trace over the total spin degree of freedom. The equivalence between the standard basis (occupation number--individual spin) and the new basis (occupation number--total spin) is given\footnote{The pair state in the same mode can only be a total spin singlet due to the anticommutation relations of fermionic fields.} in equations \eqref{e1} and \eqref{e2} \begin{equation}\label{e1} \begin{array}{cc} \ket{0,0}=\ket{00}\ket{S}& \ket{0,\uparrow\downarrow}=\ket{02}\ket{S}\\*[1.5mm] \ket{0,\uparrow}=\ket{01}\ket{D_+}& \ket{0,\downarrow}=\ket{01}\ket{D_-}\\*[1.5mm] \ket{\uparrow,0}=\ket{10}\ket{D_+}&\ket{\downarrow,0}=\ket{10}\ket{D_-}\\*[1.5mm] \ket{\uparrow,\uparrow\downarrow}=\ket{12}\ket{D_+}&\ket{\downarrow,\uparrow\downarrow}=\ket{12}\ket{D_-}\\*[1.5mm] \ket{\uparrow,\uparrow}=\ket{11}\ket{T_+}&\ket{\downarrow,\downarrow}=\ket{11}\ket{T_-}\\*[1.5mm] \end{array} \end{equation} \begin{equation}\label{e2} \begin{array}{c} \ket{\uparrow,\downarrow}=\frac{1}{\sqrt{2}}\ket{11}\left[\ket{T_0}+\ket{S}\right]\\*[1.5mm] \ket{\downarrow,\uparrow}=\frac{1}{\sqrt{2}}\ket{11}\left[\ket{T_0}-\ket{S}\right] \end{array} \end{equation} where we are using the basis $\ket{n_a\,n_b}\ket{J,J_z}$ and the triplets, doublets and the singlet are denoted as \begin{eqnarray} \nonumber\ket{T_+}&=&\ket{J=1,J_z=1}\\* \nonumber\ket{T_-}&=&\ket{J=1,J_z=-1}\\* \nonumber\ket{T_0}&=&\ket{J=1,J_z=0}\\* \nonumber\ket{D_+}&=&\ket{J=1/2,J_z=1/2}\\* \nonumber\ket{D_-}&=&\ket{J=1/2,J_z=-1/2}\\* \nonumber\ket{S}&=&\ket{J=0,J_z=0} \end{eqnarray} If we rewrite the general state \eqref{genstate} in this basis we obtain \begin{eqnarray}\label{genstateb2} \nonumber\ket{\Psi}&=&\mu\ket{00}\ket{S}+\alpha\ket{11}\ket{T_+}+\frac{\beta+\gamma}{\sqrt2}\ket{11}\ket{T_0}\\* &&+ \frac{\beta-\gamma}{\sqrt2}\ket{11}\ket{S}+\delta\ket{11}\ket{T_-} \end{eqnarray} The general state when Rob is accelerated, \eqref{generaldensmat}, reduced by tracing over the total spin degree of freedom, is \begin{equation} \rho^n_{AR}=\sum_{J,J_z}\bra{J,J_z}\rho_{AR}\ket{J,J_z} \end{equation} This results in a state in the occupation number basis whose entanglement decoherence can be studied and compared with the results of reference \cite{AlsingSchul}, in which spin is ignored: \begin{eqnarray}\label{densityred} \nonumber\rho^n_{AR}&=&\mu^2\Big[\cos^4r\proj{00}{00}+2\sin^2r\,\cos^2r\proj{01}{01}\\*
\nonumber&&+\sin^4r\proj{02}{02}\Big]+\mu\cos^3r\left(\frac{\beta^*-\gamma^*}{\sqrt2}\proj{00}{11}\right.\\* \nonumber&&\left.+\frac{\beta-\gamma}{\sqrt2}\proj{11}{00}\right)+(1-\mu^2)\Big[\cos^2r\proj{11}{11}\\* &&+\sin^2r\proj{12}{12}\Big] \end{eqnarray} We can readily compute the partial transpose $\sigma^n=(\rho^n_{AR})^{pT}$ \begin{eqnarray}\label{ptrans} \nonumber\sigma^n&=&\mu^2\Big[\cos^4r\proj{00}{00}+2\sin^2r\,\cos^2r\proj{01}{01}\\* \nonumber&&+\sin^4r\proj{02}{02}\Big]+\mu\cos^3r\left(\frac{\beta^*-\gamma^*}{\sqrt2}\proj{01}{10}\right.\\* \nonumber&&\left.+\frac{\beta-\gamma}{\sqrt2}\proj{10}{01}\right)+(1-\mu^2)\Big[\cos^2r\proj{11}{11}\\* &&+\sin^2r\proj{12}{12}\Big] \end{eqnarray} whose eigenvalues are \begin{eqnarray} \nonumber\lambda_1&=&\mu^2\cos^4r\\* \nonumber\lambda_2&=&\mu^2\sin^4r\\* \nonumber\lambda_3&=&(1-\mu^2)\cos^2r\\* \nonumber\lambda_4&=&(1-\mu^2)\sin^2r\\* \nonumber\lambda_{5,6}&\!=&\!\cos^2r\left(\mu^2\sin^2r\pm\mu\sqrt{\mu^2\sin^4r+\cos^2r\frac{|\beta-\gamma|^2}{2}}\right) \end{eqnarray} All the eigenvalues are non-negative except $\lambda_6\le0$. The negativity \eqref{negativity} is, in this case, \begin{equation}\label{negativitymode} \mathcal{N}=2\cos^2r\left|\mu^2\sin^2r-\mu\sqrt{\mu^2\sin^4r+\cos^2r\frac{|\beta-\gamma|^2}{2}}\right| \end{equation} which depends on the singlet proportion \mbox{$|\beta-\gamma|/\sqrt2$} of the $\ket{11}$ component in the state \eqref{genstate}. When there is no singlet component ($\beta=\gamma$) the negativity is zero. Indeed, in the limit $a\rightarrow0$ (the Minkowskian limit), \begin{equation}\label{negativitymode0} \mathcal{N}_0=\sqrt2\left|\mu\right|\left|\beta-\gamma\right| \end{equation} This shows that the maximally entangled Minkowski occupation number state (negativity $=1$) arises after tracing over total spin when the starting state is \begin{equation}\label{modesmaxen} \ket{\Psi}=\frac{1}{\sqrt2}\ket{0,0}\pm\frac12\big[\ket{\uparrow,\downarrow}-\ket{\downarrow,\uparrow}\big] \end{equation} or, in the occupation number--total spin basis, \begin{equation}\label{modesmaxen2} \ket{\Psi}=\frac{1}{\sqrt2}\big[\ket{00}\ket{S}\pm\ket{11}\ket{S}\big] \end{equation} This means that, for occupation number entanglement, the only way to have an entangled state of the bipartite vacuum $\ket{00}$ and the one-particle state $\ket{11}$ of a Dirac field is through the total spin singlet component of $\ket{11}$. By contrast, the state \begin{equation} \ket{\Psi}=\frac{1}{\sqrt2}\big[\ket{00}\ket{S}\pm\ket{11}\ket{T_{0,\pm}}\big] \end{equation} becomes separable after tracing over total spin, due to the orthonormality of the basis \eqref{e1}, \eqref{e2}. We have established that the Minkowski maximally entangled state for occupation number arises after tracing over total spin in a state like \eqref{modesmaxen}. We now compute the limit of the negativity as the acceleration goes to $\infty$, in order to quantify its Unruh decoherence and compare it with the occupation number entanglement results of \cite{AlsingSchul}.
Taking $a\rightarrow\infty$ (i.e. $r\rightarrow\pi/4$) in \eqref{negativitymode} gives \begin{equation}\label{negativitymodeinfty} \mathcal{N}_\infty=\frac12\left|\mu^2-\sqrt{\mu^4+\mu^2|\beta-\gamma|^2}\right| \end{equation} Therefore, for the maximally entangled Minkowski state we have $\mu=1/\sqrt{2}$, $|\beta-\gamma|=1$, and the negativity in this limit is \begin{equation}\label{negalimi} \mathcal{N}_\infty=\frac{\sqrt3-1}{4} \end{equation} This result shows that when we trace out the total spin information and look at the occupation number entanglement alone, it is more degraded by the Unruh effect than the spin Bell states considered in the previous section. More importantly, the occupation number entanglement is more degraded than in \cite{AlsingSchul}, where the spin structure of the modes was ignored. This happens because, once the spin structure of each mode is considered, occupation number 2 is allowed. Hence, the protection of the entanglement by the Pauli exclusion principle is weaker than in \cite{AlsingSchul}, where spin is not considered. The negativity dependence on the acceleration is shown in Fig. 3. \begin{figure} \includegraphics[width=.45\textwidth]{mutumode} \caption{Negativity (blue solid line) and mutual information (red dashed line) as a function of the acceleration of Rob when $R$ and $A$ share an occupation number maximally entangled state \eqref{modesmaxen} in Minkowski coordinates, after tracing over the total spin.}\label{fig5} \end{figure} We can also compute the mutual information for the state \eqref{densityred}, as we did in the previous cases. Its analytical expression is quite long and of no special interest, but the dependence of $I_{AR}$ on the acceleration for the Minkowski maximally entangled state \eqref{modesmaxen} is shown in Fig. 3, with $I_{AR}^0=2$ and $I_{AR}^\infty=1/2$. \section{Conclusions and comments}\label{sec7} It is known \cite{Alicefalls,AlsingSchul} that Unruh decoherence degrades entanglement of occupation number states of fields. Here we have shown the richer phenomenology that appears when we take into account that each Dirac mode has spin structure. This fact enables us to study interesting effects (such as Unruh decoherence for spin Bell states) and to develop new procedures to erase spin information from the system in order to study occupation number entanglement. Throughout this work we have analyzed how a maximally entangled spin Bell state loses entanglement when one of the partners accelerates. We have seen that, while in Minkowski coordinates Alice and Rob have qubits, when Rob accelerates the system becomes a mixed state of a qubit for Alice and a qutrit for Rob. In this case the spin entanglement of the Dirac field is degraded as Rob accelerates; however, some degree of entanglement survives even in the limit $a\rightarrow\infty$. A first analog of the well-studied state $(1/\sqrt{2})(\ket{00}+\ket{11})$, but including spin, could be, for instance, $(1/\sqrt{2})(\ket{00}+\ket{\uparrow\downarrow})$. This state, unlike the deceptively similar spin Bell states, becomes a qubit$\times$qu4it system when Rob accelerates. Nevertheless, its distillable entanglement degrades in the same way as for the spin Bell states. We have also introduced a procedure to consistently erase spin information from our setting while preserving the occupation number information, by tracing over the total spin. The maximally entangled occupation number state is obtained from the total spin singlet \eqref{modesmaxen} after tracing over total spin.
Finally, we have shown that its entanglement and mutual information are more degraded than in \cite{AlsingSchul}, where the spin structure of the Dirac modes was neglected. A reasonable physical argument for this result is that, in our setting, occupation number 2 is allowed for the Dirac field modes, and hence there is a broader margin for entanglement degradation by the Unruh effect. The thermal noise \eqref{Unruh} is obtained when dealing with a two-dimensional space-time and massless fields. A mass gap and transverse degrees of freedom modify the counting statistics, which is no longer given by thermal noise but replaced by the so-called Rindler noise \cite{Takagi}, which depends on the space-time dimension. In this work we were concerned with the specific issues associated with the spin degree of freedom, so the restriction to massless fields in a two-dimensional space-time adopted here, as well as the single-mode approximation, allows a direct comparison with the previous works \cite{Alicefalls,AlsingSchul}, which considered massless spinless fields in 2D under the same approximations. As a matter of fact, having more than two space-time dimensions or massive fields may introduce relevant effects. Allowing momentum components off the acceleration direction, or massive fields, we would obtain a spread of Minkowski modes over Rindler frequencies \cite{Takagi}. Under the single-mode approximation, the spread of Minkowski modes into Rindler modes can be neglected even in higher dimensions \cite{AlsingSchul,Alsingtelep,AlsingMcmhMil}; but if we want to relax this approximation (as in the discussion of the next paragraph), those effects should be taken into account when describing entanglement in non-inertial frames. It would be worthwhile to study them in future articles. Another very interesting point that deserves further study is the fact that, when we consider more than one populated mode of the complete Minkowski vacuum \eqref{vacua} instead of the single-mode approximation, the margin for Unruh degradation increases, since in principle a larger number of levels can be excited by the thermal/Rindler noise. One could think that these cases would be quite similar to the bosonic case \cite{Alicefalls}, where the margin for Unruh decoherence is so broad that no entanglement survives in the limit $a\rightarrow\infty$. The same would apply if we relax the single-mode approximation, allowing a small spread in both Rob's detector and the populated levels, so that we consider a continuum of accessible levels. These topics will be the subject of future work. \section{Acknowledgements} The authors thank C. Sabin and J. Garcia-Ripoll for useful discussions during the elaboration of this paper. This work has been partially supported by the Spanish MCIN Project FIS2008-05705/FIS. E. M-M is partially supported by the CSIC JAE-PREDOC2007 Grant. We are indebted to an anonymous referee, who pointed out very interesting questions that helped us substantially improve this article.
1,108,101,564,107
arxiv
\section{Introduction} \begin{figure}[t!] \centering \includegraphics[width=0.4\textwidth]{Figures/regression.pdf} \caption{Illustration of regression errors when upgrading from BERT \citep{devlin-etal-2019-bert} to ELECTRA \citep{Clark2020ELECTRA:} for classification. Red circles and green squares denote examples of different classes. Dashed lines represent decision boundaries. } \label{fig:regression} \end{figure} In order to achieve smooth continuous improvement of NLP applications, it is critical to guarantee consistent operation of the system after an upgrade. New errors introduced during a model upgrade interfere with the existing user experience and are considered a \emph{regression} in quality. Due to the difficulty of modularizing or explaining the behavior of deep neural networks, traditional software regression tests are inapplicable to neural-based systems. The cost of arduous error analysis and model patching often exceeds the benefits of model upgrades. Developing methods that ensure backward compatibility during model upgrades without compromising performance has therefore become a valuable research direction \citep{yan2021positive,work-in-progress,trauble2021backward,cai2022measuring}. The \textit{prediction backward-compatible model upgrade} problem aims to improve the consistency of correct classification predictions between legacy and upgraded models without accuracy loss. \citet{yan2021positive} first studied backward compatibility during model upgrade on image classification tasks. They proposed to enforce the positive congruence of the new model with the old one by applying a knowledge distillation \citep{Hinton2015DistillingTK} objective with re-weighting of training samples. Later, \citet{work-in-progress} extended the work of \citet{yan2021positive} by investigating backward compatibility in NLP classification tasks. They found that their proposed distillation-based approach can help decrease regression errors on specific linguistic phenomena in NLP classification tasks. Despite progress with both distillation- and ensemble-based regression-mitigation approaches, there are limitations that prevent their broad practical adoption in ML operations. Distillation-based methods attempt to transfer the prediction power of the old model to the new one on potential regression instances \citep{Hinton2015DistillingTK}. However, given the huge complexity of current neural architectures and the relatively scarce training data in downstream tasks, models could have insufficient data to reliably estimate the probable regression cases and carry out the transfer on them \citep{work-in-progress,cai2022measuring}. On the other hand, model ensembles aggregate predictions from differently-trained new models but bear no connection with the legacy version \citep{yan2021positive}. These limitations reveal the two major challenges in ensuring backward compatibility. First, the new model could have a distinct inductive bias and prediction behavior compared to the old system, rooted in inherent differences such as architecture, model size, and pretraining procedure \citep{liu2021self}. Second, during new model training, a reliable mechanism needs to be in place to bridge the gap between the two models and mitigate potential inconsistencies.
Inspired by the strengths and weaknesses of prior approaches, we propose \emph{Gated Fusion} to integrate the old and new models via a gating mechanism \citep{hochreiter1997long,chung2014empirical,gu2016incorporating}, essentially a light-weight ensemble of the legacy and upgraded models connected through a learned fusion gate. Specifically, we add a \textit{learned} gate on top of the new model and combine the logits from the old and new models according to the weight produced by the gate. We train our Gated Fusion model by minimizing the standard cross-entropy error. The intuition is that the gate can learn to put more weight on the old model when the new model cannot produce correct predictions, effectively performing fall-backs that optimize backward compatibility. Empirical results demonstrate that our proposed approach outperforms competing methods significantly, obtaining on average a $62\%$ reduction of total negative flips, i.e. new errors caused by the model upgrade, without any degradation in accuracy. The effectiveness of Gated Fusion is validated across three diverse classification tasks and two distinct model upgrade scenarios: (a) upgrade to a larger model size and (b) upgrade to a distinct pretrained model, with consistent results attained across the board. Our main contributions are as follows: \begin{itemize}[itemsep=4pt,topsep=4pt,parsep=0pt,partopsep=0pt] \item We propose Gated Fusion, which integrates old and new models via a gating mechanism for backward-compatible model upgrade; \item We evaluate competing methods on two distinct and challenging model upgrade scenarios across three diverse classification tasks; \item Empirical results show that our proposed approach significantly outperforms competing methods and reduces regressions by a large margin across the board. \end{itemize} \section{The Backward-Compatible Model Upgrade Problem} \begin{figure*}[t!] \centering \includegraphics[width=0.9\textwidth]{Figures/method.pdf} \caption{Methods to improve prediction backward compatibility during model upgrade. (a) Distillation-based approach to align predicted logits on potential regression instances \citep{work-in-progress}. (b) Ensemble of old and new models via a weighted sum of either predicted logits or probabilities. (c) Our proposed Gated Fusion, which learns a gate as a soft switch to dynamically determine whether to fall back to previous predictions.} \label{fig:method} \end{figure*} The goal of backward-compatible model upgrade is to minimize regression errors without compromising accuracy during the model upgrade \citep{yan2021positive,work-in-progress}. In this work, we aim to improve the backward compatibility of model predictions in NLP classification tasks. Following \citet{work-in-progress}, we study the scenario where the underlying pretrained language model (LM) is being upgraded. Let $x$ be a natural language input with a class label $y \in \{1, 2, ..., C\}$. $\mathcal{D}=\{x_i, y_i\}_{i=1}^{N}$ denotes a set of $N$ examples with corresponding labels. A classifier $f$ estimates the class probabilities given the input: $\vec{f}(x) = (p(y=1|x), ..., p(y=C|x))^{\top}$. When upgrading from an \emph{old} model $f_{old}$ to a \emph{new} model $f_{new}$, normally with distinct architectures and trained on the same data, an improved model $f^{*}$ is produced based on $f_{old}$ and $f_{new}$.
Our goal is for $f^{*}$ to minimize \emph{regression errors} as an additional objective, while still achieving performance comparable to $f^{o}_{new}$, the new model trained in the vanilla setting. Note that $f^{*}$ could be multiple times larger than $f^{o}_{new}$, with a model ensemble of $f^{o}_{new}$ as one example \citep{yan2021positive}. \vspace{-0.05cm} \paragraph{Measuring Backward Compatibility.} Backward compatibility is measured by quantifying regression errors on a given regression measurement set $\mathcal{D}_{reg}=\{x_i, y_i\}_{i=1}^{M}$. $\mathcal{D}_{reg}$ could be a hidden customer test set comprising critical use cases, a set of behavioral testing examples for targeted evaluation \citep{ribeiro-etal-2020-beyond}, or the development split of the dataset of interest. In this work, we take the development set as our $\mathcal{D}_{reg}$ for evaluation. For classification, regression errors are characterized by \emph{negative flips}, denoted as $\mathcal{R}_{NF}$ -- the portion of samples in $\mathcal{D}_{reg}$ that flip from a correct prediction $f_{old}(x_i)=y_i$ to an incorrect output $f_{new}(x_i)\neq y_i$ during model upgrade: \begingroup\makeatletter\def\f@size{10}\check@mathfonts \begin{equation} \mathcal{R}_{NF}(\mathcal{D}_{reg}, \vec{f}_{old}, \vec{f}_{new}) =\frac{|\{x\,|\,f_{old}(x)=y,\, f_{new}(x)\neq y\}|} {|\mathcal{D}_{reg}|}. \end{equation} \endgroup One thing to emphasize is that maximizing classifier performance does not necessarily help in minimizing $\mathcal{R}_{NF}$\xspace \citep{yan2021positive,work-in-progress}. \section{Gated Fusion: Methodology} \subsection{Method Overview} \label{model} To improve backward compatibility in model upgrade, it is crucial to have a mechanism that detects potential regression errors and mitigates them when making predictions. We propose Gated Fusion (GF) to achieve this by learning a gate that acts as a soft switch between generating predictions with the new model and falling back to the outputs of the old model. Gated Fusion is inspired by the gating mechanism widely used in other applications, for example to mix a word-copying mode with a word-generation mode in language modeling \citep{merity2016pointer} and summarization \citep{see2017get}. Our proposed Gated Fusion $f^{*}_{GF}$ consists of the old model $f_{old}$, the new model $f_{new}$, and a gating network $g_{\theta}$. The old model $f_{old}$ is the legacy version before the upgrade, whose parameters are kept fixed. The new model $f_{new}$ has the same architecture as $f^{o}_{new}$ and is randomly initialized. The gating network $g_{\theta}$ is a multi-layer feed-forward network with a sigmoid output. It produces a scalar weight $\alpha_{gate}$ in the range $[0, 1]$ from the output layer of $f_{new}$, denoted as $\mathcal{E}_{new}$: \begingroup\makeatletter\def\f@size{10}\check@mathfonts \begin{equation} \alpha_{gate}(x) = g_{\theta}(\mathcal{E}_{new}(x)). \end{equation} \endgroup \noindent We use $\alpha_{gate}$ to combine the logits of the old and new models as our final outputs: \begingroup\makeatletter\def\f@size{10}\check@mathfonts \begin{equation} l^{*}_{GF}(y|x) = ( 1-\alpha_{gate}) \cdot \frac{l_{old}(y|x)}{T} + \alpha_{gate} \cdot l_{new}(y|x), \end{equation} \endgroup where $l(y|x)$ denotes the logits predicted by the models and $T$ is a temperature scaling that regularizes the magnitude of the old model's logits. $f_{new}$ and $g_{\theta}$ are then jointly trained end-to-end with a cross-entropy loss between our output logits $l^{*}_{GF}(y|x)$ and the label distributions on downstream tasks.
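To make the computation concrete, the following PyTorch-style sketch implements the fusion described by the equations above. It is a minimal illustration rather than our exact implementation: it assumes each backbone returns a pair of classification logits and a pooled output embedding, and names such as \texttt{hidden\_size} are placeholders. The \texttt{detach} call anticipates the stop-gradient trick described in the next subsection.
\begin{verbatim}
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    # f_old: frozen legacy model; f_new: new model trained from scratch.
    # Both are assumed to return (logits, pooled_embedding).
    def __init__(self, f_old, f_new, hidden_size, gate_dim=64, T=1.0):
        super().__init__()
        self.f_old, self.f_new, self.T = f_old, f_new, T
        for p in self.f_old.parameters():  # legacy weights stay fixed
            p.requires_grad = False
        self.gate = nn.Sequential(         # g_theta -> scalar in [0, 1]
            nn.Linear(hidden_size, gate_dim), nn.ReLU(),
            nn.Linear(gate_dim, 1), nn.Sigmoid())

    def forward(self, inputs):
        with torch.no_grad():
            l_old, _ = self.f_old(inputs)
        l_new, e_new = self.f_new(inputs)
        alpha = self.gate(e_new.detach())  # stop-gradient into f_new
        return (1.0 - alpha) * l_old / self.T + alpha * l_new
\end{verbatim}
Training then simply minimizes the standard cross-entropy between these fused logits and the gold labels.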
The intuition behind Gated Fusion is that when $f_{new}$ makes a mistake while $f_{old}$ produces the correct output, the gate $g_{\theta}$ will learn to put more weight on $f_{old}$ in order to minimize the final classification loss. This process effectively mitigates potential negative flips introduced by the model upgrade and thus improves the backward compatibility of the final predictions. \subsection{Training and Inference} In practice, training Gated Fusion with a randomly initialized $f_{new}$ would make the shallow gating network quickly converge to favor the fully-trained $f_{old}$. To prevent this, we train only $f_{new}$ for the first few epochs to ensure its competence before jointly training $g_{\theta}$ and $f_{new}$ using $l^{*}_{GF}(x)$. In addition, we found that stopping the gradient flow from $g_{\theta}$ to $f_{new}$ further prevents performance degradation of the new model within Gated Fusion: \begingroup\makeatletter\def\f@size{10}\check@mathfonts \begin{equation} \alpha_{gate}(x) = g_{\theta}(\mathit{stop\_grad}(\mathcal{E}_{new}(x))). \end{equation} \endgroup At inference time, Gated Fusion produces logits from $f_{old}$ and $f_{new}$ as well as the gate value $\alpha_{gate}$ to make output predictions: \begingroup\makeatletter\def\f@size{10}\check@mathfonts \begin{equation} f^{*}_{GF}(x) = \mathit{Softmax}\Big( (1-\alpha_{gate}) \cdot \frac{l_{old}}{T} + \alpha_{gate} \cdot l_{new}\Big). \end{equation} \endgroup \subsection{Inference with Cache}\label{cache} Our proposed Gated Fusion requires $f_{old}$ to be hosted together with the new model. In reality, one could face a resource-constrained setting that requires the old model to be discarded at inference. We note that in real applications, repetitive inputs are commonly seen in live traffic \citep{batrinca2015social}, and the backward compatibility of a model upgrade entails that correct predictions be preserved on the legacy instances already seen and predicted by the old model. To simulate real scenarios, we randomly cache the old model's logits on a portion of test inputs. When receiving out-of-cache instances, we use the new model's output embedding $\mathcal{E}_{new}(x)$ as the key and Euclidean distance as the metric to search for the nearest cached instance. The cached old-model logits can then be used by Gated Fusion to make predictions without hosting $f_{old}$ at inference. \section{Experiments Setup} \subsection{Model Upgrade Scenarios} We conduct experiments on two representative model upgrade scenarios: (a) upgrade to a larger pretrained model of the same type, for which we use \emph{\bert{base} to \bert{large}}; (b) upgrade to a distinct pretrained model of the same size, for which we use \emph{\bert{base} to \electra{base}} \citep{Clark2020ELECTRA:}. The latter is a challenging upgrade scenario for backward compatibility, as the two models are pretrained under different self-supervised learning paradigms: BERT uses masked language modeling (MLM) with a reconstruction loss, while ELECTRA is pretrained in a generative-contrastive (adversarial) fashion with a distributional divergence as the loss \citep{liu2021self}. \subsection{Datasets and Implementation} We evaluate our approach across three datasets. They represent different sentence-level classification tasks, from single-sentence to sentence-pair classification, with varying dataset sizes. We use: (a) Stanford Sentiment Treebank (SST-2), a single-sentence task to classify movie review sentiment, with $67$k train and $0.9$k dev examples \citep{socher2013recursive}.
(b) Microsoft Research Paraphrase Corpus (MRPC) \citep{dolan2005automatically}, a sentence-pair classification task for identifying paraphrases, with $3.7$k train and $0.4$k dev examples. (c) Question Natural Language Inference (QNLI), a question-paragraph pair task to determine whether the paragraph contains the answer to the question, with $100$k train and $5.5$k dev examples. Datasets are taken from the GLUE Benchmark \citep{wang2018glue} and processed with scripts from Hugging Face\footnote{\scriptsize \url{https://huggingface.co/datasets/glue}}. For the implementation, we use the sequence classification and pre-trained model parameters from Hugging Face Transformers\footnote{\scriptsize \url{https://huggingface.co/docs/transformers/index}}. Experiments are done in PyTorch \citep{NEURIPS2019_9015} with Tesla V100 GPUs, and results are averaged over $5$ random seeds. Learning rate, batch size, and the number of training epochs are tuned while training the new model alone on each task and then fixed for all backward-compatible solutions. In Gated Fusion, we first train $f_{new}$ alone for the first $(N - 1)$ epochs and then jointly train $g_{\theta}$ and $f_{new}$ with the Gated Fusion logits $l^{*}_{GF}$ in the last epoch. Further implementation details can be found in the Appendix. \subsection{Baselines} We compare our approach with several strong baselines: (a) training the new model directly on the target task without any adjustment, i.e. $f^{o}_{new}$; (b) the specialized distillation method proposed in \citet{work-in-progress}, where the KL-divergence between the prediction probabilities of the old and new models is applied when $p_{old}(y=y_i|x_i) > p_{new}(y=y_i|x_i)$; (c) model ensemble via majority voting, which was shown to be very effective \citep{yan2021positive,work-in-progress}; following this, we use a $5$-seed new-model ensemble as a strong baseline; (d) the ensemble of the old and new models' probabilities, $p^{*}(y|x)=(1-\alpha) \cdot p_{old}(y|x) + \alpha \cdot p_{new}(y|x)$, as well as the ensemble of the old and new models' logits, $l^{*}(y|x)=(1-\alpha) \cdot l_{old}(y|x) + \alpha \cdot l_{new}(y|x)$, where $\alpha$ is searched over $\{0.5, 0.6, 0.7, 0.8, 0.9\}$ to maximize backward compatibility while achieving accuracy on par with the vanilla $f^{o}_{new}$. \section{Results and Analysis} \input{Tables/table1_largeresults} \input{Tables/table2_electraresults} \subsection{Upgrade to a Larger Pretrained Model} Our first model upgrade scenario scales up the size of the underlying pretrained language model. We experiment with \bert{base} to \bert{large}, where the model size is tripled (110M vs 340M parameters) and the model depth is doubled (12 vs 24 layers). Table \ref{tab:bertlarge_main} shows the results. For $f^{o}_{new}$, we observe that the negative flip rates $\mathcal{R}_{NF}$\xspace are usually much larger than the accuracy gains across tasks, which could hinder new model adoption in real-world applications. Besides, dividing $\mathcal{R}_{NF}$\xspace by the error rate $(1 - \textit{accuracy})$ shows that around $30\%$ to $40\%$ of all $f^{o}_{new}$ prediction errors are in fact \emph{new} errors introduced during the model upgrade. For improving prediction backward compatibility, our proposed Gated Fusion outperforms the other competing methods, considerably reducing $\mathcal{R}_{NF}$\xspace without degradation in accuracy.
Note that the best $\alpha$ values found for the two variants of the old-new ensemble are both $0.5$, hence they produce identical results. Compared to the vanilla new model, Gated Fusion obtains absolute $\mathcal{R}_{NF}$\xspace reductions of \textminus$1.40\%$ on SST-2, \textminus$2.94\%$ on MRPC, and \textminus$1.99\%$ on QNLI. These translate to reducing the total negative flip cases by $64.2\%$, $71.4\%$, and $73.2\%$, respectively. Compared to the strongest baseline (the old-new ensemble), we obtain further absolute $\mathcal{R}_{NF}$\xspace reductions of \textminus$0.28\%$ on SST-2, \textminus$0.49\%$ on MRPC, and \textminus$0.31\%$ on QNLI, which translate to removing a further $12.8\%$, $11.9\%$, and $11.4\%$ of negative flip cases. These results show the effectiveness of our method in mitigating a significant portion of regression errors during model upgrade. \input{Tables/table5_ensemble} \vspace{-0.03cm} \subsection{Upgrade to a Different Pretrained Model} \vspace{-0.03cm} A more challenging upgrade scenario is when the old and new models are pretrained under distinct paradigms, producing two representation spaces with fairly different characteristics \citep{meng2021coco}. We experiment with \bert{base} to \electra{base} in this scenario, where the two models have the same size but are pretrained under utterly different schemes, i.e. generative versus adversarial. Table \ref{tab:electra_main} shows the results. For $f^{o}_{new}$, compared with upgrading to \bert{large}, we observe larger accuracy gains and lower $\mathcal{R}_{NF}$\xspace on SST-2 and MRPC. However, on QNLI, upgrading to \electra{base} achieves a higher accuracy gain but an even higher $\mathcal{R}_{NF}$\xspace. This implies that boosting accuracy and improving backward compatibility could be two related but different objectives. Among the mitigation strategies, Gated Fusion achieves the lowest negative flip rates across datasets without any accuracy loss. We obtain absolute $\mathcal{R}_{NF}$\xspace reductions of \textminus$0.92\%$ on SST-2, \textminus$1.33\%$ on MRPC, and \textminus$2.01\%$ on QNLI over the vanilla setup, removing $56.4\%$, $35.7\%$, and $71.3\%$ of the overall negative flips, respectively. Compared with upgrading to \bert{large}, upgrading to \electra{base} yields much smaller relative negative flip reductions on SST-2 and MRPC, showing that it can indeed be harder to improve backward compatibility when upgrading to a distinct pretrained model. In contrast, similar relative negative flip reductions are observed on QNLI across the two upgrade scenarios, which could be attributed to the abundant training data of that downstream task. \input{Tables/table3_newalone} \input{Tables/table4_cache} \subsection{Drop Old Model at Inference Time} Our proposed method requires the old model to be hosted together with the new model. A natural question is whether we could train Gated Fusion with the old model and then discard it at inference time, hosting the new model only. We first experiment with directly dropping the old model within Gated Fusion at inference time. Results in Table \ref{tab:newalone} show that dropping the old model in Gated Fusion still achieves comparable accuracy across the board, suggesting no performance degradation. Nonetheless, we observe that the negative flip rates also fall back to levels similar to training the new model in the vanilla setting.
However, in real application scenarios, live inputs are often seen repeatedly over time, and ensuring backward compatibility means that correct predictions on the same instances are preserved after the model upgrade. We experiment with the caching method introduced in section \ref{cache} to store the old model's logits for a random $X\%$ of test instances, which Gated Fusion can later access for inference. Results in Table \ref{tab:cache} show that with a higher cache percentage, $\mathcal{R}_{NF}$\xspace is gradually reduced towards the $\mathcal{R}_{NF}$\xspace of the original Gated Fusion, which is equivalent to a $100\%$ cache. Still, we observe a notable gap in $\mathcal{R}_{NF}$\xspace between the partial and full caching settings. We leave the examination of ways to achieve this upper bound on the $\mathcal{R}_{NF}$\xspace reduction with a smaller cache to future work. \input{Tables/table6_examples} \subsection{Limitations of New Model Ensemble} In previous works \citep{yan2021positive,work-in-progress}, new-model ensembles via majority voting were shown to effectively reduce negative flips and posed as a difficult-to-beat baseline. Here, we increase the number of models in the ensemble to examine its limitations. Results in Table \ref{tab:ensemble} show that ensembling more models generally helps to obtain a lower $\mathcal{R}_{NF}$\xspace. However, $\mathcal{R}_{NF}$\xspace converges quickly as the number of models increases, and a notable gap remains between the new-model ensemble and Gated Fusion. Moreover, the results show once more that boosting accuracy does not necessarily improve backward compatibility in model upgrade. In principle, two sources can cause negative flips during model upgrade: (a) the stochasticity of model training, including initialization, data loading order, and the optimization process \citep{somepalli2022can}; (b) the distinctions between the old and new model hypotheses, including architecture, pretraining data, and pretraining procedure, leading to different representation space structures and prediction behaviors in terms of decision boundaries. Without an explicit connection to $f_{old}$, a new-model ensemble can only reduce the negative flips primarily caused by the first factor, while our proposed Gated Fusion directly learns to mitigate regression errors regardless of their causes. Besides, as large-scale generative models become more and more powerful and popular \citep{raffel2020exploring,brown2020language,su2021multi}, it would be difficult to fine-tune them multiple times on a target task for ensembling. \subsection{Analysis of Gated Fusion} Comparing $f^{o}_{new}$ with $f^{*}_{GF}$, we can calculate the \textit{fix rate} and \textit{new fault rate} of our Gated Fusion method. During an upgrade, if there are $20$ negative flips with $f^{o}_{new}$ and $16$ of them are mitigated by $f^{*}_{GF}$, the fix rate is $16/20=80\%$. Similarly, if $f^{*}_{GF}$ introduces another $4$ new negative flips which are not present with $f^{o}_{new}$, the new fault rate is $4/20=20\%$. We calculate the $5$-seed average of these two rates across the different classification tasks and upgrade scenarios. In \bert{base} to \bert{large}, the averaged fix rates of Gated Fusion are $68.4\%$ on SST-2, $83.8\%$ on MRPC, and $82.9\%$ on QNLI, with new fault rates of $4.1\%$ on SST-2, $11.3\%$ on MRPC, and $9.7\%$ on QNLI.
In \bert{base} to \electra{base}, Gated Fusion achieves averaged fix rates of $58.0\%$ on SST-2, $50.8\%$ on MRPC, and $75.6\%$ on QNLI, with new fault rates of $2.8\%$ on SST-2, $15.2\%$ on MRPC, and $4.0\%$ on QNLI. These results show that, on average, Gated Fusion is able to eliminate $69.9\%$ of the total regression errors while adding only $7.9\%$ new ones, compared with performing the model upgrade without any treatment, i.e. $f^{o}_{new}$. Table \ref{tab:examples} shows a few regression error cases fixed by our proposed approach. In general, Gated Fusion can mitigate negative flips occurring in different classes across diverse tasks, as well as on inputs of variable length. Upon closer inspection of $f^{*}_{GF}$, we found that when $f_{new}$ produces incorrect predictions and $f_{old}$ gives correct outputs, $g_\theta$ is capable of putting larger weight on $f_{old}$ to ensure backward compatibility. We also observed that the gate $g_{\theta}$ is more prone to over-fitting when the downstream task has a smaller training set, e.g. MRPC, or is more difficult in nature, e.g. the single-sentence task SST-2 versus the sentence-pair tasks, which causes Gated Fusion to introduce more new errors, i.e. higher new fault rates. \section{Discussion} Gated Fusion requires hosting both the old and new models at inference time, which could raise concerns regarding the increased computational burden. However, in practice, the old model's logits on previous inference instances can be cached in storage and later leveraged by our Gated Fusion. That is, we only need to host the new model with the gate at inference time and take old predictions from the cache. For out-of-cache inputs, backward compatibility is less of an issue, since users have not observed predictions on such examples and thus cannot perceive a regression there. For real-world applications, there could be multiple model updates and thus multiple legacy versions. We note that in this scenario, the user experience is primarily grounded in the predictions of the latest legacy version, which are also saved in the cache. Our Gated Fusion can hence leverage them and make the new model's predictions compatible with those of the latest legacy version. In addition, we emphasize that the main challenge in the regression reduction problem is to find the best trade-off between model effectiveness and backward compatibility. In this work, we show that the weighted ensemble of the old and new models with a learned gate, which we call Gated Fusion, achieves a better negative flip rate than previously explored methods for regression reduction, while straightforward ensemble approaches cannot naturally navigate this trade-off. We do not claim to invent the gated ensemble of old and new models; rather, our main contribution is to show that, by repurposing the classic gating mechanism, the gated ensemble becomes the most competitive approach to the challenging model-upgrade regression reduction problem, with no overall performance degradation on two realistic model upgrade scenarios across three different datasets. Recently, more and more NLP products have been deployed in industry as the field matures. We would like to stress that, as better NLP models are developed, the backward-compatible model upgrade problem naturally emerges as a new research topic strongly motivated by real-world challenges. While backward compatibility is currently a niche research topic, we believe there are many exciting future directions worth investigating.
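As a concrete illustration of the cache-based inference from section \ref{cache} discussed above, the following sketch retrieves the nearest cached old-model logits for each incoming example. It is a simplified sketch: tensor names and shapes are our own, and in practice the cache would be persisted in storage rather than held in memory.
\begin{verbatim}
import torch

def lookup_old_logits(e_new, cache_keys, cache_logits):
    # e_new:        (batch, hidden)  new-model output embeddings (queries)
    # cache_keys:   (n, hidden)      embeddings of previously cached inputs
    # cache_logits: (n, classes)     stored old-model logits
    dists = torch.cdist(e_new, cache_keys)  # pairwise Euclidean distances
    nearest = dists.argmin(dim=1)           # index of closest cached input
    return cache_logits[nearest]            # surrogate for l_old at inference
\end{verbatim}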
\section{Related Work} \citet{yan2021positive} first studied the backward compatibility of predictions during model upgrade on image classification tasks. Later, \citet{work-in-progress} investigated a similar topic in natural language understanding and formulated it as a constrained optimization problem. They both show that customized variants of knowledge distillation \citep{Hinton2015DistillingTK}, which align the predictions of the old and new models on potential regression errors, are effective approaches. A model ensemble has also been shown to be surprisingly effective \citep{yan2021positive,work-in-progress}, despite having no explicit connection between the old and new models. This was credited to variance reduction in model predictions, which makes the ensemble less prone to over-fitting and reduces regression errors indirectly. In this work, we leverage the gating mechanism to combine the old and new models and further reduce model upgrade regression errors by a large margin across classification tasks. \citet{cai2022measuring} analyzed and proposed backward congruent re-ranking to reduce regression in model upgrades for structured prediction tasks such as dependency parsing and conversational semantic parsing. \citet{trauble2021backward} proposed an efficient probabilistic approach to locate data instances whose old predictions could be incorrect and update them with ones from the new model. \citet{zhou2022forward} looked into forward compatibility, where new classes can be easily incorporated without negatively impacting existing prediction behavior. More recently, \citet{schumann2023backward} inspected classification model regression during training data updates and mitigated the problem by interpolating between the weights of the old and new models. On top of that, learning cross-model compatible embeddings has been extensively explored in visual search \citep{chen2019r3,hu2019towards,wang2020unified}. Several techniques have been proposed to optimize the cross-model interoperability of embeddings, including metric space alignment \citep{Shen2020TowardsBR}, architecture search \citep{duggalcompatibility}, and aligning class centers between models \citep{meng2021learning}. In this work, we focus on improving backward compatibility during model upgrade in terms of prediction behavior on classification tasks, i.e., the old and new models should produce consistently correct predictions. Reducing regression during model upgrade is also related to continual learning \citep{parisi2019continual,de2019continual,sun2019lamol,chuang2020lifelong,sachidananda2021efficient}, incremental learning \citep{chaudhry2018riemannian,shan2020learn} and concept drift \citep{gama2014survey,vzliobaite2016overview,ganin2016domain,zhuang2020comprehensive,lazaridou2021pitfalls}. In these problems, models are required to learn from and deal with continuously changing data (in terms of examples, classes or tasks), and also need to prevent the forgetting of previously learnt knowledge; this can be one potential cause of the regression observed at inference time. However, in backward-compatible model upgrade, a new model, usually with a distinct network architecture, is trained from scratch to perform the same task and is expected to behave consistently wherever the previous model predicts correctly.
The gating mechanism is widely adopted in recurrent neural networks to effectively control information flow across the network \citep{hochreiter1997long,cho2014properties,van2016pixel,dauphin2017language,lai2019goal} and to contextualize embeddings \cite{peters2018deep,lai2020context}. It has since been repurposed to act as a switch for mixing different prediction modes, notably to combine input word copying based on the pointer network \citep{vinyals2015pointer} with word generation from the output vocabulary \citep{gu2016incorporating,merity2016pointer,see2017get}. Our proposed approach is inspired by these works and leverages the gating mechanism to effectively combine the old and new models to improve backward compatibility during model upgrade. \section{Details on Experiment Settings} \subsection{Model Training Hyper-parameters} We search the following hyper-parameter space for the training of the old model $f_{old}$ and the new model in the vanilla setting $f^{o}_{new}$ across all datasets: \begin{itemize}[itemsep=0pt,topsep=0pt,parsep=0pt,partopsep=0pt] \item Learning Rate: $5e^{-6}$, $1e^{-5}$, $3e^{-5}$, $5e^{-5}$ \item Batch Size: $16$, $32$ \item Training Epochs: $3$, $5$, $8$. \end{itemize} The selected hyper-parameters for each model, given as \textit{(learning rate, batch size, training epochs)}, are: \begin{itemize}[itemsep=0pt,topsep=0pt,parsep=0pt,partopsep=0pt] \item \bert{base}: \begin{itemize} \item On SST-2: $(\text{lr }1e^{-5}, \text{batch }16, \text{epoch } 5)$ \item On MRPC: $(\text{lr }3e^{-5}, \text{batch }16, \text{epoch } 5)$ \item On QNLI: $(\text{lr }3e^{-5}, \text{batch }32, \text{epoch } 3)$ \end{itemize} \item \bert{large}: \begin{itemize} \item On SST-2: $(\text{lr }1e^{-5}, \text{batch }16, \text{epoch } 5)$ \item On MRPC: $(\text{lr }3e^{-5}, \text{batch }16, \text{epoch } 5)$ \item On QNLI: $(\text{lr }3e^{-5}, \text{batch }32, \text{epoch } 3)$ \end{itemize} \item \electra{base}: \begin{itemize} \item On SST-2: $(\text{lr }1e^{-5}, \text{batch }16, \text{epoch } 5)$ \item On MRPC: $(\text{lr }5e^{-5}, \text{batch }32, \text{epoch } 5)$ \item On QNLI: $(\text{lr }3e^{-5}, \text{batch }32, \text{epoch } 3)$ \end{itemize} \end{itemize} These model training hyper-parameters for a specific model on a specific dataset are then fixed and reused for all competing methods that aim to improve backward compatibility during model upgrade. \subsection{Distillation Hyper-parameters} The knowledge distillation method from \citet{work-in-progress} imposes an additional loss $\lambda \cdot KL(l_{old} / T, l_{new} / T)$ on potential regression instances. We searched for the best hyper-parameters among the following: \begin{itemize}[itemsep=0pt,topsep=0pt,parsep=0pt,partopsep=0pt] \item $\lambda$: $0.1, 1.0, 10.0$ \item Temperature $T$: $0.5, 1.0, 2.0$ \end{itemize} \subsection{Details on Gated Fusion} We initialize the gate $g_{\theta}$ as a two-layer feed-forward network with the architecture \textit{(Dropout, Linear, LayerNorm, ReLU, Dropout, Linear, Sigmoid)} and fix the hidden size to $64$ across all our experiments. During the training of Gated Fusion, we only train the $f_{new}$ within $f^{*}_{GF}$ for the first $(N - 1)$ epochs to ensure its competence, where $N$ is the total number of training epochs. In the last training epoch, we jointly train $g_{\theta}$ and $f_{new}$ using the Gated Fusion logits $l^{*}_{GF}$ with the secondary learning rate $lr2$.
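For concreteness, below is a minimal PyTorch sketch of the gate architecture described above; the dropout probability and the exact rule for combining the two models' logits are our assumptions for illustration (the \textit{drop\_gate} regularization discussed next is omitted).
\begin{verbatim}
import torch
import torch.nn as nn

class Gate(nn.Module):
    """Two-layer feed-forward gate: (Dropout, Linear,
    LayerNorm, ReLU, Dropout, Linear, Sigmoid)."""
    def __init__(self, in_dim, hidden=64, p=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Dropout(p), nn.Linear(in_dim, hidden),
            nn.LayerNorm(hidden), nn.ReLU(),
            nn.Dropout(p), nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, features):       # features: [batch, in_dim]
        return self.net(features)      # gate value g in (0, 1)

def gated_fusion_logits(g, old_logits, new_logits, T=1.0):
    # Assumed fusion rule: convex combination of the two logit
    # vectors, with temperature T bridging their magnitude gap.
    return g * (old_logits / T) + (1.0 - g) * new_logits
\end{verbatim}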
To prevent over-fitting of the gate, we also apply \textit{drop\_gate}: at each training step during the last epoch, there is a $D\%$ chance of only training $f_{new}$ within $f^{*}_{GF}$ and a $(100 - D)\%$ chance of training with $l^{*}_{GF}$. The hyper-parameter space of Gated Fusion is as follows: \begin{itemize}[itemsep=0pt,topsep=0pt,parsep=0pt,partopsep=0pt] \item Drop Gate ($\%$): $40, 50, 60, 80$ \item Temperature $T$ on old logits: $1.0, 1.2, 1.4, 1.6$ \item lr2: $5e^{-7}$, $1e^{-6}$, $3e^{-6}$, $1e^{-5}$, $3e^{-5}$ \end{itemize} We found that, to achieve good results, the gap in logit magnitude between $f_{old}$ and $f_{new}$ needs to be bridged by the temperature when upgrading from \bert{base} to \electra{base}, with $T$ being $1.6$ on SST-2, $1.6$ on MRPC, and $1.2$ on QNLI. On the other hand, $T=1$ gives good results across all three datasets when upgrading from \bert{base} to \bert{large}. This could result from the distinct pretraining schemes of the two models, where MLM pretraining seems to produce output logits of larger magnitude. \section{Conclusion} Ensuring backward compatibility during model upgrade has become a critical topic in real-world NLP applications. In this work, we proposed a new approach, \emph{Gated Fusion}, that achieves significantly better backward compatibility without compromising accuracy on two challenging upgrade scenarios for NLP classification. Experiments demonstrated that our approach outperforms competing methods and achieves negative flip rate reductions of up to $73.2\%$. Our future research includes improving backward compatibility beyond classification, e.g., for span detection, model upgrades involving very large language models, and upgrades of training data or label schema. We hope that this work can inspire further research and make progress towards smoother transitions of prediction power as NLP systems evolve. \section*{Limitations} Our proposed method mostly addresses upgrades of the underlying pretrained language models for NLP classification tasks. Potential limitations include applying our approach to more distant tasks such as question answering or information retrieval, upgrades to models from different architecture families such as recurrent neural networks, and the inapplicability of our method to more recent learning formulations such as in-context learning via prompting. \section*{Ethics Statement} Prediction backward compatibility during model upgrade is an emerging research topic that aims to ensure positive congruency and smoother transitions from existing models towards more performant systems. With the primary evaluation focused on accuracy and negative flips, we acknowledge that our method may also inherit social biases and other toxicity persisting in the legacy models. On the other hand, we note that fairness and safety have been among the principal criteria when developing system upgrades. Investigating the inheritance of persistent toxicity and its mitigation during backward-compatible upgrades merits future research. \section*{Acknowledgements} We would like to acknowledge AWS AI Labs for inspiring discussions, honest feedback, and full support. We are also very grateful to the reviewers for their judicious comments and valuable suggestions.
\section{Introduction} In an earlier paper \cite{Kos98}, Pekka Koskela wrote about old and new results concerning the quasihyperbolic metric, a metric introduced in \cite{GP76}. Since then this metric has been a key player in much of Pekka's work, along with that of many other authors, either as a fundamental geometric tool or as the focus of research. We anticipate that Pekka will find the following results of interest, as they include quasihyperbolic geometry as a special case. Throughout this article $\Omega$ denotes a plane domain in the complex number field ${\mathbb C}$; so, $\Omega\subset\bbC$ is open and connected. The \emph{Gaussian curvature} of a (sufficiently) smooth conformal metric $\rho\,ds$ (see \S\ref{s:cfml metrics}) is given by \begin{equation}\label{E:cvtr} {\mathbf K}_\rho:=-\rho^{-2}\Delta\log\rho\,. \end{equation} Classical results---for instance, see \cite[Theorem~1A.6, p.173 and Chapter~II.4, Theorem~4.1, p.193]{BH99}---reveal that if $\rho$ is $\mathcal{C}^3$ smooth, $\log\rho$ is subharmonic in $\Omega$, and the length distance $d$ induced by $\rho\,ds$ is complete, then $(\Omega,d)$ is a metric space of non-positive curvature (equivalently, the metric universal cover $(\tilde{\Omega},\tilde{d})$ of $(\Omega,d)$ is Hadamard). A natural question is whether or not we can relax the above smoothness hypothesis, and we answer this as follows. \begin{thm*} \label{TT:main} Let $\rho\,ds$ be a conformal metric on a plane domain $\Omega$ with a complete induced length distance $d$, see (\ref{metric}). Suppose $\rho=\varphi\circ u$ where $u$ is continuous and subharmonic in $\Omega$, $\varphi$ is positive and increasing on an interval containing $u(\Omega)$, and $\log\varphi$ is convex. Then the metric universal cover $(\tilde{\Omega},\tilde{d})$ is a Hadamard space. \end{thm*} Recall that a \emph{Hadamard space} is a complete CAT(0) metric space; see \S\ref{s:CAT0}. \smallskip When $\tilde{\Omega}\stackrel{\Phi}{\to}\Omega$ is a universal cover of $\Omega$, and $(\Omega,d)$ is a length space, there is a unique length distance $\tilde{d}$ on $\tilde{\Omega}$ such that $(\tilde{\Omega},\tilde{d})\stackrel{\Phi}{\to}(\Omega,d)$ is a local isometry and $\tilde{d}$ is given by \begin{equation}\label{metric} \tilde{d}(a,b):=\inf\bigl\{\ell_\rho(\Phi\circ\gamma) \bigm| \gamma\;\text{a path in $\tilde{\Omega}$ with endpoints $a, b$}\bigl\}\,; \end{equation} see \cite[Prop.~3.25, p.42]{BH99} or \cite[p.80]{BBI01}. We call $(\tilde{\Omega},\tilde{d})$ the \emph{metric universal cover} of $(\Omega,d)$. \smallskip There are some immediate consequences of the above Theorem. \begin{cor*} \label{CC:sc} Suppose $d$ is the length distance induced by a conformal metric $\rho\,ds$ on $\Omega$ satisfying the above hypotheses. \begin{enumerate}[\rm(a), wide, labelwidth=!, labelindent=0pt] \item If $\Omega$ is simply connected, then $(\Omega,d)$ is Hadamard and for each pair of points $a,b\in\Omega$ there is a unique $d$-geodesic with endpoints $a,b$. \item For each pair of points $a,b$ in any $\Omega$, each homotopy class of paths in $\Omega$ with endpoints $a,b$ contains a unique $d$-geodesic. \item Should $\rho:\Omega\to{\mathbb R}_+$ be locally Lipschitz, these geodesics have Lipschitz continuous first derivatives.
\end{enumerate} \end{cor*} \noindent{\bf Examples.} When $\Omega\subsetneq\bbC$, $\rho_\alpha:=\delta^\alpha$, where $\alpha\in\bbR$ and $\delta(z)=\delta_\Omega(z):={\rm dist}\,(z,\partial \Omega )$ is the Euclidean distance from $z$ to the boundary of $\Omega$, defines a conformal metric $\rho_\alpha\,ds$ on $\Omega$. Since $-\log\delta$ is subharmonic in $\Omega$,\footnote{% For fixed $\zeta\in\partial\Omega$, $z\mapsto-\log|z-\zeta|$ is harmonic in $\bbC\setminus\{\zeta\}\supset\Omega$, so it has the mean value property in $\Omega$, whence $-\log\delta$ does too.} $\log\rho_\alpha$ is subharmonic when $\alpha<0$; it also induces a complete distance for $\alpha\le-1$. Thus $\set{\rho_\alpha\,ds | \alpha\le-1}$ is a class of metrics to which we can apply the above Theorem and Corollary. For $\alpha:=-1$ we obtain the \emph{quasihyperbolic metric} $\delta^{-1}ds=\delta_\Omega^{-1}ds$, and this special case of our Theorem gives \cite[Theorem~A]{Her21a}. Furthermore, if $\gamma\geq 1$, then locally \[ |\delta^\gamma_\Omega(z)-\delta^\gamma_\Omega(w)| \leq 2\gamma \delta^{\gamma-1}_\Omega(z)\; |z-w| \] by the triangle inequality. Hence all these metrics have $C^{1,1}$ geodesics. \section{Preliminaries} \subsection{General Information} We view the Euclidean plane as the complex number field ${\mathbb C}$. Everywhere $\Omega$ is a {\em plane domain}. The open unit disk is ${\mathbb D}:=\{z\in{\mathbb C}:|z|<1\}$ and ${\mathbb C}_*:={\mathbb C}\setminus\{0\}$. The quantity $\delta(z)=\delta_\Omega(z):={\rm dist}\,(z,\partial\Omega)$ is the Euclidean distance from $z\in{\mathbb C}$ to the boundary of $\Omega$. \subsection{Conformal Metrics} \label{s:cfml metrics} % Each positive continuous function $\Omega\xrightarrow{\rho}(0,+\infty)$ induces a length distance $d_\rho$ on $\Omega$ defined by \[ d_\rho(a,b):=\inf_{\gamma } \ell_\rho(\gamma) \quad\text{where}\quad \ell_\rho(\gamma):= \int_\gamma \rho\, ds \] and where the infimum is taken over all rectifiable paths $\gamma$ in $\Omega$ that join the points $a,b$. We describe this by calling $\rho\,ds=\rho(z)|dz|$ a \emph{conformal metric} on $\Omega$. When $(\Omega,d_\rho)$ is complete, the Hopf-Rinow Theorem (see \cite[p.35]{BH99}, \cite[p.51]{BBI01}, \cite[p.62]{Pap05}) asserts that $(\Omega,d_\rho)$ is a proper geodesic metric space.\footnote{% See also \cite[Theorem~2.8]{Mar} for Euclidean domains.} When $\rho$ is sufficiently differentiable, say in $\mathcal{C}^2(\Omega)$, the \emph{Gaussian curvature} of $(\Omega,d_\rho)$ is given by \eqref{E:cvtr}. This curvature can also be defined for continuous metric densities through an integral formula for the Laplacian, although it is not always finite; see \cite[\S3]{MO86}. This idea was first observed by Heins \cite{Heins}, who proved a version of Schwarz' lemma when the weak curvature is bounded above by $-1$. Every hyperbolic plane domain $\Omega$ carries a unique metric $\lambda\,ds=\lambda_\Omega\,ds$ which enjoys the property that its pullback $\Phi^*[\lambda\,ds]$, with respect to any holomorphic universal covering projection $\Phi:\bbD\to\Omega$, is the hyperbolic metric $\lambda_\bbD(\zeta)|d\zeta|=2(1-|\zeta|^2)^{-1}|d\zeta|$ on $\bbD$. Alternatively, $\lambda\,ds$ is the unique maximal (or unique complete) metric on $\Omega$ that has constant Gaussian curvature $-1$.
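As a quick illustration of \eqref{E:cvtr} (our check, not taken from the cited sources), one can verify directly that $\lambda_\bbD\,ds$ has constant curvature $-1$: writing $\lambda_\bbD(z)=2(1-|z|^2)^{-1}$,
\[
\Delta\log\lambda_\bbD
=4\,\frac{\partial^2}{\partial\bar z\,\partial z}\Bigl[-\log\bigl(1-z\bar z\bigr)\Bigr]
=\frac{4}{(1-|z|^2)^2}\,,
\]
whence
\[
{\mathbf K}_{\lambda_\bbD}
=-\lambda_\bbD^{-2}\,\Delta\log\lambda_\bbD
=-\frac{(1-|z|^2)^2}{4}\cdot\frac{4}{(1-|z|^2)^2}
=-1\,.
\]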
The \emph{quasihyperbolic metric} on $\Omega\subsetneq\bbC$ is $\delta^{-1}ds$, and it is well known that this metric is complete.\footnote{% This is also true for rectifiably connected non-complete locally complete metric spaces as discussed in \cite[2.4.1, p.43]{HRS20}.} In \cite[Corollary 3.6, p.44]{MO86} Martin and Osgood proved that in plane domains the quasihyperbolic metric has non-positive generalized curvature. \subsection{CAT(0) Metric Spaces} \label{s:CAT0} % Here our terminology and notation conform exactly with that in \cite{BH99}, and we refer the reader to this delightful trove of geometric information about non-positive curvature; see also \cite{BBI01}. We recall a few fundamental concepts, mostly copied directly from \cite{BH99}. Throughout this subsection, $X$ is a geodesic metric space; for example, $X$ could be a quasihyperbolic plane domain with its quasihyperbolic distance, or a closed rectifiably connected plane set with its intrinsic length distance. \subsubsection{Geodesic and Comparison Triangles} \label{ss:triangles} A \emph{geodesic triangle} $\Delta$ in $X$ consists of three points in $X$, say $a,b,c\in X$, called the \emph{vertices of $\Delta$}, and three geodesics, say $\alpha:a\curvearrowright b, \beta:b\curvearrowright c, \gamma:c\curvearrowright a$ (that we may write as $[a,b], [b,c], [c,a]$) called the \emph{sides of $\Delta$}. We use the notation \[ \Delta=\Delta(\alpha,\beta,\gamma) \quad\text{or}\quad \Delta=[a,b,c]:=[a,b]\star[b,c]\star[c,a] \quad\text{or}\quad \Delta=\Delta(a,b,c) \] depending on the context and the need for accuracy. A Euclidean triangle $\bar\Delta=\Delta(\bar{a},\bar{b},\bar{c})$ in ${\mathbb C}$ is a \emph{comparison triangle} for $\Delta=\Delta(a,b,c)$ provided $|a-b|=|\bar{a}-\bar{b}|,|b-c|=|\bar{b}-\bar{c}|,|c-a|=|\bar{c}-\bar{a}|$. We also write $\bar\Delta=\bar\Delta(a,b,c)$ when a specific choice of $\bar{a},\bar{b},\bar{c}$ is not required. A point $\bar{x}\in[\bar{a},\bar{b}]$ is a \emph{comparison point} for $x\in[a,b]$ when $|x-a|=|\bar{x}-\bar{a}|$. \subsubsection{CAT(0) Definition} \label{ss:CAT0} A geodesic triangle $\Delta$ in $X$ satisfies the \emph{CAT(0) distance inequality} if and only if the distance between any two points of $\Delta$ is not larger than the Euclidean distance between the corresponding comparison points; that is, \[ \forall\; x,y\in\Delta\;\text{and corresponding comparison points}\;\bar{x},\bar{y}\in\bar\Delta\;, \quad |x-y| \le |\bar{x}-\bar{y}|\,. \] A geodesic metric space is \emph{CAT(0)} if and only if each of its geodesic triangles satisfies the CAT(0) distance inequality. A complete CAT(0) metric space is called a \emph{Hadamard space}. A geodesic metric space $X$ has \emph{non-positive curvature} if and only if it is locally CAT(0), meaning that for each point $a\in X$ there is an $r>0$ (that can depend on $a$) such that the metric ball $B(a;r)$ (endowed with the distance from $X$) is CAT(0). Each sufficiently smooth Riemannian manifold has non-positive curvature if and only if all of its sectional curvatures are non-positive; see \cite[Theorem~1A.6, p.173]{BH99}. In particular, if $\rho\,ds$ is a smooth conformal metric on $\Omega$ with ${\bf K}_\rho\le0$, then $(\Omega,d_\rho)$ has non-positive curvature.
\subsection{Some Potential Theory} \label{ss:potl} % Recalling \eqref{E:cvtr}, we see that if $\Omega\xrightarrow{\rho}(0,+\infty)$ is $\mathcal{C}^3$ smooth with $\log\rho$ subharmonic in $\Omega$, then $(\Omega,d_\rho)$ is a metric space of non-positive curvature; see \cite[Theorem~1A.6, p.173]{BH99}. We utilize this basic fact, but also require the following. \begin{lemma} \label{L:subhar} % Suppose $\Omega\xrightarrow{\rho}(0,+\infty)$ is $\mathcal{C}^2$ smooth with $\log\rho$ subharmonic in $\Omega$. Then $\log(1+\rho)$ is subharmonic in $\Omega$. \end{lemma} \begin{proof}% As $\rho=\exp(\log\rho)$ (with $\exp$ a convex increasing function), it is also subharmonic in $\Omega$. Thus \begin{gather*} \rho_{z\bar{z}} \ge 0 \quad\text{and}\quad \rho^2 \frac{\partial^2}{\partial\bar{z}\partial z}\Bigl[\log\rho\Bigr] = \rho\rho_{z\bar{z}}-\rho_z\rho_{\bar{z}} \ge 0\,, \intertext{whence} \frac{\partial^2}{\partial\bar{z}\partial z}\biggl[\log(1+\rho)\biggr]= \frac{\partial}{\partial\bar{z}}\biggl[\frac{\rho_z}{1+\rho}\biggr] = \frac{(1+\rho)\rho_{z\bar{z}}-\rho_z\rho_{\bar{z}}}{(1+\rho)^2} \ge0\,. \end{gather*} \end{proof}% \begin{rmks}\label{R:subhar} We can apply the above: \begin{enumerate}[\rm(a), wide, labelwidth=!, labelindent=20pt] \item to $\varepsilon\rho$ for any $\varepsilon>0$; \item to $\rho(z):=|z|^\alpha$ in ${\mathbb C}_*$ for any $\alpha\in\bbR$; \item whenever $\rho\,ds$ (is sufficiently smooth and) has curvature $\mathbf{K}_\rho\le0$. \end{enumerate} \end{rmks} \subsubsection{Smoothing} \label{ss:smooth} % Let ${\mathbb C}\xrightarrow{\eta}{\mathbb R}$ be $\mathcal{C}^\infty$ smooth with $\eta\ge0$, $\eta(z)=\eta(|z|)$, the support of $\eta$ contained in ${\mathbb D}$, and $\int_{\mathbb C}\eta=1$. For each $\varepsilon >0$ we set $\eta_\varepsilon (z):=\varepsilon ^{-2}\eta(z/\varepsilon )$. The \emph{regularization} (or \emph{mollification}) of an $L^{1}_{\rm loc}(\Omega)$ function $u:\Omega\to{\mathbb R}$ is the family of convolutions $u_\varepsilon :=u\ast\eta_\varepsilon $, so \[ u_\varepsilon (z) := \int_{\mathbb C} u(w)\eta_\varepsilon (z-w)\, dA(w)\,, \] which are defined in $\Omega_\varepsilon :=\{z\in\Omega:\delta(z)>\varepsilon \}$. It is well known that $u_\varepsilon \in\mathcal{C}^\infty(\Omega_\varepsilon )$ and $u_\varepsilon \to u$ as $\varepsilon \to0^+$ where this convergence is: pointwise at each Lebesgue point of $u$, locally uniformly in $\Omega$ if $u$ is continuous in $\Omega$, and in $L^p_{\rm loc}(\Omega)$ if $u\in L^p_{\rm loc}(\Omega)$. Moreover, if $u$ is subharmonic in $\Omega$, then so is each $u_\varepsilon $. See for example \cite[Proposition I.15, p.235]{GL86} or \cite[Theorem~2.7.2, p.49]{Ran95}. \section{Proof of Theorem} \label{S:PfThm} Let $\rho\,ds$ be a conformal metric on a plane domain $\Omega$ with an induced length distance $d:=d_\rho$ that is complete. Suppose $\rho=\varphi\circ u$ where $u$ is continuous and subharmonic in $\Omega$, $\varphi$ is positive and increasing on an interval containing $u(\Omega)$, and $\log\varphi$ is convex. We demonstrate that the metric universal cover $(\tilde{\Omega},\tilde{d})$ is a Hadamard space. The main idea is to approximate $(\Omega,d)$ by metric spaces $(\Omega_\varepsilon ,d_\varepsilon )$ that all have non-positive curvature. Then a limit argument, similar to that used in the proof of \cite[Theorem~A]{Her21a}, gives the asserted conclusion. We start with the fact that $v:=\log\rho=(\log\varphi)\circ u$ is subharmonic in $\Omega$; see \cite[Theorem~2.6.3, p.43]{Ran95}.
Let $v_\varepsilon :=v\ast\eta_\varepsilon $ be the regularization of $v$ as described in \S\ref{ss:smooth}. Thus the $v_\varepsilon$ are defined, $\mathcal{C}^\infty$ smooth, and subharmonic (so, $\Delta v_\varepsilon \geq 0$) in $\{z\in\Omega:\delta(z)>\varepsilon \}$. Moreover, $v_\varepsilon \to v$ as $\varepsilon \to0^+$ locally uniformly in $\Omega$. \medskip The elementary cases where $\Omega=\bbC$ or $\Omega$ is the once punctured plane are left to the reader. \medskip We may therefore assume that $\Omega$ is a hyperbolic plane domain and that the origin lies in $\Omega$. Put $\varepsilon _n:=\delta(0)/n, v_n:=v_{\varepsilon _n}$, and let $\Omega_n$ be the component of $\{z\in\Omega:\delta(z)>\varepsilon _n\}$ that contains the origin. Then $(\Omega_n)_{n=1}^{\infty}$ Carath\'eodory kernel converges to $\Omega$ with respect to the origin. Next, let $\rho_n:=e^{v_n}$. Then $\rho_n>0$ and $\rho_n$ is $\mathcal{C}^\infty$ smooth in $\Omega_n$. Since $v_n$ is subharmonic in $\Omega_n$, $\rho_n\,ds$ has Gaussian curvature \[ {\bf K}_{\rho_n}=-\rho_n^{-2}\Delta\log\rho_n\le0\quad\text{in $\Omega_n$}\,. \] It follows that the metric spaces $(\Omega_n,d_{\rho_n})$ all have non-positive curvature; see \cite[Theorem~1A.6, p.173]{BH99}. We would like to appeal to the Cartan-Hadamard theorem to assert that the metric universal coverings of $(\Omega_n,d_{\rho_n})$ are CAT(0), but these metric spaces need not be complete.\footnote{% We are grateful to the referee for pointing out this glaring gap in our original argument.} To overcome this roadblock, we employ the hyperbolic metrics $\lambda_n\,ds$ in the domains $\Omega_n$. We define metrics $\sigma_n\,ds$ on $\Omega_n$ via \[ \forall\; z\in\Omega_n\,,\quad \sigma_n(z):=\rho_n(z)\cdot\bigl(1+\varepsilon_n\lambda_n(z)\bigr)\,. \] According to Lemma~\ref{L:subhar} (using Remarks~\ref{R:subhar}(a,c)) $\log\sigma_n$ is subharmonic in $\Omega_n$. Thus $\sigma_n\,ds$ induces a distance $d_n:=d_{\sigma_n}$ with $(\Omega_n,d_n)$ a complete geodesic metric space of non-positive curvature. We note that $\sigma_n\to\rho$ locally uniformly in $\Omega$. Let $\bbD\xrightarrow{\Phi}\Omega, \bbD\xrightarrow{\Phi_n}\Omega_n$ be holomorphic covering projections with $\Phi(0)=0=\Phi_n(0)$ and $\Phi'(0)>0, \Phi'_n(0)>0$. Since $(\Omega_n)$ Carath\'eodory kernel converges to $\Omega$ with respect to the origin, a theorem of Hejhal's \cite[Theorem~1]{Hej74}\footnote{% See also \cite[Cor.~5.3]{BM22}.} asserts that $\Phi_n\to \Phi$, so also $\Phi'_n\to \Phi'$, locally uniformly in $\bbD$. Let $\tilde{d}, \tilde{d}_n$ be the $\Phi,\Phi_n$ lifts of the distances $d,d_n$ on $\Omega, \Omega_n$ respectively. That is, $\tilde{d}$ and $\tilde{d}_n$ are the length distances on $\bbD$ induced by the pullbacks \begin{gather*} \tilde\rho\,ds:=\Phi^*\bigl[\rho\,ds\bigr] \quad\text{and}\quad \tilde\sigma_n\,ds:=\Phi_n^*\bigl[\sigma_n\,ds\bigr] \intertext{of the metrics $\rho\,ds$ and $\sigma_n\,ds$ in $\Omega$ and $\Omega_n$ respectively. Thus, for $\zeta\in\bbD$,} \tilde\rho(\zeta)\,|d\zeta|=\rho\bigl(\Phi(\zeta)\bigr)|\Phi'(\zeta)|\,|d\zeta| \quad\text{and}\quad \tilde\sigma_n(\zeta)\,|d\zeta|= \sigma_n\bigl(\Phi_n(\zeta)\bigr)|\Phi_n'(\zeta)| \,|d\zeta| \end{gather*} and $(\bbD,\tilde{d})\xrightarrow{\Phi}(\Omega,d), (\bbD,\tilde{d}_n)\xrightarrow{\Phi_n}(\Omega_n,d_n)$ are metric universal coverings. Note that as $(\Omega_n,d_n)$ has non-positive curvature, the Cartan-Hadamard Theorem \cite[Chapter~II.4, Theorem~4.1, p.193]{BH99} asserts that $(\bbD,\tilde{d}_n)$ is CAT(0).
Using the locally uniform convergences of $\sigma_n\to\rho$ and $\Phi_n\to\Phi, \Phi'_n\to\Phi'$ (in $\Omega$ and $\bbD$ respectively) we deduce that $\tilde\sigma_n\,ds\to\tilde\rho\,ds$ locally uniformly in $\bbD$. This implies pointed Gromov-Hausdorff convergence of $(\bbD,\tilde{d}_n,0)$ to $(\bbD,\tilde{d},0)$ (see the proof of \cite[Theorem~4.4]{HRS20}) which in turn says that $(\bbD,\tilde{d})$ is a 4-point limit of $(\bbD,\tilde{d}_n)$ and hence, as each $(\bbD,\tilde{d}_n)$ is CAT(0), it follows that $(\bbD,\tilde{d})$ is CAT(0); see \cite[Cor.~3.10, p.187; Theorem~3.9, p.186]{BH99}. Finally, it is a routine matter to check that $(\bbD,\tilde{d})$ is complete; for instance, see \cite[Exercise~3.4.8, p.~80]{BBI01}. \hfill \qed \medskip Lastly, the smoothness of geodesics in the case where the metric density is locally Lipschitz is proved in \cite[Theorem 2.12 \& Theorem 4.3]{Mar}.
\section{Introduction} Speech enhancement, often called speech denoising, is the task of improving speech quality and intelligibility \cite{ref_se}. It plays a key role in speech, audio and acoustic applications, such as telecommunication, hands-free telephony, and mobile communication. The experience of voice interaction degrades severely when noise is present, especially for complicated noise such as babble noise and factory noise. The degradation can be mitigated by multi-channel processing technologies when multiple microphones are available \cite{ref_micarray,ref_micarray1}. In this paper, we focus on the problem of single-channel enhancement, where only one microphone is used for audio recording. Over the past several decades, many methods have been proposed to handle this problem. In general, two categories of methods can be distinguished, namely traditional signal processing approaches and deep learning approaches. The traditional methods, such as spectral subtraction (SS) \cite{ref_ss}, Wiener filtering (WF) \cite{ref_wf,ref_wf1} and adaptive filtering (AF) \cite{ref_af}, are based on a predefined noise or speech statistical assumption, such as a Gaussian or Laplace distribution \cite{ref_gaussian,ref_distribution,ref_distribution1}. They are less effective in low signal-to-noise ratio (SNR) and non-stationary noise conditions, and the problem becomes more severe when the actual noise distribution deviates from the pre-assumed one. Recently, deep learning has proved to be effective on complex problems that were previously unattainable with signal processing techniques. It is a data-driven supervised learning approach that learns a mapping function by observing a large number of representative pairs of noisy and noise-free speech samples. Since no statistical assumption is made in advance, it is appealing to bring deep neural network (DNN) methods into speech enhancement. Mostly, a time-frequency (T-F) mask is used as the network learning objective \cite{ref_mask,ref_mask1}. The estimated clean speech is then obtained by multiplying this mask with the noisy spectrogram and transforming back to the audio waveform by the inverse short-time Fourier transform (iSTFT). Various methods have adopted this kind of mask-estimation structure, such as \cite{ref_cnn,ref_cnn1,ref_lstm,ref_lstm1,ref_lstm2,ref_lstm3}. More recently, approaches that directly feed raw waveforms into a neural network and directly output audio waveforms have arisen, such as SEGAN \cite{ref_end2end} and WaveNet \cite{ref_end2end1}. In this paper, we further explore attention-based neural network structures for speech enhancement, building on the previous work proposed in \cite{ref_lstm3}. Additionally, we propose to introduce a noise type classification subnetwork into the model, which works in parallel with the denoising subnetwork. This idea is inspired by \cite{ref_class}, where a voice activity detection (VAD) subnetwork is embedded to guide the denoising subnetwork. The two branches share the same audio encoder and spectrogram encoder, which extract a high-level representation of the input audio. Then, the noise type is estimated using an LSTM encoder with an attention mechanism. The generated noise context is fed to the denoising subnetwork, which is constructed by another LSTM encoder with an attention mechanism. The noise context is concatenated with the LSTM embedding of the denoising subnetwork for the attention mechanism in denoising.
This way of using attention is inspired by \cite{ref_attention}, where an aspect embedding is concatenated with the LSTM hidden embedding and used in attention for sentiment classification. In this work, causal local attention, where only the current frame and the previous frames within a window are considered for attention, is used to accommodate real-time processing scenarios. Since the noise type information is embedded into the attention mechanism for denoising, a more precise estimation can be obtained. We conducted the same comparison experiments as indicated in \cite{ref_attention}. Experiments show that the proposed structure consistently achieves better PESQ performance and generalization ability. \begin{figure*}[htb] \begin{center} \includegraphics[width=150mm]{structure.eps} \end{center} \caption{Network structure of the proposed noise classification aided attention-based model for speech enhancement.} \vspace*{-3pt} \label{fig:structure} \end{figure*} \section{Network Structure} The network architecture is shown in Fig. \ref{fig:structure}(a). It consists of an audio encoder and decoder, a spectrogram encoder, a noise encoder with attention, a speech encoder with attention, and a mask generator. The detailed model structure is shown in Fig. \ref{fig:structure}(b). The audio encoder transforms short segments of the input waveform into their corresponding spectrogram representations. The spectrogram is encoded by the spectrogram encoder to obtain a high-level feature representation. Then, this feature representation is fed to two parallel branches, one for noise type classification and the other for speech enhancement. The speech waveform is finally reconstructed by transforming the masked representation using a decoder. \subsection{Audio encoder} \label{sec:encoder} The input audio is divided into overlapping segments of length $L$ samples. Each segment is represented by $\mathbf{x}_{k} \in \mathbb{R}^{1 \times L}$, where $k=1, \ldots, {T}$ denotes the segment index and ${T}$ denotes the total number of segments. $\mathbf{x}_{k}$ is transformed into an $N$-dimensional representation by a 1-D convolution operation (denoted $1$-$D$ $Conv$), which can be formulated as a matrix multiplication: \begin{equation} \label{eq_audio_encoder} {\mathbf{w}}=\mathcal{H}(\mathbf{x} \mathbf{U}^{\top}) \end{equation} where $\mathbf{U} \in \mathbb{R}^{N \times L}$ contains $N$ vectors (encoder basis functions), each of length $L$, and $\mathcal{H}(\cdot)$ is the rectified linear unit (ReLU) function, ensuring non-negativity of the representation. \subsection{Spectrogram encoder} \label{sec:spec_encoder} The spectrogram encoder extracts a high-level feature representation $\mathbf{h}$ from the input spectrogram $\mathbf{w}$: \begin{equation} \label{eq_spec_encoder} \mathbf{h}=\operatorname{Encoder^{spec}}(\mathbf{w}) \end{equation} where $\mathbf{h}$ is the spectrogram embedding, which is fed to the subsequent encoders for their respective tasks. In our work, we adopt an LSTM as the encoder, whose strong sequential modeling ability leads to superior performance in speech enhancement. \subsection{Noise classification} \label{sec:classification} The noise classification subnetwork is composed of a noise feature extraction module, constructed by an LSTM layer with an attention mechanism, and a classification module, constructed by a linear layer with a softmax activation function.
$\textbf{Noise\ Encoder}$ The spectrogram embedding $\mathbf{h}$ is transformed to the noise embedding $\mathbf{h}^{n}$ by an LSTM layer: \begin{equation} \label{eq_noise_encoder} \mathbf{h}^{n}=\operatorname{Encoder^{noise}}(\mathbf{h}) \end{equation} $\textbf{Noise\ Attention}$ The spectrogram embedding $\mathbf{h}$ and the noise embedding $\mathbf{h}^{n}$ are used for noise attention, where the noise embedding $\mathbf{h}^{n}$ acts as the query and the spectrogram embedding $\mathbf{h}$ acts as the key and value. It is expressed as \begin{equation} \label{eq_noise_attention0} \mathbf{c}^{n}=\text{Attention}\left(\mathbf{h}, \mathbf{h}^{n}\right) \end{equation} where $\mathbf{c}^{n}$ is the generated context vector of the noise attention. As shown in Fig. \ref{fig:structure}(b), a causal local attention is used to avoid any latency for speech enhancement in practice. Therefore, to denoise the frame $\mathbf{x}_{t}$, we calculate attention weights within a window of length $w$ using $\left[\mathbf{x}_{t-w}, \cdots, \mathbf{x}_{t}\right]$. This means that the corresponding spectrogram embeddings $\left[\mathbf{h}_{t-w}, \cdots, \mathbf{h}_{t}\right]$ are used as the key and value, while the noise embedding of the $t$-th frame, $\mathbf{h}_{t}^{n}$, is used as the query. Therefore, eq. (\ref{eq_noise_attention0}) is rewritten as \begin{equation} \label{eq_noise_attention} \mathbf{c}_{t}^{n}=\text{Attention}\left(\left[\mathbf{h}_{t-w}, \cdots, \mathbf{h}_{t}\right], \mathbf{h}_{t}^{n}\right) \end{equation} where the upper subscripts $K$ and $Q$, denoting key and query, are omitted for simplicity. Thus, a normalized attention weight $\alpha_{t,k}^{n}$ is learned: \begin{equation} \label{eq_noise_attention1} \alpha_{t,k}^{n}=\frac{\exp \left(\operatorname{score}\left(\mathbf{h}_{k}, \mathbf{h}_{t}^{n}\right)\right)}{\sum_{k'=t-w}^{t} \exp \left(\operatorname{score}\left(\mathbf{h}_{k'}, \mathbf{h}_{t}^{n}\right)\right)} \end{equation} We follow the correlation calculation in [27], so $\operatorname{score}\left(\mathbf{h}_{k}, \mathbf{h}_{t}^{n}\right)=\mathbf{h}_{k}^{\top} \mathbf{W h}_{t}^{n}$. Finally, we compute the context vector as the weighted average of the $\mathbf{h}_{k}$: \begin{equation} \label{eq_noise_attention2} \mathbf{c}_{t}^{n}=\sum_{k=t-w}^{t} \alpha_{t,k}^{n} \mathbf{h}_{k} \end{equation} $\textbf{Classification}$ The context vector of the noise attention, $\mathbf{c}_{t}^{n}$, is concatenated with the noise embedding, $\mathbf{h}_{t}^{n}$, i.e., $\mathbf{d}_{t}^{n}=\left[\mathbf{c}_{t}^{n} ; \mathbf{h}_{t}^{n}\right]$, and fed to a linear layer for noise type classification: \begin{equation} \label{eq_noise_linear} \text{class}=\text{softmax} \left(\mathbf{W}_{e}^{n}\mathbf{d}_{t}^{n}+\mathbf{b}_{e}^{n}\right) \end{equation} where $[\cdot ; \cdot]$ denotes the concatenation of two vectors. Finally, the noise type is taken to be the category with the highest predicted probability. \subsection{Speech denoising} \label{sec:denoising} The denoising subnetwork is composed of a speech feature extraction module, constructed by an LSTM layer with an attention mechanism, and a mask generator, constructed by a linear layer with a sigmoid activation function. $\textbf{Speech\ Encoder}$ The spectrogram embedding $\mathbf{h}$ is transformed to the speech embedding $\mathbf{h}^{s}$ by an LSTM layer: \begin{equation} \label{eq_speech_encoder} \mathbf{h}^{s}=\operatorname{Encoder^{speech}}(\mathbf{h}) \end{equation} $\textbf{Speech\ Attention}$ As shown in Fig.
\ref{fig:structure}(b), the speech embedding and the spectrogram embedding are respectively concatenated with the noise context, $\mathbf{d}_{t}^{n}$, i.e., $\mathbf{f}_{t}^{s}=\left[\mathbf{d}_{t}^{n} ; \mathbf{h}_{t}^{s}\right]$ and $\mathbf{f}_{k}=\left[\mathbf{d}_{t}^{n} ; \mathbf{h}_{k}\right]$, where $k =t-w,\cdots,t$. The concatenated speech embedding $\mathbf{f}^{s}_{t}$ acts as the query, the concatenated spectrogram embedding $\mathbf{f}_{k}$ acts as the key and value, and the speech attention is performed by \begin{equation} \label{eq_speech_attention} \mathbf{c}_{t}^{s}=\text{Attention}\left(\left[\mathbf{f}_{t-w}, \cdots, \mathbf{f}_{t}\right], \mathbf{f}_{t}^{s}\right) \end{equation} where $\mathbf{c}^{s}_{t}$ is the context vector of the speech attention. Again, the upper subscripts $K$ and $Q$ are omitted for simplicity. Thus, a normalized attention weight $\alpha_{t,k}^{s}$ is learned: \begin{equation} \label{eq_speech_attention1} \alpha_{t,k}^{s}=\frac{\exp \left(\operatorname{score}\left(\mathbf{f}_{k}, \mathbf{f}_{t}^{s}\right)\right)}{\sum_{k'=t-w}^{t} \exp \left(\operatorname{score}\left(\mathbf{f}_{k'}, \mathbf{f}_{t}^{s}\right)\right)} \end{equation} Finally, we compute the context vector of the speech attention by weighting the spectrogram embeddings $\mathbf{h}_{k}$ with the corresponding attention weights $\alpha_{t,k}^{s}$: \begin{equation} \label{eq_speech_attention2} \mathbf{c}_{t}^{s}=\sum_{k=t-w}^{t} \alpha_{t,k}^{s} \mathbf{h}_{k} \end{equation} In this way, the noise information is embedded into the denoising subnetwork, guiding it towards higher performance. $\textbf{Masking}$ The context vector of the speech attention, $\mathbf{c}_{t}^{s}$, is concatenated with the speech embedding $\mathbf{h}_{t}^{s}$ and the noise context $\mathbf{d}_{t}^{n}$, and fed to a linear layer to obtain an enhancement vector $\mathbf{e}_{t}^{s}$: \begin{equation} \label{eq_speech_linear1} \mathbf{e}_{t}^{s}=\tanh \left(\mathbf{W}_{e}^{s}\left[\mathbf{c}_{t}^{s} ; \mathbf{h}_{t}^{s}; \mathbf{d}_{t}^{n}\right]+\mathbf{b}_{e}^{s}\right) \end{equation} where $[\cdot ; \cdot ; \cdot]$ denotes the concatenation of three vectors. Finally, we form a mask and apply it to the input spectrogram $\mathbf{w}_{t}$ to obtain the enhanced speech spectrogram $\mathbf{y}_{t}$: \begin{equation} \label{eq_speech_linear2} \mathbf{y}_{t}=\mathbf{w}_{t} \odot \operatorname{sigmoid}\left(\mathbf{W}_{m}^{s} \mathbf{e}_{t}^{s}+\mathbf{b}_{m}^{s}\right) \end{equation} \subsection{Audio decoder} \label{sec:decoder} The decoder reconstructs the waveform from the masked spectrogram using a 1-D transposed convolution operation, which can be reformulated as a matrix multiplication: \begin{equation} \hat{\mathbf{x}}_{t}=\mathbf{y}_{t} \mathbf{V} \end{equation} where $\hat{\mathbf{x}}_{t} \in \mathbb{R}^{1 \times L}$ is the reconstruction of $\mathbf{x}_{t}$ and the rows of $\mathbf{V} \in \mathbb{R}^{N \times L}$ are the decoder basis functions, each of length $L$ samples. The overlapping reconstructed segments are summed together to obtain the final waveform.
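To make the attention equations above concrete, the following is a minimal sketch (ours, with assumed tensor shapes and a randomly initialized bilinear form) of the causal local attention with the score $\operatorname{score}(\mathbf{h}_k,\mathbf{q}_t)=\mathbf{h}_k^{\top}\mathbf{W}\mathbf{q}_t$:
\begin{verbatim}
import torch
import torch.nn as nn

class CausalLocalAttention(nn.Module):
    """Attend over the last (w+1) frames with a bilinear score."""
    def __init__(self, key_dim, query_dim, window):
        super().__init__()
        self.W = nn.Parameter(torch.randn(key_dim, query_dim) * 0.01)
        self.window = window

    def forward(self, keys, query, t):
        # keys: [T, key_dim] spectrogram embeddings h_k (also values)
        # query: [query_dim] embedding of frame t (noise/speech branch)
        lo = max(0, t - self.window)
        k = keys[lo:t + 1]                    # [w+1, key_dim]
        scores = k @ self.W @ query           # bilinear scores, [w+1]
        alpha = torch.softmax(scores, dim=0)  # normalized weights
        return alpha @ k                      # context vector c_t
\end{verbatim}
Both branches can reuse such a module; only the query construction differs, the speech branch concatenating the noise context into both keys and query.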
\subsection{Training objective} \label{sec:loss_cun} The training objective is to minimize a loss function combining the mean square error (MSE) of the estimated waveform and the cross-entropy between the estimated and reference classification labels, formulated as \begin{equation} \label{eq:mask_loss} \left\{\begin{array}{l} \operatorname{Loss}_{total}=(1-\alpha) \cdot {\operatorname{Loss}_{\rm{MSE}}} +\alpha \cdot {\operatorname{Loss}_{\rm{CE}}} \\ {\operatorname{Loss}_{\rm{MSE}}} = \sum_{n}^{T}\left(\hat{s}(n)-s(n)\right)^{2} \\ \operatorname{Loss}_{\mathrm{CE}}=-\sum_{n}^{T} \sum_{i=1}^{C} p\left(c_{i}(n)\right) \log \left(p\left(\hat{c}_{i}(n)\right)\right) \end{array}\right. \end{equation} where ${Loss_{\rm{MSE}}}$ is the MSE between the estimated waveform $\hat{s}(n)$ and the reference ${s}(n)$, ${Loss_{\rm{CE}}}$ is the cross-entropy between the estimated classification $\hat{c_i}(n)$ and the reference classification ${c_i}(n)$, $i \in \{1,2,\ldots,C\}$ indexes the categories, $n$ is the time index, and $\alpha \in[0,1]$ is a weighting factor. \section{Experiments} \subsection{Datasets} We follow the same procedure as indicated in \cite{ref_lstm3} to create synthetic datasets. $Librispeech$ train-clean-360 is used as the clean speech dataset; it contains 921 speakers and 104014 audio samples in total. The $Nonspeech\ Sounds$ dataset \cite{ref_nonspeech} is used as the training noise dataset; we use it because it is already classified into 20 categories\footnote{http://web.cse.ohio-state.edu/pnl/corpus/HuNonspeech/HuCorpus.html}. We randomly choose 500 of the 921 speakers in train-clean-360 as the training speakers, and the remaining 421 speakers are used for unseen-speaker testing. Specifically, we randomly select a clean speech file from the 500 speakers and a noise file from the nonspeech corpus, generating 21407 utterances in total. For each pair, we randomly select an SNR between 0dB and 20dB and mix the two files to create a noisy file at the selected SNR. We divide the generated 21407 noisy files into 13407, 4000 and 4000 for training, validation and testing, respectively. Neural network training is conducted on the training set, and the loss on the validation set is monitored as the convergence criterion. The original test set is named Test-0. We further create 4 new test sets (Test-1,2,3,4), each with 4000 utterances randomly chosen from the remaining 421 speakers. In Test-1,2, noise files are drawn from the same noise pool ($Nonspeech\ Sounds$) as Test-0. Test-3,4 are generated by adding noises different from those in the training set, taken from the $Musan$ corpus \cite{ref_musan}. Since the CHIME3 dataset used in \cite{ref_lstm3} is not openly accessible, we use the Musan corpus instead. The SNR conditions of these test sets are given in Table \ref{tab:dataset}. \begin{table}[htb] \setlength\tabcolsep{3pt} \setlength{\abovecaptionskip}{0.2cm} \setlength{\belowcaptionskip}{0.2cm} \centering \caption{Conditions of noise and SNR for datasets.
Utterances in Train, Valid and Test-0 are from the same multi-speaker set, while utterances in Test-1,2,3 and 4 are from another set of speakers.} \label{tab:dataset} \begin{tabular}{c|c|c|c|c|c|c|c} \hline \text{Set} & \text{Train} & \text{Valid} & \text{Test-0} & \text{Test-1} & \text{Test-2} & \text{Test-3} & \text{Test-4} \\ \hline \hline \multirow{2}{*}{Noise} & Non- & Non- & Non- & Non- & Non- & \multirow{2}{*}{Musan} & \multirow{2}{*}{Musan} \\ & speech & speech & speech & speech & speech & & \\ \hline Speakers & 500 & 500 & 500 & 421 & 421 & 421 & 421 \\ \hline SNR & 0-20dB & 0-20dB & 0-20dB & 0-20dB & -5$\sim$0dB & 0-20dB & -5$\sim$0dB \\ \hline \end{tabular} \end{table} \subsection{Configurations} In our experiments, waveforms at a 16 kHz sample rate were directly used as inputs. The models are initialized with the normalized initialization. The loss function used for training the network is Eq. (\ref{eq:mask_loss}). The \emph{Adam} algorithm was used for training with an exponential learning rate decay strategy, where the learning rate starts at $1$$\times$$10^{-4}$ and ends at $1$$\times$$10^{-8}$. The total number of epochs was set to 200. The criterion for early stopping is no decrease of the validation loss for 10 epochs. We compare our approach (namely CA-Att-LSTM) with the conventional OM-LSA method, an LSTM approach without attention mechanism (namely Pure-LSTM), and the attention-based LSTM model in \cite{ref_lstm3} (namely Att-LSTM). The Pure-LSTM has two layers; its first layer size is $H$, as listed in Table \ref{tab:configuration}, and its second layer has 128/256/512 cells. As for our CA-Att-LSTM model, two structures are compared according to whether the noise context $\mathbf{d}_{t}^{n}$ is concatenated with the speech embedding for attention, i.e., CA-Att-LSTM without concatenation (namely CA-Att-LSTM1) and with concatenation (namely CA-Att-LSTM2). The parameter configuration of the proposed network is listed in Table \ref{tab:configuration}, where $C$ denotes the number of noise type categories. \begin{table}[htb] \setlength\tabcolsep{2pt} \setlength{\abovecaptionskip}{0.2cm} \setlength{\belowcaptionskip}{0.2cm} \centering \caption{Network configuration} \label{tab:configuration} \begin{tabular}{c|c|c} \hline \text{Symbol} & \text{Description} & \text{Value} \\ \hline \hline $N$ & Number of filters in encoder and decoder, Eq. (\ref{eq_audio_encoder}) & 512 \\ \hline $L$ & Length of the filters (in samples), Eq. (\ref{eq_audio_encoder}) & 160 \\ \hline $H$ & Hidden size of the spectrogram encoder, Eq. (\ref{eq_spec_encoder}) & 256 \\ \hline $H^n$ & Hidden size of the noise encoder, Eq. (\ref{eq_noise_encoder}) & 60,112,224 \\ \hline $H^s$ & Hidden size of the speech encoder, Eq. (\ref{eq_speech_encoder}) & 60,112,224 \\ \hline $E^n$ & Output size of the linear layer, Eq. (\ref{eq_noise_linear}) & $C$$=$$20$ \\ \hline $E^s$ & Output size of the linear layer, Eq. (\ref{eq_speech_linear1}) & 256 \\ \hline $F$ & Output size of the linear layer, Eq. (\ref{eq_speech_linear2}) & 256 \\ \hline $w$ & Window size of causal local attention, Eq. (\ref{eq_noise_attention}, \ref{eq_speech_attention}) & 5,10,15 \\ \hline \end{tabular} \end{table} \subsection{Results} We first analyze the performance of the baseline methods and the proposed methods on Test-0. The PESQ results are summarized in Table \ref{tab:test0}, where the averaged PESQ of the input noisy audio is $\textbf{1.67}$ and the PESQ after OM-LSA is $\textbf{1.75}$.
We can clearly see that all the attention-based methods consistently outperform the two baselines without attention (i.e., OM-LSA and Pure-LSTM) across different parameter sizes, which indicates that introducing an attention mechanism into neural-network-based speech enhancement is beneficial. This finding is consistent with \cite{ref_lstm3}. Moreover, by introducing a noise classification subnetwork into the denoising network, our models (i.e., CA-Att-LSTM1 and CA-Att-LSTM2) achieve better performance in all configurations. This means that the noise information introduced into the network guides the model towards better denoising. Additionally, by feeding the classification embedding to the speech encoder for attention, a higher PESQ gain is obtained. This reveals that the noise classification can guide the speech attention towards a better estimation. In the experiments, to assess the influence of the window size of the causal local attention on denoising performance, the window size was set to 5, 10 and 15 for comparison. As shown in the table, the best performance is achieved when $w=5$, and a larger $w$ yields no further improvement. This is also consistent with \cite{ref_lstm3}. Here, the same window size is used for the noise encoder and the speech encoder; in principle, the two could be configured with different window sizes, which we leave for future research. \begin{table}[htb] \centering \caption{PESQ of different models on Test-0.} \label{tab:test0} \begin{tabular}{c|c|c|c|c|c} \hline \multirow{2}{*}{$w$} & \multirow{2}{*}{\text{lstm-size}} & \text{Pure-} & \text{Att-} & \text{CA-Att-} & \text{CA-Att-} \\ & & \text{LSTM} & \text{LSTM} & \text{LSTM1} & \text{LSTM2} \\ \hline \hline \multirow{3}{*}{5} & 128/112/60/60 & 2.34 & 2.41 & 2.43 & 2.56 \\ \cline{2-6} & 256/224/112/112 & 2.43 & 2.50 & 2.52 & 2.63 \\ \cline{2-6} & 512/448/224/224 & 2.50 & 2.59 & 2.64 & 2.72 \\ \cline{1-6} \multirow{3}{*}{10} & 128/112/60/60 & 2.34 & 2.39 & 2.40 & 2.55 \\ \cline{2-6} & 256/224/112/112 & 2.43 & 2.51 & 2.47 & 2.66 \\ \cline{2-6} & 512/448/224/224 & 2.50 & 2.57 & 2.63 & 2.71 \\ \cline{1-6} \multirow{3}{*}{15} & 128/112/60/60 & 2.34 & 2.42 & 2.46 & 2.52 \\ \cline{2-6} & 256/224/112/112 & 2.43 & 2.49 & 2.58 & 2.64 \\ \cline{2-6} & 512/448/224/224 & 2.50 & 2.57 & 2.61 & 2.71 \\ \hline \end{tabular} \end{table} To further showcase the speech enhancement effects of the different methods, a speech utterance (spectrogram) randomly selected from Test-0 is shown in Fig. \ref{fig:waves}. The clean speech is contaminated with `crowd' noise. As shown in the figure, and as indicated in \cite{ref_lstm3}, the traditional OM-LSA method cannot handle this kind of non-stationary noise properly, and the LSTM approach can significantly reduce the noise but still leaves some residual. In contrast, all the attention-based methods remove the noise properly. Moreover, since the noise type is estimated and used to guide the denoising, our model gives a very good reconstruction. Some testing examples can be found in our repository$\footnote{\url{https://github.com/ROAD2018/noise_aware_attention_based_denoiser}}$. \begin{figure}[H] \centering \includegraphics[width=75mm,height=225mm]{waves.eps} \caption{Spectrogram comparisons for different methods.} \label{fig:waves} \end{figure} To validate the generalization capability of the different approaches, experimental results on Test-0,1,2,3,4 are summarized in Table \ref{tab:test1_4}.
The window size of the causal local attention used in the models is $w = 5$. The LSTM cell size is 512, 448 and 224 for the Pure-LSTM model, the attention-based LSTM model and our models, respectively. The PESQ values of the raw noisy audio are $\textbf{1.67}$, $\textbf{1.67}$, $\textbf{1.26}$, $\textbf{1.84}$, $\textbf{1.21}$ for Test-0,1,2,3,4, respectively. We notice that, owing to the dataset mismatch between training and testing, all the models suffer performance degradation on Test-1,2,3,4 compared with Test-0. It is worth noting that our model (i.e., CA-Att-LSTM2) attains the same PESQ value on Test-0 and Test-1, which may indicate that CA-Att-LSTM2 is more robust to unseen speakers. Moreover, the models trained on data with 0$\sim$20dB SNR degrade significantly on the test set with -5$\sim$0dB SNR (Test-2). With both mismatched noises and SNR conditions, large performance degradation is observed on Test-4. Among all the methods, however, the attention-based models show better generalization ability, and our noise-classification-aided models achieve the best performance. \begin{table}[htb] \centering \caption{PESQ of different models on Test-0,1,2,3,4.} \label{tab:test1_4} \begin{tabular}{c|c|c|c|c|c} \hline \multirow{2}{*}{\text{Set}} & \text{OM-} & \text{Pure-} & \text{Att-} & \text{CA-Att-} & \text{CA-Att-} \\ & \text{LSA} & \text{LSTM} & \text{LSTM} & \text{LSTM1} & \text{LSTM2} \\ \hline \hline \text{Test-0} & 1.75 & 2.50 & 2.59 & 2.68 & 2.72 \\ \hline \text{Test-1} & 1.76 & 2.46 & 2.57 & 2.66 & 2.72 \\ \hline \text{Test-2} & 1.29 & 1.82 & 1.84 & 1.86 & 1.89 \\ \hline \text{Test-3} & 1.90 & 2.05 & 2.11 & 2.12 & 2.15 \\ \hline \text{Test-4} & 1.23 & 1.31 & 1.34 & 1.38 & 1.41 \\ \hline \end{tabular} \end{table} \section{Discussions} Through the above experiments, we can conclude that noise classification can indeed assist denoising. Therefore, a possible way to improve denoising further is to collect as many noise types as possible for training. In this work, we directly used categories classified in advance, whereas in a more realistic scenario the noise should be classified automatically. This can be realized by clustering the embeddings output by a pre-trained audio encoder, such as Speech-VGG \cite{ref_speechvgg}. \section{Conclusions} A noise classification subnetwork is introduced into the attention-based neural network for speech enhancement, building on previous work. The embedding from the classification branch is used together with the denoising embedding for causal local attention, thereby guiding the network towards better denoising. The performance of the proposed network is validated and compared with the previous work, the OM-LSA method and a pure LSTM approach, obtaining higher PESQ gains. Moreover, the generalization ability to unseen noise conditions is also validated. In the future, we will explore encoder structures other than the LSTM for speech enhancement, bring the attention mechanism into multi-channel speech enhancement, and further integrate it with speech recognition.
\section{Introduction} Capacity expansion planning (CEP) problems are powerful tools for the design, analysis and implementation of energy system decarbonisation policies. In such frameworks, the accurate spatiotemporal representation of variable renewable energy generation (RES, e.g., wind, solar PV) is paramount for the precise estimation of capacity requirements \cite{Pfenninger2014}. However, the detailed modelling of RES comes at a high computational cost, and ways to mitigate this issue in order to strike the right balance between accuracy and computational effort when solving such problems are necessary, yet seldom proposed. For example, a highly detailed representation of RES within a CEP set-up cast as a linear program (LP) is proposed by MacDonald et al. \cite{MacDonald2016}, yet the reported runtimes (thousands of core-hours for large-scale instances) limit its use in practice and its reproducibility. Wu et al. \cite{Callaway2017} also propose an LP-cast CEP framework in which high-resolution RES modelling is made possible via a GIS-based resource assessment tool. Nonetheless, the coefficient matrix stores hourly capacity factor values at each location and is therefore dense, which limits the scalability of the proposed method to a few hundred candidate RES sites and thus renders it unsuitable for large-scale applications. Although plenty of work has been carried out in recent years to develop temporal reduction techniques for RES in CEP settings \cite{Hoffmann2020}, studies tackling the issue of spatial model reduction are scarce. Cohen et al. \cite{Cohen2018} suggest the aggregation of RES into resource regions, with wind and solar PV resources over the contiguous United States being modelled via 356 and 134 profiles, respectively. In a similar vein, Hörsch and Brown \cite{Horsch2017} leverage a CEP framework formulated as an LP to assess the impact of spatial resolution on the outcomes of co-optimizing generation and transmission assets across Europe. A network reduction process based on k-means clustering is incorporated in their method, and the resulting topology serves as the basis for modelling renewable resources. More precisely, Europe-wide RES are represented via 37 to 362 different aggregate profiles, depending on the desired number of network clusters. While spatial aggregation approaches such as the ones proposed in \cite{Cohen2018, Horsch2017} partly mitigate the aforementioned computational issues \cite{MacDonald2016, Callaway2017}, the limited number of RES profiles considered hinders their ability to exploit the benefits of resource diversity, which, in turn, can lead to system cost overestimation \cite{Frew2016}. This paper proposes a method to reduce the spatial dimension and decrease the computational requirements of CEP problems while preserving a detailed representation of RES assets. This is achieved by leveraging a two-stage heuristic that can be described as follows. The first stage, which is cast as an LP, is used to screen a set of candidate sites and identify sites that have little impact on the optimal system design, which are then discarded. In the second stage, information (geo-positioning and capacity factor time series) about the remaining sites is used as input data in a CEP framework that determines the installed capacities of generation, storage and transmission assets leading to a minimum-cost system configuration.
Thus, the proposed method makes it possible to reduce the size of the CEP problem, and therefore enables memory and computation time savings. The paper is structured as follows. Section \ref{methods} details the methods at the core of the proposed two-stage approach. Then, Section \ref{casestudy} briefly describes the case study used to showcase the applicability of the suggested approach before results are reported in Section \ref{results}. Section \ref{conclusion} concludes the paper and discusses future work avenues. \section{Method}\label{methods} The proposed solution method (or \texttt{SM}) is introduced in this section. Firstly, the standard CEP framework (from hereon, the \texttt{FLP}) is formulated. In the remainder of this paper, the \texttt{FLP} denotes the CEP set-up that simultaneously tackles the siting and sizing of RES assets, as well as the sizing of other power system (e.g., generation, storage or transmission) technologies. Then, the screening method for candidate RES sites (\texttt{SITE}) that enables the formulation of a reduced-size CEP framework (from hereon, the \texttt{RLP}) is described. The \texttt{SITE}-\texttt{RLP} sequence will hereafter be referred to as the \texttt{SM}. \subsection{Capacity expansion planning framework} Let $\mathcal{N}_B$ and $\mathcal{L}$ be the sets of existing buses and transmission corridors, respectively. Let $\mathcal{N}_R$ be a set of candidate RES sites that may be connected to buses $n \in \mathcal{N}_B$, which is partitioned into disjoint subsets $\mathcal{N}_R^n$. The CEP formulation reads \vspace{-10pt} \begin{align} \underset{\tiny \begin{array}{cc} \mathbf{K}, (\mathbf{p}_t)_{t \in \mathcal{T}} \end{array}} \min \hspace{-2pt} &\omega \Big[\sum_{\substack{n \in \mathcal{N}_B \\ m \in \mathcal{N}_R^n}} \big(\zeta^{m} + \theta^m_f\big) K_{nm} + \sum_{\substack{n \in \mathcal{N}_B \\ j \in \mathcal{G}\cup\mathcal{S}}} \big(\zeta^{j} + \theta^j_f\big) K_{nj} \nonumber\\&+ \sum_{l \in \mathcal{L}} \big(\zeta^l + \theta^l_f\big) K_l\Big] + \sum_{t \in \mathcal{T}}\Big[ \sum_{l \in \mathcal{L}} \theta^l_v |p_{lt}| \tag{1a}\label{eq:objective}\\&+ \sum_{\substack{n \in \mathcal{N}_B \\ m \in \mathcal{N}_R^n}} \theta^m_v p_{nmt} + \sum_{\substack{n \in \mathcal{N}_B \\ j \in \mathcal{G}\cup\mathcal{S}}} \theta^j_v |p_{njt}| + \sum_{n \in \mathcal{N}_B} \theta^{e} p^{e}_{nt}\Big] \nonumber \end{align} \vspace{-10pt} \allowdisplaybreaks \begin{subequations} \begin{align} \text{s.t.} &\sum_{\substack{m \in \mathcal{N}_R^n}} p_{nmt} + \sum_{g \in \mathcal{G}} p_{ngt} + \sum_{s \in \mathcal{S}} p^D_{nst} + \sum_{l \in \mathcal{L}_n^{+}} p_{lt} + p^{e}_{nt} \nonumber\\ & \hspace{1mm} = \lambda_{nt} + \sum_{s \in \mathcal{S}} p^C_{nst} + \sum_{l \in \mathcal{L}_n^{-}} p_{lt} , \mbox{ } \forall n \in \mathcal{N}_B, \forall t \in \mathcal{T} \tag{1b}\label{eq:subeq_balance}\\[1pt] &p_{nmt} \le \pi_{nmt} (\kappa^0_{nm} + K_{nm}), \nonumber\\ &\hspace{20mm} \forall n \in \mathcal{N}_B, \forall m \in \mathcal{N}_R^n, \forall t \in \mathcal{T} \tag{1c}\label{eq:subeq_res_flow}\\[1pt] &\kappa^0_{nm} + K_{nm} \le \Bar{\kappa}_{nm}, \mbox{ } \forall n \in \mathcal{N}_B, \forall m \in \mathcal{N}_R^n \tag{1d}\label{eq:subeq_res_cap} \\[1pt] &p_{ngt} \le \kappa^0_{ng} + K_{ng}, \mbox{ } \forall n \in \mathcal{N}_B, \forall g \in \mathcal{G}, \forall t \in \mathcal{T} \tag{1e}\label{eq:subeq_disp_flow} \\[1pt] &\kappa^0_{ng} + K_{ng} \le \Bar{\kappa}_{ng}, \mbox{ } \forall n \in \mathcal{N}_B, \forall g \in \mathcal{G} \tag{1f}\label{eq:subeq_disp_cap} 
\\[1pt] &p_{nst} = -p^C_{nst}+p^D_{nst}, \forall n \in \mathcal{N}_B, \forall s \in \mathcal{S}, \forall t \in \mathcal{T} \tag{1g}\label{eq:subeq_sto_def}\\[1pt] &|p_{nst}| \le \phi_s (\kappa^0_{ns} + K_{ns}), \forall n \in \mathcal{N}_B, \forall s \in \mathcal{S}, \forall t \in \mathcal{T} \tag{1h}\label{eq:subeq_sto_ch}\\[1pt] &e_{nst} \le \kappa^0_{ns} + K_{ns}, \tag{1i}\label{eq:subeq_sto_soc_lim} \forall n \in \mathcal{N}_B, \forall s \in \mathcal{S}, \forall t \in \mathcal{T}\\[3pt] &e_{nst} = \eta^{SD}_s e_{ns(t-1)} + \eta^{C}_s p^C_{nst} - p^D_{nst}/\eta^D_{s}, \nonumber\\ & \hspace{35mm} \forall n \in \mathcal{N}_B, \forall s \in \mathcal{S}, \forall t \in \mathcal{T} \tag{1j}\label{eq:subeq_sto_soc}\\[1pt] &\kappa^0_{ns} + K_{ns} \le \Bar{\kappa}_{ns}, \mbox{ } \forall n \in \mathcal{N}_B, \forall s \in \mathcal{S} \tag{1k}\label{eq:subeq_sto_cap} \\[1pt] &|p_{lt}| \le \kappa^{0}_l + K_l, \mbox{ } \forall l \in \mathcal{L}, \forall t \in \mathcal{T} \tag{1l}\label{eq:subeq_tr_flow} \\[1pt] &\kappa^{0}_l + K_l \le \Bar{\kappa}_l, \mbox{ } \forall l \in \mathcal{L} \tag{1m}\label{eq:subeq_tr_cap} \end{align} \end{subequations} The problem described in (1a-m) minimizes total system cost subject to a set of constraints on the underlying assets. The objective function (\ref{eq:objective}) comprises capital expenditure, fixed and variable operating costs of the generation, storage and transmission assets, as well as the economic penalties associated with unserved demand. Constraint (1b) enforces the energy balance at each bus, while the operation and sizing of RES assets is modelled via (1c-d). Note that a single RES technology $r \in \mathcal{R}$ is associated with each site $m \in \mathcal{N}_R$. Then, conventional generators are modelled via (1e-f) and the operation and sizing of storage units follows (1g-k). Finally, constraints (1l-m) encode the transportation model governing the power flows in transmission links. It is worth noting that, although the absolute values in Eqs. (1a), (1h) or (1l) render the CEP problem described in (1a-m) non-linear, it can be cast as an LP using standard reformulation techniques. \subsection{Renewable sites selection method} The proposed \texttt{SM} works by decoupling the siting and sizing of RES assets. At first, the \texttt{SITE} stage is leveraged to screen the sets of candidate RES locations and identify those sites that play a role in the optimal system design, while discarding the rest. To this end, the siting problem is formulated by i) discarding some complicating variables and approximating a subset of complicating constraints (i.e., the ones associated with dispatchable power generation, storage systems and power flows in transmission lines) and ii) relaxing and taking linear combinations, as well as scaling the right-hand side coefficients of certain equality constraints (i.e., the power balance equations). The objective function (\ref{eq:siting_obj}) is obtained by preserving the terms related to the costs of deploying and operating RES technologies and the economic penalty associated with unserved demand. Then, the constraints discarded from (1a-m) are approximated via two parameters found in (2b). More formally, let $\mathcal{T}$ be the set of time periods, let $\mathcal{T}_{\tau} \subseteq \mathcal{T}, \mbox{ } |\mathcal{T}_{\tau}|=\delta\tau, \mbox{ } \tau = 1, \ldots, T,$ be a collection of disjoint subsets forming a partition of $\mathcal{T}$ into time slices of length $\delta\tau$.
More precisely, $\delta\tau$ represents the length of a time slice (e.g., one hour, one day) over which the energy balance in (2b) is enforced and its role is to emulate the behavior of storage assets shifting RES supply in time. Furthermore, let $\xi_{\tau}^n \in \mathbb{R}_+$ denote regional minimum RES feed-in targets enforced over every time slice $\mathcal{T}_\tau, \mbox{ } \tau = 1, \ldots, T$. This parameter enforces a minimum level of local power production from renewable sources which i) mirrors the effect of transmission constraints and ii) accounts for low-carbon legacy generation capacity that would offset the country-specific RES requirements. Constraints (1c-d) are preserved as such and the siting problem thus reads \vspace{-10pt} \begin{align} \underset{\tiny \begin{array}{cc} \mathbf{K}, (\mathbf{p}_t)_{t \in \mathcal{T}} \end{array}} \min \hspace{10pt} & \omega \Big[\sum_{\substack{n \in \mathcal{N}_B \\ m \in \mathcal{N}_R^n}} \big(\zeta^{m} + \theta^m_f\big) K_{nm}\Big] \mbox{ } + \nonumber\\ & \sum_{t \in \mathcal{T}} \Big[ \sum_{\substack{n \in \mathcal{N}_B \\ m \in \mathcal{N}_R^n}} \theta_v^r p_{nmt} + \sum_{n \in \mathcal{N}_B} \theta^{e} p^{e}_{nt} \Big] \label{eq:siting_obj} \tag{2a} \end{align} \vspace{-10pt} \begin{subequations} \begin{align} \text{s.t.} & \sum_{t \in \mathcal{T_{\tau}}} \Big[ \sum_{\substack{m \in \mathcal{N}_R^n}} p_{nmt} + p^{e}_{nt} \Big] \ge \xi_{\tau}^{n} \sum_{t \in \mathcal{T_{\tau}}} \lambda_{nt}, \nonumber\\ & \hspace{30mm} \forall n \in \mathcal{N}_B, \mbox{ } \forall \tau \in \{1, \ldots, T\} \tag{2b}\label{eq:en_balance_n}\\[1pt] &p_{nmt} \le \pi_{nmt} (\kappa^0_{nm} + K_{nm}), \nonumber\\ & \hspace{30mm} \forall n \in \mathcal{N}_B, \forall m \in \mathcal{N}_R^n, \forall t \in \mathcal{T}\tag{2c}\label{eq:infeed_def}\\[1pt] &\kappa^0_{nm} + K_{nm} \le \Bar{\kappa}_{nm}, \mbox{ } \forall n \in \mathcal{N}_B, \forall m \in \mathcal{N}_R^n \tag{2d}\label{eq:siting_cap} \end{align} \end{subequations} For every $n \in \mathcal{N}_B$, the problem returns the set of candidate RES sites identified as relevant (with an installed capacity above \SI{1}{MW}) in the optimal system design, i.e. $\mathcal{N}_{\texttt{SITE}}^n$. Then, the \texttt{RLP} is built by replacing $\mathcal{N}_R^n$ with $\mathcal{N}_{\texttt{SITE}}^n$ in constraints (1a-d) of the CEP problem. \section{Case Study}\label{casestudy} \subsubsection*{Input Data} The analysis is conducted for three individual weather years (i.e., 2016, '17 and '18) and over 33 countries within the \textit{ENTSO-E} system. The siting stage relies on hourly-sampled resource data obtained from the \textit{ERA5} reanalysis database \cite{ERA5} at a spatial resolution of \ang{1.0}. The mapping of resource data to capacity factors time series is achieved via the transfer functions of appropriate conversion equipment for each individual technology. More precisely, a site-specific selection of wind generators is carried out based on the \textit{IEC 61400} standard \cite{IEC61400} and four different converters are available for deployment (i.e., the \textit{Vestas V110, V90, V117} and \textit{V164}), each of them suitable for specific wind regimes. The selection of solar energy converters is done on a technology basis, with the \textit{TrinaSolar DEG15MC} module available for utility-scale PV deployment and the \textit{TrinaSolar DD06M} array available for distributed PV generation.
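As a concrete illustration of the resource-to-capacity-factor mapping described above, the following minimal sketch converts hourly wind speeds into the per-unit capacity factors $\pi_{nmt}$ through a generic piecewise power curve; the cut-in, rated and cut-out speeds are illustrative placeholders, not the parameters of the actual Vestas converters used in this study.
\begin{verbatim}
import numpy as np

def wind_capacity_factor(v, v_in=3.0, v_rated=12.0, v_out=25.0):
    """Piecewise power curve: cubic ramp between cut-in and rated
    speed, unity output up to cut-out, zero otherwise."""
    v = np.asarray(v, dtype=float)
    cf = np.zeros_like(v)
    ramp = (v >= v_in) & (v < v_rated)
    cf[ramp] = (v[ramp]**3 - v_in**3) / (v_rated**3 - v_in**3)
    cf[(v >= v_rated) & (v <= v_out)] = 1.0
    return cf

hourly_speeds = np.array([2.5, 6.0, 9.5, 13.0, 26.0])  # m/s, e.g. ERA5
print(wind_capacity_factor(hourly_speeds))
\end{verbatim}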
A greenfield approach is adopted, i.e., no legacy capacity of RES assets is considered, while the technical potential is estimated via a land eligibility assessment framework \cite{Ryberg2017} that yields eligible surface areas for RES deployment for a set of 1740 candidate sites. A set of assumptions pertaining to the power densities of different generation technologies is then made to map surface areas into maximum allowable installed capacities, i.e., technical potentials. Specifically, a density of \SI{5}{MW/km^2} is considered for wind deployments \cite{WindEurope2020}. With respect to solar PV units, power densities of \SI{40}{MW/km^2} and \SI{16}{MW/km^2} are considered for utility-scale and residential installations, respectively \cite{Trondle2019}. Electricity demand time series for all considered countries are retrieved from the \textit{OPSD} platform \cite{OPSD}. The CEP frameworks (i.e., both the \texttt{FLP} and \texttt{RLP}) follow a centralized planning approach and build upon the 2018 TYNDP dataset, where each European country is modelled as one node \cite{TYNDP2018}. The resulting network topology is displayed in Fig. \ref{fig:topology}. In this exercise, the expansion of the transmission network is limited to the reinforcement of existing links. Furthermore, the total capacity of each link may not exceed twice the 2040 capacity estimated for this link in the TYNDP. Besides the four RES technologies sited in the previous stage, three more generation technologies are available for power generation, namely run-of-river (ROR) and reservoir-based (STO) hydro, as well as combined-cycle gas turbines (CCGT), with the latter being the only one of the three that is also sized in (1a-m). The existing capacities of the other two are retrieved from \cite{JRC_HY}, where the existence of \SI{34}{GW} of ROR and \SI{98}{GW} of STO installations is reported. Then, two technologies are available for electricity storage, namely pumped-hydro (PHS) units and Li-Ion batteries. The latter is the only one being sized in (1a-m) and a fixed energy-to-power ratio of \SI{4}{h} is assumed. The legacy capacity of the former is retrieved from \cite{JRC_HY}, where \SI{55}{GW}/\SI{1950}{GWh} of PHS storage is reported. The CEP problem is implemented in PyPSA 0.17 \cite{PyPSA}, while the techno-economic assumptions are gathered in \cite{dox_repo_method}. \begin{figure} \centering \includegraphics[width=.9\linewidth]{media/topology_tyndp.png} \caption{System topology in the capacity expansion planning framework. AC connections displayed in full lines, DC links shown in dotted lines. The 33 nodes form the set $\mathcal{N}_B$ in (1a-m) and (2a-d).} \label{fig:topology} \end{figure} \subsubsection*{Parametrization of the \texttt{SITE} stage} The two parameters of (2a-d) are defined as follows. First, the slicing period $\delta\tau$ is considered to be equal to \SI{24}{h}, which corresponds to the nonzero frequency component of the aggregate EU-wide RES capacity factor time series with the largest amplitude (i.e., as provided by a discrete Fourier transform). Then, the country-dependent $\xi_{\tau}^n$ values are assumed not to be time-dependent and their estimation proceeds as follows. First, the residual demand (i.e., the difference between demand and generation potential of legacy dispatchable units) is computed at peak load conditions. Then, the RES generation potential during the same time instants is determined.
For each country, if RES potential exceeds the electricity demand for at least half the time steps in the optimization horizon, its potential transmission capabilities (i.e., 2040 TYNDP capacity limits times the length of slicing period $\delta\tau$) are added to the residual demand, as the country is a potential exporter of electricity in the EU-wide system. Conversely, if the electricity demand is higher than the RES potential most of the time, the transmission capabilities of that country are subtracted from the residual demand, as cross-border exchanges will oftentimes be used to cover the domestic electricity needs. Finally, the $\xi_{\tau}^n$ values are determined as the ratio between the RES potential and the transmission capacity-adjusted residual demand. \subsubsection*{Implementation} The \texttt{SM}, as well as the \texttt{FLP}, are implemented in Python 3.7 and the proposed instances are run on a workstation running under CentOS, with an 18-core Intel Xeon Gold 6140 CPU clocking at \SI{2.3}{GHz} and \SI{256}{GB} RAM. Gurobi 9.0 was used to solve both (1a-m) and (2a-d). The dataset and code used in these simulations are available at \cite{dox_repo_method} and \cite{replan_git}. \section{Results}\label{results} The results of a set of experiments evaluating the performance of the \texttt{SM} against the \texttt{FLP} are detailed in this section. Table \ref{tab:count} summarizes the performance of the siting stage by means of two indicators. First, the technology-specific spatial reduction share ($\gamma_r$) denotes the proportion of initial candidate RES sites discarded via \texttt{SITE}. Then, the screening accuracy ($\alpha_r$) measures the ability of the method to identify the relevant candidate RES sites. More formally, let $\mathcal{R}$ be the set of renewable technologies and let $\mathcal{N}_R^r$ be the subset of sites with technology $r \in \mathcal{R}$ (these subsets are disjoint for different $r$ and form a partition of $\mathcal{N}_R$). Note that for the purpose of this paper, offshore and onshore wind are considered as different resources. In addition, let $\mathcal{N}_{\texttt{FLP}}^r$ and $\mathcal{N}_{\texttt{SITE}}^r$ be the subsets of $\mathcal{N}_R^r$ selected by \texttt{FLP} and \texttt{SITE} where at least \SI{1}{MW} of capacity is deployed, respectively. Then, the screening accuracy is defined as \begin{equation} \alpha_r = \frac{|\mathcal{N}^{r}_{\texttt{SITE}}\cap\mathcal{N}^{r}_{\texttt{FLP}}|}{|\mathcal{N}^{r}_{\texttt{FLP}}|}, \mbox{ } \forall r \in \mathcal{R}, \end{equation} where $|{\mathcal{N}}|$ denotes the cardinality of set $\mathcal{N}$. First, in this table, it can be seen that the relative reduction achieved by \texttt{SITE} varies from 6\% for utility-scale PV to 62\% for distributed PV installations in the 2017 instance, with an average reduction in onshore and offshore wind sites of 38\% and 54\%, respectively. Furthermore, an overall reduction of the number of selected RES sites of up to 54\% is observed across the three considered instances. In other words, less than half of the candidate RES sites are found to be relevant in the optimal system configuration by \texttt{SITE} and subsequently passed to the \texttt{RLP}. With respect to the ability of \texttt{SITE} to identify relevant RES locations, only the distributed PV sites have a selection accuracy score below 85\%.
However, the limited deployment of this technology in the solution of the proposed CEP instances enables the screening stage to properly identify over 90\% of the relevant RES sites (i.e., the ones appearing in the \texttt{FLP} solution), irrespective of the weather year considered. \begin{table}[b] \renewcommand{\arraystretch}{1.1} \centering \caption{Technology-specific sites reduction ($\gamma_r$) and screening accuracy ($\alpha_r$) of \texttt{SITE}. Number of candidate sites used in the \texttt{FLP} specified in parentheses.} \label{tab:count} \begin{tabular}{l|cc|cc|cc|cc} \toprule & \multicolumn{2}{c|}{$\mathrm{W_{on}} \hspace{1mm} (590)$} & \multicolumn{2}{c|}{$\mathrm{W_{off}} \hspace{1mm} (417)$} & \multicolumn{2}{c|}{$\mathrm{PV_{u}} \hspace{1mm} (128)$} & \multicolumn{2}{c}{$\mathrm{PV_{d}} \hspace{1mm} (605)$} \\ & $\gamma_r$ & $\alpha_r$ & $\gamma_r$ & $\alpha_r$ & $\gamma_r$ & $\alpha_r$ & $\gamma_r$ & $\alpha_r$ \\ \midrule 2016 & 0.40 & 0.94 & 0.55 & 0.85 & 0.10 & 1.00 & 0.57 & 0.54 \\ 2017 & 0.37 & 0.94 & 0.55 & 0.86 & 0.06 & 1.00 & 0.62 & 0.83 \\ 2018 & 0.36 & 0.93 & 0.52 & 0.85 & 0.16 & 1.00 & 0.59 & 0.59 \\ \bottomrule \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=\linewidth]{media/siting_plots.png} \caption{(a) Distribution of geographical distances between pairs of sites identified via \texttt{SITE} and the \texttt{FLP}. (b) Site-specific installed capacity correlation between the \texttt{RLP} and the \texttt{FLP}.} \label{fig:capacity_cdf} \end{figure} However, not all candidate RES sites found in the \texttt{FLP} solution are identified by \texttt{SITE}, which selects different locations instead. For instance, when the latter is run with 2016 weather data, it fails to identify a total of 45 sites (14 onshore wind, 12 offshore wind and 19 distributed PV locations, respectively) out of 418 identified in the benchmark. Investigating how far these locations are from the ones selected by \texttt{SITE} provides a first insight into how different the system designs associated with the two methods are. If the distances between the locations selected via \texttt{SITE} and \texttt{FLP} were found to be small, one would expect the effect of misidentifying sites to be limited, as RES patterns are usually comparable at neighboring sites. Conversely, large distances between sites identified via the two methods would often imply distinct RES patterns and could thus lead to substantial differences in the way the technologies are sized. The result of this analysis is shown in Fig. \ref{fig:capacity_cdf}a. These plots depict, for each technology and weather year, the distribution of distances (expressed in kilometres) between pairs of sites selected via the \texttt{FLP} and \texttt{SITE}, respectively. The procedure used to generate these curves is as follows. First, distances of zero are associated with the pairs of sites found by both methods ($\alpha_r$ shares in Table \ref{tab:count}). Then, each unidentified site in the \texttt{FLP} solution is matched with the geographically closest (based on the geodesic distance) location in the set of \texttt{SITE}-exclusive locations. Once two sites are paired, none of them can be subsequently matched with another. Upon pairing all unidentified sites in the \texttt{FLP} with a counterpart in \texttt{SITE}, a cumulative distribution function of technology-specific distances is plotted.
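A hedged sketch of this pairing step is given below; the haversine great-circle distance is used as a simple stand-in for the geodesic distance, and the coordinates are toy values rather than actual site locations.
\begin{verbatim}
import numpy as np

def haversine_km(p, q):
    """Great-circle distance between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (*p, *q))
    a = (np.sin((lat2 - lat1) / 2)**2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2)**2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def match_sites(missed_flp, site_exclusive):
    """Greedy one-to-one matching of each unidentified FLP site to the
    closest still-unmatched SITE-exclusive location."""
    remaining = list(site_exclusive)
    distances = []
    for p in missed_flp:
        d = [haversine_km(p, q) for q in remaining]
        j = int(np.argmin(d))
        distances.append(d[j])
        remaining.pop(j)  # once paired, a site cannot be reused
    return distances

missed = [(53.5, -2.0), (48.1, 11.6)]                 # toy FLP-only sites
exclusive = [(52.4, -1.9), (50.1, 8.7), (47.4, 8.5)]  # toy SITE-only sites
print(match_sites(missed, exclusive))  # distances feeding the CDF
\end{verbatim}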
It can be observed in these three plots that, without exception, the $95^{th}$ percentile of the matching distance for any of the four RES technologies falls below \SI{500}{km}. In a European context, it has been previously shown that country-aggregated wind output (usually more spatially heterogeneous than PV generation) is remarkably correlated at distances below the aforementioned threshold, especially in the North Sea basin where most onshore and offshore sites are deployed in the studied instances \cite{Malvaldi2017}. Furthermore, a maximum distance between matched sites of under \SI{1600}{km} is reported for all technologies and weather years, with the largest discrepancies being consistently observed for onshore wind locations. \begin{table*} \renewcommand{\arraystretch}{1.1} \centering \caption{Differences in system-wide capacities between \texttt{FLP} and \texttt{RLP} for various technologies and weather years. A positive value reflects more capacity installed (or higher TSCE) in the \texttt{RLP}, while a negative value indicates more capacity in the \texttt{FLP}. } \label{tab:system_configuration} \begin{tabular}{l|cccccccc|c} \toprule & $\mathrm{W_{on}}$ & $\mathrm{W_{off}}$ & $\mathrm{PV_{u}}$ & $\mathrm{PV_{d}}$ & CCGT & AC & DC & Li-Ion & TSCE \\ Year & [GW] & [GW] & [GW] & [GW] & [GW] & [TWkm] & [TWkm] & [GWh] & [b\euro] \\ \midrule \multirow{2}{*}{2016} & -7.38 & 6.06 & 3.86 & -1.63 & 0.10 & -0.03 & -0.05 & 0.51 & 0.36 \\ & (-16.31\%) & (1.48\%) & (1.92\%) & (-1.01\%) & (0.11\%) & (-0.04\%) & (0.10\%) & (1.99\%) & (0.40\%) \\ \multirow{2}{*}{2017} & -7.94 & 4.96 & 0.60 & 7.51 & 1.01 & -2.33 & -0.16 & 1.18 & 0.44 \\ & (-12.84\%) & (1.17\%) & (0.29\%) & (7.36\%) & (2.30\%) & (-2.90\%) & (-0.29\%) & (4.07\%) & (0.52\%) \\ \multirow{2}{*}{2018} & -13.73 & 12.19 & 2.51 & -5.60 & 1.03 & 0.57 & -0.04 & -1.53 & 0.42 \\ & (-23.34\%) & (2.97\%) & (1.21\%) & (-2.99\%) & (2.54\%) & (0.65\%) & (-0.07\%) & (-3.63\%) & (0.48\%) \\ \bottomrule \end{tabular} \vspace{-1mm} \end{table*} Upon screening the candidate RES locations via \texttt{SITE}, the \texttt{RLP} is run in order to retrieve, among other outputs, the associated installed capacities. Fig. \ref{fig:capacity_cdf}b shows, for each weather year, the correlation between installed capacities of i) the sites identified in the \texttt{FLP} and ii) the sites identified by \texttt{SITE} and sized via \texttt{RLP}. In this plot, round markers (o) denote data points associated with locations that are common to \texttt{FLP} and \texttt{SITE}, while crosses (x) represent data points corresponding to the pairs of sites matched according to the procedure described in the previous paragraph. A first observation from these plots is that in 76\% (for 2016) to 79\% (for 2018) of the cases, the installed capacities of \texttt{FLP} and \texttt{RLP} sites are matched to MW-order precision. Then, it can be observed that most of the (x) markers are situated at the bottom of the corresponding subplots. A complementary analysis of the resource signals associated with these data points suggests the existence of high-quality RES sites exploited by the \texttt{FLP}, but whose \texttt{SITE} counterparts (determined via the distance-based pairing algorithm) exhibit inferior resource quality and thus end up not being part of the \texttt{RLP} solution.
In such a situation, the missing capacity, i.e., \texttt{FLP} capacity of the (x) data points in the lower part of the plot, is compensated in the \texttt{RLP} by superior power ratings at (o) sites above the trend line in Fig. \ref{fig:capacity_cdf}b. \begin{table}[b] \renewcommand{\arraystretch}{1.1} \centering \caption{Computational performance assessment of the \texttt{SM}. Numerical values represent reductions associated with the \texttt{SM} expressed in relative terms (\%) with respect to the \texttt{FLP}.} \label{tab:performance} \begin{tabular}{l|ccccc} \toprule Year & Variables & Constraints & Non-Zeros & PMR & SRT\\ \midrule 2016 & 34.54 & 34.67 & 33.48 & 41.37 & 36.56 \\ 2017 & 33.31 & 33.39 & 32.22 & 40.11 & 30.90 \\ 2018 & 33.72 & 33.82 & 32.67 & 39.28 & 46.57 \\ \bottomrule \end{tabular} \end{table} Table \ref{tab:system_configuration} reports, for different data years and for various technologies sized within the CEP stage, the difference between the system-wide installed capacities obtained by the \texttt{FLP} and \texttt{RLP} models, respectively (positive values indicate more capacity in the latter). In the last column, it can be seen that the relative objective function difference (i.e., the TSCE) between the two CEP set-ups does not exceed 0.52\%, irrespective of the weather year considered. However, as suggested in a recent study by Neumann and Brown \cite{Neumann2021}, rather small differences in total system costs can translate into fairly distinct system configurations. In this exercise, differences of 23.3\%, 2.9\%, 1.9\% and 7.3\% are reported for onshore wind, offshore wind, utility-scale and distributed PV, respectively, between the \texttt{RLP} and the \texttt{FLP}. A closer look at the breakdown of capacities per country reveals the reasons behind such differences, as the large majority of the discrepancies observed in Table \ref{tab:system_configuration} are associated with a handful of resource-rich countries (e.g., Ireland, Italy, Spain or the UK). For instance, in 2017 and 2018, the \texttt{FLP} over-sizes onshore wind (and, thus, selects more sites) in Ireland and the UK, and uses it to supply Central Europe. Under the proposed ($\delta\tau, \xi_{\tau}^n$) set-up of the \texttt{SITE} stage, a subset of these locations is not identified (see discussion on the (x) markers in Fig. \ref{fig:capacity_cdf}a) and the associated capacity in the \texttt{FLP} is replaced in the \texttt{RLP} by a mix of offshore wind and distributed PV. Further on in Table \ref{tab:system_configuration}, transmission capacities vary within 2.9\% of the \texttt{FLP} outcome, while a maximum of 4.1\% Li-Ion storage capacity difference can be observed during the same year where distributed PV differed the most from the benchmark (i.e., 2017). Finally, Table \ref{tab:performance} summarizes the computational performance gains (relative to the \texttt{FLP}) achieved by leveraging the \texttt{SM}. More specifically, the reductions in i) the CEP problem size (number of variables, constraints and non-zeros), ii) the peak memory requirements (PMR) and iii) the solver runtime (or SRT, taking into account the solver runtime of both the \texttt{SITE} and \texttt{RLP} stages of the \texttt{SM}) are reported. In this table, it can be observed that the proposed \texttt{SM} leads to an average CEP problem size reduction of 33\% which, in turn, enables an average PMR reduction of 40\% and runtime savings between 31\% and 46\% across the studied instances.
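For reference, the two screening indicators reported in Table \ref{tab:count} can be computed from the site sets alone, as in the following minimal sketch; the site identifiers and numbers are illustrative.
\begin{verbatim}
def screening_metrics(candidates, sites_flp, sites_site):
    """gamma_r: share of candidate sites discarded by SITE;
    alpha_r: share of FLP-selected sites recovered by SITE."""
    gamma = 1.0 - len(sites_site) / len(candidates)
    alpha = len(sites_site & sites_flp) / len(sites_flp)
    return gamma, alpha

candidates = set(range(10))   # toy candidate pool for one technology
flp_sites = {0, 2, 3, 7}      # >= 1 MW deployed in the full problem
site_sites = {0, 2, 3, 5, 8}  # retained by the screening stage
print(screening_metrics(candidates, flp_sites, site_sites))
\end{verbatim}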
\section{Conclusion}\label{conclusion} This paper proposes a method to reduce the spatial dimension of CEP frameworks while preserving an accurate representation of renewable energy sources. This is achieved via a two-stage heuristic. First, a screening stage is used to identify the most relevant sites for RES deployment among a pool of candidate locations and discard the rest. Then, the subset of RES sites identified in the first stage is used in a CEP problem to determine the optimal power system configuration. The proposed method is tested on a realistic EU case study and its performance is assessed against a CEP set-up in which the entire set of candidate RES sites is available. The method shows great promise and manages to consistently identify more than 90\% of the optimal sites while reducing peak memory consumption and solver runtime by up to 41\% and 46\%, respectively. Capacity differences between the solutions provided by the proposed method and the benchmark observed for some weather years suggest that further work on the selection of parameters used in the first-stage siting routine would be useful. Moreover, re-casting the proposed heuristic into a more structured form, e.g., where the siting and sizing of RES assets are used as stages in a Benders-like decomposition framework, is also envisaged as a promising development avenue. \bibliographystyle{ieeetr}
\section{Introduction} Nonlinearity is essential to superconducting circuit implementations of quantum information. It allows for an individually addressable qubit subspace and tunable interactions between qubit circuits. Qubit-qubit interactions in a variety of platforms are mediated by coupler circuits inductively coupled to the qubits, with tunability provided by nonlinear Josephson elements~\cite{Hime2006, vanderPloag2007, Harris2007, Allman2010, Bialczak2011, Chen2014}. Several theoretical treatments of such circuits have been performed, including detailed analyses for tunably coupled flux qubits~\cite{Brink2005,Grajcar2006}, phase qubits~\cite{Pinto2010}, lumped-element resonators~\cite{Tian2008}, and transmon-type (gmon) qubits~\cite{Geller2015}. However, both previous classical and quantum analyses have either been linear or have treated the qubit-coupler coupling strengths perturbatively~\cite{Hutter2006}, and they are therefore expected to break down in the regime of strong coupling or large nonlinearities. In particular, the commonly used classical linear analysis can create the misconception that arbitrary inter-qubit coupling strengths can be achieved with a sufficiently nonlinear coupler circuit, an artifact of extending the linear equations beyond their applicable domain. One platform for which a non-perturbative treatment would be of immediate use is quantum annealing, where strong yet accurate two-qubit interactions are necessary and $k$-qubit or non-stoquastic~\cite{Bravyi2008} interactions are desirable, and where the ability to controllably operate in the strongly nonlinear regime could therefore be highly beneficial. In this work we present a non-perturbative analysis of two or more superconducting qubits inductively coupled through a Josephson coupler circuit. Our treatment is generic in that, as long as the coupling takes the form depicted in Fig.~\ref{diagram}, it is independent of the individual qubit Hamiltonians. In fact, it applies within the infinite dimensional Hilbert space of the underlying circuits implementing the qubits (which can be highly nonlinear with any form for their individual potential energies) and only reduces to the qubit subspace to compute coupling matrix elements. We numerically investigate the accuracy of our theory in various regimes, with focus on the interesting limit of large coupler nonlinearities $\beta_c \approx 2 \pi L_c I_c^{(c)}/\Phi_0 \lesssim 1$ within the monostable regime of the coupler and for large dimensionless coupling strengths $\alpha_j \equiv M_j/L_j$. Here, $L_c$ and $I_c^{(c)}$ are the coupler's inductor and junction (or DC-SQUID) parameters, and $M_j$ and $L_j$ are the mutual and self inductance of the $j$'th qubit, respectively. To perform the analysis, we eliminate the coupler circuit using the Born-Oppenheimer Approximation. In this approximation, the coupler circuit's ground state energy dictates the qubit-qubit interaction potential. This potential naturally decomposes into a classical part, whose origin lies in the classical equations of motion, and a small but non-negligible quantum part originating from the coupler circuit's zero-point fluctuations. We derive an exact expression for the classical part and an approximate expression for the quantum part valid in the experimentally relevant limit of small coupler impedance. 
Using this interaction potential, we derive explicit and efficiently computable Fourier series for all terms in the effective inter-qubit interaction Hamiltonian, including non-stoquastic terms and $k$-body terms with $k>2$ (although these are found to be small for the investigated parameter regimes). Unlike previous results, the interaction is defined explicitly and not in terms of quantum mechanical averages of the coupler system. As a case study, we apply our results to two coupled flux qubits, using parameters from our recent flux qubit design, the fluxmon \cite{Quintana2016}. We find that our results agree with previous treatments in the appropriate limits, but significantly differ in the highly nonlinear regime. We quantify the accuracy of our results by comparing them to an exact numerical diagonalization of the full system, allowing us to study when the Born-Oppenheimer Approximation breaks down. \section{Interaction mediated by nonlinear circuit} \subsection{Qubit-coupler Hamiltonian} We wish to derive the interaction between $k$ circuits (the qubits) inductively coupled through an intermediate circuit (the coupler) as depicted in Fig.~\ref{diagram}. We begin by deriving the full Hamiltonian describing both qubits and coupler. While the coupler circuit is elementary (it contains just an inductor, capacitor, and Josephson junction in parallel), our only assumption about the qubit circuits is that they interact with the coupler through a geometric mutual inductance, $M_j$. Accordingly, we write the current equations defining their dynamics as~\cite{Devoret1995,Burkard2004}, \begin{align} \label{currentEqs} \begin{split} C \ddot \Phi_c + I_c^{(c)} \sin(2 \pi \Phi_c/\Phi_0) - I_{L,c} & = 0\\ I_{j} - I^{*}_{j} & = 0\,, \quad (1\leq j \leq k)\,. \end{split} \end{align} For the first equation, $\Phi_c$ denotes the flux across the coupler's Josephson junction (and capacitor), $I_{L,c}$ denotes the current through the coupler's inductor, and $\Phi_0 = h/(2 e)$ is the flux quantum. The second equation simply states that the current $I_{j}$ through qubit $j$'s inductor is equal to the current $I^{*}_{j}$ flowing through the rest of the qubit circuit (represented by box `$q_j$' in the figure). The basic inductive and flux quantization relationships are then \begin{align} \label{fluxQuant} \begin{split} L_c I_{L,c} + \sum_{j=1}^k M_j I_{j} & = \Phi_{L,c}\\ L_{j} I_{j} + M_j I_{L,c} & = \Phi_{j}\\ \Phi_{L,c} & = \Phi_{cx} - \Phi_c\,, \end{split} \end{align} where $\Phi_{cx}$ is the external flux bias applied to the coupler loop and $\Phi_j$ is the flux across qubit $j$'s inductor. Using these equations and some algebra one can rewrite the current equations in terms of the flux variables, \begin{align} \label{currentEqs2} \begin{split} C \ddot \Phi_c + I_c^{(c)} \sin(2 \pi \Phi_c/\Phi_0) + \frac{1}{\tilde L_c }\left( \Phi_{c} -\Phi_{cx}+ \sum_{j=1}^k \alpha_j \Phi_{j} \right) & = 0\\ \frac{\Phi_{j}}{L_j} + \alpha_j \frac{1}{\tilde L_c}\left( \Phi_{c}-\Phi_{cx}+ \sum_{j'=1}^k \alpha_{j'} \Phi_{j'} \right) - I^{*}_{j} & = 0\,, \end{split} \end{align} where \begin{align*} \alpha_j& \equiv \frac{M_j}{L_j}\\ \tilde L_c & \equiv L_c - \sum_{j=1}^k \alpha_j M_j\,. \end{align*} The rescaled coupler inductance, $\tilde L_c$, represents the shift in the coupler's inline inductance due to its interaction with the qubits. Although we could similarly rescale the qubit inductances in the second equation~\eqref{currentEqs2}, we instead keep separate all terms that depend on the mutual inductance, $\alpha_j$. 
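As a quick symbolic check of this elimination (not part of the original derivation), the sketch below solves the flux-quantization relations~\eqref{fluxQuant} for the inductor currents in the two-qubit case and confirms the $1/\tilde L_c$ structure appearing in equations~\eqref{currentEqs2}.
\begin{verbatim}
import sympy as sp

Lc, L1, L2, M1, M2 = sp.symbols('L_c L_1 L_2 M_1 M_2', positive=True)
Pc, Pcx, P1, P2 = sp.symbols('Phi_c Phi_cx Phi_1 Phi_2')
ILc, I1, I2 = sp.symbols('I_Lc I_1 I_2')

sol = sp.solve(
    [Lc*ILc + M1*I1 + M2*I2 - (Pcx - Pc),  # coupler loop quantization
     L1*I1 + M1*ILc - P1,                  # qubit-1 inductive relation
     L2*I2 + M2*ILc - P2],                 # qubit-2 inductive relation
    [ILc, I1, I2], dict=True)[0]

alpha1, alpha2 = M1/L1, M2/L2
Lc_tilde = Lc - alpha1*M1 - alpha2*M2
expected = -(Pc - Pcx + alpha1*P1 + alpha2*P2) / Lc_tilde
print(sp.simplify(sol[ILc] - expected))  # -> 0
\end{verbatim}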
To complete the derivation of the Hamiltonian, we note that equations~\eqref{currentEqs2} are just the Euler-Lagrange equations for the qubits and coupler. Since the $\Phi$-dependent terms correspond to derivatives of the potential energy ($\pdp{U}{\Phi_c}$ and $\pdp{U}{\Phi_{j}}$), we quickly arrive at the corresponding Hamiltonian for the coupled systems \begin{equation} \label{HExact} \hat H = \frac{\hat Q_c^2}{2 C} - E_{J_c} \cos(2 \pi \hat \Phi_c/\Phi_0) + \frac{\left( \hat \Phi_{c}-\Phi_{cx}+ \sum_{j=1}^k \alpha_j \hat \Phi_{j} \right)^2}{2 \tilde L_c} + \sum_{j=1}^k \hat H_{j} \,. \end{equation} Here $\hat H_{j}$ (obtained from $ \frac{\Phi_{j}}{L_j}-I^{*}_{j}$) denotes the Hamiltonian for qubit $j$ in the absence of the coupler (i.e., in the limit $\alpha_j\rightarrow 0$), $\hat Q_c$ is the canonical conjugate to $\hat \Phi_c$ satisfying $[\hat \Phi_c,\hat Q_c] = i \hbar$, and the coupler's Josephson energy is $E_{J_c} = \Phi_0 I_{c}^{(c)}/2 \pi$. \begin{figure} \includegraphics{diagram.eps} \caption{A generic circuit for inductive coupling of two or more superconducting circuits (left column). Each smaller circuit $\{q_i,L_i\}$ represents a single qubit. The strength and type of coupling can be tuned via an external magnetic flux $\Phi_{cx}$ applied through the main coupler loop (right-hand side). The coupler's junction may alternatively be a DC SQUID forming an effective Josephson junction with tunable $I_c^{(c)}$ via a separate flux bias.} \label{diagram} \end{figure} \subsection{Born-Oppenheimer Approximation} To obtain the effective interaction between the qubits, we now eliminate the coupler's degree of freedom. In other words, we apply the Born-Oppenheimer Approximation~\cite{Born1927} by fixing the (slow) qubit degrees of freedom and assuming that the (fast) coupler is always in its ground state. This is analogous to the Born-Oppenheimer Approximation in quantum chemistry, in which the nuclei (qubits) evolve adiabatically with respect to the electrons (coupler). The coupler's ground state energy (a function of the slow qubit variables, $\Phi_j$) then determines the interaction potential between the qubits. This approximation is valid as long as the coupler's intrinsic frequency is much larger than other energy scales in the system, namely the qubits' characteristic frequencies and qubit-coupler coupling strength. We begin by considering the coupler-dependent part of the Hamiltonian, ${\hat H_c = \hat H - \sum_j \hat H_{j}}$. We re-express this operator in terms of standard dimensionless parameters, \begin{align} \label{Hc} \begin{split} \hat H_c &= E_{\tilde L_c}\left( 4 \zeta_c^2 \, \frac{\hat q_c^2}{2} + U(\hat \varphi_c; \varphi_x) \right)\\ U( \varphi_c; \varphi_x ) & = \frac{\left( \varphi_c - \varphi_x\right)^2}{2 } + \beta_c\cos( \varphi_c) \,, \end{split} \end{align} where \begin{align} \label{paramDefs} \begin{split} E_{\tilde L_c} & = \frac{(\Phi_0/2 \pi)^2}{\tilde L_c}\\ \zeta_c & = \frac{2 \pi e}{\Phi_0}\sqrt{\frac{\tilde L_c}{C}} = 4 \pi \tilde Z_c /R_K\\ \beta_c & = 2 \pi \tilde L_c I_{c}^{(c)}/\Phi_0 = E_{J_c}/E_{\tilde L_c} \\ \hat q_c & = \frac{\hat Q_c}{2 e}\\ \hat \varphi_c & = \frac{2 \pi}{\Phi_0}\hat \Phi_c + \pi\\ \varphi_{cx} & = \frac{2 \pi}{\Phi_0} \Phi_{cx} + \pi\\ \hat \varphi_j & = \frac{2 \pi}{\Phi_0} \hat \Phi_{j}\\ \varphi_x & = \varphi_{cx} - \left(\sum_{j=1}^k \alpha_j \varphi_{j} \right) \\ [\hat \varphi_c,\hat q_c] & = i\,. 
\end{split} \end{align} Note that we have defined $\hat \varphi_c$ and $\varphi_{cx}$ with an explicit $\pi$ phase shift, which flipped the sign in front of $\beta_c \cos(\varphi_c)$. Typical coupler inductive energies are on the order of $E_{\tilde L_c}/h \sim 0.5-2$ THz~\cite{Harris2007,Allman2010,Allman2014}. For reasons that will become clear shortly, we assume ${\beta_c \lesssim 1}$ (monostable coupler regime) and low impedance (${\zeta_c\ll 1}$), consistent with typical qubit-coupler implementations\footnote{ For example, $\zeta_c$ is estimated to be $0.013$ in Ref.~\cite{Allman2010}, $0.04$ in the most recent gmon device \cite{Neill2016}, and $0.05$ in our initial fluxmon coupler design~\cite{Quintana2017}.}. Importantly, we are momentarily treating the external flux $\varphi_x$ as a scalar parameter of the Hamiltonian. This is analogous to the Born-Oppenheimer Approximation in quantum chemistry, where the nuclear degrees of freedom are treated as scalar parameters modifying the electron Hamiltonian. Since $\varphi_x$ is a function of the qubit fluxes $\varphi_j$, the coupler's ground state energy $E_g(\varphi_x)$ acts as an effective potential between the qubit circuits. The full effective qubit Hamiltonian under Born-Oppenheimer is then $\hat H_{\textrm{BO}} = \sum_j \hat H_j + E_g(\hat \varphi_x)$, where the variable $\varphi_x$ is promoted back to an operator. (See Appendix Sections~\ref{BOValidity} and~\ref{BONonadiabatic} for a detailed discussion of this approximation.) In order to derive an analytic expression for the ground state energy, $E_g(\varphi_x)$, we must first decompose it into classical and quantum parts. This natural decomposition allows for a very precise approximation to the ground state energy, because the classical part (corresponding to the classical minimum value of $H_c$) is the dominant contribution to the energy and can be derived {\it exactly}. The quantum part (corresponding to the zero-point energy) is the only approximate contribution, though it is relatively small for typical circuit parameters. To begin our analysis, we write the potential energy $U(\hat \varphi_c; \varphi_x )$ in a more suggestive form, \begin{equation} U(\hat \varphi_c; \varphi_x) = U_{\textrm{min}}(\varphi_x) + U_{\textrm{ZP}}(\hat \varphi_c; \varphi_x)\,. \end{equation} Here the scalar $$U_{\textrm{min}}(\varphi_x) = \min_{\varphi_c} U(\varphi_c ; \varphi_x ) = \frac{(\varphi_c^{(*)}-\varphi_x)^2}{2} + \beta_c \cos(\varphi_c^{(*)})$$ is the value of the coupler potential at its minimum point $\varphi_c^{(*)}$, i.e. its `height' (overall offset) above zero. Setting $E_g(\varphi_x)$ equal to only $E_{\tilde L_c} U_{\textrm{min}}(\varphi_x)$ corresponds to a completely classical analysis of the coupler dynamics (originating from equation~\eqref{currentEqs2}, prior to quantizing the Hamiltonian; see Appendix Section~\ref{classical}). Unlike $U_{\textrm{min}}(\varphi_x)$, the operator $$U_{\textrm{ZP}}(\hat \varphi_c; \varphi_x) = U(\hat \varphi_c; \varphi_x) - U_{\textrm{min}}(\varphi_x)$$ does not have a classical analogue -- it corresponds to extra energy due to the finite width of the coupler's ground state wave-function. Combining this operator with the charging energy defines the coupler's zero-point energy \begin{equation} \label{ZPE} U_{\textrm{ZPE}}(\varphi_x) = \min_{\langle \psi | \psi \rangle = 1} \bra{\psi} \left( 4 \zeta_c^2 \, \frac{\hat q_c^2}{2} + U_{\textrm{ZP}}(\hat \varphi_c; \varphi_x) \right) \ket{\psi}\,.
\end{equation} (This minimization picks out the ground state.) The coupler's ground state energy is then the sum of the classical and zero-point energy terms, \begin{equation} E_g/E_{\tilde L_c} = U_{\textrm{min}}(\varphi_x) + U_{\textrm{ZPE}}(\varphi_x)\,. \end{equation} Both contributions to the energy are parameterized by the qubit-dependent flux $\varphi_x$, which is what allows us to treat $E_g$ as an effective qubit-qubit interaction potential. In the following two sections we compute an exact expression for $U_{\textrm{min}}(\varphi_x)$ and an approximate expression for $U_{\textrm{ZPE}}(\varphi_x)$ as Fourier series in $\varphi_x$. These are combined in Section~\ref{interactionHamiltonian} to produce an expression for the full qubit-qubit interaction Hamiltonian \eqref{Hint}, the key result of our work. \subsection{Classical contribution to the interaction potential} We first discuss the classical component of the coupler's ground state energy. From equation~\eqref{Hc}, the minimum value $U_{\textrm{min}}(\varphi_x)$ can be expressed in terms of the minimum point $\varphi_c^{(*)}$ as \begin{equation} \label{EcInter} U_{\textrm{min}}(\varphi_x) = U(\varphi_c^{(*)}; \varphi_x) = \frac{\left(\beta_c \sin(\varphi_c^{(*)} ) \right)^2}{2} + \beta_c \cos(\varphi_c^{(*)})\,, \end{equation} where we have used the fact that $\varphi_c^{(*)}$ is a critical point, \begin{equation} \label{trans} \left. \partial_{\varphi_c} U(\varphi_{c}; \varphi_x) \right|_{\varphi_c = \varphi_c^{(*)}} = \varphi_c^{(*)} - \varphi_x - \beta_c \sin(\varphi_c^{(*)}) = 0\,. \end{equation} Importantly, the parameter $\varphi_c^{(*)}$ is a function of $\varphi_x$ and is defined implicitly as the solution to equation~\eqref{trans}. This equation is identical to the classical current equation~\eqref{currentEqs2} in the large coupler plasma frequency limit $\tilde L_c C \rightarrow 0$ (Appendix Section~\ref{classical}). Although equation \eqref{EcInter} is exact, it is not useful unless we can express $\varphi_c^{(*)}$ as an explicit function of the qubit degrees of freedom (i.e., the variable $\varphi_x$). To motivate how to do this, we observe that the transcendental equation~\eqref{trans} is unchanged under the transformation $\varphi_{c}^{(*)}\rightarrow \varphi_{c}^{(*)} + 2 \pi$, $\varphi_{x}\rightarrow \varphi_x + 2 \pi$, and similarly $U_{\textrm{min}}(\varphi_x)$ is a periodic function of $\varphi_{c}^{(*)}$ (equation~\eqref{EcInter}). This suggests that we can express $U_{\textrm{min}}(\varphi_x)$ as a Fourier series in $\varphi_x$. Indeed, as shown in Appendix Section~\ref{expMuSeries}, for every integer $\mu$, \begin{equation} \label{transInvert} e^{ i \mu \varphi_c^{(*)}} = \sum_{\nu} e^{i \nu \varphi_x} A_{\nu}^{(\mu)}\,, \end{equation} where \begin{equation} \label{fourierCoefs} A_{\nu}^{(\mu)} = \left\{\begin{array}{cc} \delta_{\mu,0} - \frac{\beta_c}{2}(\delta_{\mu,1} + \delta_{\mu,-1})& \nu = 0\\ \frac{\mu J_{\nu-\mu}(\beta_c \nu)}{\nu} & \nu \neq 0 \end{array}\right.\,, \end{equation} and $J_\nu(x)$ denotes the Bessel function of the first kind. (Unless otherwise specified, summation indices in this text go over all integers.) 
Using this equation with $\sin(\varphi_c^{(*)}) = \frac{1}{2 i}\left(e^{i \varphi_c^{(*)}} - e^{-i \varphi_c^{(*)}}\right)$, we define \begin{align} \label{sinBeta} \begin{split} \sin_{\beta_c}(\varphi_x) & \equiv \sin(\varphi_c^{(*)}) \\ &= \sum_{\nu} e^{i \nu \varphi_x} \frac{1}{2 i}\left(A_\nu^{(1)} - A_\nu^{(-1)} \right)\\ & = \sum_{\nu >0} \frac{2 J_{\nu}({\beta_c} \nu)}{ {\beta_c} \nu} \sin(\nu \varphi_x)\,. \end{split} \end{align} The function $\sin_{\beta_c}(\varphi_x)$ is the explicit solution to $\sin(\varphi_c^{(*)})$ satisfying equation~\eqref{trans}, and therefore satisfies the identity \begin{equation} \label{sinBetaCharacteristicEquation} \sin_{\beta_c}(\varphi_x) = \sin(\varphi_x + \beta_c \sin_{\beta_c}(\varphi_x))\,. \end{equation} In the context of Josephson junctions, $\sin_{\beta_c}(\varphi_x)$ represents the current through the junction as a function of the external flux bias\footnote{For a loop containing only a linear inductor and a Josephson junction, the current through the junction as a function of external bias satisfies $I_J/I_{c}^{(c)} = \sin(\varphi_{cx} + \beta_c I_J/I_{c}^{(c)})$. This is exactly the defining relation of the $\sin_{\beta_c}$ function, equation~\eqref{sinBetaCharacteristicEquation}.}. Since $\sin_{\beta_c}(\varphi_x) = \sin(\varphi_c^{(*)})$ we can also explicitly write $\varphi_c^{(*)}$ as \begin{equation} \label{varphicstar} \varphi_c^{(*)} = \varphi_x + \beta_c \sin_{\beta_c}(\varphi_x)\,. \end{equation} Substituting these results into equation~\eqref{EcInter}, we get an explicit expression for the minimum value $U_{\textrm{min}}(\varphi_x)$, \begin{equation} \label{EcInter2} U_{\textrm{min}}(\varphi_x) = \frac{(\beta_c \sin_{\beta_c}( \varphi_x))^2}{2} + \beta_c \cos(\varphi_x + \beta_c \sin_{\beta_c}(\varphi_x))\,. \end{equation} We now derive the Fourier series for $U_{\textrm{min}}(\varphi_x)$ as a function of $\varphi_x$. Taking the derivative of Equation~\eqref{EcInter2} with respect to $\varphi_x$, one may verify that \begin{equation} \label{EcDeriv} \partial_{\varphi_x} U_{\textrm{min}}(\varphi_x) = - \beta_c \sin_{\beta_c}(\varphi_x)\,. \end{equation} Here we have used the identity, \begin{equation} \label{sinBetaDeriv} \partial_{\varphi_x} \sin_{\beta_c}(\varphi_x) = \frac{\cos(\varphi_x + \beta_c \sin_{\beta_c}(\varphi_x))}{1 - \beta_c \cos(\varphi_x + \beta_c \sin_{\beta_c}(\varphi_x))}\,, \end{equation} which can be derived directly from equation~\eqref{sinBetaCharacteristicEquation}. Equation~\eqref{EcDeriv} is analogous to $$\partial_{\varphi_x}\cos(\varphi_x) = -\sin(\varphi_x)\,,$$ which suggests that we define $U_{\textrm{min}}(\varphi_x)$ as \begin{equation} \label{EcFinal} U_{\textrm{min}}(\varphi_x) = \beta_c \cos_{\beta_c}(\varphi_x)\,. \end{equation} In analogy with the sine and cosine functions, we define the $\cos_{\beta}(\varphi)$ function as the formal integral of $\sin_{\beta}(\varphi)$, \begin{align} \label{cosBetaSeries} \begin{split} \cos_{\beta}(\varphi_x) & \equiv 1 - \int_{0}^{\varphi_x} \sin_{\beta}(\theta) \,\mbox{d}\theta \\ & = \frac{\beta}{2}\left( \sin_{\beta}(\varphi_x)\right)^2 + \cos(\varphi_x + \beta \sin_{\beta}(\varphi_x))\\ & = 1 + \sum_{\nu >0} \frac{2 J_{\nu}({\beta} \nu)}{ {\beta} \nu^2} \left(\cos(\nu \varphi_x) - 1\right) \\ & = -\frac{\beta}{4} + \sum_{\nu \neq 0} \frac{J_{\nu}({\beta} \nu)}{ {\beta} \nu^2} e^{i \nu \varphi_x} \,. \end{split} \end{align} We prove the equality of each of these expressions in Appendix~\ref{cosBetaIdentity}. 
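The series above are straightforward to evaluate numerically. The following minimal sketch (with an illustrative $\beta$ and truncation order) evaluates $\sin_{\beta}(\varphi_x)$ from the series in equation~\eqref{sinBeta} and verifies the defining identity~\eqref{sinBetaCharacteristicEquation}; the residual vanishes up to truncation error for $\beta < 1$.
\begin{verbatim}
import numpy as np
from scipy.special import jv  # Bessel function of the first kind

def sin_beta(phi, beta, nu_max=80):
    """Evaluate sin_beta(phi) from its Kapteyn-type Fourier series."""
    nu = np.arange(1, nu_max + 1)
    coef = 2.0 * jv(nu, beta * nu) / (beta * nu)
    return np.sum(coef[:, None] * np.sin(np.outer(nu, phi)), axis=0)

beta = 0.7  # illustrative nonlinearity, beta < 1 (monostable regime)
phi = np.linspace(0.0, 2.0 * np.pi, 201)
s = sin_beta(phi, beta)
residual = s - np.sin(phi + beta * s)  # defining identity
print(np.max(np.abs(residual)))       # small, set by the truncation
\end{verbatim}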
Equations \eqref{EcFinal} and ~\eqref{cosBetaSeries} exactly characterize the classical part of the coupler's ground state energy, $E_g$. As shown in Fig.~\ref{EgPlots}(a), $U_{\textrm{min}}(\varphi_x)$ is the dominant contribution to $E_g$ in the small impedance limit $\zeta_c \ll 1$. Substituting the definition $\varphi_x = \varphi_{cx} - \sum_j \alpha_j \varphi_j$ into equation~\eqref{EcFinal}, we can interpret $U_{\textrm{min}}(\varphi_x) = \beta_c\cos_{\beta_c}\left(\varphi_{cx} - \sum_j \alpha_j \varphi_j \right)$ as a {\it scalar potential} mediating an interaction between the qubit circuits\footnote{This potential emerges from the conservative vector field, ${\bar S(\varphi_1,\varphi_2,\,...\,,\varphi_k)} ={\beta_c \sin_{\beta_c}(\varphi_x) \sum_j \alpha_j \bar e_j} ={ -\beta_c \nabla \cos_{\beta_c}\left(\varphi_{cx} - \sum_j \alpha_j \varphi_j\right)}$, where $\bar e_j$ denotes the unit vector associated with the degree of freedom $\varphi_j$.}. \subsection{Quantum contribution to the interaction potential} \label{ZPEDerivation} We now discuss the quantum part of the coupler ground state energy. This is given by the ground state energy of $\hat H_c - E_{\tilde L_c}U_{\textrm{min}}(\varphi_x)$ (equation~\eqref{ZPE}), which represents the coupler's zero-point energy. To approximate this energy we expand the zero-point potential, $U_{\textrm{ZP}} = U(\hat \varphi_c;\varphi_x) - U_{\textrm{min}}(\varphi_x)$, about the classical minimum point $\varphi_c^{(*)}$. Since $U_{\textrm{ZP}}(\varphi_c; \varphi_x)$ and its derivative vanish at the minimum point $\varphi_c^{(*)}$, the Taylor series of $U_{\textrm{ZP}}$ is of the form \begin{equation} \label{linHam} \hat H_c/E_{\tilde L_c} = U_{\textrm{min}}(\varphi_x) + \left( 4 \zeta_c^2\, \frac{\hat q_c^2}{2} + \frac{U_{\textrm{ZP}}''(\varphi_c^{(*)}; \varphi_x)}{2}(\hat \varphi_c - \varphi_c^{(*)})^2 \right) + O\left((\hat \varphi_c - \varphi_c^{(*)})^3\right)\,, \end{equation} where \begin{equation} U_{\textrm{ZP}}''(\varphi_c; \varphi_x) = \partial_{\varphi_c}^2 U(\varphi_c; \varphi_x) = 1 - \beta_c \cos(\varphi_c)\,. \end{equation} If we neglect the terms of order $O((\hat \varphi_c - \varphi_c^{(*)})^3)$, the zero-point energy of $\hat H_c$ is the same as for a harmonic oscillator, \begin{align} \label{UZPEfirst} \begin{split} U_{\textrm{ZPE}} &\simeq \frac{1}{2}\sqrt{4 \zeta_c^2 U''_{\textrm{ZP}}(\varphi_c^{(*)}; \varphi_x)} \\ & =\zeta_c \sqrt{1 - \beta_c \cos(\varphi_c^{(*)})}\,. \end{split} \end{align} The harmonic approximation is the second approximation we use to derive the qubit-qubit interaction potential. (The zero-point energy $E_{\tilde L_c} U_{\textrm{ZPE}} \rightarrow E_{\tilde L_c} \zeta_c = \frac{\hbar}{2\sqrt{\tilde L_c C}}$ in the limit $\beta_c\rightarrow 0$, as expected for the linear coupler limit.) As we did for the classical component $U_{\textrm{min}}(\varphi_x)$, we wish to compute the Fourier series of $U_{\textrm{ZPE}}$ in the qubit-dependent flux parameter $\varphi_x$. To do so, we first write $U_{\textrm{ZPE}}$ as a Fourier series in $\varphi_c^{(*)}$, \begin{equation} \sqrt{1 - \beta \cos(\varphi_c^{(*)})} = \sum_{\mu} G_\mu(\beta)e^{i \mu \varphi_c^{(*)}}\,, \end{equation} where the functions $G_{\mu}(\beta)$ satisfy\footnote{ The generalized binomial $\binom{z}{k}= \frac{1}{k!}(z)(z-1)(z-2)\,...\,(z-k+1)$ for integer $k\geq0$ and is zero for negative integers $k$. 
} \begin{align} \begin{split} G_\mu(\beta) & = \sum_{l\geq 0} \binom{1/2}{\mu+2l}\binom{\mu+2l}{l}\left(-\frac{\beta}{2} \right)^{\mu+2l}\\ & = \left(-\frac{\beta}{2}\right)^\mu \binom{1/2}{\mu} {_2F_1}\left(\frac{\mu}{2}-\frac{1}{4},\frac{\mu}{2}+\frac{1}{4};1+\mu; \beta^2 \right)\,, \end{split} \end{align} and ${_2F_1(a,b;c;z)}$ is the Gauss hypergeometric function. Combining this with equation~\eqref{transInvert} in the previous section, we obtain the desired series, \begin{align} \label{UZPE} \begin{split} U_{\textrm{ZPE}}(\varphi_x) & = \zeta_c\left( G_{0}(\beta_c)- \beta_c G_{1}(\beta_c)+ \sum_{\nu\neq 0}e^{i \nu \varphi_x} \left(\frac{1 }{\nu} \sum_\mu \mu\, G_{\mu}(\beta_c) J_{\nu-\mu}(\beta_c \nu) \right) \right)\,. \end{split} \end{align} We derive the above identities in Appendix~\ref{sqrtCosFS}. The functions $G_{\mu}(\beta_c)$ decay exponentially in $\mu$, so numerical evaluation of the inner sum typically requires only a few terms (see Appendix~\ref{truncationError}). In Fig.~\ref{EgPlots}(b) we compare our approximate value for $U_{\textrm{ZPE}}$ (equations~\eqref{UZPEfirst} and \eqref{UZPE}) to the numerically exact zero-point energy (equation \eqref{ZPE}). \subsection{Total interaction Hamiltonian} \label{interactionHamiltonian} Having computed both classical and quantum parts of the coupler ground state energy $E_g$, we now set this quantity equal to the qubit-qubit interaction potential. In the language of physical chemistry, $E_g(\varphi_x)$ is the potential energy surface that varies with the qubit flux variables, $\varphi_j$. We can immediately read off this value from equations~\eqref{EcFinal} and~\eqref{UZPE}, \begin{align} \label{EgFinal} \begin{split} E_{g}(\varphi_x)/E_{\tilde L_c} & = \beta_c \cos_{\beta_c}(\varphi_x) + U_{\textrm{ZPE}}(\varphi_x) \\ & = \sum_{\nu} e^{i\nu \varphi_x} B_{\nu}\,, \end{split} \end{align} where \begin{equation} \label{interactionSeries} B_{\nu} = \left\{\begin{array}{cc} -\frac{\beta_c^2}{4}+ \zeta_c\left( G_{0}(\beta_c)- \beta_c G_{1}(\beta_c)\right) & \nu = 0\\ \frac{ J_{\nu}(\beta_c \nu)}{\nu^2} + \zeta_c \left(\sum_{\mu} \frac{\mu}{\nu} G_{\mu}(\beta_c) J_{\nu-\mu}(\beta_c \nu) \right)& \nu \neq 0 \end{array}\right.\,. \end{equation} With this result we can complete the Born-Oppenheimer Approximation: substituting for $\varphi_x = \varphi_{cx} - \sum_j \alpha_j \varphi_{j}$, the interaction potential mediated by the coupler is thus \begin{equation} \label{Hint} \hat H_{\textrm{int}} = E_{g}\left(\varphi_{cx} - \sum_{j} \alpha_j \hat \varphi_j\right) = E_{\tilde L_c} \sum_{\nu}B_{\nu} e^{i \nu \varphi_{cx}} e^{-i \nu \left(\sum_{j}\alpha_j \hat \varphi_j\right)} \,. \end{equation} We note that~\cite{Abramowitz1964}, since $J_{-\nu}(x) = J_{\nu}(-x) = (-1)^\nu J_\nu(x)$ and $G_{-\mu}(\beta_c) = G_{\mu}(\beta_c)$, the Fourier coefficients are symmetric, $B_\nu = B_{-\nu}$. Thus $\hat H_{\textrm{int}}$ is an Hermitian operator (as expected) and can be expressed as a Fourier cosine series. \begin{figure}[h] \includegraphics[width = \textwidth,trim={2cm 0 2cm 0},clip]{EgPlots.eps} \caption{a) Coupler ground state energy as a function of external flux bias, $\varphi_x$. Solid lines: exact ground state energy of $\hat H_c$ (equation~\eqref{Hc}) computed by diagonalizing in the first $50$ harmonic oscillator basis states. The coupler parameters correspond to $\beta_c = 0.95$ and $\zeta_c = 0.1$ (dark blue), $0.05$ (magenta), $0.01$ (light orange), respectively.
Dashed, black line: classical component of the coupler ground state energy, computed using the scalar function $U_{\textrm{min}}(\varphi_x) = \beta_c \cos_{\beta_c}(\varphi_x)$. b) Coupler zero-point energy as a function of external flux bias, $\varphi_x$. Solid lines: difference between the exact ground state energy $E_g/E_{\tilde L_c}$ (computed numerically as above) and the classical energy contribution, $U_{\textrm{min}}(\varphi_x)$. Overlaid dashed lines: linearized approximation to the coupler zero-point energy, computed using equation~\eqref{UZPE} and truncating the series at $|\nu|\leq \nu_{\textrm{max}} = 100$. The inset shows the same curves, restricted to the bias range $\varphi_x \in [0,0.03] \times 2 \pi$.} \label{EgPlots} \end{figure} We stress two important points related to equation \eqref{Hint}, which is the central result of our work. First, our result leads to quantitatively different predictions from previous treatments~\cite{Brink2005,Tian2008,Geller2015}. These expand the coupler ground state energy to second order in the flux variables $\hat \varphi_j$ to derive an `effective mutual inductance' between the qubits. As we shall see, the discrepancy between these results is most pronounced when the qubit-coupler interaction $\alpha_j$ is large or when the coupler nonlinearity $\beta_c$ approaches 1.\footnote{Note that as $\alpha_j$ or $\beta_c$ increase, one must also ensure that the Born-Oppenheimer Approximation remains valid.} Second, since the Fourier coefficients $B_\nu$ decay quickly to zero~\cite[equation 9.1.63]{Abramowitz1964}, the interaction $\hat H_{\textrm{int}}$ is a smooth, bounded function of the qubit flux operators. This remains true even in the regime of large coupler nonlinearity ($\beta_c \approx 1$), and it reinforces the physical intuition that the qubit-qubit coupling strength cannot diverge as $\beta_c\rightarrow 1$.\footnote{Indeed, since $\exp(-i \nu \sum_j \alpha_j \hat \varphi_j)$ is a unitary operator, every matrix element of $\hat H_{\textrm{int}}/E_{\tilde L_c}$ is bounded by $\sum_\nu |B_{\nu}|\leq \beta_c(1 + \beta_c/4) -\zeta_c\left(\sqrt{1-\beta_c} -G_0(\beta_c) + \beta_c G_1(\beta_c) \right)$. See Appendix Section~\ref{truncationError}.} We conclude this section by discussing the approximations used to reach equation~\eqref{Hint}. First, the Born-Oppenheimer Approximation is used to replace the coupler Hamiltonian with its ground state energy. This is equivalent to assuming the full system wavefunction (in the flux basis) is of the form \begin{equation} \label{BOansatz0} \Psi(\varphi_c,\bar \varphi_q,t) = \psi_{g}(\varphi_c; \bar \varphi_q) \, \chi(\bar \varphi_q,t)\,. \end{equation} Here the function $ \psi_{g}(\varphi_c; \bar \varphi_q)$ denotes the ground state of the coupler Hamiltonian $\hat H_c$, equation~\eqref{Hc}. Like $\hat H_c$, we view this wavefunction as parameterized by the qubit flux variables, $\bar \varphi_q = (\varphi_1,\varphi_2,\,...\,\varphi_k)$. Inserting this ansatz into the full Hamiltonian's ($\hat H_c + \sum_j \hat H_j$) Schroedinger equation, in Appendix Section~\ref{BOValidity} we integrate out the coupler degree of freedom and obtain a reduced equation of motion for just the qubit wavefunction, $\chi(\bar \varphi_q)$. Up to a small correction (discussed below), the resulting dynamics corresponds to an effective qubit Hamiltonian, $\hat H_{\textrm{BO}} = E_g(\hat \varphi_x) + \sum_j \hat H_j$.
Although intuitively similar, the ansatz wavefunction used above is distinct from standard adiabatic elimination\cite{Brion2007}, since that approximation accounts for virtual transitions into higher energy excited states. Born-Oppenheimer is a valid approximation when transitions out of the coupler ground state (the ansatz~\eqref{BOansatz0}) are suppressed. Heuristically, this holds when the characteristic qubit energy scale $\hbar \omega_q$ is much less than the gap between the coupler's ground and first excited state energies. For $\beta_c<1$ not too close to one, a useful estimate of this condition is \begin{equation} \hbar \omega_q \ll \frac{\hbar}{\sqrt{\tilde L_c C}}\sqrt{1 -\beta_c}\,, \end{equation} where on the right hand side we have approximated the coupler's energy gap by twice its (linearized) minimum zero-point energy\footnote{The right hand side is only an approximate lower bound for the coupler's energy gap, which in fact does not vanish as $\beta_c \rightarrow 1$.}. More concretely, there are two corrections to Born-Oppenheimer that determine when it breaks down. First, the Born-Oppenheimer Diagonal Correction~\cite{Tully1976,Valeev2003} is a direct modification to the coupler-mediated potential, $E_g(\hat \varphi_x)$, which requires no change to the ansatz wavefunction~\eqref{BOansatz0}. We analyze this correction in Appendix Section~\ref{BOValidity} and find that it is negligible for typical circuit parameters. More important are non-adiabatic corrections to Born-Oppenheimer, which are associated with transitions from the ansatz wavefunction $\psi_{g}(\varphi_c; \bar \varphi_q) \, \chi(\bar \varphi_q,t)$ to excited states of the coupler. We derive formal expressions for these corrections in Appendix Section~\ref{BONonadiabatic}, though due to their complexity we do not have concise analytical expressions bounding their size. Instead we have carried out a numerical study (Section~\ref{numStudy}) to validate our approximation for typical flux qubit circuit parameters. The second approximation used to derive equation~\eqref{Hint} is the harmonic approximation to the coupler's zero-point energy (equation~\eqref{linHam}). This is mainly a concern when the coupler bias is close to peak coupling, $\varphi_{cx} \approx 0$ (mod $2 \pi$), and the coupler nonlinearity $\beta_c$ approaches $1$ (cf. inset of Fig.~\ref{EgPlots}b); in that limit the harmonic approximation to the zero-point energy ($U_{\textrm{ZPE}} = \zeta_c \sqrt{1 - \beta_c \cos(\varphi_c^{(*)})}$) vanishes and the quartic correction to $\hat H_c$ becomes relevant. As we shall see below, the zero-point energy component of $E_g$ does have a non-negligible effect on the qubit dynamics, but for typical coupler impedances and non-zero bias $\varphi_{cx}$ the inaccuracy in the harmonic approximation is small (see also Fig.~\ref{ZPEComparison} in the Appendix). \section{Projection into the qubit basis} \label{reductionQubit} We now describe an efficient method for computing the qubit dynamics mediated by the coupler. It applies to any number of qubits interacting through a single coupler and arises from the generic qubit Hamiltonian derived in the previous section, \begin{equation} \label{HFull} \hat H = \hat H_{\textrm{int}} +\sum_{j =1}^k \hat H_{j}\,. \end{equation} Here $\hat H_{j}$ is the local Hamiltonian of qubit $j$ in the absence of the coupler and $\hat H_{\textrm{int}}$ is the general interaction Hamiltonian of equation \eqref{Hint}. 
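For concreteness, the Fourier data entering this method --- the functions $G_\mu(\beta_c)$ of equation~\eqref{UZPE} and the coefficients $B_\nu$ of equation~\eqref{interactionSeries} --- are straightforward to evaluate numerically. The following minimal sketch is our own illustration and not part of the derivation: it assumes Python with SciPy's \texttt{binom}, \texttt{hyp2f1}, and \texttt{jv}, and the truncation orders \texttt{mu\_max} and \texttt{nu\_max} are ad hoc choices; rigorous truncation bounds are derived in Appendix Section~\ref{truncationError}.
\begin{verbatim}
import numpy as np
from scipy.special import binom, hyp2f1, jv

def G(mu, beta):
    # G_mu(beta) via its closed form in terms of the Gauss hypergeometric
    # function; extended to mu < 0 through the symmetry G_{-mu} = G_mu.
    m = abs(mu)
    return ((-beta / 2)**m * binom(0.5, m)
            * hyp2f1(m / 2 - 0.25, m / 2 + 0.25, 1 + m, beta**2))

def B(nu, beta_c, zeta_c, mu_max=30):
    # Fourier coefficient B_nu of E_g / E_Lc (equation (interactionSeries)).
    if nu == 0:
        return -beta_c**2 / 4 + zeta_c * (G(0, beta_c) - beta_c * G(1, beta_c))
    zpe = sum(mu / nu * G(mu, beta_c) * jv(nu - mu, beta_c * nu)
              for mu in range(-mu_max, mu_max + 1))
    return jv(nu, beta_c * nu) / nu**2 + zeta_c * zpe

def Eg(phi_x, beta_c, zeta_c, nu_max=100):
    # E_g(phi_x) / E_Lc (equation (EgFinal)), using B_nu = B_{-nu}.
    return B(0, beta_c, zeta_c) + 2 * sum(
        B(nu, beta_c, zeta_c) * np.cos(nu * phi_x)
        for nu in range(1, nu_max + 1))
\end{verbatim}
Since both $G_\mu$ and $B_\nu$ decay exponentially, modest truncation orders should suffice in practice for $\beta_c$ not too close to one.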
Our method is based on the Fourier decomposition of $\hat H_{\textrm{int}}$, a sum of operators of the form ${\exp(-i \nu \sum_j \alpha_j \hat \varphi_j) = \prod_j \exp(-i \nu \alpha_j \hat \varphi_j)}$. This product form means we need only compute matrix elements of single qubit operators (cf. equation \eqref{cjDef}). Accordingly, the cost of this method scales only linearly in the number of distinct qubits. The effect of the local Hamiltonians $\hat H_j$ on the qubit dynamics is implementation dependent. To compute the dynamics induced by the coupler, we restrict our analysis to the `qubit subspace' of each qubit Hamiltonian. (Typically these are spanned by the ground and first excited state of $\hat H_{j}$.) We let $\ket{0}_j$ and $\ket{1}_j$ denote a basis for the local qubit subspace of $\hat H_{j}$. The projection operator into this space is then \begin{equation} \hat P_{j} = \ketbrad{0}_j + \ketbrad{1}_j \,. \end{equation} Within this convention we define the Pauli operators $(I,\sigma_x,\sigma_y,\sigma_z)$ in the usual way. We now consider the projection of the exponential operators used in the Fourier series description of $\hat H_{\textrm{int}}$ (equation~\eqref{Hint}). Written within the qubit subspace, we have: \begin{align} \label{expPhi} \begin{split} \hat P_{j}\, e^{-i s \hat \varphi_j}\, \hat P_{j} &= \sum_\eta c^{(j)}_\eta(s) \sigma_\eta^{(j)} \,, \end{split} \end{align} where $\eta \in \{ I,x,y,z\}$ indexes the identity operator and the three Pauli operators acting on qubit $j$. Using the identity \begin{equation} \label{pauliId} \mbox{tr}[\sigma_{\alpha}\sigma_{\beta} ]/2 = \delta_{\alpha \beta}\,, \end{equation} we see that \begin{equation} \label{cjDef} c_\eta^{(j)}(s) = \mbox{tr}\left[\frac{\sigma^{(j)}_\eta}{2}e^{-i s\hat \varphi_j} \right]\,, \end{equation} or more explicitly (and dropping the qubit index $j$), \begin{align} \label{cDef} \begin{split} c_I(s) & = \frac{\bra{0}e^{-i s \hat \varphi}\ket{0} + \bra{1}e^{-i s \hat \varphi}\ket{1}}{2}\\ c_x(s) & = \frac{\bra{0}e^{-i s \hat \varphi}\ket{1}+\bra{1}e^{-i s \hat \varphi}\ket{0}}{2} \\ c_y(s) & = i \frac{\bra{0}e^{-i s \hat \varphi}\ket{1}-\bra{1}e^{-i s \hat \varphi}\ket{0}}{2} \\ c_z(s) & = \frac{\bra{0}e^{-i s \hat \varphi}\ket{0} - \bra{1}e^{-i s \hat \varphi}\ket{1}}{2} \,. \end{split} \end{align} We note that in general these coefficients are complex valued and differ from qubit to qubit. To finish our analysis we also project $\hat H_{\textrm{int}}$ into the qubit subspace. We again write this projection as a sum of Pauli operators, \begin{equation} \label{HintProjection} \hat P_q \hat H_{\textrm{int}} \hat P_q = \sum_{\bar \eta} g_{\bar \eta} \, \sigma_{\bar \eta} \,, \end{equation} where $\hat P_q = \hat P_{1}\otimes\hat P_{2}\otimes\,...\,\otimes \hat P_{k}$ and the vector ${\bar \eta} = (\eta_1,\eta_2,\,...\,\eta_{k})$ denotes the corresponding product of Pauli operators, \begin{equation} \label{sigmaAlpha} \sigma_{\bar \eta} = \sigma_{\eta_1}^{(1)}\otimes \sigma_{\eta_2}^{(2)}\otimes\,...\,\otimes \sigma_{\eta_k}^{(k)} \,. 
\end{equation} With this decomposition we directly compute \begin{align} \label{gEta} \begin{split} g_{\bar \eta} & = \mbox{tr}\left[\frac{\sigma_{\bar \eta}}{2^k} \hat H_{\textrm{int}} \right]\\ & = E_{\tilde L_c} \sum_{\nu } \mbox{tr}\left[ \frac{\sigma_{\bar \eta}}{2^k} B_{\nu} e^{i \nu \varphi_{cx}} e^{-i \nu \left(\sum_{j}\alpha_j \hat \varphi_j\right)} \right]\\ & = E_{\tilde L_c} \sum_{\nu } B_{\nu} e^{i \nu \varphi_{cx}} \prod_{j = 1}^k \mbox{tr}\left[ \frac{\sigma_{\eta_j}^{(j)}}{2} e^{-i \nu \alpha_j \hat \varphi_j } \right] \\ & = E_{\tilde L_c} \sum_{\nu } B_\nu e^{i \nu \varphi_{cx}} \prod_{j = 1}^k c_{\eta_j}^{(j)}(\nu \alpha_j) \,. \end{split} \end{align} Each line of the above calculation follows from \eqref{pauliId}, \eqref{Hint}, \eqref{sigmaAlpha}, and \eqref{cjDef}, respectively. (This equation also encompasses the individual qubit operators induced by the presence of the coupler, e.g., for $\bar \eta = (x,I,I,\,...\,,I)$.) Thus the calculation of $g_{\bar \eta}$ reduces to computing the single qubit coefficients $c_{\eta_j}^{(j)}(\nu \alpha_j)$ and evaluating the sum in \eqref{gEta}. For realistic calculations the sum~\eqref{gEta} must be truncated at some maximum value $\nu_{\textrm{max}}$, though for $\beta_c < 1$ the truncation error decays rapidly with $\nu_{\textrm{max}}$ (since the functions defining $B_{\nu}$ decay exponentially in $\nu$, see \cite[equation 9.1.63]{Abramowitz1964}). We give a technique for bounding this error in Appendix Section~\ref{truncationError}. We remark that the reduction into the qubit subspace is actually an approximation of the qubit dynamics. This is because $\hat H_{\textrm{int}}$ generally has non-zero matrix elements between the qubit subspace $\mathcal{P}$ (represented by projector $\hat P_q$) and its complement, $\mathcal{Q}$. Hence the projection in equation~\eqref{HintProjection} is valid only in the limit that transitions into $\mathcal{Q}$ are suppressed. This occurs if there is a large energy gap between $\mathcal{P}$ and $\mathcal{Q}$, but unfortunately this is not always the case. For example, for three distinct qubits with low nonlinearity, it is possible to observe a resonance\footnote{The energy splittings $E_{mn}^{(j)} = E_m^{(j)} - E_n^{(j)}$ are defined with respect to the local qubit Hamiltonian, $\hat H_j$. } of the form $E_{20}^{(1)} = E_{10}^{(2)} + E_{10}^{(3)}$. The multi-qubit transition $\ket{g, e_1, e_1}\rightarrow \ket{e_2, g, g}$ (where $\ket{g}, \ket{e_m}$ denote the ground and $m$th excited state) thus conserves energy with respect to the local Hamiltonian $\sum_j \hat H_j$. Such accidental degeneracies can occur even in the highly nonlinear case where the qubit energies are far from evenly spaced. As long as these resonant transitions correspond to non-negligible matrix elements of $\hat H_{\textrm{int}}$, over time the composite qubit system can be mapped outside of the qubit subspace $\mathcal{P}$. One must therefore take special care to account for degeneracies when using equation~\eqref{gEta}, especially when more than two qubits interact through the same coupler. A standard technique accounting for the higher energy states is the Schrieffer-Wolff transformation\cite{Bravyi2011}. 
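Before elaborating on the Schrieffer-Wolff approach, we illustrate the projection concretely. The following sketch is our own illustrative Python code (the function names and basis dimension are ours): it builds a flux qubit Hamiltonian of the form used later in equation~\eqref{HQubit}, extracts the coefficients $c_\eta(s)$ of equation~\eqref{cDef} from the lowest two eigenstates, and sums the series~\eqref{gEta} using a routine for $B_\nu$ such as the one sketched in the previous section.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh, expm, cosm

def qubit_eigensystem(zeta, beta, phi_x=0.0, dim=80):
    # H_j / E_Lj (cf. equation (HQubit)) in the harmonic oscillator basis.
    a = np.diag(np.sqrt(np.arange(1, dim)), 1)   # annihilation operator
    phi = np.sqrt(zeta) * (a + a.T)              # flux, with [phi, q] = i
    q = 0.5j / np.sqrt(zeta) * (a.T - a)         # charge
    shift = phi - phi_x * np.eye(dim)
    H = 2 * zeta**2 * (q @ q).real + 0.5 * shift @ shift + beta * cosm(phi)
    evals, evecs = eigh(H)
    return evals, evecs, phi

def c_coeffs(s, evecs, phi):
    # Pauli coefficients c_eta(s) (equation (cDef)) from the 2x2 block
    # of exp(-i s phi) in the {|0>, |1>} eigenbasis.
    u = evecs[:, :2].conj().T @ expm(-1j * s * phi) @ evecs[:, :2]
    return {'I': (u[0, 0] + u[1, 1]) / 2, 'x': (u[0, 1] + u[1, 0]) / 2,
            'y': 1j * (u[0, 1] - u[1, 0]) / 2, 'z': (u[0, 0] - u[1, 1]) / 2}

def g_coeff(etas, alphas, phi_cx, beta_c, zeta_c, qubits, nu_max=100):
    # g_eta (equation (gEta)) in units of E_Lc; `qubits` is a list of
    # (evecs, phi) pairs and B(nu, ...) supplies the coefficients of
    # equation (interactionSeries), using B_nu = B_{-nu}.
    total = 0j
    for nu in range(-nu_max, nu_max + 1):
        term = B(abs(nu), beta_c, zeta_c) * np.exp(1j * nu * phi_cx)
        for eta, alpha, (evecs, phi) in zip(etas, alphas, qubits):
            term = term * c_coeffs(nu * alpha, evecs, phi)[eta]
        total += term
    return total
\end{verbatim}
For the symmetric combinations considered in this paper, the imaginary part of the returned value should vanish up to numerical noise.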
The Schrieffer-Wolff treatment is based on algebraic transformations acting on a Hilbert space with more than four states, so applying it to continuous variable circuits would likely preclude analytical results like those we have obtained with the Born-Oppenheimer Approximation\footnote{The Schrieffer-Wolff transformation is not equivalent to the standard Born-Oppenheimer Approximation applied in our text. Indeed, while the former explicitly depends on matrix elements involving higher energy excited states, the latter is only explicitly dependent on the (scalar) ground state energy of the coupler degree of freedom.}. A practical approach would be to use the Schrieffer-Wolff transformation to account for the higher energy qubit states after using the Born-Oppenheimer Approximation to account for the coupler. This has the advantage of first removing the coupler Hilbert space, which greatly reduces the numerical cost of applying Schrieffer-Wolff. We note that result~\eqref{gEta} in principle allows for couplings absent in linear theories describing $\hat H_{\textrm{int}}$. For example, it predicts non-zero $k$-body ($k>2$) couplings between multiple qubits, which could be a powerful feature in a quantum annealer where `tall and narrow' potential barriers allow quantum tunneling to outperform classical counterparts~\cite{Boixo2016}. From a quantum information perspective it would also be interesting to engineer tunable non-commuting couplings, for example $\sigma_x\otimes\sigma_x$ and $\sigma_x\otimes\sigma_z +\sigma_z\otimes\sigma_x$. Interactions of this second type are non-stoquastic, i.e., they may have positive off-diagonal elements in any computational basis. These are believed necessary to observe exponential quantum speedups over classical algorithms~\cite{Bravyi2008,Biamonte2008}. The analytic derivation presented in this paper makes it possible to consider inductive couplings that implement such non-stoquastic terms. We consider these kinds of couplings in Section~\ref{kLocalNonStoquastic}. \section{Two qubit case and linear approximations} In this section we limit our consideration to the case of two coupled flux qubits (Fig.~\ref{diagram2Qubits}). To compare our analysis to previous work, we linearize the coupler-mediated interaction potential $E_g(\varphi_{x})$ (equation~\eqref{EgFinal}) about the {\it qubit} degrees of freedom and show that it reproduces the standard picture of an effective mutual inductance mediated by the coupler~\cite{Brink2005,Tian2008,Geller2015}. This result is perturbative in the qubit-coupler interaction strength $\alpha_j = M_j/L_j$ and is therefore equivalent to the weak coupling limit. In the subsequent section we will compare the predictions of this linear theory to our nonlinear result. We conclude this section with a different treatment of the qubit-qubit coupling, valid when the qubit basis states have a definite parity. Interestingly, where the linear theory treats the coupling in terms of the second derivative of $E_g$, this (more precise) theory expresses it as a second order {\it finite difference}\cite{Averin2003}. This distinction between continuous and discrete derivatives allows us to bound the error between the linear and nonlinear theories of the previous section. \subsection{Flux qubit Hamiltonian} \label{fluxQubit} \begin{figure} \includegraphics{diagram2Qubit.eps} \caption{Standard flux qubits with interaction mediated by an inductive coupler.} \label{diagram2Qubits} \end{figure} We begin by describing the flux qubit Hamiltonian. 
The circuit diagrams of these qubits are identical to those of the coupler, though their characteristic frequencies are necessarily smaller. Like the coupler, they are characterized by three parameters\footnote{Other forms of flux qubit also exist~\cite{Mooij1999,Koch2007,Yan2016}. Our analysis can be similarly applied in these cases, with resulting numerical examples showing the same qualitative trends.}: \begin{align} \begin{split} E_{L_j} & = \frac{ (\Phi_0/2\pi)^2}{L_j}\\ \zeta_j & = \frac{2 \pi e}{\Phi_0}\sqrt{\frac{ L_j}{C_j}}= 4 \pi Z_j/R_K \\ \beta_j & = \frac{2 \pi}{\Phi_0} L_j I_{j}^{(c)}= E_{J_j}/E_{L_j} \,. \end{split} \end{align} Here $E_{L_j}$ represents the characteristic energy of the qubit's linear inductor and the dimensionless parameter $\zeta_j$ represents its characteristic impedance. These parameters are related to the $LC$ plasma frequency through ${f_{LC,j} = \frac{1}{2 \pi\sqrt{L_j C_j}} = 2 \zeta_j E_{L_j}/h}$. For typical flux qubit implementations of this type~\cite{Harris2010,Quintana2017} $E_{L_j}/h$ is on the order of hundreds of GHz while $\zeta_j$ is between $0.01$ and $0.1$, so that $f_{LC,j}$ ranges from a few to tens of GHz. The parameter $\beta_j$ represents the nonlinearity in the qubit circuit due to the Josephson element. This parameter can vary between circuit designs and, unlike for the coupler, it is relevant within our analysis to consider regimes where $\beta_j>1$ (corresponding to a multi-well potential). The qubit Hamiltonian has an identical form to the coupler Hamiltonian of equation~\eqref{Hc}, \begin{equation} \label{HQubit} \hat H_j = E_{L_j}\left(4 \zeta_j^2 \frac{\hat q_j^2}{2} + \frac{(\hat \varphi_j - \varphi_{jx})^2}{2}+\beta_j \cos(\hat \varphi_j ) \right)\,, \end{equation} where the qubit charge and flux variables satisfy $[\hat \varphi_j,\hat q_j] = i$ and $\varphi_{jx}$ denotes an external flux bias. In the following sections, the basis we use for the qubit subspace is the ground and first excited state of $\hat H_j$. \subsection{Linearization of the ground state energy} \label{EgLinearized} To linearize the qubit-qubit interaction potential we assume the weak coupling limit, $\alpha_j = M_j/L_j \ll 1$. This allows us to expand the coupler's ground state energy to second order in $\alpha_j$, leading to a quadratic interaction within the Born-Oppenheimer Approximation. To begin, we use equations~\eqref{EgFinal} and~\eqref{HQubit} to write the full Hamiltonian for the system, \begin{equation} \label{2QubitHam} \sum_{j} \hat H_j + E_g(\hat \varphi_x)\,, \end{equation} where $\hat \varphi_x$ is defined as \begin{equation} \label{varphiX2Qubit} \hat \varphi_x = \varphi_{cx} -\alpha_1 \hat \varphi_1 - \alpha_2 \hat \varphi_2\,, \end{equation} and \begin{align} \label{U12} E_{g}(\varphi_x)/E_{\tilde L_c} = \beta_c \cos_{\beta_c}(\varphi_x)+ \zeta_c \sqrt{1 - \beta_c \cos(\varphi_x + \beta_c \sin_{\beta_c}(\varphi_x))} \,. \end{align} Here $\varphi_{cx}$ denotes the external flux applied to the coupler's inductive loop. We have also used equation~\eqref{UZPEfirst} for the definition of the zero-point energy (it will not be necessary to compute its Fourier series) and substituted equation~\eqref{varphicstar} for $\varphi_c^{(*)}$. We now expand the interaction potential $E_g(\varphi_x)$ to second order in the mutual inductance parameters $\alpha_j$ (i.e., about the point $\left. \varphi_x \right|_{\alpha_j = 0} = \varphi_{cx}$). Using the fact that $\pdp{\hat \varphi_x}{\alpha_j} = - \hat \varphi_j$ (cf. 
equation~\eqref{varphiX2Qubit}), from equation~\eqref{2QubitHam} we compute the effective Hamiltonian \begin{align} \label{Heff} \begin{split} \hat H_{\textrm{eff}} & = \sum_j \hat H_j + \left( E_g'(\varphi_{cx})(\hat \varphi_x - \varphi_{cx}) + \frac{1}{2}E_g''(\varphi_{cx})(\hat \varphi_x - \varphi_{cx})^2 \right) + O(\alpha^3) \\ &= \sum_j \left(\hat H_j - \alpha_j E_g'(\varphi_{cx})\hat \varphi_j\right) + \frac{1}{2} E_g''(\varphi_{cx}) \sum_{j,k} \alpha_j \alpha_k \hat \varphi_j \hat \varphi_k + O(\alpha^3)\,. \end{split} \end{align} We use equations~\eqref{U12} and~\eqref{sinBetaDeriv} to compute the dependence of these terms on the coupler bias $\varphi_{cx}$, \begin{align} \label{U12p} E_g'(\varphi_{cx})/E_{\tilde L_c} & = - \beta_c \sin_{\beta_c}(\varphi_{cx})\left(1 - \frac{\zeta_c}{2}\left(1 - \beta_c \cos(\varphi_{cx} + \beta_c \sin_{\beta_c}(\varphi_{cx})) \right)^{-3/2} \right)\\ \begin{split} \label{U12pp} E_g''(\varphi_{cx})/E_{\tilde L_c} = & - \frac{\beta_c \cos(\varphi_{cx} + \beta_c \sin_{\beta_c}(\varphi_{cx}))}{1 - \beta_c \cos(\varphi_{cx} + \beta_c \sin_{\beta_c}(\varphi_{cx}))}\\ & + \zeta_c \beta_c \left(\frac{\cos(\varphi_{cx} + \beta_c \sin_{\beta_c}(\varphi_{cx})) - \beta_c - \beta_c\sin_{\beta_c}^2(\varphi_{cx})/2}{ 2\left(1 - \beta_c \cos(\varphi_{cx} + \beta_c \sin_{\beta_c}(\varphi_{cx}))\right)^{7/2}}\right)\,. \end{split} \end{align} The first order terms in equation~\eqref{Heff} (proportional to $E_g'$) correspond to local fields acting on individual qubits, while the second order terms are equivalent to an effective mutual inductance between the qubits. Note that we have neglected the constant term $E_g(\varphi_{cx})$ since it has a trivial effect on the qubit dynamics\footnote{On the other hand, it was not valid to ignore the potential minimum when we computed the ground state energy of the coupler. In that case the potential minimum $U_{\textrm{min}}(\varphi_x)$ varied with the qubit flux variables, whereas here it is completely independent of the qubits' state.}. Let us compare the local field terms in equation~\eqref{Heff} to the quantum treatment in Ref.~\cite[Section 4]{Brink2005}. These terms ($\propto E_g'$) can be incorporated into each qubit Hamiltonian as a shift in its external flux bias, \begin{align} \begin{split} \varphi_{jx} &\rightarrow \varphi_{j x} + \delta \varphi_{j x}\\ \delta \varphi_{j x} &= - \alpha_j \frac{E_g'(\varphi_{cx})}{E_{L_j}} \\ &= - \frac{M_j}{\tilde L_c}E_g'(\varphi_{cx})/E_{\tilde L_c}\\ & = \frac{2 \pi}{\Phi_0} M_j I_c \,. \end{split} \end{align} In the last line we equated our result to equation~(44) of Ref.~\cite{Brink2005}, which identifies $\delta \varphi_{j x }$ with the current through the coupler's inductor. Indeed, rearranging terms and using $\beta_c = \frac{2 \pi}{\Phi_0} \tilde L_c I_c^{(c)}$ and equation~\eqref{U12p}, we get \begin{equation} I_c = I_{c}^{(c)}\sin_{\beta_c}(\varphi_{cx})\left(1 + O(\zeta_c) \right)\,. \end{equation} As expected, the first ($\zeta_c$-independent) term is exactly the current flowing through the coupler's Josephson junction. On the other hand, the second term (proportional to $\zeta_c$) has an inherently quantum origin: the coupler's zero-point energy (equation~\eqref{EgFinal}). The description of the coupling terms ($\propto E_g''$) in $\hat H_{\textrm{eff}}$ is analogous to that of the local fields. 
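Both the local field and coupling terms above require evaluating $E_g'$ and $E_g''$ at the coupler bias. As an illustration (our own sketch, with no claim to being the procedure used for the figures), equations~\eqref{U12p} and~\eqref{U12pp} can be evaluated by first solving the classical minimum condition $\varphi_c^{(*)} = \varphi_x + \beta_c \sin(\varphi_c^{(*)})$ and using $\sin_{\beta_c}(\varphi_x) = \sin(\varphi_c^{(*)})$:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def phi_c_star(phi_x, beta_c):
    # Classical minimum of the coupler potential: the unique root of
    # phi_c = phi_x + beta_c*sin(phi_c) (cf. equation (varphicstar));
    # the bracket is valid for beta_c < 1.
    return brentq(lambda p: p - phi_x - beta_c * np.sin(p),
                  phi_x - np.pi, phi_x + np.pi)

def Eg_derivs(phi_cx, beta_c, zeta_c):
    # E_g'(phi_cx) and E_g''(phi_cx) in units of E_Lc, transcribing
    # equations (U12p) and (U12pp).
    star = phi_c_star(phi_cx, beta_c)
    s, c = np.sin(star), np.cos(star)
    d1 = -beta_c * s * (1 - 0.5 * zeta_c * (1 - beta_c * c)**-1.5)
    d2 = (-beta_c * c / (1 - beta_c * c)
          + zeta_c * beta_c * (c - beta_c - 0.5 * beta_c * s**2)
          / (2 * (1 - beta_c * c)**3.5))
    return d1, d2
\end{verbatim}
For $\beta_c < 1$ the bracketing interval $[\varphi_x - \pi, \varphi_x + \pi]$ is guaranteed to contain exactly one root, so the solver cannot fail there.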
Writing the qubit `current operator' as $\hat I_j = \frac{\Phi_0}{2 \pi L_j} \hat \varphi_j$, the interaction in equation~\eqref{Heff} is described in terms of an effective mutual inductance~\cite{Brink2005}, \begin{equation} \label{MEffDef} E_g''(\varphi_{cx}) \alpha_1 \alpha_2 \hat \varphi_1 \hat \varphi_2 = \left(M_{1} M_2 \chi_c\right) \hat I_1 \hat I_2 \,, \end{equation} where the coupler's linear susceptibility is \begin{equation} \chi_c = \frac{1}{\tilde L_c} E_g''(\varphi_{cx})/E_{\tilde L_c}\,. \end{equation} As with the coupler current $I_c$, the first term describing $\chi_c$ (cf. equation~\eqref{U12pp}) is in agreement with previous works~\cite{Brink2005,Tian2008} and corresponds to an essentially classical treatment. Again, the $\zeta_c$-dependent term is an added quantum contribution due to the coupler's zero-point energy. Finally, we note that equation~\eqref{Heff} also includes corrections proportional to $\chi_c \hat\varphi_{j}^2$. These are a source of `nonlinear cross talk' typical in flux qubit experiments and have the effect of shifting each qubit's linear inductance (and therefore energy gap)~\cite{Harris2010,Allman2010,Chen2014}. To calculate the qubit dynamics within the linear theory, we project the coupler-dependent terms of $\hat H_{\textrm{eff}}$ (equation~\eqref{Heff}) into the qubit subspace. We define the basis for this subspace as the ground and first excited state of the qubit Hamiltonian, $\hat H_j$. The local and coupling terms then become \begin{align} \label{gEtaLin} \begin{split} g_{\eta_1 \eta_2}^{\textrm{lin}} & =\tr{\frac{\sigma_{\eta_1}^{(1)}\otimes \sigma_{\eta_{2}}^{(2)}}{4} \left(E_g'(\varphi_{cx})\left(\alpha_1 \hat \varphi_1 + \alpha_2 \hat \varphi_2 \right) +\frac{1}{2}E_g''(\varphi_{cx})\left(\alpha_1 \hat \varphi_1 + \alpha_2 \hat \varphi_2 \right)^2 \right)}\,, \end{split} \end{align} where $E_g'$ and $E_g''$ are defined in equations~\eqref{U12p} and~\eqref{U12pp}. For the interaction term $\sigma_x^{(1)}\otimes \sigma_x^{(2)}$, this expression simplifies to \begin{align} \label{gxxlin} \begin{split} g_{x x}^{\textrm{lin}} &= E_g''(\varphi_{cx}) \alpha_1 \alpha_2 \bra{ 0 0}\hat \varphi_1 \hat \varphi_2 \ket{1 1} \\ & = \chi_c(\varphi_{cx}) M_1 M_2 I_{p}^{(1)} I_{p}^{(2)}\,, \end{split} \end{align} where we have used equation~\eqref{MEffDef} and defined the persistent current\footnote{In the absence of bias $\varphi_{j x}$, the $\hat H_j$ eigenstates have either even or odd parity wave-functions. This is in contrast to the `persistent current' basis commonly used in double-well flux qubits, which corresponds to $\ket{\pm} = \frac{1}{\sqrt{2}}(\ket{0} \pm \ket{1})$. In that case, we would interchange $\sigma_x \leftrightarrow \sigma_z$ and redefine $I_p^{(j)}\rightarrow \frac{1}{2}\left((\hat I_j)_{00} - (\hat I_j)_{11}\right)$. }, \begin{equation} I_{p}^{(j)} = (\hat I_j)_{01} = \frac{\Phi_0}{2 \pi L_j} \bra{0} \hat \varphi_j \ket{1}\,. \end{equation} A similar calculation can be carried out for the local field terms. We stress that equations~\eqref{U12p} and~\eqref{U12pp} are approximations. This is because, as with the nonlinear theory, the coupler's zero-point energy (the second term in equation~\eqref{U12}) is obtained by linearizing the coupler Hamiltonian about its classical minimum point. Indeed, the zero-point energy contributions ($\propto \zeta_c$) diverge even more rapidly as $\beta_c \rightarrow 1$ (for $\varphi_{cx} = 0$). 
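Assembling the pieces, the linear theory coupling~\eqref{gxxlin} reduces to a few lines of code. The sketch below is again our own illustration; it reuses the hypothetical helpers \texttt{Eg\_derivs} and \texttt{qubit\_eigensystem} from the earlier sketches, and the default energy ratio $E_{\tilde L_c}/E_{L_j} = 3$ matches the reference regime of Section~\ref{numStudy}.
\begin{verbatim}
def g_xx_lin(phi_cx, beta_c, zeta_c, alpha, zeta_j, beta_j, E_ratio=3.0):
    # Linear theory coupling (equation (gxxlin)) for identical qubits,
    # in units of E_Lj: g_xx_lin = E_g''(phi_cx) * (alpha * phi_p)^2.
    _, d2 = Eg_derivs(phi_cx, beta_c, zeta_c)
    _, evecs, phi = qubit_eigensystem(zeta_j, beta_j)
    phi_p = evecs[:, 0] @ phi @ evecs[:, 1]      # <0| phi |1>
    return E_ratio * d2 * (alpha * phi_p)**2
\end{verbatim}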
As an alternative to this approximation, it is possible to compute $E_g'$ and $E_g''$ numerically using standard perturbation theory. Specifically, for any eigenstate $\ket{\psi_m}$ of $\hat H_c$ (parameterized by $\varphi_x$) with eigenvalue $E_m$, we observe that \begin{align} \label{EigenDeriv} \begin{split} \partial_{\varphi_x} E_m /E_{\tilde L_c}& = \bra{\psi_m} \left( \partial_{\varphi_x} \hat H_c /E_{\tilde L_c} \right) \ket{\psi_m} \\ &= \bra{\psi_m} \left( \varphi_x - \hat \varphi_c \right) \ket{\psi_m} \\ \ket{\partial_{\varphi_x}\psi_m} & = - (E_m - \hat H_c)^{-1} \partial_{\varphi_x}\left((E_m - \hat H_c) \right) \ket{\psi_m}\\ & = -\frac{E_{\tilde L_c}}{E_m - \hat H_c} \hat \varphi_c \ket{\psi_m}\,. \end{split} \end{align} (Here $(E_m - \hat H_c)^{-1}$ denotes the pseudo-inverse, which vanishes on $\ket{\psi_m}$.) Carrying out the second derivative for $m = g$ then gives \begin{equation} \label{EigenDeriv2} \partial_{\varphi_x}^2 E_g(\varphi_x)/E_{\tilde L_c} = 1 + 2 \bra{\psi_g} \hat \varphi_c \frac{E_{\tilde L_c}}{E_g - \hat H_c} \hat \varphi_c \ket{\psi_g}\,. \end{equation} Thus the first and second derivatives of $E_g$ can be obtained by diagonalizing $\hat H_c$ and performing the above matrix operations. While this calculation exactly accounts for the coupler's zero-point energy, it is computationally more expensive than the analytic theories. \subsection{Coupling as a finite difference and errors in the linear theory} \label{FDCoupling} We now derive an approximate expression for the qubit-qubit coupling that is more refined than the linear approximation. What results is a nonlinear function of the first and second moments of the qubit flux variables. Whereas the linear theory coupling is proportional to the second derivative of the coupler energy ($E_g''$), this approximation expresses the coupling as a second order finite difference\cite{Averin2003}. It thus accounts for higher orders in the Taylor series of $E_g$. This produces a more accurate approximation in the strong coupling limit that does not diverge as $\beta_c \rightarrow 1$. This analysis will also allow us to bound the error in the (analytic) linear theory. We start by defining the `qubit subspace' of the qubit Hamiltonians. We set the basis as the ground and first excited state of each qubit's Hamiltonian. For simplicity, we assume identical qubits and also that the qubits' local potential energy functions are symmetric (e.g., zero external bias in equation~\eqref{HQubit}). This is reflected in the symmetry of the ground and excited state wave-functions. The wave-functions can then be written in terms of a reference wave-function, \begin{equation} \label{qubitWF} \braket{\varphi}{j} = \frac{\psi_r(\varphi - \varphi_p) + (-1)^j \psi_r(-\varphi - \varphi_p)}{\sqrt{2}} \,, \end{equation} where $j = 0,1$ denotes the eigenstate index -- as well as the parity -- of each wave-function. The (normalized) reference wave-function $\psi_r(\varphi - \varphi_p) = \frac{1}{\sqrt{2}}(\braket{\varphi}{0} + \braket{\varphi}{1})$ is defined with respect to an offset $\varphi_p$ so that it is approximately centered at the origin, \begin{align} \begin{split} \int\mbox{d} \varphi \, \psi_r^2(\varphi) & = 1 \\ \int\mbox{d} \varphi \, \psi_r^2(\varphi)\, \varphi & = 0 \,. \end{split} \end{align} The flux offset $\varphi_p$ in equation~\eqref{qubitWF} is typically associated with the persistent current of the flux qubit, \begin{equation} \varphi_p = \bra{0} \hat \varphi \ket{1} = \frac{2 \pi}{\Phi_0} L_j I_p \,. 
\end{equation} In the case of a two-well qubit potential, we can intuitively think of $\psi_r(\varphi- \varphi_p)$ as having a single peak approximately centered at one of the local minima (near the point $\varphi = \varphi_p$). It will also prove useful to consider the second moment of $\hat \varphi$, \begin{equation} 2 \zeta_{\textrm{eff}} \equiv \int\mbox{d} \varphi \, \psi_r^2(\varphi) \varphi^2 = \frac{ \bra{0} (\hat \varphi - \varphi_p)^2 \ket{0} + \bra{1} (\hat \varphi - \varphi_p)^2 \ket{1} }{2} \,. \end{equation} The effective impedance $\zeta_{\textrm{eff}}$ thus determines the characteristic width of $\psi_r$.\footnote{In the harmonic limit $\beta_j = 0$ (cf. equation~\eqref{HQubit}), this definition of the effective impedance coincides with the qubit impedance, $\zeta_{j} = \zeta_{\textrm{eff}}$.} We now express the $xx$ coupling predicted by our nonlinear theory in terms of the reference wave-function. Since the eigenstate wave-functions are real valued, this coupling is equal to the matrix element $\bra{00} \hat H_{\textrm{int}} \ket{11}$. Using $\hat H_{\textrm{int}} = E_g(\varphi_{cx} - \alpha(\hat \varphi_1 + \hat \varphi_2))$, we substitute equation~\eqref{qubitWF} and integrate over the flux variables to get \begin{align} \begin{split} g_{xx} = & \int \mbox{d} \varphi_1 \, \mbox{d}\varphi_2 \, \braket{0}{\varphi_1}\braket{\varphi_1}{1}\braket{0}{\varphi_2}\braket{\varphi_2}{1} E_g(\varphi_{cx} - \alpha( \varphi_1 + \varphi_2)) \\ = & \frac{1}{4}\int \mbox{d} \varphi_1 \, \mbox{d}\varphi_2 \, \left( \psi_r^2(\varphi_1 - \varphi_p) - \psi_r^2(\varphi_1 + \varphi_p) \right) \left( \psi_r^2(\varphi_2 - \varphi_p) - \psi_r^2(\varphi_2 + \varphi_p) \right)\\ & \quad \times E_g(\varphi_{cx} - \alpha( \varphi_1 + \varphi_2)) \\ = & \frac{1}{4}\int \mbox{d} \varphi_1 \, \mbox{d}\varphi_2 \, \psi_r^2(\varphi_1 ) \psi_r^2(\varphi_2 ) \, E_g^{FD}(\varphi_{x})\,. \end{split} \end{align} In the last line we have shifted the flux variables $\varphi_1,\varphi_2$ by $\pm \varphi_p$ and introduced the second order finite difference of $E_g$, \begin{equation} \label{EgFD} E_g^{FD}(\varphi_{x} ) = E_g(\varphi_{x} + 2 \alpha \varphi_p) + E_g(\varphi_{x} - 2 \alpha \varphi_p) - 2 E_g(\varphi_{x} )\,, \end{equation} where again we have written the total external coupler flux as $$\varphi_x = \varphi_{cx} - \alpha(\varphi_1 + \varphi_2)\,.$$ Introducing the notation $\Mean{f(\hat \varphi_1, \hat \varphi_2)}_{r,r} = \int \mbox{d} \varphi_1 \, \mbox{d}\varphi_2 \, \psi_r^2(\varphi_1 ) \psi_r^2(\varphi_2 ) f(\varphi_1,\varphi_2)$, we see that the coupling $g_{x x}$ can be written as the average of the finite difference of $E_g$ with respect to the reference wave-function $\psi_r$, \begin{equation} \label{gxxFD} g_{xx} = \frac{1}{4}\Mean{E_g^{FD}(\hat \varphi_x )}_{r,r}\,. \end{equation} This definition for $g_{xx}$ is equivalent to the nonlinear theory result, equation~\eqref{gEta}. We can approximate the coupling by assuming the reference wave-function $\psi_r(\varphi)$ is a Gaussian. Since its first two moments satisfy $\Mean{\hat \varphi}_r = 0$ and $\Mean{\hat \varphi^2}_r = 2 \zeta_{\textrm{eff}}$, we have \begin{equation} \psi_{r}^{\textnormal{Gauss}}(\varphi) = (2 \pi \zeta_{\textrm{eff}})^{-1/4} \exp\left(-\frac{\varphi^2}{4 \zeta_{\textrm{eff}}}\right)\,. 
\end{equation} Substituting the explicit Fourier series~\eqref{EgFinal} into equation~\eqref{gxxFD} then gives a sum of Gaussian integrals, \begin{align} \label{gxxGauss} \begin{split} g_{xx}^{\textrm{Gauss}} & = \frac{E_{\tilde L_c}}{4}\sum_{\nu} B_{\nu} e^{i \nu \varphi_{cx} } \left( e^{i \nu 2 \alpha \varphi_p } + e^{-i \nu 2 \alpha \varphi_p} - 2 \right) \Mean{ e^{- i \nu \alpha(\hat \varphi_1 + \hat \varphi_2)}}_{r,r}\\ & = -E_{\tilde L_c}\sum_{\nu} B_{\nu} e^{i \nu \varphi_{cx}} \sin^2(\nu \alpha \varphi_p) e^{-\alpha^2 \nu^2 \zeta_{\textrm{eff}}}\,. \end{split} \end{align} This approximation still incorporates higher order corrections in $\alpha_j$ while avoiding the need to compute any matrix elements beyond those entering $\varphi_p$ and $\zeta_{\textrm{eff}}$. We can recover the linear theory result of the previous section by applying two approximations to equation~\eqref{gxxFD}. First, we notice that $E_g^{FD}(\varphi_{x} )/ (2 \alpha \varphi_p)^2$ is the finite difference approximation to the second derivative, \begin{equation} \label{EgFDApprox} E_g^{FD}(\varphi_x) = E_g''(\varphi_{x})(2 \alpha \varphi_p)^2 + R_{1}\,, \end{equation} where the remainder term $R_{1}$ is bounded by\footnote{This bound can be derived by Taylor expanding $E_g(\varphi_x \pm 2 \alpha \varphi_p)$ to third order and using the Lagrange form for the (fourth order) remainder. Substituting into equation~\eqref{EgFD} causes the zeroth, first, and third order terms to cancel.} \begin{align} \label{R1Bound} \begin{split} |R_{1}| & \leq 2 \frac{(2 \alpha \varphi_p)^4}{4!} \max_{ |\delta \varphi_x| \leq 2 \alpha \varphi_p} |E_g^{(4)}(\varphi_x + \delta \varphi_x)|\\ & \leq 2 \frac{(2 \alpha \varphi_p)^4}{4!} \max_{ \varphi_x } |E_g^{(4)}(\varphi_x)|\,. \end{split} \end{align} Next, we expand $E_g''(\varphi_x)$ to first order about the point $\varphi_x = \varphi_{cx}$, \begin{equation} \label{DDEgApprox} E_g''(\varphi_{x}) = E_g''(\varphi_{cx}) - \alpha (\varphi_1 + \varphi_2) E_g^{(3)}(\varphi_{cx}) + R_{2}\,, \end{equation} where the second remainder term is similarly bounded by \begin{align} \label{R2Bound} \begin{split} |R_{2}| & \leq \frac{ \alpha^2 (\varphi_1+\varphi_2)^2}{2} \max_{ |\delta \varphi_x| \leq |\alpha(\varphi_1 + \varphi_2) | } |E_g^{(4)}(\varphi_{cx} + \delta \varphi_x)|\\ & \leq \frac{ \alpha^2 (\varphi_1+\varphi_2)^2}{2} \max_{ \varphi_x } |E_g^{(4)}(\varphi_{x}) |\,. \end{split} \end{align} Finally, we substitute equations \eqref{EgFDApprox} and \eqref{DDEgApprox} into \eqref{gxxFD} to get\footnote{The third derivative term vanishes since the reference function is centered at zero, $\Mean{\hat \varphi_1 + \hat \varphi_2}_{r,r} = 0$.} \begin{equation} g_{xx} = \frac{1}{4} \left((2 \alpha \varphi_p)^2 E_g''(\varphi_{cx}) + \Mean{(2 \alpha \varphi_p)^2 \hat R_2 + \hat R_1}_{r,r} \right)\,. \end{equation} The first term on the right hand side is exactly the linear theory result $g_{xx}^{\textrm{lin}}$, equation~\eqref{gxxlin}. Using equations~\eqref{R1Bound} and~\eqref{R2Bound} we can also bound the error in the linear theory, \begin{equation} |g_{xx} - g_{xx}^{\textrm{lin}}| \leq \alpha^4 \varphi_p^2 \left(2 \zeta_{\textrm{eff}} + \frac{1}{3} \varphi_p^2\right) \max_{ \varphi_x } |E_g^{(4)}(\varphi_{x}) |\,. 
\end{equation} Further, if we only consider the classical part of $E_g(\varphi_x)$ (i.e., set $\zeta_c \rightarrow 0$), it is straightforward but tedious\footnote{Take two derivatives of~\eqref{U12pp} using~\eqref{sinBetaDeriv}.} to compute the maximum of $E_g^{(4)}(\varphi_{x})$, \begin{equation} \max_{ \varphi_x }|E_g^{(4)}(\varphi_{x})| \stackrel{\zeta_c = 0}= |E_g^{(4)}(0)| = \frac{E_{\tilde L_c} \beta_c}{(1- \beta_c)^4}\,. \end{equation} Hence, assuming the quantum correction to $E_g$ is small, $g_{xx}^{\textrm{lin}}$ approximates $g_{xx}$ well in the limits \begin{equation} \label{linApproxErrorBound} E_{\tilde L_c }\beta_c \left(\frac{\alpha}{1 - \beta_c}\right)^4 \varphi_p^2 \left(2 \zeta_{\textrm{eff}} + \frac{1}{3} \varphi_p^2\right) \ll |g_{xx}^{\textnormal{lin}}|\,. \end{equation} This affirms the physical intuition regarding the validity of the linear, analytic approximation: it is comparable to the nonlinear theory in the limits of weak qubit-coupler interaction ($\alpha = M_j/L_j \ll 1$), small qubit persistent current ($I_p \propto \varphi_p \ll 1$), and/or coupler nonlinearity $\beta_c$ not too close to one. \section{Numerical study} \label{numStudy} We have carried out a numerical study to evaluate the different approximations described in the text. Our first goal is to validate the Born-Oppenheimer Approximation; we numerically test the breakdown of this approximation in Section~\ref{BOBreakdown}. The following subsection then focuses on the different theories used to approximate the coupler ground state energy. The main result of our work is the exact, analytic expression for the classical part of $E_g$ (i.e., the classical minimum of $\hat H_c$) combined with the harmonic approximation to the coupler zero-point energy (equation~\eqref{linHam}). We refer to this treatment as {\it nonlinear, analytic} (NA) since it expresses $E_g$ as a Fourier series in $\varphi_x$. As a simplification, we may Taylor expand our approximate expression to second order about the point $\varphi_{x} = \varphi_{cx}$ (i.e., $\alpha_j =0$) to get a {\it linear, analytic} (LA) form for $E_g$. Alternatively, instead of using the analytic expressions for the first and second derivatives of $E_g$, we may compute them numerically about $\varphi_{x} = \varphi_{cx}$ using perturbation theory (see equation~\eqref{EigenDeriv}). We call this approximation to $E_g$ the {\it linear, numerical} (LN) theory. Our numerics will focus on distinguishing these theories. Specifically, we investigate the parameter regimes where each theory is valid and compare their effective qubit dynamics. Finally, we calculate the size of some non-stoquastic and $3$-local interactions predicted by the nonlinear theory. \subsection{Breakdown of the Born-Oppenheimer Approximation} \label{BOBreakdown} We first numerically probe the limits of the Born-Oppenheimer Approximation\footnote{Most of the circuit parameters affect this approximation, so we can only note some qualitative trends. Detailed, quantitative discussions of corrections to Born-Oppenheimer are in Appendix Sections~\ref{BOValidity} and~\ref{BONonadiabatic}.}. To do so we have calculated the exact, low energy spectrum of two flux qubits interacting with a coupler circuit (treated as an independent degree of freedom). This is done by representing the full Hamiltonian in the harmonic oscillator eigenstate basis (see Appendix Section~\ref{numericsMethods} for details). We then compare the spectrum to the one predicted under the Born-Oppenheimer Approximation. 
That is, we consider the Hamiltonian $\hat H_{\textrm{BO}} = \hat H_1 + \hat H_2 + \hat H_{\textrm{int}}$, where $\hat H_j$ is the local Hamiltonian for qubit $j$ and $\hat H_{\textrm{int}} = E_g(\hat \varphi_x)$ is the qubit-dependent ground state energy of the coupler. As a reference, we consider a parameter regime where all of our approximations work well: $\zeta_j = \zeta_c = 0.05, \alpha_j = 0.05, \beta_c =0.75, E_{\tilde L_c}/E_{L_j} = 3,$ and $\beta_j \geq 0.5$. This can be seen in Fig.~\ref{BOBreakdownVaryBetaJReference}, which shows the different spectrum calculations at the maximum coupling bias point, $\varphi_{cx} = \varphi_{jx} = 0$. Tuning the coupler parameters far beyond this regime causes the Born-Oppenheimer Approximation to fail. We modify the coupler circuit parameters away from the reference point to observe their effect on the Born-Oppenheimer Approximation. Generally, we find that Born-Oppenheimer is valid when the coupler Hamiltonian's ground state energy gap is much larger than the qubit energy gaps. Since the coupler energy gap scales approximately linearly with $\zeta_c$ (for fixed $E_{\tilde L_c}$), we can test this intuition by decreasing the coupler impedance\footnote{ At the reference parameters and $\varphi_{x} =0$, the ground state energy gap of $\hat H_c$ is $\sim 5.32 \times 10^{-2} E_{\tilde L_c} = 1.60 \times 10^{-1} E_{L_j}$. Decreasing $\zeta_c$ to $0.02$ decreases the gap to $\sim 2.06 \times 10^{-2} E_{\tilde L_c} = 6.18 \times 10^{-2} E_{L_j}$, which is comparable to the observed qubit spectra.}. Comparing Fig.~\ref{BOBreakdownVaryBetaJSmallZetaC} to the reference regime (Fig.~\ref{BOBreakdownVaryBetaJReference}), we see that decreasing $\zeta_c$ from $0.05$ to $0.02$ causes all of the Born-Oppenheimer theories to break down. The theory also breaks down when the coupling strength $\alpha_j = M_j/L_j$ is too large, because a sufficiently strong qubit-coupler interaction allows the coupler to populate excited states beyond its ground state (cf. Section~\ref{BONonadiabatic}). This is seen in Fig.~\ref{BOBreakdownVaryBetaJBigAlpha}, where we increase the value of $\alpha_j$ from $0.05$ to $0.1$\footnote{An alternative reason for the mismatch in Fig.~\ref{BOBreakdownVaryBetaJBigAlpha} is that our approximation to $E_g$ is inaccurate for large $\alpha_j$. But if that were the case, the nonlinear, analytic (NA) theory should still work since it describes $E_g$ to all orders in $\alpha_j$.}. We also consider the effect of coupler nonlinearity, $\beta_c$. In the limit of zero flux bias ($\varphi_{cx} = 0$ mod $2\pi$) corresponding to maximum coupling, the coupler gap closes exponentially quickly with increasing $\beta_c$, and therefore the Born-Oppenheimer Approximation breaks down\footnote{How quickly the gap closes depends on the coupler impedance. A larger impedance means exponential decay in the gap starts at larger values of $\beta_c$.}. In Fig.~\ref{BOBreakdownVaryBetaJBigBetaC} we see that increasing $\beta_c$ from $0.75$ to $0.95$ causes all of our theories to incorrectly predict the spectrum. However, in this case the mismatch in the spectrum could also be due to errors in the approximate representation of $E_g$, discussed below. Despite the observed spectrum mismatch, Born-Oppenheimer can still hold at large nonlinearity if the bias $\varphi_{cx}$ is finite: as seen in Fig.~\ref{BOBreakdownVaryPhiCX}, for $\varphi_{cx} \geq 0.02 \times 2 \pi$ there is good agreement between the exact spectrum and the one predicted by the NA theory. 
For sufficiently large $\varphi_{cx}$, the spectra of all theories for $E_g$ agree with the exact spectrum (cf. Fig.~\ref{BOBreakdownVaryPhiCXLargerRange}). Finally, the inductive energy $E_{\tilde L_c}$ sets the overall energy scale of the coupler, so the coupler gap scales linearly with it and increasing this parameter should improve the Born-Oppenheimer Approximation. Although $E_{\tilde L_c}$ also sets the energy scale of the coupling, we mention that for $k$ coupled qubits the coupling strength $\alpha_j \propto M_j$ is bounded by $\frac{1}{k} \sqrt{E_{L_j}/E_{\tilde L_c} }$, and for typical circuit implementations it should scale as $\propto E_{\tilde L_c}^{-1}$. A qualitative summary of the observed trends can be found in Fig.~\ref{appTrends}. \begin{figure}[t!] \begin{tabular*}{0.535 \textwidth}{|c | c | c | c | c| c|} \hline Increase: & $E_{\tilde L_c}/E_{L_j}$ & $\alpha_j$ & $\zeta_c$ & $\beta_c$ & $|\varphi_{cx}|$\\ \hline Born-Oppenheimer & better$^*$ & worse$^*$ & better & worse & better \\ \hline linear analytic (LA) $E_g$ & N/A & worse & worse & worse & better \\ \hline linear numerical (LN) $E_g$ & N/A & worse & N/A & worse & better \\ \hline nonlinear analytic (NA) $E_g$ & N/A & N/A & worse & worse & better\\ \hline \end{tabular*} \caption{The response of the various approximations to increases in specific circuit parameters. $^*$: For $k$ identical qubits, the mutual inductance is physically bounded as $M_j \lesssim \frac{1}{k} \sqrt{L_j L_c}$, so $\alpha_j = M_j / L_j \leq \frac{1}{k} \left(E_{\tilde L_c} / E_{L_j} \right)^{-1/2}$. Physically, increasing $\left(E_{\tilde L_c} / E_{L_j} \right)$ (by decreasing the coupler length scale) should correspond to a proportional decrease in $M_j$. } \label{appTrends} \end{figure} \subsection{Comparison of linear and nonlinear theories} We now consider the parameter regimes that distinguish the different theories modeling $E_g$. These regimes can be explained by the limitations of each theory's approximation. For example, while it is numerically exact, the LN theory correctly describes the effective potential to only second order in $\alpha_j$. Hence we expect it to be inaccurate where the order $O(\alpha^3)$ terms of $E_g(\varphi_x)$ are relevant. On the other hand, the NA theory incorporates the effect of $\alpha$ to all orders, but uses the harmonic approximation to describe the zero-point energy component of $E_g$. In the limit $\beta_c \rightarrow 1$ this approximation breaks down\footnote{Indeed, the harmonic approximation to the coupler zero-point energy is $E_{\tilde L_c} \zeta_c \sqrt{ 1 - \beta_c \cos(\varphi_{c}^{(*)})}$, where $\varphi_{c}^{(*)} = \varphi_x + \beta_c \sin_{\beta_c}(\varphi_x)$ is the classical minimum point determined by the total external bias $\varphi_{x}$. The limit $\varphi_x \rightarrow 0$, $\beta_c \rightarrow 1$ causes the harmonic zero-point energy to vanish.}, although the zero-point energy is a relatively small contribution to $E_g$ (for small impedance $\zeta_c$). The LA theory suffers from both limitations and should only be accurate in the limit where both previous theories agree; thus we will not focus on this theory in our comparisons. Qualitatively, the breakdown of each approximation occurs in the limits of large nonlinearity $\beta_c$, large coupling $\alpha_j$, and bias near the maximal coupling point $\varphi_{cx} = 0$. When all of these conditions hold, both the LN and NA theories are insufficient to describe the interaction. 
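Comparisons of this kind can be scripted by combining the earlier sketches. The example below is our own illustration using the reference-regime parameters ($\beta_c = 0.75$, $\alpha_j = 0.05$, $\zeta_c = 0.05$, $\beta_j = 1.05$, $\zeta_j = 0.05$); it contrasts the NA coupling obtained from equation~\eqref{gEta} with the analytic linear result~\eqref{gxxlin}. (The LN variant would instead use the perturbative derivatives of equation~\eqref{EigenDeriv2}.)
\begin{verbatim}
import numpy as np

def g_xx_na(phi_cx, beta_c, zeta_c, alpha, zeta_j, beta_j,
            E_ratio=3.0, nu_max=100):
    # Nonlinear analytic (NA) xx coupling (equation (gEta)) in units of
    # E_Lj; reuses B, qubit_eigensystem, and g_coeff from above.
    _, evecs, phi = qubit_eigensystem(zeta_j, beta_j)
    g = g_coeff(('x', 'x'), (alpha, alpha), phi_cx, beta_c, zeta_c,
                [(evecs, phi), (evecs, phi)], nu_max=nu_max)
    return E_ratio * g.real   # imaginary part vanishes up to noise

# Scan the coupler bias near maximal coupling:
for phi_cx in np.linspace(0.0, 0.05, 6) * 2 * np.pi:
    print(phi_cx / (2 * np.pi),
          g_xx_na(phi_cx, 0.75, 0.05, 0.05, 0.05, 1.05),
          g_xx_lin(phi_cx, 0.75, 0.05, 0.05, 0.05, 1.05))
\end{verbatim}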
We shall also find intermediate regimes where one of these theories is more accurate than the other. One regime where the NA theory holds while the linear theories do not ($\beta_c = 0.95$, non-zero $\varphi_{cx}$) corresponds to non-negligible non-stoquastic and $k$-local interactions (discussed in the next section). The qubit dynamics predicted by both LN and NA theories can be inaccurate when the coupler is tuned to maximum coupling, $\varphi_{cx} = 0$. This is true, to a small extent, even in the reference regime ($\beta_c = 0.75, \alpha_j = 0.05$, and $\zeta_c = 0.05$, Fig.~\ref{BOBreakdownVaryBetaJReference}) where all theories predict the spectrum accurately. For these coupler parameters, the qubit dynamics (i.e., the qubit Hamiltonian coefficients $g_{\bar \eta}$) predicted by each theory are close to equal at almost every coupler bias $\varphi_{cx}$ (cf. Fig.~\ref{qubitTermsAtReference}). However, there is a slight discrepancy near the maximal coupling limit $|\varphi_{cx}| \leq 0.01 \times 2 \pi$ (cf. inset of Fig.~\ref{qubitTermsAtReference}), which suggests that at least one theory is inadequate. To investigate this discrepancy, we compute the $xx$ couplings for the NA and LN theories at varying coupler impedances near $\varphi_{cx} = 0$. We first consider the classical limit of small coupler impedance, $\zeta_c \rightarrow 0$. The zero-point energy component of $E_g$ vanishes in this limit, so that the NA prediction becomes exact. As seen in Fig.~\ref{compareQubitDynamics}(a), the NA and LN predictions still disagree in this limit. Thus the LN theory is slightly inaccurate in predicting the effect of the classical component of $E_g$ on the qubit dynamics. Since this contribution to $E_g$ does not change when increasing $\zeta_c$, the small error in the LN predictions persists even for $\zeta_c = 0.05$\footnote{Note that the Born-Oppenheimer Approximation is only valid for non-zero $\zeta_c$. The predicted coupling $g_{xx}$ in the $\zeta_c \rightarrow 0$ limit therefore only illustrates the classical contribution to this coupling.}. On the other hand, we can also consider the weak coupling limit, $\alpha_j \ll 1$, where the LN theory is exact (up to order $O(\alpha^3)$). In this limit, the two theories still only agree when we also take the classical limit of small coupler impedance, $\zeta_c = 0.01$ (cf. Fig.~\ref{compareQubitDynamics}(b)). This indicates that the NA theory also has a small but non-negligible error due to its approximation of the coupler zero-point energy (which is approximately proportional to $\zeta_c$). Thus, near the maximum coupling bias $\varphi_{cx} = 0$, both theories may be slightly inaccurate in predicting the qubit dynamics. Yet decreasing the coupler nonlinearity from $\beta_c = 0.75$ to $\beta_c = 0.5$ causes the predictions of both theories to agree, even at maximum coupling bias $\varphi_{cx} = 0$ (Fig.~\ref{compareQubitDynamics}(c)). This is not surprising, as the harmonic approximation to the zero-point energy improves as the coupler nonlinearity decreases, thereby improving the accuracy of the analytic theories\footnote{To see why this is the case, we consider the coupler Hamiltonian linearized about its classical minimum point, equation~\eqref{linHam}. At bias $\varphi_{x} = 0$, the next leading order correction is quartic, with effective potential $ \frac{(1 - \beta_c)}{2} (\hat \varphi_c - \varphi_c^*)^2 + \frac{\beta_c}{24}(\hat \varphi_c - \varphi_c^*)^4 + O\left((\hat \varphi_c - \varphi_c^*)^6\right)$. 
The higher order corrections are therefore small for $\beta_c = 0.5$.} (cf. Fig.~\ref{ZPEComparison}). Similarly, the derivatives of the LA theory (equations~\eqref{U12p}, \eqref{U12pp}) suggest that the higher order corrections in $\alpha$ become less important for smaller $\beta_c$. While both theories agree in this limit, we also see in Fig.~\ref{compareQubitDynamics}(c) that the coupler zero-point energy still has a significant effect on the observed coupling. It is therefore important to account for non-zero coupler impedance, especially for high precision modeling and calibration of inductively coupled circuits. The regime of high coupler impedance draws a sharper contrast between the NA and LN theories. In Fig.~\ref{BOBreakdownVaryBetaJBigZetaC} we compute the energy spectrum of the coupled qubits but increase the impedance $\zeta_c$ from $0.05$ to $0.1$. This is expected to improve the accuracy of the Born-Oppenheimer Approximation since the coupler gap is approximately doubled. At the same time, it should worsen the NA (and LA) theories because the harmonic approximation to the zero-point energy (the quantum contribution to $E_g$) becomes more significant (cf. the inset of Fig.~\ref{EgPlots}). Since the LN theory represents the zero-point energy numerically exactly (at least to second order in $\alpha$), it is insensitive to this change. We note that this discrepancy only exists near $\varphi_{cx} = 0$, since away from this point the NA theory's harmonic approximation improves (cf. Fig.~\ref{ZPEComparison}). Indeed, for $\varphi_{cx} \gtrsim 0.05 \times 2 \pi$ we find that the predicted qubit dynamics (coefficients $g_{\bar \eta}$) of each theory all agree, as seen in Fig.~\ref{qubitTermsLargeZetaC}. The regime of large coupler nonlinearity allows us to draw another contrast between the two theories. As noted previously, at the maximum coupling point $\varphi_{cx} = 0$ neither theory represents the spectrum accurately (cf. Fig.~\ref{BOBreakdownVaryBetaJBigBetaC}) when we increase $\beta_c$ from $0.75$ to $0.95$. Yet when we bias the coupler away from this point, we find that the spectrum predicted by the nonlinear (NA) theory agrees with exact diagonalization past the bias point $\varphi_{cx} \gtrsim 0.01 \times 2 \pi$ (cf. Fig.~\ref{BOBreakdownVaryPhiCX}). This is explained by noting that $\varphi_{cx} = 0.01 \times 2 \pi$ is approximately the point where the harmonic approximation to the coupler zero-point energy becomes accurate (up to an additive constant, as seen in Fig.~\ref{ZPEComparison}). Indeed, this also explains why, for $\varphi_{cx} \gtrsim 0.01 \times 2 \pi$, both analytic and numerical {\it linear} theories (LA and LN) predict approximately the same spectrum (cf. Fig.~\ref{BOBreakdownVaryPhiCX}). Importantly, there is an intermediate regime ($0.01 \times 2 \pi \lesssim \varphi_{cx} \lesssim 0.02 \times 2 \pi$) where the NA theory correctly predicts the spectrum while both LN and LA theories do not\footnote{For sufficiently large biases all theories correctly predict the circuit spectrum and qubit dynamics. This can be seen in Figures~\ref{BOBreakdownVaryPhiCXLargerRange} and~\ref{qubitTermsLargeBetaC}.}. This stresses the importance of including higher order terms when describing the coupler-mediated interaction, as there is also a discrepancy in the predicted qubit dynamics (cf. Fig.~\ref{qubitTermsLargeBetaC}) in this regime. Interestingly, this regime is also where we observe non-negligible non-stoquastic interactions between the qubits. 
We also note that, although we do not expect them to accurately predict the observed coupling $g_{xx}$ at $\varphi_{cx} \approx 0$, both the NA and LN predictions (Fig.~\ref{qubitTermsLargeBetaC}) {\it do not diverge} in the high nonlinearity limit. This is in contrast to the linear, analytic (LA) theory, which predicts an arbitrarily large value as $\beta_c \rightarrow 1$, a divergence that arises even from the classical contribution to $E_g$ (equations~\eqref{U12pp} and~\eqref{gxxlin}). The strong coupling ($\alpha_j$) limit shows the same contrast between the NA and LN theories as the large nonlinearity limit. Again, while we find that at maximum coupling bias ($\varphi_{cx} = 0$) and $\alpha_j = 0.1$ neither theory is adequate (Fig.~\ref{BOBreakdownVaryBetaJBigAlpha}), the NA theory accurately predicts the low energy spectrum even for small, non-zero bias $\varphi_{cx}$ (Fig.~\ref{BOBreakdownVaryPhiCXBigAlpha}). There is also a similar contrast in the predicted qubit dynamics, as seen in Fig.~\ref{qubitTermsAtLargeAlphaJ}. \begin{figure}[h!] \includegraphics[width = \textwidth,trim={2cm 0 1.7cm 0cm},clip]{BOBreakdownVaryBetaJReference.eps} \caption{All Born-Oppenheimer theories accurately predict the low energy spectra in the `reference' regime. We consider a single coupler circuit interacting with two identical flux qubits for varying qubit nonlinearity $\beta_j$. (All circuits are at zero bias, $\varphi_{cx} = \varphi_{jx} = 0$.) Solid curves represent exact numerical diagonalization of the full Hamiltonian (equation~\eqref{HExact}). The black dashed, dark blue crossed, and light green dotted curves correspond to the nonlinear analytic (NA), linear analytic (LA), and linear numerical (LN) theories of the Born-Oppenheimer Approximation, respectively. (See Appendix Section~\ref{numericsMethods} for a detailed description of each calculation.) } \label{BOBreakdownVaryBetaJReference} \end{figure} \begin{figure}[h!] \includegraphics[width = \textwidth,trim={3cm 0 4cm 0},clip]{qubitTermsAtReference.eps} \caption{Excluding a small discrepancy near the maximal coupling bias $\varphi_{cx}=0$, all Born-Oppenheimer theories predict the same qubit dynamics in the reference regime. Shown are coupler-induced qubit coefficients for $\hat H_{\textrm{int}} = E_g(\hat \varphi_x)$ at the reference parameters (Fig.~\ref{BOBreakdownVaryBetaJReference}, with $\beta_j = 1.05$). The solid dark blue, dashed magenta, and dotted black curves correspond to the predictions of the nonlinear analytic (NA), linear analytic (LA), and linear numerical (LN) theories, respectively. Plots a), b), and c) correspond to the $xx$, $xI$, and $zI$ terms, respectively. All calculations were carried out in the `parity' basis (see Appendix Section~\ref{numericsMethods} for more details).} \label{qubitTermsAtReference} \end{figure} \begin{figure}[h!] \includegraphics[width = \textwidth,trim={3.8cm 0 4.4cm 0},clip]{compareQubitDynamics.eps} \caption{Discrepancy between the different Born-Oppenheimer theories near the maximal coupling bias, $\varphi_{cx} = 0$. Solid curves: $xx$ coupling predicted by the nonlinear analytic (NA) theory, for coupler impedance $\zeta_c = 0.05$ (dark blue), $\zeta_c = 0.03$ (magenta), and $\zeta_c = 0.01$ (light orange). Overlaid dotted curves correspond to the $xx$ coupling predicted by the linear numerical (LN) theory at the same coupler parameters. The top curves in plot a) correspond to the `reference' coupler parameters described in the text ($\beta_c = 0.75$, $\alpha_j = 0.05$, $\zeta_c = 0.05$). 
The curves in plots b) and c) correspond to the weak coupling ($\alpha_j \rightarrow 0.01$) and low nonlinearity ($\beta_c \rightarrow 0.5$) limits. In all calculations the qubit parameters were fixed at $\beta_j = 1.05, \zeta_j = 0.05, \varphi_{jx} = 0$. Since the `parity' basis was used to define the Hamiltonian coefficients, the $g_{xx}$ interaction is strictly stoquastic (i.e., it is a $zz$ coupling in the computational, `persistent current' basis). All calculations were carried out as done for Fig.~\ref{qubitTermsAtReference} (see Appendix Section~\ref{numericsMethods} for more details).} \label{compareQubitDynamics} \end{figure} \begin{figure}[h!] \includegraphics[width = \textwidth,trim={2cm 0 1.7cm 0cm},clip]{BOBreakdownVaryBetaJBigZetaC.eps} \caption{Increasing coupler impedance decreases the accuracy of the analytic (NA and LA) theories, while leaving the numerical theory unchanged. We consider the low energy spectrum of two coupled flux qubits, but double the coupler impedance relative to the reference regime (Fig.~\ref{BOBreakdownVaryBetaJReference}). Solid curves represent exact numerical diagonalization of the full Hamiltonian (equation~\eqref{HExact}). The black dashed, dark blue crossed, and light green dotted curves correspond to the nonlinear analytic (NA), linear analytic (LA), and linear numerical (LN) theories of the Born-Oppenheimer Approximation, respectively. (See Appendix Section~\ref{numericsMethods} for a detailed description of each calculation.)} \label{BOBreakdownVaryBetaJBigZetaC} \end{figure} \begin{figure}[h!] \includegraphics[width = \textwidth,trim={3cm 0 4cm 0},clip]{qubitTermsAtLargeZetaC.eps} \caption{Increasing coupler impedance $\zeta_c$ increases the discrepancy between the analytic and numerical theories (relative to the reference regime, Fig.~\ref{qubitTermsAtReference}). For the $xx$ and $zI$ terms (plots a,c), a discrepancy between the analytic (NA and LA, solid dark blue and dashed magenta) and numerical (LN, dotted black) theories exists near maximum coupling, $\varphi_{cx} = 0$. The theories match closely for the local $xI$ term (plot b). Calculations were carried out for qubit parameters $\zeta_j = \alpha_j = 0.05, \beta_j = 1.05, \varphi_{jx} = 0$ and coupler parameters $\beta_c = 0.75, \zeta_c = 0.1$ (twice the impedance of the reference regime). All calculations were carried out in the `parity' basis (see Appendix Section~\ref{numericsMethods} for more details).} \label{qubitTermsLargeZetaC} \end{figure} \begin{figure}[h!] \includegraphics[width = \textwidth,trim={2cm 0 1.4cm 0cm},clip]{BOBreakdownVaryPhiCX.eps} \caption{Born-Oppenheimer theories fail to predict the low energy spectrum for high coupler nonlinearity (near $\varphi_{cx} = 0$). We consider a single coupler circuit interacting with two identical flux qubits for varying coupler bias, $\varphi_{cx} \ll 1$. Circuit parameters are identical to the reference regime (Fig.~\ref{BOBreakdownVaryBetaJReference}), except the qubit nonlinearity is fixed at $\beta_j = 1.05$ and the coupler nonlinearity $\beta_c$ is increased from $0.75$ to $0.95$. Solid curves represent exact numerical diagonalization of the full Hamiltonian (equation~\eqref{HExact}). The black dashed, dark blue crossed, and light green dotted curves correspond to the nonlinear analytic (NA), linear analytic (LA), and linear numerical (LN) theories of the Born-Oppenheimer Approximation, respectively. The NA theory agrees well with exact diagonalization for $\varphi_{cx}\gtrsim 0.01 \times 2 \pi$. 
The large oscillations observed in the LA spectrum are due to the divergences in the analytic expressions for the first and second derivatives of $E_g$ as $\beta_c \rightarrow 1$ (equations~\eqref{U12p} and~\eqref{U12pp}). Fig.~\ref{BOBreakdownVaryPhiCXLargerRange} shows the same calculation for a larger range of bias values, $\varphi_{cx} \in [0,0.2] \times 2 \pi$. (See Appendix Section~\ref{numericsMethods} for a detailed description of each calculation.)} \label{BOBreakdownVaryPhiCX} \end{figure} \begin{figure}[h!] \includegraphics[width = \textwidth,trim={4cm 0 4cm 0},clip]{qubitTermsAtLargeBetaC.eps} \caption{Increasing coupler nonlinearity $\beta_c$ increases the discrepancy between the analytic and numerical theories (relative to the reference regime,~\Cref{qubitTermsAtReference}). Plots a), b), and c) correspond to the $xx$, $xI$, and $zI$ terms, respectively, with coupler nonlinearity increased from $\beta_c = 0.75$ to $\beta_c = 0.95$ relative to the reference regime. The solid dark blue, dashed magenta, and dotted black curves correspond to the predictions of the nonlinear analytic (NA), linear analytic (LA), and linear numerical (LN) theories, respectively. For $\varphi_{cx} \lesssim 0.01 \times 2 \pi$ none of the theories are expected to be accurate (Fig.~\ref{BOBreakdownVaryPhiCX}). The LA and LN theories agree for $\varphi_{cx} \gtrsim 0.01 \times 2 \pi$, indicating that the harmonic approximation to the zero-point energy converges (Fig.~\ref{ZPEComparison}). Thus the LN theory (making only the harmonic approximation) is expected to be accurate for $\varphi_{cx} \gtrsim 0.01 \times 2 \pi$. The discrepancy between the NA and LN theories for $\varphi_{cx} \approx 0.01 \times 2 \pi$ indicates that higher order terms neglected by the LN theory are significant. The divergence of the LA calculation is due to the divergences in the analytic expressions for the first and second derivatives of $E_g$ as $\beta_c \rightarrow 1$ (equations~\eqref{U12p} and~\eqref{U12pp}). All calculations were carried out in the `parity' basis. To account for the higher coupler nonlinearity, the sums used in the NA calculation (equation~\eqref{gEta}) were truncated at $|\nu| \leq 200$ (see Appendix Section~\ref{numericsMethods} for more details).} \label{qubitTermsLargeBetaC} \end{figure} \begin{figure}[h!] \includegraphics[width = \textwidth,trim={3cm 0 4cm 0},clip]{qubitTermsAtLargeAlphaJ.eps} \caption{Coupler-induced qubit coefficients for $\hat H_{\textrm{int}} = E_g(\hat \varphi_x)$ at strong coupling $\alpha_j$. Shown are the coefficients in the strong coupling limit (Fig.~\ref{BOBreakdownVaryBetaJReference}, with $\beta_j = 1.05$ and $\alpha_j$ increased from $0.05$ to $0.1$). The solid dark blue, dashed magenta, and dotted black curves correspond to the predictions of the nonlinear analytic (NA), linear analytic (LA), and linear numerical (LN) theories, respectively. Plots a), b), and c) correspond to the $xx$, $xI$, and $zI$ terms, respectively. All calculations were carried out in the `parity' basis (see Appendix Section~\ref{numericsMethods} for more details).} \label{qubitTermsAtLargeAlphaJ} \end{figure} \subsection{$3$-body and non-stoquastic interactions} \label{kLocalNonStoquastic} We have also calculated the strength of some $3$-local and non-stoquastic interactions predicted by our nonlinear theory.
Such interactions are absent in linear theories: the quadratic representation of $E_g$ precludes any $k$-local qubit couplings with $k>2$. Similarly, in the `parity' qubit basis an interaction of the form $\hat \varphi_1 \otimes \hat \varphi_2$ can only produce $xx$ couplings due to symmetry considerations\footnote{Equivalently, in the standard (persistent current) basis, we would only observe $zz$-type couplings.}. In order to ensure the validity of our results, we assume coupler and qubit parameter regimes for which the nonlinear, analytic Hamiltonian~\eqref{Hint} correctly reproduces the 2-qubit spectrum. We note that there are other proposals in the literature for exotic couplings involving superconducting qubits~\cite{Sameti2017,Chancellor2017,Vinci2017}. Although the physical mechanisms driving these exotic couplings differ from those observed in our work, a key similarity is the need for nonlinearity in the coupler device. Indeed, the interactions predicted by our analytic theory vanish in the limit of zero coupler nonlinearity, $\beta_c \rightarrow 0$. In Fig.~\ref{kLocalCouplings} we consider a system of three flux qubits interacting with a single coupler circuit and compare the 3-qubit coupling $\sigma_x\otimes\sigma_x\otimes\sigma_x$ to analogous $1$-local and $2$-local terms. Since we have not verified that the exact spectrum of the three qubit system matches the one predicted by our approximations, we have chosen a more conservative value for the coupler nonlinearity ($\beta_c = 0.5$) relative to the reference regime discussed in the previous section ($\beta_c = 0.75$)\footnote{At the maximal coupling point $\varphi_{cx} = 0$ and impedance $\zeta_c = 0.05$, this change increases the ground state energy gap of $\hat H_c$ from $5.32 \times 10^{-2} E_{\tilde L_c}$ to $7.19 \times 10^{-2} E_{\tilde L_c}$.}. We find that the maximum 3-body coupling ($\sim 1.71 \times 10^{-5} E_{\tilde L_c}$) is more than an order of magnitude smaller than the maximum 2-body coupling ($\sim 5.35 \times 10^{-4} E_{\tilde L_c}$). For qubit energy scale $E_{L_j} = 200$ GHz and given $E_{\tilde L_c} / E_{L_j} = 3$, these correspond to maximum couplings of $g_{xxx} \sim 10.3$ MHz and $g_{xxI} = 321$ MHz, compared to the bare (coupler-free) qubit splitting of $884$ MHz. We note that the computed 3-local interaction can be increased significantly by modifying the circuit parameters\footnote{For example, increasing $\beta_c$ from $0.5$ to $0.75$ increases the maximum 3-local coupling approximately five-fold, to $g_{xxx} \sim 8.63 \times 10^{-5} E_{\tilde L_c} = 51.8$ MHz. This occurs at bias $\varphi_{cx} \sim 0.0272 \times 2\pi$, where the approximation to the zero-point energy is expected to hold well (cf.~Fig.~\ref{ZPEComparison}).}, although one must be careful that the approximations we have discussed are still valid. The nonlinear theory predicts small but non-negligible non-stoquastic couplings. These couplings are of the form $zz$ or $xz$ in our chosen `parity' basis. Like the typical (stoquastic) $xx$ couplings, we find that these terms increase with coupler nonlinearity $\beta_c$\footnote{This can be explained using the generic coupling formula~\eqref{gEta}: the local $z$ Pauli coefficients $c_z(\nu \alpha_j)$ (equation~\eqref{cDef}) vanish at $\nu = 0$ and peak in magnitude for finite values of $\nu$. The Fourier coefficients $B_\nu$ defining the interaction decay exponentially with $\nu$ but also tend to increase with increasing $\beta_c$.
The coupling itself is a sum of products of these terms, so increasing the nonlinearity tends to increase the magnitude of $g_{zz}$.}. Even so, even for large coupler nonlinearity $\beta_c = 0.95$, the non-stoquastic terms tend to be small compared to the $xx$ couplings, as seen in Fig.~\ref{exoticCouplings}. As noted previously, for such large $\beta_c$ the nonlinear, analytic theory is only accurate away from $\varphi_{cx} = 0$, yet the region near $\varphi_{cx} = 0$ is precisely where the non-stoquastic interactions are strongest (see inset). These interactions remain of order $1$--$2 \times 10^{-4} E_{\tilde L_c}$ even for $\varphi_{cx}\gtrsim 0.01 \times 2 \pi$, where the nonlinear theory correctly predicts the qubit spectrum (Fig.~\ref{BOBreakdownVaryPhiCX}). For the given circuit parameters and typical $E_{L_j} = 200$ GHz, this corresponds to $xz$ and $zz$ interactions on the order of $100$ MHz. \begin{figure}[h!] \includegraphics[width = \textwidth,trim={0 0 .5cm 0},clip]{kLocalCouplings.eps} \caption{Coupler-mediated 3-local interactions are small for typical parameter regimes. Comparison of $k$-qubit $x$-type couplings for three interacting qubits (in the parity basis): the value of $g_{\bar \eta}$ was computed for $\bar \eta = (x,I,I)$ (dark blue), $(x,x,I)$ (magenta), and $(x,x,x)$ (light orange) using the nonlinear, analytic theory (Section~\ref{reductionQubit}). (Inset is a semi-logarithmic plot of $|g_{\bar \eta}|/E_{\tilde L_c}$.) The qubit and coupler parameters were $\beta_j = 1.05$, $\zeta_j = 0.05$, and $\varphi_{jx} = 0$ and $\beta_c = 0.5$ and $\zeta_c = 0.05$, respectively. All calculations were carried out in the `parity' basis (see Appendix Section~\ref{numericsMethods} for more details).} \label{kLocalCouplings} \end{figure} \begin{figure}[h!] \includegraphics[width = \textwidth,trim={.5cm 0 .5cm 0},clip]{exoticCouplings.eps} \caption{The nonlinear theory predicts small but non-negligible non-stoquastic couplings. Main figure: Comparison of 2-qubit couplings depending on coupling type. (Inset is the same plot for the reduced bias range $\varphi_{cx} \in [0,0.04] \times 2 \pi$, focused on only the $xz$ and $zz$ couplings.) The value of $g_{\bar \eta}$ was computed for $\bar \eta = (x,x), (x,z),$ and $(z,z)$. The physical and numerical parameters used in this calculation were identical to those in Fig.~\ref{kLocalCouplings}, except that we assume a coupler nonlinearity of $\beta_c =0.95$. Note that the interaction Hamiltonian of the nonlinear, analytic (NA) theory closely predicts the 2-qubit spectrum only for $\varphi_{cx} \gtrsim 0.01 \times 2 \pi$, cf. Fig.~\ref{BOBreakdownVaryPhiCX}. All calculations were carried out in the `parity' basis, so that the non-stoquastic interactions correspond to $(x,z)$ and $(z,z)$. To account for the higher coupler nonlinearity, the sums used in the NA calculation (equation~\eqref{gEta}) were truncated at $|\nu| \leq 200$ (see Appendix Section~\ref{numericsMethods} for more details).} \label{exoticCouplings} \end{figure} \section{Conclusions} We have presented a non-perturbative analysis of a generic inductive coupler circuit within the Born-Oppenheimer Approximation. This provides an explicit and efficiently computable Fourier series for any term in the effective qubit-qubit interaction Hamiltonian. We also account for finite coupler impedance (associated with the coupler's zero-point energy), which gives small but non-negligible quantum corrections to the predicted qubit Hamiltonian.
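As a minimal illustration of this computability (an illustration only, not our released numerical code), note that any coupling coefficient that is schematically of the form $g_{\bar \eta} \approx \sum_{|\nu| \leq N} B_\nu \prod_j c_{\eta_j}(\nu \alpha_j)$ (cf. equations~\eqref{gEta} and~\eqref{cDef}) can be evaluated by direct truncation of the Fourier sum; in the sketch below the callables \texttt{B} and \texttt{c} are placeholders for the Fourier and local Pauli coefficients.
\begin{verbatim}
# Sketch: truncated evaluation of a generic coupling coefficient,
# g_eta ~ sum_{|nu| <= N} B(nu) * prod_j c(eta_j, nu * alpha_j).
# B and c are placeholder callables for the Fourier coefficients B_nu
# and the local Pauli coefficients c_eta (assumptions, not released code).
def g_eta(eta, alphas, B, c, N=200):
    total = 0.0
    for nu in range(-N, N + 1):
        term = B(nu)
        for label, alpha in zip(eta, alphas):
            term *= c(label, nu * alpha)
        total += term
    return total
\end{verbatim}
Since $B_\nu$ decays exponentially in $\nu$, the truncation order $N$ controls the error; the large-$\beta_c$ calculations above used $N = 200$.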
Our results apply whenever the Born-Oppenheimer Approximation and the harmonic approximation to the coupler ground state energy are valid (otherwise, there will be deviations as outlined in the numerical study). Importantly, there is a regime of large coupler nonlinearity and strong coupling $M_j/L_j$ in which our results correctly predict the low energy spectrum while deviating significantly from standard linear theories. This regime corresponds to large observed qubit-qubit couplings, as well as small but non-negligible non-stoquastic interactions. Our analysis is also able to accommodate $k$-body interactions with $k>2$. Although for the considered circuit parameters both $k$-body and non-stoquastic interactions are weak, our theory provides a means to optimize these interactions without resorting to perturbative constructions. As another avenue of investigation, in Appendix Section~\ref{generalization} we show how our theory can be generalized to more complex circuit configurations. We expect that our work will be of use in more accurately modeling existing superconducting qubit devices. \begin{acknowledgements} We thank Vadim N. Smelyanskiy and Mostafa Khezri for insightful discussions and helpful comments regarding the text. \end{acknowledgements}
\section{Introduction} \label{sec:introduction} Cybersecurity consists of protecting systems, networks, and programs against cyber-attacks that aim to access, modify or destroy sensitive information, extort money from users, or disrupt normal business processes. Cyber-attacks against networks are on the rise, and both attacks and their countermeasures are becoming more and more complicated~\cite{Roschke2009}. Understanding how attacks against a system are realized is therefore essential. Attack graphs show possible paths that adversaries can use to reach their goals successfully. There exist various types of attack graph models, mainly~\cite{Aguessy2017}: logical, topological, and probabilistic models. Logical models represent an attack as a logical predicate requiring successful preconditions for the attack to be perpetrated. This type of model accurately represents the process by which humans judge whether an attack is possible or not. Topological models offer a higher-level view of possible attacks in an information system, representing an attack as a way of accessing new resources. Finally, probabilistic models, e.g., using Bayesian theories, assign probabilities to nodes and attack steps. In this paper, we choose to favor logical attack graphs. The rationale for our choice is as follows. Both topological and probabilistic models provide less precision than logical models, e.g., in terms of explainability about how attacks were performed. Indeed, logical attack graphs illustrate the causes of the attacks instead of snapshots of the attack steps~\cite{Ou2006}. This offers several advantages. For instance, the size of the graph increases in a polynomial manner, whereas in other approaches it can increase exponentially. Moreover, in a logical attack graph, causality relations between adversaries and systems are already represented in the logical statements of nodes and edges. In the other approaches, one may need to go through Boolean variables to identify the cause of an adverse situation that enables an adversary's actions at a given stage, hence increasing processing and inference complexity. In the case of logical attack graphs, the exploitation of existing vulnerabilities on an asset is the main cause of the attack. Our work aims to tackle the following question: \emph{how can real-time system monitoring enrich a priori logical attack graphs by taking into account vulnerability and network configuration updates?} We claim that a posteriori enrichment of the graphs makes it possible to fulfill certain preconditions that were not taken into account in the generation of the initial graph. The new process may also allow discovering whether the system is now exposed to different situations that can extend the attacks from the initial goals to other detrimental events, causing even further damage. The use of semantic information about system vulnerabilities guides our analysis. Cybersecurity operators often rely on CVE (Common Vulnerabilities and Exposures) reports for information about vulnerable systems and libraries to prevent the exploitation of vulnerabilities. These reports include descriptions, disclosure sources, and manually-populated vulnerability characteristics. Characterizing software vulnerabilities is essential to identify the root cause of the vulnerability, as well as to understand its consequences and attack mechanisms. The use of ontologies~\cite{uschold1996ontologies} is a proper way to represent and communicate facts and relations between multiple agents.
Several ontologies describing collections of publicly known software vulnerabilities exist. Examples include the SEPSES (Semantic Processing of Security Event Streams) ontology~\cite{kiesling2019sepses}, which describes vulnerabilities extracted from the CVSS\footnote{\url{https://www.first.org/cvss/}} (Common Vulnerability Scoring System) database; and the Vulnerability Description Ontology (VDO)~\cite{booth2016vulnerability}, proposed by the National Institute of Standards and Technology (NIST), in an effort to characterize software vulnerabilities in a standard manner. The inference abilities of those existing ontologies justify their use for the enrichment of logical attack graphs. Moreover, they can help guarantee that the attack graph remains faithful to system updates. This includes processing evidence of vulnerability exploitation, i.e., mapping monitoring alerts against semantic ontologies. To validate our approach, we conduct experimental work using the following setup. We use a scanner of vulnerabilities (based on Nessus Essentials\footnote{\url{https://www.tenable.com/products/nessus/nessus-essentials}}) to discover and list vulnerabilities in a given monitored system. The results are consumed by MulVAL~\cite{Ou2005MulVALAL}, a logic-based attack graph engine. We add system monitoring, using Prelude-OSS\footnote{\url{https://www.prelude-siem.com/en/oss-version/}}, an open-source Security Information and Event Management (SIEM) system, enhanced with additional tools to trigger and post-process attack alerts. We also instantiate precise attacks to change the state of the system (i.e., exploitation of vulnerabilities) and use a recent implementation of VDO\footnote{\url{https://github.com/usnistgov/vulntology}} to enrich the initial attack graph by augmenting the predicates of the initial graph with the semantic data of VDO and the alerts from Prelude-OSS. Alerts trigger a search within the graph, and expand those paths related to a successful vulnerability exploitation. \medskip \noindent \textbf{Paper Organization ---} Section~\ref{sec:background} provides a background of the subject and some preliminaries on the use of attack graphs. Section~\ref{sec:approach} presents our attack-graph enrichment approach. Section~\ref{sec:experiments} provides the experimental results. Section~\ref{sec:relatedwork} surveys related work. Section~\ref{sec:conclusion} concludes the paper. \section{Background} \label{sec:background} \subsection{Literature on Attack Graphs} Cyber-attacks are frequently represented in the information security literature as attack graphs. The idea is to identify all those potential paths that an adversary can take in order to perpetrate the exploitation of a series of vulnerabilities and compromise an information system. Different approaches exist with respect to how such attack graphs can be constructed and used. Early literature on attack graphs used them to determine whether specific goal states can be reached by an adversary who is attempting to penetrate a system~\cite{Lippmann2005AnAR}. The starting vertex of the graph represents the initial state of the adversary in the network. The remaining vertices and edges may represent the actions conducted by the adversary, and the system changes due to those actions. Actions may represent the adversarial exploitation of vulnerabilities in the system.
A series of actions may represent the adversary's steps toward an escalation of privileges in the system, e.g., to obtain enough privileges on different devices or network components in the system. Actions can be combined using either OR (disjunctive) or AND (conjunctive) logic predicates~\cite{Schneier99}, as well as other attributes, such as the costs associated with the actions, their likelihood and probability of success, etc. In the end, a complete attack graph is expected to show all the possible sequences of actions that will allow the adversary to successfully perpetrate the attack (e.g., to penetrate the system). Some other representations and uses are possible. For instance, instead of using vertices to represent system states and edges to represent attack actions, we can represent actions as vertices and system states as edges; instead of using single adversary starting locations, we can also assume multiple adversary starting locations or multiple targets and goals, etc. Other models use directed graphs. For instance, topological attack graphs directly use topological nodes to represent information about systems' assets. The edges represent an adversary's steps to move from a topological parent node to a topological child node. The type of attack (exploitation of a vulnerability, credential theft, etc.) related to an attack step describes how the adversary can move between nodes. The set of conditions associated with an attack step depends on the type of attack. A sensor can be associated with an attack step, to raise an alert when that attack is detected. Similarly, probabilistic models using Bayesian networks can also represent attack graphs via directed acyclic graphs. Nodes represent random variables and edges represent probabilistic dependencies between variables~\cite{Aguessy2016}. An example is BAM (Bayesian Attack Model)~\cite{aguessy:hal-01144971}, which builds upon Bayesian attack trees. Nodes represent transitions, conditions, and sensors. Each node represents a Boolean random variable with two mutually exclusive states. A Bayesian transition node represents the random variable that describes the success or failure of a transition. A Bayesian condition node represents the random variable that describes whether the condition is fulfilled. A Bayesian sensor node is a random variable that describes the state of a sensor. These nodes are linked with edges, which indicate that a child node conditionally depends on the state of its parents. Compared to Bayesian networks, logical attack graphs provide some practical advantages. First, the acyclicity required by Bayesian networks calls for heuristics that suppress some of the paths an adversary could follow. The inference of a Bayesian attack graph is also very complex, since it needs to delve into the Boolean variables and follow several steps upstream to identify the causes of the adverse situations that enable an adversary's action at a given stage. This leads to performance problems. In logical attack graphs, the causality is specified as graph edges. Thus, the inference of a logical attack graph is straightforward. Logical attack graphs are also more expressive than topological attack graphs. In the sequel, we elaborate further on logical attack graph modeling. \subsection{Logical Attack Graph Modelling} \label{sec:preliminaries} We define next some preliminary concepts such as Graph, Directed Graph, and AND-OR Graph, as underlying requirements for logical attack graph modeling~\cite{1386,Aguessy2017}.
\newtheorem{mydef}{Definition}[section] \begin{mydef}[Graph]\label{def:graph} A graph is a set $V$ of vertices, and a set $E$ of unordered and ordered pairs of vertices, denoted by $G(V;E)$. An unordered pair of vertices is an edge, while an ordered pair is an arc. A graph containing edges alone is non-oriented or undirected; a graph containing arcs alone is called oriented or directed. \end{mydef} \begin{mydef}[Directed Graph]\label{def:directedgraph} A directed graph $G(V;A)$ consists of a nonempty set $V$ of vertices and a set $A$ of arcs formed by pairs of elements of $V$. In a directed graph: \begin{itemize} \item The parent or source of an arc $(v_1; v_2) \in A$, with $v_1, v_2 \in V$, is $v_1$. \item The child or destination of an arc $(v_1; v_2) \in A$, with $v_1, v_2 \in V$, is $v_2$. \item The incoming arcs of a node $v$ are all the arcs for which $v$ is the child: all arcs $a = (v_1; v) \in A$, with $v_1 \in V$. \item The outgoing arcs of a node $v$ are all the arcs for which $v$ is the parent: all arcs $a = (v; v_2) \in A$, with $v_2 \in V$. \item The indegree $\deg^{-}(v)$ of a vertex $v \in V$ is the number of arcs in $A$ whose destination is the vertex $v$: $\deg^{-}(v) = \mathrm{Card}(\{v_i \in V : (v_i; v) \in A\})$. \item The outdegree $\deg^{+}(v)$ of a vertex $v \in V$ is the number of arcs in $A$ whose source is the vertex $v$: $\deg^{+}(v) = \mathrm{Card}(\{v_i \in V : (v; v_i) \in A\})$. \item A root is a vertex $v \in V$ for which $\deg^{-}(v) = 0$ (no incoming arc). \item A sink is a vertex $v \in V$ for which $\deg^{+}(v) = 0$ (no outgoing arc). \end{itemize} \end{mydef} \begin{mydef}[AND-OR Graph]\label{def:and-or-graph} An AND-OR graph is a directed graph where each vertex $v$ is either an OR or an AND. A vertex represents a sub-objective and, according to its type (AND or OR), it requires either the conjunction or the disjunction of its children to be fulfilled. A root node of an AND-OR graph can be called a precondition, as it does not require any other node to be fulfilled. \end{mydef} According to Definitions~\ref{def:graph}, \ref{def:directedgraph}, and~\ref{def:and-or-graph}, logical attack graphs are based on AND-OR logical directed graphs. The nodes are logical facts describing adversaries' actions or the pre-requisites to carry them out. The edges correspond to the dependency relations between the nodes. Various operators can be taken into account in a logical attack graph, depending on the approach. The most popular operators are AND and OR. The AND operator requires that all the facts of a node's children be achieved for the node's logical fact to be achieved. The OR operator requires that at least one fact of the node's children be achieved for the node's logical fact to be achieved. \section{Our Approach} \label{sec:approach} We assume that after the generation of a (proactive) attack graph, using a priori knowledge about vulnerabilities and network data, both networks and vulnerabilities may evolve (i.e., the configuration of system devices may change, software updates may be enforced, etc.). Hence, the network is not exposed to the same vulnerabilities as at the beginning of the attack graph generation process. It is essential to update the attack graph according to system changes. When updating a logical attack graph, causality relations between adversaries and systems shall be represented in the logical statements of nodes and edges. We propose a logical attack graph enrichment approach based on ontologies.
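To make the AND-OR semantics of Definition~\ref{def:and-or-graph} concrete before describing the approach, the following minimal sketch (ours, for illustration only; it is not part of any attack-graph tool) encodes a small AND-OR graph and recursively checks whether a vertex's sub-objective is fulfilled, given the set of preconditions that already hold:
\begin{verbatim}
# Minimal sketch of an AND-OR graph: each vertex is an AND or an OR
# over its dependency children; vertices with no dependencies are the
# preconditions. All names are illustrative.
def fulfilled(node, children, kind, facts):
    kids = children.get(node, [])
    if not kids:                      # a precondition
        return node in facts
    results = [fulfilled(k, children, kind, facts) for k in kids]
    return all(results) if kind[node] == 'AND' else any(results)

# Example: the goal is an OR over two AND sub-objectives.
children = {'goal': ['a1', 'a2'], 'a1': ['p1', 'p2'], 'a2': ['p3']}
kind = {'goal': 'OR', 'a1': 'AND', 'a2': 'AND'}
print(fulfilled('goal', children, kind, facts={'p1', 'p2'}))  # True
\end{verbatim}
In a logical attack graph, the same recursion determines whether the adversary's goal fact is derivable from the currently satisfied preconditions.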
Before moving forward with our approach, we start by introducing a representative use case that will help us to explain its rationale. Examples based on the use case scenario are used to exemplify how our approach conducts the generation and enrichment of attack graphs, as well as other tasks, such as periodic monitoring and ontology analysis. \subsection{Use Case Scenario} \label{sec:use-case} This section describes a use case scenario provided by smart city stakeholders. We provide first the general context associated with the scenario, then we focus in more detail on possible attack consequences described by the stakeholders. \subsubsection{General Description} An infectious disease spreads across multiple continents. Health authorities impose unexpected lockdowns on several countries. When the situation seems over, politicians decide to apply some unpopular restrictions, to prevent new spreading waves of the disease. The population gets furious. Violent groups connected through the internet take it as an opportunity to launch attacks against assets associated with public services. Their goal is to cause panic among the population. \subsubsection{Panic and Violence on a Transportation Service} Politicians decide to engage in another period of lockdown. Protesters are loudly shouting outside a municipal building. Social media respond positively to the movement. A mass of citizens arrives at the location. Public transportation in the area is heavily affected, causing long delays. Tensions and altercations rise with the increase of protesters. A fake alert, pretending to come from the municipality network, forces people to evacuate the area. People get injured. Images and videos of altercations, evacuation, and car fires are posted on social media. At the same time, a denial-of-service cyber-attack against the municipality network is perpetrated. Machines and sensors get out of service, causing further delays in the transportation service of the city. People trying to leave the area start fighting, forcing the authorities to close all transportation services. The mass of people in a given bus affects the health of several passengers. \subsection{Generation of the Attack Graph} The generation of a logical attack graph requires the definition of rules describing causality relations. As an example, we consider code execution. Code execution on a machine allows an adversary to have access to a host. This scenario corresponds to the logical implication detailed by the following rule: \begin{center} \framebox[11cm][l]{ \begin{footnotesize} \begin{minipage}{11cm} $execCode(h,a) \rightarrow canAccesHost(h)$ \end{minipage} \end{footnotesize} } \end{center} where $canAccesHost(h)$ is a logical rule describing the accessibility to host $h$, and $execCode(h,a)$ a fact assessing that an adversary $a$ executed code on $h$. The example can be extended as follows: \begin{center} \framebox[11cm][l]{ \begin{footnotesize} \begin{minipage}{11cm} $execCode(h,a) \land hasCredentialsOnMemory(h,u) \rightarrow harvestCredentials(h,u)$ \end{minipage} \end{footnotesize} } \end{center} where $harvestCredentials(h,u)$ describes the harvesting of user $u$'s credentials on host $h$, $execCode(h,a)$ the fact that an adversary $a$ is executing code on host $h$, and $hasCredentialsOnMemory(h,u)$ the fact that the credentials of user $u$ are stored in the memory of host $h$.
The example describes the fact of an adversary harvesting the credentials of a user that previously logged onto the system, by finding them in the memory of that same system. \subsection{Monitoring the Information System} In order to update the attack graph based on the real-time state of the system, we can also monitor the information system. The output of the monitoring process can get continuously mapped with the initial nodes of the attack graph, in order to find out if a vulnerability is being exploited. The mapping between the monitored system and the attack graph is described below: \begin{center} \framebox[11cm][l]{ \begin{footnotesize} \begin{minipage}{11cm} $\forall{n \in N}: (vulExists(h,x,y,z) \land networkServiceInfo(h,s,p,a,u)) \rightarrow F_{1}$ \end{minipage} \end{footnotesize} } \end{center} where $vulExists(h,x,y,z)$ describes the existence of a vulnerability $x$ on host $h$ which allows action $y$ resulting in $z$. Likewise, $networkServiceInfo(h,s,p,a,u)$ describes that user $u$ has a session open on host $h$, where service product $s$ is installed, using port $a$ and protocol $p$. The evaluation of a successful mapping implies finding further details about specific vulnerabilities. We propose to use a vulnerability ontology to conduct such a process, represented by $F_1$. Next, we provide some more details about this process using semantic information about concrete vulnerabilities. \subsection{Vulnerabilities and Ontologies} Vulnerability information is necessary for both the attack graph generation process and the updates. Vulnerabilities enable the adversary to take actions towards the initial adversarial goals, or alternative actions affecting the system in different ways. Lists of unique identifiers in CVE (Common Vulnerabilities and Exposures), a collection of publicly known software vulnerabilities, are complemented with valuable descriptions about the vulnerability, its preconditions and post-conditions, and practical ways in which it can be exploited. The information contained in CVE descriptions can also lead to other valuable characterizations, for example, the impact on the system if the vulnerability is exploited.
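As an illustration of this mapping step, the following minimal sketch (ours; the function \texttt{ontology\_lookup} and all field names are illustrative assumptions, not taken from MulVAL or Prelude-OSS) matches an incoming alert against the network information attached to the graph nodes and triggers the ontology lookup $F_1$ on a match:
\begin{verbatim}
# Sketch of the alert-to-graph mapping: an alert from the SIEM carries
# host, protocol, and port information; it is matched against the
# vulExists / networkServiceInfo predicates attached to the graph
# nodes, and a match triggers the ontology lookup F_1. The function
# ontology_lookup and all field names are illustrative assumptions.
def map_alert(alert, nodes, ontology_lookup):
    matches = []
    for node in nodes:
        if (node['host'] == alert['host']
                and node['protocol'] == alert['protocol']
                and node['port'] == alert['port']):
            # F_1: retrieve the characteristics (in particular the
            # post-conditions) of the vulnerability from the ontology.
            matches.append((node, ontology_lookup(node['cve'])))
    return matches
\end{verbatim}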
An ontology is a formal description of a field of knowledge and is represented by means of description logics. An ontology brings semantic support and unifies unstructured data. Ontologies have been widely used in the field of cybersecurity, for instance, to represent vulnerability classes and their inner relations. Table~\ref{table:CVE-2002-0392} shows an example, representing the classification of a given vulnerability listed in CVE (with identifier CVE-2002-0392). Ontologies also offer inference abilities, which we will use to enrich logical attack graphs when the exploitation of a vulnerability is being reported during the monitoring process of a vulnerable system. \begin{table}[h!] \fontsize{7.5}{10}\selectfont \begin{tabularx}{1\textwidth} { | >{\raggedright\arraybackslash}X | >{\raggedright\arraybackslash}X | >{\raggedright\arraybackslash}X | >{\raggedright\arraybackslash}X | >{\raggedright\arraybackslash}X | } \hline CVE-ID & Product & Type & Action & Impact\\ \hline CVE-2002-0392 & Apache & remote & Code Execution & Privilege Escalation\\ \hline \end{tabularx} \caption{Classification of CVE-2002-0392 characteristics.} \label{table:CVE-2002-0392} \end{table} \begin{algorithm}[b] \caption{Enrichment of a proactive attack graph based on a vulnerability ontology and monitored system information} \label{alg:alg1} $h_1$: A threat exists on a vulnerable component of the monitored system. $h_2$: Post-conditions of the exploited vulnerability are found in the ontology. $P_1$: Add a new path to the attack graph. \[ (h_{1}\land h_{2}) \rightarrow P_{1} \] \end{algorithm} \begin{algorithm}[t] \caption{Inference rule for the mass on buses scenario} \label{alg:alg2} $v_1$: Node corresponding to the reboot of a machine. $v_2$: Node corresponding to mass on buses. The child or destination of an arc $(v_1; v_2) \in A$, with $v_1, v_2 \in V$, is $v_2$.
\[ (v_{1} \land v_{2}) \rightarrow (v_{1}; v_{2}) \] The inference rule is: \[ \inference {v_{1} \quad v_{2}}{(v_{1}; v_{2})}[r] \] \end{algorithm} \subsection{Enrichment of Attack Graphs} Algorithm~\ref{alg:alg1} represents our proposed approach for enriching attack graphs based on a vulnerability ontology and monitored system information. When a threat exists on a vulnerable component of the monitored system, it is necessary to look through the vulnerability characteristics to find its post-conditions. Those post-conditions enrich the attack graph with new paths. The inference rules make it possible to know what those new paths can bring to the attack goal. As an example, Algorithm~\ref{alg:alg2} shows an inference rule based on the definition of a directed graph in Definition~\ref{def:directedgraph}, which deduces the consequence of restarting a device in the scenario of Section~\ref{sec:use-case} (i.e., a cyber-attack on a municipality network that takes a given device out of service for a while). During the attack, the lack of communication between an application and a server causes a problem in the logistics of the transportation service. There is a delay in the bus service. This scenario is not anticipated in the initial graph. It is necessary to update the graph and add the new path that allows the adversary to reach the goal (i.e., cause panic and violence among people). Updating the graph will help the operators to inform the authorities and explore the best remediation strategy to mitigate the damage as soon as possible.
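A minimal procedural sketch of this inference rule (ours, for illustration; node fields and labels are hypothetical) adds an arc from every post-condition node representing a reboot ($v_1$) to the node representing the mass-on-buses rule ($v_2$):
\begin{verbatim}
# Sketch of the inference rule of Algorithm 2: if a post-condition
# node v1 represents the reboot of a machine and a node v2 represents
# the mass-on-buses rule, the arc (v1; v2) is added, making v1 a new
# source of v2. Node fields and labels are illustrative assumptions.
def apply_reboot_rule(nodes, edges):
    reboots = [n for n in nodes if n['fact'] == 'Reboot']
    targets = [n for n in nodes if n['fact'] == 'mass on buses']
    for v1 in reboots:
        for v2 in targets:
            if (v1['id'], v2['id']) not in edges:
                edges.append((v1['id'], v2['id']))
    return edges
\end{verbatim}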
It is therefore necessary to define inference rules like the one shown in Algorithm~\ref{alg:alg2} to update the graph in such situations. \section{Implementation} \label{sec:experiments} \subsection{Setup} \label{sec:setup} In order to validate our approach, we instantiate the scenario depicted in Figure~\ref{fig:cps}. It represents a cyber-physical system monitored by a Security Information and Event Management (SIEM) system, based on Prelude-OSS\footnote{\url{https://www.prelude-siem.com/en/oss-version/}}. We use a virtual machine representing the starting device of the scenario, another machine to instantiate the breach point, and a third one representing the critical asset. We use Nessus Essentials\footnote{\url{https://www.tenable.com/products/nessus/nessus-essentials}} to discover and list vulnerabilities in the monitored system. Data from Nessus is consumed by MulVAL~\cite{Ou2005MulVALAL}, a reasoning engine based on logic programming, to generate a logic-based attack graph. We also use a practical implementation\footnote{\url{https://github.com/usnistgov/vulntology}} of NIST's Vulnerability Description Ontology (VDO)~\cite{booth2016vulnerability}, and Prelude-OSS to map the information contained in VDO into the attack graph, upon reception of Prelude-OSS alerts. The rationale of the scenario depicted in Figure~\ref{fig:cps} is as follows. An adversary succeeds in executing arbitrary code on the starting device by connecting remotely through RDP (Remote Desktop Protocol, a network service that provides users with graphical means to remotely control computers). The adversary can then read the memory of the starting device. The credentials of the administrator are saved in the memory of the starting device. Then, the adversary harvests those credentials. We assume that the administrator can connect to all the machines in the domain, to remotely manage them. Then, an adversary capable of reusing the credentials can log onto the breach point and remotely connect to the critical asset. The adversary also perpetrates a DNS poisoning attack~\cite{hu2018measuring}, in order to eavesdrop on network traffic. The adversary also perpetrates some integrity attacks, in order to modify application-level information, such as the bus schedule and routes, to perturb the traffic flow and increase congestion. This causes citizens to take the wrong buses at the wrong time, leading to the situation of panic and violence mentioned in Section~\ref{sec:use-case}. In parallel, the adversary reuses the domain credentials to steal some other access keys and impersonate other users (shown in Figure~\ref{fig:cps} with steps \emph{Access Keys Stealer} and \emph{User Compromise}). This parallel scenario leads to the exploitation of other vulnerabilities and an eventual denial-of-service attack. \begin{figure}[!t] \centering \includegraphics[scale=0.34]{FIG/CPS} \caption{Cyber-physical attack scenario. An adversary exploits the vulnerability associated with CVE-2019-0708 on a Starting Device. Then, administrator credentials are harvested from the memory of the device, and reused by the adversary to take control over a critical asset. The attack affects both physical and digital elements associated with the system (e.g., people and services).} \label{fig:cps} \end{figure} \subsubsection{MulVAL} \label{sec:mulval} Based on the scenario shown in Figure~\ref{fig:cps}, we create input data for MulVAL, as well as interaction rulesets associated with VDO.
We encode the new interaction rules as Horn clauses~\cite{Ou2005MulVALAL}. The first line corresponds to a first-order logic conclusion. The remaining lines represent the enabling conditions. The clauses below correspond to the following statement from the scenario shown in Figure~\ref{fig:cps}: \emph{'the breach point credentials can be harvested on the starting device only if there is previously an execution code exploit on the starting device and the credentials of the administrator are saved onto the memory of the starting device'}. \begin{verbatim} harvestCredentials(_host, _lastuser) :- execCode(_host, _user), hasCredentialsOnMemory(_host, _lastuser) \end{verbatim} The clauses below represent the following facts: \emph{'it is possible to log into the breach point with the administrator credentials when these credentials have been harvested and because the breach point and the starting device are on the same network and can communicate through a given protocol and port'}. \begin{verbatim} logOn(_host, _user) :- networkServiceInfo(_host, _program, _protocol, _port, _user), hacl(_host, _h, _protocol, _port), harvestCredentials(_h, _user) \end{verbatim} \subsubsection{Ontology} \label{sec:ontology} We use VDO, an ontology of CVEs proposed by NIST. Figure~\ref{fig_vdo}, from~\cite{Gonzalez}, represents various attributes of the VDO ontology, for the characterization of software vulnerabilities. Some attributes, such as \textit{Attack Theater}, \textit{Impact Method} and \textit{Logical Impact}, are mandatory. The value of \textit{Attack Theater} characterizes the area or place from which an attack must occur. \textit{Impact Method} describes how a vulnerability can be exploited. \textit{Logical Impact} describes the possible impacts a successful exploitation of the vulnerability can have. For each CVE affecting the monitored system, we can fill in the classes of information from the ontology, according to the description and metrics of the CVE.\\ \begin{figure}[h] \centering \includegraphics[scale=0.5]{FIG/The-NIST-VDO_W640} \caption{The NIST Vulnerability Description Ontology (VDO)~\cite{Gonzalez}. This figure represents the different classes of the NIST vulnerability ontology with their properties.\label{fig_vdo}} \end{figure} A simplified description of \emph{CVE-2019-0708} according to VDO would state, among other details, that '\emph{a remote code execution vulnerability exists in Remote Desktop Services, formerly known as Terminal Services, which can be exploited by an unauthenticated attacker connecting to the target system using TCP or UDP traffic and sending specially crafted requests}'. \begin{table}[h!]
\centering \includegraphics[scale=0.72]{FIG/table2} \caption{Attributes associated with CVE-2019-0708.} \label{table:1} \end{table} \subsubsection{Prelude-ELK} \label{sec:preludeelk} We use an extended version of Prelude-OSS with ELK (Elasticsearch, Logstash, and Kibana). The code is available online\footnote{\url{https://github.com/Kekere/prelude-elk}}. Elasticsearch allows us to index and process unstructured data. It also provides a distributed web interface to access the resulting information. Logstash is the parsing engine associated with Elasticsearch for collecting, analyzing, and storing logs. It can integrate many sources simultaneously. Finally, Kibana is a data visualization platform that provides visualization functionalities on indexed content in Elasticsearch. Users can create dashboards with charts and maps of large volumes of data.
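As a minimal illustration of how the indexed alerts can later be consulted (the index name and field names below are assumptions made for the sketch, not the actual deployed schema), the most recent alert can be fetched through Elasticsearch's REST search API:
\begin{verbatim}
# Sketch: fetch the most recent alert from an Elasticsearch index via
# its REST search API. The index name ('alerts') and the field names
# are illustrative assumptions, not the actual deployed schema.
import requests

def last_alert(es_url="http://localhost:9200", index="alerts"):
    query = {
        "size": 1,
        "sort": [{"@timestamp": {"order": "desc"}}],
        "query": {"match_all": {}},
    }
    r = requests.post(f"{es_url}/{index}/_search", json=query)
    r.raise_for_status()
    hits = r.json()["hits"]["hits"]
    return hits[0]["_source"] if hits else None  # e.g. host, port, protocol
\end{verbatim}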
The addition of the ELK stack into Prelude-OSS allows the injection and visualization of third-party logs received from both system and network components via TCP/IP messages. The collection of data can still be combined with the classic collection and visualization tools of Prelude-OSS. For instance, we can keep using Prelude’s LML (Log Monitoring Lackey) and third-party sensors to monitor and process Syslog messages generated from different hosts on heterogeneous platforms. In addition, we also extend Prelude-OSS with Suricata\footnote{\url{https://suricata.io/}}, as a third-party sensor reporting alerts on the exploitation of known vulnerabilities. We install the Rsyslog Windows Agent\footnote{\url{https://www.rsyslog.com/windows-agent/}} and Suricata on each of the virtual machines, in order to monitor them with the ELK extension of Prelude-OSS. Logstash inserts the alerts into Elasticsearch (in JSON format). The results are processed in real-time, mapping the alerts and VDO's data while conducting our attack graph enrichment process. \subsubsection{Web Interface} \label{sec:webinterface} We create a web interface using PHP, JavaScript, jQuery, D3.js, and HTML, where we upload the XML outputs of MulVAL. The inference engine converts the XML into JSON. From this JSON, the engine displays a web visualization of the attack graph. The server consults the Elasticsearch index in real-time. The last alert's IP address, port, and protocol are matched against the attack graph. When a vulnerability is likely to be exploited, the engine consults the vulnerability ontology to find other post-conditions for the vulnerability. The tool updates the attack graph according to the ontology. \subsection{Results} \label{sec:results} \begin{figure}[hptb] \centering \subfigure[Initial attack graph. The adversary gains network access on Node $25$. When all the preconditions are met at each stage, the adversary can take actions represented by green nodes to reach Node $1$, which represents the adversarial goal (i.e., panic and violence on mass buses).\label{fig:before}]{ \includegraphics[width=\textwidth]{FIG/cpbeforeenrichment} } \subfigure[Enriched attack graph. A new path towards Node $1$ (panic and violence on mass buses) is discovered using the ontology. The adversary can now take a much shorter path to reach the final goal. \label{fig:after}]{ \includegraphics[width=0.6\textwidth]{FIG/cpafterenrichment} } \caption{Sample results. (a) Attack graph generated for the scenario of mass on buses. (b) The same attack graph, once enriched with data from the ontology. In both graphs, a red node represents the existence of a vulnerability on a given resource. An orange node represents network configuration (e.g., a characteristic of a machine or a connection between two machines in the network). When preconditions are respected, a yellow node represents an inference rule that leads to a fact (represented by a green node). \label{fig:before-after}} \end{figure} Figure~\ref{fig:before} represents the attack graph generated for the scenario depicted in Figure~\ref{fig:cps}. The goal, represented by Node $1$, is to cause panic and violence (see the use-case scenario described in Section~\ref{sec:use-case}). A red node represents the existence of a vulnerability on a device. An orange node represents network configuration, e.g., characteristics of a device, connection between two devices in the network, etc. When the preconditions are respected, a yellow node represents the inference rules leading to a fact. Facts are represented by green nodes.
For instance, Node $26$ represents remote connectivity of the Starting Device in Figure~\ref{fig:cps}, which can be remotely accessed using RDP (Remote Desktop Protocol) services. Node $27$ concerns the location of the adversary in the network. Node $25$ represents the rule that leads the adversary to gain direct network access (i.e., Node $24$) on the starting device, when the preconditions on Nodes $26$ and $27$ are met. Node $29$ concerns the existence of the vulnerability \emph{CVE-2019-0708} on the starting device. \emph{CVE-2019-0708} is a remote code execution vulnerability. Node $28$ concerns the network configuration of the Starting Device. Some other practical details encoded in the graph correspond to the operating system (Windows 7), the open TCP port (3389), and the identity of the user at the Starting Device (username \emph{olivia}). Finally, Node $23$ has Nodes $24$, $28$, and $29$ as main preconditions. Alerts are processed with Prelude-ELK (cf. Section~\ref{sec:preludeelk}) in real-time. The inference engine finds exploited nodes based on network information associated with a victim device (i.e., the Starting Device in Figure~\ref{fig:cps}), such as IP address, protocol, and port. The service consults VDO (i.e., our vulnerability ontology) to find other post-conditions associated with \emph{CVE-2019-0708}. For instance, it compares the operating system associated with the victim device against the list of products listed in \emph{CVE-2019-0708}. As a result, an enriched attack graph is derived. Figure~\ref{fig:after} represents such an enriched attack graph. The four nodes highlighted with the red square correspond to the new nodes added to the enriched attack graph. They represent the logical impacts derived from the ontology. Node $33$ describes the Starting Device being restarted. Node $35$ describes a system crash of the Starting Device (i.e., the Starting Device stops functioning properly). The consequence of Nodes $33$ and $35$ (i.e., the Starting Device restarting or being unavailable) leads to the mass on buses scenario (i.e., by inference, Nodes $33$ and $35$ now target Node $2$, which is the rule concerning mass on buses). In Figure~\ref{fig:after}, the four added nodes represent a new path that the adversary can take to cause panic and violence. As we can see, the enriched graph is now an acyclic graph. The new path is shorter than the predicted one. The adversary can reach the goal, represented by Node $1$, much sooner than expected. This difference would make operators aware that it is more urgent to apply a remediation plan. \section{Related Work} \label{sec:relatedwork} \subsection{Attack Graph Generation Approaches} Ghosh and Ghosh~\cite{Ghosh2012} propose a planning-based approach for attack graph generation and analysis. In this approach, the initial network configuration and a description of exploits serve as input for the minimal attack graph generation. Shirazi et al.~\cite{B2019} present an approach for modeling attack-graph generation and analysis problems as a planning problem. They present a tool called AGBuilder that generates attack graphs using the Planning Domain Definition Language (PDDL) from extracted vulnerability information. Roschke et al.~\cite{Roschke2009} present an approach of vulnerability information extraction for attack graph generation using MulVAL. The proposal integrates attack graph workflows with SIEM alerts in terms of data fusion and correlation. Alerts are filtered based on vulnerability and system information of the attack graph.
The correlation process can reveal a new way in which the network can be attacked. In such a case, the attack graph is updated. Compared to those aforementioned approaches, we monitor the network to update the attack graph based on state changes of the network, and we generate attack graphs based on network information received from Nessus scans. We also enrich the attack graph based on vulnerability information from CVEs and alerts received from a SIEM. We use a logical attack graph generation approach instead of a planning-based attack graph generation one. With a logical approach, the inference is more straightforward. Moreover, the semantic abilities enhance attack graph enrichment with ontologies. We use a vulnerability ontology to correlate alerts with the system and vulnerability information. \subsection{Ontology and Attack Graph Generation} Falodiya et al.~\cite{Falodiya2018} propose an ontology-based approach for attack graphs. The idea is to use an exploit dependency attack graph, an equivalent of logical attack graphs. The work presents an algorithmic solution to traverse the attack graph and add the extracted data into the ontology. Lee et al.~\cite{Lee2019} also propose an approach for converting an attack graph into an ontology. Their formalism is based on an attack-graph approach by Ingols et al.~\cite{Ingols}. The semantics extracted from the graph are then labeled to build an RDF (Resource Description Framework) graph. Using RDF schema is beneficial for inferences from data and enhances searching. Wu et al.~\cite{Wu2018} also propose an attack graph generation approach based on the inference ability of cybersecurity ontologies. In our approach, we use these abilities of semantic languages to enrich logical attack graphs easily. We use a NIST standardized ontology (VDO, for Vulnerability Description Ontology) as the primary source of such vulnerability semantics. VDO corresponds well with the logical attack graph approach. It provides mandatory classes such as \textit{Logical Impact} and \textit{Product}, which we use to map alerts with attack graph nodes. New attack paths can be discovered when looking for other logical impacts of a given CVE in VDO. With this approach, we avoid recomputing the attack graph from scratch in the reasoning engine each time the system state changes. The semantic abilities of logical attack graphs and ontologies also allow us to take an incremental approach to update the graphs. This improves the automation of the enrichment process (i.e., cybersecurity operators do not have to manually modify inputs to update the attack graphs). \section{Conclusion} \label{sec:conclusion} We have proposed an ontology-based approach for attack graph enrichment. We use logical graph modeling, in which attacks are represented with predicates. Successful precondition validation represents successful attack perpetration. Compared to other similar approaches, such as topological and probabilistic attack graphs, our approach simplifies the inference process, since graph edges now specify causality. We have implemented the proposed approach using existing software. We have validated it on a cyber-physical use-case proposed by smart-city stakeholders, covering the full pipeline from the generation of an initial attack graph (using network vulnerability scans) to the enrichment of the graph (mapping monitoring alerts and ontology semantics in real-time).
The predictions of the initial graph get successfully updated into the enriched graph, based on attack evidence and semantic augmentation.\\ \noindent \textbf{Acknowledgements ---} We acknowledge financial support from the European Commission (H2020 IMPETUS project, under grant agreement 883286).
\section{Introduction} \label{ch:intro} \input{tex/1intro} \section{Problem Formulation} \label{ch:formulation} \input{tex/3formulation} \section{Proposed Method} \label{ch:method} \input{tex/3proposed} \section{Results} \label{ch:results} \input{tex/4results} \section{Conclusions} \label{ch:conclusions} \input{tex/5conclusions} \subsection*{Acknowledgment} This work was partially funded by the Academy of Finland project 327912 REPEAT and the Swedish strategic research project ELLIIT. The authors gratefully acknowledge Lund University Humanities Lab. \bibliographystyle{IEEEbib} \subsection{Sensor networks calibration} \subsection{Algebraic preprocessing} Let $d_{ij}$ denote the distance between the $i$th receiver and the $j$th transmitter. Define also \begin{equation} \tilde{d}_{ij}=d_{ij}^2-d_{i1}^2=(f_{ij}-o_j)^2-(f_{i1}-o_1)^2 \end{equation} for $i=1,\ldots,m$ and $j=2,\ldots,n$. Hence, the term $\tilde{d}_{ij}$ depends only on the measurements and the offsets. By algebraic manipulation of the previous equation we obtain \begin{equation} -2(\vec{s}_j-\vec{s}_1)^\mathsf{T}\vec{r_i}=\tilde{d}_{ij}-\norm{\vec{s}_j}^2+\norm{\vec{s}_1}^2. \label{eq:rec_lin} \end{equation} Since we know the transmitter positions, the previous equation is linear in the receiver coordinates. We can use this to eliminate variables as follows: \begin{itemize} \item \textbf{2D 3r/3s}: we obtain $2$ equations like \eqref{eq:rec_lin} for each receiver. As each receiver has two unknowns, we can solve for the receivers as a function of the offsets (see the numerical sketch after this list). Finally, substituting these into the $3$ equations between the $i$th receiver and the first transmitter, we obtain $3$ equations in $3$ unknown offsets of the form $\norm{\vec{r}_i-\vec{s}_1}^2=(f_{i1}-o_1)^2$. These can be robustly solved by homotopy continuation. \item \textbf{3D 4r/4s}: same numerical recipe as 3r/3s. \item \textbf{2D 2r/4s}: considering only the first $3$ transmitters, we can repeat the same procedure as that of 3r/3s and obtain $2$ equations in $3$ unknown offsets. Furthermore, noticing that \begin{equation} \begin{split} d_{24}^2-d_{14}^2&=(f_{24}-o_4)^2-(f_{14}-o_4)^2\\&=-2(f_{24}-f_{14})o_4+f_{24}^2-f_{14}^2, \end{split} \end{equation} we obtain a linear equation in the $4$th offset, and hence that variable can also be eliminated. Adding to the $2$ previously obtained equations the equation $(f_{14}-o_4)^2=\norm{\vec{r}_1-\vec{s}_4}^2$, we finally obtain $3$ equations in $3$ offsets and we can solve the resulting system with homotopy continuation. \item \textbf{3D 2r/6s}: same numerical recipe as 2r/4s. \end{itemize}
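As a quick numerical illustration of the elimination step in the 2D 3r/3s case, the following Python sketch recovers the receivers from \eqref{eq:rec_lin} given trial offsets, using synthetic data; the variable names and the test setup are ours, and the snippet is illustrative rather than our solver code.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
s = rng.uniform(0.0, 10.0, (3, 2))       # three known transmitters (2D 3r/3s)
r_true = rng.uniform(0.0, 10.0, (3, 2))  # three unknown receivers
o_true = rng.uniform(0.0, 1.0, 3)        # one unknown offset per transmitter
f = np.linalg.norm(r_true[:, None] - s[None], axis=2) + o_true  # f_ij = d_ij + o_j

def receivers_from_offsets(o):
    """Solve eq. (rec_lin) for each receiver, given trial offsets o."""
    d2 = (f - o) ** 2                     # squared distances d_ij^2
    A = -2.0 * (s[1:] - s[0])             # rows: -2 (s_j - s_1)^T, j = 2, 3
    r = np.empty_like(r_true)
    for i in range(3):
        dtilde = d2[i, 1:] - d2[i, 0]     # tilde d_ij for j = 2, 3
        b = dtilde - (s[1:] ** 2).sum(axis=1) + (s[0] ** 2).sum()
        r[i] = np.linalg.solve(A, b)
    return r

print(np.abs(receivers_from_offsets(o_true) - r_true).max())  # ~1e-12
\end{verbatim}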
\subsection{Homotopy Continuation} Homotopy continuation is a numerical iterative algorithm from algebraic geometry used to solve systems of polynomial equations \cite{morgan1987computing}, which has proved itself useful in several applications \cite{malioutov2005homotopy, fabbri2020trplp}. Let $\vec{F}(\vec{x}):\mathbb{R}^n\rightarrow\mathbb{R}^n$ be a vector of $n$ polynomials in $n$ variables. Our goal is to solve the system $\vec{F}(\vec{x})=\vec{0}$. To do so, we first construct a starting system $\vec{G}(\vec{x})=\vec{0}$ that can be easily solved. The only requirement on $\vec{G}$ is that it must have at least as many distinct solutions as $\vec{F}$. Several techniques to construct such a system exist. In this work, we use the so-called \textit{polyhedral initialisation} described in \cite{huber1995polyhedral}. Next, we can define the \textit{homotopy} \begin{equation} \vec{H}(\vec{x}, t) = (1-t)\vec{F}(\vec{x}) + \gamma t\vec{G}(\vec{x}), \label{eq:homotopy} \end{equation} where $\gamma$ is a randomly chosen complex number with $\norm{\gamma}=1$ (introduced for numerical stability) and $t$ is a new variable. The key observation is now that the solutions of $\vec{H}(\vec{x},t=0)=\vec{0}$ correspond to the solutions of $\vec{F}(\vec{x})=\vec{0}$, and the solutions of $\vec{H}(\vec{x},t=1)=\vec{0}$ correspond to the solutions of $\vec{G}(\vec{x})=\vec{0}$. Furthermore, it can be shown that when $t$ varies from $1$ to $0$, the roots of \eqref{eq:homotopy} vary smoothly from the roots of $\vec{G}$ to the roots of $\vec{F}$. This gives a recipe for the homotopy solver: fix a small step $h$ and iteratively solve the equation $\vec{H}(\vec{x},t_k-h)=\vec{0}$ using Newton iteration with the solution of $\vec{H}(\vec{x},t_k)=\vec{0}$ as the initial guess. By the smoothness assumption, the solution at each step will be close to the solution at the previous step, and hence the system can be solved efficiently in a few iterations. The homotopy algorithm is initialized with the solutions of $\vec{G}(\vec{x})=\vec{0}$, which can be computed efficiently by the assumptions on $\vec{G}$. In the experiments of this paper, we use the publicly available \textsf{HomotopyContinuation.jl} \cite{timme2018homotopy} library. \subsection{Study on the number of solutions} First, we want to characterise the solutions of the minimal configurations, i.e.\ compute how many (complex) solutions the system of polynomial equations has, and how many of these are real. To determine this, we generated 1000 random instances of the problem and solved them with our homotopy solvers. The total number of complex solutions was also computed symbolically using the Macaulay2 software \cite{eisenbud2001computations}, thereby proving that the count is correct. The results are reported in Table \ref{tab:num-sol}, where we also report the running time of the homotopy solvers. \begin{table}[tb] \centering \caption{Solutions of MOM configurations.} \label{tab:num-sol} \begin{tabular}{c||c|c|c|c|c} &&\multicolumn{3}{c|}{Real solutions}&\\ Configuration&tot. sols&min&avg.&max&time [s]\\\hline\hline 2r/4s 2D&24&4&9&20&0.05\\ 3r/3s 2D&28&2&7&18&0.13\\ 4r/4s 3D&92&2&4&20&0.6\\ 2r/6s 3D&48&4&7&20&3\\ \end{tabular} \end{table} As could be expected from their nonlinearity, MOM configurations do not have a unique solution. It is, however, interesting to notice that the number of feasible (i.e.\ real) solutions is strictly smaller than the total number of solutions. In particular, the 4r/4s configuration presents the highest number of solutions, but its percentage of real solutions is significantly smaller. \subsection{Solvers to find a unique solution} Without further information, it is not possible to determine which of the real solutions corresponds to the original network configuration. If a unique solution is desired, then at least one extra point needs to be added. Hence, we now consider the subminimal configurations with one extra transmitter, that is, 3r/4s and 2r/5s in 2D and 4r/5s and 2r/7s in 3D. These problems can be solved as follows: first solve the corresponding minimal configuration by leaving the last transmitter out. Next, for each candidate solution compute the extra offset as the average of the offsets computed with the $m$ equations of form \eqref{eq:range} corresponding to the extra transmitter. Finally, substitute the full solutions into the original equations and choose as the final estimate the one with the smallest residual error.
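To make the path-tracking recipe above concrete, the following is a minimal, naive fixed-step tracker in Python (our experiments use \textsf{HomotopyContinuation.jl} instead, which adds adaptive step control and further safeguards). The toy target and start systems below are illustrative stand-ins, not the MOM equations.
\begin{verbatim}
import numpy as np

def F(x):   # toy target system: x^2 + y^2 = 4, x*y = 1
    return np.array([x[0]**2 + x[1]**2 - 4, x[0]*x[1] - 1])

def JF(x):
    return np.array([[2*x[0], 2*x[1]], [x[1], x[0]]])

def G(x):   # start system with the four known roots (+-1, +-1)
    return np.array([x[0]**2 - 1, x[1]**2 - 1])

def JG(x):
    return np.array([[2*x[0], 0.0], [0.0, 2*x[1]]])

gamma = np.exp(2j * np.pi * 0.123)   # unit-modulus complex "gamma trick"

def H(x, t):
    return (1 - t) * F(x) + gamma * t * G(x)

def JH(x, t):
    return (1 - t) * JF(x) + gamma * t * JG(x)

def track(x0, steps=200, newton_iters=5):
    """Follow one root of G from t = 1 down to t = 0 with Newton corrections."""
    x = x0.astype(complex)
    for t in np.linspace(1.0, 0.0, steps + 1)[1:]:
        for _ in range(newton_iters):
            x = x - np.linalg.solve(JH(x, t), H(x, t))
    return x

starts = [np.array([a, b], dtype=complex) for a in (1, -1) for b in (1, -1)]
for x0 in starts:
    root = track(x0)
    print(root, np.abs(F(root)).max())   # residuals should be ~1e-15
\end{verbatim}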
The proposed solvers were benchmarked with both clean and noisy data. For clean data, we show the distributions of the relative errors in the histograms in Figure \ref{fig:results}. For noisy data, additive white Gaussian noise with variable variance was added to the measurements before solving. The median relative error as a function of the noise level is depicted in the lower row of Figure \ref{fig:results}. As the figure shows, the proposed solvers are stable and robust to noise. \subsection{Real data} We also evaluated our system using real data. The setup consisted of $m=12$ omni-directional microphones (the T-bone MM-1) spanning a volume of $4.0 \times 4.6 \times 1.5$ meters. A speaker was moved through the setup while emitting sound. Ground truth positions for the microphones and the speaker were obtained using a Qualisys motion capture system. The microphones were all internally synchronized, but we assume that the time of sound emission from the speaker is unknown. We use the position estimates of the sound sources from the Qualisys system. Consequently, the microphone positions and emission times correspond to the situation of unknown receivers and offsets, while the sender positions are assumed to be known. In the experiment a song was played through the speaker and the arrival times $t_{ij}$ were found using GCC-PHAT \cite{Knapp1976}. This resulted in a total of $n=151$ sound events with available pseudoranges. Next, we sampled $4$ receivers and $5$ transmitters and solved the problem using our $4r/5s$ solver. Finally, we solved for the remaining offsets as described above and trilaterated the remaining receivers. This estimate was further refined using the Levenberg-Marquardt algorithm. As a final result, the mean position error for the receivers was $\SI{10}{\centi\meter}$.
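For reference, the following is a minimal Python sketch of GCC-PHAT delay estimation of the kind used to obtain the arrival times above; the signal, sampling rate, and delay are synthetic stand-ins.
\begin{verbatim}
import numpy as np

def gcc_phat(x, y, fs):
    """Time delay of x relative to y via the phase transform (GCC-PHAT)."""
    n = len(x) + len(y)
    X = np.fft.rfft(x, n=n)
    Y = np.fft.rfft(y, n=n)
    R = X * np.conj(Y)
    R /= np.abs(R) + 1e-12          # phase transform: keep phase, drop magnitude
    cc = np.fft.irfft(R, n=n)
    cc = np.concatenate((cc[-(len(y) - 1):], cc[:len(x)]))  # reorder the lags
    lag = np.argmax(np.abs(cc)) - (len(y) - 1)
    return lag / fs

fs = 8000
sig = np.random.randn(fs)                            # one second of noise
delayed = np.concatenate((np.zeros(40), sig))[:fs]   # 40-sample (5 ms) delay
print(gcc_phat(delayed, sig, fs))                    # ~ +0.005 s
\end{verbatim}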
\section{Observations} The UVOT utilizes seven broadband filters during the observation of GRBs. The characteristics of the filters --- central wavelength ($\lambda_c$), FWHM, zero points (the magnitudes at which the detector registers $1 {\rm ~count\,s^{-1}}$; $m_z$), and the flux conversion factors ($f_{\lambda}$) --- can be found in Table~\ref{tab5} \citep{PTS2007,RPWA2005}. The flux density conversion factors are calculated based on model GRB power law spectra with redshifts in the range $0.3 < z < 1.0$ \citep{PTS2007}\footnote{The most recent calibration data are available from the {\em Swift} calibration database at http://swift.gsfc.nasa.gov/docs/heasarc/caldb/swift/}. The nominal image scale for UVOT images is $0\farcs502 {\rm ~pixel^{-1}}$ (unbinned). UVOT data are collected in one of two modes: event (or photon counting) and image. Event mode captures the time of the arriving photon as well as its celestial coordinates. The temporal resolution in this mode is $\sim11\,{\rm ~ms}$. In image mode, photons are counted during the exposure and the position is recorded, but no timing information is stored except for the start and stop times of the exposure. Because the spacecraft has limited data storage capabilities, most UVOT observations are performed in image mode, since the telemetry rate is significantly lower than for event mode observations. Since the launch of {\em Swift}, the automated observing sequence of the UVOT has been changed a few times in order to optimize observations of GRBs. The automated sequence is a set of variables which includes, but is not limited to, the filters, modes, and exposure times. The basic automated sequence design consists of finding charts and a series of short, medium, and long exposures in various filters. The finding charts are typically taken in both event and image mode simultaneously, in both the $white$ and $v$ filters, and have exposure times ranging from $100-400\,{\rm ~s}$. A subset of these finding charts is immediately telemetered to ground-based telescopes to aid in localizing the GRB\footnote{A discussion of the finding chart and simultaneous observations in event and image mode is beyond the scope of this paper. The reader is referred to \citet{RPWA2005}. For observations between 2006 Jan 10 and 2006 Feb 13, and from 2006 Mar 15 to the present, a second set of finding charts was included in the sequence. The second set of finding chart exposures is taken in the same way as the first set, except that exposures in the $white$ filter are taken in image mode only.}. After completion of the $white$ and $v$ finding charts, a series of short exposures is typically taken in event mode, in all seven broadband filters, with exposure times ranging from $10-50\,{\rm ~s}$. A series of medium exposures is then taken in image mode, in all seven broadband filters, with exposure times ranging from $100-200\,{\rm ~s}$. Finally, a series of long exposures is taken in image mode, in all seven broadband filters, with typical exposure times of $900\,{\rm ~s}$.\footnote{Between 2005 Jan 17 and 2006 Jan 9 and between 2006 Feb 24 and 2006 Mar 14, exposures in uvw2, uvm2, and uvw1 were taken in event mode.} In all cases, exposures can be cut short due to observing constraints. This catalog covers UVOT observations of 229 GRB afterglows from 2005 Jan 17 to 2007 Jun 16. It includes bursts detected by {\em Swift} BAT, HETE2, INTEGRAL, and the IPN, and observed by the UVOT.
A total of 211 BAT-detected bursts were observed by the UVOT (after instrument turn-on), representing 93\% of the BAT sample. Those that were not observed by the UVOT were either too close in angular distance to a bright ($\sim 6 {\rm ~mag}$) source, or occurred during UVOT engineering observations. Not included in the catalog are nine bursts first detected by BAT and INTEGRAL and observed by UVOT but with no afterglow position reported by the XRT or ground-based observers (see Table~\ref{tab6}). Inspection of the UVOT images reveals no obvious afterglows for these bursts. Hereafter, we adopt the notation $F(\nu ,t) \propto t^{-\alpha} \nu^{-\beta}$ for the afterglow flux density as a function of time, where $\nu$ is the frequency of the observed flux density, $t$ is the time post-trigger, $\beta$ is the spectral index, which is related to the photon index $\Gamma$ ($\beta = \Gamma - 1$), and $\alpha$ is the temporal decay slope. \section{Construction of the Databases and Catalog} To provide a baseline for understanding the work described below, we define three terms in the context of this catalog: database, catalog, and photometry pipeline. The database is the repository for all UVOT GRB data processed by the photometry pipeline. The catalog is the compilation of the top-level data from the UVOT database, as well as other sources (i.e. the BAT catalog \citep{ST2007}, the Gamma-ray burst Coordinate Network \citep[GCN;][]{BS1995, BS1998} Circulars, etc.) that provides the primary characteristics of each burst. The photometry pipeline is the script that combines the required FTOOLs (with {\tt uvotsource} performing the photometry) to produce the database and catalog. Below we describe the construction of the image and event databases, the catalog, and the requisite quality checks. \subsection{Image Database Construction} The UVOT GRB photometric image database is a collection of raw photometric measurements of UVOT images. The first step in constructing the database was to build an archive of UVOT GRB images. To ensure that all of our images and exposure maps benefited from consistent and up-to-date calibrations and processing, the {\em Swift} Data Center reprocessed the entire UVOT GRB image archive\footnote{Version HEA\_06DEC2006\_V6.1.1\_SWIFT\_REL2.6(BLD20)PATCHED3\_14MAR2007 was used.}. For example, images taken early in the mission did not benefit from MOD-8 pattern noise correction. All reprocessed images are now MOD-8 corrected. A number of images in our archive did not have a fine aspect correction applied; many of these images were recovered by running {\tt uvotskycorr}. Lastly, we corrected a number of images which had improper OBJECT keywords. We developed an IDL-based image processing pipeline to perform aperture photometry on our archive of re-processed sky images. Figure~\ref{fig-flowchart} is a diagram of the UVOT {\tt uvotphot} photometric pipeline software (Version 1.0). The heart of {\tt uvotphot} is the {\em Swift} UVOT tool {\tt uvotsource}. The pipeline used HEADAS Version 6.4 and the 2007 November UVOT CALDB. Photometry was performed on individual sky images using the curves of growth from the {\em Swift} CALDB for the aperture correction model. Upper limits were reported for sources detected at $< 3\sigma$. Figure~\ref{fig-flowchart} shows the additional inputs to the {\tt uvotphot} pipeline software. The GRB information file contains the best reported source position and error estimate for each burst.
In a small number of cases, we refined the positions to better center bursts in our source apertures. References for the best reported burst positions can be found in Table~\ref{tab4}. Source region files specify a simple circular inclusion region of a given aperture size. Aperture photometry using a $3.0\arcsec$ radius aperture was performed for Version 1.0 of the database. Since a $5.0\arcsec$ radius aperture (containing $85.8\pm3.8\%$ of the PSF) was used for calibrating the UVOT, an aperture correction is applied to the data \citep{PTS2007}. Background regions contain inclusion regions for background estimation and exclusion regions to mask out sources and/or features in each field. In most cases we use a standard annular inclusion region with inner and outer radii of $27\farcs5$ and $35\farcs0$ around each burst. To decrease the number of background region files required by our pipeline, we constructed composite region files to mask out sources and features in all bands. A small number of fields required non-standard region files. The background is calculated by taking the average background of all the pixels in the net background region. The region files (and postage stamp images) used in our processing can be found at the {\em Swift} website. Sources within $15\arcsec$ of each burst, which may contaminate photometry in our source apertures, are listed in the {\tt UvotSourceTable.fits} product. For each source we record its position and the measured magnitude in each filter. Source contamination is evidenced as a non-zero offset in the light curve data products. UVOT images which suffer from other sources of contamination or degradation (not including nearby sources) have been identified. Image quality information is recorded as a series of ``flags" which correspond to 1) bursts embedded in a large halo structure from nearby bright stars, 2) images where the burst was near the edge of the FoV, 3) images with charge trails in the source aperture, 4) images with diffraction spikes in the source aperture, 5) images which do not have fine aspect corrections, and 6) bursts embedded in crowded fields. Flags set to true (``T") indicate images which are of poor quality. Images considered to be of low quality, i.e., those with at least one quality flag raised, also have the Quality Flag set to true (``T"). For images that were generated from both image and event mode simultaneously (usually the finding charts are taken in this dual mode), the event mode data are excluded from the image database. A description of the event mode data can be found in Section 3.2 below. \subsection{Event Database Construction} For each GRB, the first orbit $v$ and $white$ event lists were obtained from the {\em Swift} Data Center. Like the sky images, the event lists benefited from reprocessing using the UVOT(20071106) and MIS(20080326) calibration and processing pipelines. The $v$ and $white$ are the only filters for which event lists have been included in the catalog because these filters are used for the finding chart exposures, which are expected to show variability on the shortest timescales. The event processing pipeline is based on Perl and implements the {\it Swift} software and calibration found in the HEADAS 6.3.2 release. The pipeline refines the aspect of the event lists and then extracts the photometry. During the slew, immediately after the BAT trigger, the star tracker reports step-like changes in position to the attitude file.
The first stage of the pipeline corrects the positions in the attitude file using {\tt attjumpcorr} and then recomputes the positions of every photon in the event list using the corrected attitude file and the FTOOL {\tt coordinator}. The pipeline then refines the aspect of the event list by creating images every $10 {\rm ~s}$ and applying aspect correction software to each image. The aspect correction software locates the stars in each image and compares their positions to those in the USNO-B1 catalog, correcting for proper motions. For each image, it computes the RA and DEC offset between the stars in the image and those in the USNO-B1 catalog. The offset is converted into pixels and then applied to the position of each photon in the event list during the time interval of the image. To extract the photometry the pipeline runs {\tt uvotevtlc} on the aspect-corrected event lists using $10 {\rm ~s}$ uniform binning, the $5\arcsec$ source regions, and the background regions used to extract the image photometry. The event lists, like the images, can suffer from a number of sources of contamination or degradation. These sources have been identified and flagged in a similar fashion to the UVOT images. \subsection{Catalog Construction} The UVOT GRB Catalog was constructed by combining information from various sources. Burst positions, trigger time, and time to the first UVOT observation were extracted from the image database. The magnitudes of the first and peak detections were also determined for each filter. A value of ``-99" indicates that no UVOT data were available, while a value of ``99" indicates a detection below the $3\sigma$ level (i.e., an upper limit). Additional information in the catalog was gleaned from the literature. A reference to the best reported burst position is provided. Also included is a flag indicating which observatory discovered each burst. Galactic absorption and HI column density along the line of sight, $T_{90}$, redshift, GRB fluence in the $15-150 {\rm ~keV}$ band, radio flux, and a flag to indicate detections in ground-based $R-K$ bands are provided for each burst. Temporal slopes are derived for bursts with a sufficient number of significant detections, from which magnitudes are computed at $2000 {\rm ~s}$ (see Section 5). \subsection{Quality Control} To verify the quality of the data, the following checks were performed: photometric and astrometric stability of field stars, light curve production and investigation of previously detected and non-detected afterglows, comparison of photometry to previously published results, and visual examination of flagged images. A description of each quality check is found below. The stability of photometric and astrometric measurements made by the pipeline was tested by applying the pipeline to stars, located in the UVOT GRB fields, which have reliable astrometric and photometric measurements from the Sloan Digital Sky Survey \citep[SDSS;][]{YDG00} database. From this database 108 test stars in 32 GRB fields were selected which: lie within $2\arcmin$ of the GRB location, have magnitudes such that they are detectable in the UVOT images, and are detected at the $3\sigma$-level. Since we cannot select where GRBs are located, the field stars were not selected to be standard stars. However, obvious variable stars were rejected from the sample. Astrometric and photometric measurements were made of these stars in every UVOT image that covers their locations.
Statistical characterization of the distributions of positions and count rates of the stars was done in order to quantitatively describe both the internal and absolute accuracy and precision of the pipeline measurements. The mean position offsets in each band, relative to the USNO-B1 positions, define the absolute accuracy of the astrometry. The USNO-B1 positions are used because the USNO-B1 catalog covers the entire sky and all UVOT positions are determined from this catalog\footnote{It has been noted by \citet{MDG03} that there is a systematic offset as large as $0\farcs25$ between the SDSS and USNO-B1 positions {\em after} correcting for proper motions.}. The peak angular offsets are $0\farcs31$, $0\farcs41$, $0\farcs31$, $0\farcs19$, $0\farcs22$, $0\farcs19$, and $0\farcs14$ for the uvw2, uvm2, uvw1, $u$, $b$, $v$, and $white$ filters, respectively (cf. Figure~\ref{fig-astrometric}). To determine the internal precision of the astrometry, we have calculated the Rayleigh scale parameter for the distributions of angular offsets from the mean stellar positions. A Rayleigh distribution holds if the offsets in RA and DEC are independent and normally distributed with the same standard deviation, which is then equivalent to the Rayleigh scale parameter. The internal astrometric precision (given by the Rayleigh scale parameter) of each of the UVOT bands is $0\farcs27$, $0\farcs28$, $0\farcs24$, $0\farcs22$, $0\farcs21$, $0\farcs21$, and $0\farcs17$ for the uvw2, uvm2, uvw1, $u$, $b$, $v$, and $white$ bands, respectively. The mean count rates of the stars, converted to magnitudes on the Johnson system, are compared with the SDSS magnitudes transformed to the Johnson system, to define the absolute accuracy of the photometry. The conversions from SDSS and UVOT magnitudes to the Johnson system are based on work by \citet{JS05} and \citet{PTS2007}, respectively. The average absolute photometric offset between the SDSS and UVOT (${\rm SDSS-UVOT}$) is $+0.076$ ($\sigma=0.052$), $+0.010$ ($\sigma=0.049$), and $-0.068$ ($\sigma=0.027$) magnitudes ($3\sigma$ confidence limit) for the $u$, $b$, and $v$ filters, respectively (cf. Figure~\ref{fig-color}). The standard deviation of the mean count rates defines the internal precision of the photometry. The average standard deviation about the mean, over the nominal magnitude range of $13.8-20.7$, is 0.11, 0.13, and 0.09 mag for the uvw2, uvm2, and uvw1 filters, respectively, and over the nominal magnitude range of $11.5-19.4$ is 0.06, 0.05, 0.06, and 0.05 mag for the $u$, $b$, $v$, and $white$ filters, respectively (cf. Figure~\ref{fig-photometric}). We also compared the catalog entries to published values. In a small number of cases, no sources were detected in the database whereas the literature provides light curves for faint detections. This is because the photometry database is constructed with individual, not co-added, images. The UVOT burst database results were also compared against published results as a consistency check for the following detected burst afterglows: GRBs 060218 \citep{CS06}, 050525A \citep{BAJ06}, 060313 \citep{RPWA06}, 061007 \citep{SP07}, 050319 \citep{MKO06}, 050318 \citep{SM05}, 050603 \citep{GD06}, 051117A \citep{GM07}, 050801 \citep{DM07}, 050730 \citep{PM07}, and 050802 \citep{OS2007}. The comparison showed that all catalog data are consistent with the published data. Images remain in the UVOT catalog even if certain quality checks are not passed.
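As a concrete illustration of the internal astrometric precision estimate described above, the following Python sketch computes the maximum-likelihood Rayleigh scale from RA and DEC offsets; the offsets here are simulated stand-ins, not the catalog measurements.
\begin{verbatim}
import numpy as np

def rayleigh_scale(dra, ddec):
    """MLE of the Rayleigh scale from RA/DEC offsets about the mean position."""
    dra = dra - dra.mean()
    ddec = ddec - ddec.mean()
    r2 = dra ** 2 + ddec ** 2
    return np.sqrt(r2.mean() / 2.0)    # sigma_hat^2 = sum(r_i^2) / (2 N)

rng = np.random.default_rng(1)
true_sigma = 0.25                      # arcsec, comparable to the values above
dra, ddec = rng.normal(0.0, true_sigma, (2, 5000))
print(rayleigh_scale(dra, ddec))       # ~0.25
\end{verbatim}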
Some of the bad aspect correction flags were determined by keywords in the image files themselves, while others were flagged by the {\tt uvotphot} photometry pipeline. Below is a description of the checks performed on all flagged images. {\it Settling}: When the spacecraft reaches the target location, it requires a brief period of time to lock onto the target; this period is known as settling. During this time the UVOT is observing in event mode, typically in the $v$-filter. Because the UVOT detector voltage is changing during this period, the count rate is not calibrated; therefore, these images are flagged as settling exposures and should be used with caution. The criteria for flagging an image as a settling exposure are that the image: is the first in the first observation sequence, is taken in event mode, is taken in the $v$-filter, and has an exposure time $<11 {\rm ~s}$. Each GRB observation for which the first finding chart exposure is present should include a settling exposure just prior to the start of the observation. This condition was checked in the database file, and cases where no settling image was flagged were investigated; this ensures that all settling images are flagged as such. To ensure that non-settling images have not been erroneously flagged, any settling mode images that were found to occur {\it after} the initial finding chart exposure were also investigated. {\it Aspect Correction}: A small number of images which have a successful fine aspect correction are blurred due to uncorrected movement of the spacecraft. These images have been flagged. {\it Charge Trailing or In Halo}: A subset of the database has been generated by filtering for anything flagged as ``charge trailing" or ``in halo." Individual fields from this list were visually inspected to verify the condition. A random check of individual fields was also made in order to check for the existence of these conditions in cases where they were not flagged. {\it Edge Effect or Crowded Field}: Some images are not well centered on the burst and suffer from ``edge effects", while other bursts are embedded in crowded star fields. These images were flagged. {\it Near Bright Star}: Co-added images in $b$ and uvw1 from the initial snapshot in each field were visually examined to search for nearby bright stars. Nearby bright stars with diffraction spikes impinging on the source aperture were flagged. {\it Cumulative Quality}: If any of the quality flags above are set to true (``T"), this flag is also set to true. These images should be treated with caution when used in a dataset. Approximately $10\%$ of all images are flagged. The largest contributions to flagged images come from the in-halo and charge-trailing conditions, totaling $\sim6.2\%$ and $\sim1.3\%$ of the images, respectively. \section{Database and Catalog Formats} The {\em Swift} UVOT Image Mode Burst Database (sample columns and rows are provided in Table~\ref{tab1}), the {\em Swift} UVOT Event Mode Burst Database (sample columns and rows are provided in Table~\ref{tab7}), and the {\em Swift} UVOT Burst Catalog (sample columns and rows provided in Table~\ref{tab2}) can be found at the {\em Swift} website.
The databases and catalog are available in the following file formats: (1) a standard ASCII file with fixed column widths (size = $12.3 {\rm ~MB}$, $521 {\rm ~kB}$, \& $21 {\rm ~kB}$ for the image and event databases, and the catalog, respectively) and (2) a binary FITS table (size = $13.9 {\rm ~MB}$, $560 {\rm ~kB}$, \& $28 {\rm ~kB}$ for the image and event databases, and the catalog, respectively). The Image Mode Database contains 86 columns and 63,315 rows. Each column is described in Table~\ref{tab3}. Except for the object ID, time of burst trigger, filter, quality flags, name of the FITS extension, trigger number, and filename in columns 1, 6, 12, and 76-86, all entries are in floating point, exponential, or integer format. The Event Mode Database contains 41 columns and 9402 rows. Each column is described in Table~\ref{tab8}. Except for the object ID, time of burst trigger, filter, quality flags, trigger number, and filename in columns 1, 6, 12, and 38-41, all entries are in floating point, exponential, or integer format. The Burst Catalog contains 81 columns and 229 rows. Each column is described in Table~\ref{tab4}. Except for the object ID, position reference, time of burst trigger, detection flags, and the notes in columns 1, 5, 9, and 78-81, respectively, all entries are in floating point, exponential, or integer format. Below are the notes on the image database columns found in Table~\ref{tab3}. Each note identifies the nomenclature and description of each column.\\ 1. OBJECT: The object identification. The format is GRB$yymmddX$, where $yy$ is the last two digits of the year of the burst, $mm$ is the month, $dd$ is the day (in UTC), and $X$ is used to represent a second, third, fourth, etc., burst occurring on a given day by the letters `B' or `C'. Only the last seven characters are listed in the catalog (i.e. the ``GRB'' is dropped from each entry). 2. RA: The best J2000.0 right ascension, in decimal degrees, as found in the GCN Circulars\footnote{http://gcn.gsfc.nasa.gov/gcn3\_archive.html}, \citet{GMR07}, and \citet{BNR07}. Thirteen GRBs have improved positions that were calculated from the centroid of the summed images in the UVOT image database; these have been identified in column 81 of the catalog. 3. DEC: The best J2000.0 declination, in decimal degrees, as described in RA above. 4. POS\_ERR: Positional uncertainty, in arcseconds, as described in RA above. 5. TRIGTIME: The time of the burst trigger as measured in {\em Swift} mission elapsed time (MET). MET is measured in seconds and starts on 2001 January 1, 00:00:00.000 (UTC). The MET of the launch of {\em Swift}, 2004 November 20, 18:35:45.865 (UTC), is 122668545.865. 6. TRIG\_UT: The time of the burst trigger as measured in Universal Time (UTC) (e.g. 2005-017-12:52:36). 7. TIME: TSTART + TELAPSE/2 (see columns 8 \& 11). 8. TSTART: The MET start time of the exposure. 9. TSTOP: The MET stop time of the exposure. 10. EXPOSURE: The exposure time, in seconds, including the following corrections: detector dead time, time lost when the on-board shift-and-add algorithm tosses event data off the image, time lost when the UVOT Digital Processing Unit stalls because of high count rates, and time lost due to exposures beginning with the UVOT blocked filter. 11. TELAPSE: TSTOP - TSTART, in seconds. 12. FILTER: The UVOT filter used for the exposure (uvw2, uvm2, uvw1, $u$, $b$, $v$, and $white$). 13. BINNING: The binning factor ($1=1\times1 {\rm ~binning}$ and $2=2\times2 {\rm ~binning}$). 14.
APERTURE: The source aperture radius, in arcseconds. 15. SRC\_AREA: The area of the source region, in square arcseconds, computed by multiplying the number of pixels found by XIMAGE within the source radius by the area of each pixel. This value can differ from the specified area $\pi r^2$ by up to $2\%$ because XIMAGE selects whole pixels within the source radius. This approach produces an area slightly larger or smaller than $\pi r^2$. Simulations reveal that the $1\sigma$ differences between the exact and XIMAGE areas are $1.0\%$ and $1.5\%$ for 10 and 6 pixel radii, respectively. The error in photometry is much less than these area fluctuations because source counts are concentrated in the center of the aperture and the aperture correction uses the radius corresponding to the XIMAGE area. 16. BKG\_AREA: The area, in square arcseconds, of the background region. It is calculated by taking the number of pixels in the background annulus and multiplying by the area of each pixel. Masked regions are excluded; therefore, only net pixels are included. This differs slightly from the exact area $\pi(r_o^2-r_i^2)$, but we are only interested in the background surface brightness, so the difference is not significant. 17. PLATE\_SCALE: The plate scale, in arcseconds per pixel, of the image. The error in the mean plate scale is $\pm 0\farcs0005 {\rm ~pixel^{-1}}$. 18. RAW\_TOT\_CNTS: Total number of counts measured within the source region. 19. RAW\_TOT\_CNTS\_ERR: The binomial error in RAW\_TOT\_CNTS. The binomial error is given by ${\rm (RAW\_TOT\_CNTS)}^{1/2}$ * ${\rm ((NFRAME - RAW\_TOT\_CNTS)/NFRAME)}^{1/2}$. NFRAME = TELAPSE / FRAMETIME, where ${\rm FRAMETIME} = 0.011032 {\rm ~s}$ for the full FoV. NFRAME is the number of CCD frames (typically one every $\sim11 {\rm ~ms}$). A discussion of the measurement errors in the UVOT can be found in \citet{KR08}. 20. RAW\_BKG\_CNTS: Total number of counts measured in the background annulus. 21. RAW\_BKG\_CNTS\_ERR: The binomial error in RAW\_BKG\_CNTS. The binomial error is given by ${\rm (RAW\_BKG\_CNTS)}^{1/2}$ * ${\rm ((NFRAME - EFF\_BKG\_CNTS)/NFRAME)}^{1/2}$. EFF\_BKG\_CNTS = RAW\_BKG\_CNTS * 80 / BKG\_AREA. The effective counts in the background (EFF\_BKG\_CNTS) are calculated because the background area is larger than the coincidence region. The value 80 is the area (in square arcseconds) of our circular aperture with a radius of $5\arcsec$. 22. RAW\_STD\_CNTS: Total number of counts measured within the standard $5\arcsec$ aperture. This constant aperture size is based on the size of the current calibration aperture. 23. RAW\_STD\_CNTS\_ERR: Binomial error associated with RAW\_STD\_CNTS. 24. RAW\_TOT\_RATE: The total measured count rate, in counts per second, in the source region. Calculated using RAW\_TOT\_CNTS / EXPOSURE. 25. RAW\_TOT\_RATE\_ERR: RAW\_TOT\_CNTS\_ERR / EXPOSURE. 26. RAW\_BKG\_RATE: The total measured count rate, in counts per second per square arcsecond, in the background region. Calculated using RAW\_BKG\_CNTS / EXPOSURE / BKG\_AREA. 27. RAW\_BKG\_RATE\_ERR: RAW\_BKG\_CNTS\_ERR / EXPOSURE / BKG\_AREA. 28. GLOB\_BKG\_RATE: The global background rate, in counts per second per square arcsecond. The global background of each image is modeled as a Gaussian distribution. An iterative ``Sigma Clipping" is performed to eliminate contributions from field stars above the $3\sigma$ level. The global background is then reported as the arithmetic mean of the clipped distribution along with the number of samples in the clipped distribution.
In images where the background is well sampled, the ratio of local to global background rates is a good indicator of sources embedded in the halo of a nearby bright star. 29. GLOB\_BKG\_AREA: The area, in square arcseconds, of the global background region. 30. RAW\_STD\_RATE: The total measured count rate, in counts per second, in the coincidence loss region. Calculated using RAW\_STD\_CNTS / EXPOSURE. 31. RAW\_STD\_RATE\_ERR: RAW\_STD\_CNTS\_ERR / EXPOSURE. 32. COI\_STD\_FACTOR: The coincidence-loss correction factor for the coincidence-loss region. This is calculated as follows. First, the COI\_STD\_RATE (which is not recorded) is calculated using the theoretical coincidence loss formula and the polynomial correction to RAW\_STD\_RATE = RAW\_STD\_CNTS / EXPOSURE \citep[see eq. 1-3 in][]{PTS2007}. The value COI\_STD\_FACTOR is then the ratio COI\_STD\_RATE / RAW\_STD\_RATE. 33. COI\_STD\_FACTOR\_ERR: The uncertainty in the coincidence correction \citep[see eq. 4 in][]{PTS2007}. 34. COI\_BKG\_FACTOR: The coincidence-loss correction factor for the background region. 35. COI\_BKG\_FACTOR\_ERR: The uncertainty in the coincidence correction of the background counts within the source aperture. 36. COI\_SRC\_CNTS: The coincidence-loss corrected counts in the source region. Calculated using (RAW\_TOT\_CNTS - (RAW\_BKG\_CNTS * SRC\_AREA / BKG\_AREA)) * COI\_STD\_FACTOR. 37. COI\_SRC\_CNTS\_ERR: The error associated with COI\_SRC\_CNTS. Calculated using (RAW\_TOT\_CNTS\_ERR - (RAW\_BKG\_CNTS\_ERR * SRC\_AREA / BKG\_AREA)) * COI\_STD\_FACTOR\_ERR. 38. COI\_BKG\_CNTS: The coincidence-loss corrected counts in the background region. Calculated using RAW\_BKG\_CNTS * COI\_BKG\_FACTOR. 39. COI\_BKG\_CNTS\_ERR: The error associated with COI\_BKG\_CNTS. Calculated using RAW\_BKG\_CNTS\_ERR * COI\_BKG\_FACTOR\_ERR. 40. COI\_TOT\_RATE: The coincidence-loss corrected raw count rate, in counts per second, in the source region. Calculated using RAW\_TOT\_RATE * COI\_STD\_FACTOR. 41. COI\_TOT\_RATE\_ERR: Error in COI\_TOT\_RATE. Calculated using RAW\_TOT\_RATE\_ERR * COI\_STD\_FACTOR. 42. COI\_BKG\_RATE: The coincidence-loss corrected background surface count rate, in counts per second per square arcsecond. Calculated using\\ RAW\_BKG\_RATE * COI\_BKG\_FACTOR. 43. COI\_BKG\_RATE\_ERR: Error in the coincidence corrected background surface brightness. Calculated using RAW\_BKG\_RATE\_ERR * COI\_BKG\_FACTOR. 44. COI\_SRC\_RATE: Coincidence corrected net count rate, in counts per second. Calculated using COI\_TOT\_RATE - COI\_BKG\_RATE * SRC\_AREA. 45. COI\_SRC\_RATE\_ERR: Error in the coincidence corrected net count rate. The errors in the source rate and the background rate are added in quadrature:\\ ${\rm (COI\_TOT\_RATE\_ERR^2 + (COI\_BKG\_RATE\_ERR * SRC\_AREA)^2)^{1/2}}$. 46. AP\_FACTOR: Aperture correction for going from a $3\arcsec$ radius to a $5\arcsec$ radius aperture for the $v$ filter. This is computed using the PSF stored in the CALDB by {\tt uvotapercorr}. This is always set to 1.0 unless the {\tt CURVEOFGROWTH} method is used. The source radius is defined to be $({\rm SRC\_AREA}/\pi)^{1/2}$, so that the effective source radius corresponds to the actual pixel area used by XIMAGE. 47. AP\_FACTOR\_ERR: The $1\sigma$ error in AP\_FACTOR. AP\_FACTOR\_ERR = \\ AP\_COI\_SRC\_RATE\_ERR / COI\_SRC\_RATE\_ERR. 48. AP\_SRC\_RATE: Final aperture and coincidence loss corrected count rate used to derive the fluxes and magnitudes. Calculated using AP\_FACTOR * COI\_SRC\_RATE. 49. AP\_SRC\_RATE\_ERR: Error on the final count rate.
Calculated using\\ $({\rm COI\_SRC\_RATE\_ERR^2 + (fwhmsig * COI\_SRC\_RATE)^2})^{1/2}$. The ``fwhmsig" parameter is the fractional RMS variation of the PSF, which is set to $3\arcsec$. This variation is propagated through the uncertainty calculation, and is added in quadrature to the corrected measurement uncertainty. 50. MAG: The magnitude of the source in the UVOT system computed from \\ AP\_COI\_SRC\_RATE. The value is set to 99.00 for upper limits. 51. MAG\_ERR: The one-sigma error in MAG. Unless otherwise specified, all errors are the 1-sigma statistical errors based on Poisson statistics. The value is set to 9.90E+01 if MAG is an upper limit. 52. SNR: Signal-to-noise ratio calculated from COI\_TOT\_RATE and COI\_BKG\_RATE. 53. MAG\_BKG: The sky magnitude, in magnitudes per square arcsecond, in the UVOT system computed from COI\_BKG\_RATE. 54. MAG\_BKG\_ERR: The one-sigma error in MAG\_BKG. 55. MAG\_LIM: The ``N"-sigma limiting magnitude in the UVOT system computed from the RAW quantities. 56. MAG\_LIM\_SIG: ``N" for MAG\_LIM, where ``N" is a chosen parameter. The database uses a value of 3 for N. 57. MAG\_COI\_LIM: The magnitude at which the count rate is one count per CCD frame. 58. FLUX\_AA: The flux density in ${\rm ~erg\,s^{-1}\,cm^{-2}\,\AA^{-1}}$. 59. FLUX\_AA\_ERR: The one-sigma error in FLUX\_AA. 60. FLUX\_AA\_BKG: The flux density of the sky in ${\rm ~erg\,s^{-1}\,cm^{-2}\,\AA^{-1}}$ per square arcsecond. 61. FLUX\_AA\_BKG\_ERR: The one-sigma error in FLUX\_AA\_BKG. 62. FLUX\_AA\_LIM: The approximate flux density limit based on an average GRB spectrum, in ${\rm ~erg\,s^{-1}\,cm^{-2}\,\AA^{-1}}$. 63. FLUX\_AA\_COI\_LIM: The flux density at which the count rate is one count per frame time, in ${\rm ~erg\,s^{-1}\,cm^{-2}\,\AA^{-1}}$. 64. FLUX\_HZ: The flux density in ${\rm ~mJy}$. 65. FLUX\_HZ\_ERR: The one-sigma error in FLUX\_HZ. 66. FLUX\_HZ\_BKG: The flux density of the sky in ${\rm ~mJy}$ per square arcsecond. 67. FLUX\_HZ\_BKG\_ERR: The one-sigma error in FLUX\_HZ\_BKG. 68. FLUX\_HZ\_LIM: The ``N"-sigma limiting flux density in ${\rm ~mJy}$, corresponding to MAG\_LIM. 69. FLUX\_HZ\_COI\_LIM: The flux density at which the count rate is one count per frame time, in ${\rm ~mJy}$. 70. NEAREST\_RA: The J2000.0 right ascension, in decimal degrees, of the closest non-GRB source within $15\arcsec$ of the burst position, as determined by {\tt uvotdetect}. 71. NEAREST\_DEC: The J2000.0 declination, in decimal degrees, of the closest non-GRB source within $15\arcsec$ of the burst position, as determined by {\tt uvotdetect}. 72. NEAREST\_MAG: The magnitude, in the given UVOT band, of the object at NEAREST\_RA and NEAREST\_DEC, as determined by {\tt uvotdetect}. If there is no source found within a $15\arcsec$ radius of the burst position then this value is set to 99.00. 73. CENTR\_RA: The source's centroided right ascension, in decimal degrees, as determined by a 2D Gaussian fit of the UVOT data. If the fit failed then the value is set to 999.000000. 74. CENTR\_DEC: The source's centroided declination, in decimal degrees, as determined by a 2D Gaussian fit of the UVOT data. If the fit failed then the value is set to 999.000000. 75. CENTR\_ERR: The larger of the fit errors between CENTR\_RA and CENTR\_DEC, divided by the square root of the number of counts (N). Given as a $1\sigma$ error. If the fit failed, then a value of $-1.0$ is assigned. N = MAX(1 , RAW\_TOT\_CNTS - [RAW\_BKG\_RATE * EXPOSURE * SRC\_AREA]). 76.
SETTLE\_FLAG: Settling images sometimes have a poor aspect solution which creates doublets out of field stars. Settling images also suffer from detector gain issues because the high voltage may still be ramping up partway into the exposure. Such images have an undefined photometric calibration and should be used cautiously. Images fitting all of the following criteria are flagged as settling exposures, i.e. this flag is true (T): first exposure of Segment 0, image taken in event mode, and ${\rm EXPOSURE} < 11 {\rm ~s}$. 77. ASPECT\_FLAG: The {\em Swift} spacecraft pointing accuracy is $\approx5\arcsec$. The astrometric error is improved to about $0\farcs3$ by comparing source positions to the USNO-B catalog. For a small number of images the automated procedure did not produce an aspect solution. Such images are flagged as true (T). 78. TRAIL\_FLAG: A number of $v$ and $white$ images suffer from the effects of charge trailing. This happens when bright sources align with the source aperture in the CCD readout direction. Visible bright streaks along CCD columns (RAWY) sometimes complicate photometric measurements. These images are flagged true (T). 79. CROWDED\_FLAG: If the field appears crowded upon visual inspection the image is flagged true (T). 80. SPIKE\_FLAG: If a diffraction spike impinges upon the source region then the image is flagged true (T). 81. EDGE\_FLAG: If the source is sufficiently close to the edge of the image such that the exposure across the background region is variable then the image is flagged true (T). 82. HALO\_FLAG: A few bursts lie within the halo of bright stars, which can produce erroneous photometric measurements. Such situations are flagged by comparing the local background to the global background. These images are flagged true (T). 83. QUALITY\_FLAG: Cumulative quality flag. This flag is set to true (T) when any of the following quality flags are true (T): SETTLE\_FLAG, ASPECT\_FLAG, TRAIL\_FLAG, SPIKE\_FLAG, EDGE\_FLAG, or HALO\_FLAG. 84. TRIG\_NUM: The {\em Swift} triggering number for the burst. 85. EXTNAME: The name of the FITS extension that contains this observation. The name of the extension has the following format: \{filter\}\{exposure ID\}\{I/E\}, where \{filter\} = wh, vv, bb, uu, w1, m2, or w2 for the $white$, $v$, $b$, $u$, uvw1, uvm2, and uvw2 filters, respectively, and \{I/E\} represents image or event mode (e.g. vv133535746I). 86. IMAGE\_NAME: Name of the image (e.g. \\ 00020004001/uvot/image/st00020004001ubb\_sk.img.gz[1]). \\ Below are the notes on the event database columns found in Table~\ref{tab8}. Each note identifies the nomenclature and description of each column.\\ 1-17. OBJECT, RA, DEC, POS\_ERR, TRIGTIME, TRIG\_UT, TIME, TSTART, \\ TSTOP, EXPOSURE, TELAPSE, FILTER, BINNING, APERTURE, SRC\_AREA, \\BKG\_AREA, \& PLATE\_SCALE: Same as columns 1-17 in Table~\ref{tab3}, respectively. 18-21. COI\_STD\_FACTOR, COI\_STD\_FACTOR\_ERR, COI\_BKG\_FACTOR, \& \\ COI\_BKG\_FACTOR\_ERR: Same as columns 32-35 in Table~\ref{tab3}, respectively. 22-28. COI\_TOT\_RATE, COI\_TOT\_RATE\_ERR, COI\_BKG\_RATE, \\ COI\_BKG\_RATE\_ERR, COI\_SRC\_RATE, COI\_SRC\_RATE\_ERR\footnote{The values for COI\_SRC\_RATE\_ERR \& AP\_SRC\_RATE\_ERR are not calculated correctly by the pipeline software and are therefore too large. These values will be corrected in future versions of the database.}, \& AP\_FACTOR: Same as columns 40-46 in Table~\ref{tab3}, respectively. 29-32. AP\_SRC\_RATE, AP\_SRC\_RATE\_ERR, MAG, \& MAG\_ERR: Same as columns 48-51 in Table~\ref{tab3}, respectively.
33-34. FLUX\_AA \& FLUX\_AA\_ERR: Same as columns 58-59 in Table~\ref{tab3}, respectively. 35-39. CENTR\_RA, CENTR\_DEC, CENTR\_ERR, SETTLE\_FLAG, ASPECT\_FLAG: Same as columns 73-77 in Table~\ref{tab3}, respectively. 40-41. TRIG\_NUM \& FILENAME: Same as columns 84 and 86 in Table~\ref{tab3}, respectively. \\ Notes on the {\em Swift} UVOT burst catalog columns found in Table~\ref{tab4}:\\ 1-4. OBJECT, RA, DEC, \& PNT\_ERR: Same as columns 1-4 in Table~\ref{tab3}, respectively. 5. POS\_REF: The position reference for columns 2-4. References are from the GCN Circulars, \citet{GMR07}, and \citet{BNR07}. 6. DISC\_BY: The ``discovery flag" indicating which spacecraft discovered the GRB. The flag is an integer from 0-3 representing: 0 = {\em Swift}, 1 = HETE2, 2 = INTEGRAL, and 3 = IPN. 7. T90: $T_{90}$, in seconds, as defined by \citet{ST2007} for the {\em Swift} discovered bursts in the $15-350 {\rm ~keV}$ band. For GRBs 050408 \citep{ST05}, 051021 \citep{OJ05}, 051028 \citep{HK05b}, 051211A \citep{KN05}, and 060121 \citep{AM05}, discovered by HETE2, $T_{90}$ in the $30-400 {\rm ~keV}$ band is provided as in the GCN Circulars ($T_{90}$ for GRB 060121 is provided in the $80-400 {\rm ~keV}$ band). $T_{90}$ for the INTEGRAL and IPN discovered bursts, as well as for the remaining HETE2 bursts, was not available in the GCN Circulars. In all cases where $T_{90}$ was not available this value is set to $-99.0$. 8-9. TRIGTIME \& TRIG\_UT: Same as columns 5 and 6 from Table~\ref{tab3}, respectively. 10. FRST\_TSTART: The start time of the first UVOT observation measured in seconds from the burst trigger. 11. FRST\_W2: The first reported magnitude in the uvw2-filter. If no detections are reported then this value is set to 99.00. If not observed in the uvw2-filter then this value is set to $-99.00$. 12. FRST\_W2\_T: The time since burst, in seconds, to the middle of the exposure of FRST\_W2. If FRST\_W2 = $\pm 99.00$ then this value is set to $-1.0$. 13. FRST\_M2: Same as FRST\_W2 except for the uvm2-filter. 14. FRST\_M2\_T: The same as FRST\_W2\_T except for the uvm2-filter. 15. FRST\_W1: Same as FRST\_W2 except for the uvw1-filter. 16. FRST\_W1\_T: The same as FRST\_W2\_T except for the uvw1-filter. 17. FRST\_UU: Same as FRST\_W2 except for the $u$-filter. 18. FRST\_UU\_T: The same as FRST\_W2\_T except for the $u$-filter. 19. FRST\_BB: Same as FRST\_W2 except for the $b$-filter. 20. FRST\_BB\_T: The same as FRST\_W2\_T except for the $b$-filter. 21. FRST\_VV: Same as FRST\_W2 except for the $v$-filter. 22. FRST\_VV\_T: The same as FRST\_W2\_T except for the $v$-filter. 23. FRST\_WH: Same as FRST\_W2 except for the $white$-filter. 24. FRST\_WH\_T: The same as FRST\_W2\_T except for the $white$-filter. 25. PEAK\_W2: The peak reported magnitude in the uvw2-filter. If no detections are reported then this value is set to 99.00. If not observed in the uvw2-filter then this value is set to $-99.00$. 26. PEAK\_W2\_T: The time since burst, in seconds, to the middle of the exposure of PEAK\_W2. If PEAK\_W2 = $\pm 99.00$ then this value is set to $-1.0$. 27. PEAK\_M2: Same as PEAK\_W2 except for the uvm2-filter. 28. PEAK\_M2\_T: The same as PEAK\_W2\_T except for the uvm2-filter. 29. PEAK\_W1: Same as PEAK\_W2 except for the uvw1-filter. 30. PEAK\_W1\_T: The same as PEAK\_W2\_T except for the uvw1-filter. 31. PEAK\_UU: Same as PEAK\_W2 except for the $u$-filter. 32. PEAK\_UU\_T: The same as PEAK\_W2\_T except for the $u$-filter. 33. PEAK\_BB: Same as PEAK\_W2 except for the $b$-filter. 34.
PEAK\_BB\_T: The same as PEAK\_W2\_T except for the $b$-filter. 35. PEAK\_VV: Same as PEAK\_W2 except for the $v$-filter. 36. PEAK\_VV\_T: The same as PEAK\_W2\_T except for the $v$-filter. 37. PEAK\_WH: Same as PEAK\_W2 except for the $white$-filter. 38. PEAK\_WH\_T: The same as PEAK\_W2\_T except for the $white$-filter. 39. ALPHA\_W2: In the case of two or more afterglow detections in the uvw2-filter for a given burst, all occurring between $\sim300-100,000 {\rm ~s}$, the temporal slope ($\alpha_{uvw2}$) for that filter is provided. If two or more detections are not found for any given segment, the value is set to $-99.99$. The value is calculated using \begin{equation} f_{\lambda(uvw2)} = At^{-\alpha_{uvw2}}, \end{equation} where $f_{\lambda(uvw2)}$ is the flux density, $A$ is the amplitude, and $t$ is the time since burst. 40. ALPHA\_W2\_ERR: The one-sigma error in ALPHA\_W2. If ALPHA\_W2 = $-99.99$ then ALPHA\_W2\_ERR = $-99.99$. 41. ALPHA\_W2\_AMP: The amplitude of ALPHA\_W2. If ALPHA\_W2 = $-99.99$ then ALPHA\_W2\_AMP = $-99.99$. 42. MAG\_ALPHA\_W2: The computed uvw2-filter magnitude at $2000 {\rm ~s}$ derived using ALPHA\_W2. If ALPHA\_W2 = $-99.99$ then MAG\_ALPHA\_W2 = $-99.99$. 43. ALPHA\_M2: Same as ALPHA\_W2 except for the uvm2-filter. 44. ALPHA\_M2\_ERR: Same as ALPHA\_W2\_ERR except for the uvm2-filter. 45. ALPHA\_M2\_AMP: Same as ALPHA\_W2\_AMP except for the uvm2-filter. 46. MAG\_ALPHA\_M2: Same as MAG\_ALPHA\_W2 except for the uvm2-filter. 47. ALPHA\_W1: Same as ALPHA\_W2 except for the uvw1-filter. 48. ALPHA\_W1\_ERR: Same as ALPHA\_W2\_ERR except for the uvw1-filter. 49. ALPHA\_W1\_AMP: Same as ALPHA\_W2\_AMP except for the uvw1-filter. 50. MAG\_ALPHA\_W1: Same as MAG\_ALPHA\_W2 except for the uvw1-filter. 51. ALPHA\_UU: Same as ALPHA\_W2 except for the $u$-filter. 52. ALPHA\_UU\_ERR: Same as ALPHA\_W2\_ERR except for the $u$-filter. 53. ALPHA\_UU\_AMP: Same as ALPHA\_W2\_AMP except for the $u$-filter. 54. MAG\_ALPHA\_UU: Same as MAG\_ALPHA\_W2 except for the $u$-filter. 55. ALPHA\_BB: Same as ALPHA\_W2 except for the $b$-filter. 56. ALPHA\_BB\_ERR: Same as ALPHA\_W2\_ERR except for the $b$-filter. 57. ALPHA\_BB\_AMP: Same as ALPHA\_W2\_AMP except for the $b$-filter. 58. MAG\_ALPHA\_BB: Same as MAG\_ALPHA\_W2 except for the $b$-filter. 59. ALPHA\_VV: Same as ALPHA\_W2 except for the $v$-filter. 60. ALPHA\_VV\_ERR: Same as ALPHA\_W2\_ERR except for the $v$-filter. 61. ALPHA\_VV\_AMP: Same as ALPHA\_W2\_AMP except for the $v$-filter. 62. MAG\_ALPHA\_VV: Same as MAG\_ALPHA\_W2 except for the $v$-filter. 63. ALPHA\_WH: Same as ALPHA\_W2 except for the $white$-filter. 64. ALPHA\_WH\_ERR: Same as ALPHA\_W2\_ERR except for the $white$-filter. 65. ALPHA\_WH\_AMP: Same as ALPHA\_W2\_AMP except for the $white$-filter. 66. MAG\_ALPHA\_WH: Same as MAG\_ALPHA\_W2 except for the $white$-filter. 67. BBMINUSVV: MAG\_ALPHA\_BB - MAG\_ALPHA\_VV. If MAG\_ALPHA\_BB or\\ MAG\_ALPHA\_VV is $-99.99$ the value for BBMINUSVV is also set to $-99.99$. 68. W1MINUSVV: MAG\_ALPHA\_W1 - MAG\_ALPHA\_VV. If MAG\_ALPHA\_W1 or\\ MAG\_ALPHA\_VV is $-99.99$ the value for W1MINUSVV is also set to $-99.99$. 69. RED: $E(B-V)$ as found in \citet{SFD98}. Galactic extinction can be corrected using the procedure described in \citet{CCM89}. The extinction in the UVOT bands can be expressed as \begin{equation} A_{\lambda} = E(B-V)[aR_v + b], \end{equation} where $R_v=3.1$ and $\lambda$ is the UVOT filter (uvw2, uvm2, uvw1, $u$, $b$, and $v$).
The values for $a$ in each filter are $-0.0581$, 0.0773, 0.4346, 0.9226, 0.9994, and 1.0015, respectively. The values for $b$ are 8.4402, 9.1784, 5.3286, 2.1019, 1.0171, and 0.0126, respectively. All values for $a$ and $b$ were determined as described in \citet{CCM89} (a worked sketch follows these notes). No correction factor is provided for the $white$ filter as the large width of the FWHM makes any extinction correction highly dependent on the spectral energy distribution of the source. 70. NH: The logarithm of the absorption column density ($N_H$) along the line of sight as defined in \citet{KPMW05}. 71. ZZ: The redshift of the burst. If no redshift was found the value is set to 99.9999. 72. ZZ\_METH: The flag indicating how the redshift was determined. The flag is an integer from 0-3 representing: 0 = no redshift determined, 1 = absorption, 2 = emission, or 3 = Lyman break. 73. ZZ\_GCN: The GCN Circular number where the information for ZZ and ZZ\_METH was reported. If no redshift was reported this value is left blank. 74. FLUENCE\_BAT: The prompt BAT fluence of the burst, in $10^{-8} {\rm ~erg ~cm^{-2}}$ ($15-150 {\rm ~keV}$ band), as reported in \citet{ST2007}. If no fluence was reported, or for HETE2, INTEGRAL, or IPN discovered bursts, the value is set to $-99.0$. 75. FLUX\_RAD: The radio flux of the burst, in ${\rm mJy}$, as reported in the GCN Circular. If an upper limit was reported the value is set to 99.000. If no radio observation was reported then the value is set to $-99.000$. 76. FLUX\_RAD\_GCN: The GCN Circular number where the FLUX\_RAD was reported. If no flux was reported this value is left blank. 77. RADIO\_FREQ: The observed frequency of the detection in FLUX\_RAD expressed in GHz. 78. DET\_IR: Flag indicating whether a detection in the $R-K$ bands was reported in the GCN Circulars. ``F" = No, ``T" = Yes. 79. DET\_UVOT: Flag indicating whether a detection in any of the UVOT bands was found. ``F" = No, ``T" = Yes. 80. DET\_RADIO: Flag indicating whether a detection in the radio was reported in the GCN Circulars. ``F" = No, ``T" = Yes. 81. NOTES: Notes on individual objects.
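As a worked illustration of the extinction correction in column 69, the following Python sketch evaluates $A_{\lambda} = E(B-V)[aR_v + b]$ in each band using the $a$ and $b$ values listed above; the $E(B-V)$ value used here is illustrative.
\begin{verbatim}
# CCM-style extinction in the UVOT bands, using the a, b values of column 69.
a = {"uvw2": -0.0581, "uvm2": 0.0773, "uvw1": 0.4346,
     "u": 0.9226, "b": 0.9994, "v": 1.0015}
b = {"uvw2": 8.4402, "uvm2": 9.1784, "uvw1": 5.3286,
     "u": 2.1019, "b": 1.0171, "v": 0.0126}

def extinction(band, ebv, rv=3.1):
    """A_lambda = E(B-V) * (a * R_v + b) for a given UVOT band."""
    return ebv * (a[band] * rv + b[band])

for band in a:
    # E(B-V) = 0.05 is an illustrative reddening, not a catalog value.
    print(band, round(extinction(band, ebv=0.05), 3))
\end{verbatim}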
Figure~\ref{fig-distribution} illustrates the distribution of exposure times in each filter in the first $2000 {\rm ~s}$ following the GRB detection, as well as the total exposure in each filter for all bursts. Because the finding charts are typically taken in the $white$ and $v$ filters and dominate the observing time during the first $\sim2000 {\rm ~s}$, the early light curves consist predominantly of $white$ and $v$ data points. The distribution of the brightest UVOT $v$-filter magnitudes for each detected burst, which is almost always the first or second exposure, is shown in Figure~\ref{fig-peakmag}. For times-to-observation $<500 {\rm ~s}$ and Galactic reddening $E(B-V)<0.5$, the UVOT pipeline detects an afterglow in a {\em single} exposure (i.e. no coadding of frames) in $\sim27\%$ of the cases. For times-to-observation $\geq500 {\rm ~s}$ and Galactic reddening $E(B-V)<0.5$, the single-exposure detection rate is $\sim22\%$. If these samples of early and late observed bursts are subdivided into long ($T_{90}>2 {\rm ~s}$) and short ($T_{90}\leq2 {\rm ~s}$) bursts, the single-exposure detection rate for the early observed bursts is $\sim27\%$ for both long and short bursts, while for the late observed bursts it is $\sim29\%$ for long and $\sim12\%$ for short bursts. These values will increase as future versions of the catalog use optimal coaddition (see Section 6). Initial work indicates that for bursts observed within $500 {\rm ~s}$ and with Galactic reddening $E(B-V)<0.5$, the UVOT pipeline then detects an afterglow in $\sim40\%$ of the cases. Many of the remaining $\sim60\%$ of ``dark" bursts can be explained by circumburst extinction, high-redshift Lyman-$\alpha$ blanketing and absorption, and suppression of the reverse shock \citep[cf.][]{RPWA06b,FJU01,GPJ98,HJP98}.

We have fit power-law models to the light curves of bursts that are well sampled in the UVOT observations. The definition of ``well-sampled'' here is that, for a single band, an afterglow must be detected in at least two independent images taken between $300 {\rm ~s}$ and $1\times10^{5} {\rm ~s}$ after the burst. A total of 42 bursts are considered ``well-sampled." Often the first few points on a light curve, up to several hundred seconds after a burst, are not consistent with the single power-law fit that describes the rest of the light curve \citep[e.g. GRBs 050730, 050820A, 060124, 060206, \& 060614; cf.][]{OS08}. In those cases, the early points have been omitted from the power-law fit. The remaining points were fit with a function consisting of a single power law and a constant offset. The constant was included to account for any remaining residual in the sky subtraction, or for a possible host-galaxy contribution to the flux. The best-fit parameters were determined by minimizing the $\chi^{2}$ of the fit to the coincidence-loss-corrected linear count rates and their errors. The values of the fit parameters are given in the {\em Swift} UVOT Burst Catalog. Light curves and fits are shown in Figure\,9 for the well-sampled bursts, in the band with the largest number of detections of each burst afterglow. In the case of GRB\,060218, a power law is not a satisfactory description of the light curve, so its light curve is shown without a fit.
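To illustrate the fitting procedure just described, the following is a minimal sketch of the $\chi^{2}$ minimization of a single power law plus a constant offset against coincidence-loss-corrected count rates. It is a sketch under stated assumptions, not the catalog pipeline itself: the data arrays are placeholders, and {\tt scipy}'s {\tt curve\_fit} is used to perform the minimization.

\begin{verbatim}
# Sketch of the light-curve fit described above: a single power law
# plus a constant offset, chi-square minimized against the
# coincidence-loss-corrected count rates and their errors.
# The data arrays below are placeholders, not catalog values.
import numpy as np
from scipy.optimize import curve_fit

def model(t, amp, alpha, const):
    """Count rate modeled as amp * t**(-alpha) plus a constant."""
    return amp * t**(-alpha) + const

# Hypothetical detections: time since burst (s), rate (counts/s),
# and one-sigma errors.
t = np.array([400.0, 900.0, 2500.0, 8000.0, 30000.0])
rate = np.array([2.10, 1.05, 0.45, 0.16, 0.07])
err = np.array([0.10, 0.06, 0.03, 0.02, 0.01])

# curve_fit with sigma and absolute_sigma=True minimizes chi-square.
popt, pcov = curve_fit(model, t, rate, p0=(1000.0, 1.0, 0.0),
                       sigma=err, absolute_sigma=True)
amp, alpha, const = popt
alpha_err = np.sqrt(pcov[1, 1])
print("alpha = %.2f +/- %.2f, offset = %.3f" % (alpha, alpha_err, const))
\end{verbatim}

The constant term plays the same role as in the catalog fits: it absorbs any residual sky-subtraction offset or host-galaxy flux rather than forcing the power law to fit it.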
Figure~\ref{fig-061021} shows the light curves in all seven UVOT filters for GRB\,061007; the lower right panel shows all of the light curves scaled to match the $v$-band light-curve fit at $2000 {\rm ~s}$. We note that the light curves have been plotted on a log-linear scale, as opposed to the traditional log-log scale, for diagnostic reasons: to examine whether the light curves approach zero. If a light curve approaches zero, there is no host-galaxy contribution to the overall results; if it remains above zero, the host galaxy contributes; and if it fell below zero, a problem was identified and fixed.

Up to this point, we have only illustrated light curves based on the image database. Figure~\ref{fig-event} shows the light curve for GRB 060607A as generated from a version of the event database. The event database provides the capability to probe the very early development of the afterglow. A discussion of the early afterglow features is presented elsewhere \citep{OS08}.

From the light curves a study of the interrelationships among bursts can be made. The majority of the well-sampled light curves are fit by a single power law after several hundred seconds. Figure~\ref{fig-alpha} illustrates the distribution of temporal slopes across all filters and bursts, while Figure~\ref{fig-alphafilters} shows the distribution of temporal slopes as a function of filter. The temporal slope $\alpha$ (defined above through $f \propto t^{-\alpha}$) ranges from $-0.09$ to $3$. The average temporal slope for the entire sample is $\alpha = 0.96$ (dispersion about the mean $\sigma = 0.48$), which is consistent with other published results \citep{KDA07}. For the individual filters the average $\alpha$ is 1.30 $(\sigma = 0.43)$, 1.31 $(\sigma = 0.41)$, 0.96 $(\sigma = 0.33)$, 0.86 $(\sigma = 0.38)$, 1.05 $(\sigma = 0.42)$, 1.00 $(\sigma = 0.63)$, and 0.83 $(\sigma = 0.36)$ for the uvw2, uvm2, uvw1, $u$, $b$, $v$, and $white$ filters, respectively. A Kolmogorov-Smirnov (KS) test between the UV and optical data sets (excluding the $white$ filter) indicates that the probability of the two data sets being different is 0.087.

Using the temporal slopes, we have calculated the magnitude of each light curve at $2000 {\rm ~s}$ in each available UVOT band. The average $v$-band magnitude at $2000 {\rm ~s}$ is 18.02 ($\sigma = 1.59$). Figure~\ref{fig-colors} shows the color-color relationship between the ${\rm uvw1}-v$ and $b-v$ colors. Typical values for $b-v$ are $\sim0.5$ with a small amount of scatter; values for ${\rm uvw1}-v$, on the other hand, are $\sim0$ with a large degree of scatter. Figures~\ref{fig14}~\&~\ref{fig15} compare the X-ray flux at $11 {\rm ~hours}$ ($F_{X,11}$) in the $0.3-10 {\rm ~keV}$ band and the prompt $\gamma$-ray fluence ($S_\gamma$) in the $15-150 {\rm ~keV}$ band to the optical flux at $2000 {\rm ~s}$ ($f_{2000}$)\footnote{The X-ray fluxes are provided by the {\em Swift} XRT team and will be published in an upcoming paper \citep{BDN08}. The prompt $\gamma$-ray fluences are found in \citet{ST2007}.}. Evident from the figures is a general trend for brighter X-ray flux or $\gamma$-ray fluence to correspond to brighter optical flux. Using the Spearman rank correlation, the data are strongly correlated ($p = 8.8\times10^{-4}$) for $F_{X,11}$ and marginally correlated ($p = 0.0184$) for $S_\gamma$. An X-ray-to-optical correlation has been suggested previously \citep{RPWA06b,RE05,JP04,DM03}. We note that none of these values have been corrected for redshift.
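The correlation test quoted above can be reproduced in outline as follows. This is a hedged sketch using {\tt scipy}'s {\tt spearmanr}; the flux arrays are placeholders rather than the catalog measurements.

\begin{verbatim}
# Sketch of the Spearman rank correlation used above to compare the
# X-ray flux at 11 hours (or the prompt gamma-ray fluence) with the
# optical flux at 2000 s.  The arrays are placeholders, not the
# actual catalog values.
import numpy as np
from scipy.stats import spearmanr

f_2000 = np.array([3.2, 0.8, 12.5, 1.4, 6.7, 0.3, 2.2])  # optical
f_x11 = np.array([5.1, 0.9, 20.3, 2.8, 8.0, 0.5, 3.1])   # X-ray

rho, p_value = spearmanr(f_2000, f_x11)
# A small p_value indicates a significant correlation, as found for
# F_X,11 (p = 8.8e-4) in the text.
print("rho = %.2f, p = %.3g" % (rho, p_value))
\end{verbatim}

Because the Spearman test works on ranks, it is insensitive to the units or any monotonic rescaling of the fluxes, which is convenient when comparing quantities measured in different bands.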
Correcting these values for redshift is outside the scope of this paper and is left for future work. We also provide the redshift distribution of the burst sample (see Figure~\ref{fig-z}). The redshifts were not determined by the UVOT but rather by spectroscopic measurements with ground-based instruments. From the distribution it can be seen that the UVOT samples bursts across the entire redshift range up to the UVOT redshift limit of $z\sim5.1$.

\section{Future Work}

This catalog is useful for examining the relationship between optically detected and undetected bursts. Future versions of the database will incorporate various enhancements to the current version, including: using filter-dependent region files, more fully automating quality checks, adding functionality to the FTOOL {\tt uvotsource}, and optimally coadding images.

For the current version of the database, composite region files were used. Future versions would use filter-dependent (non-composite) region files for each burst. There are several cases where a dense population of sources in the $v$ filter makes for a complicated background region file, while the field is not at all complicated in the UV filters. A better background estimate could be obtained if separate region files were used.

Future databases may be able to reclaim some exposures flagged as ``poor quality" images. Our current method for finding images contaminated by charge trails or diffraction spikes is to sum all images in a given filter and observation sequence, visually inspect them, and manually flag contaminated images. This works well for small apertures, but future databases will require photometry for each object in a wide range of apertures, which would potentially require a different set of quality files for each aperture size. A better way to handle this problem would be to identify bright stars in each field and determine whether the spacecraft roll angle would align any of them with the CCD readout direction so as to affect a given aperture at the GRB location. This would eliminate the need for manually checking a large number of images: only the individual images that are contaminated would be flagged, as opposed to entire observation sequences. Future databases would also check the photometric stability of each image using field stars.

Several improvements to the UVOT tools used in generating the database have been proposed. Currently, {\tt uvotsource} is not capable of performing sigma clipping of background pixels; adding this functionality would improve the background computations. Since {\tt uvotsource} uses a Gaussian model of the background, it has difficulty estimating the background in images with extremely low background counts, as is often the case in UV images. Adding a Poisson or binomial background model would greatly improve the results in some cases.

In this database version, no coaddition of the individual images was attempted. Future work will include optimal coaddition of the data using the method proposed by \citet{MA08}; this should yield additional detections from this and future databases.

\acknowledgments We gratefully acknowledge the contributions from members of the {\em Swift} team at the Pennsylvania State University (PSU), University College London/Mullard Space Science Laboratory (MSSL), NASA/Goddard Space Flight Center, and our subcontractors, who helped make this instrument possible.
This work is sponsored at PSU by NASA contract NAS5-00136 and at MSSL by funding from the Science and Technology Facilities Council (STFC). \facility{Swift(UVOT)}